Science.gov

Sample records for general event-driven simulator

  1. Event-driven simulation in SELMON: An overview of EDSE

    NASA Technical Reports Server (NTRS)

    Rouquette, Nicolas F.; Chien, Steve A.; Charest, Leonard, Jr.

    1992-01-01

    EDSE (event-driven simulation engine), a model-based event-driven simulator implemented for SELMON, a tool for sensor selection and anomaly detection in real-time monitoring is described. The simulator is used in conjunction with a causal model to predict future behavior of the model from observed data. The behavior of the causal model is interpreted as equivalent to the behavior of the physical system being modeled. An overview of the functionality of the simulator and the model-based event-driven simulation paradigm on which it is based is provided. Included are high-level descriptions of the following key properties: event consumption and event creation, iterative simulation, synchronization and filtering of monitoring data from the physical system. Finally, how EDSE stands with respect to the relevant open issues of discrete-event and model-based simulation is discussed.

  2. Event-driven simulation of cerebellar granule cells.

    PubMed

    Carrillo, Richard R; Ros, Eduardo; Tolu, Silvia; Nieus, Thierry; D'Angelo, Egidio

    2008-01-01

    Around half of the neurons of a human brain are granule cells (approximately 10(11)granule neurons) [Kandel, E.R., Schwartz, J.H., Jessell, T.M., 2000. Principles of Neural Science. McGraw-Hill Professional Publishing, New York]. In order to study in detail the functional role of the intrinsic features of this cell we have developed a pre-compiled behavioural model based on the simplified granule-cell model of Bezzi et al. [Bezzi, M., Nieus, T., Arleo, A., D'Angelo, E., Coenen, O.J.-M.D., 2004. Information transfer at the mossy fiber-granule cell synapse of the cerebellum. 34th Annual Meeting. Society for Neuroscience, San Diego, CA, USA]. We can use an efficient event-driven simulation scheme based on lookup tables (EDLUT) [Ros, E., Carrillo, R.R., Ortigosa, E.M., Barbour, B., Ags, R., 2006. Event-driven simulation scheme for spiking neural networks using lookup tables to characterize neuronal dynamics. Neural Computation 18 (12), 2959-2993]. For this purpose it is necessary to compile into tables the data obtained through a massive numerical calculation of the simplified cell model. This allows network simulations requiring minimal numerical calculation. There are three major features that are considered functionally relevant in the simplified granule cell model: bursting, subthreshold oscillations and resonance. In this work we describe how the cell model is compiled into tables keeping these key properties of the neuron model.

  3. High-level simulation of JWST event-driven operations

    NASA Astrophysics Data System (ADS)

    Henry, R.; Kinzel, W.

    2012-09-01

    The James Webb Space Telescope (JWST) has an event-driven architecture: an onboard Observation Plan Executive (OPE) executes an Observation Plan (OP) consisting of a sequence of observing units (visits). During normal operations, ground action to update the OP is only expected to be necessary about once a week. This architecture is designed to tolerate uncertainty in visit duration, and occasional visit failures due to inability to acquire guide stars, without creating gaps in the observing timeline. The operations concept is complicated by the need for occasional scheduling of timecritical science and engineering visits that cannot tolerate much slippage without inducing gaps, and also by onboard momentum management. A prototype Python tool called the JWST Observation Plan Execution Simulator (JOPES) has recently been developed to simulate OP execution at a high level and analyze the response of the Observatory and OPE to both nominal and contingency scenarios. Incorporating both deterministic and stochastic behavior, JOPES has potential to be a powerful tool for several purposes: requirements analysis, system verification, systems engineering studies, and test data generation. It has already been successfully applied to a study of overhead estimation bias: whether to use conservative or average-case estimates for timing components that are inherently uncertain, such as those involving guide-star acquisition. JOPES is being enhanced to support interfaces to the operational Proposal Planning Subsystem (PPS) now being developed, with the objective of "closing the loop" between testing and simulation by feeding simulated event logs back into the PPS.

  4. Simulating large-scale pedestrian movement using CA and event driven model: Methodology and case study

    NASA Astrophysics Data System (ADS)

    Li, Jun; Fu, Siyao; He, Haibo; Jia, Hongfei; Li, Yanzhong; Guo, Yi

    2015-11-01

    Large-scale regional evacuation is an important part of national security emergency response plan. Large commercial shopping area, as the typical service system, its emergency evacuation is one of the hot research topics. A systematic methodology based on Cellular Automata with the Dynamic Floor Field and event driven model has been proposed, and the methodology has been examined within context of a case study involving the evacuation within a commercial shopping mall. Pedestrians walking is based on Cellular Automata and event driven model. In this paper, the event driven model is adopted to simulate the pedestrian movement patterns, the simulation process is divided into normal situation and emergency evacuation. The model is composed of four layers: environment layer, customer layer, clerk layer and trajectory layer. For the simulation of movement route of pedestrians, the model takes into account purchase intention of customers and density of pedestrians. Based on evacuation model of Cellular Automata with Dynamic Floor Field and event driven model, we can reflect behavior characteristics of customers and clerks at the situations of normal and emergency evacuation. The distribution of individual evacuation time as a function of initial positions and the dynamics of the evacuation process is studied. Our results indicate that the evacuation model using the combination of Cellular Automata with Dynamic Floor Field and event driven scheduling can be used to simulate the evacuation of pedestrian flows in indoor areas with complicated surroundings and to investigate the layout of shopping mall.

  5. An Event-Driven Hybrid Molecular Dynamics and Direct Simulation Monte Carlo Algorithm

    SciTech Connect

    Donev, A; Garcia, A L; Alder, B J

    2007-07-30

    A novel algorithm is developed for the simulation of polymer chains suspended in a solvent. The polymers are represented as chains of hard spheres tethered by square wells and interact with the solvent particles with hard core potentials. The algorithm uses event-driven molecular dynamics (MD) for the simulation of the polymer chain and the interactions between the chain beads and the surrounding solvent particles. The interactions between the solvent particles themselves are not treated deterministically as in event-driven algorithms, rather, the momentum and energy exchange in the solvent is determined stochastically using the Direct Simulation Monte Carlo (DSMC) method. The coupling between the solvent and the solute is consistently represented at the particle level, however, unlike full MD simulations of both the solvent and the solute, the spatial structure of the solvent is ignored. The algorithm is described in detail and applied to the study of the dynamics of a polymer chain tethered to a hard wall subjected to uniform shear. The algorithm closely reproduces full MD simulations with two orders of magnitude greater efficiency. Results do not confirm the existence of periodic (cycling) motion of the polymer chain.

  6. NEVESIM: event-driven neural simulation framework with a Python interface.

    PubMed

    Pecevski, Dejan; Kappel, David; Jonke, Zeno

    2014-01-01

    NEVESIM is a software package for event-driven simulation of networks of spiking neurons with a fast simulation core in C++, and a scripting user interface in the Python programming language. It supports simulation of heterogeneous networks with different types of neurons and synapses, and can be easily extended by the user with new neuron and synapse types. To enable heterogeneous networks and extensibility, NEVESIM is designed to decouple the simulation logic of communicating events (spikes) between the neurons at a network level from the implementation of the internal dynamics of individual neurons. In this paper we will present the simulation framework of NEVESIM, its concepts and features, as well as some aspects of the object-oriented design approaches and simulation strategies that were utilized to efficiently implement the concepts and functionalities of the framework. We will also give an overview of the Python user interface, its basic commands and constructs, and also discuss the benefits of integrating NEVESIM with Python. One of the valuable capabilities of the simulator is to simulate exactly and efficiently networks of stochastic spiking neurons from the recently developed theoretical framework of neural sampling. This functionality was implemented as an extension on top of the basic NEVESIM framework. Altogether, the intended purpose of the NEVESIM framework is to provide a basis for further extensions that support simulation of various neural network models incorporating different neuron and synapse types that can potentially also use different simulation strategies.

  7. NEVESIM: event-driven neural simulation framework with a Python interface

    PubMed Central

    Pecevski, Dejan; Kappel, David; Jonke, Zeno

    2014-01-01

    NEVESIM is a software package for event-driven simulation of networks of spiking neurons with a fast simulation core in C++, and a scripting user interface in the Python programming language. It supports simulation of heterogeneous networks with different types of neurons and synapses, and can be easily extended by the user with new neuron and synapse types. To enable heterogeneous networks and extensibility, NEVESIM is designed to decouple the simulation logic of communicating events (spikes) between the neurons at a network level from the implementation of the internal dynamics of individual neurons. In this paper we will present the simulation framework of NEVESIM, its concepts and features, as well as some aspects of the object-oriented design approaches and simulation strategies that were utilized to efficiently implement the concepts and functionalities of the framework. We will also give an overview of the Python user interface, its basic commands and constructs, and also discuss the benefits of integrating NEVESIM with Python. One of the valuable capabilities of the simulator is to simulate exactly and efficiently networks of stochastic spiking neurons from the recently developed theoretical framework of neural sampling. This functionality was implemented as an extension on top of the basic NEVESIM framework. Altogether, the intended purpose of the NEVESIM framework is to provide a basis for further extensions that support simulation of various neural network models incorporating different neuron and synapse types that can potentially also use different simulation strategies. PMID:25177291

  8. Slip velocity and stresses in granular Poiseuille flow via event-driven simulation.

    PubMed

    Chikkadi, Vijayakumar; Alam, Meheboob

    2009-08-01

    Event-driven simulations of inelastic smooth hard disks are used to probe the slip velocity and rheology in gravity-driven granular Poiseuille flow. It is shown that both the slip velocity (U(w)) and its gradient (dU(w)/dy) depend crucially on the mean density, wall roughness, and inelastic dissipation. While the gradient of slip velocity follows a single power-law relation with Knudsen number, the variation in U(w) with Kn shows three distinct regimes in terms of Knudsen number. An interesting possibility of Knudsen-number-dependent specularity coefficient emerges from a comparison of our results with a first-order transport theory for the slip velocity. Simulation results on stresses are compared with kinetic-theory predictions, with reasonable agreement of our data in the quasielastic limit. The deviation of simulations from theory increases with increasing dissipation which is tied to the increasing magnitude of the first normal stress difference (N(1)) that shows interesting nonmonotonic behavior with density. As in simple shear flow, there is a sign change of N(1) at some critical density and its collisional component and the related collisional anisotropy are responsible for this sign reversal.

  9. Efficient event-driven simulations shed new light on microtubule organization in the plant cortical array

    NASA Astrophysics Data System (ADS)

    Tindemans, Simon H.; Deinum, Eva E.; Lindeboom, Jelmer J.; Mulder, Bela M.

    2014-04-01

    The dynamics of the plant microtubule cytoskeleton is a paradigmatic example of the complex spatiotemporal processes characterising life at the cellular scale. This system is composed of large numbers of spatially extended particles, each endowed with its own intrinsic stochastic dynamics, and is capable of non-equilibrium self-organisation through collisional interactions of these particles. To elucidate the behaviour of such a complex system requires not only conceptual advances, but also the development of appropriate computational tools to simulate it. As the number of parameters involved is large and the behaviour is stochastic, it is essential that these simulations be fast enough to allow for an exploration of the phase space and the gathering of sufficient statistics to accurately pin down the average behaviour as well as the magnitude of fluctuations around it. Here we describe a simulation approach that meets this requirement by adopting an event-driven methodology that encompasses both the spontaneous stochastic changes in microtubule state as well as the deterministic collisions. In contrast with finite time step simulations this technique is intrinsically exact, as well as several orders of magnitude faster, which enables ordinary PC hardware to simulate systems of ˜ 10^3 microtubules on a time scale ˜ 10^{3} faster than real time. In addition we present new tools for the analysis of microtubule trajectories on curved surfaces. We illustrate the use of these methods by addressing a number of outstanding issues regarding the importance of various parameters on the transition from an isotropic to an aligned and oriented state.

  10. Comments on event driven animation

    NASA Technical Reports Server (NTRS)

    Gomez, Julian E.

    1987-01-01

    Event driven animation provides a general method of describing controlling values for various computer animation techniques. A definition and comments are provided on genralizing motion description with events. Additional comments are also provided about the implementation of twixt.

  11. A combined Event-Driven/Time-Driven molecular dynamics algorithm for the simulation of shock waves in rarefied gases

    SciTech Connect

    Valentini, Paolo Schwartzentruber, Thomas E.

    2009-12-10

    A novel combined Event-Driven/Time-Driven (ED/TD) algorithm to speed-up the Molecular Dynamics simulation of rarefied gases using realistic spherically symmetric soft potentials is presented. Due to the low density regime, the proposed method correctly identifies the time that must elapse before the next interaction occurs, similarly to Event-Driven Molecular Dynamics. However, each interaction is treated using Time-Driven Molecular Dynamics, thereby integrating Newton's Second Law using the sufficiently small time step needed to correctly resolve the atomic motion. Although infrequent, many-body interactions are also accounted for with a small approximation. The combined ED/TD method is shown to correctly reproduce translational relaxation in argon, described using the Lennard-Jones potential. For densities between {rho}=10{sup -4}kg/m{sup 3} and {rho}=10{sup -1}kg/m{sup 3}, comparisons with kinetic theory, Direct Simulation Monte Carlo, and pure Time-Driven Molecular Dynamics demonstrate that the ED/TD algorithm correctly reproduces the proper collision rates and the evolution toward thermal equilibrium. Finally, the combined ED/TD algorithm is applied to the simulation of a Mach 9 shock wave in rarefied argon. Density and temperature profiles as well as molecular velocity distributions accurately match DSMC results, and the shock thickness is within the experimental uncertainty. For the problems considered, the ED/TD algorithm ranged from several hundred to several thousand times faster than conventional Time-Driven MD. Moreover, the force calculation to integrate the molecular trajectories is found to contribute a negligible amount to the overall ED/TD simulation time. Therefore, this method could pave the way for the application of much more refined and expensive interatomic potentials, either classical or first-principles, to Molecular Dynamics simulations of shock waves in rarefied gases, involving vibrational nonequilibrium and chemical reactivity.

  12. Asynchronous Event-Driven Particle Algorithms

    SciTech Connect

    Donev, A

    2007-02-28

    We present in a unifying way the main components of three examples of asynchronous event-driven algorithms for simulating physical systems of interacting particles. The first example, hard-particle molecular dynamics (MD), is well-known. We also present a recently-developed diffusion kinetic Monte Carlo (DKMC) algorithm, as well as a novel event-driven algorithm for Direct Simulation Monte Carlo (DSMC). Finally, we describe how to combine MD with DSMC in an event-driven framework, and discuss some promises and challenges for event-driven simulation of realistic physical systems.

  13. Anticipating the Chaotic Behaviour of Industrial Systems Based on Stochastic, Event-Driven Simulations

    NASA Astrophysics Data System (ADS)

    Bruzzone, Agostino G.; Revetria, Roberto; Simeoni, Simone; Viazzo, Simone; Orsoni, Alessandra

    2004-08-01

    In logistics and industrial production managers must deal with the impact of stochastic events to improve performances and reduce costs. In fact, production and logistics systems are generally designed considering some parameters as deterministically distributed. While this assumption is mostly used for preliminary prototyping, it is sometimes also retained during the final design stage, and especially for estimated parameters (i.e. Market Request). The proposed methodology can determine the impact of stochastic events in the system by evaluating the chaotic threshold level. Such an approach, based on the application of a new and innovative methodology, can be implemented to find the condition under which chaos makes the system become uncontrollable. Starting from problem identification and risk assessment, several classification techniques are used to carry out an effect analysis and contingency plan estimation. In this paper the authors illustrate the methodology with respect to a real industrial case: a production problem related to the logistics of distributed chemical processing.

  14. Asynchronous Event-Driven Particle Algorithms

    SciTech Connect

    Donev, A

    2007-08-30

    We present, in a unifying way, the main components of three asynchronous event-driven algorithms for simulating physical systems of interacting particles. The first example, hard-particle molecular dynamics (MD), is well-known. We also present a recently-developed diffusion kinetic Monte Carlo (DKMC) algorithm, as well as a novel stochastic molecular-dynamics algorithm that builds on the Direct Simulation Monte Carlo (DSMC). We explain how to effectively combine event-driven and classical time-driven handling, and discuss some promises and challenges for event-driven simulation of realistic physical systems.

  15. Event-driven simulation of the state institution activity for the service provision based on business processes

    NASA Astrophysics Data System (ADS)

    Kataev, M. Yu.; Loseva, N. V.; Mitsel, A. A.; Bulysheva, L. A.; Kozlov, S. V.

    2017-01-01

    The paper presents an approach, based on business processes, assessment and control of the state of the state institution, the social insurance Fund. The paper describes the application of business processes, such as items with clear measurable parameters that need to be determined, controlled and changed for management. The example of one of the business processes of the state institutions, which shows the ability to solve management tasks, is given. The authors of the paper demonstrate the possibility of applying the mathematical apparatus of imitative simulation for solving management tasks.

  16. Event-Driven Process Chains (EPC)

    NASA Astrophysics Data System (ADS)

    Mendling, Jan

    This chapter provides a comprehensive overview of Event-driven Process Chains (EPCs) and introduces a novel definition of EPC semantics. EPCs became popular in the 1990s as a conceptual business process modeling language in the context of reference modeling. Reference modeling refers to the documentation of generic business operations in a model such as service processes in the telecommunications sector, for example. It is claimed that reference models can be reused and adapted as best-practice recommendations in individual companies (see [230, 168, 229, 131, 400, 401, 446, 127, 362, 126]). The roots of reference modeling can be traced back to the Kölner Integrationsmodell (KIM) [146, 147] that was developed in the 1960s and 1970s. In the 1990s, the Institute of Information Systems (IWi) in Saarbrücken worked on a project with SAP to define a suitable business process modeling language to document the processes of the SAP R/3 enterprise resource planning system. There were two results from this joint effort: the definition of EPCs [210] and the documentation of the SAP system in the SAP Reference Model (see [92, 211]). The extensive database of this reference model contains almost 10,000 sub-models: 604 of them non-trivial EPC business process models. The SAP Reference model had a huge impact with several researchers referring to it in their publications (see [473, 235, 127, 362, 281, 427, 415]) as well as motivating the creation of EPC reference models in further domains including computer integrated manufacturing [377, 379], logistics [229] or retail [52]. The wide-spread application of EPCs in business process modeling theory and practice is supported by their coverage in seminal text books for business process management and information systems in general (see [378, 380, 49, 384, 167, 240]). EPCs are frequently used in practice due to a high user acceptance [376] and extensive tool support. Some examples of tools that support EPCs are ARIS Toolset by IDS

  17. Optimal switching policy for performance enhancement of distributed parameter systems based on event-driven control

    NASA Astrophysics Data System (ADS)

    Mu, Wen-Ying; Cui, Bao-Tong; Lou, Xu-Yang; Li, Wen

    2014-07-01

    This paper aims to improve the performance of a class of distributed parameter systems for the optimal switching of actuators and controllers based on event-driven control. It is assumed that in the available multiple actuators, only one actuator can receive the control signal and be activated over an unfixed time interval, and the other actuators keep dormant. After incorporating a state observer into the event generator, the event-driven control loop and the minimum inter-event time are ultimately bounded. Based on the event-driven state feedback control, the time intervals of unfixed length can be obtained. The optimal switching policy is based on finite horizon linear quadratic optimal control at the beginning of each time subinterval. A simulation example demonstrate the effectiveness of the proposed policy.

  18. The three-dimensional Event-Driven Graphics Environment (3D-EDGE)

    NASA Technical Reports Server (NTRS)

    Freedman, Jeffrey; Hahn, Roger; Schwartz, David M.

    1993-01-01

    Stanford Telecom developed the Three-Dimensional Event-Driven Graphics Environment (3D-EDGE) for NASA GSFC's (GSFC) Communications Link Analysis and Simulation System (CLASS). 3D-EDGE consists of a library of object-oriented subroutines which allow engineers with little or no computer graphics experience to programmatically manipulate, render, animate, and access complex three-dimensional objects.

  19. On Mixed Data and Event Driven Design for Adaptive-Critic-Based Nonlinear $H∞ Control.

    PubMed

    Wang, Ding; Mu, Chaoxu; Liu, Derong; Ma, Hongwen

    2017-02-01

    In this paper, based on the adaptive critic learning technique, the H∞ control for a class of unknown nonlinear dynamic systems is investigated by adopting a mixed data and event driven design approach. The nonlinear H∞ control problem is formulated as a two-player zero-sum differential game and the adaptive critic method is employed to cope with the data-based optimization. The novelty lies in that the data driven learning identifier is combined with the event driven design formulation, in order to develop the adaptive critic controller, thereby accomplishing the nonlinear H∞ control. The event driven optimal control law and the time driven worst case disturbance law are approximated by constructing and tuning a critic neural network. Applying the event driven feedback control, the closed-loop system is built with stability analysis. Simulation studies are conducted to verify the theoretical results and illustrate the control performance. It is significant to observe that the present research provides a new avenue of integrating data-based control and event-triggering mechanism into establishing advanced adaptive critic systems.

  20. Asynchronous networks and event driven dynamics

    NASA Astrophysics Data System (ADS)

    Bick, Christian; Field, Michael

    2017-02-01

    Real-world networks in technology, engineering and biology often exhibit dynamics that cannot be adequately reproduced using network models given by smooth dynamical systems and a fixed network topology. Asynchronous networks give a theoretical and conceptual framework for the study of network dynamics where nodes can evolve independently of one another, be constrained, stop, and later restart, and where the interaction between different components of the network may depend on time, state, and stochastic effects. This framework is sufficiently general to encompass a wide range of applications ranging from engineering to neuroscience. Typically, dynamics is piecewise smooth and there are relationships with Filippov systems. In this paper, we give examples of asynchronous networks, and describe the basic formalism and structure. In the following companion paper, we make the notion of a functional asynchronous network rigorous, discuss the phenomenon of dynamical locks, and present a foundational result on the spatiotemporal factorization of the dynamics for a large class of functional asynchronous networks.

  1. Feasibility study for a generalized gate logic software simulator

    NASA Technical Reports Server (NTRS)

    Mcgough, J. G.

    1983-01-01

    Unit-delay simulation, event driven simulation, zero-delay simulation, simulation techniques, 2-valued versus multivalued logic, network initialization, gate operations and alternate network representations, parallel versus serial mode simulation fault modelling, extension of multiprocessor systems, and simulation timing are discussed. Functional level networks, gate equivalent circuits, the prototype BDX-930 network model, fault models, identifying detected faults for BGLOSS are discussed. Preprocessor tasks, postprocessor tasks, executive tasks, and a library of bliss coded macros for GGLOSS are also discussed.

  2. Two-ball problem revisited: Limitations of event-driven modeling

    NASA Astrophysics Data System (ADS)

    Müller, Patric; Pöschel, Thorsten

    2011-04-01

    The main precondition of simulating systems of hard particles by means of event-driven modeling is the assumption of instantaneous collisions. The aim of this paper is to quantify the deviation of event-driven modeling from the solution of Newton’s equation of motion using a paradigmatic example: If a tennis ball is held above a basketball with their centers vertically aligned, and the balls are released to collide with the floor, the tennis ball may rebound at a surprisingly high speed. We show in this article that the simple textbook explanation of this effect is an oversimplification, even for the limit of perfectly elastic particles. Instead, there may occur a rather complex scenario including multiple collisions which may lead to a very different final velocity as compared with the velocity resulting from the oversimplified model.

  3. Event management for large scale event-driven digital hardware spiking neural networks.

    PubMed

    Caron, Louis-Charles; D'Haene, Michiel; Mailhot, Frédéric; Schrauwen, Benjamin; Rouat, Jean

    2013-09-01

    The interest in brain-like computation has led to the design of a plethora of innovative neuromorphic systems. Individually, spiking neural networks (SNNs), event-driven simulation and digital hardware neuromorphic systems get a lot of attention. Despite the popularity of event-driven SNNs in software, very few digital hardware architectures are found. This is because existing hardware solutions for event management scale badly with the number of events. This paper introduces the structured heap queue, a pipelined digital hardware data structure, and demonstrates its suitability for event management. The structured heap queue scales gracefully with the number of events, allowing the efficient implementation of large scale digital hardware event-driven SNNs. The scaling is linear for memory, logarithmic for logic resources and constant for processing time. The use of the structured heap queue is demonstrated on a field-programmable gate array (FPGA) with an image segmentation experiment and a SNN of 65,536 neurons and 513,184 synapses. Events can be processed at the rate of 1 every 7 clock cycles and a 406×158 pixel image is segmented in 200 ms.

  4. Automatic Distribution Network Reconfiguration: An Event-Driven Approach

    SciTech Connect

    Ding, Fei; Jiang, Huaiguang; Tan, Jin

    2016-11-14

    This paper proposes an event-driven approach for reconfiguring distribution systems automatically. Specifically, an optimal synchrophasor sensor placement (OSSP) is used to reduce the number of synchrophasor sensors while keeping the whole system observable. Then, a wavelet-based event detection and location approach is used to detect and locate the event, which performs as a trigger for network reconfiguration. With the detected information, the system is then reconfigured using the hierarchical decentralized approach to seek for the new optimal topology. In this manner, whenever an event happens the distribution network can be reconfigured automatically based on the real-time information that is observable and detectable.

  5. Multirate and event-driven Kalman filters for helicopter flight

    NASA Technical Reports Server (NTRS)

    Sridhar, Banavar; Smith, Phillip; Suorsa, Raymond E.; Hussien, Bassam

    1993-01-01

    A vision-based obstacle detection system that provides information about objects as a function of azimuth and elevation is discussed. The range map is computed using a sequence of images from a passive sensor, and an extended Kalman filter is used to estimate range to obstacles. The magnitude of the optical flow that provides measurements for each Kalman filter varies significantly over the image depending on the helicopter motion and object location. In a standard Kalman filter, the measurement update takes place at fixed intervals. It may be necessary to use a different measurement update rate in different parts of the image in order to maintain the same signal to noise ratio in the optical flow calculations. A range estimation scheme that accepts the measurement only under certain conditions is presented. The estimation results from the standard Kalman filter are compared with results from a multirate Kalman filter and an event-driven Kalman filter for a sequence of helicopter flight images.

  6. Intelligent fuzzy controller for event-driven real time systems

    NASA Technical Reports Server (NTRS)

    Grantner, Janos; Patyra, Marek; Stachowicz, Marian S.

    1992-01-01

    Most of the known linguistic models are essentially static, that is, time is not a parameter in describing the behavior of the object's model. In this paper we show a model for synchronous finite state machines based on fuzzy logic. Such finite state machines can be used to build both event-driven, time-varying, rule-based systems and the control unit section of a fuzzy logic computer. The architecture of a pipelined intelligent fuzzy controller is presented, and the linguistic model is represented by an overall fuzzy relation stored in a single rule memory. A VLSI integrated circuit implementation of the fuzzy controller is suggested. At a clock rate of 30 MHz, the controller can perform 3 MFLIPS on multi-dimensional fuzzy data.

  7. A Full Parallel Event Driven Readout Technique for Area Array SPAD FLIM Image Sensors

    PubMed Central

    Nie, Kaiming; Wang, Xinlei; Qiao, Jun; Xu, Jiangtao

    2016-01-01

    This paper presents a full parallel event driven readout method which is implemented in an area array single-photon avalanche diode (SPAD) image sensor for high-speed fluorescence lifetime imaging microscopy (FLIM). The sensor only records and reads out effective time and position information by adopting full parallel event driven readout method, aiming at reducing the amount of data. The image sensor includes four 8 × 8 pixel arrays. In each array, four time-to-digital converters (TDCs) are used to quantize the time of photons’ arrival, and two address record modules are used to record the column and row information. In this work, Monte Carlo simulations were performed in Matlab in terms of the pile-up effect induced by the readout method. The sensor’s resolution is 16 × 16. The time resolution of TDCs is 97.6 ps and the quantization range is 100 ns. The readout frame rate is 10 Mfps, and the maximum imaging frame rate is 100 fps. The chip’s output bandwidth is 720 MHz with an average power of 15 mW. The lifetime resolvability range is 5–20 ns, and the average error of estimated fluorescence lifetimes is below 1% by employing CMM to estimate lifetimes. PMID:26828490

  8. Stochastic Optimal Regulation of Nonlinear Networked Control Systems by Using Event-Driven Adaptive Dynamic Programming.

    PubMed

    Sahoo, Avimanyu; Jagannathan, Sarangapani

    2017-02-01

    In this paper, an event-driven stochastic adaptive dynamic programming (ADP)-based technique is introduced for nonlinear systems with a communication network within its feedback loop. A near optimal control policy is designed using an actor-critic framework and ADP with event sampled state vector. First, the system dynamics are approximated by using a novel neural network (NN) identifier with event sampled state vector. The optimal control policy is generated via an actor NN by using the NN identifier and value function approximated by a critic NN through ADP. The stochastic NN identifier, actor, and critic NN weights are tuned at the event sampled instants leading to aperiodic weight tuning laws. Above all, an adaptive event sampling condition based on estimated NN weights is designed by using the Lyapunov technique to ensure ultimate boundedness of all the closed-loop signals along with the approximation accuracy. The net result is event-driven stochastic ADP technique that can significantly reduce the computation and network transmissions. Finally, the analytical design is substantiated with simulation results.

  9. Event-Driven Random-Access-Windowing CCD Imaging System

    NASA Technical Reports Server (NTRS)

    Monacos, Steve; Portillo, Angel; Ortiz, Gerardo; Alexander, James; Lam, Raymond; Liu, William

    2004-01-01

    A charge-coupled-device (CCD) based high-speed imaging system, called a realtime, event-driven (RARE) camera, is undergoing development. This camera is capable of readout from multiple subwindows [also known as regions of interest (ROIs)] within the CCD field of view. Both the sizes and the locations of the ROIs can be controlled in real time and can be changed at the camera frame rate. The predecessor of this camera was described in High-Frame-Rate CCD Camera Having Subwindow Capability (NPO- 30564) NASA Tech Briefs, Vol. 26, No. 12 (December 2002), page 26. The architecture of the prior camera requires tight coupling between camera control logic and an external host computer that provides commands for camera operation and processes pixels from the camera. This tight coupling limits the attainable frame rate and functionality of the camera. The design of the present camera loosens this coupling to increase the achievable frame rate and functionality. From a host computer perspective, the readout operation in the prior camera was defined on a per-line basis; in this camera, it is defined on a per-ROI basis. In addition, the camera includes internal timing circuitry. This combination of features enables real-time, event-driven operation for adaptive control of the camera. Hence, this camera is well suited for applications requiring autonomous control of multiple ROIs to track multiple targets moving throughout the CCD field of view. Additionally, by eliminating the need for control intervention by the host computer during the pixel readout, the present design reduces ROI-readout times to attain higher frame rates. This camera (see figure) includes an imager card consisting of a commercial CCD imager and two signal-processor chips. The imager card converts transistor/ transistor-logic (TTL)-level signals from a field programmable gate array (FPGA) controller card. These signals are transmitted to the imager card via a low-voltage differential signaling (LVDS) cable

  10. Event-driven contrastive divergence for spiking neuromorphic systems

    PubMed Central

    Neftci, Emre; Das, Srinjoy; Pedroni, Bruno; Kreutz-Delgado, Kenneth; Cauwenberghs, Gert

    2014-01-01

    Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been demonstrated to perform efficiently in a variety of applications, such as dimensionality reduction, feature learning, and classification. Their implementation on neuromorphic hardware platforms emulating large-scale networks of spiking neurons can have significant advantages from the perspectives of scalability, power dissipation and real-time interfacing with the environment. However, the traditional RBM architecture and the commonly used training algorithm known as Contrastive Divergence (CD) are based on discrete updates and exact arithmetics which do not directly map onto a dynamical neural substrate. Here, we present an event-driven variation of CD to train a RBM constructed with Integrate & Fire (I&F) neurons, that is constrained by the limitations of existing and near future neuromorphic hardware platforms. Our strategy is based on neural sampling, which allows us to synthesize a spiking neural network that samples from a target Boltzmann distribution. The recurrent activity of the network replaces the discrete steps of the CD algorithm, while Spike Time Dependent Plasticity (STDP) carries out the weight updates in an online, asynchronous fashion. We demonstrate our approach by training an RBM composed of leaky I&F neurons with STDP synapses to learn a generative model of the MNIST hand-written digit dataset, and by testing it in recognition, generation and cue integration tasks. Our results contribute to a machine learning-driven approach for synthesizing networks of spiking neurons capable of carrying out practical, high-level functionality. PMID:24574952

  11. Event Driven Messaging with Role-Based Subscriptions

    NASA Technical Reports Server (NTRS)

    Bui, Tung; Bui, Bach; Malhotra, Shantanu; Chen, Fannie; Kim, rachel; Allen, Christopher; Luong, Ivy; Chang, George; Zendejas, Silvino; Sadaqathulla, Syed

    2009-01-01

    Event Driven Messaging with Role-Based Subscriptions (EDM-RBS) is a framework integrated into the Service Management Database (SMDB) to allow for role-based and subscription-based delivery of synchronous and asynchronous messages over JMS (Java Messaging Service), SMTP (Simple Mail Transfer Protocol), or SMS (Short Messaging Service). This allows for 24/7 operation with users in all parts of the world. The software classifies messages by triggering data type, application source, owner of data triggering event (mission), classification, sub-classification and various other secondary classifying tags. Messages are routed to applications or users based on subscription rules using a combination of the above message attributes. This program provides a framework for identifying connected users and their applications for targeted delivery of messages over JMS to the client applications the user is logged into. EDMRBS provides the ability to send notifications over e-mail or pager rather than having to rely on a live human to do it. It is implemented as an Oracle application that uses Oracle relational database management system intrinsic functions. It is configurable to use Oracle AQ JMS API or an external JMS provider for messaging. It fully integrates into the event-logging framework of SMDB (Subnet Management Database).

  12. Event-driven contrastive divergence for spiking neuromorphic systems.

    PubMed

    Neftci, Emre; Das, Srinjoy; Pedroni, Bruno; Kreutz-Delgado, Kenneth; Cauwenberghs, Gert

    2013-01-01

    Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been demonstrated to perform efficiently in a variety of applications, such as dimensionality reduction, feature learning, and classification. Their implementation on neuromorphic hardware platforms emulating large-scale networks of spiking neurons can have significant advantages from the perspectives of scalability, power dissipation and real-time interfacing with the environment. However, the traditional RBM architecture and the commonly used training algorithm known as Contrastive Divergence (CD) are based on discrete updates and exact arithmetics which do not directly map onto a dynamical neural substrate. Here, we present an event-driven variation of CD to train a RBM constructed with Integrate & Fire (I&F) neurons, that is constrained by the limitations of existing and near future neuromorphic hardware platforms. Our strategy is based on neural sampling, which allows us to synthesize a spiking neural network that samples from a target Boltzmann distribution. The recurrent activity of the network replaces the discrete steps of the CD algorithm, while Spike Time Dependent Plasticity (STDP) carries out the weight updates in an online, asynchronous fashion. We demonstrate our approach by training an RBM composed of leaky I&F neurons with STDP synapses to learn a generative model of the MNIST hand-written digit dataset, and by testing it in recognition, generation and cue integration tasks. Our results contribute to a machine learning-driven approach for synthesizing networks of spiking neurons capable of carrying out practical, high-level functionality.

  13. General Purpose Heat Source Simulator

    NASA Technical Reports Server (NTRS)

    Emrich, William J., Jr.

    2008-01-01

    The General Purpose Heat Source (GPHS) project seeks to combine the development of an electrically heated, single GPHS module simulator with the evaluation of potential nuclear surface power systems. The simulator is designed to match the form, fit, and function of actual GPHS modules which normally generate heat through the radioactive decay of Pu238. The use of electrically heated modules rather than modules containing Pu238 facilitates the testing of the subsystems and systems without sacrificing the quantity and quality of the test data gathered. Current GPHS activities are centered on developing robust heater designs with sizes and weights which closely match those of actual Pu238 fueled GPHS blocks. Designs are being pursued which will allow operation up to 1100 C.

  14. Multiagent Attitude Control System for Satellites Based in Momentum Wheels and Event-Driven Synchronization

    NASA Astrophysics Data System (ADS)

    Garcia, Juan L.; Moreno, Jose Sanchez

    2012-12-01

    Attitude control is a requirement always present in spacecraft design. Several kinds of actuators exist to accomplish this control, being momentum wheels one of the most employed. Usually satellites carry redundant momentum wheels to handle any possible single failure, but the controller remains as a single centralized element, posing problems in case of failures. In this work a decentralized agent-based event-driven algorithm for attitude control is presented as a possible solution. Several agents based in momentum wheels will interact among them to accomplish the satellite control. A simulation environment has been developed to analyze the behavior of this architecture. This environment has been made available through the web page http://www.dia.uned.es.

  15. A Hybrid Adaptive Routing Algorithm for Event-Driven Wireless Sensor Networks

    PubMed Central

    Figueiredo, Carlos M. S.; Nakamura, Eduardo F.; Loureiro, Antonio A. F.

    2009-01-01

    Routing is a basic function in wireless sensor networks (WSNs). For these networks, routing algorithms depend on the characteristics of the applications and, consequently, there is no self-contained algorithm suitable for every case. In some scenarios, the network behavior (traffic load) may vary a lot, such as an event-driven application, favoring different algorithms at different instants. This work presents a hybrid and adaptive algorithm for routing in WSNs, called Multi-MAF, that adapts its behavior autonomously in response to the variation of network conditions. In particular, the proposed algorithm applies both reactive and proactive strategies for routing infrastructure creation, and uses an event-detection estimation model to change between the strategies and save energy. To show the advantages of the proposed approach, it is evaluated through simulations. Comparisons with independent reactive and proactive algorithms show improvements on energy consumption. PMID:22423207

  16. General Purpose Heat Source Simulator

    NASA Technical Reports Server (NTRS)

    Emrich, Bill

    2008-01-01

    The General Purpose Heat Source (GPHS) simulator project is designed to replicate through the use of electrical heaters, the form, fit, and function of actual GPHS modules which generate heat through the radioactive decay of Pu238. The use of electrically heated modules rather than modules containing Pu238 facilitates the testing of spacecraft subsystems and systems without sacrificing the quantity and quality of the test data gathered. Previous GPHS activities are centered around developing robust heater designs with sizes and weights that closely matched those of actual Pu238 fueled GPHS blocks. These efforts were successful, although their maximum temperature capabilities were limited to around 850 C. New designs are being pursued which also replicate the sizes and weights of actual Pu238 fueled GPHS blocks but will allow operation up to 1100 C.

  17. Mapping from frame-driven to frame-free event-driven vision systems by low-rate rate coding and coincidence processing--application to feedforward ConvNets.

    PubMed

    Pérez-Carrasco, José Antonio; Zhao, Bo; Serrano, Carmen; Acha, Begoña; Serrano-Gotarredona, Teresa; Chen, Shouchun; Linares-Barranco, Bernabé

    2013-11-01

    Event-driven visual sensors have attracted interest from a number of different research communities. They provide visual information in quite a different way from conventional video systems consisting of sequences of still images rendered at a given "frame rate." Event-driven vision sensors take inspiration from biology. Each pixel sends out an event (spike) when it senses something meaningful is happening, without any notion of a frame. A special type of event-driven sensor is the so-called dynamic vision sensor (DVS) where each pixel computes relative changes of light or "temporal contrast." The sensor output consists of a continuous flow of pixel events that represent the moving objects in the scene. Pixel events become available with microsecond delays with respect to "reality." These events can be processed "as they flow" by a cascade of event (convolution) processors. As a result, input and output event flows are practically coincident in time, and objects can be recognized as soon as the sensor provides enough meaningful events. In this paper, we present a methodology for mapping from a properly trained neural network in a conventional frame-driven representation to an event-driven representation. The method is illustrated by studying event-driven convolutional neural networks (ConvNet) trained to recognize rotating human silhouettes or high speed poker card symbols. The event-driven ConvNet is fed with recordings obtained from a real DVS camera. The event-driven ConvNet is simulated with a dedicated event-driven simulator and consists of a number of event-driven processing modules, the characteristics of which are obtained from individually manufactured hardware modules.

  18. Mapping from Frame-Driven to Frame-Free Event-Driven Vision Systems by Low-Rate Rate-Coding and Coincidence Processing. Application to Feed Forward ConvNets.

    PubMed

    Perez-Carrasco, J A; Zhao, B; Serrano, C; Acha, B; Serrano-Gotarredona, T; Chen, S; Linares-Barranco, B

    2013-04-10

    Event-driven visual sensors have attracted interest from a number of different research communities. They provide visual information in quite a different way from conventional video systems consisting of sequences of still images rendered at “frame rate”. Event-driven vision sensors take inspiration from biology. A special type of Event-driven sensor is the so called Dynamic-Vision-Sensor (DVS) where each pixel computes relative changes of light, or “temporal contrast”. Pixel events become available with micro second delays with respect to “reality”. These events can be processed “as they flow” by a cascade of event (convolution) processors. As a result, input and output event flows are practically coincident, and objects can be recognized as soon as the sensor provides enough meaningful events. In this paper we present a methodology for mapping from a properly trained neural network in a conventional Frame-driven representation, to an Event-driven representation. The method is illustrated by studying Event-driven Convolutional Neural Networks (ConvNet) trained to recognize rotating human silhouettes or high speed poker card symbols. The Event-driven ConvNet is fed with recordings obtained from a real DVS camera. The Event-driven ConvNet is simulated with a dedicated Event-driven simulator, and consists of a number of Event-driven processing modules the characteristics of which are obtained from individually manufactured hardware modules.

  19. Event-driven management algorithm of an Engineering documents circulation system

    NASA Astrophysics Data System (ADS)

    Kuzenkov, V.; Zebzeev, A.; Gromakov, E.

    2015-04-01

    Development methodology of an engineering documents circulation system in the design company is reviewed. Discrete event-driven automatic models using description algorithms of project management is offered. Petri net use for dynamic design of projects is offered.

  20. Exact event-driven implementation for recurrent networks of stochastic perfect integrate-and-fire neurons.

    PubMed

    Taillefumier, Thibaud; Touboul, Jonathan; Magnasco, Marcelo

    2012-12-01

    In vivo cortical recording reveals that indirectly driven neural assemblies can produce reliable and temporally precise spiking patterns in response to stereotyped stimulation. This suggests that despite being fundamentally noisy, the collective activity of neurons conveys information through temporal coding. Stochastic integrate-and-fire models delineate a natural theoretical framework to study the interplay of intrinsic neural noise and spike timing precision. However, there are inherent difficulties in simulating their networks' dynamics in silico with standard numerical discretization schemes. Indeed, the well-posedness of the evolution of such networks requires temporally ordering every neuronal interaction, whereas the order of interactions is highly sensitive to the random variability of spiking times. Here, we answer these issues for perfect stochastic integrate-and-fire neurons by designing an exact event-driven algorithm for the simulation of recurrent networks, with delayed Dirac-like interactions. In addition to being exact from the mathematical standpoint, our proposed method is highly efficient numerically. We envision that our algorithm is especially indicated for studying the emergence of polychronized motifs in networks evolving under spike-timing-dependent plasticity with intrinsic noise.

  1. Event-driven model predictive control of sewage pumping stations for sulfide mitigation in sewer networks.

    PubMed

    Liu, Yiqi; Ganigué, Ramon; Sharma, Keshab; Yuan, Zhiguo

    2016-07-01

    Chemicals such as Mg(OH)2 and iron salts are widely dosed to sewage for mitigating sulfide-induced corrosion and odour problems in sewer networks. The chemical dosing rate is usually not automatically controlled but profiled based on experience of operators, often resulting in over- or under-dosing. Even though on-line control algorithms for chemical dosing in single pipes have been developed recently, network-wide control algorithms are currently not available. The key challenge is that a sewer network is typically wide-spread comprising many interconnected sewer pipes and pumping stations, making network-wide sulfide mitigation with a relatively limited number of dosing points challenging. In this paper, we propose and demonstrate an Event-driven Model Predictive Control (EMPC) methodology, which controls the flows of sewage streams containing the dosed chemical to ensure desirable distribution of the dosed chemical throughout the pipe sections of interests. First of all, a network-state model is proposed to predict the chemical concentration in a network. An EMPC algorithm is then designed to coordinate sewage pumping station operations to ensure desirable chemical distribution in the network. The performance of the proposed control methodology is demonstrated by applying the designed algorithm to a real sewer network simulated with the well-established SeweX model using real sewage flow and characteristics data. The EMPC strategy significantly improved the sulfide mitigation performance with the same chemical consumption, compared to the current practice.

  2. Field Evaluation of a General Purpose Simulator.

    ERIC Educational Resources Information Center

    Spangenberg, Ronald W.

    The use of a general purpose simulator (GPS) to teach Air Force technicians diagnostic and repair procedures for specialized aircraft radar systems is described. An EC II simulator manufactured by Educational Computer Corporation was adapted to resemble the actual configuration technicians would encounter in the field. Data acquired in the…

  3. Notification Event Architecture for Traveler Screening: Predictive Traveler Screening Using Event Driven Business Process Management

    ERIC Educational Resources Information Center

    Lynch, John Kenneth

    2013-01-01

    Using an exploratory model of the 9/11 terrorists, this research investigates the linkages between Event Driven Business Process Management (edBPM) and decision making. Although the literature on the role of technology in efficient and effective decision making is extensive, research has yet to quantify the benefit of using edBPM to aid the…

  4. General Relativistic MHD Simulations of Jet Formation

    NASA Technical Reports Server (NTRS)

    Mizuno, Y.; Nishikawa, K.-I.; Hardee, P.; Koide, S.; Fishman, G. J.

    2005-01-01

    We have performed 3-dimensional general relativistic magnetohydrodynamic (GRMHD) simulations of jet formation from an accretion disk with/without initial perturbation around a rotating black hole. We input a sinusoidal perturbation (m = 5 mode) in the rotation velocity of the accretion disk. The simulation results show the formation of a relativistic jet from the accretion disk. Although the initial perturbation becomes weakened by the coupling among different modes, it survives and triggers lower modes. As a result, complex non-axisymmetric density structure develops in the disk and the jet. Newtonian MHD simulations of jet formation with a non-axisymmetric mode show the growth of the m = 2 mode but GRMHD simulations cannot see the clear growth of the m = 2 mode.

  5. A general software reliability process simulation technique

    NASA Technical Reports Server (NTRS)

    Tausworthe, Robert C.

    1991-01-01

    The structure and rationale of the generalized software reliability process, together with the design and implementation of a computer program that simulates this process are described. Given assumed parameters of a particular project, the users of this program are able to generate simulated status timelines of work products, numbers of injected anomalies, and the progress of testing, fault isolation, repair, validation, and retest. Such timelines are useful in comparison with actual timeline data, for validating the project input parameters, and for providing data for researchers in reliability prediction modeling.

  6. Spectral Methods in General Relativistic MHD Simulations

    NASA Astrophysics Data System (ADS)

    Garrison, David

    2012-03-01

    In this talk I discuss the use of spectral methods in improving the accuracy of a General Relativistic Magnetohydrodynamic (GRMHD) computer code. I introduce SpecCosmo, a GRMHD code developed as a Cactus arrangement at UHCL, and show simulation results using both Fourier spectral methods and finite differencing. This work demonstrates the use of spectral methods with the FFTW 3.3 Fast Fourier Transform package integrated with the Cactus Framework to perform spectral differencing using MPI.

  7. General simulation algorithm for autocorrelated binary processes

    NASA Astrophysics Data System (ADS)

    Serinaldi, Francesco; Lombardo, Federico

    2017-02-01

    The apparent ubiquity of binary random processes in physics and many other fields has attracted considerable attention from the modeling community. However, generation of binary sequences with prescribed autocorrelation is a challenging task owing to the discrete nature of the marginal distributions, which makes the application of classical spectral techniques problematic. We show that such methods can effectively be used if we focus on the parent continuous process of beta distributed transition probabilities rather than on the target binary process. This change of paradigm results in a simulation procedure effectively embedding a spectrum-based iterative amplitude-adjusted Fourier transform method devised for continuous processes. The proposed algorithm is fully general, requires minimal assumptions, and can easily simulate binary signals with power-law and exponentially decaying autocorrelation functions corresponding, for instance, to Hurst-Kolmogorov and Markov processes. An application to rainfall intermittency shows that the proposed algorithm can also simulate surrogate data preserving the empirical autocorrelation.

  8. Simulation of General Physics laboratory exercise

    NASA Astrophysics Data System (ADS)

    Aceituno, P.; Hernández-Aceituno, J.; Hernández-Cabrera, A.

    2015-01-01

    Laboratory exercises are an important part of general Physics teaching, both during the last years of high school and the first year of college education. Due to the need to acquire enough laboratory equipment for all the students, and the widespread access to computers rooms in teaching, we propose the development of computer simulated laboratory exercises. A representative exercise in general Physics is the calculation of the gravity acceleration value, through the free fall motion of a metal ball. Using a model of the real exercise, we have developed an interactive system which allows students to alter the starting height of the ball to obtain different fall times. The simulation was programmed in ActionScript 3, so that it can be freely executed in any operative system; to ensure the accuracy of the calculations, all the input parameters of the simulations were modelled using digital measurement units, and to allow a statistical management of the resulting data, measurement errors are simulated through limited randomization.

  9. Simulation of MTF experiments at General Fusion

    NASA Astrophysics Data System (ADS)

    Reynolds, Meritt; Froese, Aaron; Barsky, Sandra; Devietien, Peter; Toth, Gabor; Brennan, Dylan; Hooper, Bick

    2016-10-01

    General Fusion (GF) aims to develop a magnetized target fusion (MTF) power plant based on compression of magnetically-confined plasma by liquid metal. GF is testing this compression concept by collapsing solid aluminum liners onto spheromak or tokamak plasmas. To simulate the evolution of the compressing plasma in these experiments, we integrated a moving-mesh method into a finite-volume MHD code (VAC). The single-fluid model includes temperature-dependent resistivity and anisotropic heat transport. The trajectory of the liner is based on experiments and LS-DYNA simulations. During compression the geometry remains axially symmetric, but the MHD simulation is fully 3D to capture ideal and resistive plasma instabilities. We compare simulation to experiment through the primary diagnostic of Mirnov probes embedded in the inner coaxial surface against which the magnetic flux and plasma are compressed by the imploding liner. The MHD simulation reproduces the appearance of n=1 mode activity observed in experiments performed in negative D-shape geometry (MRT and PROSPECTOR machines). The same code predicts more favorable compression in spherical tokamak geometry, having positive D-shape (SPECTOR machine).

  10. Event-Driven Control for Networked Control Systems With Quantization and Markov Packet Losses.

    PubMed

    Yang, Hongjiu; Xu, Yang; Zhang, Jinhui

    2016-05-23

    In this paper, event-driven is used in a networked control system (NCS) which is subjected to the effect of quantization and packet losses. A discrete event-detector is used to monitor specific events in the NCS. Both an arbitrary region quantizer and Markov jump packet losses are also considered for the NCS. Based on zoom strategy and Lyapunov theory, a complete proof is given to guarantee mean square stability of the closed-loop system. Stabilization of the NCS is ensured by designing a feedback controller. Lastly, an inverted pendulum model is given to show the advantages and effectiveness of the proposed results.

  11. Modeling the energy performance of event-driven wireless sensor network by using static sink and mobile sink.

    PubMed

    Chen, Jiehui; Salim, Mariam B; Matsumoto, Mitsuji

    2010-01-01

    Wireless Sensor Networks (WSNs) designed for mission-critical applications suffer from limited sensing capacities, particularly fast energy depletion. Regarding this, mobile sinks can be used to balance the energy consumption in WSNs, but the frequent location updates of the mobile sinks can lead to data collisions and rapid energy consumption for some specific sensors. This paper explores an optimal barrier coverage based sensor deployment for event driven WSNs where a dual-sink model was designed to evaluate the energy performance of not only static sensors, but Static Sink (SS) and Mobile Sinks (MSs) simultaneously, based on parameters such as sensor transmission range r and the velocity of the mobile sink v, etc. Moreover, a MS mobility model was developed to enable SS and MSs to effectively collaborate, while achieving spatiotemporal energy performance efficiency by using the knowledge of the cumulative density function (cdf), Poisson process and M/G/1 queue. The simulation results verified that the improved energy performance of the whole network was demonstrated clearly and our eDSA algorithm is more efficient than the static-sink model, reducing energy consumption approximately in half. Moreover, we demonstrate that our results are robust to realistic sensing models and also validate the correctness of our results through extensive simulations.

  12. A general formulation for compositional reservoir simulation

    SciTech Connect

    Rodriguez, F.; Guzman, J.; Galindo-Nava, A. |

    1994-12-31

    In this paper the authors present a general formulation to solve the non-linear difference equations that arise in compositional reservoir simulation. The general approach here presented is based on newton`s method and provides a systematic approach to generate several formulations to solve the compositional problem, each possessing a different degree of implicitness and stability characteristics. The Fully-Implicit method is at the higher end of the implicitness spectrum while the IMPECS method, implicit in pressure-explicit in composition and saturation, is at the lower end. They show that all methods may be obtained as particular cases of the fully-implicit method. Regarding the matrix problem, all methods have a similar matrix structure; the composition of the Jacobian matrix is however unique in each case, being in some instances amenable to reductions for optimal solution of the matrix problem. Based on this, a different approach to derive IMPECS type methods is proposed; in this case, the whole set of 2nc + 6 equations, that apply in each gridblock, is reduced to a single pressure equation through matrix reduction operations; this provides a more stable numerical scheme, compared to other published IMPCS methods, in which the subset of thermodynamic equilibrium equations is arbitrarily decoupled form the set of gridblock equations to perform such reduction. The authors discuss how the general formulation here presented can be used to formulate and construct an adaptive-implicit compositional simulators. They also present results on the numerical performance of FI, IMPSEC and IMPECS methods on some test problems.

  13. Event-driven charge-coupled device design and applications therefor

    NASA Technical Reports Server (NTRS)

    Doty, John P. (Inventor); Ricker, Jr., George R. (Inventor); Burke, Barry E. (Inventor); Prigozhin, Gregory Y. (Inventor)

    2005-01-01

    An event-driven X-ray CCD imager device uses a floating-gate amplifier or other non-destructive readout device to non-destructively sense a charge level in a charge packet associated with a pixel. The output of the floating-gate amplifier is used to identify each pixel that has a charge level above a predetermined threshold. If the charge level is above a predetermined threshold the charge in the triggering charge packet and in the charge packets from neighboring pixels need to be measured accurately. A charge delay register is included in the event-driven X-ray CCD imager device to enable recovery of the charge packets from neighboring pixels for accurate measurement. When a charge packet reaches the end of the charge delay register, control logic either dumps the charge packet, or steers the charge packet to a charge FIFO to preserve it if the charge packet is determined to be a packet that needs accurate measurement. A floating-diffusion amplifier or other low-noise output stage device, which converts charge level to a voltage level with high precision, provides final measurement of the charge packets. The voltage level is eventually digitized by a high linearity ADC.

  14. General relativistic screening in cosmological simulations

    NASA Astrophysics Data System (ADS)

    Hahn, Oliver; Paranjape, Aseem

    2016-10-01

    We revisit the issue of interpreting the results of large volume cosmological simulations in the context of large-scale general relativistic effects. We look for simple modifications to the nonlinear evolution of the gravitational potential ψ that lead on large scales to the correct, fully relativistic description of density perturbations in the Newtonian gauge. We note that the relativistic constraint equation for ψ can be cast as a diffusion equation, with a diffusion length scale determined by the expansion of the Universe. Exploiting the weak time evolution of ψ in all regimes of interest, this equation can be further accurately approximated as a Helmholtz equation, with an effective relativistic "screening" scale ℓ related to the Hubble radius. We demonstrate that it is thus possible to carry out N-body simulations in the Newtonian gauge by replacing Poisson's equation with this Helmholtz equation, involving a trivial change in the Green's function kernel. Our results also motivate a simple, approximate (but very accurate) gauge transformation—δN(k )≈δsim(k )×(k2+ℓ-2)/k2 —to convert the density field δsim of standard collisionless N -body simulations (initialized in the comoving synchronous gauge) into the Newtonian gauge density δN at arbitrary times. A similar conversion can also be written in terms of particle positions. Our results can be interpreted in terms of a Jeans stability criterion induced by the expansion of the Universe. The appearance of the screening scale ℓ in the evolution of ψ , in particular, leads to a natural resolution of the "Jeans swindle" in the presence of superhorizon modes.

  15. EVENT DRIVEN AUTOMATIC STATE MODIFICATION OF BNL'S BOOSTER FOR NASA SPACE RADIATION LABORATORY SOLAR PARTICLE SIMULATOR.

    SciTech Connect

    BROWN, D.; BINELLO, S.; HARVEY, M.; MORRIS, J.; RUSEK, A.; TSOUPAS, N.

    2005-05-16

    The NASA Space Radiation Laboratory (NSRL) was constructed in collaboration with NASA for the purpose of performing radiation effect studies for the NASA space program. The NSRL makes use of heavy ions in the range of 0.05 to 3 GeV/n slow extracted from BNL's AGS Booster. NASA is interested in reproducing the energy spectrum from a solar flare in the space environment for a single ion species. To do this we have built and tested a set of software tools which allow the state of the Booster and the NSRL beam line to be changed automatically. In this report we will describe the system and present results of beam tests.

  16. Event-driven visual attention for the humanoid robot iCub

    PubMed Central

    Rea, Francesco; Metta, Giorgio; Bartolozzi, Chiara

    2013-01-01

    Fast reaction to sudden and potentially interesting stimuli is a crucial feature for safe and reliable interaction with the environment. Here we present a biologically inspired attention system developed for the humanoid robot iCub. It is based on input from unconventional event-driven vision sensors and an efficient computational method. The resulting system shows low-latency and fast determination of the location of the focus of attention. The performance is benchmarked against an instance of the state of the art in robotics artificial attention system used in robotics. Results show that the proposed system is two orders of magnitude faster that the benchmark in selecting a new stimulus to attend. PMID:24379753

  17. Active on-demand service method based on event-driven architecture for geospatial data retrieval

    NASA Astrophysics Data System (ADS)

    Fan, Minghu; Fan, Hong; Chen, Nengcheng; Chen, Zeqiang; Du, Wu

    2013-07-01

    Timely on-demand access to geospatial data is necessary for environmental observation and disaster response. However, traditional service methods for acquiring geospatial data are inefficient and cumbersome, which is not beneficial for timely data acquisition. In these service methods, data are obtained and published by managers and are then left to users to discover and to retrieve them. To solve this problem, we propose an event-driven active on-demand data service method, for which a prototype based on sensor web technologies is demonstrated. First, we select a subset of observed properties as the attributes of an observation event of a data service system. Event-filtering technologies are then employed to find the data desired by users. Finally, the data that meet the subscription requirement are pushed to subscribers on time. The aims of the implementation of the method are to test the suitability of the observation and measurement (O&M) profile for Earth observation and OGC event pattern markup language (EML) specification. We determined the attributes of observation events according to the requirement of the data service and encoded observation event information using the OGC Observations and Measurements specification. We encoded the information under filtering conditions using the OGC Event Pattern Markup Language specification. We implemented a data service method that is based on event-driven architecture via a combination of some sensor web enablement services. Finally, we verified the feasibility of the method using MODIS data from the forest fires that occurred on February 7, 2009, in Victoria, Australia. The results show that the proposed method can achieve actively pushing the desired data to subscribers in the shortest possible time. O&M profiles for Earth observation and EML are suitable for the metadata encoding of observation events and the encoding of subscription information respectively. They match well for the data service in the system.

  18. WE-G-BRA-02: SafetyNet: Automating Radiotherapy QA with An Event Driven Framework

    SciTech Connect

    Hadley, S; Kessler, M; Litzenberg, D; Lee, C; Irrer, J; Chen, X; Acosta, E; Weyburne, G; Lam, K; Younge, K; Matuszak, M; Keranen, W; Covington, E; Moran, J

    2015-06-15

    Purpose: Quality assurance is an essential task in radiotherapy that often requires many manual tasks. We investigate the use of an event driven framework in conjunction with software agents to automate QA and eliminate wait times. Methods: An in house developed subscription-publication service, EventNet, was added to the Aria OIS to be a message broker for critical events occurring in the OIS and software agents. Software agents operate without user intervention and perform critical QA steps. The results of the QA are documented and the resulting event is generated and passed back to EventNet. Users can subscribe to those events and receive messages based on custom filters designed to send passing or failing results to physicists or dosimetrists. Agents were developed to expedite the following QA tasks: Plan Revision, Plan 2nd Check, SRS Winston-Lutz isocenter, Treatment History Audit, Treatment Machine Configuration. Results: Plan approval in the Aria OIS was used as the event trigger for plan revision QA and Plan 2nd check agents. The agents pulled the plan data, executed the prescribed QA, stored the results and updated EventNet for publication. The Winston Lutz agent reduced QA time from 20 minutes to 4 minutes and provided a more accurate quantitative estimate of radiation isocenter. The Treatment Machine Configuration agent automatically reports any changes to the Treatment machine or HDR unit configuration. The agents are reliable, act immediately, and execute each task identically every time. Conclusion: An event driven framework has inverted the data chase in our radiotherapy QA process. Rather than have dosimetrists and physicists push data to QA software and pull results back into the OIS, the software agents perform these steps immediately upon receiving the sentinel events from EventNet. Mr Keranen is an employee of Varian Medical Systems. Dr. Moran’s institution receives research support for her effort for a linear accelerator QA project from

  19. Generalized Maintenance Trainer Simulator: User Manual.

    DTIC Science & Technology

    1982-03-01

    be created in segments , each having up to 1800 images. Creation of the data base that controls the simulation is accomplished at the instructor...individual systems and from one target system to another. Since it also must present the simulated system in its normal operating condition, exercises can...For example, consider an equipment that operates in 20 frequency bands divided into two banks of 10 bands each, in which the two banks are identical in

  20. Innovation of IT metasystems by means of event-driven paradigm using QDMS

    NASA Astrophysics Data System (ADS)

    Nedic, Vladimir; Despotovic, Danijela; Cvetanovic, Slobodan; Despotovic, Milan; Eric, Milan

    2016-10-01

    Globalisation of world economy brings new and more complex demands to business systems. In order to respond to these trends, business systems apply new paradigms that are inevitable reflecting on management metasystems - quality assurance (QA), as well as on information technology (IT) metasystems. Small and medium enterprises (in particular in food industry) do not have possibilities to access external resources to the extent that could provide adequate keeping up with these trends. That raises the question how to enhance synergetic effect of interaction between existing QA and IT metasystems in order to overcome resource gap and achieve set goals by internal resources. The focus of this article is to propose a methodology for utilisation of potential of quality assurance document management system (QDMS) as prototypical platform for initiating, developing, testing and improving new functionalities that are required by IT as support for buiness system management. In that way QDMS plays a role of catalyst that not only accelerates but could also enhance selectivity of the reactions of QA and IT metasystems and direct them on finding new functionalities based on event-driven paradigm. The article tries to show the process of modelling, development and implementation of a possible approach to this problem through conceptual survey and practical solution in the food industry.

  1. Heart rate regulation during cycle-ergometer exercise via event-driven biofeedback.

    PubMed

    Argha, Ahmadreza; Su, Steven W; Celler, Branko G

    2017-03-01

    This paper is devoted to the problem of regulating the heart rate response along a predetermined reference profile, for cycle-ergometer exercises designed for training or cardio-respiratory rehabilitation. The controller designed in this study is a non-conventional, non-model-based, proportional, integral and derivative (PID) controller. The PID controller commands can be transmitted as biofeedback auditory commands, which can be heard and interpreted by the exercising subject to increase or reduce exercise intensity. However, in such a case, for the purposes of effectively communicating to the exercising subject a change in the required exercise intensity, the timing of this feedback signal relative to the position of the pedals becomes critical. A feedback signal delivered when the pedals are not in a suitable position to efficiently exert force may be ineffective and this may, in turn, lead to the cognitive disengagement of the user from the feedback controller. This note examines a novel form of control system which has been expressly designed for this project. The system is called an "actuator-based event-driven control system". The proposed control system was experimentally verified using 24 healthy male subjects who were randomly divided into two separate groups, along with cross-validation scheme. A statistical analysis was employed to test the generalisation of the PID tunes, derived based on the average transfer functions of the two groups, and it revealed that there were no significant differences between the mean values of root mean square of the tracking error of two groups (3.9 vs. 3.7 bpm, [Formula: see text]). Furthermore, the results of a second statistical hypothesis test showed that the proposed PID controller with novel synchronised biofeedback mechanism has better performance compared to a conventional PID controller with a fixed-rate biofeedback mechanism (Group 1: 3.9 vs. 5.0 bpm, Group 2: 3.7 vs. 4.4 bpm, [Formula: see text]).

  2. Real-time gesture interface based on event-driven processing from stereo silicon retinas.

    PubMed

    Lee, Jun Haeng; Delbruck, Tobi; Pfeiffer, Michael; Park, Paul K J; Shin, Chang-Woo; Ryu, Hyunsurk Eric; Kang, Byung Chang

    2014-12-01

    We propose a real-time hand gesture interface based on combining a stereo pair of biologically inspired event-based dynamic vision sensor (DVS) silicon retinas with neuromorphic event-driven postprocessing. Compared with conventional vision or 3-D sensors, the use of DVSs, which output asynchronous and sparse events in response to motion, eliminates the need to extract movements from sequences of video frames, and allows significantly faster and more energy-efficient processing. In addition, the rate of input events depends on the observed movements, and thus provides an additional cue for solving the gesture spotting problem, i.e., finding the onsets and offsets of gestures. We propose a postprocessing framework based on spiking neural networks that can process the events received from the DVSs in real time, and provides an architecture for future implementation in neuromorphic hardware devices. The motion trajectories of moving hands are detected by spatiotemporally correlating the stereoscopically verged asynchronous events from the DVSs by using leaky integrate-and-fire (LIF) neurons. Adaptive thresholds of the LIF neurons achieve the segmentation of trajectories, which are then translated into discrete and finite feature vectors. The feature vectors are classified with hidden Markov models, using a separate Gaussian mixture model for spotting irrelevant transition gestures. The disparity information from stereovision is used to adapt LIF neuron parameters to achieve recognition invariant of the distance of the user to the sensor, and also helps to filter out movements in the background of the user. Exploiting the high dynamic range of DVSs, furthermore, allows gesture recognition over a 60-dB range of scene illuminance. The system achieves recognition rates well over 90% under a variety of variable conditions with static and dynamic backgrounds with naïve users.

  3. Event-driven time-optimal control for a class of discontinuous bioreactors.

    PubMed

    Moreno, Jaime A; Betancur, Manuel J; Buitrón, Germán; Moreno-Andrade, Iván

    2006-07-05

    Discontinuous bioreactors may be further optimized for processing inhibitory substrates using a convenient fed-batch mode. To do so the filling rate must be controlled in such a way as to push the reaction rate to its maximum value, by increasing the substrate concentration just up to the point where inhibition begins. However, an exact optimal controller requires measuring several variables (e.g., substrate concentrations in the feed and in the tank) and also good model knowledge (e.g., yield and kinetic parameters), requirements rarely satisfied in real applications. An environmentally important case, that exemplifies all these handicaps, is toxicant wastewater treatment. There the lack of online practical pollutant sensors may allow unforeseen high shock loads to be fed to the bioreactor, causing biomass inhibition that slows down the treatment process and, in extreme cases, even renders the biological process useless. In this work an event-driven time-optimal control (ED-TOC) is proposed to circumvent these limitations. We show how to detect a "there is inhibition" event by using some computable function of the available measurements. This event drives the ED-TOC to stop the filling. Later, by detecting the symmetric event, "there is no inhibition," the ED-TOC may restart the filling. A fill-react cycling then maintains the process safely hovering near its maximum reaction rate, allowing a robust and practically time-optimal operation of the bioreactor. An experimental study case of a wastewater treatment process application is presented. There the dissolved oxygen concentration was used to detect the events needed to drive the controller.

  4. General Aviation Cockpit Weather Information System Simulation Studies

    NASA Technical Reports Server (NTRS)

    McAdaragh, Ray; Novacek, Paul

    2003-01-01

    This viewgraph presentation provides information on two experiments on the effectiveness of a cockpit weather information system on a simulated general aviation flight. The presentation covers the simulation hardware configuration, the display device screen layout, a mission scenario, conclusions, and recommendations. The second experiment, with its own scenario and conclusions, is a follow-on experiment.

  5. Test and evaluation of the generalized gate logic system simulator

    NASA Technical Reports Server (NTRS)

    Miner, Paul S.

    1991-01-01

    The results of the initial testing of the Generalized Gate Level Logic Simulator (GGLOSS) are discussed. The simulator is a special purpose fault simulator designed to assist in the analysis of the effects of random hardware failures on fault tolerant digital computer systems. The testing of the simulator covers two main areas. First, the simulation results are compared with data obtained by monitoring the behavior of hardware. The circuit used for these comparisons is an incomplete microprocessor design based upon the MIL-STD-1750A Instruction Set Architecture. In the second area of testing, current simulation results are compared with experimental data obtained using precursors of the current tool. In each case, a portion of the earlier experiment is confirmed. The new results are then viewed from a different perspective in order to evaluate the usefulness of this simulation strategy.

  6. A Comparison of General Case In Vivo and General Case Simulation Plus In Vivo Training.

    ERIC Educational Resources Information Center

    McDonnell, John J.; Ferguson, Brad

    1988-01-01

    The study examined the relative effectiveness and efficiency of general case in vivo and general case simulation plus in vivo training in teaching six students with moderate and severe disabilities to purchase items in fast-food restaurants. Although both strategies led to reliable performance in nontrained settings, the in vivo instruction…

  7. Sampling of general correlators in worm-algorithm based simulations

    NASA Astrophysics Data System (ADS)

    Rindlisbacher, Tobias; Åkerlund, Oscar; de Forcrand, Philippe

    2016-08-01

    Using the complex ϕ4-model as a prototype for a system which is simulated by a worm algorithm, we show that not only the charged correlator <ϕ* (x) ϕ (y) >, but also more general correlators such as < | ϕ (x) | | ϕ (y) | > or < arg ⁡ (ϕ (x)) arg ⁡ (ϕ (y)) >, as well as condensates like < | ϕ | >, can be measured at every step of the Monte Carlo evolution of the worm instead of on closed-worm configurations only. The method generalizes straightforwardly to other systems simulated by worms, such as spin or sigma models.

  8. The development of an interim generalized gate logic software simulator

    NASA Technical Reports Server (NTRS)

    Mcgough, J. G.; Nemeroff, S.

    1985-01-01

    A proof-of-concept computer program called IGGLOSS (Interim Generalized Gate Logic Software Simulator) was developed and is discussed. The simulator engine was designed to perform stochastic estimation of self test coverage (fault-detection latency times) of digital computers or systems. A major attribute of the IGGLOSS is its high-speed simulation: 9.5 x 1,000,000 gates/cpu sec for nonfaulted circuits and 4.4 x 1,000,000 gates/cpu sec for faulted circuits on a VAX 11/780 host computer.

  9. The power of event-driven analytics in Large Scale Data Processing

    SciTech Connect

    2011-02-24

    FeedZai is a software company specialized in creating high-­-throughput low-­-latency data processing solutions. FeedZai develops a product called "FeedZai Pulse" for continuous event-­-driven analytics that makes application development easier for end users. It automatically calculates key performance indicators and baselines, showing how current performance differ from previous history, creating timely business intelligence updated to the second. The tool does predictive analytics and trend analysis, displaying data on real-­-time web-­-based graphics. In 2010 FeedZai won the European EBN Smart Entrepreneurship Competition, in the Digital Models category, being considered one of the "top-­-20 smart companies in Europe". The main objective of this seminar/workshop is to explore the topic for large-­-scale data processing using Complex Event Processing and, in particular, the possible uses of Pulse in the scope of the data processing needs of CERN. Pulse is available as open-­-source and can be licensed both for non-­-commercial and commercial applications. FeedZai is interested in exploring possible synergies with CERN in high-­-volume low-­-latency data processing applications. The seminar will be structured in two sessions, the first one being aimed to expose the general scope of FeedZai's activities, and the second focused on Pulse itself: 10:00-11:00 FeedZai and Large Scale Data Processing Introduction to FeedZai FeedZai Pulse and Complex Event Processing Demonstration Use-Cases and Applications Conclusion and Q&A 11:00-11:15 Coffee break 11:15-12:30 FeedZai Pulse Under the Hood A First FeedZai Pulse Application PulseQL overview Defining KPIs and Baselines Conclusion and Q&A About the speakers Nuno Sebastião is the CEO of FeedZai. Having worked for many years for the European Space Agency (ESA), he was responsible the overall design and development of Satellite Simulation Infrastructure of the agency. Having left ESA to found FeedZai, Nuno is

  10. The power of event-driven analytics in Large Scale Data Processing

    ScienceCinema

    None

    2016-07-12

    FeedZai is a software company specialized in creating high-­-throughput low-­-latency data processing solutions. FeedZai develops a product called "FeedZai Pulse" for continuous event-­-driven analytics that makes application development easier for end users. It automatically calculates key performance indicators and baselines, showing how current performance differ from previous history, creating timely business intelligence updated to the second. The tool does predictive analytics and trend analysis, displaying data on real-­-time web-­-based graphics. In 2010 FeedZai won the European EBN Smart Entrepreneurship Competition, in the Digital Models category, being considered one of the "top-­-20 smart companies in Europe". The main objective of this seminar/workshop is to explore the topic for large-­-scale data processing using Complex Event Processing and, in particular, the possible uses of Pulse in the scope of the data processing needs of CERN. Pulse is available as open-­-source and can be licensed both for non-­-commercial and commercial applications. FeedZai is interested in exploring possible synergies with CERN in high-­-volume low-­-latency data processing applications. The seminar will be structured in two sessions, the first one being aimed to expose the general scope of FeedZai's activities, and the second focused on Pulse itself: 10:00-11:00 FeedZai and Large Scale Data Processing Introduction to FeedZai FeedZai Pulse and Complex Event Processing Demonstration Use-Cases and Applications Conclusion and Q&A 11:00-11:15 Coffee break 11:15-12:30 FeedZai Pulse Under the Hood A First FeedZai Pulse Application PulseQL overview Defining KPIs and Baselines Conclusion and Q&A About the speakers Nuno Sebastião is the CEO of FeedZai. Having worked for many years for the European Space Agency (ESA), he was responsible the overall design and development of Satellite Simulation Infrastructure of the agency. Having left ESA to found FeedZai, Nuno is

  11. Optimal Weights in Serial Generalized-Ensemble Simulations.

    PubMed

    Chelli, Riccardo

    2010-07-13

    In serial generalized-ensemble simulations, the sampling of a collective coordinate of a system is enhanced through non-Boltzmann weighting schemes. A popular version of such methods is certainly the simulated tempering technique, which is based on a random walk in temperature ensembles to explore the phase space more thoroughly. The most critical aspect of serial generalized-ensemble methods with respect to their parallel counterparts, such as replica exchange, is the difficulty of weight determination. Here we propose an adaptive approach to update the weights on the fly during the simulation. The algorithm is based on generalized forms of the Bennett acceptance ratio and of the free energy perturbation. It does not require intensive communication between processors and, therefore, is prone to be used in distributed computing environments with modest computational cost. We illustrate the method in a series of molecular dynamics simulations of a model system and compare its performances to two recent approaches, one based on adaptive Bayesian-weighted histogram analysis and the other based on initial estimates of weight factors obtained by potential energy averages.

  12. The architecture of Newton, a general-purpose dynamics simulator

    NASA Technical Reports Server (NTRS)

    Cremer, James F.; Stewart, A. James

    1989-01-01

    The architecture for Newton, a general-purpose system for simulating the dynamics of complex physical objects, is described. The system automatically formulates and analyzes equations of motion, and performs automatic modification of this system equations when necessitated by changes in kinematic relationships between objects. Impact and temporary contact are handled, although only using simple models. User-directed influence of simulations is achieved using Newton's module, which can be used to experiment with the control of many-degree-of-freedom articulated objects.

  13. Generalized Fluid System Simulation Program (GFSSP) Version 6 - General Purpose Thermo-Fluid Network Analysis Software

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok; Leclair, Andre; Moore, Ric; Schallhorn, Paul

    2011-01-01

    GFSSP stands for Generalized Fluid System Simulation Program. It is a general-purpose computer program to compute pressure, temperature and flow distribution in a flow network. GFSSP calculates pressure, temperature, and concentrations at nodes and calculates flow rates through branches. It was primarily developed to analyze Internal Flow Analysis of a Turbopump Transient Flow Analysis of a Propulsion System. GFSSP development started in 1994 with an objective to provide a generalized and easy to use flow analysis tool for thermo-fluid systems.

  14. A High-Speed, Event-Driven, Active Pixel Sensor Readout for Photon-Counting Microchannel Plate Detectors

    NASA Technical Reports Server (NTRS)

    Kimble, Randy A.; Pain, Bedabrata; Norton, Timothy J.; Haas, J. Patrick; Oegerle, William R. (Technical Monitor)

    2002-01-01

    Silicon array readouts for microchannel plate intensifiers offer several attractive features. In this class of detector, the electron cloud output of the MCP intensifier is converted to visible light by a phosphor; that light is then fiber-optically coupled to the silicon array. In photon-counting mode, the resulting light splashes on the silicon array are recognized and centroided to fractional pixel accuracy by off-chip electronics. This process can result in very high (MCP-limited) spatial resolution while operating at a modest MCP gain (desirable for dynamic range and long term stability). The principal limitation of intensified CCD systems of this type is their severely limited local dynamic range, as accurate photon counting is achieved only if there are not overlapping event splashes within the frame time of the device. This problem can be ameliorated somewhat by processing events only in pre-selected windows of interest of by using an addressable charge injection device (CID) for the readout array. We are currently pursuing the development of an intriguing alternative readout concept based on using an event-driven CMOS Active Pixel Sensor. APS technology permits the incorporation of discriminator circuitry within each pixel. When coupled with suitable CMOS logic outside the array area, the discriminator circuitry can be used to trigger the readout of small sub-array windows only when and where an event splash has been detected, completely eliminating the local dynamic range problem, while achieving a high global count rate capability and maintaining high spatial resolution. We elaborate on this concept and present our progress toward implementing an event-driven APS readout.

  15. A High-Speed, Event-Driven, Active Pixel Sensor Readout for Photon-Counting Microchannel Plate Detectors

    NASA Technical Reports Server (NTRS)

    Kimble, Randy A.; Pain, B.; Norton, T. J.; Haas, P.; Fisher, Richard R. (Technical Monitor)

    2001-01-01

    Silicon array readouts for microchannel plate intensifiers offer several attractive features. In this class of detector, the electron cloud output of the MCP intensifier is converted to visible light by a phosphor; that light is then fiber-optically coupled to the silicon array. In photon-counting mode, the resulting light splashes on the silicon array are recognized and centroided to fractional pixel accuracy by off-chip electronics. This process can result in very high (MCP-limited) spatial resolution for the readout while operating at a modest MCP gain (desirable for dynamic range and long term stability). The principal limitation of intensified CCD systems of this type is their severely limited local dynamic range, as accurate photon counting is achieved only if there are not overlapping event splashes within the frame time of the device. This problem can be ameliorated somewhat by processing events only in pre-selected windows of interest or by using an addressable charge injection device (CID) for the readout array. We are currently pursuing the development of an intriguing alternative readout concept based on using an event-driven CMOS Active Pixel Sensor. APS technology permits the incorporation of discriminator circuitry within each pixel. When coupled with suitable CMOS logic outside the array area, the discriminator circuitry can be used to trigger the readout of small sub-array windows only when and where an event splash has been detected, completely eliminating the local dynamic range problem, while achieving a high global count rate capability and maintaining high spatial resolution. We elaborate on this concept and present our progress toward implementing an event-driven APS readout.

  16. The Architecture of Newton, a General-Purpose Dynamics Simulator

    DTIC Science & Technology

    1989-01-01

    11 N The Architecture of Newton, a General-Purpose Dynamics 0 Simulator OTIC James F. Cremer ELECTE A. James Stewart JUL 141989f l Computer Science...173SS, ONR grant N00t4.SK-0281 and DARPA grant N0014-OOK.0S91 Support for James Stewart is provided in part by U.S. Army Math-4.3 Control matica Sciences

  17. Accurate Event-Driven Motion Compensation in High-Resolution PET Incorporating Scattered and Random Events

    PubMed Central

    Dinelle, Katie; Cheng, Ju-Chieh; Shilov, Mikhail A.; Segars, William P.; Lidstone, Sarah C.; Blinder, Stephan; Rousset, Olivier G.; Vajihollahi, Hamid; Tsui, Benjamin M. W.; Wong, Dean F.; Sossi, Vesna

    2010-01-01

    With continuing improvements in spatial resolution of positron emission tomography (PET) scanners, small patient movements during PET imaging become a significant source of resolution degradation. This work develops and investigates a comprehensive formalism for accurate motion-compensated reconstruction which at the same time is very feasible in the context of high-resolution PET. In particular, this paper proposes an effective method to incorporate presence of scattered and random coincidences in the context of motion (which is similarly applicable to various other motion correction schemes). The overall reconstruction framework takes into consideration missing projection data which are not detected due to motion, and additionally, incorporates information from all detected events, including those which fall outside the field-of-view following motion correction. The proposed approach has been extensively validated using phantom experiments as well as realistic simulations of a new mathematical brain phantom developed in this work, and the results for a dynamic patient study are also presented. PMID:18672420

  18. Event-driven Monte Carlo: Exact dynamics at all time scales for discrete-variable models

    NASA Astrophysics Data System (ADS)

    Mendoza-Coto, Alejandro; Díaz-Méndez, Rogelio; Pupillo, Guido

    2016-06-01

    We present an algorithm for the simulation of the exact real-time dynamics of classical many-body systems with discrete energy levels. In the same spirit of kinetic Monte Carlo methods, a stochastic solution of the master equation is found, with no need to define any other phase-space construction. However, unlike existing methods, the present algorithm does not assume any particular statistical distribution to perform moves or to advance the time, and thus is a unique tool for the numerical exploration of fast and ultra-fast dynamical regimes. By decomposing the problem in a set of two-level subsystems, we find a natural variable step size, that is well defined from the normalization condition of the transition probabilities between the levels. We successfully test the algorithm with known exact solutions for non-equilibrium dynamics and equilibrium thermodynamical properties of Ising-spin models in one and two dimensions, and compare to standard implementations of kinetic Monte Carlo methods. The present algorithm is directly applicable to the study of the real-time dynamics of a large class of classical Markovian chains, and particularly to short-time situations where the exact evolution is relevant.

  19. The Speedster-EXD- A New Event-Driven Hybrid CMOS X-ray Detector

    NASA Astrophysics Data System (ADS)

    Griffith, Christopher V.; Falcone, Abraham D.; Prieskorn, Zachary R.; Burrows, David N.

    2016-01-01

    The Speedster-EXD is a new 64×64 pixel, 40-μm pixel pitch, 100-μm depletion depth hybrid CMOS x-ray detector with the capability of reading out only those pixels containing event charge, thus enabling fast effective frame rates. A global charge threshold can be specified, and pixels containing charge above this threshold are flagged and read out. The Speedster detector has also been designed with other advanced in-pixel features to improve performance, including a low-noise, high-gain capacitive transimpedance amplifier that eliminates interpixel capacitance crosstalk (IPC), and in-pixel correlated double sampling subtraction to reduce reset noise. We measure the best energy resolution on the Speedster-EXD detector to be 206 eV (3.5%) at 5.89 keV and 172 eV (10.0%) at 1.49 keV. The average IPC to the four adjacent pixels is measured to be 0.25%±0.2% (i.e., consistent with zero). The pixel-to-pixel gain variation is measured to be 0.80%±0.03%, and a Monte Carlo simulation is applied to better characterize the contributions to the energy resolution.

  20. Solute transport processes in flow-event-driven stream-aquifer interaction

    NASA Astrophysics Data System (ADS)

    Xie, Yueqing; Cook, Peter G.; Simmons, Craig T.

    2016-07-01

    The interaction between streams and groundwater controls key features of the stream hydrograph and chemograph. Since surface runoff is usually less saline than groundwater, flow events are usually accompanied by declines in stream salinity. In this paper, we use numerical modelling to show that, at any particular monitoring location: (i) the increase in stream stage associated with a flow event will precede the decrease in solute concentration (arrival time lag for solutes); and (ii) the decrease in stream stage following the flow peak will usually precede the subsequent return (increase) in solute concentration (return time lag). Both arrival time lag and return time lag increase with increasing wave duration. However, arrival time lag decreases with increasing wave amplitude, whereas return time lag increases. Furthermore, while arrival time lag is most sensitive to parameters that control river velocity (channel roughness and stream slope), return time lag is most sensitive to groundwater parameters (aquifer hydraulic conductivity, recharge rate, and dispersitivity). Additionally, the absolute magnitude of the decrease in river concentration is sensitive to both river and groundwater parameters. Our simulations also show that in-stream mixing is dominated by wave propagation and bank storage processes, and in-stream dispersion has a relatively minor effect on solute concentrations. This has important implications for spreading of contaminants released to streams. Our work also demonstrates that a high contribution of pre-event water (or groundwater) within the flow hydrograph can be caused by the combination of in-stream and bank storage exchange processes, and does not require transport of pre-event water through the catchment.

  1. Automatic CT simulation optimization for radiation therapy: A general strategy

    SciTech Connect

    Li, Hua Chen, Hsin-Chen; Tan, Jun; Gay, Hiram; Michalski, Jeff M.; Mutic, Sasa; Yu, Lifeng; Anastasio, Mark A.; Low, Daniel A.

    2014-03-15

    Purpose: In radiation therapy, x-ray computed tomography (CT) simulation protocol specifications should be driven by the treatment planning requirements in lieu of duplicating diagnostic CT screening protocols. The purpose of this study was to develop a general strategy that allows for automatically, prospectively, and objectively determining the optimal patient-specific CT simulation protocols based on radiation-therapy goals, namely, maintenance of contouring quality and integrity while minimizing patient CT simulation dose. Methods: The authors proposed a general prediction strategy that provides automatic optimal CT simulation protocol selection as a function of patient size and treatment planning task. The optimal protocol is the one that delivers the minimum dose required to provide a CT simulation scan that yields accurate contours. Accurate treatment plans depend on accurate contours in order to conform the dose to actual tumor and normal organ positions. An image quality index, defined to characterize how simulation scan quality affects contour delineation, was developed and used to benchmark the contouring accuracy and treatment plan quality within the predication strategy. A clinical workflow was developed to select the optimal CT simulation protocols incorporating patient size, target delineation, and radiation dose efficiency. An experimental study using an anthropomorphic pelvis phantom with added-bolus layers was used to demonstrate how the proposed prediction strategy could be implemented and how the optimal CT simulation protocols could be selected for prostate cancer patients based on patient size and treatment planning task. Clinical IMRT prostate treatment plans for seven CT scans with varied image quality indices were separately optimized and compared to verify the trace of target and organ dosimetry coverage. Results: Based on the phantom study, the optimal image quality index for accurate manual prostate contouring was 4.4. The optimal tube

  2. A General Relativistic Magnetohydrodynamic Simulation of Jet Formation

    NASA Technical Reports Server (NTRS)

    Nishikawa, K.-I.; Richardson, G.; Koide, S.; Shibata, K.; Kudoh, T.; Hardee, P.; Fishman, G. J.

    2005-01-01

    We have performed a fully three-dimensional general relativistic magnetohydrodynamic (GRMHD) simulation ofjet formation from a thin accretion disk around a Schwarzschild black hole with a free-falling corona. The initial simulation results show that a bipolar jet (velocity approx.0.3c) is created, as shown by previous two-dimensional axi- symmetric simulations with mirror symmetry at the equator. The three-dimensional simulation ran over 100 light crossing time units (T(sub s) = r(sub s)/c, where r(sub s = 2GM/c(sup 2), which is considerably longer than the previous simulations. We show that the jet is initially formed as predicted owing in part to magnetic pressure from the twisting of the initially uniform magnetic field and from gas pressure associated with shock formation in the region around r = 3r(sub s). At later times, the accretion disk becomes thick and the jet fades resulting in a wind that is ejected from the surface ofthe thickened (torus-like) disk. It should be noted that no streaming matter from a donor is included at the outer boundary in the simulation (an isolated black hole not binary black hole). The wind flows outward with a wider angle than the initial jet. The widening of the jet is consistent with the outward-moving torsional Alfven waves. This evolution of disk-jet coupling suggests that the jet fades with a thickened accretion disk because of the iack of streaming materiai from an accompanying star.

  3. A Distributed Laboratory for Event-Driven Coastal Prediction and Hazard Planning

    NASA Astrophysics Data System (ADS)

    Bogden, P.; Allen, G.; MacLaren, J.; Creager, G. J.; Flournoy, L.; Sheng, Y. P.; Graber, H.; Graves, S.; Conover, H.; Luettich, R.; Perrie, W.; Ramakrishnan, L.; Reed, D. A.; Wang, H. V.

    2006-12-01

    The 2005 Atlantic hurricane season was the most active in recorded history. Collectively, 2005 hurricanes caused more than 2,280 deaths and record damages of over 100 billion dollars. Of the storms that made landfall, Dennis, Emily, Katrina, Rita, and Wilma caused most of the destruction. Accurate predictions of storm-driven surge, wave height, and inundation can save lives and help keep recovery costs down, provided the information gets to emergency response managers in time. The information must be available well in advance of landfall so that responders can weigh the costs of unnecessary evacuation against the costs of inadequate preparation. The SURA Coastal Ocean Observing and Prediction (SCOOP) Program is a multi-institution collaboration implementing a modular, distributed service-oriented architecture for real time prediction and visualization of the impacts of extreme atmospheric events. The modular infrastructure enables real-time prediction of multi- scale, multi-model, dynamic, data-driven applications. SURA institutions are working together to create a virtual and distributed laboratory integrating coastal models, simulation data, and observations with computational resources and high speed networks. The loosely coupled architecture allows teams of computer and coastal scientists at multiple institutions to innovate complex system components that are interconnected with relatively stable interfaces. The operational system standardizes at the interface level to enable substantial innovation by complementary communities of coastal and computer scientists. This architectural philosophy solves a long-standing problem associated with the transition from research to operations. The SCOOP Program thereby implements a prototype laboratory consistent with the vision of a national, multi-agency initiative called the Integrated Ocean Observing System (IOOS). Several service- oriented components of the SCOOP enterprise architecture have already been designed and

  4. Characterization and development of an event-driven hybrid CMOS x-ray detector

    NASA Astrophysics Data System (ADS)

    Griffith, Christopher

    2015-06-01

    Hybrid CMOS detectors (HCD) have provided great benefit to the infrared and optical fields of astronomy, and they are poised to do the same for X-ray astronomy. Infrared HCDs have already flown on the Hubble Space Telescope and the Wide-Field Infrared Survey Explorer (WISE) mission and are slated to fly on the James Webb Space Telescope (JWST). Hybrid CMOS X-ray detectors offer low susceptibility to radiation damage, low power consumption, and fast readout time to avoid pile-up. The fast readout time is necessary for future high throughput X-ray missions. The Speedster-EXD X-ray HCD presented in this dissertation offers new in-pixel features and reduces known noise sources seen on previous generation HCDs. The Speedster-EXD detector makes a great step forward in the development of these detectors for future space missions. This dissertation begins with an overview of future X-ray space mission concepts and their detector requirements. The background on the physics of semiconductor devices and an explanation of the detection of X-rays with these devices will be discussed followed by a discussion on CCDs and CMOS detectors. Next, hybrid CMOS X-ray detectors will be explained including their advantages and disadvantages. The Speedster-EXD detector and its new features will be outlined including its ability to only read out pixels which contain X-ray events. Test stand design and construction for the Speedster-EXD detector is outlined and the characterization of each parameter on two Speedster-EXD detectors is detailed including read noise, dark current, interpixel capacitance crosstalk (IPC), and energy resolution. Gain variation is also characterized, and a Monte Carlo simulation of its impact on energy resolution is described. This analysis shows that its effect can be successfully nullified with proper calibration, which would be important for a flight mission. Appendix B contains a study of the extreme tidal disruption event, Swift J1644+57, to search for

  5. Localized and generalized simulated wear of resin composites.

    PubMed

    Barkmeier, W W; Takamizawa, T; Erickson, R L; Tsujimoto, A; Latta, M; Miyazaki, M

    2015-01-01

    A laboratory study was conducted to examine the wear of resin composite materials using both a localized and generalized wear simulation model. Twenty specimens each of seven resin composites (Esthet•X HD [HD], Filtek Supreme Ultra [SU], Herculite Ultra [HU], SonicFill [SF], Tetric EvoCeram Bulk Fill [TB], Venus Diamond [VD], and Z100 Restorative [Z]) were subjected to a wear challenge of 400,000 cycles for both localized and generalized wear in a Leinfelder-Suzuki wear simulator (Alabama machine). The materials were placed in custom cylinder-shaped stainless steel fixtures. A stainless steel ball bearing (r=2.387 mm) was used as the antagonist for localized wear, and a stainless steel, cylindrical antagonist with a flat tip was used for generalized wear. A water slurry of polymethylmethacrylate (PMMA) beads was used as the abrasive media. A noncontact profilometer (Proscan 2100) with Proscan software was used to digitize the surface contours of the pretest and posttest specimens. AnSur 3D software was used for wear assessment. For localized testing, maximum facet depth (μm) and volume loss (mm(3)) were used to compare the materials. The mean depth of the facet surface (μm) and volume loss (mm(3)) were used for comparison of the generalized wear specimens. A one-way analysis of variance (ANOVA) and Tukey post hoc test were used for data analysis of volume loss for both localized and generalized wear, maximum facet depth for localized wear, and mean depth of the facet for generalized wear. The results for localized wear simulation were as follows [mean (standard deviation)]: maximum facet depth (μm)--Z, 59.5 (14.7); HU, 99.3 (16.3); SU, 102.8 (13.8); HD, 110.2 (13.3); VD, 114.0 (10.3); TB, 125.5 (12.1); SF, 195.9 (16.9); volume loss (mm(3))--Z, 0.013 (0.002); SU, 0.026 (0.006); HU, 0.043 (0.008); VD, 0.057 (0.009); HD, 0.058 (0.014); TB, 0.061 (0.010); SF, 0.135 (0.024). Generalized wear simulation results were as follows: mean depth of facet (μm)--Z, 9.3 (3

  6. Data Albums: An Event Driven Search, Aggregation and Curation Tool for Earth Science

    NASA Technical Reports Server (NTRS)

    Ramachandran, Rahul; Kulkarni, Ajinkya; Maskey, Manil; Bakare, Rohan; Basyal, Sabin; Li, Xiang; Flynn, Shannon

    2014-01-01

    One of the largest continuing challenges in any Earth science investigation is the discovery and access of useful science content from the increasingly large volumes of Earth science data and related information available. Approaches used in Earth science research such as case study analysis and climatology studies involve gathering discovering and gathering diverse data sets and information to support the research goals. Research based on case studies involves a detailed description of specific weather events using data from different sources, to characterize physical processes in play for a specific event. Climatology-based research tends to focus on the representativeness of a given event, by studying the characteristics and distribution of a large number of events. This allows researchers to generalize characteristics such as spatio-temporal distribution, intensity, annual cycle, duration, etc. To gather relevant data and information for case studies and climatology analysis is both tedious and time consuming. Current Earth science data systems are designed with the assumption that researchers access data primarily by instrument or geophysical parameter. Those who know exactly the datasets of interest can obtain the specific files they need using these systems. However, in cases where researchers are interested in studying a significant event, they have to manually assemble a variety of datasets relevant to it by searching the different distributed data systems. In these cases, a search process needs to be organized around the event rather than observing instruments. In addition, the existing data systems assume users have sufficient knowledge regarding the domain vocabulary to be able to effectively utilize their catalogs. These systems do not support new or interdisciplinary researchers who may be unfamiliar with the domain terminology. This paper presents a specialized search, aggregation and curation tool for Earth science to address these existing

  7. A generalized Poisson solver for first-principles device simulations

    SciTech Connect

    Bani-Hashemian, Mohammad Hossein; VandeVondele, Joost; Brück, Sascha; Luisier, Mathieu

    2016-01-28

    Electronic structure calculations of atomistic systems based on density functional theory involve solving the Poisson equation. In this paper, we present a plane-wave based algorithm for solving the generalized Poisson equation subject to periodic or homogeneous Neumann conditions on the boundaries of the simulation cell and Dirichlet type conditions imposed at arbitrary subdomains. In this way, source, drain, and gate voltages can be imposed across atomistic models of electronic devices. Dirichlet conditions are enforced as constraints in a variational framework giving rise to a saddle point problem. The resulting system of equations is then solved using a stationary iterative method in which the generalized Poisson operator is preconditioned with the standard Laplace operator. The solver can make use of any sufficiently smooth function modelling the dielectric constant, including density dependent dielectric continuum models. For all the boundary conditions, consistent derivatives are available and molecular dynamics simulations can be performed. The convergence behaviour of the scheme is investigated and its capabilities are demonstrated.

  8. Better Space Construction Decisions by Instructional Program Simulation Utilizing the Generalized Academic Simulation Programs.

    ERIC Educational Resources Information Center

    Apker, Wesley

    This school district utilized the generalized academic simulation programs (GASP) to assist in making decisions regarding the kinds of facilities that should be constructed at Pilchuck Senior High School. Modular scheduling was one of the basic educational parameters used in determining the number and type of facilities needed. The objectives of…

  9. Event-driven, pattern-based methodology for cost-effective development of standardized personal health devices.

    PubMed

    Martínez-Espronceda, Miguel; Trigo, Jesús D; Led, Santiago; Barrón-González, H Gilberto; Redondo, Javier; Baquero, Alfonso; Serrano, Luis

    2014-11-01

    Experiences applying standards in personal health devices (PHDs) show an inherent trade-off between interoperability and costs (in terms of processing load and development time). Therefore, reducing hardware and software costs as well as time-to-market is crucial for standards adoption. The ISO/IEEE11073 PHD family of standards (also referred to as X73PHD) provides interoperable communication between PHDs and aggregators. Nevertheless, the responsibility of achieving inexpensive implementations of X73PHD in limited resource microcontrollers falls directly on the developer. Hence, the authors previously presented a methodology based on patterns to implement X73-compliant PHDs into devices with low-voltage low-power constraints. That version was based on multitasking, which required additional features and resources. This paper therefore presents an event-driven evolution of the patterns-based methodology for cost-effective development of standardized PHDs. The results of comparing between the two versions showed that the mean values of decrease in memory consumption and cycles of latency are 11.59% and 45.95%, respectively. In addition, several enhancements in terms of cost-effectiveness and development time can be derived from the new version of the methodology. Therefore, the new approach could help in producing cost-effective X73-compliant PHDs, which in turn could foster the adoption of standards.

  10. Event driven executive

    NASA Technical Reports Server (NTRS)

    Tulpule, Bhalchandra R. (Inventor); Collins, Robert E. (Inventor); Cheetham, John (Inventor); Cornwell, Smith (Inventor)

    1990-01-01

    Tasks may be planned for execution on a single processor or are split up by the designer for execution among a plurality of signal processors. The tasks are modeled using a design aid called a precedence graph, from which a dependency table and a prerequisite table are established for reference within each processor. During execution, at the completion of a given task, an end of task interrupt is provided from any processor which has completed a task to any and all other processors including itself in which completion of that task is a prerequisite for commencement of any dependent tasks. The relevant updated data may be transferred by the processor either before or after signalling task completion to the processors needing the updated data prior to commencing execution of the dependent tasks. Coherency may be ensured, however, by sending the data before the interrupt. When the end of task interrupt is received in a processor, its dependency table is consulted to determine those tasks dependent upon completion of the task which has just been signalled as completed, and task dependency signals indicative thereof are provided and stored in a current status list of a prerequisite table. The current status of all current prerequisites are compared to the complete prerequisites listed for all affected tasks and those tasks for which the comparison indicates that all prerequisites have been met are queued for execution in a selected order.

  11. A General Simulator for Reaction-Based Biogeochemical Processes

    SciTech Connect

    Fang, Yilin; Yabusaki, Steven B.; Yeh, George

    2006-02-01

    As more complex biogeochemical situations are being investigated (e.g., evolving reactivity, passivation of reactive surfaces, dissolution of sorbates), there is a growing need for biogeochemical simulators to flexibly and facilely address new reaction forms and rate laws. This paper presents an approach that accommodates this need to efficiently simulate general biogeochemical processes, while insulating the user from additional code development. The approach allows for the automatic extraction of fundamental reaction stoichiometry and thermodynamics from a standard chemistry database, and the symbolic entry of arbitrarily complex user-specified reaction forms, rate laws, and equilibria. The user-specified equilibrium and kinetic reactions (i.e., reactions not defined in the format of the standardized database) are interpreted by the Maple symbolic mathematical software package. FORTRAN 90 code is then generated by Maple for (1) the analytical Jacobian matrix (if preferred over the numerical Jacobian matrix) used in the Newton-Raphson solution procedure, and (2) the residual functions for user-specified equilibrium expressions and rate laws. Matrix diagonalization eliminates the need to conceptualize the system of reactions as a tableau, while identifying a minimum rank set of basis species with enhanced numerical convergence properties. The newly generated code, which is designed to operate in the BIOGEOCHEM biogeochemical simulator, is then compiled and linked into the BIOGEOCHEM executable. With these features, users can avoid recoding the simulator to accept new equilibrium expressions or kinetic rate laws, while still taking full advantage of the stoichiometry and thermodynamics provided by an existing chemical database. Thus, the approach introduces efficiencies in the specification of biogeochemical reaction networks and eliminates opportunities for mistakes in preparing input files and coding errors. Test problems are used to demonstrate the features of

  12. General Relativistic Radiative Transfer and General Relativistic MHD Simulations of Accretion and Outflows of Black Holes

    NASA Technical Reports Server (NTRS)

    Fuerst, Steven V.; Mizuno, Yosuke; Nishikawa, Ken-Ichi; Wu, Kinwah

    2007-01-01

    We have calculated the emission from relativistic flows in black hole systems using a fully general relativistic radiative transfer, with flow structures obtained by general relativistic magnetohydrodynamic simulations. We consider thermal free-free emission and thermal synchrotron emission. Bright filament-like features are found protruding (visually) from the accretion disk surface, which are enhancements of synchrotron emission when the magnetic field is roughly aligned with the line-of-sight in the co-moving frame. The features move back and forth as the accretion flow evolves, but their visibility and morphology are robust. We propose that variations and location drifts of the features are responsible for certain X-ray quasi-periodic oscillations (QPOs) observed in black-hole X-ray binaries.

  13. A generalized well management scheme for reservoir simulation

    SciTech Connect

    Fang, W.Y.; Lo, K.K.

    1995-12-31

    A new generalized well management scheme has been formulated to maximize oil production under multiple facility constraints. The scheme integrates reserve performance, wellbore hydraulics, surface facility constraints and lift-gas allocation o maximize oil production. It predicts well performance based on up-to-date hydraulics and reservoir conditions. The scheme has been implemented in a black oil simulator by using Separable programming and Simplex algorithm. This production optimization scheme has been applied to two full-field models. The oil production of these two full-field models is limited by water, gas and liquid haling limits at both field- and flow station-levels. The gas production is limited by injectivity as well as gas handling limits. For a 12-year production forecast on Field A, the new scheme increased oil production by 3 to 9%. For a 12-year production forecast on field B, the new scheme increased oil production by 7 to 9%.

  14. Amyloid oligomer structure characterization from simulations: A general method

    SciTech Connect

    Nguyen, Phuong H.; Li, Mai Suan

    2014-03-07

    Amyloid oligomers and plaques are composed of multiple chemically identical proteins. Therefore, one of the first fundamental problems in the characterization of structures from simulations is the treatment of the degeneracy, i.e., the permutation of the molecules. Second, the intramolecular and intermolecular degrees of freedom of the various molecules must be taken into account. Currently, the well-known dihedral principal component analysis method only considers the intramolecular degrees of freedom, and other methods employing collective variables can only describe intermolecular degrees of freedom at the global level. With this in mind, we propose a general method that identifies all the structures accurately. The basis idea is that the intramolecular and intermolecular states are described in terms of combinations of single-molecule and double-molecule states, respectively, and the overall structures of oligomers are the product basis of the intramolecular and intermolecular states. This way, the degeneracy is automatically avoided. The method is illustrated on the conformational ensemble of the tetramer of the Alzheimer's peptide Aβ{sub 9−40}, resulting from two atomistic molecular dynamics simulations in explicit solvent, each of 200 ns, starting from two distinct structures.

  15. Amyloid oligomer structure characterization from simulations: A general method

    NASA Astrophysics Data System (ADS)

    Nguyen, Phuong H.; Li, Mai Suan; Derreumaux, Philippe

    2014-03-01

    Amyloid oligomers and plaques are composed of multiple chemically identical proteins. Therefore, one of the first fundamental problems in the characterization of structures from simulations is the treatment of the degeneracy, i.e., the permutation of the molecules. Second, the intramolecular and intermolecular degrees of freedom of the various molecules must be taken into account. Currently, the well-known dihedral principal component analysis method only considers the intramolecular degrees of freedom, and other methods employing collective variables can only describe intermolecular degrees of freedom at the global level. With this in mind, we propose a general method that identifies all the structures accurately. The basis idea is that the intramolecular and intermolecular states are described in terms of combinations of single-molecule and double-molecule states, respectively, and the overall structures of oligomers are the product basis of the intramolecular and intermolecular states. This way, the degeneracy is automatically avoided. The method is illustrated on the conformational ensemble of the tetramer of the Alzheimer's peptide Aβ9-40, resulting from two atomistic molecular dynamics simulations in explicit solvent, each of 200 ns, starting from two distinct structures.

  16. Amyloid oligomer structure characterization from simulations: a general method.

    PubMed

    Nguyen, Phuong H; Li, Mai Suan; Derreumaux, Philippe

    2014-03-07

    Amyloid oligomers and plaques are composed of multiple chemically identical proteins. Therefore, one of the first fundamental problems in the characterization of structures from simulations is the treatment of the degeneracy, i.e., the permutation of the molecules. Second, the intramolecular and intermolecular degrees of freedom of the various molecules must be taken into account. Currently, the well-known dihedral principal component analysis method only considers the intramolecular degrees of freedom, and other methods employing collective variables can only describe intermolecular degrees of freedom at the global level. With this in mind, we propose a general method that identifies all the structures accurately. The basis idea is that the intramolecular and intermolecular states are described in terms of combinations of single-molecule and double-molecule states, respectively, and the overall structures of oligomers are the product basis of the intramolecular and intermolecular states. This way, the degeneracy is automatically avoided. The method is illustrated on the conformational ensemble of the tetramer of the Alzheimer's peptide Aβ9-40, resulting from two atomistic molecular dynamics simulations in explicit solvent, each of 200 ns, starting from two distinct structures.

  17. Generalized Fluid System Simulation Program, Version 6.0

    NASA Technical Reports Server (NTRS)

    Majumdar, A. K.; LeClair, A. C.; Moore, A.; Schallhorn, P. A.

    2013-01-01

    The Generalized Fluid System Simulation Program (GFSSP) is a finite-volume based general-purpose computer program for analyzing steady state and time-dependant flow rates, pressures, temperatures, and concentrations in a complex flow network. The program is capable of modeling real fluids with phase changes, compressibility, mixture thermodynamics, conjugate heat transfer between solid and fluid, fluid transients, pumps, compressors and external body forces such as gravity and centrifugal. The thermo-fluid system to be analyzed is discretized into nodes, branches, and conductors. The scalar properties such as pressure, temperature, and concentrations are calculated at nodes. Mass flow rates and heat transfer rates are computed in branches and conductors. The graphical user interface allows users to build their models using the 'point, drag, and click' method; the users can also run their models and post-process the results in the same environment. The integrated fluid library supplies thermodynamic and thermo-physical properties of 36 fluids, and 24 different resistance/source options are provided for modeling momentum sources or sinks in the branches. This Technical Memorandum illustrates the application and verification of the code through 25 demonstrated example problems.

  18. Generalized Fluid System Simulation Program (GFSSP) - Version 6

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok; LeClair, Andre; Moore, Ric; Schallhorn, Paul

    2015-01-01

    The Generalized Fluid System Simulation Program (GFSSP) is a finite-volume based general-purpose computer program for analyzing steady state and time-dependent flow rates, pressures, temperatures, and concentrations in a complex flow network. The program is capable of modeling real fluids with phase changes, compressibility, mixture thermodynamics, conjugate heat transfer between solid and fluid, fluid transients, pumps, compressors, flow control valves and external body forces such as gravity and centrifugal. The thermo-fluid system to be analyzed is discretized into nodes, branches, and conductors. The scalar properties such as pressure, temperature, and concentrations are calculated at nodes. Mass flow rates and heat transfer rates are computed in branches and conductors. The graphical user interface allows users to build their models using the 'point, drag, and click' method; the users can also run their models and post-process the results in the same environment. The integrated fluid library supplies thermodynamic and thermo-physical properties of 36 fluids, and 24 different resistance/source options are provided for modeling momentum sources or sinks in the branches. Users can introduce new physics, non-linear and time-dependent boundary conditions through user-subroutine.

  19. Generalized Fluid System Simulation Program, Version 5.0-Educational

    NASA Technical Reports Server (NTRS)

    Majumdar, A. K.

    2011-01-01

    The Generalized Fluid System Simulation Program (GFSSP) is a finite-volume based general-purpose computer program for analyzing steady state and time-dependent flow rates, pressures, temperatures, and concentrations in a complex flow network. The program is capable of modeling real fluids with phase changes, compressibility, mixture thermodynamics, conjugate heat transfer between solid and fluid, fluid transients, pumps, compressors and external body forces such as gravity and centrifugal. The thermofluid system to be analyzed is discretized into nodes, branches, and conductors. The scalar properties such as pressure, temperature, and concentrations are calculated at nodes. Mass flow rates and heat transfer rates are computed in branches and conductors. The graphical user interface allows users to build their models using the point, drag and click method; the users can also run their models and post-process the results in the same environment. The integrated fluid library supplies thermodynamic and thermo-physical properties of 36 fluids and 21 different resistance/source options are provided for modeling momentum sources or sinks in the branches. This Technical Memorandum illustrates the application and verification of the code through 12 demonstrated example problems.

  20. Diffusion microscopist simulator: a general Monte Carlo simulation system for diffusion magnetic resonance imaging.

    PubMed

    Yeh, Chun-Hung; Schmitt, Benoît; Le Bihan, Denis; Li-Schlittgen, Jing-Rebecca; Lin, Ching-Po; Poupon, Cyril

    2013-01-01

    This article describes the development and application of an integrated, generalized, and efficient Monte Carlo simulation system for diffusion magnetic resonance imaging (dMRI), named Diffusion Microscopist Simulator (DMS). DMS comprises a random walk Monte Carlo simulator and an MR image synthesizer. The former has the capacity to perform large-scale simulations of Brownian dynamics in the virtual environments of neural tissues at various levels of complexity, and the latter is flexible enough to synthesize dMRI datasets from a variety of simulated MRI pulse sequences. The aims of DMS are to give insights into the link between the fundamental diffusion process in biological tissues and the features observed in dMRI, as well as to provide appropriate ground-truth information for the development, optimization, and validation of dMRI acquisition schemes for different applications. The validity, efficiency, and potential applications of DMS are evaluated through four benchmark experiments, including the simulated dMRI of white matter fibers, the multiple scattering diffusion imaging, the biophysical modeling of polar cell membranes, and the high angular resolution diffusion imaging and fiber tractography of complex fiber configurations. We expect that this novel software tool would be substantially advantageous to clarify the interrelationship between dMRI and the microscopic characteristics of brain tissues, and to advance the biophysical modeling and the dMRI methodologies.

  1. Hospitable archean climates simulated by a general circulation model.

    PubMed

    Wolf, E T; Toon, O B

    2013-07-01

    Evidence from ancient sediments indicates that liquid water and primitive life were present during the Archean despite the faint young Sun. To date, studies of Archean climate typically utilize simplified one-dimensional models that ignore clouds and ice. Here, we use an atmospheric general circulation model coupled to a mixed-layer ocean model to simulate the climate circa 2.8 billion years ago when the Sun was 20% dimmer than it is today. Surface properties are assumed to be equal to those of the present day, while ocean heat transport varies as a function of sea ice extent. Present climate is duplicated with 0.06 bar of CO2 or alternatively with 0.02 bar of CO2 and 0.001 bar of CH4. Hot Archean climates, as implied by some isotopic reconstructions of ancient marine cherts, are unattainable even in our warmest simulation having 0.2 bar of CO2 and 0.001 bar of CH4. However, cooler climates with significant polar ice, but still dominated by open ocean, can be maintained with modest greenhouse gas amounts, posing no contradiction with CO2 constraints deduced from paleosols or with practical limitations on CH4 due to the formation of optically thick organic hazes. Our results indicate that a weak version of the faint young Sun paradox, requiring only that some portion of the planet's surface maintain liquid water, may be resolved with moderate greenhouse gas inventories. Thus, hospitable late Archean climates are easily obtained in our climate model.

  2. Extension of Generalized Fluid System Simulation Program's Fluid Property Database

    NASA Technical Reports Server (NTRS)

    Patel, Kishan

    2011-01-01

    This internship focused on the development of additional capabilities for the General Fluid Systems Simulation Program (GFSSP). GFSSP is a thermo-fluid code used to evaluate system performance by a finite volume-based network analysis method. The program was developed primarily to analyze the complex internal flow of propulsion systems and is capable of solving many problems related to thermodynamics and fluid mechanics. GFSSP is integrated with thermodynamic programs that provide fluid properties for sub-cooled, superheated, and saturation states. For fluids that are not included in the thermodynamic property program, look-up property tables can be provided. The look-up property tables of the current release version can only handle sub-cooled and superheated states. The primary purpose of the internship was to extend the look-up tables to handle saturated states. This involves a) generation of a property table using REFPROP, a thermodynamic property program that is widely used, and b) modifications of the Fortran source code to read in an additional property table containing saturation data for both saturated liquid and saturated vapor states. Also, a method was implemented to calculate the thermodynamic properties of user-fluids within the saturation region, given values of pressure and enthalpy. These additions required new code to be written, and older code had to be adjusted to accommodate the new capabilities. Ultimately, the changes will lead to the incorporation of this new capability in future versions of GFSSP. This paper describes the development and validation of the new capability.

  3. Sensitivity simulations of superparameterised convection in a general circulation model

    NASA Astrophysics Data System (ADS)

    Rybka, Harald; Tost, Holger

    2015-04-01

    Cloud Resolving Models (CRMs) covering a horizontal grid spacing from a few hundred meters up to a few kilometers have been used to explicitly resolve small-scale and mesoscale processes. Special attention has been paid to realistically represent cloud dynamics and cloud microphysics involving cloud droplets, ice crystals, graupel and aerosols. The entire variety of physical processes on the small-scale interacts with the larger-scale circulation and has to be parameterised on the coarse grid of a general circulation model (GCM). Since more than a decade an approach to connect these two types of models which act on different scales has been developed to resolve cloud processes and their interactions with the large-scale flow. The concept is to use an ensemble of CRM grid cells in a 2D or 3D configuration in each grid cell of the GCM to explicitly represent small-scale processes avoiding the use of convection and large-scale cloud parameterisations which are a major source for uncertainties regarding clouds. The idea is commonly known as superparameterisation or cloud-resolving convection parameterisation. This study presents different simulations of an adapted Earth System Model (ESM) connected to a CRM which acts as a superparameterisation. Simulations have been performed with the ECHAM/MESSy atmospheric chemistry (EMAC) model comparing conventional GCM runs (including convection and large-scale cloud parameterisations) with the improved superparameterised EMAC (SP-EMAC) modeling one year with prescribed sea surface temperatures and sea ice content. The sensitivity of atmospheric temperature, precipiation patterns, cloud amount and types is observed changing the embedded CRM represenation (orientation, width, no. of CRM cells, 2D vs. 3D). Additionally, we also evaluate the radiation balance with the new model configuration, and systematically analyse the impact of tunable parameters on the radiation budget and hydrological cycle. Furthermore, the subgrid

  4. Generalized Fluid System Simulation Program, Version 6.0

    NASA Technical Reports Server (NTRS)

    Majumdar, A. K.; LeClair, A. C.; Moore, R.; Schallhorn, P. A.

    2016-01-01

    The Generalized Fluid System Simulation Program (GFSSP) is a general purpose computer program for analyzing steady state and time-dependent flow rates, pressures, temperatures, and concentrations in a complex flow network. The program is capable of modeling real fluids with phase changes, compressibility, mixture thermodynamics, conjugate heat transfer between solid and fluid, fluid transients, pumps, compressors, and external body forces such as gravity and centrifugal. The thermofluid system to be analyzed is discretized into nodes, branches, and conductors. The scalar properties such as pressure, temperature, and concentrations are calculated at nodes. Mass flow rates and heat transfer rates are computed in branches and conductors. The graphical user interface allows users to build their models using the 'point, drag, and click' method; the users can also run their models and post-process the results in the same environment. Two thermodynamic property programs (GASP/WASP and GASPAK) provide required thermodynamic and thermophysical properties for 36 fluids: helium, methane, neon, nitrogen, carbon monoxide, oxygen, argon, carbon dioxide, fluorine, hydrogen, parahydrogen, water, kerosene (RP-1), isobutene, butane, deuterium, ethane, ethylene, hydrogen sulfide, krypton, propane, xenon, R-11, R-12, R-22, R-32, R-123, R-124, R-125, R-134A, R-152A, nitrogen trifluoride, ammonia, hydrogen peroxide, and air. The program also provides the options of using any incompressible fluid with constant density and viscosity or ideal gas. The users can also supply property tables for fluids that are not in the library. Twenty-four different resistance/source options are provided for modeling momentum sources or sinks in the branches. These options include pipe flow, flow through a restriction, noncircular duct, pipe flow with entrance and/or exit losses, thin sharp orifice, thick orifice, square edge reduction, square edge expansion, rotating annular duct, rotating radial duct

  5. GLoBES: General Long Baseline Experiment Simulator

    NASA Astrophysics Data System (ADS)

    Huber, Patrick; Kopp, Joachim; Lindner, Manfred; Rolinec, Mark; Winter, Walter

    2007-09-01

    GLoBES (General Long Baseline Experiment Simulator) is a flexible software package to simulate neutrino oscillation long baseline and reactor experiments. On the one hand, it contains a comprehensive abstract experiment definition language (AEDL), which allows to describe most classes of long baseline experiments at an abstract level. On the other hand, it provides a C-library to process the experiment information in order to obtain oscillation probabilities, rate vectors, and Δχ-values. Currently, GLoBES is available for GNU/Linux. Since the source code is included, the port to other operating systems is in principle possible. GLoBES is an open source code that has previously been described in Computer Physics Communications 167 (2005) 195 and in Ref. [7]). The source code and a comprehensive User Manual for GLoBES v3.0.8 is now available from the CPC Program Library as described in the Program Summary below. The home of GLobES is http://www.mpi-hd.mpg.de/~globes/. Program summaryProgram title: GLoBES version 3.0.8 Catalogue identifier: ADZI_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADZI_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 145 295 No. of bytes in distributed program, including test data, etc.: 1 811 892 Distribution format: tar.gz Programming language: C Computer: GLoBES builds and installs on 32bit and 64bit Linux systems Operating system: 32bit or 64bit Linux RAM: Typically a few MBs Classification: 11.1, 11.7, 11.10 External routines: GSL—The GNU Scientific Library, www.gnu.org/software/gsl/ Nature of problem: Neutrino oscillations are now established as the leading flavor transition mechanism for neutrinos. In a long history of many experiments, see, e.g., [1], two oscillation frequencies have been identified: The fast atmospheric

  6. NON-SPATIAL CALIBRATIONS OF A GENERAL UNIT MODEL FOR ECOSYSTEM SIMULATIONS. (R827169)

    EPA Science Inventory

    General Unit Models simulate system interactions aggregated within one spatial unit of resolution. For unit models to be applicable to spatial computer simulations, they must be formulated generally enough to simulate all habitat elements within the landscape. We present the d...

  7. NON-SPATIAL CALIBRATIONS OF A GENERAL UNIT MODEL FOR ECOSYSTEM SIMULATIONS. (R825792)

    EPA Science Inventory

    General Unit Models simulate system interactions aggregated within one spatial unit of resolution. For unit models to be applicable to spatial computer simulations, they must be formulated generally enough to simulate all habitat elements within the landscape. We present the d...

  8. Generalized simulation technique for turbojet engine system analysis

    NASA Technical Reports Server (NTRS)

    Seldner, K.; Mihaloew, J. R.; Blaha, R. J.

    1972-01-01

    A nonlinear analog simulation of a turbojet engine was developed. The purpose of the study was to establish simulation techniques applicable to propulsion system dynamics and controls research. A schematic model was derived from a physical description of a J85-13 turbojet engine. Basic conservation equations were applied to each component along with their individual performance characteristics to derive a mathematical representation. The simulation was mechanized on an analog computer. The simulation was verified in both steady-state and dynamic modes by comparing analytical results with experimental data obtained from tests performed at the Lewis Research Center with a J85-13 engine. In addition, comparison was also made with performance data obtained from the engine manufacturer. The comparisons established the validity of the simulation technique.

  9. General approach to boat simulation in virtual reality systems

    NASA Astrophysics Data System (ADS)

    Aranov, Vladislav Y.; Belyaev, Sergey Y.

    2002-02-01

    The paper is dedicated to real time simulation of sport boats, particularly a kayak and high-speed skimming boat, for training goals. This training is issue of the day, since kayaking and riding a high-speed skimming boat are both extreme sports. Participating in such types of competitions puts sportsmen into danger, particularly due to rapids, waterfalls, different water streams, and other obstacles. In order to make the simulation realistic, it is necessary to calculate data for at least 30 frames per second. These calculations may take not more than 5% CPU time, because very time-consuming 3D rendering process takes the rest - 95% CPU time. This paper describes an approach for creating minimal boat simulator models that satisfy the mentioned requirements. Besides, this approach can be used for other watercraft models of this kind.

  10. Projectile General Motion in a Vacuum and a Spreadsheet Simulation

    ERIC Educational Resources Information Center

    Benacka, Jan

    2015-01-01

    This paper gives the solution and analysis of projectile motion in a vacuum if the launch and impact heights are not equal. Formulas for the maximum horizontal range and the corresponding angle are derived. An Excel application that simulates the motion is also presented, and the result of an experiment in which 38 secondary school students…

  11. Verifying Algorithms for Autonomous Aircraft by Simulation Generalities and Example

    NASA Technical Reports Server (NTRS)

    White, Allan L.

    2010-01-01

    An open question in Air Traffic Management is what procedures can be validated by simulation where the simulation shows that the probability of undesirable events is below the required level at some confidence level. The problem is including enough realism to be convincing while retaining enough efficiency to run the large number of trials needed for high confidence. The paper first examines the probabilistic interpretation of a typical requirement by a regulatory agency and computes the number of trials needed to establish the requirement at an equivalent confidence level. Since any simulation is likely to consider only one type of event and there are several types of events, the paper examines under what conditions this separate consideration is valid. The paper establishes a separation algorithm at the required confidence level where the aircraft operates under feedback control as is subject to perturbations. There is a discussion where it is shown that a scenario three of four orders of magnitude more complex is feasible. The question of what can be validated by simulation remains open, but there is reason to be optimistic.

  12. Generalized Maintenance Trainer Simulator: Development of Hardware and Software. Final Report.

    ERIC Educational Resources Information Center

    Towne, Douglas M.; Munro, Allen

    A general purpose maintenance trainer, which has the potential to simulate a wide variety of electronic equipments without hardware changes or new computer programs, has been developed and field tested by the Navy. Based on a previous laboratory model, the Generalized Maintenance Trainer Simulator (GMTS) is a relatively low cost trainer that…

  13. SimulaTEM: multislice simulations for general objects.

    PubMed

    Gómez-Rodríguez, A; Beltrán-Del-Río, L M; Herrera-Becerra, R

    2010-01-01

    In this work we present the program SimulaTEM for the simulation of high resolution micrographs and diffraction patterns. This is a program based on the multislice approach that does not assume a periodic object. It can calculate images from finite objects, from amorphous samples, from crystals, quasicrystals, grain boundaries, nanoparticles or arbitrary objects provided the coordinates of all the atoms can be supplied.

  14. SimGen: A General Simulation Method for Large Systems.

    PubMed

    Taylor, William R

    2017-02-03

    SimGen is a stand-alone computer program that reads a script of commands to represent complex macromolecules, including proteins and nucleic acids, in a structural hierarchy that can then be viewed using an integral graphical viewer or animated through a high-level application programming interface in C++. Structural levels in the hierarchy range from α-carbon or phosphate backbones through secondary structure to domains, molecules, and multimers with each level represented in an identical data structure that can be manipulated using the application programming interface. Unlike most coarse-grained simulation approaches, the higher-level objects represented in SimGen can be soft, allowing the lower-level objects that they contain to interact directly. The default motion simulated by SimGen is a Brownian-like diffusion that can be set to occur across all levels of representation in the hierarchy. Links can also be defined between objects, which, when combined with large high-level random movements, result in an effective search strategy for constraint satisfaction, including structure prediction from predicted pairwise distances. The implementation of SimGen makes use of the hierarchic data structure to avoid unnecessary calculation, especially for collision detection, allowing it to be simultaneously run and viewed on a laptop computer while simulating large systems of over 20,000 objects. It has been used previously to model complex molecular interactions including the motion of a myosin-V dimer "walking" on an actin fibre, RNA stem-loop packing, and the simulation of cell motion and aggregation. Several extensions to this original functionality are described.

  15. Generalized simulated tempering for exploring strong phase transitions.

    PubMed

    Kim, Jaegil; Straub, John E

    2010-10-21

    An extension of the simulation tempering algorithm is proposed. It is shown to be particularly suited to the exploration of first-order phase transition systems characterized by the backbending or S-loop in the statistical temperature or a microcanonical caloric curve. A guided Markov process in an auxiliary parameter space systematically combines a set of parametrized Tsallis-weight ensemble simulations, which are targeted to transform unstable or metastable energy states of canonical ensembles into stable ones and smoothly join ordered and disordered phases across phase transition regions via a succession of unimodal energy distributions. The inverse mapping between the sampling weight and the effective temperature enables an optimal selection of relevant Tsallis-weight parameters. A semianalytic expression for the biasing weight in parameter space is adaptively updated "on the fly" during the simulation to achieve rapid convergence. Accelerated tunneling transitions with a comprehensive sampling for phase-coexistent states are explicitly demonstrated in systems subject to strong hysteresis including Potts and Ising spin models and a 147 atom Lennard-Jones cluster.

  16. GENERAL REQUIREMENTS FOR SIMULATION MODELS IN WASTE MANAGEMENT

    SciTech Connect

    Miller, Ian; Kossik, Rick; Voss, Charlie

    2003-02-27

    Most waste management activities are decided upon and carried out in a public or semi-public arena, typically involving the waste management organization, one or more regulators, and often other stakeholders and members of the public. In these environments, simulation modeling can be a powerful tool in reaching a consensus on the best path forward, but only if the models that are developed are understood and accepted by all of the parties involved. These requirements for understanding and acceptance of the models constrain the appropriate software and model development procedures that are employed. This paper discusses requirements for both simulation software and for the models that are developed using the software. Requirements for the software include transparency, accessibility, flexibility, extensibility, quality assurance, ability to do discrete and/or continuous simulation, and efficiency. Requirements for the models that are developed include traceability, transparency, credibility/validity, and quality control. The paper discusses these requirements with specific reference to the requirements for performance assessment models that are used for predicting the long-term safety of waste disposal facilities, such as the proposed Yucca Mountain repository.

  17. 2 Gbit/s 0.5 μm complementary metal-oxide semiconductor optical transceiver with event-driven dynamic power-on capability

    NASA Astrophysics Data System (ADS)

    Wang, Xingle; Kiamilev, Fouad; Gui, Ping; Wang, Xiaoqing; Ekman, Jeremy; Zuo, Yongrong; Blankenberg, Jason; Haney, Michael

    2006-06-01

    A 2 Gb/s0.5 μm complementary metal-oxide semiconductor optical transceiver designed for board- or backplane level power-efficient interconnections is presented. The transceiver supports optical wake-on-link (OWL), an event-driven dynamic power-on technique. Depending on external events, the transceiver resides in either the active mode or the sleep mode and switches accordingly. The active-to-sleep transition shuts off the normal, gigabit link and turns on dedicated circuits to establish a low-power (~1.8 mW), low data rate (less than 100 Mbits/s) link. In contrast the normal, gigabit link consumes over 100 mW. Similarly the sleep-to-active transition shuts off the low-power link and turns on the normal, gigabit link. The low-power link, sharing the same optical channel with the normal, gigabit link, is used to achieve transmitter/receiver pair power-on synchronization and greatly reduces the power consumption of the transceiver. A free-space optical platform was built to evaluate the transceiver performance. The experiment successfully demonstrated the event-driven dynamic power-on operation. To our knowledge, this is the first time a dynamic power-on scheme has been implemented for optical interconnects. The areas of the circuits that implement the low-power link are approximately one-tenth of the areas of the gigabit link circuits.

  18. Projectile general motion in a vacuum and a spreadsheet simulation

    NASA Astrophysics Data System (ADS)

    Benacka, Jan

    2015-01-01

    This paper gives the solution and analysis of projectile motion in a vacuum if the launch and impact heights are not equal. Formulas for the maximum horizontal range and the corresponding angle are derived. An Excel application that simulates the motion is also presented, and the result of an experiment in which 38 secondary school students developed the application and investigated the system is given. A questionnaire survey was carried out to find out whether the students found the lessons interesting, learned new skills and wanted to model projectile motion in the air as an example of more realistic motion. The results are discussed.

  19. A General Simulation Method for Multiple Bodies in Proximate Flight

    NASA Technical Reports Server (NTRS)

    Meakin, Robert L.

    2003-01-01

    Methods of unsteady aerodynamic simulation for an arbitrary number of independent bodies flying in close proximity are considered. A novel method to efficiently detect collision contact points is described. A method to compute body trajectories in response to aerodynamic loads, applied loads, and inter-body collisions is also given. The physical correctness of the methods are verified by comparison to a set of analytic solutions. The methods, combined with a Navier-Stokes solver, are used to demonstrate the possibility of predicting the unsteady aerodynamics and flight trajectories of moving bodies that involve rigid-body collisions.

  20. BIRD: A general interface for sparse distributed memory simulators

    NASA Technical Reports Server (NTRS)

    Rogers, David

    1990-01-01

    Kanerva's sparse distributed memory (SDM) has now been implemented for at least six different computers, including SUN3 workstations, the Apple Macintosh, and the Connection Machine. A common interface for input of commands would both aid testing of programs on a broad range of computer architectures and assist users in transferring results from research environments to applications. A common interface also allows secondary programs to generate command sequences for a sparse distributed memory, which may then be executed on the appropriate hardware. The BIRD program is an attempt to create such an interface. Simplifying access to different simulators should assist developers in finding appropriate uses for SDM.

  1. Plasma Jet Simulations Using a Generalized Ohm's Law

    NASA Technical Reports Server (NTRS)

    Ebersohn, Frans; Shebalin, John V.; Girimaji, Sharath S.

    2012-01-01

    Plasma jets are important physical phenomena in astrophysics and plasma propulsion devices. A currently proposed dual jet plasma propulsion device to be used for ISS experiments strongly resembles a coronal loop and further draws a parallel between these physical systems [1]. To study plasma jets we use numerical methods that solve the compressible MHD equations using the generalized Ohm s law [2]. Here, we will discuss the crucial underlying physics of these systems along with the numerical procedures we utilize to study them. Recent results from our numerical experiments will be presented and discussed.

  2. A Generalized Computer Simulation Language for Naval Systems Modeling.

    DTIC Science & Technology

    1981-06-30

    dependent variables change discretely at specified points in simulated time; for example, when a mine detonates , the number of mines in a minefield is...RETURN 1725. XrT=XL*DT 1726. REST (L) =RSET (L) + XDT 1727. RSET(LP1) =RSET(LP1) +XL* XDT 1728. RSET(LP5)=TNOW 1729. IF(NU.LE.0) RETURN 1730. WRITE(NU...TN0W-BSET(LP5) 756. IP(DT.LE.0) RETURN 1757. XDT = (XL+X) ’DT/2.I758. X2DT=(XL*XLsXX)*DT/2. 759. R!FT (L) =RSET (L) + XDT 1760. RSET (LP 1) RSET (LP1

  3. 3-D General Relativistic MHD Simulations of Generating Jets

    NASA Astrophysics Data System (ADS)

    Nishikawa, K.-I.; Koide, S.; Shibata, K.; Kudoh, T.; Sol, H.; Hughes, J. P.

    2001-12-01

    We have investigated the dynamics of an accretion disk around Schwarzschild black holes initially threaded by a uniform poloidal magnetic field in a non-rotating corona (either in a steady-state infalling state) around a non-rotating black hole using a 3-D GRMHD with the ``axisymmetry'' along the z-direction. Magnetic field is tightly twisted by the rotation of the disk, and plasmas in the shocked region of the disk are accelerated by J x B force to form bipolar relativistic jets. In order to investigate variabilities of generated relativistic jets and magnetic field structure inside jets, we have performed calculations using the 3-D GRMHD code with a full 3-dimensional system without the axisymmetry. We have investigated how the third dimension affects the global disk dynamics and jet generation. We will perform simulations with various incoming flows from an accompanying star.

  4. 3-D General Relativistic MHD Simulations of Generating Jets

    NASA Astrophysics Data System (ADS)

    Nishikawa, K.-I.; Koide, S.; Shibata, K.; Kudoh, T.; Frank, J.; Sol, H.

    1999-05-01

    Koide et al have investigated the dynamics of an accretion disk initially threaded by a uniform poloidal magnetic field in a non-rotating corona (either in a steady-state infalling state or in hydrostatic equilibrium) around a non-rotating black hole using a 3-D GRMHD with the ``axisymmetry'' along the z-direction. Magnetic field is tightly twisted by the rotation of the disk, and plasmas in the shocked region of the disk are accelerated by J x B force to form bipolar relativistic jets. In order to investigate variabilities of generated relativistic jets and magnetic field structure inside jets, we have performed calculations using the 3-D GRMHD code on a full 3-dimensional system. We will investigate how the third dimension affects the global disk dynamics. 3-D RMHD simulations wil be also performed to investigate the dynamics of a jet with a helical mangetic field in it.

  5. Jet Formation with 3-D General Relativistic MHD Simulations

    NASA Astrophysics Data System (ADS)

    Richardson, G. A.; Nishikawa, K.-I.; Preece, R.; Hardee, P.; Koide, S.; Shibata, K.; Kudoh, T.; Sol, H.; Hughes, J. P.; Fishman, J.

    2002-12-01

    We have investigated the dynamics of an accretion disk around Schwarzschild black holes initially threaded by a uniform poloidal magnetic field in a non-rotating corona (in a steady-state infalling state) around a non-rotating black hole using 3-D GRMHD with the ``axisymmetry'' along the z-direction. The magnetic field is tightly twisted by the rotation of the accretion disk, and plasmas in the shocked region of the disk are accelerated by the J x B force to form bipolar relativistic jets. In order to investigate variabilities of generated relativistic jets and the magnetic field structure inside jets, we have performed calculations using the 3-D GRMHD code with a full 3-dimensional system without the axisymmetry. We have investigated how the third dimension affects the global disk dynamics and jet generation. We will perform simulations with various incoming flows from an accompanying star.

  6. 3-D General Relativistic MHD Simulations of Generating Jets

    NASA Astrophysics Data System (ADS)

    Nishikawa, Ken-Ichi; Koide, Shinji; Shibata, Kazunari; Kudoh, Takashiro; Sol, Helene; Hughes, John

    2002-04-01

    We have investigated the dynamics of an accretion disk around Schwarzschild black holes initially threaded by a uniform poloidal magnetic field in a non-rotating corona (either in a steady-state infalling state) around a non-rotating black hole using a 3-D GRMHD with the ``axisymmetry'' along the z-direction. Magnetic field is tightly twisted by the rotation of the disk, and plasmas in the shocked region of the disk are accelerated by J × B force to form bipolar relativistic jets. In order to investigate variabilities of generated relativistic jets and magnetic field structure inside jets, we have performed calculations using the 3-D GRMHD code with a full 3-dimensional system without the axisymmetry. We have investigated how the third dimension affects the global disk dynamics and jet generation. We will perform simulations with various incoming flows from an accompanying star.

  7. GOOSE, a generalized object-oriented simulation environment

    SciTech Connect

    Ford, C.E.; March-Leuba, C. ); Guimaraes, L.; Ugolini, D. . Dept. of Nuclear Engineering)

    1991-01-01

    GOOSE, prototype software for a fully interactive, object-oriented simulation environment, is being developed as part of the Advanced Controls Program at Oak Ridge National Laboratory. Dynamic models may easily be constructed and tested; fully interactive capabilities allow the user to alter model parameters and complexity without recompilation. This environment provides access to powerful tools, such as numerical integration packages, graphical displays, and online help. Portability has been an important design goal; the system was written in Objective-C in order to run on a wide variety of computers and operating systems, including UNIX workstations and personal computers. A detailed library of nuclear reactor components, currently under development, will also be described. 5 refs., 4 figs.

  8. Scaling of asymmetric magnetic reconnection: General theory and collisional simulations

    SciTech Connect

    Cassak, P. A.; Shay, M. A.

    2007-10-15

    A Sweet-Parker-type scaling analysis for asymmetric antiparallel reconnection (in which the reconnecting magnetic field strengths and plasma densities are different on opposite sides of the dissipation region) is performed. Scaling laws for the reconnection rate, outflow speed, the density of the outflow, and the structure of the dissipation region are derived from first principles. These results are independent of the dissipation mechanism. It is shown that a generic feature of asymmetric reconnection is that the X-line and stagnation point are not colocated, leading to a bulk flow of plasma across the X-line. The scaling laws are verified using two-dimensional resistive magnetohydrodynamics numerical simulations for the special case of asymmetric magnetic fields with symmetric density. Observational signatures and applications to reconnection in the magnetosphere are discussed.

  9. Parametrizing linear generalized Langevin dynamics from explicit molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Gottwald, Fabian; Karsten, Sven; Ivanov, Sergei D.; Kühn, Oliver

    2015-06-01

    Fundamental understanding of complex dynamics in many-particle systems on the atomistic level is of utmost importance. Often the systems of interest are of macroscopic size but can be partitioned into a few important degrees of freedom which are treated most accurately and others which constitute a thermal bath. Particular attention in this respect attracts the linear generalized Langevin equation, which can be rigorously derived by means of a linear projection technique. Within this framework, a complicated interaction with the bath can be reduced to a single memory kernel. This memory kernel in turn is parametrized for a particular system studied, usually by means of time-domain methods based on explicit molecular dynamics data. Here, we discuss that this task is more naturally achieved in frequency domain and develop a Fourier-based parametrization method that outperforms its time-domain analogues. Very surprisingly, the widely used rigid bond method turns out to be inappropriate in general. Importantly, we show that the rigid bond approach leads to a systematic overestimation of relaxation times, unless the system under study consists of a harmonic bath bi-linearly coupled to the relevant degrees of freedom.

  10. Plasma Jet Simulations Using a Generalized Ohm's Law

    NASA Astrophysics Data System (ADS)

    Ebersohn, F.; Shebalin, J. V.; Girimaji, S. S.

    2012-12-01

    Plasma jets are important physical phenomena in astrophysics and plasma propulsion devices. A currently proposed dual jet plasma propulsion device to be used for ISS experiments strongly resembles a coronal loop and further draws a parallel between these physical systems [1]. To study plasma jets we use numerical methods which solve the compressible MHD equations using the generalized Ohm's law[2]. Herein we discuss the crucial underlying physics of these systems along with the numerical procedures we utilize to study them. Recent results from our numerical experiments will be presented and discussed. [1] T. Glover, et al., The VASIMR® VF-200-1 ISS Experiment as a Laboratory for Astrophysics, Poster SM51C-1831, AGU Fall Meeting, San Francisco, December 13-17, 2010. [2] F. Ebersohn, J. V Shebalin, S. Girimaji and D. Staack, Magnetic Field Effects on Plasma Plumes, Paper O2-404, 39th EPS Conference on Plasma Physics, Stockholm, July 2-6, 2012.;

  11. Simulating extreme-mass-ratio systems in full general relativity

    NASA Astrophysics Data System (ADS)

    East, William E.; Pretorius, Frans

    2013-05-01

    We introduce a new method for numerically evolving the full Einstein field equations in situations where the spacetime is dominated by a known background solution. The technique leverages the knowledge of the background solution to subtract off its contribution to the truncation error, thereby more efficiently achieving a desired level of accuracy. We demonstrate the method by applying it to the radial infall of a solar-type star into supermassive black holes with mass ratios ≥106. The self-gravity of the star is thus consistently modeled within the context of general relativity, and the star’s interaction with the black hole computed with moderate computational cost, despite the over five orders of magnitude difference in gravitational potential (as defined by the ratio of mass to radius). We compute the tidal deformation of the star during infall, and the gravitational wave emission, finding the latter is close to the prediction of the point-particle limit.

  12. General relativistic magnetohydrodynamical simulations of the jet in M 87

    NASA Astrophysics Data System (ADS)

    Mościbrodzka, Monika; Falcke, Heino; Shiokawa, Hotaka

    2016-02-01

    Context. The connection between black hole, accretion disk, and radio jet can be constrained best by fitting models to observations of nearby low-luminosity galactic nuclei, in particular the well-studied sources Sgr A* and M 87. There has been considerable progress in modeling the central engine of active galactic nuclei by an accreting supermassive black hole coupled to a relativistic plasma jet. However, can a single model be applied to a range of black hole masses and accretion rates? Aims: Here we want to compare the latest three-dimensional numerical model, originally developed for Sgr A* in the center of the Milky Way, to radio observations of the much more powerful and more massive black hole in M 87. Methods: We postprocess three-dimensional GRMHD models of a jet-producing radiatively inefficient accretion flow around a spinning black hole using relativistic radiative transfer and ray-tracing to produce model spectra and images. As a key new ingredient in these models, we allow the proton-electron coupling in these simulations depend on the magnetic properties of the plasma. Results: We find that the radio emission in M 87 is described well by a combination of a two-temperature accretion flow and a hot single-temperature jet. Most of the radio emission in our simulations comes from the jet sheath. The model fits the basic observed characteristics of the M 87 radio core: it is "edge-brightened", starts subluminally, has a flat spectrum, and increases in size with wavelength. The best fit model has a mass-accretion rate of Ṁ ~ 9 × 10-3M⊙ yr-1 and a total jet power of Pj ~ 1043 erg s-1. Emission at λ = 1.3 mm is produced by the counter-jet close to the event horizon. Its characteristic crescent shape surrounding the black hole shadow could be resolved by future millimeter-wave VLBI experiments. Conclusions: The model was successfully derived from one for the supermassive black hole in the center of the Milky Way by appropriately scaling mass and

  13. Event-Driven Collaboration through Publish/Subscribe Messaging Services for Near-Real- Time Environmental Sensor Anomaly Detection and Management

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Downey, S.; Minsker, B.; Myers, J. D.; Wentling, T.; Marini, L.

    2006-12-01

    One of the challenges in designing cyberinfrastructure for national environmental observatories is how to provide integrated cyberenvironment which not only provides a standardized pipeline for streaming data from sensors into the observatory for archiving and distribution, but also makes raw data and identified events available in real-time for use in individual and group research efforts. This aspect of observatories is critical for promoting efficient collaboration and innovation among scientists and engineers and enabling observatories to serve as a focus that directly supports the broad community. The National Center for Supercomputing Applications' Environmental Cyberinfrastructure Demo (ECID) project has adopted an event-driven architecture and developed a CyberCollaboratory to facilitate event-driven, near-real-time collaboration and management of sensors and workflows for bringing data from environmental observatories into local research contexts. The CyberCollaboratory's event broker uses publish-subscribe service powered by JMS (Java Messaging Service) with semantics-enhanced messages using RDF (Resource Description Framework) triples to allow exchange of contextual information about the event between the event generators and the event consumers. Non-scheduled, event-driven collaboration effectively reduces the barrier to collaboration for scientists and engineers and promotes much faster turn-around time for critical environmental events. This is especially useful for real-time adaptive monitoring and modeling of sensor data in environmental observatories. In this presentation, we illustrate our system using a sensor anomaly detection event as an example where near-real- time data streams from field sensor in Corpus Christi Bay, Texas, trigger monitoring/anomaly alerts in the CyberCollaboratory's CyberDashboard and collaborative activities in the CyberCollaboratory. The CyberDashboard is a Java application where users can monitor various events

  14. A generalized framework for interactive dynamic simulation for MultiRigid bodies.

    PubMed

    Son, Wookho; Kim, Kyunghwan; Amato, Nancy M; Trinkle, Jeffrey C

    2004-04-01

    This paper presents a generalized framework for dynamic simulation realized in a prototype simulator called the Interactive Generalized Motion Simulator (I-GMS), which can simulate motions of multirigid-body systems with contact interaction in virtual environments. I-GMS is designed to meet two important goals: generality and interactivity. By generality, we mean a dynamic simulator which can easily support various systems of rigid bodies, ranging from a single free-flying rigid object to complex linkages such as those needed for robotic systems or human body simulation. To provide this generality, we have developed I-GMS in an object-oriented framework. The user interactivity is supported through a haptic interface for articulated bodies, introducing interactive dynamic simulation schemes. This user-interaction is achieved by performing push and pull operations via the PHANToM haptic device, which runs as an integrated part of I-GMS. Also, a hybrid scheme was used for simulating internal contacts (between bodies in the multirigid-body system) in the presence of friction, which could avoid the nonexistent solution problem often faced when solving contact problems with Coulomb friction. In our hybrid scheme, two impulse-based methods are exploited so that different methods are applied adaptively, depending on whether the current contact situation is characterized as "bouncing" or "steady." We demonstrate the user-interaction capability of I-GMS through on-line editing of trajectories of a 6-degree of freedom (dof) articulated structure.

  15. No Vent Tank Fill and Transfer Line Chilldown Analysis by Generalized Fluid System Simulation Program (GFSSP)

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok

    2013-01-01

    The purpose of the paper is to present the analytical capability developed to model no vent chill and fill of cryogenic tank to support CPST (Cryogenic Propellant Storage and Transfer) program. Generalized Fluid System Simulation Program (GFSSP) was adapted to simulate charge-holdvent method of Tank Chilldown. GFSSP models were developed to simulate chilldown of LH2 tank in K-site Test Facility and numerical predictions were compared with test data. The report also describes the modeling technique of simulating the chilldown of a cryogenic transfer line and GFSSP models were developed to simulate the chilldown of a long transfer line and compared with test data.

  16. Instructor and student pilots' subjective evaluation of a general aviation simulator with a terrain visual system

    NASA Technical Reports Server (NTRS)

    Kiteley, G. W.; Harris, R. L., Sr.

    1978-01-01

    Ten student pilots were given a 1 hour training session in the NASA Langley Research Center's General Aviation Simulator by a certified flight instructor and a follow-up flight evaluation was performed by the student's own flight instructor, who has also flown the simulator. The students and instructors generally felt that the simulator session had a positive effect on the students. They recommended that a simulator with a visual scene and a motion base would be useful in performing such maneuvers as: landing approaches, level flight, climbs, dives, turns, instrument work, and radio navigation, recommending that the simulator would be an efficient means of introducing the student to new maneuvers before doing them in flight. The students and instructors estimated that about 8 hours of simulator time could be profitably devoted to the private pilot training.

  17. General specifications for the development of a PC-based simulator of the NASA RECON system

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Triantafyllopoulos, Spiros

    1984-01-01

    The general specifications for the design and implementation of an IBM PC/XT-based simulator of the NASA RECON system, including record designs, file structure designs, command language analysis, program design issues, error recovery considerations, and usage monitoring facilities are discussed. Once implemented, such a simulator will be utilized to evaluate the effectiveness of simulated information system access in addition to actual system usage as part of the total educational programs being developed within the NASA contract.

  18. A General Simulator Using State Estimation for a Space Tug Navigation System. [computerized simulation, orbital position estimation and flight mechanics

    NASA Technical Reports Server (NTRS)

    Boland, J. S., III

    1975-01-01

    A general simulation program is presented (GSP) involving nonlinear state estimation for space vehicle flight navigation systems. A complete explanation of the iterative guidance mode guidance law, derivation of the dynamics, coordinate frames, and state estimation routines are given so as to fully clarify the assumptions and approximations involved so that simulation results can be placed in their proper perspective. A complete set of computer acronyms and their definitions as well as explanations of the subroutines used in the GSP simulator are included. To facilitate input/output, a complete set of compatable numbers, with units, are included to aid in data development. Format specifications, output data phrase meanings and purposes, and computer card data input are clearly spelled out. A large number of simulation and analytical studies were used to determine the validity of the simulator itself as well as various data runs.

  19. General purpose simulation system of the data management system for space shuttle mission 18

    NASA Technical Reports Server (NTRS)

    Bengtson, N. M.; Mellichamp, J. M.; Crenshaw, J.

    1975-01-01

    The simulation program of the science and engineering data management system for the space shuttle is presented. The programming language used was General Purpose Simulation System V (OS). The data flow was modeled from its origin at the experiments or subsystems to transmission from the space shuttle. Mission 18 was the particular flight chosen for simulation. First, the general structure of the program is presented and the trade studies which were performed are identified. Inputs required to make runs are discussed followed by identification of the output statistics. Some areas for model modifications are pointed out. A detailed model configuration, program listing and results are included.

  20. A Multi-mission Event-Driven Component-Based System for Support of Flight Software Development, ATLO, and Operations first used by the Mars Science Laboratory (MSL) Project

    NASA Technical Reports Server (NTRS)

    Dehghani, Navid; Tankenson, Michael

    2006-01-01

    This paper details an architectural description of the Mission Data Processing and Control System (MPCS), an event-driven, multi-mission ground data processing components providing uplink, downlink, and data management capabilities which will support the Mars Science Laboratory (MSL) project as its first target mission. MPCS is developed based on a set of small reusable components, implemented in Java, each designed with a specific function and well-defined interfaces. An industry standard messaging bus is used to transfer information among system components. Components generate standard messages which are used to capture system information, as well as triggers to support the event-driven architecture of the system. Event-driven systems are highly desirable for processing high-rate telemetry (science and engineering) data, and for supporting automation for many mission operations processes.

  1. A general CellML simulation code generator using ODE solving scheme description.

    PubMed

    Amano, Akira; Soejima, Naoki; Shimayoshi, Takao; Kuwabara, Hiroaki; Kunieda, Yoshitoshi

    2011-01-01

    To cope with the complexity of the biological function simulation models, model representation with description language is becoming popular. However, simulation software itself becomes complex in these environment, thus, it is difficult to modify target computation resources or numerical calculation methods or simulation conditions. Typical biological function simulation software consists of 1) model equation, 2) boundary conditions and 3) ODE solving scheme. Introducing the description model file such as CellML is useful for generalizing the first point and partly second point, however, third point is difficult to handle. We introduce a simulation software generation system which use markup language based description of ODE solving scheme together with cell model description file. By using this software, we can easily generate biological simulation program code with different ODE solving schemes. To show the efficiency of our system, experimental results of several simulation models with different ODE scheme and different computation resources are shown.

  2. Generalized image charge solvation model for electrostatic interactions in molecular dynamics simulations of aqueous solutions

    NASA Astrophysics Data System (ADS)

    Deng, Shaozhong; Xue, Changfeng; Baumketner, Andriy; Jacobs, Donald; Cai, Wei

    2013-07-01

    This paper extends the image charge solvation model (ICSM) [Y. Lin, A. Baumketner, S. Deng, Z. Xu, D. Jacobs, W. Cai, An image-based reaction field method for electrostatic interactions in molecular dynamics simulations of aqueous solutions, J. Chem. Phys. 131 (2009) 154103], a hybrid explicit/implicit method to treat electrostatic interactions in computer simulations of biomolecules formulated for spherical cavities, to prolate spheroidal and triaxial ellipsoidal cavities, designed to better accommodate non-spherical solutes in molecular dynamics (MD) simulations. In addition to the utilization of a general truncated octahedron as the MD simulation box, central to the proposed extension is an image approximation method to compute the reaction field for a point charge placed inside such a non-spherical cavity by using a single image charge located outside the cavity. The resulting generalized image charge solvation model (GICSM) is tested in simulations of liquid water, and the results are analyzed in comparison with those obtained from the ICSM simulations as a reference. We find that, for improved computational efficiency due to smaller simulation cells and consequently a less number of explicit solvent molecules, the generalized model can still faithfully reproduce known static and dynamic properties of liquid water at least for systems considered in the present paper, indicating its great potential to become an accurate but more efficient alternative to the ICSM when bio-macromolecules of irregular shapes are to be simulated.

  3. Simulation of the great plains low-level jet and associated clouds by general circulation models

    SciTech Connect

    Ghan, S.J.; Bian, X.; Corsetti, L.

    1996-07-01

    The low-level jet frequently observed in the Great Plains of the United States forms preferentially at night and apparently influences the timing of the thunderstorms in the region. The authors have found that both the European Centre for Medium-Range Weather Forecasts general circulation model and the National Center for Atmospheric Research Community Climate Model simulate the low-level jet rather well, although the spatial distribution of the jet frequency simulated by the two GCM`s differ considerably. Sensitivity experiments have demonstrated that the simulated low-level jet is surprisingly robust, with similar simulations at much coarser horizontal and vertical resolutions. However, both GCM`s fail to simulate the observed relationship between clouds and the low-level jet. The pronounced nocturnal maximum in thunderstorm frequency associated with the low-level jet is not simulated well by either GCM, with only weak evidence of a nocturnal maximum in the Great Plains. 36 refs., 20 figs.

  4. The Generalized Onsager Model and DSMC Simulations of High-Speed Rotating Flow with Swirling Feed

    NASA Astrophysics Data System (ADS)

    Pradhan, Sahadev

    2017-01-01

    The generalized Onsager model for the radial boundary layer and of the generalized Carrier-Maslen model for the axial boundary layer at the end-caps in a high-speed rotating cylinder, are extended to incorporate the angular momentum of the feed gas for a swirling feed for single component gas and binary gas mixture. For a single component gas, the analytical solutions are obtained for the sixth-order generalized Onsager equations for the master potential, and for the fourth-order generalized Carrier-Maslen equation for the velocity potential. In both cases, the equations are linearized in the perturbation to the base flow, which is a solid-body rotation. The equations are restricted to the limit of high Reynolds number and (length/radius) ratio, but there is no limitation on the stratification parameter. The linear operators in the generalized Onsager and generalized Carrier-Maslen equations with swirling feed are still self-adjoint, and so the eigenfunctions form a complete orthogonal basis set. The analytical solutions are compared with direct simulation Monte Carlo (DSMC) simulations. The comparison reveals that the boundary conditions in the simulations and analysis have to be matched with care. When these precautions are taken, there is excellent agreement between analysis and simulations, to within 15%.

  5. The Generalized Onsager Model and DSMC Simulations of High-Speed Rotating Flow with Swirling Feed

    NASA Astrophysics Data System (ADS)

    Pradhan, Sahadev

    2016-09-01

    The generalized Onsager model for the radial boundary layer and of the generalized Carrier-Maslen model for the axial boundary layer at the end-caps in a high-speed rotating cylinder, are extended to incorporate the angular momentum of the feed gas for a swirling feed for single component gas and binary gas mixture. For a single component gas, the analytical solutions are obtained for the sixth-order generalized Onsager equations for the master potential, and for the fourth-order generalized Carrier-Maslen equation for the velocity potential. In both cases, the equations are linearized in the perturbation to the base flow, which is a solid-body rotation. The equations are restricted to the limit of high Reynolds number and (length/radius) ratio, but there is no limitation on the stratification parameter. The linear operators in the generalized Onsager and generalized Carrier-Maslen equations with swirling feed are still self-adjoint, and so the eigenfunctions form a complete orthogonal basis set. The analytical solutions are compared with direct simulation Monte Carlo (DSMC) simulations. The comparison reveals that the boundary conditions in the simulations and analysis have to be matched with care. When these precautions are taken, there is excellent agreement between analysis and simulations, to within 15%.

  6. The Generalized Onsager Model and DSMC Simulations of High-Speed Rotating Flow with Swirling Feed

    NASA Astrophysics Data System (ADS)

    Pradhan, Sahadev, , Dr.

    2016-11-01

    The generalized Onsager model for the radial boundary layer and of the generalized Carrier-Maslen model for the axial boundary layer at the end-caps in a high-speed rotating cylinder, are extended to incorporate the angular momentum of the feed gas for a swirling feed for single component gas and binary gas mixture. For a single component gas, the analytical solutions are obtained for the sixth-order generalized Onsager equations for the master potential, and for the fourth-order generalized Carrier-Maslen equation for the velocity potential. In both cases, the equations are linearized in the perturbation to the base flow, which is a solid-body rotation. The equations are restricted to the limit of high Reynolds number and (length/radius) ratio, but there is no limitation on the stratification parameter. The linear operators in the generalized Onsager and generalized Carrier-Maslen equations with swirling feed are still self-adjoint, and so the eigenfunctions form a complete orthogonal basis set. The analytical solutions are compared with direct simulation Monte Carlo (DSMC) simulations. The comparison reveals that the boundary conditions in the simulations and analysis have to be matched with care. When these precautions are taken, there is excellent agreement between analysis and simulations, to within 15%.

  7. General Relativistic Radiative Transfer and GeneralRelativistic MHD Simulations of Accretion and Outflows of Black Holes

    SciTech Connect

    Fuerst, Steven V.; Mizuno, Yosuke; Nishikawa, Ken-Ichi; Wu, Kinwah; /Mullard Space Sci. Lab.

    2007-01-05

    We calculate the emission from relativistic flows in black hole systems using a fully general relativistic radiative transfer formulation, with flow structures obtained by general relativistic magneto-hydrodynamic simulations. We consider thermal free-free emission and thermal synchrotron emission. Bright filament-like features protrude (visually) from the accretion disk surface, which are enhancements of synchrotron emission where the magnetic field roughly aligns with the line-of-sight in the co-moving frame. The features move back and forth as the accretion flow evolves, but their visibility and morphology are robust. We propose that variations and drifts of the features produce certain X-ray quasi-periodic oscillations (QPOs) observed in black-hole X-ray binaries.

  8. General purpose simulation system of the data management system for Space Shuttle mission 18

    NASA Technical Reports Server (NTRS)

    Bengtson, N. M.; Mellichamp, J. M.; Smith, O. C.

    1976-01-01

    A simulation program for the flow of data through the Data Management System of Spacelab and Space Shuttle was presented. The science, engineering, command and guidance, navigation and control data were included. The programming language used was General Purpose Simulation System V (OS). The science and engineering data flow was modeled from its origin at the experiments and subsystems to transmission from Space Shuttle. Command data flow was modeled from the point of reception onboard and from the CDMS Control Panel to the experiments and subsystems. The GN&C data flow model handled data between the General Purpose Computer and the experiments and subsystems. Mission 18 was the particular flight chosen for simulation. The general structure of the program is presented, followed by a user's manual. Input data required to make runs are discussed followed by identification of the output statistics. The appendices contain a detailed model configuration, program listing and results.

  9. General circulation model simulations of winter and summer sea-level pressures over North America

    USGS Publications Warehouse

    McCabe, G.J.; Legates, D.R.

    1992-01-01

    In this paper, observed sea-level pressures were used to evaluate winter and summer sea-level pressures over North America simulated by the Goddard Institute for Space Studies (GISS) and the Geophysical Fluid Dynamics Laboratory (GFDL) general circulation models. The objective of the study is to determine how similar the spatial and temporal distributions of GCM-simulated daily sea-level pressures over North America are to observed distributions. Overall, both models are better at reproducing observed within-season variance of winter and summer sea-level pressures than they are at simulating the magnitude of mean winter and summer sea-level pressures. -from Authors

  10. Generalized source method in curvilinear coordinates for 2D grating diffraction simulation

    NASA Astrophysics Data System (ADS)

    Shcherbakov, Alexey A.; Tishchenko, Alexandre V.

    2017-01-01

    The article presents a curvilinear coordinate Fourier space integral method for linear optical rigorous grating diffraction simulation in 3D (crossed grating diffraction). The presented formulation extends our previous work on a related method for 1D periodic grating diffraction. Following this previous work we exploit a concept of the generalized metric sources to efficiently solve the Maxwell's equations. The article provides a general description of the method together with a detailed formulation and analysis of sinusoidal corrugation crossed grating diffraction.

  11. The General-Use Nodal Network Solver (GUNNS) Modeling Package for Space Vehicle Flow System Simulation

    NASA Technical Reports Server (NTRS)

    Harvey, Jason; Moore, Michael

    2013-01-01

    The General-Use Nodal Network Solver (GUNNS) is a modeling software package that combines nodal analysis and the hydraulic-electric analogy to simulate fluid, electrical, and thermal flow systems. GUNNS is developed by L-3 Communications under the TS21 (Training Systems for the 21st Century) project for NASA Johnson Space Center (JSC), primarily for use in space vehicle training simulators at JSC. It has sufficient compactness and fidelity to model the fluid, electrical, and thermal aspects of space vehicles in real-time simulations running on commodity workstations, for vehicle crew and flight controller training. It has a reusable and flexible component and system design, and a Graphical User Interface (GUI), providing capability for rapid GUI-based simulator development, ease of maintenance, and associated cost savings. GUNNS is optimized for NASA's Trick simulation environment, but can be run independently of Trick.

  12. Development and evaluation of a general aviation real world noise simulator

    NASA Technical Reports Server (NTRS)

    Galanter, E.; Popper, R.

    1980-01-01

    An acoustic playback system is described which realistically simulates the sounds experienced by the pilot of a general aviation aircraft during engine idle, take-off, climb, cruise, descent, and landing. The physical parameters of the signal as they appear in the simulator environment are compared to analogous parameters derived from signals recorded during actual flight operations. The acoustic parameters of the simulated and real signals during cruise conditions are within plus or minus two dB in third octave bands from 0.04 to 4 kHz. The overall A-weighted levels of the signals are within one dB of signals generated in the actual aircraft during equivalent maneuvers. Psychoacoustic evaluations of the simulator signal are compared with similar measurements based on transcriptions of actual aircraft signals. The subjective judgments made by human observers support the conclusion that the simulated sound closely approximates transcribed sounds of real aircraft.

  13. Generalized Canonical Correlation Analysis of Matrices with Missing Rows: A Simulation Study

    ERIC Educational Resources Information Center

    van de Velden, Michel; Bijmolt, Tammo H. A.

    2006-01-01

    A method is presented for generalized canonical correlation analysis of two or more matrices with missing rows. The method is a combination of Carroll's (1968) method and the missing data approach of the OVERALS technique (Van der Burg, 1988). In a simulation study we assess the performance of the method and compare it to an existing procedure…

  14. Estimating plant available water for general crop simulations in ALMANAC/APEX/EPIC/SWAT

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Process-based simulation models ALMANAC/APEX/EPIC/SWAT contain generalized plant growth subroutines to predict biomass and crop yield. Environmental constraints typically restrict plant growth and yield. Water stress is often an important limiting factor; it is calculated as the sum of water use f...

  15. Computer considerations for real time simulation of a generalized rotor model

    NASA Technical Reports Server (NTRS)

    Howe, R. M.; Fogarty, L. E.

    1977-01-01

    Scaled equations were developed to meet requirements for real time computer simulation of the rotor system research aircraft. These equations form the basis for consideration of both digital and hybrid mechanization for real time simulation. For all digital simulation estimates of the required speed in terms of equivalent operations per second are developed based on the complexity of the equations and the required intergration frame rates. For both conventional hybrid simulation and hybrid simulation using time-shared analog elements the amount of required equipment is estimated along with a consideration of the dynamic errors. Conventional hybrid mechanization using analog simulation of those rotor equations which involve rotor-spin frequencies (this consititutes the bulk of the equations) requires too much analog equipment. Hybrid simulation using time-sharing techniques for the analog elements appears possible with a reasonable amount of analog equipment. All-digital simulation with affordable general-purpose computers is not possible because of speed limitations, but specially configured digital computers do have the required speed and consitute the recommended approach.

  16. Energetics analysis of the observed and simulated general circulation using three-dimensional normal mode expansions

    NASA Technical Reports Server (NTRS)

    Tanaka, Hiroshi; Kung, Ernest C.; Baker, Wayman E.

    1986-01-01

    The energetics characteristics of the observed and simulated general circulation are analyzed using three-dimensional normal mode expansions. The data sets involved are the Goddard Laboratory for Atmospheres (GLA) analysis and simulation data and the Geophysical Fluid Dynamics Laboratory (GFDL) analysis data. The spectral energy properties of the Rossby and gravity modes and energy transformations are presented. Significant influences of model characteristics and the assimilation techniques are observed in the barotropic energy spectrum, particularly for the gravity mode. Energy transformations of the zonal mean field in the GLA analysis and simulation are similar, but distinctly different from that in the GFDL analysis. However, overall, the energy generation in the baroclinic mode is largely balanced by the sink in the barotropic mode. The present study may demonstrate utilities of the three-dimensional normal mode energetics in the analysis of the general circulation.

  17. On the performance of voltage stepping for the simulation of adaptive, nonlinear integrate-and-fire neuronal networks.

    PubMed

    Kaabi, Mohamed Ghaith; Tonnelier, Arnaud; Martinez, Dominique

    2011-05-01

    In traditional event-driven strategies, spike timings are analytically given or calculated with arbitrary precision (up to machine precision). Exact computation is possible only for simplified neuron models, mainly the leaky integrate-and-fire model. In a recent paper, Zheng, Tonnelier, and Martinez (2009) introduced an approximate event-driven strategy, named voltage stepping, that allows the generic simulation of nonlinear spiking neurons. Promising results were achieved in the simulation of single quadratic integrate-and-fire neurons. Here, we assess the performance of voltage stepping in network simulations by considering more complex neurons (quadratic integrate-and-fire neurons with adaptation) coupled with multiple synapses. To handle the discrete nature of synaptic interactions, we recast voltage stepping in a general framework, the discrete event system specification. The efficiency of the method is assessed through simulations and comparisons with a modified time-stepping scheme of the Runge-Kutta type. We demonstrated numerically that the original order of voltage stepping is preserved when simulating connected spiking neurons, independent of the network activity and connectivity.

  18. Semi-analytical solution for the generalized absorbing boundary condition in molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Lee, Chung-Shuo; Chen, Yan-Yu; Yu, Chi-Hua; Hsu, Yu-Chuan; Chen, Chuin-Shan

    2017-02-01

    We present a semi-analytical solution of a time-history kernel for the generalized absorbing boundary condition in molecular dynamics (MD) simulations. To facilitate the kernel derivation, the concept of virtual atoms in real space that can conform with an arbitrary boundary in an arbitrary lattice is adopted. The generalized Langevin equation is regularized using eigenvalue decomposition and, consequently, an analytical expression of an inverse Laplace transform is obtained. With construction of dynamical matrices in the virtual domain, a semi-analytical form of the time-history kernel functions for an arbitrary boundary in an arbitrary lattice can be found. The time-history kernel functions for different crystal lattices are derived to show the generality of the proposed method. Non-equilibrium MD simulations in a triangular lattice with and without the absorbing boundary condition are conducted to demonstrate the validity of the solution.

  19. Gyrokinetic particle simulation of microturbulence for general magnetic geometry and experimental profiles

    SciTech Connect

    Xiao, Yong; Holod, Ihor; Wang, Zhixuan; Lin, Zhihong; Zhang, Taige

    2015-02-15

    Developments in gyrokinetic particle simulation enable the gyrokinetic toroidal code (GTC) to simulate turbulent transport in tokamaks with realistic equilibrium profiles and plasma geometry, which is a critical step in the code–experiment validation process. These new developments include numerical equilibrium representation using B-splines, a new Poisson solver based on finite difference using field-aligned mesh and magnetic flux coordinates, a new zonal flow solver for general geometry, and improvements on the conventional four-point gyroaverage with nonuniform background marker loading. The gyrokinetic Poisson equation is solved in the perpendicular plane instead of the poloidal plane. Exploiting these new features, GTC is able to simulate a typical DIII-D discharge with experimental magnetic geometry and profiles. The simulated turbulent heat diffusivity and its radial profile show good agreement with other gyrokinetic codes. The newly developed nonuniform loading method provides a modified radial transport profile to that of the conventional uniform loading method.

  20. The Tropical Subseasonal Variability Simulated in the NASA GISS General Circulation Model

    NASA Technical Reports Server (NTRS)

    Kim, Daehyun; Sobel, Adam H.; DelGenio, Anthony D.; Chen, Yonghua; Camargo, Suzana J.; Yao, Mao-Sung; Kelley, Maxwell; Nazarenko, Larissa

    2012-01-01

    The tropical subseasonal variability simulated by the Goddard Institute for Space Studies general circulation model, Model E2, is examined. Several versions of Model E2 were developed with changes to the convective parameterization in order to improve the simulation of the Madden-Julian oscillation (MJO). When the convective scheme is modified to have a greater fractional entrainment rate, Model E2 is able to simulate MJO-like disturbances with proper spatial and temporal scales. Increasing the rate of rain reevaporation has additional positive impacts on the simulated MJO. The improvement in MJO simulation comes at the cost of increased biases in the mean state, consistent in structure and amplitude with those found in other GCMs when tuned to have a stronger MJO. By reinitializing a relatively poor-MJO version with restart files from a relatively better-MJO version, a series of 30-day integrations is constructed to examine the impacts of the parameterization changes on the organization of tropical convection. The poor-MJO version with smaller entrainment rate has a tendency to allow convection to be activated over a broader area and to reduce the contrast between dry and wet regimes so that tropical convection becomes less organized. Besides the MJO, the number of tropical-cyclone-like vortices simulated by the model is also affected by changes in the convection scheme. The model simulates a smaller number of such storms globally with a larger entrainment rate, while the number increases significantly with a greater rain reevaporation rate.

  1. A measurement-based generalized source model for Monte Carlo dose simulations of CT scans

    NASA Astrophysics Data System (ADS)

    Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun

    2017-03-01

    The goal of this study is to develop a generalized source model for accurate Monte Carlo dose simulations of CT scans based solely on the measurement data without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at x-ray target level with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution respectively with a Levenberg–Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model was tested on a GE LightSpeed and a Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameters) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients’ CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in the diagnostic and therapeutic radiology.

  2. The Early Jurassic climate: General circulation model simulations and the paleoclimate record

    SciTech Connect

    Chandler, M.A.

    1992-01-01

    This thesis presents the results of several general circulation model simulations of the Early Jurassic climate. The general circulation model employed was developed at the Goddard Institute for Space Studies while most paleoclimate data were provided by the Paleographic Atlas Project of the University of Chicago. The first chapter presents an Early Jurassic base simulation, which uses detailed reconstructions of paleogeography, vegetation, and sea surface temperature as boundary condition data sets. The resulting climatology reveals an Earth 5.2[degrees]C warmer, globally, than at present and a latitudinal temperature gradient dominated by high-latitude warming (+20[degrees]C) and little tropical change (+1[degrees]C). Comparisons show a good correlation between simulated results and paleoclimate data. Sensitivity experiments are used to investigate any model-data mismatches. Chapters two and three discuss two important aspects of Early Jurassic climate, continental aridity and global warming. Chapter two focuses on the hydrological capabilities of the general circulation model. The general circulation model's hydrologic diagnostics are evaluated, using the distribution of modern deserts and Early Jurassic paleoclimate data as validating constraints. A new method, based on general circulation model diagnostics and empirical formulae, is proposed for evaluating moisture balance. Chapter three investigates the cause of past global warming, concentrating on the role of increased ocean heat transport. Early Jurassic simulations show that increased ocean heat transports may have been a major factor in past climates. Increased ocean heat transports create latitudinal temperature gradients that closely approximate paleoclimate data and solve the problem of tropical overheating that results from elevated atmospheric carbon dioxide. Increased carbon dioxide cannot duplicate the Jurassic climate without also including increased ocean heat transports.

  3. General-relativistic simulations of binary black hole-neutron stars: Precursor electromagnetic signals

    NASA Astrophysics Data System (ADS)

    Paschalidis, Vasileios; Etienne, Zachariah B.; Shapiro, Stuart L.

    2013-07-01

    We perform the first general relativistic force-free simulations of neutron star magnetospheres in orbit about spinning and nonspinning black holes. We find promising precursor electromagnetic emission: typical Poynting luminosities at, e.g., an orbital separation of r=6.6RNS are LEM˜6×1042(BNS,p/1013G)2(MNS/1.4M⊙)2erg/s. The Poynting flux peaks within a broad beam of ˜40° in the azimuthal direction and within ˜60° from the orbital plane, establishing a possible lighthouse effect. Our calculations, though preliminary, preview more detailed simulations of these systems that we plan to perform in the future.

  4. Generalized math model for simulation of high-altitude balloon systems

    NASA Technical Reports Server (NTRS)

    Nigro, N. J.; Elkouh, A. F.; Hinton, D. E.; Yang, J. K.

    1985-01-01

    Balloon systems have proved to be a cost-effective means for conducting research experiments (e.g., infrared astronomy) in the earth's atmosphere. The purpose of this paper is to present a generalized mathematical model that can be used to simulate the motion of these systems once they have attained float altitude. The resulting form of the model is such that the pendulation and spin motions of the system are uncoupled and can be analyzed independently. The model is evaluated by comparing the simulation results with data obtained from an actual balloon system flown by NASA.

  5. Simulating of the measurement-device independent quantum key distribution with phase randomized general sources

    PubMed Central

    Wang, Qin; Wang, Xiang-Bin

    2014-01-01

    We present a model on the simulation of the measurement-device independent quantum key distribution (MDI-QKD) with phase randomized general sources. It can be used to predict experimental observations of a MDI-QKD with linear channel loss, simulating corresponding values for the gains, the error rates in different basis, and also the final key rates. Our model can be applicable to the MDI-QKDs with arbitrary probabilistic mixture of different photon states or using any coding schemes. Therefore, it is useful in characterizing and evaluating the performance of the MDI-QKD protocol, making it a valuable tool in studying the quantum key distributions. PMID:24728000

  6. Cloud-radiative effects on implied oceanic energy transports as simulated by atmospheric general circulation models

    SciTech Connect

    Gleckler, P.J.; Randall, D.A.; Boer, G.

    1994-03-01

    This paper reports on energy fluxes across the surface of the ocean as simulated by fifteen atmospheric general circulation models in which ocean surface temperatures and sea-ice boundaries are prescribed. The oceanic meridional energy transport that would be required to balance these surface fluxes is computed, and is shown to be critically sensitive to the radiative effects of clouds, to the extent that even the sign of the Southern Hemisphere ocean energy transport can be affected by the errors in simulated cloud-radiation interactions.

  7. Gyrokinetic simulations in general geometry and applications to collisional damping of zonal flows

    SciTech Connect

    Lin, Z.; Hahm, T.S.; Lee, W.W.; Tang, W.M.; White, R.B.

    2000-02-15

    A fully three-dimensional gyrokinetic particle code using magnetic coordinates for general geometry has been developed and applied to the investigation of zonal flows dynamics in toroidal ion-temperature-gradient turbulence. Full torus simulation results support the important conclusion that turbulence-driven zonal flows significantly reduce the turbulent transport. Linear collisionless simulations for damping of an initial poloidal flow perturbation exhibit an asymptotic residual flow. The collisional damping of this residual causes the dependence of ion thermal transport on the ion-ion collision frequency even in regimes where the instabilities are collisionless.

  8. A generalized mathematical framework for stochastic simulation and forecast of hydrologic time series

    NASA Astrophysics Data System (ADS)

    Koutsoyiannis, Demetris

    2000-02-01

    A generalized framework for single-variate and multivariate simulation and forecasting problems in stochastic hydrology is proposed. It is appropriate for short-term or long-term memory processes and preserves the Hurst coefficient even in multivariate processes with a different Hurst coefficient in each location. Simultaneously, it explicitly preserves the coefficients of skewness of the processes. The proposed framework incorporates short-memory (autoregressive moving average) and long-memory (fractional Gaussian noise) models, considering them as special instances of a parametrically defined generalized autocovariance function, more comprehensive than those used in these classes of models. The generalized autocovariance function is then implemented in a generalized moving average generating scheme that yields a new time-symmetric (backward-forward) representation, whose advantages are studied. Fast algorithms for computation of internal parameters of the generating scheme are developed, appropriate for problems including even thousands of such parameters. The proposed generating scheme is also adapted through a generalized methodology to perform in forecast mode, in addition to simulation mode. Finally, a specific form of the model for problems where the autocorrelation function can be defined only for a certain finite number of lags is also studied. Several illustrations are included to clarify the features and the performance of the components of the proposed framework.

  9. Generalized-ensemble simulations of the human parathyroid hormone fragment PTH(1-34)

    NASA Astrophysics Data System (ADS)

    Hansmann, Ulrich H. E.

    2004-01-01

    A generalized-ensemble technique, multicanonical sampling, is used to study the folding of a 34-residue human parathyroid hormone fragment. An all-atom model of the peptide is employed and the protein-solvent interactions are approximated by an implicit solvent. Our results demonstrate that generalized-ensemble simulations are well suited to sample low-energy structures of such large polypeptides. Configurations with a root-mean-square deviation to the crystal structure of less than 1 Å are found. Finally, we discuss limitations of our implicit solvent model.

  10. Effects of cumulus convection on the simulated monsoon circulation in a general circulation model

    SciTech Connect

    Zhang, Guang Jun )

    1994-09-01

    The effect of cumulus convection on the Asian summer monsoon circulation is investigated, using a general circulation model. Two simulations for the summer months (June, July, and August) are performed, one parameterizing convection using a mass flux scheme and the other without convective parameterization. The results show that convection has significant effects on the monsoon circulation and its associated precipitation. In the simulation with the mass flux convective parameterization, precipitation in the western Pacific is decreased, together with a decrease in surface evaporation and wind speed. In the indian monsoon region it is almost the opposite. Comparison with a simulation using moist convective adjustment to parameterize convection shows that the monsoon circulation and precipitation distribution in the no-convection simulation are very similar to those in the simulation with moist convective adjustment. The difference in the large-scale circulation with and without convective parameterization is interpreted in terms of convective stabilization of the atmosphere by convection, using dry and moist static energy budgets. It is shown that weakening of the low-level convergence in the western Pacific in the simulation with convection is closely associated with the stabilization of the atmosphere by convection, mostly through drying of the lower troposphere; changes in low-level convergence lead to changes in precipitation. The precipitation increase in the Indian monsoon can be explained similarly. 29 refs., 12 figs.

  11. A General Relativistic Magnetohydrodynamics Simulation of Jet Formation with a State Transition

    NASA Technical Reports Server (NTRS)

    Nishikawa, K. I.; Richardson, G.; Koide, S.; Shibata, K.; Kudoh, T.; Hardee, P.; Fushman, G. J.

    2004-01-01

    We have performed the first fully three-dimensional general relativistic magnetohydrodynamic (GRMHD) simulation of jet formation from a thin accretion disk around a Schwarzschild black hole with a free-falling corona. The initial simulation results show that a bipolar jet (velocity sim 0.3c) is created as shown by previous two-dimensional axisymmetric simulations with mirror symmetry at the equator. The 3-D simulation ran over one hundred light-crossing time units which is considerably longer than the previous simulations. We show that the jet is initially formed as predicted due in part to magnetic pressure from the twisting the initially uniform magnetic field and from gas pressure associated with shock formation. At later times, the accretion disk becomes thick and the jet fades resulting in a wind that is ejected from the surface of the thickened (torus-like) disk. It should be noted that no streaming matter from a donor is included at the outer boundary in the simulation (an isolated black hole not binary black hole). The wind flows outwards with a wider angle than the initial jet. The widening of the jet is consistent with the outward moving shock wave. This evolution of jet-disk coupling suggests that the low/hard state of the jet system may switch to the high/soft state with a wind, as the accretion rate diminishes.

  12. General relativistic simulations of compact binary mergers as engines for short gamma-ray bursts

    NASA Astrophysics Data System (ADS)

    Paschalidis, Vasileios

    2017-04-01

    Black hole—neutron star (BHNS) and neutron star—neutron star (NSNS) binaries are among the favored candidates for the progenitors of the black hole—disk systems that may be the engines powering short-hard gamma ray bursts. After almost two decades of simulations of binary NSNSs and BHNSs in full general relativity we are now beginning to understand the ingredients that may be necessary for these systems to launch incipient jets. Here, we review our current understanding, and summarize the surprises and lessons learned from state-of-the-art (magnetohydrodynamic) simulations in full general relativity of BHNS and NSNS mergers as jet engines for short-hard gamma-ray bursts. We also propose a new approach to probing the nuclear equation of state by virtue of multimessenger observations.

  13. Well-posedness and generalized plane waves simulations of a 2D mode conversion model

    SciTech Connect

    Imbert-Gérard, Lise-Marie

    2015-12-15

    Certain types of electro-magnetic waves propagating in a plasma can undergo a mode conversion process. In magnetic confinement fusion, this phenomenon is very useful to heat the plasma, since it permits to transfer the heat at or near the plasma center. This work focuses on a mathematical model of wave propagation around the mode conversion region, from both theoretical and numerical points of view. It aims at developing, for a well-posed equation, specific basis functions to study a wave mode conversion process. These basis functions, called generalized plane waves, are intrinsically based on variable coefficients. As such, they are particularly adapted to the mode conversion problem. The design of generalized plane waves for the proposed model is described in detail. Their implementation within a discontinuous Galerkin method then provides numerical simulations of the process. These first 2D simulations for this model agree with qualitative aspects studied in previous works.

  14. Simulator Evaluation of Runway Incursion Prevention Technology for General Aviation Operations

    NASA Technical Reports Server (NTRS)

    Jones, Denise R.; Prinzel, Lawrence J., III

    2011-01-01

    A Runway Incursion Prevention System (RIPS) has been designed under previous research to enhance airport surface operations situation awareness and provide cockpit alerts of potential runway conflict, during transport aircraft category operations, in order to prevent runway incidents while also improving operations capability. This study investigated an adaptation of RIPS for low-end general aviation operations using a fixed-based simulator at the National Aeronautics and Space Administration (NASA) Langley Research Center (LaRC). The purpose of the study was to evaluate modified RIPS aircraft-based incursion detection algorithms and associated alerting and airport surface display concepts for low-end general aviation operations. This paper gives an overview of the system, simulation study, and test results.

  15. DL_POLY_2.0: a general-purpose parallel molecular dynamics simulation package.

    PubMed

    Smith, W; Forester, T R

    1996-06-01

    DL_POLY_2.0 is a general-purpose parallel molecular dynamics simulation package developed at Daresbury Laboratory under the auspices of the Council for the Central Laboratory of the Research Councils. Written to support academic research, it has a wide range of applications and is designed to run on a wide range of computers: from single processor workstations to parallel supercomputers. Its structure, functionality, performance, and availability are described.

  16. Greenhouse gas-induced climate change simulated with the CCC second-generation general circulation model

    SciTech Connect

    Boer, G.J.; McFarlane, N.A.; Lazare, M.

    1992-10-01

    The Canadian Climate Centre second-generation atmospheric general circulation model coupled to a mixed-layer ocean incorporating thermodynamic sea ice is used to simulate the equilibrium climate response to a doubling of CO2. The results of the simulation indicate a global annual warming of 3.5 °C with enhanced warming found over land and at higher latitudes. Precipitation and evaporation rates increase by about 4 percent, and cloud cover decreases by 2.2 percent. Soil moisture decreases over continental Northern Hemisphere land areas in summer. The frozen component of soil moisture decreases and the liquid component increases in association with the increase of temperature at higher latitudes. The simulated accumulation rate of permanent snow cover decreases markedly over Greenland and increases slightly over Antarctica. Seasonal snow and sea ice boundaries retreat, but local decreases in planetary albedo are counteracted by tropical increases, so there is little change in the global average. 39 refs.

  17. Simulation of charge breeding of rubidium using Monte Carlo charge breeding code and generalized ECRIS model

    SciTech Connect

    Zhao, L.; Cluggish, B.; Kim, J. S.; Pardo, R.; Vondrasek, R.

    2010-02-15

    A Monte Carlo charge breeding code (MCBC) is being developed by FAR-TECH, Inc. to model the capture and charge breeding of a 1+ ion beam in an electron cyclotron resonance ion source (ECRIS) device. The ECRIS plasma is simulated using the generalized ECRIS model, which has two choices of boundary settings: free boundary condition and Bohm condition. The charge state distribution of the extracted beam ions is calculated by solving the steady state ion continuity equations where the profiles of the captured ions are used as source terms. MCBC simulations of the charge breeding of Rb+ showed good agreement with recent charge breeding experiments at Argonne National Laboratory (ANL). MCBC correctly predicted the peak of the highly charged ion state outputs under the free boundary condition, and a similar charge state distribution width but a lower peak charge state under the Bohm condition. The comparisons between the simulation results and ANL experimental measurements are presented and discussed.

  18. GOOSE 1.4 -- Generalized Object-Oriented Simulation Environment user's manual

    SciTech Connect

    Nypaver, D.J.; Abdalla, M.A.; Guimaraes, L.

    1992-11-01

    The Generalized Object-Oriented Simulation Environment (GOOSE) is a new and innovative simulation tool that is being developed by the Simulation Group of the Advanced Controls Program at Oak Ridge National Laboratory. GOOSE is a fully interactive prototype software package that provides users with the capability of creating sophisticated mathematical models of physical systems. GOOSE uses an object-oriented approach to modeling and combines the concept of modularity (building a complex model easily from a collection of previously written components) with the additional features of allowing precompilation, optimization, and testing and validation of individual modules. Once a library of components has been defined and compiled, models can be built and modified without recompilation. This user's manual provides detailed descriptions of the structure and component features of GOOSE, along with a comprehensive example using a simplified model of a pressurized water reactor.

  19. GOOSE 1.4 -- Generalized Object-Oriented Simulation Environment user's manual

    SciTech Connect

    Nypaver, D.J.; Abdalla, M.A.; Guimaraes, L. (Inst. de Estudos Avancados, Sao Jose dos Campos, SP)

    1992-11-01

    The Generalized Object-Oriented Simulation Environment (GOOSE) is a new and innovative simulation tool that is being developed by the Simulation Group of the Advanced Controls Program at Oak Ridge National Laboratory. GOOSE is a fully interactive prototype software package that provides users with the capability of creating sophisticated mathematical models of physical systems. GOOSE uses an object-oriented approach to modeling and combines the concept of modularity (building a complex model easily from a collection of previously written components) with the additional features of allowing precompilation, optimization, and testing and validation of individual modules. Once a library of components has been defined and compiled, models can be built and modified without recompilation. This user's manual provides detailed descriptions of the structure and component features of GOOSE, along with a comprehensive example using a simplified model of a pressurized water reactor.

  20. Simulation and flight evaluation of a head-up landing aid for general aviation

    NASA Technical Reports Server (NTRS)

    Harris, R. L., Sr.; Goode, M. W.; Yenni, K. R.

    1978-01-01

    A head-up general aviation landing aid called a landing site indicator (LASI) was tested in a fixed-base visual simulator and in an airplane to determine its effectiveness. The display, which had a simplified format and method of implementation, presented to the pilot in his line of sight through the windshield a graphic representation of the airplane's velocity vector. In each testing mode (simulation or flight), each of 4 pilots made 20 landing approaches with the LASI and 20 approaches without it. The standard deviations of approach and touchdown parameters were considered an indication of pilot consistency. Use of the LASI improved consistency and also reduced elevator, aileron, and rudder control activity. Pilots' comments indicated that the LASI reduced work load. An appendix is included with a discussion of the simulator effectiveness for visual flight tasks.

  1. Biomembrane simulations of 12 lipid types using the general amber force field in a tensionless ensemble.

    PubMed

    Coimbra, João T S; Sousa, Sérgio F; Fernandes, Pedro A; Rangel, Maria; Ramos, Maria J

    2014-01-01

    The AMBER family of force fields is one of the most commonly used alternatives to describe proteins and drug-like molecules in molecular dynamics simulations. However, the absence of a specific set of parameters for lipids has been limiting the widespread application of this force field in biomembrane simulations, including membrane protein simulations and drug-membrane simulations. Here, we report the systematic parameterization of 12 common lipid types consistent with the General Amber Force Field (GAFF), with charge parameters determined with RESP at the HF/6-31G(d) level of theory, to be consistent with AMBER. The accuracy of the scheme was evaluated by comparing predicted and experimental values for structural lipid properties in MD simulations in an NPT ensemble with explicit solvent in 100:100 bilayer systems. Globally, a consistent agreement with experimental reference data on membrane structures was achieved for some lipid types when using the typical MD conditions normally employed when handling membrane proteins and drug-membrane simulations (a tensionless NPT ensemble, 310 K), without the application of any of the constraints often used in other biomembrane simulations (such as the surface tension and the total simulation box area). The present set of parameters and the universal approach used in the parameterization of all the lipid types described here, as well as the consistency with the AMBER force field family and the tensionless NPT ensemble used, open the door to systematic studies combining lipid components with small drug-like molecules or membrane proteins and show the potential of GAFF in dealing with biomembranes.

  2. Towards Observational Astronomy of Jets in Active Galaxies from General Relativistic Magnetohydrodynamic Simulations

    NASA Astrophysics Data System (ADS)

    Anantua, Richard; Blandford, Roger; McKinney, Jonathan; Tchekhovskoy, Alexander

    2016-01-01

    We carry out the process of "observing" simulations of active galactic nuclei (AGN) with relativistic jets (hereafter called jet/accretion disk/black hole (JAB) systems) from ray tracing between image plane and source to convolving the resulting images with a point spread function. Images are generated at arbitrary observer angle relative to the black hole spin axis by implementing spatial and temporal interpolation of conserved magnetohydrodynamic flow quantities from a time series of output datablocks from fully general relativistic 3D simulations. We also describe the evolution of simulations of JAB systems' dynamical and kinematic variables, e.g., velocity shear and momentum density, respectively, and the variation of these variables with respect to observer polar and azimuthal angles. We produce, at frequencies from radio to optical, fixed observer time intensity and polarization maps using various plasma physics motivated prescriptions for the emissivity function of physical quantities from the simulation output, and analyze the corresponding light curves. Our hypothesis is that this approach reproduces observed features of JAB systems such as superluminal bulk flow projections and quasi-periodic oscillations in the light curves more closely than extant stylized analytical models, e.g., cannonball bulk flows. Moreover, our development of user-friendly, versatile C++ routines for processing images of state-of-the-art simulations of JAB systems may afford greater flexibility for observing a wide range of sources from high power BL-Lacs to low power quasars (possibly with the same simulation) without requiring years of observation using multiple telescopes. Advantages of observing simulations instead of observing astrophysical sources directly include: the absence of a diffraction limit, panoramic views of the same object and the ability to freely track features. Light travel time effects become significant for high Lorentz factor and small angles between

  3. Using Beowulf clusters to speed up neural simulations.

    PubMed

    Smith, Leslie S.

    2002-06-01

    Simulation of large neural systems on PCs requires large amounts of memory and takes a long time. Parallel computers can speed them up. A new form of parallel computer, the Beowulf cluster, offers an affordable option. Event-driven simulation and processor farming are two ways of exploiting this parallelism in neural simulations.
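
    The two strategies named in this record are generic; as an illustration of the first, the sketch below is a minimal event-driven loop in which spikes are the only events processed, held in a priority queue ordered by time. The Neuron class, its decay rule, and all parameters are illustrative placeholders, not code from the paper.

        # Minimal event-driven spiking-network loop: only spike events are processed,
        # so idle neurons cost nothing between events.  All names and parameters here
        # are illustrative, not taken from the paper.
        import heapq, itertools

        _seq = itertools.count()        # tie-breaker so the heap never compares Neuron objects

        class Neuron:
            def __init__(self, threshold=1.0, decay=0.9):
                self.threshold, self.decay = threshold, decay
                self.v, self.last_t = 0.0, 0.0
                self.targets = []       # (target_neuron, weight, delay) tuples

            def receive(self, t, w):
                # decay the membrane potential over the silent interval, then add the input
                self.v = self.v * self.decay ** (t - self.last_t) + w
                self.last_t = t
                if self.v >= self.threshold:
                    self.v = 0.0
                    return True         # the neuron fires
                return False

        def run(initial_spikes, t_end=100.0):
            events = [(t, next(_seq), n) for t, n in initial_spikes]
            heapq.heapify(events)
            while events and events[0][0] <= t_end:
                t, _, src = heapq.heappop(events)       # src fires at time t
                for tgt, w, d in src.targets:
                    t_arrival = t + d                   # axonal delay
                    if tgt.receive(t_arrival, w):       # target may fire on arrival
                        heapq.heappush(events, (t_arrival, next(_seq), tgt))

        # tiny two-neuron example
        a, b = Neuron(), Neuron(threshold=0.5)
        a.targets.append((b, 0.6, 1.0))
        run([(0.0, a)])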

  4. Evaluation of the Event Driven Phenology Model Coupled with the VegET Evapotranspiration Model Through Comparisons with Reference Datasets in a Spatially Explicit Manner

    NASA Technical Reports Server (NTRS)

    Kovalskyy, V.; Henebry, G. M.; Adusei, B.; Hansen, M.; Roy, D. P.; Senay, G.; Mocko, D. M.

    2011-01-01

    A new model coupling scheme with remote sensing data assimilation was developed for estimation of daily actual evapotranspiration (ET). The scheme represents a mix of the VegET, a physically based model to estimate ET from a water balance, and an event driven phenology model (EDPM), where the EDPM is an empirically derived crop specific model capable of producing seasonal trajectories of canopy attributes. In this experiment, the scheme was deployed in a spatially explicit manner within the croplands of the Northern Great Plains. The evaluation was carried out using 2007-2009 land surface forcing data from the North American Land Data Assimilation System (NLDAS) and crop maps derived from remotely sensed data of NASA's Moderate Resolution Imaging Spectroradiometer (MODIS). We compared the canopy parameters produced by the phenology model with normalized difference vegetation index (NDVI) data derived from the MODIS nadir bi-directional reflectance distribution function (BRDF) adjusted reflectance (NBAR) product. The expectations of the EDPM performance in prognostic mode were met, producing a determination coefficient (r2) of 0.8 +/- 0.15. Model estimates of NDVI yielded a root mean square error (RMSE) of 0.1 +/- 0.035 for the entire study area. Retrospective correction of canopy dynamics with MODIS NDVI brought the errors down to just below 10% of the observed data range. The ET estimates produced by the coupled scheme were compared with ones from the MODIS land product suite. The expected r2 = 0.7 +/- 0.15 and RMSE = 11.2 +/- 4 mm per 8 days were met and even exceeded by the coupling scheme functioning in both prognostic and retrospective modes. Minor setbacks of the EDPM and VegET performance (r2 of about 0.5 and an additional 30% of RMSE) were found on the peripheries of the study area and attributed to insufficient EDPM training and to the spatially varying accuracy of the crop maps. Overall the experiment provided sufficient evidence of soundness and robustness of the EDPM and
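
    The skill scores quoted in this record (determination coefficient r2 and RMSE) can be computed as sketched below; one common definition of r2 is used, and the short arrays stand in for modeled and reference NDVI (or ET) series rather than the study's actual data.

        # r2 and RMSE between a modeled series and a reference series (placeholder data).
        import numpy as np

        def r2_and_rmse(modeled, reference):
            modeled, reference = np.asarray(modeled, float), np.asarray(reference, float)
            resid = reference - modeled
            ss_res = np.sum(resid ** 2)                              # residual sum of squares
            ss_tot = np.sum((reference - reference.mean()) ** 2)     # total sum of squares
            return 1.0 - ss_res / ss_tot, np.sqrt(np.mean(resid ** 2))

        r2, rmse = r2_and_rmse([0.31, 0.52, 0.71, 0.64], [0.30, 0.50, 0.75, 0.60])
        print(r2, rmse)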

  5. Physical formulation and numerical algorithm for simulating N immiscible incompressible fluids involving general order parameters

    SciTech Connect

    Dong, S.

    2015-02-15

    We present a family of physical formulations, and a numerical algorithm, based on a class of general order parameters for simulating the motion of a mixture of N (N⩾2) immiscible incompressible fluids with given densities, dynamic viscosities, and pairwise surface tensions. The N-phase formulations stem from a phase field model we developed in a recent work based on the conservations of mass/momentum, and the second law of thermodynamics. The introduction of general order parameters leads to an extremely strongly-coupled system of (N−1) phase field equations. On the other hand, the general form enables one to compute the N-phase mixing energy density coefficients in an explicit fashion in terms of the pairwise surface tensions. We show that the increased complexity in the form of the phase field equations associated with general order parameters in actuality does not cause essential computational difficulties. Our numerical algorithm reformulates the (N−1) strongly-coupled phase field equations for general order parameters into 2(N−1) Helmholtz-type equations that are completely de-coupled from one another. This leads to a computational complexity comparable to that for the simplified phase field equations associated with certain special choice of the order parameters. We demonstrate the capabilities of the method developed herein using several test problems involving multiple fluid phases and large contrasts in densities and viscosities among the multitude of fluids. In particular, by comparing simulation results with the Langmuir–de Gennes theory of floating liquid lenses we show that the method using general order parameters produces physically accurate results for multiple fluid phases.

  6. High-resolution numerical simulation of Venus atmosphere by AFES (Atmospheric general circulation model For the Earth Simulator)

    NASA Astrophysics Data System (ADS)

    Sugimoto, Norihiko; AFES project Team

    2016-10-01

    We have developed an atmospheric general circulation model (AGCM) for Venus on the basis of AFES (AGCM For the Earth Simulator) and performed a high-resolution simulation (e.g., Sugimoto et al., 2014a). The highest resolution is T639L120; 1920 x 960 horizontal grid points (grid intervals of about 20 km) with 120 vertical layers (layer intervals of about 1 km). In the model, the atmosphere is dry and forced by solar heating with diurnal and semi-diurnal components. The infrared radiative process is simplified by adopting a Newtonian cooling approximation. The temperature is relaxed to a prescribed horizontally uniform temperature distribution, in which a layer of almost neutral static stability, as observed in the Venus atmosphere, is present. A fast zonal wind in solid-body rotation is given as the initial state. Starting from this idealized superrotation, the model atmosphere reaches a quasi-equilibrium state within 1 Earth year and this state is stably maintained for more than 10 Earth years. The zonal-mean zonal flow with weak midlatitude jets has an almost constant velocity of 120 m/s at latitudes between 45°S and 45°N at the cloud top levels, which agrees very well with observations. In the cloud layer, baroclinic waves develop continuously at midlatitudes and generate Rossby-type waves at the cloud top (Sugimoto et al., 2014b). In the polar region, a warm polar vortex surrounded by a cold latitude band (cold collar) is well reproduced (Ando et al., 2016). As for horizontal kinetic energy spectra, the divergent component is broadly (k > 10) larger than the rotational component, compared with that on Earth (Kashimura et al., in preparation). We will show recent results of the high-resolution run, e.g., small-scale gravity waves attributed to large-scale thermal tides. Sugimoto, N. et al. (2014a), Baroclinic modes in the Venus atmosphere simulated by GCM, Journal of Geophysical Research: Planets, Vol. 119, p1950-1968. Sugimoto, N. et al. (2014b), Waves in a Venus general

  7. General-relativistic Large-eddy Simulations of Binary Neutron Star Mergers

    NASA Astrophysics Data System (ADS)

    Radice, David

    2017-03-01

    The flow inside remnants of binary neutron star (NS) mergers is expected to be turbulent because of magnetohydrodynamic instabilities activated at scales too small to be resolved in simulations. To study the large-scale impact of these instabilities, we develop a new formalism, based on the large-eddy simulation technique, for the modeling of subgrid-scale turbulent transport in general relativity. We apply it, for the first time, to the simulation of the late-inspiral and merger of two NSs. We find that turbulence can significantly affect the structure and survival time of the merger remnant, as well as its gravitational-wave (GW) and neutrino emissions. The former will be relevant for GW observation of merging NSs. The latter will affect the composition of the outflow driven by the merger and might influence its nucleosynthetic yields. The accretion rate after black hole formation is also affected. Nevertheless, we find that, for the most likely values of the turbulence mixing efficiency, these effects are relatively small and the GW signal will be affected only weakly by the turbulence. Thus, our simulations provide a first validation of all existing post-merger GW models.

  8. Digital-analog quantum simulation of generalized Dicke models with superconducting circuits

    NASA Astrophysics Data System (ADS)

    Lamata, Lucas

    2017-03-01

    We propose a digital-analog quantum simulation of generalized Dicke models with superconducting circuits, including Fermi-Bose condensates, biased and pulsed Dicke models, for all regimes of light-matter coupling. We encode these classes of problems in a set of superconducting qubits coupled with a bosonic mode implemented by a transmission line resonator. Via digital-analog techniques, an efficient quantum simulation can be performed in state-of-the-art circuit quantum electrodynamics platforms, by suitable decomposition into analog qubit-bosonic blocks and collective single-qubit pulses through digital steps. Moreover, just a single global analog block would be needed during the whole protocol in most of the cases, superimposed with fast periodic pulses to rotate and detune the qubits. Therefore, a large number of digital steps may be attained with this approach, providing a reduced digital error. Additionally, the number of gates per digital step does not grow with the number of qubits, rendering the simulation efficient. This strategy paves the way for the scalable digital-analog quantum simulation of many-body dynamics involving bosonic modes and spin degrees of freedom with superconducting circuits.

  9. Digital-analog quantum simulation of generalized Dicke models with superconducting circuits

    PubMed Central

    Lamata, Lucas

    2017-01-01

    We propose a digital-analog quantum simulation of generalized Dicke models with superconducting circuits, including Fermi-Bose condensates, biased and pulsed Dicke models, for all regimes of light-matter coupling. We encode these classes of problems in a set of superconducting qubits coupled with a bosonic mode implemented by a transmission line resonator. Via digital-analog techniques, an efficient quantum simulation can be performed in state-of-the-art circuit quantum electrodynamics platforms, by suitable decomposition into analog qubit-bosonic blocks and collective single-qubit pulses through digital steps. Moreover, just a single global analog block would be needed during the whole protocol in most of the cases, superimposed with fast periodic pulses to rotate and detune the qubits. Therefore, a large number of digital steps may be attained with this approach, providing a reduced digital error. Additionally, the number of gates per digital step does not grow with the number of qubits, rendering the simulation efficient. This strategy paves the way for the scalable digital-analog quantum simulation of many-body dynamics involving bosonic modes and spin degrees of freedom with superconducting circuits. PMID:28256559

  10. Nonparametric simulation-based statistics for detecting linkage in general pedigrees

    SciTech Connect

    Davis, S.; Schroeder, M.; Weeks, D.E.; Goldin, L.R.

    1996-04-01

    We present here four nonparametric statistics for linkage analysis that test whether pairs of affected relatives share marker alleles more often than expected. These statistics are based on simulating the null distribution of a given statistic conditional on the unaffecteds' marker genotypes. Each statistic uses a different measure of marker sharing: the SimAPM statistic uses the simulation-based affected-pedigree-member measure based on identity-by-state (IBS) sharing. The SimKIN (kinship) measure is 1.0 for identity-by-descent (IBD) sharing, 0.0 for no IBD sharing, and the kinship coefficient when the IBD status is ambiguous. The simulation-based IBD (SimIBD) statistic uses a recursive algorithm to determine the probability of two affecteds sharing a specific allele IBD. The SimISO statistic is identical to SimIBD, except that it also measures marker similarity between unaffected pairs. We evaluated our statistics on data simulated under different two-locus disease models, comparing our results to those obtained with several other nonparametric statistics. Use of IBD information produces dramatic increases in power over the SimAPM method, which uses only IBS information. The power of our best statistic in most cases meets or exceeds the power of the other nonparametric statistics. Furthermore, our statistics perform comparisons between all affected relative pairs within general pedigrees and are not restricted to sib pairs or nuclear families. 32 refs., 5 figs., 6 tabs.
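
    As a small illustration of the SimKIN sharing measure described above, the helper below returns 1.0 for a pair sharing an allele identical by descent, 0.0 for no IBD sharing, and the pair's kinship coefficient when IBD status is ambiguous; the function name and the encoding of IBD status are ours, not the authors'.

        # Toy version of the SimKIN sharing measure: 1.0 when a pair of affected relatives
        # shares the allele IBD, 0.0 when it clearly does not, and the pair's kinship
        # coefficient when IBD status is ambiguous.  Encoding is illustrative only.
        def simkin_score(ibd_status, kinship_coefficient):
            """ibd_status: True (shared IBD), False (not shared), or None (ambiguous)."""
            if ibd_status is True:
                return 1.0
            if ibd_status is False:
                return 0.0
            return kinship_coefficient          # fall back on the a priori kinship

        # e.g. a full-sib pair (kinship 0.25) with ambiguous IBD status contributes 0.25
        pairs = [(True, 0.25), (None, 0.25), (False, 0.125)]
        statistic = sum(simkin_score(s, k) for s, k in pairs)
        print(statistic)                        # 1.25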

  11. Nonparametric simulation-based statistics for detecting linkage in general pedigrees.

    PubMed Central

    Davis, S.; Schroeder, M.; Goldin, L. R.; Weeks, D. E.

    1996-01-01

    We present here four nonparametric statistics for linkage analysis that test whether pairs of affected relatives share marker alleles more often than expected. These statistics are based on simulating the null distribution of a given statistic conditional on the unaffecteds' marker genotypes. Each statistic uses a different measure of marker sharing: the SimAPM statistic uses the simulation-based affected-pedigree-member measure based on identity-by-state (IBS) sharing. The SimKIN (kinship) measure is 1.0 for identity-by-descent (IBD) sharing, 0.0 for no IBD sharing, and the kinship coefficient when the IBD status is ambiguous. The simulation-based IBD (SimIBD) statistic uses a recursive algorithm to determine the probability of two affecteds sharing a specific allele IBD. The SimISO statistic is identical to SimIBD, except that it also measures marker similarity between unaffected pairs. We evaluated our statistics on data simulated under different two-locus disease models, comparing our results to those obtained with several other nonparametric statistics. Use of IBD information produces dramatic increases in power over the SimAPM method, which uses only IBS information. The power of our best statistic in most cases meets or exceeds the power of the other nonparametric statistics. Furthermore, our statistics perform comparisons between all affected relative pairs within general pedigrees and are not restricted to sib pairs or nuclear families. PMID:8644751

  12. Towards a generalized catchment flood processes simulation system with distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Chen, Y.

    2015-12-01

    High-resolution distributed hydrological models are regarded as having the potential to simulate catchment hydrological processes in fine detail, but challenges still exist. This paper presents a generalized catchment flood processes simulation system based on the Liuxihe Model, a physically based, process-oriented distributed hydrological model proposed mainly for catchment flood forecasting. In this system, several cutting-edge technologies have been employed, such as supercomputing, the PSO algorithm for parameter optimization, cloud computing, GIS and software engineering, and it is deployed on a high-performance computer with free public access. The data used to set up the model structure come from open-access databases, so the system can be used for catchments worldwide. With the application of a parallel computation algorithm, the model spatial resolution can be as fine as a 100 m grid while maintaining high computational efficiency, and the system can be used in large catchments. With the parameter optimization method, the model performance can be improved considerably. The flood events of several catchments in southern China with different drainage areas have been simulated with this system, and the results show that it has a strong capability for simulating catchment flood events, even in large river basins.
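
    The record mentions PSO-based parameter optimization; a generic particle swarm loop of that kind is sketched below. The objective is a placeholder standing in for a run of the hydrological model scored against observed flows, and the coefficients (inertia, cognitive and social weights) are typical textbook values rather than those used in the Liuxihe system.

        # Generic particle swarm optimization (PSO) loop for model-parameter calibration.
        # In practice the objective would run the hydrological model and return an error
        # score against observed discharge; here it is a stand-in quadratic bowl.
        import numpy as np

        def pso(objective, bounds, n_particles=20, n_iter=100, w=0.7, c1=1.5, c2=1.5, seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = bounds[:, 0], bounds[:, 1]
            x = rng.uniform(lo, hi, size=(n_particles, len(lo)))     # positions
            v = np.zeros_like(x)                                     # velocities
            pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
            gbest = pbest[np.argmin(pbest_f)].copy()
            for _ in range(n_iter):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, lo, hi)
                f = np.array([objective(p) for p in x])
                better = f < pbest_f
                pbest[better], pbest_f[better] = x[better], f[better]
                gbest = pbest[np.argmin(pbest_f)].copy()
            return gbest, pbest_f.min()

        best, err = pso(lambda p: np.sum((p - 0.3) ** 2), np.array([[0.0, 1.0]] * 3))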

  13. Automated procedure for developing hybrid computer simulations of turbofan engines. Part 1: General description

    NASA Technical Reports Server (NTRS)

    Szuch, J. R.; Krosel, S. M.; Bruton, W. M.

    1982-01-01

    A systematic, computer-aided, self-documenting methodology for developing hybrid computer simulations of turbofan engines is presented. The methodology that is presented makes use of a host program that can run on a large digital computer and a machine-dependent target (hybrid) program. The host program performs all the calculations and data manipulations that are needed to transform user-supplied engine design information to a form suitable for the hybrid computer. The host program also trims the self-contained engine model to match specified design-point information. Part I contains a general discussion of the methodology, describes a test case, and presents comparisons between hybrid simulation and specified engine performance data. Part II, a companion document, contains documentation, in the form of computer printouts, for the test case.

  14. General Relativistic Magnetohydrodynamics Simulations of Tilted Black Hole Accretion Flows and Their Radiative Properties

    NASA Astrophysics Data System (ADS)

    Shiokawa, Hotaka; Gammie, C. F.; Dolence, J.; Noble, S. C.

    2013-01-01

    We perform global general relativistic magnetohydrodynamics (GRMHD) simulations of non-radiative, magnetized disks that are initially tilted with respect to the black hole's spin axis. We run the simulations for different torus sizes and tilt angles at two different resolutions. We also perform radiative transfer using a Monte Carlo based code that includes synchrotron emission, absorption, and Compton scattering to obtain spectral energy distributions and light curves. Similar work was done by Fragile et al. (2007) and Dexter & Fragile (2012) to model the supermassive black hole Sgr A* with tilted accretion disks. We compare the results of our fully conservative hydrodynamic code, and spectra that include X-rays, with their results.

  15. User's guide for a general purpose dam-break flood simulation model (K-634)

    USGS Publications Warehouse

    Land, Larry F.

    1981-01-01

    An existing computer program for simulating dam-break floods for forecast purposes has been modified with an emphasis on general purpose applications. The original model was formulated, developed and documented by the National Weather Service. This model is based on the complete flow equations and uses a nonlinear implicit finite-difference numerical method. The first phase of the simulation routes a flood wave through the reservoir and computes an outflow hydrograph which is the sum of the flow through the dam's structures and the gradually developing breach. The second phase routes this outflow hydrograph through the stream which may be nonprismatic and have segments with subcritical or supercritical flow. The results are discharge and stage hydrographs at the dam as well as all of the computational nodes in the channel. From these hydrographs, peak discharge and stage profiles are tabulated. (USGS)

  16. A Generalized Fast Frequency Sweep Algorithm for Coupled Circuit-EM Simulations

    SciTech Connect

    Rockway, J D; Champagne, N J; Sharpe, R M; Fasenfest, B

    2004-01-14

    Frequency domain techniques are popular for analyzing electromagnetics (EM) and coupled circuit-EM problems. These techniques, such as the method of moments (MoM) and the finite element method (FEM), are used to determine the response of the EM portion of the problem at a single frequency. Since only one frequency is solved at a time, it may take a long time to calculate the parameters for wideband devices. In this paper, a fast frequency sweep based on the Asymptotic Waveform Evaluation (AWE) method is developed and applied to generalized mixed circuit-EM problems. The AWE method, which was originally developed for lumped-load circuit simulations, has recently been shown to be effective at quasi-static and low frequency full-wave simulations. Here it is applied to a full-wave MoM solver, capable of solving for metals, dielectrics, and coupled circuit-EM problems.
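
    As a rough illustration of the moment-matching idea behind AWE-style fast sweeps, the sketch below assumes a system matrix that is affine in frequency, (A0 + s*A1) x(s) = b, factors it once at the expansion point, and evaluates a truncated Taylor expansion across the sweep. The report's solver is Pade-based and built on MoM matrices, which is not reproduced here; the affine form and the random test matrices are assumptions for illustration.

        # Moment-matching sweep for (A0 + s*A1) x(s) = b: one LU factorization at s0,
        # moments by back-substitution, then a Taylor evaluation over the band.
        import numpy as np
        from scipy.linalg import lu_factor, lu_solve

        def awe_sweep(A0, A1, b, s0, s_values, n_moments=8):
            lu = lu_factor(A0 + s0 * A1)            # factor once for the whole sweep
            moments = [lu_solve(lu, b)]
            for _ in range(1, n_moments):
                moments.append(lu_solve(lu, -A1 @ moments[-1]))
            out = []
            for s in s_values:
                ds = s - s0
                out.append(sum(m * ds ** k for k, m in enumerate(moments)))
            return np.array(out)

        rng = np.random.default_rng(1)
        A0, A1 = rng.random((4, 4)) + 4 * np.eye(4), rng.random((4, 4))
        b = rng.random(4)
        x_sweep = awe_sweep(A0, A1, b, s0=1.0, s_values=np.linspace(0.8, 1.2, 5))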

  17. Wang-Landau Reaction Ensemble Method: Simulation of Weak Polyelectrolytes and General Acid-Base Reactions.

    PubMed

    Landsgesell, Jonas; Holm, Christian; Smiatek, Jens

    2017-02-14

    We present a novel method for the study of weak polyelectrolytes and general acid-base reactions in molecular dynamics and Monte Carlo simulations. The approach combines the advantages of the reaction ensemble and the Wang-Landau sampling method. Deprotonation and protonation reactions are simulated explicitly with the help of the reaction ensemble method, while the accurate sampling of the corresponding phase space is achieved by the Wang-Landau approach. The combination of both techniques provides a sufficient statistical accuracy such that meaningful estimates for the density of states and the partition sum can be obtained. With regard to these estimates, several thermodynamic observables like the heat capacity or reaction free energies can be calculated. We demonstrate that the computation times for the calculation of titration curves with a high statistical accuracy can be significantly decreased when compared to the original reaction ensemble method. The applicability of our approach is validated by the study of weak polyelectrolytes and their thermodynamic properties.
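
    A bare-bones Wang-Landau walk over a protonation-degree coordinate is sketched below for a non-interacting M-site acid, where the density of states it should recover is the binomial coefficient. The published method couples this kind of flat-histogram update with reaction-ensemble energy and volume factors for an interacting polyelectrolyte, which this toy omits; the flatness criterion and all parameters are illustrative.

        # Wang-Landau estimate of ln g(N), N = number of protonated sites of an ideal
        # M-site acid (exact answer: ln of the binomial coefficient C(M, N)).
        import math, random

        M = 20                                    # titratable sites
        state = [0] * M                           # 0 = deprotonated, 1 = protonated
        ln_g = [0.0] * (M + 1)                    # running estimate of ln g(N)
        hist = [0] * (M + 1)
        ln_f = 1.0                                # Wang-Landau modification factor
        N = 0

        while ln_f > 1e-4:
            site = random.randrange(M)            # propose (de)protonating one random site
            N_new = N + (1 if state[site] == 0 else -1)
            if random.random() < math.exp(ln_g[N] - ln_g[N_new]):
                state[site] ^= 1
                N = N_new
            ln_g[N] += ln_f                       # bias against already-visited macrostates
            hist[N] += 1
            if min(hist) > 0.8 * sum(hist) / (M + 1):   # crude flatness criterion
                hist = [0] * (M + 1)
                ln_f *= 0.5                       # refine the density-of-states estimate

        # up to an additive constant, ln_g should now follow ln C(M, N)
        print(round(ln_g[M // 2] - ln_g[0], 2), round(math.log(math.comb(M, M // 2)), 2))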

  18. Dust Emissions, Transport, and Deposition Simulated with the NASA Finite-Volume General Circulation Model

    NASA Technical Reports Server (NTRS)

    Colarco, Peter; daSilva, Arlindo; Ginoux, Paul; Chin, Mian; Lin, S.-J.

    2003-01-01

    Mineral dust aerosols have radiative impacts on Earth's atmosphere, have been implicated in local and regional air quality issues, and have been identified as vectors for transporting disease pathogens and bringing mineral nutrients to terrestrial and oceanic ecosystems. We present for the first time dust simulations using online transport and meteorological analysis in the NASA Finite-Volume General Circulation Model (FVGCM). Our dust formulation follows the formulation in the offline Georgia Institute of Technology-Goddard Global Ozone Chemistry Aerosol Radiation and Transport Model (GOCART) using a topographical source for dust emissions. We compare results of the FVGCM simulations with GOCART, as well as with in situ and remotely sensed observations. Additionally, we estimate budgets of dust emission and transport into various regions.

  19. GENERAL-RELATIVISTIC SIMULATIONS OF THREE-DIMENSIONAL CORE-COLLAPSE SUPERNOVAE

    SciTech Connect

    Ott, Christian D.; Abdikamalov, Ernazar; Moesta, Philipp; Haas, Roland; Drasco, Steve; O'Connor, Evan P.; Reisswig, Christian; Meakin, Casey A.; Schnetter, Erik

    2013-05-10

    We study the three-dimensional (3D) hydrodynamics of the post-core-bounce phase of the collapse of a 27 M_Sun star and pay special attention to the development of the standing accretion shock instability (SASI) and neutrino-driven convection. To this end, we perform 3D general-relativistic simulations with a three-species neutrino leakage scheme. The leakage scheme captures the essential aspects of neutrino cooling, heating, and lepton number exchange as predicted by radiation-hydrodynamics simulations. The 27 M_Sun progenitor was studied in 2D by Mueller et al., who observed strong growth of the SASI while neutrino-driven convection was suppressed. In our 3D simulations, neutrino-driven convection grows from numerical perturbations imposed by our Cartesian grid. It becomes the dominant instability and leads to large-scale non-oscillatory deformations of the shock front. These will result in strongly aspherical explosions without the need for large-scale SASI shock oscillations. Low-l-mode SASI oscillations are present in our models, but saturate at small amplitudes that decrease with increasing neutrino heating and vigor of convection. Our results, in agreement with simpler 3D Newtonian simulations, suggest that once neutrino-driven convection is started, it is likely to become the dominant instability in 3D. Whether it is the primary instability after bounce will ultimately depend on the physical seed perturbations present in the cores of massive stars. The gravitational wave signal, which we extract and analyze for the first time from 3D general-relativistic models, will serve as an observational probe of the postbounce dynamics and, in combination with neutrinos, may allow us to determine the primary hydrodynamic instability.

  20. SIMPSON: A general simulation program for solid-state NMR spectroscopy

    NASA Astrophysics Data System (ADS)

    Bak, Mads; Rasmussen, Jimmy T.; Nielsen, Niels Chr.

    2011-12-01

    A computer program for fast and accurate numerical simulation of solid-state NMR experiments is described. The program is designed to emulate an NMR spectrometer by letting the user specify high-level NMR concepts such as spin systems, nuclear spin interactions, RF irradiation, free precession, phase cycling, coherence-order filtering, and implicit/explicit acquisition. These elements are implemented using the Tcl scripting language to ensure a minimum of programming overhead and direct interpretation without the need for compilation, while maintaining the flexibility of a full-featured programming language. Basically, there are no intrinsic limitations to the number of spins, types of interactions, sample conditions (static or spinning, powders, uniaxially oriented molecules, single crystals, or solutions), and the complexity or number of spectral dimensions for the pulse sequence. The applicability ranges from simple 1D experiments to advanced multiple-pulse and multiple-dimensional experiments, series of simulations, parameter scans, complex data manipulation/visualization, and iterative fitting of simulated to experimental spectra. A major effort has been devoted to optimizing the computation speed using state-of-the-art algorithms for the time-consuming parts of the calculations implemented in the core of the program using the C programming language. Modification and maintenance of the program are facilitated by releasing the program as open source software (General Public License) currently at http://nmr.imsb.au.dk. The general features of the program are demonstrated by numerical simulations of various aspects for REDOR, rotational resonance, DRAMA, DRAWS, HORROR, C7, TEDOR, POST-C7, CW decoupling, TPPM, F-SLG, SLF, SEMA-CP, PISEMA, RFDR, QCPMG-MAS, and MQ-MAS experiments.

  1. SIMPSON: A General Simulation Program for Solid-State NMR Spectroscopy

    NASA Astrophysics Data System (ADS)

    Bak, Mads; Rasmussen, Jimmy T.; Nielsen, Niels Chr.

    2000-12-01

    A computer program for fast and accurate numerical simulation of solid-state NMR experiments is described. The program is designed to emulate an NMR spectrometer by letting the user specify high-level NMR concepts such as spin systems, nuclear spin interactions, RF irradiation, free precession, phase cycling, coherence-order filtering, and implicit/explicit acquisition. These elements are implemented using the Tcl scripting language to ensure a minimum of programming overhead and direct interpretation without the need for compilation, while maintaining the flexibility of a full-featured programming language. Basically, there are no intrinsic limitations to the number of spins, types of interactions, sample conditions (static or spinning, powders, uniaxially oriented molecules, single crystals, or solutions), and the complexity or number of spectral dimensions for the pulse sequence. The applicability ranges from simple 1D experiments to advanced multiple-pulse and multiple-dimensional experiments, series of simulations, parameter scans, complex data manipulation/visualization, and iterative fitting of simulated to experimental spectra. A major effort has been devoted to optimizing the computation speed using state-of-the-art algorithms for the time-consuming parts of the calculations implemented in the core of the program using the C programming language. Modification and maintenance of the program are facilitated by releasing the program as open source software (General Public License) currently at http://nmr.imsb.au.dk. The general features of the program are demonstrated by numerical simulations of various aspects for REDOR, rotational resonance, DRAMA, DRAWS, HORROR, C7, TEDOR, POST-C7, CW decoupling, TPPM, F-SLG, SLF, SEMA-CP, PISEMA, RFDR, QCPMG-MAS, and MQ-MAS experiments.

  2. Formulation and simulation of the generalized ion viscous stress tensor in magnetized plasmas

    NASA Astrophysics Data System (ADS)

    Addae-Kagyah, Michael K.

    2007-12-01

    Details of the generalized model of the parallel ion viscous stress tensor, pi∥, are presented in this work. Kinetic-based derivation of pi∥, employing a Chapman-Enskog-like (CEL) expansion of the plasma particle distribution function, is part of a broad research effort aimed at incorporating suitable kinetic physics into the physical modeling of tenuous, high-temperature (fusion-grade) plasmas. Often, this goal is achieved through the use of generalized, integral closures in the evolution equations of fluid quantities, which correspond to low-order velocity-space moments of the particle distribution functions. The primary analytical task in the formulation of pi∥ is the derivation of a drift kinetic equation (DKE) from the plasma kinetic equation (via appropriate gyro-averaging and ordering schemes). Next, the time-dependent DKE is solved for the kinetic distortion by reducing it to a system of coupled, linear equations, that results from an expansion in Legendre polynomials, and the correct exploitation of their orthogonality properties. The tensor, pi∥, is calculated in the final step as a second-order velocity-space moment of the kinetic distortion term in the CEL expansion. This is a steady-state version of pi∥, which is valid for arbitrary collision and transit frequencies. The upgraded theory reproduces Braginskii's pi∥ in the regime of high collisionality, and agrees with Chang and Callen's results in the nearly collisionless regime. Subsequently, a time-dependent version of pi∥ (incorporating an exact form of the linearized Coulomb collision operator) is formulated, as an enhancement to the steady-state model. Numerical simulations of three known physical phenomena in plasmas, incorporating finite effects of the steady-state, generalized pi∥, are executed in slab geometry, using the NIMROD simulation code. Specifically, ion acoustic wave propagation and dissipation, stress-induced ion heating, and parallel ion momentum (or flow) flattening

  3. Terahertz spectroscopic polarimetry of generalized anisotropic media composed of Archimedean spiral arrays: Experiments and simulations

    NASA Astrophysics Data System (ADS)

    Aschaffenburg, Daniel J.; Williams, Michael R. C.; Schmuttenmaer, Charles A.

    2016-05-01

    Terahertz time-domain spectroscopic polarimetry has been used to measure the polarization state of all spectral components in a broadband THz pulse upon transmission through generalized anisotropic media consisting of two-dimensional arrays of lithographically defined Archimedean spirals. The technique allows a full determination of the frequency-dependent, complex-valued transmission matrix and eigenpolarizations of the spiral arrays. Measurements were made on a series of spiral array orientations. The frequency-dependent transmission matrix elements as well as the eigenpolarizations were determined, and the eigenpolarizations were found to be elliptically corotating, as expected from their symmetry. Numerical simulations are in quantitative agreement with measured spectra.
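
    Given the measured 2x2 complex transmission (Jones) matrix at one frequency, the eigenpolarizations mentioned above are simply its eigenvectors, as in the sketch below; the example matrix is made up for illustration, not data from the spiral arrays.

        # Eigenpolarizations of a complex 2x2 transmission (Jones) matrix.
        import numpy as np

        T = np.array([[0.8, 0.15j],
                      [-0.15j, 0.8]])               # made-up example matrix

        eigvals, eigvecs = np.linalg.eig(T)
        for val, vec in zip(eigvals, eigvecs.T):
            vec = vec / vec[np.argmax(np.abs(vec))] # normalize phase for readability
            print("transmission eigenvalue", np.round(val, 3),
                  "eigenpolarization", np.round(vec, 3))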

  4. Mars atmospheric dynamics as simulated by the NASA AMES General Circulation Model. II - Transient baroclinic eddies

    NASA Astrophysics Data System (ADS)

    Barnes, J. R.; Pollack, J. B.; Haberle, R. M.; Leovy, C. B.; Zurek, R. W.; Lee, H.; Schaeffer, J.

    1993-02-01

    A large set of experiments performed with the NASA Ames Mars General Circulation Model is analyzed to determine the properties, structure, and dynamics of the simulated transient baroclinic eddies. There is strong transient baroclinic eddy activity in the extratropics of the Northern Hemisphere during the northern autumn, winter, and spring seasons. The eddy activity remains strong for very large dust loadings, though it shifts northward. The eastward propagating eddies are characterized by zonal wavenumbers of 1-4 and periods of about 2-10 days. The properties of the GCM baroclinic eddies in the northern extratropics are compared in detail with analogous properties inferred from Viking Lander meteorology observations.

  5. Reconstruction of bremsstrahlung spectra from attenuation data using generalized simulated annealing.

    PubMed

    Menin, O H; Martinez, A S; Costa, A M

    2016-05-01

    A generalized simulated annealing algorithm, combined with a suitable smoothing regularization function, is used to solve the inverse problem of X-ray spectrum reconstruction from attenuation data. The approach is to set the initial acceptance and visitation temperatures and to standardize the terms of the objective function so as to automate the algorithm and accommodate different spectral ranges. Experiments with both numerical and measured attenuation data are presented. Results show that the algorithm reconstructs spectrum shapes accurately. It should be noted that, in this algorithm, the regularization function was formulated to guarantee a smooth spectrum; thus, the presented technique does not apply to X-ray spectra where characteristic radiation is present.
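
    A plain Metropolis-style annealing sketch of the reconstruction idea is given below: minimize the attenuation misfit plus a second-difference smoothness penalty over the spectrum bins. The paper uses generalized (Tsallis-type) simulated annealing with tuned acceptance and visitation temperatures, which is not reproduced here, and the attenuation coefficients, thicknesses, and synthetic data are placeholders.

        # Simulated annealing for spectrum reconstruction from attenuation data with a
        # smoothing regularization term.  All physical inputs are synthetic stand-ins.
        import numpy as np

        rng = np.random.default_rng(0)
        n_bins, thicknesses = 30, np.linspace(0.0, 2.0, 15)          # energy bins, absorber thicknesses
        mu = np.linspace(1.5, 0.3, n_bins)                            # attenuation coefficients (made up)
        s_true = np.exp(-0.5 * ((np.arange(n_bins) - 18) / 5.0) ** 2) # synthetic "true" spectrum
        s_true /= s_true.sum()
        A = np.exp(-np.outer(thicknesses, mu))                        # forward model matrix

        def transmission(s):
            return A @ s / s.sum()

        data = transmission(s_true)                                   # synthetic "measurements"

        def objective(s, lam=1e-3):
            misfit = np.sum((transmission(s) - data) ** 2)
            smooth = np.sum(np.diff(s, 2) ** 2)                       # second-difference penalty
            return misfit + lam * smooth

        s = np.full(n_bins, 1.0 / n_bins)                             # flat initial guess
        T, cost = 1e-3, objective(s)
        for step in range(50_000):
            trial = s.copy()
            i = rng.integers(n_bins)
            trial[i] = max(0.0, trial[i] + rng.normal(scale=0.01))    # perturb one bin, keep >= 0
            new_cost = objective(trial)
            if new_cost < cost or rng.random() < np.exp((cost - new_cost) / T):
                s, cost = trial, new_cost
            T *= 0.9999                                               # geometric cooling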

  6. Parallel implementation of the FETI-DPEM algorithm for general 3D EM simulations

    NASA Astrophysics Data System (ADS)

    Li, Yu-Jia; Jin, Jian-Ming

    2009-05-01

    A parallel implementation of the electromagnetic dual-primal finite element tearing and interconnecting algorithm (FETI-DPEM) is designed for general three-dimensional (3D) electromagnetic large-scale simulations. As a domain decomposition implementation of the finite element method, the FETI-DPEM algorithm provides fully decoupled subdomain problems and an excellent numerical scalability, and thus is well suited for parallel computation. The parallel implementation of the FETI-DPEM algorithm on a distributed-memory system using the message passing interface (MPI) is discussed in detail along with a few practical guidelines obtained from numerical experiments. Numerical examples are provided to demonstrate the efficiency of the parallel implementation.

  7. Generalization of vapor pressure lowering effects in an existing geothermal simulator

    SciTech Connect

    Shook, G.M.

    1993-06-01

    Thermodynamic properties of pore water are shown to be different from those of bulk water because of interfacial forces between the aqueous and solid phases. This "vapor-pressure lowering" (VPL) effect is described through Kelvin's equation, which relates VPL to properties of the liquid phase. An algorithm that accounts for VPL had previously been implemented in the geothermal simulator TETRAD. This algorithm applies to a narrow range of reservoir properties, and in some cases leads to inconsistencies. This report presents a generalization of the VPL algorithm which removes many of its limitations. The governing equations for the generalization are presented, assumptions and limitations of the method are discussed, and the modifications are validated.
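
    For reference, one common textbook statement of Kelvin's equation expresses the vapor-pressure lowering factor in terms of capillary pressure, as sketched below; this generic form and the property values are illustrative and are not the specific VPL algorithm or property models implemented in TETRAD.

        # Kelvin's equation (one common form): P_v / P_sat = exp(Pc * Mw / (rho_l * R * T)),
        # with capillary pressure Pc <= 0 for a wetting liquid, so P_v <= P_sat.
        import math

        R = 8.314            # J/(mol K)
        M_W = 0.018015       # kg/mol, water

        def vapor_pressure_lowering_factor(p_cap, temp_k, rho_liquid=958.0):
            """Ratio P_v / P_sat from Kelvin's equation (p_cap in Pa, <= 0 for suction)."""
            return math.exp(p_cap * M_W / (rho_liquid * R * temp_k))

        # e.g. 10 MPa of suction at 100 C lowers the vapor pressure by about 6%
        print(vapor_pressure_lowering_factor(-1.0e7, 373.15))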

  8. General Relativistic Simulations of Magnetized Plasmas around Merging Supermassive Black Holes

    NASA Astrophysics Data System (ADS)

    Giacomazzo, Bruno; Baker, John G.; Miller, M. Coleman; Reynolds, Christopher S.; van Meter, James R.

    2012-06-01

    Coalescing supermassive black hole binaries are produced by the mergers of galaxies and are the most powerful sources of gravitational waves accessible to space-based gravitational observatories. Some such mergers may occur in the presence of matter and magnetic fields and hence generate an electromagnetic counterpart. In this Letter, we present the first general relativistic simulations of magnetized plasma around merging supermassive black holes using the general relativistic magnetohydrodynamic code Whisky. By considering different magnetic field strengths, going from non-magnetically dominated to magnetically dominated regimes, we explore how magnetic fields affect the dynamics of the plasma and the possible emission of electromagnetic signals. In particular, we observe a total amplification of the magnetic field of ~2 orders of magnitude, which is driven by the accretion onto the binary and that leads to much stronger electromagnetic signals, more than a factor of 10^4 larger than comparable calculations done in the force-free regime where such amplifications are not possible.

  9. Development of generalized mapping tools to improve implementation of data driven computer simulations (04-ERD-083)

    SciTech Connect

    Ramirez, A; Pasyanos, M; Franz, G A

    2004-09-17

    The Stochastic Engine (SE) is a data driven computer simulation tool for predicting the characteristics of complex systems. The SE integrates accurate simulators with the Monte Carlo Markov Chain (MCMC) approach (a stochastic inverse technique) to identify alternative models that are consistent with available data and ranks these alternatives according to their probabilities. Implementation of the SE is currently cumbersome owing to the need to customize the pre-processing and processing steps that are required for a specific application. This project widens the applicability of the Stochastic Engine by generalizing some aspects of the method (i.e. model-to-data transformation types, configuration, model representation). We have generalized several of the transformations that are necessary to match the observations to proposed models. These transformations are sufficiently general not to pertain to any single application. This approach provides a framework that increases the efficiency of the SE implementation. The overall goal is to reduce response time and make the approach as "plug-and-play" as possible, and will result in the rapid accumulation of new data types for a host of both earth science and non-earth science problems. When adapting the SE approach to a specific application, there are various pre-processing and processing steps that are typically needed to run a specific problem. Many of these steps are common to a wide variety of specific applications. Here we list and describe several data transformations that are common to a variety of subsurface inverse problems. A subset of these steps has been developed in a generalized form such that they could be used with little or no modifications in a wide variety of specific applications. This work was funded by the LDRD Program (tracking number 04-ERD-083).
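
    The MCMC machinery described above can be sketched generically as below: propose a model, run it through a forward map to predicted data, and accept or reject with a Metropolis test on the data misfit. The forward map, noise level, and data here are placeholders, not the Stochastic Engine's actual simulators or transformations.

        # Generic Metropolis (MCMC) loop around a forward simulator.
        import numpy as np

        rng = np.random.default_rng(0)
        observed = np.array([1.0, 0.4, 0.2])

        def forward(model):
            # stand-in "simulator": exponential decay sampled at three times
            return model[0] * np.exp(-model[1] * np.array([0.0, 1.0, 2.0]))

        def log_likelihood(model, sigma=0.05):
            return -0.5 * np.sum((forward(model) - observed) ** 2) / sigma ** 2

        model = np.array([0.5, 0.5])
        logp = log_likelihood(model)
        samples = []
        for _ in range(20_000):
            proposal = model + rng.normal(scale=0.05, size=2)
            logp_new = log_likelihood(proposal)
            if rng.random() < np.exp(min(0.0, logp_new - logp)):   # Metropolis acceptance
                model, logp = proposal, logp_new
            samples.append(model.copy())

        # alternative models consistent with the data cluster around this mean
        print(np.mean(samples[5000:], axis=0))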

  10. Tropospheric ozone simulation with a chemistry-general circulation model: Influence of higher hydrocarbon chemistry

    NASA Astrophysics Data System (ADS)

    Roelofs, Geert-Jan; Lelieveld, Jos

    2000-09-01

    We present an improved version of the global chemistry-general circulation model of Roelofs and Lelieveld [1997]. The major model improvement is the representation of higher hydrocarbon chemistry, implemented by means of the Carbon Bond Mechanism 4 (CBM-4). Simulated tropospheric ozone concentrations at remote locations, which agreed well with observations in the previous model version, are not affected much by the chemistry of higher hydrocarbons. However, ozone formation in the polluted boundary layer is significantly enhanced, resulting in a more realistic simulation of surface ozone in regions such as North America, Europe, and Southeast Asia. Our model simulates a net global tropospheric ozone production of 73 Tg yr^{-1} when higher hydrocarbon chemistry is considered, and -36 Tg yr^{-1} without higher hydrocarbon chemistry. The simulated seasonality of surface CO agrees well with observations. However, the southern hemispheric maximum for O3 and CO associated with biomass burning emissions is delayed by 1 month compared to the observations, which demonstrates the need for a better representation of biomass burning emissions. Simulated peroxyacetyl nitrate (PAN) concentrations agree well with observed values, although the variability is underestimated. OH decreases strongly in the continental boundary layer due to its reaction with higher hydrocarbons. However, this is almost compensated by an increase of OH over oceans in the lower half of the troposphere. Consideration of higher hydrocarbon chemistry decreases the global annual tropospheric OH concentration by about 8% compared to a background tropospheric chemistry scheme. Further, the radiative forcing by anthropogenically increased tropospheric ozone in the Northern Hemisphere increases, especially in July. The forcing also increases in the Southern Hemisphere where biomass burning emissions produce tropospheric ozone, except between December and June, that is, outside the biomass burning season, when ozone

  11. A Novel Approach for Modeling Chemical Reaction in Generalized Fluid System Simulation Program

    NASA Technical Reports Server (NTRS)

    Sozen, Mehmet; Majumdar, Alok

    2002-01-01

    The Generalized Fluid System Simulation Program (GFSSP) is a computer code developed at NASA Marshall Space Flight Center for analyzing steady state and transient flow rates, pressures, temperatures, and concentrations in a complex flow network. The code, which performs system level simulation, can handle compressible and incompressible flows as well as phase change and mixture thermodynamics. Thermodynamic and thermophysical property programs, GASP, WASP and GASPAK, provide the necessary data for fluids such as helium, methane, neon, nitrogen, carbon monoxide, oxygen, argon, carbon dioxide, fluorine, hydrogen, water, parahydrogen, isobutane, butane, deuterium, ethane, ethylene, hydrogen sulfide, krypton, propane, xenon, several refrigerants, nitrogen trifluoride and ammonia. The program, which was developed out of a need for an easy-to-use system-level simulation tool for complex flow networks, has been used for the following purposes, to name a few: Space Shuttle Main Engine (SSME) High Pressure Oxidizer Turbopump Secondary Flow Circuits, Axial Thrust Balance of the Fastrac Engine Turbopump, Pressurized Propellant Feed System for the Propulsion Test Article at Stennis Space Center, X-34 Main Propulsion System, X-33 Reaction Control System and Thermal Protection System, and International Space Station Environmental Control and Life Support System design. There has been an increasing demand for implementing a combustion simulation capability into GFSSP in order to increase its system level simulation capability of a liquid rocket propulsion system starting from the propellant tanks up to the thruster nozzle for spacecraft as well as launch vehicles. The present work was undertaken for addressing this need. The chemical equilibrium equations derived from the second law of thermodynamics and the energy conservation equation derived from the first law of thermodynamics are solved simultaneously by a Newton-Raphson method. The numerical scheme was implemented as a User
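
    The simultaneous Newton-Raphson solution mentioned at the end of this record can be illustrated with a generic driver that builds a finite-difference Jacobian and applies Newton updates, as below; the two-equation residual is a toy stand-in, not GFSSP's chemical equilibrium and energy equations.

        # Generic Newton-Raphson iteration with a finite-difference Jacobian.
        import numpy as np

        def newton_raphson(residual, x0, tol=1e-10, max_iter=50, eps=1e-7):
            x = np.asarray(x0, dtype=float)
            for _ in range(max_iter):
                r = residual(x)
                if np.linalg.norm(r) < tol:
                    break
                jac = np.empty((len(r), len(x)))
                for j in range(len(x)):                      # finite-difference Jacobian
                    xp = x.copy()
                    xp[j] += eps
                    jac[:, j] = (residual(xp) - r) / eps
                x = x - np.linalg.solve(jac, r)              # Newton update
            return x

        def toy_residual(x):
            # stand-in for coupled "equilibrium" and "energy" equations
            return np.array([x[0] ** 2 + x[1] - 3.0,
                             x[0] + np.exp(x[1]) - 4.0])

        print(newton_raphson(toy_residual, [1.0, 1.0]))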

  12. Simulation of reactive nanolaminates using reduced models: III. Ingredients for a general multidimensional formulation

    SciTech Connect

    Salloum, Maher; Knio, Omar M.

    2010-06-15

    A transient multidimensional reduced model is constructed for the simulation of reaction fronts in Ni/Al multilayers. The formulation is based on the generalization of earlier methodologies developed for quasi-1D axial and normal propagation, specifically by adapting the reduced formalism for atomic mixing and heat release. This approach enables us to focus on resolving the thermal front structure, whose evolution is governed by thermal diffusion and heat release. A mixed integration scheme is used for this purpose, combining an extended-stability, Runge-Kutta-Chebychev (RKC) integration of the diffusion term with exact treatment of the chemical source term. Thus, a detailed description of atomic mixing within individual layers is avoided, which enables transient modeling of the reduced equations of motion in multiple dimensions. Two-dimensional simulations are first conducted of front propagation in composites combining two bilayer periods. Results are compared with the experimental measurements of Knepper et al., which reveal that the reaction velocity can depend significantly on layering frequency. The comparison indicates that, using a concentration-dependent conductivity model, the transient 2D computations can reasonably reproduce the experimental behavior. Additional tests are performed based on 3D computations of surface initiated reactions. Comparison of computed predictions with laser ignition measurements indicates that the computations provide reasonable estimates of ignition thresholds. A detailed discussion is finally provided of potential generalizations and associated hurdles. (author)

  13. Variable-resolution frameworks for the simulation of tropical cyclones in global atmospheric general circulation models

    NASA Astrophysics Data System (ADS)

    Zarzycki, Colin

    The ability of atmospheric General Circulation Models (GCMs) to resolve tropical cyclones in the climate system has traditionally been difficult. The challenges include adequately capturing storms which are small in size relative to model grids and the fact that key thermodynamic processes require a significant level of parameterization. At traditional GCM grid spacings of 50-300 km tropical cyclones are severely under-resolved, if not completely unresolved. This thesis explores a variable-resolution global model approach that allows for high spatial resolutions in areas of interest, such as low-latitude ocean basins where tropical cyclogenesis occurs. Such GCM designs with multi-resolution meshes serve to bridge the gap between globally-uniform grids and limited area models and have the potential to become a future tool for regional climate assessments. A statically-nested, variable-resolution option has recently been introduced into the Department of Energy/National Center for Atmospheric Research (DoE/NCAR) Community Atmosphere Model's (CAM) Spectral Element (SE) dynamical core. Using an idealized tropical cyclone test, variable-resolution meshes are shown to significantly lessen computational requirements in regional GCM studies. Furthermore, the tropical cyclone simulations are free of spurious numerical errors at the resolution interfaces. Utilizing aquaplanet simulations as an intermediate test between idealized simulations and fully-coupled climate model runs, climate statistics within refined patches are shown to be well-matched to globally-uniform simulations of the same grid spacing. Facets of the CAM version 4 (CAM4) subgrid physical parameterizations are likely too scale sensitive for variable-resolution applications, but the newer CAM5 package is vastly improved in performance at multiple grid spacings. Multi-decadal simulations following 'Atmospheric Model Intercomparison Project' protocols have been conducted with variable-resolution grids. Climate

  14. Simulating botulinum neurotoxin with constant pH molecular dynamics in Generalized Born implicit solvent

    NASA Astrophysics Data System (ADS)

    Chen, Yongzhi; Chen, Xin; Deng, Yuefan

    2007-07-01

    A new method was proposed by Mongan et al. for constant pH molecular dynamics simulation and was implemented in AMBER 8 package. Protonation states are modeled with different charge sets, and titrating residues are sampled from a Boltzmann distribution of protonation states. The simulation periodically adopts Monte Carlo sampling based on Generalized Born (GB) derived energies. However, when this approach was applied to a bio-toxin, Botulinum Neurotoxin Type A (BoNT/A) at pH 4.4, 4.7, 5.0, 6.8 and 7.2, the pK predictions yielded by the method were inconsistent with the experimental values. The systems being simulated were divergent. Furthermore, the system behaviors in a very weak acidic solution (pH 6.8) and in a very weak basic solution (pH 7.2) were significantly different from the neutral case (pH 7.0). Hence, we speculate this method may require further study for modeling large biomolecule.

  15. Martian atmospheric gravity waves simulated by a high-resolution general circulation model

    NASA Astrophysics Data System (ADS)

    Kuroda, Takeshi; Yiǧit, Erdal; Medvedev, Alexander S.; Hartogh, Paul

    2016-07-01

    Gravity waves (GWs) significantly affect temperature and wind fields in the Martian middle and upper atmosphere. They are also one of the observational targets of the MAVEN mission. We report on the first simulations with a high-resolution general circulation model (GCM) and present a global distributions of small-scale GWs in the Martian atmosphere. The simulated GW-induced temperature variances are in a good agreement with available radio occultation data in the lower atmosphere between 10 and 30 km. For the northern winter solstice, the model reveals a latitudinal asymmetry with stronger wave generation in the winter hemisphere and two distinctive sources of GWs: mountainous regions and the meandering winter polar jet. Orographic GWs are filtered upon propagating upward, and the mesosphere is primarily dominated by harmonics with faster horizontal phase velocities. Wave fluxes are directed mainly against the local wind. GW dissipation in the upper mesosphere generates a body force per unit mass of tens of m s^{-1} per Martian solar day (sol^{-1}), which tends to close the simulated jets. The results represent a realistic surrogate for missing observations, which can be used for constraining GW parameterizations and validating GCMs.

  16. Molecular dynamics simulation on generalized stacking fault energies of FCC metals under preloading stress

    NASA Astrophysics Data System (ADS)

    Zhang, Liang; Cheng, Lü; Kiet, Tieu; Zhao, Xing; Pei, Lin-Qing; Guillaume, Michal

    2015-08-01

    Molecular dynamics (MD) simulations are performed to investigate the effects of stress on generalized stacking fault (GSF) energy of three fcc metals (Cu, Al, and Ni). The simulation model is deformed by uniaxial tension or compression in each of [111], [11-2], and [1-10] directions, respectively, before shifting the lattice to calculate the GSF curve. Simulation results show that the values of unstable stacking fault energy (γusf), stable stacking fault energy (γsf), and unstable twin fault energy (γutf) of the three elements can change with the preloaded tensile or compressive stress in different directions. The ratio of γsf/γusf, which is related to the energy barrier for full dislocation nucleation, and the ratio of γutf/γusf, which is related to the energy barrier for twinning formation are plotted each as a function of the preloading stress. The results of this study reveal that the stress state can change the energy barrier of defect nucleation in the crystal lattice, and thereby can play an important role in the deformation mechanism of nanocrystalline material. Project supported by Australia Research Council Discovery Projects (Grant No. DP130103973). L. Zhang, X. Zhao and L. Q. Pei were financially supported by the China Scholarship Council (CSC).

  17. Global numerical simulations of the rise of vortex-mediated pulsar glitches in full general relativity

    NASA Astrophysics Data System (ADS)

    Sourie, A.; Chamel, N.; Novak, J.; Oertel, M.

    2017-02-01

    In this paper, we study in detail the role of general relativity on the global dynamics of giant pulsar glitches as exemplified by Vela. For this purpose, we carry out numerical simulations of the spin up triggered by the sudden unpinning of superfluid vortices. In particular, we compute the exchange of angular momentum between the core neutron superfluid and the rest of the star within a two-fluid model including both (non-dissipative) entrainment effects and (dissipative) mutual friction forces. Our simulations are based on a quasi-stationary approach using realistic equations of state (EoSs). We show that the evolution of the angular velocities of both fluids can be accurately described by an exponential law. The associated characteristic rise time τr, which can be precisely computed from stationary configurations only, has a form similar to that obtained in the Newtonian limit. However, general relativity changes the structure of the star and leads to additional couplings between the fluids due to frame-dragging effects. As a consequence, general relativity can have a large impact on the actual value of τr: the errors incurred by using Newtonian gravity are thus found to be as large as ˜40 per cent for the models considered. Values of the rise time are calculated for Vela and compared with current observational limits. Finally, we study the amount of gravitational waves emitted during a glitch. Simple expressions are obtained for the corresponding characteristic amplitudes and frequencies. The detectability of glitches through gravitational wave observatories is briefly discussed.

  18. A comparison between general circulation model simulations using two sea surface temperature datasets for January 1979

    NASA Technical Reports Server (NTRS)

    Ose, Tomoaki; Mechoso, Carlos; Halpern, David

    1994-01-01

    Simulations with the UCLA atmospheric general circulation model (AGCM) using two different global sea surface temperature (SST) datasets for January 1979 are compared. One of these datasets is based on Comprehensive Ocean-Atmosphere Data Set (COADS) (SSTs) at locations where there are ship reports, and climatology elsewhere; the other is derived from measurements by instruments onboard NOAA satellites. In the former dataset (COADS SST), data are concentrated along shipping routes in the Northern Hemisphere; in the latter dataset High Resolution Infrared Sounder (HIRS SST), data cover the global domain. Ensembles of five 30-day mean fields are obtained from integrations performed in the perpetual-January mode. The results are presented as anomalies, that is, departures of each ensemble mean from that produced in a control simulation with climatological SSTs. Large differences are found between the anomalies obtained using COADS and HIRS SSTs, even in the Northern Hemisphere where the datasets are most similar to each other. The internal variability of the circulation in the control simulation and the simulated atmospheric response to anomalous forcings appear to be linked in that the pattern of geopotential height anomalies obtained using COADS SSTs resembles the first empirical orthogonal function (EOF 1) in the control simulation. The corresponding pattern obtained using HIRS SSTs is substantially different and somewhat resembles EOF 2 in the sector from central North America to central Asia. To gain insight into the reasons for these results, three additional simulations are carried out with SST anomalies confined to regions where COADS SSTs are substantially warmer than HIRS SSTs. The regions correspond to warm pools in the northwest and northeast Pacific, and the northwest Atlantic. These warm pools tend to produce positive geopotential height anomalies in the northeastern part of the corresponding oceans. Both warm pools in the Pacific produce large

  19. Assessment of the performance of general practitioners by the use of standardized (simulated) patients.

    PubMed Central

    Rethans, J J; Sturmans, F; Drop, R; van der Vleuten, C

    1991-01-01

    A study was undertaken whereby a set of standardized (simulated) patients visited general practitioners without being detected, in a health care system where doctors had fixed patient lists. Thirty nine general practitioners were each visited during normal surgery hours by four standardized patients who were designed to be indistinguishable from real patients. The objective of the study was to see whether the actual performance of general practitioners, as assessed by standardized patients, met predetermined consensus standards of care for actual practice. The patients presented standardized accounts of headache, diarrhoea, shoulder pain and diabetes. The mean group scores of the doctors on the predefined standards of care for the different complaints ranged from 33 to 68%. The results show that standardized patients may be the method of choice in the assessment of the quality of actual care of doctors. It is hypothesized that the substandard scores of the doctors do not reflect inadequate competence, but are a result of the difference between competence and performance. PMID:2031767

  20. A simulation study of control and display requirements for zero-experience general aviation pilots

    NASA Technical Reports Server (NTRS)

    Stewart, Eric C.

    1993-01-01

    The purpose of this simulation study was to define the basic human factor requirements for operating an airplane in all weather conditions. The basic human factors requirements are defined as those for an operator who is a complete novice for airplane operations but who is assumed to have automobile driving experience. These operators thus have had no piloting experience or training of any kind. The human factor requirements are developed for a practical task which includes all of the basic maneuvers required to go from one airport to another airport in limited visibility conditions. The task was quite demanding including following a precise path with climbing and descending turns while simultaneously changing airspeed. The ultimate goal of this research is to increase the utility of general aviation airplanes - that is, to make them a practical mode of transportation for a much larger segment of the general population. This can be accomplished by reducing the training and proficiency requirements of pilots while improving the level of safety. It is believed that advanced technologies such as fly-by-wire (or light), and head-up pictorial displays can be of much greater benefit to the general aviation pilot than to the full-time, professional pilot.

  1. A generalized prestressing algorithm for finite element simulations of preloaded geometries with application to the aorta.

    PubMed

    Weisbecker, Hannah; Pierce, David M; Holzapfel, Gerhard A

    2014-09-01

    Finite element models reconstructed from medical imaging data, for example, computed tomography or MRI scans, generally represent geometries under in vivo load. Classical finite element approaches start from an unloaded reference configuration. We present a generalized prestressing algorithm based on a concept introduced by Gee et al. (Int. J. Num. Meth. Biomed. Eng. 26:52-72, 2012) in which an incremental update of the displacement field in the classical approach is replaced by an incremental update of the deformation gradient field. Our generalized algorithm can be implemented in existing finite element codes with relatively low implementation effort on the element level and is suitable for material models formulated in the current or initial configurations. Applicable to any finite element simulations started from preloaded geometries, we demonstrate the algorithm and its convergence properties on an academic example and on a segment of a thoracic aorta meshed from MRI data. Furthermore, we present an example to discuss the influence of neglecting prestresses in geometries obtained from medical images, a topic on which conflicting statements are found in the literature.

  2. Relations between winter precipitation and atmospheric circulation simulated by the Geophysical Fluid Dynamics Laboratory general circulation model

    USGS Publications Warehouse

    McCabe, G.J.; Dettinger, M.D.

    1995-01-01

    General circulation model (GCM) simulations of atmospheric circulation are more reliable than GCM simulations of temperature and precipitation. In this study, temporal correlations between 700 hPa height anomalies simulated winter precipitation at eight locations in the conterminous United States are compared with corresponding correlations in observations. The objectives are to 1) characterize the relations between atmospheric circulation and winter precipitation simulated by the GFDL, GCM for selected locations in the conterminous USA, ii) determine whether these relations are similar to those found in observations of the actual climate system, and iii) determine if GFDL-simulated precipitation is forced by the same circulation patterns as in the real atmosphere. -from Authors

  3. Venus atmosphere simulated by a high-resolution general circulation model

    NASA Astrophysics Data System (ADS)

    Sugimoto, Norihiko

    2016-07-01

    An atmospheric general circulation model (AGCM) for Venus on the basis of AFES (AGCM For the Earth Simulator) have been developed (e.g., Sugimoto et al., 2014a) and a very high-resolution simulation is performed. The highest resolution of the model is T319L120; 960 times 480 horizontal grids (grid intervals are about 40 km) with 120 vertical layers (layer intervals are about 1 km). In the model, the atmosphere is dry and forced by the solar heating with the diurnal and semi-diurnal components. The infrared radiative process is simplified by adopting Newtonian cooling approximation. The temperature is relaxed to a prescribed horizontally uniform temperature distribution, in which a layer with almost neutral static stability observed in the Venus atmosphere presents. A fast zonal wind in a solid-body rotation is given as the initial state. Starting from this idealized superrotation, the model atmosphere reaches a quasi-equilibrium state within 1 Earth year and this state is stably maintained for more than 10 Earth years. The zonal-mean zonal flow with weak midlatitude jets has almost constant velocity of 120 m/s in latitudes between 45°S and 45°N at the cloud top levels, which agrees very well with observations. In the cloud layer, baroclinic waves develop continuously at midlatitudes and generate Rossby-type waves at the cloud top (Sugimoto et al., 2014b). At the polar region, warm polar vortex zonally surrounded by a cold latitude band (cold collar) is well reproduced (Ando et al., 2016). As for horizontal kinetic energy spectra, divergent component is broadly (k>10) larger than rotational component compared with that on Earth (Kashimura et al., in preparation). Finally, recent results for thermal tides and small-scale waves will be shown in the presentation. Sugimoto, N. et al. (2014a), Baroclinic modes in the Venus atmosphere simulated by GCM, Journal of Geophysical Research: Planets, Vol. 119, p1950-1968. Sugimoto, N. et al. (2014b), Waves in a Venus general

  4. Development and Implementation of Non-Newtonian Rheology Into the Generalized Fluid System Simulation Program (GFSSP)

    NASA Technical Reports Server (NTRS)

    DiSalvo, Roberto; Deaconu, Stelu; Majumdar, Alok

    2006-01-01

    One of the goals of this program was to develop the experimental and analytical/computational tools required to predict the flow of non-Newtonian fluids through the various system components of a propulsion system: pipes, valves, pumps etc. To achieve this goal we selected to augment the capabilities of NASA's Generalized Fluid System Simulation Program (GFSSP) software. GFSSP is a general-purpose computer program designed to calculate steady state and transient pressure and flow distributions in a complex fluid network. While the current version of the GFSSP code is able to handle various systems components the implicit assumption in the code is that the fluids in the system are Newtonian. To extend the capability of the code to non-Newtonian fluids, such as silica gelled fuels and oxidizers, modifications to the momentum equations of the code have been performed. We have successfully implemented in GFSSP flow equations for fluids with power law behavior. The implementation of the power law fluid behavior into the GFSSP code depends on knowledge of the two fluid coefficients, n and K. The determination of these parameters for the silica gels used in this program was performed experimentally. The n and K parameters for silica water gels were determined experimentally at CFDRC's Special Projects Laboratory, with a constant shear rate capillary viscometer. Batches of 8:1 (by weight) water-silica gel were mixed using CFDRC s 10-gallon gelled propellant mixer. Prior to testing the gel was allowed to rest in the rheometer tank for at least twelve hours to ensure that the delicate structure of the gel had sufficient time to reform. During the tests silica gel was pressure fed and discharged through stainless steel pipes ranging from 1", to 36", in length and three diameters; 0.0237", 0.032", and 0.047". The data collected in these tests included pressure at tube entrance and volumetric flowrate. From these data the uncorrected shear rate, shear stress, residence time

  5. Finite-difference simulation and visualization of elastodynamics in time-evolving generalized curvilinear coordinates

    NASA Technical Reports Server (NTRS)

    Kaul, Upender K. (Inventor)

    2009-01-01

    Modeling and simulation of free and forced structural vibrations is essential to an overall structural health monitoring capability. In the various embodiments, a first principles finite-difference approach is adopted in modeling a structural subsystem such as a mechanical gear by solving elastodynamic equations in generalized curvilinear coordinates. Such a capability to generate a dynamic structural response is widely applicable in a variety of structural health monitoring systems. This capability (1) will lead to an understanding of the dynamic behavior of a structural system and hence its improved design, (2) will generate a sufficiently large space of normal and damage solutions that can be used by machine learning algorithms to detect anomalous system behavior and achieve a system design optimization and (3) will lead to an optimal sensor placement strategy, based on the identification of local stress maxima all over the domain.

  6. Simulating incompressible flow on moving meshfree grids using General Finite Differences (GFD)

    NASA Astrophysics Data System (ADS)

    Vasyliv, Yaroslav; Alexeev, Alexander

    2016-11-01

    We simulate incompressible flow around an oscillating cylinder at different Reynolds numbers using General Finite Differences (GFD) on a meshfree grid. We evolve the meshfree grid by treating each grid node as a particle. To compute velocities and accelerations, we consider the particles at a particular instance as Eulerian observation points. The incompressible Navier-Stokes equations are directly discretized using GFD with boundary conditions enforced using a sharp interface treatment. Cloud sizes are set such that the local approximations use only 16 neighbors. To enforce incompressibility, we apply a semi-implicit approximate projection method. To prevent overlapping particles and formation of voids in the grid, we propose a particle regularization scheme based on a local minimization principle. We validate the GFD results for an oscillating cylinder against the lattice Boltzmann method and find good agreement. Financial support provided by National Science Foundation (NSF) Graduate Research Fellowship, Grant No. DGE-1148903.

  7. Examining the Accuracy of Astrophysical Disk Simulations with a Generalized Hydrodynamical Test Problem

    NASA Astrophysics Data System (ADS)

    Raskin, Cody; Owen, J. Michael

    2016-11-01

    We discuss a generalization of the classic Keplerian disk test problem allowing for both pressure and rotational support, as a method of testing astrophysical codes incorporating both gravitation and hydrodynamics. We argue for the inclusion of pressure in rotating disk simulations on the grounds that realistic, astrophysical disks exhibit non-negligible pressure support. We then apply this test problem to examine the performance of various smoothed particle hydrodynamics (SPH) methods incorporating a number of improvements proposed over the years to address problems noted in modeling the classical gravitation-only Keplerian disk. We also apply this test to a newly developed extension of SPH based on reproducing kernels called CRKSPH. Counterintuitively, we find that pressure support worsens the performance of traditional SPH on this problem, causing unphysical collapse away from the steady-state disk solution even more rapidly than the purely gravitational problem, whereas CRKSPH greatly reduces this error.

  8. Real time simulation of nonlinear generalized predictive control for wind energy conversion system with nonlinear observer.

    PubMed

    Ouari, Kamel; Rekioua, Toufik; Ouhrouche, Mohand

    2014-01-01

    In order to make a wind power generation truly cost-effective and reliable, an advanced control techniques must be used. In this paper, we develop a new control strategy, using nonlinear generalized predictive control (NGPC) approach, for DFIG-based wind turbine. The proposed control law is based on two points: NGPC-based torque-current control loop generating the rotor reference voltage and NGPC-based speed control loop that provides the torque reference. In order to enhance the robustness of the controller, a disturbance observer is designed to estimate the aerodynamic torque which is considered as an unknown perturbation. Finally, a real-time simulation is carried out to illustrate the performance of the proposed controller.

  9. Strong scaling of general-purpose molecular dynamics simulations on GPUs

    NASA Astrophysics Data System (ADS)

    Glaser, Jens; Nguyen, Trung Dac; Anderson, Joshua A.; Lui, Pak; Spiga, Filippo; Millan, Jaime A.; Morse, David C.; Glotzer, Sharon C.

    2015-07-01

    We describe a highly optimized implementation of MPI domain decomposition in a GPU-enabled, general-purpose molecular dynamics code, HOOMD-blue (Anderson and Glotzer, 2013). Our approach is inspired by a traditional CPU-based code, LAMMPS (Plimpton, 1995), but is implemented within a code that was designed for execution on GPUs from the start (Anderson et al., 2008). The software supports short-ranged pair force and bond force fields and achieves optimal GPU performance using an autotuning algorithm. We are able to demonstrate equivalent or superior scaling on up to 3375 GPUs in Lennard-Jones and dissipative particle dynamics (DPD) simulations of up to 108 million particles. GPUDirect RDMA capabilities in recent GPU generations provide better performance in full double precision calculations. For a representative polymer physics application, HOOMD-blue 1.0 provides an effective GPU vs. CPU node speed-up of 12.5 ×.

  10. Intraseasonal eddies in the Sulawesi Sea simulated in an ocean general circulation model

    NASA Astrophysics Data System (ADS)

    Masumoto, Y.; Kagimoto, T.; Yoshida, M.; Fukuda, M.; Hirose, N.; Yamagata, T.

    The intraseasonal variability associated with mesoscale eddies in the Sulawesi Sea simulated in a high resolution ocean general circulation model is described in detail. The cyclonic eddies, with a diameter of about 400 km, are generated at the entrance of the Sulawesi Sea between the Mindanao and the Halmahera Islands with 40 days interval. They are associated with a high speed (> 20 cm/s) down to 1000 m level. The anticlockwise circulation in the Sulawesi Sea, reported so far in both models and observations, may be a long time-averaged image of the above energetic eddies. The intraseasonal eddies significantly affect the volume transport through passages in the northern part of the Indonesian archipelago. The intraseasonal transport variation, however, is highly damped within the Indonesian seas in the present model.

  11. General Relativistic Simulations of Low-Mass Magnetized Binary Neutron Star Mergers

    NASA Astrophysics Data System (ADS)

    Giacomazzo, Bruno

    2017-01-01

    We will present general relativistic magnetohydrodynamic (GRMHD) simulations of binary neutron star (BNS) systems that produce long-lived neutron stars (NSs) after merger. While the standard scenario for short gamma-ray bursts (SGRBs) requires the formation after merger of a spinning black hole surrounded by an accretion disk, other theoretical models, such as the time-reversal scenario, predict the formation of a long-lived magnetar. The formation of a long-lived magnetar could in particular explain the X-ray plateaus that have been observed in some SGRBs. Moreover, observations of NSs with masses of 2 solar masses indicate that the equation of state of NS matter should support masses larger than that. Therefore a significant fraction of BNS mergers will produce long-lived NSs. This has important consequences both on the emission of gravitational wave signals and on their electromagnetic counterparts. We will discuss GRMHD simulations of ``low-mass'' magnetized BNS systems with different equations of state and mass ratios. We will describe the properties of their post-merger remnants and of their gravitational and electromagnetic emission.

  12. Secondary-structure preferences of force fields for proteins evaluated by generalized-ensemble simulations

    NASA Astrophysics Data System (ADS)

    Yoda, Takao; Sugita, Yuji; Okamoto, Yuko

    2004-12-01

    Secondary-structure forming tendencies are examined for six well-known protein force fields: AMBER94, AMBER96, AMBER99, CHARMM22, OPLS-AA/L, and GROMOS96. We performed generalized-ensemble molecular dynamics simulations of two peptides. One of these peptides is C-peptide of ribonuclease A, and the other is the C-terminal fragment from the B1 domain of streptococcal protein G. The former is known to form α-helix structure and the latter β-hairpin structure by experiments. The simulation results revealed significant differences of the secondary-structure forming tendencies among the force fields. Of the six force fields, the results of AMBER99 and CHARMM22 were in accord with experiments for C-peptide. For G-peptide, on the other hand, the results of OPLS-AA/L and GROMOS96 were most consistent with experiments. Therefore, further improvements on the force fields are necessary for studying the protein folding problem from the first principles, in which a single force field can be used for all cases.

  13. Evaluating Parameterizations in General Circulation Models: Climate Simulation Meets Weather Prediction

    SciTech Connect

    Phillips, T J; Potter, G L; Williamson, D L; Cederwall, R T; Boyle, J S; Fiorino, M; Hnilo, J J; Olson, J G; Xie, S; Yio, J J

    2004-05-06

    To significantly improve the simulation of climate by general circulation models (GCMs), systematic errors in representations of relevant processes must first be identified, and then reduced. This endeavor demands that the GCM parameterizations of unresolved processes, in particular, should be tested over a wide range of time scales, not just in climate simulations. Thus, a numerical weather prediction (NWP) methodology for evaluating model parameterizations and gaining insights into their behavior may prove useful, provided that suitable adaptations are made for implementation in climate GCMs. This method entails the generation of short-range weather forecasts by a realistically initialized climate GCM, and the application of six-hourly NWP analyses and observations of parameterized variables to evaluate these forecasts. The behavior of the parameterizations in such a weather-forecasting framework can provide insights on how these schemes might be improved, and modified parameterizations then can be tested in the same framework. In order to further this method for evaluating and analyzing parameterizations in climate GCMs, the U.S. Department of Energy is funding a joint venture of its Climate Change Prediction Program (CCPP) and Atmospheric Radiation Measurement (ARM) Program: the CCPP-ARM Parameterization Testbed (CAPT). This article elaborates the scientific rationale for CAPT, discusses technical aspects of its methodology, and presents examples of its implementation in a representative climate GCM.

  14. General Force-Field Parametrization Scheme for Molecular Dynamics Simulations of Conjugated Materials in Solution

    PubMed Central

    2016-01-01

    We describe a general scheme to obtain force-field parameters for classical molecular dynamics simulations of conjugated polymers. We identify a computationally inexpensive methodology for calculation of accurate intermonomer dihedral potentials and partial charges. Our findings indicate that the use of a two-step methodology of geometry optimization and single-point energy calculations using DFT methods produces potentials which compare favorably to high level theory calculation. We also report the effects of varying the conjugated backbone length and alkyl side-chain lengths on the dihedral profiles and partial charge distributions and determine the existence of converged lengths above which convergence is achieved in the force-field parameter sets. We thus determine which calculations are required for accurate parametrization and the scope of a given parameter set for variations to a given molecule. We perform simulations of long oligomers of dioctylfluorene and hexylthiophene in explicit solvent and find peristence lengths and end-length distributions consistent with experimental values. PMID:27397762

  15. General relativistic simulations of black-hole-neutron-star mergers: Effects of magnetic fields

    NASA Astrophysics Data System (ADS)

    Etienne, Zachariah B.; Liu, Yuk Tung; Paschalidis, Vasileios; Shapiro, Stuart L.

    2012-03-01

    As a neutron star (NS) is tidally disrupted by a black hole (BH) companion at the end of a black-hole-neutron-star (BHNS) binary inspiral, its magnetic fields will be stretched and amplified. If sufficiently strong, these magnetic fields may impact the gravitational waveforms, merger evolution and mass of the remnant disk. Formation of highly-collimated magnetic field lines in the disk+spinning BH remnant may launch relativistic jets, providing the engine for a short-hard GRB. We analyze this scenario through fully general relativistic, magnetohydrodynamic BHNS simulations from inspiral through merger and disk formation. Different initial magnetic field configurations and strengths are chosen for the NS interior for both nonspinning and moderately spinning (aBH/MBH=0.75) BHs aligned with the orbital angular momentum. Only strong interior (Bmax⁡˜1017G) initial magnetic fields in the NS significantly influence merger dynamics, enhancing the remnant disk mass by 100% and 40% in the nonspinning and spinning BH cases, respectively. However, detecting the imprint of even a strong magnetic field may be challenging for Advanced LIGO. Though there is no evidence of mass outflows or magnetic field collimation during the preliminary simulations we have performed, higher resolution, coupled with longer disk evolutions and different initial magnetic field configurations, may be required to definitively assess the possibility of BHNS binaries as short-hard gamma-ray burst progenitors.

  16. Optimization of a general-purpose, actively scanned proton beamline for ocular treatments: Geant4 simulations.

    PubMed

    Piersimoni, Pierluigi; Rimoldi, Adele; Riccardi, Cristina; Pirola, Michele; Molinelli, Silvia; Ciocca, Mario

    2015-03-08

    The Italian National Center for Hadrontherapy (CNAO, Centro Nazionale di Adroterapia Oncologica), a synchrotron-based hospital facility, started the treatment of patients within selected clinical trials in late 2011 and 2012 with actively scanned proton and carbon ion beams, respectively. The activation of a new clinical protocol for the irradiation of uveal melanoma using the existing general-purpose proton beamline is foreseen for late 2014. Beam characteristics and patient treatment setup need to be tuned to meet the specific requirements for such a type of treatment technique. The aim of this study is to optimize the CNAO transport beamline by adding passive components and minimizing air gap to achieve the optimal conditions for ocular tumor irradiation. The CNAO setup with the active and passive components along the transport beamline, as well as a human eye-modeled detector also including a realistic target volume, were simulated using the Monte Carlo Geant4 toolkit. The strong reduction of the air gap between the nozzle and patient skin, as well as the insertion of a range shifter plus a patient-specific brass collimator at a short distance from the eye, were found to be effective tools to be implemented. In perspective, this simulation toolkit could also be used as a benchmark for future developments and testing purposes on commercial treatment planning systems.

  17. Comparing four approaches to generalized redirected walking: simulation and live user data.

    PubMed

    Hodgson, Eric; Bachmann, Eric

    2013-04-01

    Redirected walking algorithms imperceptibly rotate a virtual scene and scale movements to guide users of immersive virtual environment systems away from tracking area boundaries. These distortions ideally permit users to explore large and potentially unbounded virtual worlds while walking naturally through a physically limited space. Estimates of the physical space required to perform effective redirected walking have been based largely on the ability of humans to perceive the distortions introduced by redirected walking and have not examined the impact the overall steering strategy used. This work compares four generalized redirected walking algorithms, including Steer-to-Center, Steer-to-Orbit, Steer-to-Multiple-Targets and Steer-to-Multiple+Center. Two experiments are presented based on simulated navigation as well as live-user navigation carried out in a large immersive virtual environment facility. Simulations were conducted with both synthetic paths and previously-logged user data. Primary comparison metrics include mean and maximum distances from the tracking area center for each algorithm, number of wall contacts, and mean rates of redirection. Results indicated that Steer-to-Center out-performed all other algorithms relative to these metrics. Steer-to-Orbit also performed well in some circumstances.

  18. Large-eddy simulation of airflow and heat transfer in a general ward of hospital

    NASA Astrophysics Data System (ADS)

    Hasan, Md. Farhad; Himika, Taasnim Ahmed; Molla, Md. Mamun

    2016-07-01

    In this paper, a very popular alternative computational technique, the Lattice Boltzmann Method (LBM) has been used for Large-Eddy Simulation (LES) of airflow and heat transfer in general ward of hospital. Different Reynolds numbers have been used to study the airflow pattern. In LES, Smagorinsky turbulence model has been considered and a discussion has been conducted in brief. A code validation has been performed comparing the present results with benchmark results for lid-driven cavity problem and the results are found to agree very well. LBM is demonstrated through simulation in forced convection inside hospital ward with six beds with a partition in the middle, which acted like a wall. Changes in average rate of heat transfer in terms of average Nusselt numbers have also been recorded in tabular format and necessary comparison has been showed. It was found that partition narrowed the path for airflow and once the air overcame this barrier, it got free space and turbulence appeared. For higher turbulence, the average rate of heat transfer increased and patients near the turbulence zone released maximum heat and felt more comfortable.

  19. Application of the general thermal field model to simulate the behaviour of nanoscale Cu field emitters

    SciTech Connect

    Eimre, Kristjan; Aabloo, Alvo; Parviainen, Stefan Djurabekova, Flyura; Zadin, Vahur

    2015-07-21

    Strong field electron emission from a nanoscale tip can cause a temperature rise at the tip apex due to Joule heating. This becomes particularly important when the current value grows rapidly, as in the pre-breakdown (the electrostatic discharge) condition, which may occur near metal surfaces operating under high electric fields. The high temperatures introduce uncertainties in calculations of the current values when using the Fowler–Nordheim equation, since the thermionic component in such conditions cannot be neglected. In this paper, we analyze the field electron emission currents as the function of the applied electric field, given by both the conventional Fowler–Nordheim field emission and the recently developed generalized thermal field emission formalisms. We also compare the results in two limits: discrete (atomistic simulations) and continuum (finite element calculations). The discrepancies of both implementations and their effect on final results are discussed. In both approaches, the electric field, electron emission currents, and Joule heating processes are simulated concurrently and self-consistently. We show that the conventional Fowler–Nordheim equation results in significant underestimation of electron emission currents. We also show that Fowler–Nordheim plots used to estimate the field enhancement factor may lead to significant overestimation of this parameter especially in the range of relatively low electric fields.

  20. TOUGH2: A general-purpose numerical simulator for multiphase nonisothermal flows

    SciTech Connect

    Pruess, K.

    1991-06-01

    Numerical simulators for multiphase fluid and heat flows in permeable media have been under development at Lawrence Berkeley Laboratory for more than 10 yr. Real geofluids contain noncondensible gases and dissolved solids in addition to water, and the desire to model such `compositional` systems led to the development of a flexible multicomponent, multiphase simulation architecture known as MULKOM. The design of MULKOM was based on the recognition that the mass-and energy-balance equations for multiphase fluid and heat flows in multicomponent systems have the same mathematical form, regardless of the number and nature of fluid components and phases present. Application of MULKOM to different fluid mixtures, such as water and air, or water, oil, and gas, is possible by means of appropriate `equation-of-state` (EOS) modules, which provide all thermophysical and transport parameters of the fluid mixture and the permeable medium as a function of a suitable set of primary thermodynamic variables. Investigations of thermal and hydrologic effects from emplacement of heat-generating nuclear wastes into partially water-saturated formations prompted the development and release of a specialized version of MULKOM for nonisothermal flow of water and air, named TOUGH. TOUGH is an acronym for `transport of unsaturated groundwater and heat` and is also an allusion to the tuff formations at Yucca Mountain, Nevada. The TOUGH2 code is intended to supersede TOUGH. It offers all the capabilities of TOUGH and includes a considerably more general subset of MULKOM modules with added capabilities. The paper briefly describes the simulation methodology and user features.

  1. Using Simulations of Black Holes to Study General Relativity and the Properties of Inner Accretion Flow

    NASA Astrophysics Data System (ADS)

    Hoormann, Janie Katherine

    2016-06-01

    While Albert Einstein's theory of General Relativity (GR) has been tested extensively in our solar system, it is just beginning to be tested in the strong gravitational fields that surround black holes. As a way to study the behavior of gravity in these extreme environments, I have used and added to a ray-tracing code that simulates the X-ray emission from the accretion disks surrounding black holes. In particular, the observational channels which can be simulated include the thermal and reflected spectra, polarization, and reverberation signatures. These calculations can be performed assuming GR as well as four alternative spacetimes. These results can be used to see if it is possible to determine if observations can test the No-Hair theorem of GR which states that stationary, astrophysical black holes are only described by their mass and spin. Although it proves difficult to distinguish between theories of gravity, it is possible to exclude a large portion of the possible deviations from GR using observations of rapidly spinning stellar mass black holes such as Cygnus X-1. The ray-tracing simulations can furthermore be used to study the inner regions of black hole accretion flows. I examined the dependence of X-ray reverberation observations on the ionization of the disk photosphere. My results show that X-ray reverberation and X-ray polarization provides a powerful tool to constrain the geometry of accretion disks which are too small to be imaged directly. The second part of my thesis describes the work on the balloon-borne X-Calibur hard X-ray polarimetry mission and on the space-borne PolSTAR polarimeter concept.

  2. Evaluation of a Mineral Dust Simulation in the Atmospheric-Chemistry General Circulation Model-EMAC

    NASA Astrophysics Data System (ADS)

    Abdel Kader, M.; Astitha, M.; Lelieveld, J.

    2012-04-01

    This study presents an evaluation of the atmospheric mineral dust cycle in the Atmospheric Chemistry General Circulation Model (AC-GCM) using new developed dust emissions scheme. The dust cycle, as an integral part of the Earth System, plays an important role in the Earth's energy balance by both direct and indirect ways. As an aerosol, it significantly impacts the absorption and scattering of radiation in the atmosphere and can modify the optical properties of clouds and snow/ice surfaces. In addition, dust contributes to a range of physical, chemical and bio-geological processes that interact with the cycles of carbon and water. While our knowledge of the dust cycle, its impacts and interactions with the other global-scale bio-geochemical cycles has greatly advanced in the last decades, large uncertainties and knowledge gaps still exist. Improving the dust simulation in global models is essential to minimize the uncertainties in the model results related to dust. In this study, the results are based on the ECHAM5 Modular Earth Submodel System (MESSy) AC-GCM simulations using T106L31 spectral resolution (about 120km ) with 31 vertical levels. The GMXe aerosol submodel is used to simulate the phase changes of the dust particles between soluble and insoluble modes. Dust emission, transport and deposition (wet and dry) are calculated on-line along with the meteorological parameters in every model time step. The preliminary evaluation of the dust concentration and deposition are presented based on ground observations from various campaigns as well as the evaluation of the optical properties of dust using AERONET and satellite (MODIS and MISR) observations. Preliminarily results show good agreement with observations for dust deposition and optical properties. In addition, the global dust emissions, load, deposition and lifetime is in good agreement with the published results. Also, the uncertainties in the dust cycle that contribute to the overall model performance

  3. GENERAL RELATIVISTIC SIMULATIONS OF MAGNETIZED PLASMAS AROUND MERGING SUPERMASSIVE BLACK HOLES

    SciTech Connect

    Giacomazzo, Bruno; Baker, John G.; Van Meter, James R.; Coleman Miller, M.; Reynolds, Christopher S.

    2012-06-10

    Coalescing supermassive black hole binaries are produced by the mergers of galaxies and are the most powerful sources of gravitational waves accessible to space-based gravitational observatories. Some such mergers may occur in the presence of matter and magnetic fields and hence generate an electromagnetic counterpart. In this Letter, we present the first general relativistic simulations of magnetized plasma around merging supermassive black holes using the general relativistic magnetohydrodynamic code Whisky. By considering different magnetic field strengths, going from non-magnetically dominated to magnetically dominated regimes, we explore how magnetic fields affect the dynamics of the plasma and the possible emission of electromagnetic signals. In particular, we observe a total amplification of the magnetic field of {approx}2 orders of magnitude, which is driven by the accretion onto the binary and that leads to much stronger electromagnetic signals, more than a factor of 10{sup 4} larger than comparable calculations done in the force-free regime where such amplifications are not possible.

  4. The generalized Onsager model and DSMC simulations of high-speed rotating flows with product and waste baffles

    NASA Astrophysics Data System (ADS)

    Pradhan, Sahadev, , Dr.

    2017-01-01

    The generalized Onsager model for the radial boundary layer and of the generalized Carrier-Maslen model for the axial boundary layer in a high-speed rotating cylinder, are extended to a multiply connected domain, created by the product and waste baffles. For a single component gas, the analytical solutions are obtained for the sixth-order generalized Onsager equations for the master potential, and for the fourth-order generalized Carrier-Maslen equation for the velocity potential. In both cases, the equations are linearized in the perturbation to the base flow, which is a solid-body rotation. An explicit expression for the baffle stream function is obtained using the boundary layer solutions. These solutions are compared with direct simulation Monte Carlo (DSMC) simulations and found excellent agreement between the analysis and simulations, to within 15%, provided the wall-slip in both the flow velocity and temperature are incorporated in the analytical solutions.

  5. The generalized Onsager model and DSMC simulations of high-speed rotating flows with product and waste baffles

    NASA Astrophysics Data System (ADS)

    Pradhan, Sahadev, , Dr.

    2016-11-01

    The generalized Onsager model for the radial boundary layer and of the generalized Carrier-Maslen model for the axial boundary layer in a high-speed rotating cylinder, are extended to a multiply connected domain, created by the product and waste baffles. For a single component gas, the analytical solutions are obtained for the sixth-order generalized Onsager equations for the master potential, and for the fourth-order generalized Carrier-Maslen equation for the velocity potential. In both cases, the equations are linearized in the perturbation to the base flow, which is a solid-body rotation. An explicit expression for the baffle stream function is obtained using the boundary layer solutions. These solutions are compared with direct simulation Monte Carlo (DSMC) simulations and found excellent agreement between the analysis and simulations, to within 15%, provided the wall-slip in both the flow velocity and temperature are incorporated in the analytical solutions.

  6. The generalized Onsager model and DSMC simulations of high-speed rotating flows with product and waste baffles

    NASA Astrophysics Data System (ADS)

    Pradhan, Sahadev

    2016-10-01

    The generalized Onsager model for the radial boundary layer and of the generalized Carrier-Maslen model for the axial boundary layer in a high-speed rotating cylinder, are extended to a multiply connected domain, created by the product and waste baffles. For a single component gas, the analytical solutions are obtained for the sixth-order generalized Onsager equations for the master potential, and for the fourth-order generalized Carrier-Maslen equation for the velocity potential. In both cases, the equations are linearized in the perturbation to the base flow, which is a solid-body rotation. An explicit expression for the baffle stream function is obtained using the boundary layer solutions. These solutions are compared with direct simulation Monte Carlo (DSMC) simulations and found excellent agreement between the analysis and simulations, to within 15%, provided the wall-slip in both the flow velocity and temperature are incorporated in the analytical solutions.

  7. KMCLib: A general framework for lattice kinetic Monte Carlo (KMC) simulations

    NASA Astrophysics Data System (ADS)

    Leetmaa, Mikael; Skorodumova, Natalia V.

    2014-09-01

    KMCLib is a general framework for lattice kinetic Monte Carlo (KMC) simulations. The program can handle simulations of the diffusion and reaction of millions of particles in one, two, or three dimensions, and is designed to be easily extended and customized by the user to allow for the development of complex custom KMC models for specific systems without having to modify the core functionality of the program. Analysis modules and on-the-fly elementary step diffusion rate calculations can be implemented as plugins following a well-defined API. The plugin modules are loosely coupled to the core KMCLib program via the Python scripting language. KMCLib is written as a Python module with a backend C++ library. After initial compilation of the backend library KMCLib is used as a Python module; input to the program is given as a Python script executed using a standard Python interpreter. We give a detailed description of the features and implementation of the code and demonstrate its scaling behavior and parallel performance with a simple one-dimensional A-B-C lattice KMC model and a more complex three-dimensional lattice KMC model of oxygen-vacancy diffusion in a fluorite structured metal oxide. KMCLib can keep track of individual particle movements and includes tools for mean square displacement analysis, and is therefore particularly well suited for studying diffusion processes at surfaces and in solids. Catalogue identifier: AESZ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AESZ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 49 064 No. of bytes in distributed program, including test data, etc.: 1 575 172 Distribution format: tar.gz Programming language: Python and C++. Computer: Any computer that can run a C++ compiler and a Python interpreter. Operating system: Tested on Ubuntu 12

  8. Studies of molecular docking between fibroblast growth factor and heparin using generalized simulated annealing

    NASA Astrophysics Data System (ADS)

    Pita, Samuel Silva Da Rocha; Fernandes, Tácio Vinício Amorim; Caffarena, Ernesto Raul; Pascutti, Pedro Geraldo

    Since the middle 70s, the main molecular docking problem consists in limitations to treat adequately the degrees of freedom of protein (or a receptor) due to the energy landscape roughness and the high computational cost. Until recently, only few algorithms considering flexible simultaneously both ligand and receptor at low computational cost were developed. As a recent proposed Statistical Mechanics, generalized simulated annealing (GSA) has been employed at diverse works concerning global optimization problems. In this work, we used this method exploring the molecular docking problem taking into account the FGF-2 and heparin complex. Since the requirements of an efficient docking algorithm are accuracy and velocity, we tested the influence of GSA parameters qA (new configuration acceptance index), qV (energy surface visiting index), and qT (temperature decreasing control) on the performance of GSADOCK program. Our simulations showed that as temperature parameter qT increases, qA parameter follows this behavior in the interval ranging from 1.1 to 2.3. We found that the GSA parameters have the best performance for the qA values ranging from 1.1 to 1.3, qV values from 1.3 to 1.5, and qT values from 1.1 to 1.7. Most of good qV values were equal or next the good qT values. Finally, the implemented algorithm is trustworthy and can be employed as a tool of molecular modeling methods. The final version of the program will be free of charge and will be accessible at our home-page or could be requested to the authors for e-mail.

  9. Assessment of Atmosphere-Ocean General Circulation Model Simulations of Winter Northern Hemisphere Atmospheric Blocking

    NASA Astrophysics Data System (ADS)

    Vial, Jessica; Osborn, Tim

    2010-05-01

    Characterized by their persistence and quasi-stationary features, large-scale atmospheric blocking are often responsible for extreme weather events, which can have enormous impacts on human life, economy and environment e.g. European heat wave in summer 2003. Therefore, diagnostics of the present-day climate and future projections of potential changes in blocking-related extreme events are essential for risk management and adaptation planning. This study focuses on assessing the ability of six coupled Atmosphere-Ocean General Circulation Models (AOGCMs) to simulate large-scale winter atmospheric blocking in the Northern Hemisphere for the present-day climate (1957-1999). A modified version of the Tibaldi and Molteni (1990)'s blocking index, which measures the strength of the average westerly flow in the mid-latitudes, is applied to daily averaged 500 hPa geopotential height output from the climate models. ERA-40 re-analysis atmospheric data have also been used over the same time period to verify the models' results. The two preferred regions of blocking development, in the Euro-Atlantic and North Pacific, are well captured by most of the models. However, the prominent error in blocking simulations, according to a number of previous model assessments, consists of an underestimation of the total frequency of blocking episodes over both regions. A more detailed analysis of blocking frequency as a function of duration revealed that this error was due to an insufficient number of medium spells and long-lasting episodes, and a shift in blocking lifetime distributions towards shorter blocks, while short-lived blocking events (between 5 and 8 days) tend to be overestimated. The impact of models' systematic errors on blocking simulations has been analyzed, and results suggest that there is a primary need to reduce the time-mean bias to improve the representation of blocking in climate models. The underestimated high-frequency variability of the transient eddies embedded in

  10. Generalized Scalable Multiple Copy Algorithms for Molecular Dynamics Simulations in NAMD

    PubMed Central

    Jiang, Wei; Phillips, James C.; Huang, Lei; Fajer, Mikolai; Meng, Yilin; Gumbart, James C.; Luo, Yun; Schulten, Klaus; Roux, Benoît

    2014-01-01

    Computational methodologies that couple the dynamical evolution of a set of replicated copies of a system of interest offer powerful and flexible approaches to characterize complex molecular processes. Such multiple copy algorithms (MCAs) can be used to enhance sampling, compute reversible work and free energies, as well as refine transition pathways. Widely used examples of MCAs include temperature and Hamiltonian-tempering replica-exchange molecular dynamics (T-REMD and H-REMD), alchemical free energy perturbation with lambda replica-exchange (FEP/λ-REMD), umbrella sampling with Hamiltonian replica exchange (US/H-REMD), and string method with swarms-of-trajectories conformational transition pathways. Here, we report a robust and general implementation of MCAs for molecular dynamics (MD) simulations in the highly scalable program NAMD built upon the parallel programming system Charm++. Multiple concurrent NAMD instances are launched with internal partitions of Charm++ and located continuously within a single communication world. Messages between NAMD instances are passed by low-level point-to-point communication functions, which are accessible through NAMD’s Tcl scripting interface. The communication-enabled Tcl scripting provides a sustainable application interface for end users to realize generalized MCAs without modifying the source code. Illustrative applications of MCAs with fine-grained inter-copy communication structure, including global lambda exchange in FEP/λ-REMD, window swapping US/H-REMD in multidimensional order parameter space, and string method with swarms-of-trajectories were carried out on IBM Blue Gene/Q to demonstrate the versatility and massive scalability of the present implementation. PMID:24944348

  11. Wind driven general circulation of the Mediterranean Sea simulated with a Spectral Element Ocean Model

    NASA Astrophysics Data System (ADS)

    Molcard, A.; Pinardi, N.; Iskandarani, M.; Haidvogel, D. B.

    2002-05-01

    This work is an attempt to simulate the Mediterranean Sea general circulation with a Spectral Finite Element Model. This numerical technique associates the geometrical flexibility of the finite elements for the proper coastline definition with the precision offered by spectral methods. The model is reduced gravity and we study the wind-driven ocean response in order to explain the large scale sub-basin gyres and their variability. The study period goes from January 1987 to December 1993 and two forcing data sets are used. The effect of wind variability in space and time is analyzed and the relationship between wind stress curl and ocean response is stressed. Some of the main permanent structures of the general circulation (Gulf of Lions cyclonic gyre, Rhodes gyre, Gulf of Syrte anticylone) are shown to be induced by permanent wind stress curl structures. The magnitude and spatial variability of the wind is important in determining the appearance or disappearance of some gyres (Tyrrhenian anticyclonic gyre, Balearic anticyclonic gyre, Ionian cyclonic gyre). An EOF analysis of the seasonal variability indicates that the weakening and strengthening of the Levantine basin boundary currents is a major component of the seasonal cycle in the basin. The important discovery is that seasonal and interannual variability peak at the same spatial scales in the ocean response and that the interannual variability includes the change in amplitude and phase of the seasonal cycle in the sub-basin scale gyres and boundary currents. The Coriolis term in the vorticity balance seems to be responsible for the weakening of anticyclonic structures and their total disappearance when they are close to a boundary. The process of adjustment to winds produces a train of coastally trapped gravity waves which travel around the eastern and western basins, respectively in approximately 6 months. This corresponds to a phase velocity for the wave of about 1 m/s, comparable to an average velocity of

  12. Sleep promotes consolidation and generalization of extinction learning in simulated exposure therapy for spider fear.

    PubMed

    Pace-Schott, Edward F; Verga, Patrick W; Bennett, Tobias S; Spencer, Rebecca M C

    2012-08-01

    Simulated exposure therapy for spider phobia served as a clinically naturalistic model to study effects of sleep on extinction. Spider-fearing, young adult women (N = 66), instrumented for skin conductance response (SCR), heart rate acceleration (HRA) and corrugator electromyography (EMG), viewed 14 identical 1-min videos of a behaving spider before a 12-hr delay containing a normal night's Sleep (N = 20) or continuous daytime Wake (N = 23), or a 2-hr delay of continuous wake in the Morning (N = 11) or Evening (N = 12). Following the delay, all groups viewed this same video 6 times followed by six 1-min videos of a novel spider. After each video, participants rated disgust, fearfulness and unpleasantness. In all 4 groups, all measures except corrugator EMG diminished across Session 1 (extinction learning) and, excepting SCR to a sudden noise, increased from the old to novel spider in Session 2. In Wake only, summed subjective ratings and SCR to the old spider significantly increased across the delay (extinction loss) and were greater for the novel vs. the old spider when it was equally novel at the beginning of Session 1 (sensitization). In Sleep only, SCR to a sudden noise decreased across the inter-session delay (extinction augmentation) and, along with HRA, was lower to the novel spider than initially to the old spider in Session 1 (extinction generalization). None of the above differentiated Morning and Evening groups suggesting that intervening sleep, rather than time-of-testing, produced differences between Sleep and Wake. Thus, sleep following exposure therapy may promote retention and generalization of extinction learning.

  13. Sleep Promotes Consolidation and Generalization of Extinction Learning in Simulated Exposure Therapy for Spider Fear

    PubMed Central

    Pace-Schott, Edward F.; Verga, Patrick; Bennet, Tobias; Spencer, Rebecca M.C.

    2012-01-01

    Simulated exposure therapy for spider phobia served as a clinically naturalistic model to study effects of sleep on extinction. Spider-fearing, young adult women (N=66), instrumented for skin conductance response (SCR), heart rate acceleration (HRA) and corrugator electromyography (EMG), viewed 14 identical 1-min videos of a behaving spider before a 12-hr delay containing a normal night’s Sleep (N=20) or continuous daytime Wake (N=23), or a 2-hr delay of continuous wake in the Morning (N=11) or Evening (N=12). Following the delay, all groups viewed this same video 6 times followed by six 1-min videos of a novel spider. After each video, participants rated disgust, fearfulness and unpleasantness. In all 4 groups, all measures except corrugator EMG diminished across Session 1 (extinction learning) and, excepting SCR to a sudden noise, increased from the old to novel spider in Session 2. In Wake only, summed subjective ratings and SCR to the old spider significantly increased across the delay (extinction loss) and were greater for the novel vs. the old spider when it was equally novel at the beginning of Session 1 (sensitization). In Sleep only, SCR to a sudden noise decreased across the inter-session delay (extinction augmentation) and, along with HRA, was lower to the novel spider than initially to the old spider in Session 1 (extinction generalization). None of the above differentiated Morning and Evening groups suggesting that intervening sleep, rather than time-of-testing, produced differences between Sleep and Wake. Thus, sleep following exposure therapy may promote retention and generalization of extinction learning. PMID:22578824

  14. Hidden Conformation Events in DNA Base Extrusions: A Generalized Ensemble Path Optimization and Equilibrium Simulation Study

    PubMed Central

    Cao, Liaoran; Lv, Chao; Yang, Wei

    2013-01-01

    DNA base extrusion is a crucial component of many biomolecular processes. Elucidating how bases are selectively extruded from the interiors of double-strand DNAs is pivotal to accurately understanding and efficiently sampling this general type of conformational transitions. In this work, the on-the-path random walk (OTPRW) method, which is the first generalized ensemble sampling scheme designed for finite-temperature-string path optimizations, was improved and applied to obtain the minimum free energy path (MFEP) and the free energy profile of a classical B-DNA major-groove base extrusion pathway. Along the MFEP, an intermediate state and the corresponding transition state were located and characterized. The MFEP result suggests that a base-plane-elongation event rather than the commonly focused base-flipping event is dominant in the transition state formation portion of the pathway; and the energetic penalty at the transition state is mainly introduced by the stretching of the Watson-Crick base pair. Moreover to facilitate the essential base-plane-elongation dynamics, the surrounding environment of the flipped base needs to be intimately involved. Further taking the advantage of the extended-dynamics nature of the OTPRW Hamiltonian, an equilibrium generalized ensemble simulation was performed along the optimized path; and based on the collected samples, several base-flipping (opening) angle collective variables were evaluated. In consistence with the MFEP result, the collective variable analysis result reveals that none of these commonly employed flipping (opening) angles alone can adequately represent the base extrusion pathway, especially in the pre-transition-state portion. As further revealed by the collective variable analysis, the base-pairing partner of the extrusion target undergoes a series of in-plane rotations to facilitate the base-plane-elongation dynamics. A base-plane rotation angle is identified to be a possible reaction coordinate to represent

  15. Merger of white dwarf-neutron star binaries: Prelude to hydrodynamic simulations in general relativity

    SciTech Connect

    Paschalidis, Vasileios; MacLeod, Morgan; Baumgarte, Thomas W.; Shapiro, Stuart L.

    2009-07-15

    White dwarf-neutron star binaries generate detectable gravitational radiation. We construct Newtonian equilibrium models of corotational white dwarf-neutron star (WDNS) binaries in circular orbit and find that these models terminate at the Roche limit. At this point the binary will undergo either stable mass transfer (SMT) and evolve on a secular time scale, or unstable mass transfer (UMT), which results in the tidal disruption of the WD. The path a given binary will follow depends primarily on its mass ratio. We analyze the fate of known WDNS binaries and use population synthesis results to estimate the number of LISA-resolved galactic binaries that will undergo either SMT or UMT. We model the quasistationary SMT epoch by solving a set of simple ordinary differential equations and compute the corresponding gravitational waveforms. Finally, we discuss in general terms the possible fate of binaries that undergo UMT and construct approximate Newtonian equilibrium configurations of merged WDNS remnants. We use these configurations to assess plausible outcomes of our future, fully relativistic simulations of these systems. If sufficient WD debris lands on the NS, the remnant may collapse, whereby the gravitational waves from the inspiral, merger, and collapse phases will sweep from LISA through LIGO frequency bands. If the debris forms a disk about the NS, it may fragment and form planets.

  16. Simulating the Generalized Gibbs Ensemble (GGE): A Hilbert space Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Alba, Vincenzo

    By combining classical Monte Carlo and Bethe ansatz techniques we devise a numerical method to construct the Truncated Generalized Gibbs Ensemble (TGGE) for the spin-1/2 isotropic Heisenberg (XXX) chain. The key idea is to sample the Hilbert space of the model with the appropriate GGE probability measure. The method can be extended to other integrable systems, such as the Lieb-Liniger model. We benchmark the approach focusing on GGE expectation values of several local observables. As finite-size effects decay exponentially with system size, moderately large chains are sufficient to extract thermodynamic quantities. The Monte Carlo results are in agreement with both the Thermodynamic Bethe Ansatz (TBA) and the Quantum Transfer Matrix approach (QTM). Remarkably, it is possible to extract in a simple way the steady-state Bethe-Gaudin-Takahashi (BGT) roots distributions, which encode complete information about the GGE expectation values in the thermodynamic limit. Finally, it is straightforward to simulate extensions of the GGE, in which, besides the local integral of motion (local charges), one includes arbitrary functions of the BGT roots. As an example, we include in the GGE the first non-trivial quasi-local integral of motion.

  17. GENERAL RELATIVISTIC SIMULATIONS OF ACCRETION INDUCED COLLAPSE OF NEUTRON STARS TO BLACK HOLES

    SciTech Connect

    Giacomazzo, Bruno; Perna, Rosalba

    2012-10-10

    Neutron stars (NSs) in the astrophysical universe are often surrounded by accretion disks. Accretion of matter onto an NS may increase its mass above the maximum value allowed by its equation of state, inducing its collapse to a black hole (BH). Here we study this process for the first time, in three-dimensions, and in full general relativity. By considering three initial NS configurations, each with and without a surrounding disk (of mass {approx}7% M{sub NS}), we investigate the effect of the accretion disk on the dynamics of the collapse and its imprint on both the gravitational wave (GW) and electromagnetic (EM) signals that can be emitted by these sources. We show in particular that, even if the GW signal is similar for the accretion induced collapse (AIC) and the collapse of an NS in vacuum (and detectable only for Galactic sources), the EM counterpart could allow us to discriminate between these two types of events. In fact, our simulations show that, while the collapse of an NS in vacuum leaves no appreciable baryonic matter outside the event horizon, an AIC is followed by a phase of rapid accretion of the surviving disk onto the newly formed BH. The post-collapse accretion rates, on the order of {approx}10{sup -2} M{sub Sun} s{sup -1}, make these events tantalizing candidates as engines of short gamma-ray bursts.

  18. A general hybrid radiation transport scheme for star formation simulations on an adaptive grid

    SciTech Connect

    Klassen, Mikhail; Pudritz, Ralph E.; Kuiper, Rolf; Peters, Thomas; Banerjee, Robi; Buntemeyer, Lars

    2014-12-10

    Radiation feedback plays a crucial role in the process of star formation. In order to simulate the thermodynamic evolution of disks, filaments, and the molecular gas surrounding clusters of young stars, we require an efficient and accurate method for solving the radiation transfer problem. We describe the implementation of a hybrid radiation transport scheme in the adaptive grid-based FLASH general magnetohydrodyanmics code. The hybrid scheme splits the radiative transport problem into a raytracing step and a diffusion step. The raytracer captures the first absorption event, as stars irradiate their environments, while the evolution of the diffuse component of the radiation field is handled by a flux-limited diffusion solver. We demonstrate the accuracy of our method through a variety of benchmark tests including the irradiation of a static disk, subcritical and supercritical radiative shocks, and thermal energy equilibration. We also demonstrate the capability of our method for casting shadows and calculating gas and dust temperatures in the presence of multiple stellar sources. Our method enables radiation-hydrodynamic studies of young stellar objects, protostellar disks, and clustered star formation in magnetized, filamentary environments.

  19. A General Hybrid Radiation Transport Scheme for Star Formation Simulations on an Adaptive Grid

    NASA Astrophysics Data System (ADS)

    Klassen, Mikhail; Kuiper, Rolf; Pudritz, Ralph E.; Peters, Thomas; Banerjee, Robi; Buntemeyer, Lars

    2014-12-01

    Radiation feedback plays a crucial role in the process of star formation. In order to simulate the thermodynamic evolution of disks, filaments, and the molecular gas surrounding clusters of young stars, we require an efficient and accurate method for solving the radiation transfer problem. We describe the implementation of a hybrid radiation transport scheme in the adaptive grid-based FLASH general magnetohydrodyanmics code. The hybrid scheme splits the radiative transport problem into a raytracing step and a diffusion step. The raytracer captures the first absorption event, as stars irradiate their environments, while the evolution of the diffuse component of the radiation field is handled by a flux-limited diffusion solver. We demonstrate the accuracy of our method through a variety of benchmark tests including the irradiation of a static disk, subcritical and supercritical radiative shocks, and thermal energy equilibration. We also demonstrate the capability of our method for casting shadows and calculating gas and dust temperatures in the presence of multiple stellar sources. Our method enables radiation-hydrodynamic studies of young stellar objects, protostellar disks, and clustered star formation in magnetized, filamentary environments.

  20. Global General Relativistic Magnetohydrodynamic Simulations of Black Hole Accretion Flows: A Convergence Study

    NASA Astrophysics Data System (ADS)

    Shiokawa, Hotaka; Dolence, Joshua C.; Gammie, Charles F.; Noble, Scott C.

    2012-01-01

    Global, general relativistic magnetohydrodynamic (GRMHD) simulations of non-radiative, magnetized disks are widely used to model accreting black holes. We have performed a convergence study of GRMHD models computed with HARM3D. The models span a factor of four in linear resolution, from 96 × 96 × 64 to 384 × 384 × 256. We consider three diagnostics of convergence: (1) dimensionless shell-averaged quantities such as plasma β (2) the azimuthal correlation length of fluid variables; and (3) synthetic spectra of the source including synchrotron emission, absorption, and Compton scattering. Shell-averaged temperature is, except for the lowest resolution run, nearly independent of resolution; shell-averaged plasma β decreases steadily with resolution but shows signs of convergence. The azimuthal correlation lengths of density, internal energy, and temperature decrease steadily with resolution but show signs of convergence. In contrast, the azimuthal correlation length of magnetic field decreases nearly linearly with grid size. We argue by analogy with local models, however, that convergence should be achieved with another factor of two in resolution. Synthetic spectra are, except for the lowest resolution run, nearly independent of resolution. The convergence behavior is consistent with that of higher physical resolution local model ("shearing box") calculations and with the recent non-relativistic global convergence studies of Hawley et al.

  1. Generalized event-chain Monte Carlo: constructing rejection-free global-balance algorithms from infinitesimal steps.

    PubMed

    Michel, Manon; Kapfer, Sebastian C; Krauth, Werner

    2014-02-07

    In this article, we present an event-driven algorithm that generalizes the recent hard-sphere event-chain Monte Carlo method without introducing discretizations in time or in space. A factorization of the Metropolis filter and the concept of infinitesimal Monte Carlo moves are used to design a rejection-free Markov-chain Monte Carlo algorithm for particle systems with arbitrary pairwise interactions. The algorithm breaks detailed balance, but satisfies maximal global balance and performs better than the classic, local Metropolis algorithm in large systems. The new algorithm generates a continuum of samples of the stationary probability density. This allows us to compute the pressure and stress tensor as a byproduct of the simulation without any additional computations.

  2. Parallelized computation for computer simulation of electrocardiograms using personal computers with multi-core CPU and general-purpose GPU.

    PubMed

    Shen, Wenfeng; Wei, Daming; Xu, Weimin; Zhu, Xin; Yuan, Shizhong

    2010-10-01

    Biological computations like electrocardiological modelling and simulation usually require high-performance computing environments. This paper introduces an implementation of parallel computation for computer simulation of electrocardiograms (ECGs) in a personal computer environment with an Intel CPU of Core (TM) 2 Quad Q6600 and a GPU of Geforce 8800GT, with software support by OpenMP and CUDA. It was tested in three parallelization device setups: (a) a four-core CPU without a general-purpose GPU, (b) a general-purpose GPU plus 1 core of CPU, and (c) a four-core CPU plus a general-purpose GPU. To effectively take advantage of a multi-core CPU and a general-purpose GPU, an algorithm based on load-prediction dynamic scheduling was developed and applied to setting (c). In the simulation with 1600 time steps, the speedup of the parallel computation as compared to the serial computation was 3.9 in setting (a), 16.8 in setting (b), and 20.0 in setting (c). This study demonstrates that a current PC with a multi-core CPU and a general-purpose GPU provides a good environment for parallel computations in biological modelling and simulation studies.

  3. General Relativistic Magnetohydrodynamic Simulations of Magnetically Choked Accretion Flows around Black Holes

    SciTech Connect

    McKinney, Jonathan C.; Tchekhovskoy, Alexander; Blandford, Roger D.

    2012-04-26

    Black hole (BH) accretion flows and jets are qualitatively affected by the presence of ordered magnetic fields. We study fully three-dimensional global general relativistic magnetohydrodynamic (MHD) simulations of radially extended and thick (height H to cylindrical radius R ratio of |H/R| {approx} 0.2-1) accretion flows around BHs with various dimensionless spins (a/M, with BH mass M) and with initially toroidally-dominated ({phi}-directed) and poloidally-dominated (R-z directed) magnetic fields. Firstly, for toroidal field models and BHs with high enough |a/M|, coherent large-scale (i.e. >> H) dipolar poloidal magnetic flux patches emerge, thread the BH, and generate transient relativistic jets. Secondly, for poloidal field models, poloidal magnetic flux readily accretes through the disk from large radii and builds-up to a natural saturation point near the BH. While models with |H/R| {approx} 1 and |a/M| {le} 0.5 do not launch jets due to quenching by mass infall, for sufficiently high |a/M| or low |H/R| the polar magnetic field compresses the inflow into a geometrically thin highly non-axisymmetric 'magnetically choked accretion flow' (MCAF) within which the standard linear magneto-rotational instability is suppressed. The condition of a highly-magnetized state over most of the horizon is optimal for the Blandford-Znajek mechanism that generates persistent relativistic jets with and 100% efficiency for |a/M| {approx}> 0.9. A magnetic Rayleigh-Taylor and Kelvin-Helmholtz unstable magnetospheric interface forms between the compressed inflow and bulging jet magnetosphere, which drives a new jet-disk oscillation (JDO) type of quasi-periodic oscillation (QPO) mechanism. The high-frequency QPO has spherical harmonic |m| = 1 mode period of {tau} {approx} 70GM/c{sup 3} for a/M {approx} 0.9 with coherence quality factors Q {approx}> 10. Overall, our models are qualitatively distinct from most prior MHD simulations (typically, |H/R| << 1 and poloidal flux is limited by

  4. A generalized Ising model for studying alloy evolution under irradiation and its use in kinetic Monte Carlo simulations.

    PubMed

    Huang, Chen-Hsi; Marian, Jaime

    2016-10-26

    We derive an Ising Hamiltonian for kinetic simulations involving interstitial and vacancy defects in binary alloys. Our model, which we term 'ABVI', incorporates solute transport by both interstitial defects and vacancies into a mathematically-consistent framework, and thus represents a generalization to the widely-used ABV model for alloy evolution simulations. The Hamiltonian captures the three possible interstitial configurations in a binary alloy: A-A, A-B, and B-B, which makes it particularly useful for irradiation damage simulations. All the constants of the Hamiltonian are expressed in terms of bond energies that can be computed using first-principles calculations. We implement our ABVI model in kinetic Monte Carlo simulations and perform a verification exercise by comparing our results to published irradiation damage simulations in simple binary systems with Frenkel pair defect production and several microstructural scenarios, with matching agreement found.

  5. A generalized Ising model for studying alloy evolution under irradiation and its use in kinetic Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Huang, Chen-Hsi; Marian, Jaime

    2016-10-01

    We derive an Ising Hamiltonian for kinetic simulations involving interstitial and vacancy defects in binary alloys. Our model, which we term ‘ABVI’, incorporates solute transport by both interstitial defects and vacancies into a mathematically-consistent framework, and thus represents a generalization to the widely-used ABV model for alloy evolution simulations. The Hamiltonian captures the three possible interstitial configurations in a binary alloy: A-A, A-B, and B-B, which makes it particularly useful for irradiation damage simulations. All the constants of the Hamiltonian are expressed in terms of bond energies that can be computed using first-principles calculations. We implement our ABVI model in kinetic Monte Carlo simulations and perform a verification exercise by comparing our results to published irradiation damage simulations in simple binary systems with Frenkel pair defect production and several microstructural scenarios, with matching agreement found.

  6. Limits to high-speed simulations of spiking neural networks using general-purpose computers.

    PubMed

    Zenke, Friedemann; Gerstner, Wulfram

    2014-01-01

    To understand how the central nervous system performs computations using recurrent neuronal circuitry, simulations have become an indispensable tool for theoretical neuroscience. To study neuronal circuits and their ability to self-organize, increasing attention has been directed toward synaptic plasticity. In particular spike-timing-dependent plasticity (STDP) creates specific demands for simulations of spiking neural networks. On the one hand a high temporal resolution is required to capture the millisecond timescale of typical STDP windows. On the other hand network simulations have to evolve over hours up to days, to capture the timescale of long-term plasticity. To do this efficiently, fast simulation speed is the crucial ingredient rather than large neuron numbers. Using different medium-sized network models consisting of several thousands of neurons and off-the-shelf hardware, we compare the simulation speed of the simulators: Brian, NEST and Neuron as well as our own simulator Auryn. Our results show that real-time simulations of different plastic network models are possible in parallel simulations in which numerical precision is not a primary concern. Even so, the speed-up margin of parallelism is limited and boosting simulation speeds beyond one tenth of real-time is difficult. By profiling simulation code we show that the run times of typical plastic network simulations encounter a hard boundary. This limit is partly due to latencies in the inter-process communications and thus cannot be overcome by increased parallelism. Overall, these results show that to study plasticity in medium-sized spiking neural networks, adequate simulation tools are readily available which run efficiently on small clusters. However, to run simulations substantially faster than real-time, special hardware is a prerequisite.

  7. Limits to high-speed simulations of spiking neural networks using general-purpose computers

    PubMed Central

    Zenke, Friedemann; Gerstner, Wulfram

    2014-01-01

    To understand how the central nervous system performs computations using recurrent neuronal circuitry, simulations have become an indispensable tool for theoretical neuroscience. To study neuronal circuits and their ability to self-organize, increasing attention has been directed toward synaptic plasticity. In particular spike-timing-dependent plasticity (STDP) creates specific demands for simulations of spiking neural networks. On the one hand a high temporal resolution is required to capture the millisecond timescale of typical STDP windows. On the other hand network simulations have to evolve over hours up to days, to capture the timescale of long-term plasticity. To do this efficiently, fast simulation speed is the crucial ingredient rather than large neuron numbers. Using different medium-sized network models consisting of several thousands of neurons and off-the-shelf hardware, we compare the simulation speed of the simulators: Brian, NEST and Neuron as well as our own simulator Auryn. Our results show that real-time simulations of different plastic network models are possible in parallel simulations in which numerical precision is not a primary concern. Even so, the speed-up margin of parallelism is limited and boosting simulation speeds beyond one tenth of real-time is difficult. By profiling simulation code we show that the run times of typical plastic network simulations encounter a hard boundary. This limit is partly due to latencies in the inter-process communications and thus cannot be overcome by increased parallelism. Overall, these results show that to study plasticity in medium-sized spiking neural networks, adequate simulation tools are readily available which run efficiently on small clusters. However, to run simulations substantially faster than real-time, special hardware is a prerequisite. PMID:25309418

  8. FULLY GENERAL RELATIVISTIC SIMULATIONS OF CORE-COLLAPSE SUPERNOVAE WITH AN APPROXIMATE NEUTRINO TRANSPORT

    SciTech Connect

    Kuroda, Takami; Kotake, Kei; Takiwaki, Tomoya

    2012-08-10

    We present results from the first generation of multi-dimensional hydrodynamic core-collapse simulations in full general relativity (GR) that include an approximate treatment of neutrino transport. Using an M1 closure scheme with an analytic variable Eddington factor, we solve the energy-independent set of radiation energy and momentum based on the Thorne's momentum formalism. Our newly developed code is designed to evolve the Einstein field equation together with the GR radiation hydrodynamic equations. We follow the dynamics starting from the onset of gravitational core collapse of a 15 M{sub Sun} star, through bounce, up to about 100 ms postbounce in this study. By computing four models that differ according to 1D to 3D and by switching from special relativistic (SR) to GR hydrodynamics, we study how the spacial multi-dimensionality and GR would affect the dynamics in the early postbounce phase. Our 3D results support the anticipation in previous 1D results that the neutrino luminosity and average neutrino energy of any neutrino flavor in the postbounce phase increase when switching from SR to GR hydrodynamics. This is because the deeper gravitational well of GR produces more compact core structures, and thus hotter neutrino spheres at smaller radii. By analyzing the residency timescale to the neutrino-heating timescale in the gain region, we show that the criterion to initiate neutrino-driven explosions can be most easily satisfied in 3D models, irrespective of SR or GR hydrodynamics. Our results suggest that the combination of GR and 3D hydrodynamics provides the most favorable condition to drive a robust neutrino-driven explosion.

  9. A Generalized Fluid System Simulation Program to Model Flow Distribution in Fluid Networks

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok; Bailey, John W.; Schallhorn, Paul; Steadman, Todd

    1998-01-01

    This paper describes a general purpose computer program for analyzing steady state and transient flow in a complex network. The program is capable of modeling phase changes, compressibility, mixture thermodynamics and external body forces such as gravity and centrifugal. The program's preprocessor allows the user to interactively develop a fluid network simulation consisting of nodes and branches. Mass, energy and specie conservation equations are solved at the nodes; the momentum conservation equations are solved in the branches. The program contains subroutines for computing "real fluid" thermodynamic and thermophysical properties for 33 fluids. The fluids are: helium, methane, neon, nitrogen, carbon monoxide, oxygen, argon, carbon dioxide, fluorine, hydrogen, parahydrogen, water, kerosene (RP-1), isobutane, butane, deuterium, ethane, ethylene, hydrogen sulfide, krypton, propane, xenon, R-11, R-12, R-22, R-32, R-123, R-124, R-125, R-134A, R-152A, nitrogen trifluoride and ammonia. The program also provides the options of using any incompressible fluid with constant density and viscosity or ideal gas. Seventeen different resistance/source options are provided for modeling momentum sources or sinks in the branches. These options include: pipe flow, flow through a restriction, non-circular duct, pipe flow with entrance and/or exit losses, thin sharp orifice, thick orifice, square edge reduction, square edge expansion, rotating annular duct, rotating radial duct, labyrinth seal, parallel plates, common fittings and valves, pump characteristics, pump power, valve with a given loss coefficient, and a Joule-Thompson device. The system of equations describing the fluid network is solved by a hybrid numerical method that is a combination of the Newton-Raphson and successive substitution methods. This paper also illustrates the application and verification of the code by comparison with Hardy Cross method for steady state flow and analytical solution for unsteady flow.

  10. DSIM: A distributed simulator

    NASA Technical Reports Server (NTRS)

    Goswami, Kumar K.; Iyer, Ravishankar K.

    1990-01-01

    Discrete event-driven simulation makes it possible to model a computer system in detail. However, such simulation models can require a significant time to execute. This is especially true when modeling large parallel or distributed systems containing many processors and a complex communication network. One solution is to distribute the simulation over several processors. If enough parallelism is achieved, large simulation models can be efficiently executed. This study proposes a distributed simulator called DSIM which can run on various architectures. A simulated test environment is used to verify and characterize the performance of DSIM. The results of the experiments indicate that speedup is application-dependent and, in DSIM's case, is also dependent on how the simulation model is distributed among the processors. Furthermore, the experiments reveal that the communication overhead of ethernet-based distributed systems makes it difficult to achieve reasonable speedup unless the simulation model is computation bound.

  11. Accuracy of highly sexually active gay and bisexual men's predictions of their daily likelihood of anal sex and its relevance for intermittent event-driven HIV Pre-Exposure Prophylaxis

    PubMed Central

    Parsons, Jeffrey T.; Rendina, H. Jonathon; Grov, Christian; Ventuneac, Ana; Mustanski, Brian

    2014-01-01

    Objective We sought to examine highly sexually active gay and bisexual men's accuracy in predicting their sexual behavior for the purposes of informing future research on intermittent, event-driven HIV Pre-Exposure Prophylaxis (PrEP). Design For 30 days, 92 HIV-negative men completed a daily survey about their sexual behavior (n = 1,688 days of data) and indicated their likelihood of having anal sex with a casual male partner the following day. Method We utilized multilevel modeling to analyze the association between self-reported likelihood of and subsequent engagement in anal sex. Results We found a linear association between men's reported likelihood of anal sex with casual partners and the actual probability of engaging in sex, though men overestimated the likelihood of sex. Overall, we found that men were better at predicting when they would not have sex than when they would, particularly if any likelihood value greater than 0% was treated as indicative that sex might occur. We found no evidence that men's accuracy of prediction was affected by whether it was a weekend or whether they were using substances, though both did increase the probability of sex. Discussion These results suggested that, were men taking event-driven intermittent PrEP, 14% of doses could have been safely skipped with a minimal rate of false negatives using guidelines of taking a dose unless there was no chance (i.e., 0% likelihood) of sex on the following day. This would result in a savings of over $1,300 per year in medication costs per participant. PMID:25559594

  12. Use of a PhET Interactive Simulation in General Chemistry Laboratory: Models of the Hydrogen Atom

    ERIC Educational Resources Information Center

    Clark, Ted M.; Chamberlain, Julia M.

    2014-01-01

    An activity supporting the PhET interactive simulation, Models of the Hydrogen Atom, has been designed and used in the laboratory portion of a general chemistry course. This article describes the framework used to successfully accomplish implementation on a large scale. The activity guides students through a comparison and analysis of the six…

  13. Water properties from first principles: Simulations by a general-purpose quantum mechanical polarizable force field

    PubMed Central

    Donchev, A. G.; Galkin, N. G.; Illarionov, A. A.; Khoruzhii, O. V.; Olevanov, M. A.; Ozrin, V. D.; Subbotin, M. V.; Tarasov, V. I.

    2006-01-01

    We have recently introduced a quantum mechanical polarizable force field (QMPFF) fitted solely to high-level quantum mechanical data for simulations of biomolecular systems. Here, we present an improved form of the force field, QMPFF2, and apply it to simulations of liquid water. The results of the simulations show excellent agreement with a variety of experimental thermodynamic and structural data, as good or better than that provided by specialized water potentials. In particular, QMPFF2 is the only ab initio force field to accurately reproduce the anomalous temperature dependence of water density to our knowledge. The ability of the same force field to successfully simulate the properties of both organic molecules and water suggests it will be useful for simulations of proteins and protein–ligand interactions in the aqueous environment. PMID:16723394

  14. Axisymmetric general relativistic simulations of the accretion-induced collapse of white dwarfs

    SciTech Connect

    Abdikamalov, E. B.; Ott, C. D.; Rezzolla, L.; Dessart, L.; Dimmelmeier, H.; Marek, A.; Janka, H.-T.

    2010-02-15

    The accretion-induced collapse (AIC) of a white dwarf may lead to the formation of a protoneutron star and a collapse-driven supernova explosion. This process represents a path alternative to thermonuclear disruption of accreting white dwarfs in type Ia supernovae. In the AIC scenario, the supernova explosion energy is expected to be small and the resulting transient short-lived, making it hard to detect by electromagnetic means alone. Neutrino and gravitational-wave (GW) observations may provide crucial information necessary to reveal a potential AIC. Motivated by the need for systematic predictions of the GW signature of AIC, we present results from an extensive set of general-relativistic AIC simulations using a microphysical finite-temperature equation of state and an approximate treatment of deleptonization during collapse. Investigating a set of 114 progenitor models in axisymmetric rotational equilibrium, with a wide range of rotational configurations, temperatures and central densities, and resulting white dwarf masses, we extend previous Newtonian studies and find that the GW signal has a generic shape akin to what is known as a 'type III' signal in the literature. Despite this reduction to a single type of waveform, we show that the emitted GWs carry information that can be used to constrain the progenitor and the postbounce rotation. We discuss the detectability of the emitted GWs, showing that the signal-to-noise ratio for current or next-generation interferometer detectors could be high enough to detect such events in our Galaxy. Furthermore, we contrast the GW signals of AIC and rotating massive star iron core collapse and find that they can be distinguished, but only if the distance to the source is known and a detailed reconstruction of the GW time series from detector data is possible. Some of our AIC models form massive quasi-Keplerian accretion disks after bounce. The disk mass is very sensitive to progenitor mass and angular momentum

  15. A general spectral method for the numerical simulation of one-dimensional interacting fermions

    NASA Astrophysics Data System (ADS)

    Clason, Christian; von Winckel, Gregory

    2012-02-01

    This work introduces a general framework for the direct numerical simulation of systems of interacting fermions in one spatial dimension. The approach is based on a specially adapted nodal spectral Galerkin method, where the basis functions are constructed to obey the antisymmetry relations of fermionic wave functions. An efficient MATLAB program for the assembly of the stiffness and potential matrices is presented, which exploits the combinatorial structure of the sparsity pattern arising from this discretization to achieve optimal run-time complexity. This program allows the accurate discretization of systems with multiple fermions subject to arbitrary potentials, e.g., for verifying the accuracy of multi-particle approximations such as Hartree-Fock in the few-particle limit. It can be used for eigenvalue computations or numerical solutions of the time-dependent Schrödinger equation. Program summaryProgram title: assembleFermiMatrix Catalogue identifier: AEKO_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEKO_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 102 No. of bytes in distributed program, including test data, etc.: 2294 Distribution format: tar.gz Programming language: MATLAB Computer: Any architecture supported by MATLAB Operating system: Any supported by MATLAB; tested under Linux (x86-64) and Mac OS X (10.6) RAM: Depends on the data Classification: 4.3, 2.2 Nature of problem: The direct numerical solution of the multi-particle one-dimensional Schrödinger equation in a quantum well is challenging due to the exponential growth in the number of degrees of freedom with increasing particles. Solution method: A nodal spectral Galerkin scheme is used where the basis functions are constructed to obey the antisymmetry relations of the fermionic wave

  16. General relativistic corrections to N -body simulations and the Zel'dovich approximation

    NASA Astrophysics Data System (ADS)

    Fidler, Christian; Rampf, Cornelius; Tram, Thomas; Crittenden, Robert; Koyama, Kazuya; Wands, David

    2015-12-01

    The initial conditions for Newtonian N -body simulations are usually generated by applying the Zel'dovich approximation to the initial displacements of the particles using an initial power spectrum of density fluctuations generated by an Einstein-Boltzmann solver. We show that in most gauges the initial displacements generated in this way receive a first-order relativistic correction. We define a new gauge, the N -body gauge, in which this relativistic correction vanishes and show that a conventional Newtonian N -body simulation includes all first-order relativistic contributions (in the absence of radiation) if we identify the coordinates in Newtonian simulations with those in the relativistic N -body gauge.

  17. General order parameter based correlation analysis of protein backbone motions between experimental NMR relaxation measurements and molecular dynamics simulations.

    PubMed

    Liu, Qing; Shi, Chaowei; Yu, Lu; Zhang, Longhua; Xiong, Ying; Tian, Changlin

    2015-02-13

    Internal backbone dynamic motions are essential for different protein functions and occur on a wide range of time scales, from femtoseconds to seconds. Molecular dynamic (MD) simulations and nuclear magnetic resonance (NMR) spin relaxation measurements are valuable tools to gain access to fast (nanosecond) internal motions. However, there exist few reports on correlation analysis between MD and NMR relaxation data. Here, backbone relaxation measurements of (15)N-labeled SH3 (Src homology 3) domain proteins in aqueous buffer were used to generate general order parameters (S(2)) using a model-free approach. Simultaneously, 80 ns MD simulations of SH3 domain proteins in a defined hydrated box at neutral pH were conducted and the general order parameters (S(2)) were derived from the MD trajectory. Correlation analysis using the Gromos force field indicated that S(2) values from NMR relaxation measurements and MD simulations were significantly different. MD simulations were performed on models with different charge states for three histidine residues, and with different water models, which were SPC (simple point charge) water model and SPC/E (extended simple point charge) water model. S(2) parameters from MD simulations with charges for all three histidines and with the SPC/E water model correlated well with S(2) calculated from the experimental NMR relaxation measurements, in a site-specific manner.

  18. A general kinetic-flow coupling model for FCC riser flow simulation.

    SciTech Connect

    Chang, S. L.

    1998-05-18

    A computational fluid dynamic (CFD) code has been developed for fluid catalytic cracking (FCC) riser flow simulation. Depending on the application of interest, a specific kinetic model is needed for the FCC flow simulation. This paper describes a method to determine a kinetic model based on limited pilot-scale test data. The kinetic model can then be used with the CFD code as a tool to investigate optimum operating condition ranges for a specific FCC unit.

  19. A Variable Resolution Stretched Grid General Circulation Model: Regional Climate Simulation

    NASA Technical Reports Server (NTRS)

    Fox-Rabinovitz, Michael S.; Takacs, Lawrence L.; Govindaraju, Ravi C.; Suarez, Max J.

    2000-01-01

    The development of and results obtained with a variable resolution stretched-grid GCM for the regional climate simulation mode, are presented. A global variable resolution stretched- grid used in the study has enhanced horizontal resolution over the U.S. as the area of interest The stretched-grid approach is an ideal tool for representing regional to global scale interaction& It is an alternative to the widely used nested grid approach introduced over a decade ago as a pioneering step in regional climate modeling. The major results of the study are presented for the successful stretched-grid GCM simulation of the anomalous climate event of the 1988 U.S. summer drought- The straightforward (with no updates) two month simulation is performed with 60 km regional resolution- The major drought fields, patterns and characteristics such as the time averaged 500 hPa heights precipitation and the low level jet over the drought area. appear to be close to the verifying analyses for the stretched-grid simulation- In other words, the stretched-grid GCM provides an efficient down-scaling over the area of interest with enhanced horizontal resolution. It is also shown that the GCM skill is sustained throughout the simulation extended to one year. The developed and tested in a simulation mode stretched-grid GCM is a viable tool for regional and subregional climate studies and applications.

  20. A general parallelization strategy for random path based geostatistical simulation methods

    NASA Astrophysics Data System (ADS)

    Mariethoz, Grégoire

    2010-07-01

    The size of simulation grids used for numerical models has increased by many orders of magnitude in the past years, and this trend is likely to continue. Efficient pixel-based geostatistical simulation algorithms have been developed, but for very large grids and complex spatial models, the computational burden remains heavy. As cluster computers become widely available, using parallel strategies is a natural step for increasing the usable grid size and the complexity of the models. These strategies must profit from of the possibilities offered by machines with a large number of processors. On such machines, the bottleneck is often the communication time between processors. We present a strategy distributing grid nodes among all available processors while minimizing communication and latency times. It consists in centralizing the simulation on a master processor that calls other slave processors as if they were functions simulating one node every time. The key is to decouple the sending and the receiving operations to avoid synchronization. Centralization allows having a conflict management system ensuring that nodes being simulated simultaneously do not interfere in terms of neighborhood. The strategy is computationally efficient and is versatile enough to be applicable to all random path based simulation methods.

  1. Simulated scaling method for localized enhanced sampling and simultaneous "alchemical" free energy simulations: a general method for molecular mechanical, quantum mechanical, and quantum mechanical/molecular mechanical simulations.

    PubMed

    Li, Hongzhi; Fajer, Mikolai; Yang, Wei

    2007-01-14

    A potential scaling version of simulated tempering is presented to efficiently sample configuration space in a localized region. The present "simulated scaling" method is developed with a Wang-Landau type of updating scheme in order to quickly flatten the distributions in the scaling parameter lambdam space. This proposal is meaningful for a broad range of biophysical problems, in which localized sampling is required. Besides its superior capability and robustness in localized conformational sampling, this simulated scaling method can also naturally lead to efficient "alchemical" free energy predictions when dual-topology alchemical hybrid potential is applied; thereby simultaneously, both of the chemically and conformationally distinct portions of two end point chemical states can be efficiently sampled. As demonstrated in this work, the present method is also feasible for the quantum mechanical and quantum mechanical/molecular mechanical simulations.

  2. Maternally Derived Immunity Extends Swine Influenza A Virus Persistence within Farrow-to-Finish Pig Farms: Insights from a Stochastic Event-Driven Metapopulation Model

    PubMed Central

    Cador, Charlie; Rose, Nicolas; Willem, Lander; Andraud, Mathieu

    2016-01-01

    Swine Influenza A Viruses (swIAVs) have been shown to persist in farrow-to-finish pig herds with repeated outbreaks in successive batches, increasing the risk for respiratory disorders in affected animals and being a threat for public health. Although the general routes of swIAV transmission (i.e. direct contact and exposure to aerosols) were clearly identified, the transmission process between batches is still not fully understood. Maternally derived antibodies (MDAs) were stressed as a possible factor favoring within-herd swIAV persistence. However, the relationship between MDAs and the global spread among the different subpopulations in the herds is still lacking. The aim of this study was therefore to understand the mechanisms induced by MDAs in relation with swIAV spread and persistence in farrow-to-finish pig herds. A metapopulation model has been developed representing the population dynamics considering two subpopulations—breeding sows and growing pigs—managed according to batch-rearing system. This model was coupled with a swIAV-specific epidemiological model, accounting for partial passive immunity protection in neonatal piglets and an immunity boost in re-infected animals. Airborne transmission was included by a between-room transmission rate related to the current prevalence of shedding pigs. Maternally derived partial immunity in piglets was found to extend the duration of the epidemics within their batch, allowing for efficient between-batch transmission and resulting in longer swIAV persistence at the herd level. These results should be taken into account in the design of control programmes for the spread and persistence of swIAV in swine herds. PMID:27662592

  3. A general spectral method for the numerical simulation of one-dimensional interacting fermions

    NASA Astrophysics Data System (ADS)

    Clason, Christian; von Winckel, Gregory

    2012-08-01

    This software implements a general framework for the direct numerical simulation of systems of interacting fermions in one spatial dimension. The approach is based on a specially adapted nodal spectral Galerkin method, where the basis functions are constructed to obey the antisymmetry relations of fermionic wave functions. An efficient Matlab program for the assembly of the stiffness and potential matrices is presented, which exploits the combinatorial structure of the sparsity pattern arising from this discretization to achieve optimal run-time complexity. This program allows the accurate discretization of systems with multiple fermions subject to arbitrary potentials, e.g., for verifying the accuracy of multi-particle approximations such as Hartree-Fock in the few-particle limit. It can be used for eigenvalue computations or numerical solutions of the time-dependent Schrödinger equation. The new version includes a Python implementation of the presented approach. New version program summaryProgram title: assembleFermiMatrix Catalogue identifier: AEKO_v1_1 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEKO_v1_1.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 332 No. of bytes in distributed program, including test data, etc.: 5418 Distribution format: tar.gz Programming language: MATLAB/GNU Octave, Python Computer: Any architecture supported by MATLAB, GNU Octave or Python Operating system: Any supported by MATLAB, GNU Octave or Python RAM: Depends on the data Classification: 4.3, 2.2. External routines: Python 2.7+, NumPy 1.3+, SciPy 0.10+ Catalogue identifier of previous version: AEKO_v1_0 Journal reference of previous version: Comput. Phys. Commun. 183 (2012) 405 Does the new version supersede the previous version?: Yes Nature of problem: The direct numerical

  4. A virtual reality endoscopic simulator augments general surgery resident cancer education as measured by performance improvement.

    PubMed

    White, Ian; Buchberg, Brian; Tsikitis, V Liana; Herzig, Daniel O; Vetto, John T; Lu, Kim C

    2014-06-01

    Colorectal cancer is the second most common cause of death in the USA. The need for screening colonoscopies, and thus adequately trained endoscopists, particularly in rural areas, is on the rise. Recent increases in required endoscopic cases for surgical resident graduation by the Surgery Residency Review Committee (RRC) further emphasize the need for more effective endoscopic training during residency to determine if a virtual reality colonoscopy simulator enhances surgical resident endoscopic education by detecting improvement in colonoscopy skills before and after 6 weeks of formal clinical endoscopic training. We conducted a retrospective review of prospectively collected surgery resident data on an endoscopy simulator. Residents performed four different clinical scenarios on the endoscopic simulator before and after a 6-week endoscopic training course. Data were collected over a 5-year period from 94 different residents performing a total of 795 colonoscopic simulation scenarios. Main outcome measures included time to cecal intubation, "red out" time, and severity of simulated patient discomfort (mild, moderate, severe, extreme) during colonoscopy scenarios. Average time to intubation of the cecum was 6.8 min for those residents who had not undergone endoscopic training versus 4.4 min for those who had undergone endoscopic training (p < 0.001). Residents who could be compared against themselves (pre vs. post-training), cecal intubation times decreased from 7.1 to 4.3 min (p < 0.001). Post-endoscopy rotation residents caused less severe discomfort during simulated colonoscopy than pre-endoscopy rotation residents (4 vs. 10%; p = 0.004). Virtual reality endoscopic simulation is an effective tool for both augmenting surgical resident endoscopy cancer education and measuring improvement in resident performance after formal clinical endoscopic training.

  5. NeuroFlow: A General Purpose Spiking Neural Network Simulation Platform using Customizable Processors.

    PubMed

    Cheung, Kit; Schultz, Simon R; Luk, Wayne

    2015-01-01

    NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation to deliver optimized performance, such as the degree of parallelism to employ. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current or conductance based neuronal models such as integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to ~600,000 neurons and can achieve a real-time performance of 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times the speed of an 8-core processor, or 2.83 times the speed of GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation.

  6. NeuroFlow: A General Purpose Spiking Neural Network Simulation Platform using Customizable Processors

    PubMed Central

    Cheung, Kit; Schultz, Simon R.; Luk, Wayne

    2016-01-01

    NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation to deliver optimized performance, such as the degree of parallelism to employ. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current or conductance based neuronal models such as integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to ~600,000 neurons and can achieve a real-time performance of 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times the speed of an 8-core processor, or 2.83 times the speed of GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation. PMID:26834542

  7. Ensemble climate simulations using a fully coupled ocean-troposphere-stratosphere general circulation model.

    PubMed

    Huebener, H; Cubasch, U; Langematz, U; Spangehl, T; Niehörster, F; Fast, I; Kunze, M

    2007-08-15

    Long-term transient simulations are carried out in an initial condition ensemble mode using a global coupled climate model which includes comprehensive ocean and stratosphere components. This model, which is run for the years 1860-2100, allows the investigation of the troposphere-stratosphere interactions and the importance of representing the middle atmosphere in climate-change simulations. The model simulates the present-day climate (1961-2000) realistically in the troposphere, stratosphere and ocean. The enhanced stratospheric resolution leads to the simulation of sudden stratospheric warmings; however, their frequency is underestimated by a factor of 2 with respect to observations.In projections of the future climate using the Intergovernmental Panel on Climate Change special report on emissions scenarios A2, an increased tropospheric wave forcing counteracts the radiative cooling in the middle atmosphere caused by the enhanced greenhouse gas concentration. This leads to a more dynamically active, warmer stratosphere compared with present-day simulations, and to the doubling of the number of stratospheric warmings. The associated changes in the mean zonal wind patterns lead to a southward displacement of the Northern Hemisphere storm track in the climate-change signal.

  8. Simulations of Madden-Julian Oscillation in High Resolution Atmospheric General Circulation Model

    NASA Astrophysics Data System (ADS)

    Deng, Liping; Stenchikov, Georgiy; McCabe, Matthew; Bangalath, HamzaKunhu; Raj, Jerry; Osipov, Sergey

    2014-05-01

    The simulation of tropical signals, especially the Madden-Julian Oscillation (MJO), is one of the major deficiencies in current numerical models. The unrealistic features in the MJO simulations include the weak amplitude, more power at higher frequencies, displacement of the temporal and spatial distributions, eastward propagation speed being too fast, and a lack of coherent structure for the eastward propagation from the Indian Ocean to the Pacific (e.g., Slingo et al. 1996). While some improvement in simulating MJO variance and coherent eastward propagation has been attributed to model physics, model mean background state and air-sea interaction, studies have shown that the model resolution, especially for higher horizontal resolution, may play an important role in producing a more realistic simulation of MJO (e.g., Sperber et al. 2005). In this study, we employ unique high-resolution (25-km) simulations conducted using the Geophysical Fluid Dynamics Laboratory global High Resolution Atmospheric Model (HIRAM) to evaluate the MJO simulation against the European Center for Medium-range Weather Forecasts (ECMWF) Interim re-analysis (ERAI) dataset. We specifically focus on the ability of the model to represent the MJO related amplitude, spatial distribution, eastward propagation, and horizontal and vertical structures. Additionally, as the HIRAM output covers not only an historic period (1979-2012) but also future period (2012-2050), the impact of future climate change related to the MJO is illustrated. The possible changes in intensity and frequency of extreme weather and climate events (e.g., strong wind and heavy rainfall) in the western Pacific, the Indian Ocean and the Middle East North Africa (MENA) region are highlighted.

  9. An investigation on the body force modeling in a lattice Boltzmann BGK simulation of generalized Newtonian fluids

    NASA Astrophysics Data System (ADS)

    Farnoush, Somayeh; Manzari, Mehrdad T.

    2014-12-01

    Body force modeling is studied in the Generalized Newtonian (GN) fluid flow simulation using a single relaxation time lattice Boltzmann (LB) method. First, in a shear thickening Poiseuille flow, the necessity for studying body force modeling in the LB method is explained. Then, a parametric unified framework is constructed for the first time which is composed of a parametric LB model and its associated macroscopic dual equations in both steady state and transient simulations. This unified framework is used to compare the macroscopic behavior of different forcing models. Besides, using this unified framework, a new forcing model for steady state simulations is devised. Finally, by solving a number of test cases it is shown that numerical results confirm the theoretical arguments presented in this paper.

  10. A general and predictive model of anisotropic grain boundary energy and morphology for polycrystal-level simulations

    NASA Astrophysics Data System (ADS)

    Runnels, Brandon; Beyerlein, Irene; Conti, Sergio; Ortiz, Michael

    In this work, a new model for anisotropic GB energy and morphology is formulated that is fast, general, dependent on only three material parameters, and is verified by comparison with more than 40 MD and experimental datasets for (a)symmetric, tilt/twist, FCC/BCC materials, as well as experimental measurements. A relaxation algorithm is presented that is able to efficiently compute the optimal facet pattern and corresponding relaxed energy. Finally, the GB model is implemented as an interface model in a polycrystal simulation to observe the effects of GB in conjunction with elastic and plastic deformation. The simulations are compared with those using an isotropic GB model, and the effect of the GB isotropy on the bulk properties and microstructure is determined. The results have applications towards, e.g., improved polycrystal simulations, understanding void nucleation, and GB engineering.

  11. A study of nucleation and growth of thin films by means of computer simulation: General features

    NASA Technical Reports Server (NTRS)

    Salik, J.

    1984-01-01

    Some of the processes involved in the nucleation and growth of thin films were simulated by means of a digital computer. The simulation results were used to study the nucleation and growth kinetics resulting from the various processes. Kinetic results obtained for impingement, surface migration, impingement combined with surface migration, and with reevaporation are presented. A substantial fraction of the clusters may form directly upon impingement. Surface migration results in a decrease in cluster density, and reevaporation of atoms from the surface causes a further reduction in cluster density.

  12. Turning Simulation into Estimation: Generalized Exchange Algorithms for Exponential Family Models

    PubMed Central

    Maris, Gunter; Bechger, Timo; Glas, Cees

    2017-01-01

    The Single Variable Exchange algorithm is based on a simple idea; any model that can be simulated can be estimated by producing draws from the posterior distribution. We build on this simple idea by framing the Exchange algorithm as a mixture of Metropolis transition kernels and propose strategies that automatically select the more efficient transition kernels. In this manner we achieve significant improvements in convergence rate and autocorrelation of the Markov chain without relying on more than being able to simulate from the model. Our focus will be on statistical models in the Exponential Family and use two simple models from educational measurement to illustrate the contribution. PMID:28076429

  13. An exploratory simulation study of a head-up display for general aviation lightplanes

    NASA Technical Reports Server (NTRS)

    Harris, R. L., Sr.; Hewes, D. E.

    1973-01-01

    The concept of a simplified head-up display referred to as a landing-site indicator (LASI) for use in lightplanes is discussed. Results of a fixed-base simulation study exploring the feasibility of the LASI concept are presented in terms of measurements of pilot performance, control-activity parameters, and subjective comments of four test subjects. These subjects, all of whom had various degrees of piloting experience in this type aircraft, performed a series of simulated landings both with and without the LASI starting from different initial conditions in the final approach leg of the landing maneuver.

  14. The global distribution of natural tritium in precipitation simulated with an Atmospheric General Circulation Model and comparison with observations

    NASA Astrophysics Data System (ADS)

    Cauquoin, A.; Jean-Baptiste, P.; Risi, C.; Fourré, É.; Stenni, B.; Landais, A.

    2015-10-01

    The description of the hydrological cycle in Atmospheric General Circulation Models (GCMs) can be validated using water isotopes as tracers. Many GCMs now simulate the movement of the stable isotopes of water, but here we present the first GCM simulations modelling the content of natural tritium in water. These simulations were obtained using a version of the LMDZ General Circulation Model enhanced by water isotopes diagnostics, LMDZ-iso. To avoid tritium generated by nuclear bomb testing, the simulations have been evaluated against a compilation of published tritium datasets dating from before 1950, or measured recently. LMDZ-iso correctly captures the observed tritium enrichment in precipitation as oceanic air moves inland (the so-called continental effect) and the observed north-south variations due to the latitudinal dependency of the cosmogenic tritium production rate. The seasonal variability, linked to the stratospheric intrusions of air masses with higher tritium content into the troposphere, is correctly reproduced for Antarctica with a maximum in winter. LMDZ-iso reproduces the spring maximum of tritium over Europe, but underestimates it and produces a peak in winter that is not apparent in the data. This implementation of tritium in a GCM promises to provide a better constraint on: (1) the intrusions and transport of air masses from the stratosphere, and (2) the dynamics of the modelled water cycle. The method complements the existing approach of using stable water isotopes.

  15. Accurate and general treatment of electrostatic interaction in Hamiltonian adaptive resolution simulations

    NASA Astrophysics Data System (ADS)

    Heidari, M.; Cortes-Huerto, R.; Donadio, D.; Potestio, R.

    2016-10-01

    In adaptive resolution simulations the same system is concurrently modeled with different resolution in different subdomains of the simulation box, thereby enabling an accurate description in a small but relevant region, while the rest is treated with a computationally parsimonious model. In this framework, electrostatic interaction, whose accurate treatment is a crucial aspect in the realistic modeling of soft matter and biological systems, represents a particularly acute problem due to the intrinsic long-range nature of Coulomb potential. In the present work we propose and validate the usage of a short-range modification of Coulomb potential, the Damped shifted force (DSF) model, in the context of the Hamiltonian adaptive resolution simulation (H-AdResS) scheme. This approach, which is here validated on bulk water, ensures a reliable reproduction of the structural and dynamical properties of the liquid, and enables a seamless embedding in the H-AdResS framework. The resulting dual-resolution setup is implemented in the LAMMPS simulation package, and its customized version employed in the present work is made publicly available.

  16. Generalized methodology for modeling and simulating optical interconnection networks using diffraction analysis

    NASA Astrophysics Data System (ADS)

    Louri, Ahmed; Major, Michael C.

    1995-07-01

    Research in the field of free-space optical interconnection networks has reached a point where simula-tors and other design tools are desirable for reducing development costs and for improving design time. Previously proposed methodologies have only been applicable to simple systems. Our goal was to develop a simulation methodology capable of evaluating the performance characteristics for a variety of different free-space networks under a range of different configurations and operating states. The proposed methodology operates by first establishing the optical signal powers at various locations in the network. These powers are developed through the simulation by diffraction analysis of the light propagation through the network. After this evaluation, characteristics such as bit-error rate, signal-to-noise ratio, and system bandwidth are calculated. Further, the simultaneous evaluation of this process for a set of component misalignments provides a measure of the alignment tolerance of a design. We discuss this simulation process in detail as well as provide models for different optical interconnection network components.

  17. Speed of spinal vs general anaesthesia for category-1 caesarean section: a simulation and clinical observation-based study.

    PubMed

    Kathirgamanathan, A; Douglas, M J; Tyler, J; Saran, S; Gunka, V; Preston, R; Kliffer, P

    2013-07-01

    Controversy exists as to whether effective spinal anaesthesia can be achieved as quickly as general anaesthesia for a category-1 caesarean section. Sixteen consultants and three fellows in obstetric anaesthesia were timed performing spinal and general anaesthesia for category-1 caesarean section on a simulator. The simulation time commenced upon entry of the anaesthetist into the operating theatre and finished for the spinal anaesthetic at the end of intrathecal injection and for the general anaesthetic when the anaesthetist was happy for surgery to start. In the second clinical part of the study, the time from intrathecal administration to 'adequate surgical anaesthesia' (defined as adequate for start of a category-1 caesarean section) was estimated in 100 elective (category-4) caesarean sections. The median (IQR [range]) times (min:s) for spinal procedure, onset of spinal block and general anaesthesia were 2:56 (2:32-3:32 [1:22-3:50]), 5:56 (4:23-7:39 [2:9-13:32]) and 1:56 (1:39-2:9 [1:13-3:12]), respectively. The limiting factor in urgent spinal anaesthesia is the unpredictable time needed for adequate surgical block to develop.

  18. Field-based DGTD/PIC technique for general and stable simulation of interaction between light and electron bunches

    NASA Astrophysics Data System (ADS)

    Fallahi, Arya; Kärtner, Franz

    2014-12-01

    We introduce a hybrid technique based on the discontinuous Galerkin time domain (DGTD) and the particle in cell (PIC) simulation methods for the analysis of interaction between light and charged particles. The DGTD algorithm is a three-dimensional, dual-field and fully explicit method for efficiently solving Maxwell equations in the time domain on unstructured grids. On the other hand, the PIC algorithm is a versatile technique for the simulation of charged particles in an electromagnetic field. This paper introduces a novel strategy for combining both methods to solve for the electron motion and field distribution when an optical beam interacts with an electron bunch in a very general geometry. The developed software offers a complete and stable numerical solution of the problem for arbitrary charge and field distributions in the time domain on unstructured grids. For this purpose, an advanced search algorithm is developed for fast calculation of field data at charge points and for later importing to the PIC simulations. In addition, we propose a field-based coupling between the two methods resulting in a stable and precise time marching scheme for both fields and charged particle motion. To benchmark the solver, some examples are numerically solved and compared with analytical solutions. Eventually, the developed software is utilized to simulate the field emission from a flat metal plate and a silicon nano-tip. In the future, we will use this technique for the simulation and design of ultrafast compact x-ray sources.

  19. Spring: a general framework for collaborative, real-time surgical simulation.

    PubMed

    Montgomery, Kevin; Bruyns, Cynthia; Brown, Joel; Sorkin, Stephen; Mazzella, Frederic; Thonier, Guillaume; Tellier, Arnaud; Lerman, Benjamin; Menon, Anil

    2002-01-01

    We describe the implementation details of a real-time surgical simulation system with soft-tissue modeling and multi-user, multi-instrument, networked haptics. The simulator is cross-platform and runs on various Unix and Windows platforms. It is written in C++ with OpenGL for graphics; GLUT, GLUI, and MUI for user interface; and supports parallel processing. It allows for the relatively easy introduction of patient-specific anatomy and supports many common file formats. It performs soft-tissue modeling, some limited rigid-body dynamics, and suture modeling. The simulator interfaces to many different interaction devices and provides for multi-user, multi-instrument collaboration over the Internet. Many virtual tools have been created and their interactions with tissue have been implemented. In addition, a number of extra features, such as voice input/output, real-time texture-mapped video input, stereo and head-mounted display support, and replicated display facilities are presented.

  20. General Relativistic Magnetohydrodynamic Simulations of Jet Formation with a Thin Keplerian Disk

    NASA Technical Reports Server (NTRS)

    Mizuno, Yosuke; Nishikawa, Ken-Ichi; Koide, Shinji; Hardee, Philip; Gerald, J. Fishman

    2006-01-01

    We have performed several simulations of black hole systems (non-rotating, black hole spin parameter a = 0.0 and rapidly rotating, a = 0.95) with a geometrically thin Keplerian disk using the newly developed RAISHIN code. The simulation results show the formation of jets driven by the Lorentz force and the gas pressure gradient. The jets have mildly relativistic speed (greater than or equal to 0.4 c). The matter is continuously supplied from the accretion disk and the jet propagates outward until each applicable terminal simulation time (non-rotating: t/tau S = 275 and rotating: t/tau S = 200, tau s equivalent to r(sub s/c). It appears that a rotating black hole creates an additional, faster, and more collimated inner outflow (greater than or equal to 0.5 c) formed and accelerated by the twisted magnetic field resulting from frame-dragging in the black hole ergosphere. This new result indicates that jet kinematic structure depends on black hole rotation.

  1. Developing extensible lattice-Boltzmann simulators for general-purpose graphics-processing units

    SciTech Connect

    Walsh, S C; Saar, M O

    2011-12-21

    Lattice-Boltzmann methods are versatile numerical modeling techniques capable of reproducing a wide variety of fluid-mechanical behavior. These methods are well suited to parallel implementation, particularly on the single-instruction multiple data (SIMD) parallel processing environments found in computer graphics processing units (GPUs). Although more recent programming tools dramatically improve the ease with which GPU programs can be written, the programming environment still lacks the flexibility available to more traditional CPU programs. In particular, it may be difficult to develop modular and extensible programs that require variable on-device functionality with current GPU architectures. This paper describes a process of automatic code generation that overcomes these difficulties for lattice-Boltzmann simulations. It details the development of GPU-based modules for an extensible lattice-Boltzmann simulation package - LBHydra. The performance of the automatically generated code is compared to equivalent purpose written codes for both single-phase, multiple-phase, and multiple-component flows. The flexibility of the new method is demonstrated by simulating a rising, dissolving droplet in a porous medium with user generated lattice-Boltzmann models and subroutines.

  2. Generalized Simulation Model for a Switched-Mode Power Supply Design Course Using MATLAB/SIMULINK

    ERIC Educational Resources Information Center

    Liao, Wei-Hsin; Wang, Shun-Chung; Liu, Yi-Hua

    2012-01-01

    Switched-mode power supplies (SMPS) are becoming an essential part of many electronic systems as the industry drives toward miniaturization and energy efficiency. However, practical SMPS design courses are seldom offered. In this paper, a generalized MATLAB/SIMULINK modeling technique is first presented. A proposed practical SMPS design course at…

  3. Simulation of the Low-Level-Jet by general circulation models

    SciTech Connect

    Ghan, S.J.

    1996-04-01

    To what degree is the low-level jet climatology and it`s impact on clouds and precipitation being captured by current general circulation models? It is hypothesised that a need for a pramaterization exists. This paper describes this parameterization need.

  4. The atmospheric chemistry general circulation model ECHAM5/MESSy1: consistent simulation of ozone from the surface to the mesosphere

    NASA Astrophysics Data System (ADS)

    Jöckel, P.; Tost, H.; Pozzer, A.; Brühl, C.; Buchholz, J.; Ganzeveld, L.; Hoor, P.; Kerkweg, A.; Lawrence, M. G.; Sander, R.; Steil, B.; Stiller, G.; Tanarhte, M.; Taraborrelli, D.; van Aardenne, J.; Lelieveld, J.

    2006-11-01

    The new Modular Earth Submodel System (MESSy) describes atmospheric chemistry and meteorological processes in a modular framework, following strict coding standards. It has been coupled to the ECHAM5 general circulation model, which has been slightly modified for this purpose. A 90-layer model setup up to 0.01 hPa was used at spectral T42 resolution to simulate the lower and middle atmosphere. With the high vertical resolution the model simulates the Quasi-Biennial Oscillation. The model meteorology has been tested to check the influence of the changes to ECHAM5 and the radiation interactions with the new representation of atmospheric composition. In the simulations presented here a Newtonian relaxation technique was applied in the tropospheric part of the domain to weakly nudge the model towards the analysed meteorology during the period 1998-2005. This allows an efficient and direct evaluation with satellite and in-situ data. It is shown that the tropospheric wave forcing of the stratosphere in the model suffices to reproduce major stratospheric warming events leading e.g. to the vortex split over Antarctica in 2002. Characteristic features such as dehydration and denitrification caused by the sedimentation of polar stratospheric cloud particles and ozone depletion during winter and spring are simulated well, although ozone loss in the lower polar stratosphere is slightly underestimated. The model realistically simulates stratosphere-troposphere exchange processes as indicated by comparisons with satellite and in situ measurements. The evaluation of tropospheric chemistry presented here focuses on the distributions of ozone, hydroxyl radicals, carbon monoxide and reactive nitrogen compounds. In spite of minor shortcomings, mostly related to the relatively coarse T42 resolution and the neglect of inter-annual changes in biomass burning emissions, the main characteristics of the trace gas distributions are generally reproduced well. The MESSy submodels and the

  5. Thermal conductance of carbon nanotube contacts: Molecular dynamics simulations and general description of the contact conductance

    NASA Astrophysics Data System (ADS)

    Salaway, Richard N.; Zhigilei, Leonid V.

    2016-07-01

    The contact conductance of carbon nanotube (CNT) junctions is the key factor that controls the collective heat transfer through CNT networks or CNT-based materials. An improved understanding of the dependence of the intertube conductance on the contact structure and local environment is needed for predictive computational modeling or theoretical description of the effective thermal conductivity of CNT materials. To investigate the effect of local structure on the thermal conductance across CNT-CNT contact regions, nonequilibrium molecular dynamics (MD) simulations are performed for different intertube contact configurations (parallel fully or partially overlapping CNTs and CNTs crossing each other at different angles) and local structural environments characteristic of CNT network materials. The results of MD simulations predict a stronger CNT length dependence present over a broader range of lengths than has been previously reported and suggest that the effect of neighboring junctions on the conductance of CNT-CNT junctions is weak and only present when the CNTs that make up the junctions are within the range of direct van der Waals interaction with each other. A detailed analysis of the results obtained for a diverse range of intertube contact configurations reveals a nonlinear dependence of the conductance on the contact area (or number of interatomic intertube interactions) and suggests larger contributions to the conductance from areas of the contact where the density of interatomic intertube interactions is smaller. An empirical relation accounting for these observations and expressing the conductance of an arbitrary contact configuration through the total number of interatomic intertube interactions and the average number of interatomic intertube interactions per atom in the contact region is proposed. The empirical relation is found to provide a good quantitative description of the contact conductance for various CNT configurations investigated in the MD

  6. A general aviation simulator evaluation of a rate-enhanced instrument landing system display

    NASA Technical Reports Server (NTRS)

    Hinton, D. A.

    1981-01-01

    A piloted-simulation study was conducted to evaluate the effect on instrument landing system tracking performance of integrating localizer-error rate with raw localizer and glide-slope error. The display was named the pseudocommand tracking indicator (PCTI) because it provides an indication of the change of heading required to track the localizer center line. Eight instrument-rated pilots each flew five instrument approaches with the PCTI and five instrument approaches with a conventional course deviation indicator. The results show good overall pilot acceptance of the display, a significant improvement in localizer tracking error, and no significant changes in glide-slope tracking error or pilot workload.

  7. A general approach to develop reduced order models for simulation of solid oxide fuel cell stacks

    SciTech Connect

    Pan, Wenxiao; Bao, Jie; Lo, Chaomei; Lai, Canhai; Agarwal, Khushbu; Koeppel, Brian J.; Khaleel, Mohammad A.

    2013-06-15

    A reduced order modeling approach based on response surface techniques was developed for solid oxide fuel cell stacks. This approach creates a numerical model that can quickly compute desired performance variables of interest for a stack based on its input parameter set. The approach carefully samples the multidimensional design space based on the input parameter ranges, evaluates a detailed stack model at each of the sampled points, and performs regression for selected performance variables of interest to determine the responsive surfaces. After error analysis to ensure that sufficient accuracy is established for the response surfaces, they are then implemented in a calculator module for system-level studies. The benefit of this modeling approach is that it is sufficiently fast for integration with system modeling software and simulation of fuel cell-based power systems while still providing high fidelity information about the internal distributions of key variables. This paper describes the sampling, regression, sensitivity, error, and principal component analyses to identify the applicable methods for simulating a planar fuel cell stack.

  8. SciDAC - Center for Simulation of Wave Interactions with MHD -- General Atomics Support of ORNL Collaboration

    SciTech Connect

    Abla, G

    2012-11-09

    The Center for Simulation of Wave Interactions with Magnetohydrodynamics (SWIM) project is dedicated to conduct research on integrated multi-physics simulations. The Integrated Plasma Simulator (IPS) is a framework that was created by the SWIM team. It provides an integration infrastructure for loosely coupled component-based simulations by facilitating services for code execution coordination, computational resource management, data management, and inter-component communication. The IPS framework features improving resource utilization, implementing application-level fault tolerance, and support of the concurrent multi-tasking execution model. The General Atomics (GA) team worked closely with other team members on this contract, and conducted research in the areas of computational code monitoring, meta-data management, interactive visualization, and user interfaces. The original website to monitor SWIM activity was developed in the beginning of the project. Due to the amended requirements, the software was redesigned and a revision of the website was deployed into production in April of 2010. Throughout the duration of this project, the SWIM Monitoring Portal (http://swim.gat.com:8080/) has been a critical production tool for supporting the project's physics goals.

  9. TOUGH2: A general-purpose numerical simulator for multiphase fluid and heat flow

    SciTech Connect

    Pruess, K.

    1991-05-01

    TOUGH2 is a numerical simulation program for nonisothermal flows of multicomponent, multiphase fluids in porous and fractured media. The chief applications for which TOUGH2 is designed are in geothermal reservoir engineering, nuclear waste disposal, and unsaturated zone hydrology. A successor to the TOUGH program, TOUGH2 offers added capabilities and user features, including the flexibility to handle different fluid mixtures, facilities for processing of geometric data (computational grids), and an internal version control system to ensure referenceability of code applications. This report includes a detailed description of governing equations, program architecture, and user features. Enhancements in data inputs relative to TOUGH are described, and a number of sample problems are given to illustrate code applications. 46 refs., 29 figs., 12 tabs.

  10. Interannual tropical rainfall variability in general circulation model simulations associated with the atmospheric model intercomparison project

    SciTech Connect

    Sperber, K.R.; Palmer, T.N.

    1996-11-01

    The interannual variability of rainfall over the Indian subcontinent, the African Sahel, and the Nordeste region of Brazil have been evaluated in 32 models for the period 1979 - 88 as part of the Atmospheric Model Intercomparison Project (AMIP). The interannual variations of Nordeste rainfall are the most readily captured, owing to the intimate link with Pacific and Atlantic sea surface temperatures. The precipitation variations over India and the Sahel are less well simulated. Additionally, an Indian monsoon wind shear index was calculated for each model. This subset of models also had a rainfall climatology that was in better agreement with observations, indicating a link between systematic model error and the ability to simulate interannual variations. A suite of six European Centre for Medium-Range Weather Forecasts (ECMWF) AMIP runs (differing only in their initial conditions) have also been examined. As observed, all-India rainfall was enhanced in 1988 relative to 1987 in each of these realizations. All-India rainfall variability during other years showed little or no predictability, possibly due to internal chaotic dynamics associated with intraseasonal monsoon fluctuations and/or unpredictable land surface process interactions. The interannual variations of Nordeste rainfall were best represented. The State University of New York at Albany /National Center for Atmospheric Research Genesis model was run in five initial condition realizations. In this model, the Nordeste rainfall variability was also best reproduced. However, for all regions the skill was less than that of the ECMWF model. The relationships of the all-India and Sahel rainfall/SST teleconnections with horizontal resolution, convection scheme closure, and numerics have been evaluated. 64 refs., 13 figs., 3 tabs.

  11. General Relativistic Hydrodynamic Simulation of Accretion Flow from a Stellar Tidal Disruption

    NASA Astrophysics Data System (ADS)

    Shiokawa, Hotaka; Krolik, Julian H.; Cheng, Roseanne M.; Piran, Tsvi; Noble, Scott C.

    2015-05-01

    We study how the matter dispersed when a supermassive black hole tidally disrupts a star joins an accretion flow. Combining a relativistic hydrodynamic simulation of the stellar disruption with a relativistic hydrodynamics simulation of the subsequent debris motion, we track the evolution of such a system until ≃ 80% of the stellar mass bound to the black hole has settled into an accretion flow. Shocks near the stellar pericenter and also near the apocenter of the most tightly bound debris dissipate orbital energy, but only enough to make its characteristic radius comparable to the semimajor axis of the most bound material, not the tidal radius as previously envisioned. The outer shocks are caused by post-Newtonian relativistic effects, both on the stellar orbit during its disruption and on the tidal forces. Accumulation of mass into the accretion flow is both non-monotonic and slow, requiring several to 10 times the orbital period of the most tightly bound tidal streams, while the inflow time for most of the mass may be comparable to or longer than the mass accumulation time. Deflection by shocks does, however, cause some mass to lose both angular momentum and energy, permitting it to move inward even before most of the mass is accumulated into the accretion flow. Although the accretion rate still rises sharply and then decays roughly as a power law, its maximum is ≃ 0.1× the previous expectation, and the timescale of the peak is ≃ 5× longer than previously predicted. The geometric mean of the black hole mass and stellar mass inferred from a measured event timescale is therefore ≃ 0.2× the value given by classical theory.

  12. CO adsorption over Pd nanoparticles: A general framework for IR simulations on nanoparticles

    NASA Astrophysics Data System (ADS)

    Zeinalipour-Yazdi, Constantinos D.; Willock, David J.; Thomas, Liam; Wilson, Karen; Lee, Adam F.

    2016-04-01

    CO vibrational spectra over catalytic nanoparticles under high coverages/pressures are discussed from a DFT perspective. Hybrid B3LYP and PBE DFT calculations of CO chemisorbed over Pd4 and Pd13 nanoclusters, and a 1.1 nm Pd38 nanoparticle, have been performed in order to simulate the corresponding coverage dependent infrared (IR) absorption spectra, and hence provide a quantitative foundation for the interpretation of experimental IR spectra of CO over Pd nanocatalysts. B3LYP simulated IR intensities are used to quantify site occupation numbers through comparison with experimental DRIFTS spectra, allowing an atomistic model of CO surface coverage to be created. DFT adsorption energetics for low CO coverage (θ → 0) suggest the CO binding strength follows the order hollow > bridge > linear, even for dispersion-corrected functionals for sub-nanometre Pd nanoclusters. For a Pd38 nanoparticle, hollow and bridge-bound are energetically similar (hollow ≈ bridge > atop). It is well known that this ordering has not been found at the high coverages used experimentally, wherein atop CO has a much higher population than observed over Pd(111), confirmed by our DRIFTS spectra for Pd nanoparticles supported on a KIT-6 silica, and hence site populations were calculated through a comparison of DFT and spectroscopic data. At high CO coverage (θ = 1), all three adsorbed CO species co-exist on Pd38, and their interdiffusion is thermally feasible at STP. Under such high surface coverages, DFT predicts that bridge-bound CO chains are thermodynamically stable and isoenergetic to an entirely hollow bound Pd/CO system. The Pd38 nanoparticle undergoes a linear (3.5%), isotropic expansion with increasing CO coverage, accompanied by 63 and 30 cm- 1 blue-shifts of hollow and linear bound CO respectively.

  13. Simulating coarse-scale vegetation dynamics using the Columbia River Basin succession model-crbsum. Forest Service general technical report

    SciTech Connect

    Keane, R.E.; Long, D.G.; Menakis, J.P.; Hann, W.J.; Bevins, C.D.

    1996-10-01

    The paper details the landscape succession model developed for the coarse-scale assessment called CRBSUM (Columbia River Basin SUccession Model) and presents some general results of the application of this model to the entire basin. CRBSUM was used to predict future landscape characteristics to evaluate management alternatives for both mid-and coarse-scale efforts. A test and sensitivity analysis of CRBSUM is also presented. This paper was written as a users guide for those who wish to run the model and interprete results, and its was also written as documentation for some results of the Interior Columbia River Basin simulation effort.

  14. Finite element for rotor/stator interactive forces in general engine dynamic simulation. Part 1: Development of bearing damper element

    NASA Technical Reports Server (NTRS)

    Adams, M. L.; Padovan, J.; Fertis, D. G.

    1980-01-01

    A general purpose squeeze-film damper interactive force element was developed, coded into a software package (module) and debugged. This software package was applied to nonliner dynamic analyses of some simple rotor systems. Results for pressure distributions show that the long bearing (end sealed) is a stronger bearing as compared to the short bearing as expected. Results of the nonlinear dynamic analysis, using a four degree of freedom simulation model, showed that the orbit of the rotating shaft increases nonlinearity to fill the bearing clearance as the unbalanced weight increases.

  15. The r-process in black hole-neutron star mergers based on a fully general-relativistic simulation

    NASA Astrophysics Data System (ADS)

    Nishimura, N.; Wanajo, S.; Sekiguchi, Y.; Kiuchi, K.; Kyutoku, K.; Shibata, M.

    2016-01-01

    We investigate the black hole-neutron star binary merger in the contest of the r-process nucleosynthesis. Employing a hydrodynamical model simulated in the framework of full general relativity, we perform nuclear reaction network calculations. The extremely neutron-rich matter with the total mass 0.01 M⊙ is ejected, in which a strong r-process with fission cycling proceeds due to the high neutron number density. We discuss relevant astrophysical issues such as the origin of r-process elements as well as the r-process powered electromagnetic transients.

  16. Modelling uncertainty in incompressible flow simulation using Galerkin based generalized ANOVA

    NASA Astrophysics Data System (ADS)

    Chakraborty, Souvik; Chowdhury, Rajib

    2016-11-01

    This paper presents a new algorithm, referred to here as Galerkin based generalized analysis of variance decomposition (GG-ANOVA) for modelling input uncertainties and its propagation in incompressible fluid flow. The proposed approach utilizes ANOVA to represent the unknown stochastic response. Further, the unknown component functions of ANOVA are represented using the generalized polynomial chaos expansion (PCE). The resulting functional form obtained by coupling the ANOVA and PCE is substituted into the stochastic Navier-Stokes equation (NSE) and Galerkin projection is employed to decompose it into a set of coupled deterministic 'Navier-Stokes alike' equations. Temporal discretization of the set of coupled deterministic equations is performed by employing Adams-Bashforth scheme for convective term and Crank-Nicolson scheme for diffusion term. Spatial discretization is performed by employing finite difference scheme. Implementation of the proposed approach has been illustrated by two examples. In the first example, a stochastic ordinary differential equation has been considered. This example illustrates the performance of proposed approach with change in nature of random variable. Furthermore, convergence characteristics of GG-ANOVA has also been demonstrated. The second example investigates flow through a micro channel. Two case studies, namely the stochastic Kelvin-Helmholtz instability and stochastic vortex dipole, have been investigated. For all the problems results obtained using GG-ANOVA are in excellent agreement with benchmark solutions.

  17. Generalized Eddington analytical model for azimuthally dependent radiance simulation in stratified media.

    PubMed

    Marzano, Frank S; Ferrauto, Giancarlo

    2005-10-01

    A fast analytical radiative transfer model to account for propagation of unpolarized monochromatic radiation in random media with a plane-parallel geometry is presented. The model employs an Eddington-like approach combined with the delta phase-function transformation technique. The Eddington approximation is extended in a form that allows us to unfold the azimuthal dependence of the radiance field. A first-order scattering correction to the azimuth-dependent Eddington radiative model solution is also performed to improve the model accuracy for low-scattering media and flexibility with respect to use of explicit arbitrary phase functions. The first-order scattering-corrected solution, called the generalized Eddington radiative model (GERM), is systematically tested against a numerical multistream discrete ordinate model for backscattered radiance at the top of the medium. The typical mean accuracy of the GERM solution is generally better than 10% with a standard deviation of 20% for radiance calculations over a wide range of independent input optical parameters and observation angles. GERM errors are shown to be comparable with the errors due to an input parameter uncertainty of precise numerical models. The proposed model can be applied in a quite arbitrary random medium, and the results are appealing in all cases where speed, accuracy, and/or closed-form solutions are requested. Its potentials, limitations, and further extensions are discussed.

  18. Generalized fictitious methods for fluid-structure interactions: Analysis and simulations

    NASA Astrophysics Data System (ADS)

    Yu, Yue; Baek, Hyoungsu; Karniadakis, George Em

    2013-07-01

    We present a new fictitious pressure method for fluid-structure interaction (FSI) problems in incompressible flow by generalizing the fictitious mass and damping methods we published previously in [1]. The fictitious pressure method involves modification of the fluid solver whereas the fictitious mass and damping methods modify the structure solver. We analyze all fictitious methods for simplified problems and obtain explicit expressions for the optimal reduction factor (convergence rate index) at the FSI interface [2]. This analysis also demonstrates an apparent similarity of fictitious methods to the FSI approach based on Robin boundary conditions, which have been found to be very effective in FSI problems. We implement all methods, including the semi-implicit Robin based coupling method, in the context of spectral element discretization, which is more sensitive to temporal instabilities than low-order methods. However, the methods we present here are simple and general, and hence applicable to FSI based on any other spatial discretization. In numerical tests, we verify the selection of optimal values for the fictitious parameters for simplified problems and for vortex-induced vibrations (VIV) even at zero mass ratio ("for-ever-resonance"). We also develop an empirical a posteriori analysis for complex geometries and apply it to 3D patient-specific flexible brain arteries with aneurysms for very large deformations. We demonstrate that the fictitious pressure method enhances stability and convergence, and is comparable or better in most cases to the Robin approach or the other fictitious methods.

  19. Generalized three-dimensional simulation of ferruled coupled-cavity traveling-wave-tube dispersion and impedance characteristics

    NASA Technical Reports Server (NTRS)

    Maruschek, Joseph W.; Kory, Carol L.; Wilson, Jeffrey D.

    1993-01-01

    The frequency-phase dispersion and Pierce on-axis interaction impedance of a ferruled, coupled-cavity, traveling-wave tube (TWT), slow-wave circuit were calculated using the three-dimensional simulation code Micro-SOS. The utilization of the code to reduce costly and time-consuming experimental cold tests is demonstrated by the accuracy achieved in calculating these parameters. A generalized input file was developed so that ferruled coupled-cavity TWT slow-wave circuits of arbitrary dimensions could be easily modeled. The practicality of the generalized input file was tested by applying it to the ferruled coupled-cavity slow-wave circuit of the Hughes Aircraft Company model 961HA TWT and by comparing the results with experimental results.

  20. Generalized Modelling of the Stabilizer Link and Static Simulation Using FEM

    NASA Astrophysics Data System (ADS)

    Cofaru, Nicolae Florin; Roman, Lucian Ion; Oleksik, Valentin; Pascu, Adrian

    2016-12-01

    This paper proposes an organological approach of one of the components of front suspension, namely anti-roll power link. There will be realized a CAD 3D modelling of this power link. 3D modelling is generalized and there were used the powers of Catia V5R20 software. Parameterized approach provides a high flexibility in the design, meaning that dimensional and shape changes of the semi-power link are very easy to perform just by changing some parameters. Several new versions are proposed for the anti-roll power link body. At the end of the work, it is made a static analysis of the semi-power link model used in the suspension of vehicles OPEL ASTRA G, ZAFIRA, MERIVA, and constructive optimization of its body.

  1. 3D Simulations of the Early Mars Climate with a General Circulation Model

    NASA Technical Reports Server (NTRS)

    Forget, F.; Haberle, R. M.; Montmessin, F.; Cha, S.; Marcq, E.; Schaeffer, J.; Wanherdrick, Y.

    2003-01-01

    The environmental conditions that existed on Mars during the Noachian period are subject to debate in the community. In any case, there are compelling evidence that these conditions were different than what they became later in the amazonian and possibly the Hesperian periods. Indeed, most of the old cratered terrains are disected by valley networks (thought to have been carved by flowing liquid water), whereas younger surface are almost devoid of such valleys. In addition, there are evidence that the erosion rate was much higher during the early noachian than later. Flowing water is surprising on early Mars because the solar luminosity was significantly lower than today. Even with the thick atmosphere (up to several bars).To improve our understanding of the early Mars Climate, we have developed a 3D general circulation model similar to the one used on current Earth or Mars to study the details of the climate today. Our first objective is to answer the following questions : how is the Martian climate modified if 1) the surface pressure is increased up to several bars (our baseline: 2 bars) and 2) if the sun luminosity is decreased by 25 account the heat possibly released by impacts during short periods, although it may have played a role .For this purpose, we have coupled the Martian General Circulation model developed at LMD with a sophisticated correlated k distribution model developped at NASA Ames Research Center. It is a narrow band model which computes the radiative transfer at both solar and thermal wavelengths (from 0.3 to 250 microns).

  2. Analog approach to mixed analog-digital circuit simulation

    NASA Astrophysics Data System (ADS)

    Ogrodzki, Jan

    2013-10-01

    Logic simulation of digital circuits is a well explored research area. Most up-to-date CAD tools for digital circuits simulation use an event driven, selective trace algorithm and Hardware Description Languages (HDL), e.g. the VHDL. This techniques enable simulation of mixed circuits, as well, where an analog part is connected to the digital one through D/A and A/D converters. The event-driven mixed simulation applies a unified, digital-circuits dedicated method to both digital and analog subsystems. In recent years HDL techniques have been also applied to mixed domains, as e.g. in the VHDL-AMS. This paper presents an approach dual to the event-driven one, where an analog part together with a digital one and with converters is treated as the analog subsystem and is simulated by means of circuit simulation techniques. In our problem an analog solver used yields some numerical problems caused by nonlinearities of digital elements. Efficient methods for overriding these difficulties have been proposed.

  3. A Second Law Based Unstructured Finite Volume Procedure for Generalized Flow Simulation

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok

    1998-01-01

    An unstructured finite volume procedure has been developed for steady and transient thermo-fluid dynamic analysis of fluid systems and components. The procedure is applicable for a flow network consisting of pipes and various fittings where flow is assumed to be one dimensional. It can also be used to simulate flow in a component by modeling a multi-dimensional flow using the same numerical scheme. The flow domain is discretized into a number of interconnected control volumes located arbitrarily in space. The conservation equations for each control volume account for the transport of mass, momentum and entropy from the neighboring control volumes. In addition, they also include the sources of each conserved variable and time dependent terms. The source term of entropy equation contains entropy generation due to heat transfer and fluid friction. Thermodynamic properties are computed from the equation of state of a real fluid. The system of equations is solved by a hybrid numerical method which is a combination of simultaneous Newton-Raphson and successive substitution schemes. The paper also describes the application and verification of the procedure by comparing its predictions with the analytical and numerical solution of several benchmark problems.

  4. Generalized Metropolis acceptance criterion for hybrid non-equilibrium molecular dynamics—Monte Carlo simulations

    SciTech Connect

    Chen, Yunjie; Roux, Benoît

    2015-01-14

    A family of hybrid simulation methods that combines the advantages of Monte Carlo (MC) with the strengths of classical molecular dynamics (MD) consists in carrying out short non-equilibrium MD (neMD) trajectories to generate new configurations that are subsequently accepted or rejected via an MC process. In the simplest case where a deterministic dynamic propagator is used to generate the neMD trajectories, the familiar Metropolis acceptance criterion based on the change in the total energy ΔE, min[1,  exp( − βΔE)], guarantees that the hybrid algorithm will yield the equilibrium Boltzmann distribution. However, the functional form of the acceptance probability is more complex when the non-equilibrium switching process is generated via a non-deterministic stochastic dissipative propagator coupled to a heat bath. Here, we clarify the conditions under which the Metropolis criterion remains valid to rigorously yield a proper equilibrium Boltzmann distribution within hybrid neMD-MC algorithm.

  5. Simulating water with rigid non-polarizable models: a general perspective.

    PubMed

    Vega, Carlos; Abascal, Jose L F

    2011-11-28

    Over the last forty years many computer simulations of water have been performed using rigid non-polarizable models. Since these models describe water interactions in an approximate way it is evident that they cannot reproduce all of the properties of water. By now many properties for these kinds of models have been determined and it seems useful to compile some of these results and provide a critical view of the successes and failures. In this paper a test is proposed in which 17 properties of water, from the vapour and liquid to the solid phases, are taken into account to evaluate the performance of a water model. A certain number of points between zero (bad agreement) and ten (good agreement) are given for the predictions of each model and property. We applied the test to five rigid non-polarizable models, TIP3P, TIP5P, TIP4P, SPC/E and TIP4P/2005, obtaining an average score of 2.7, 3.7, 4.7, 5.1, and 7.2 respectively. Thus although no model reproduces all properties, some models perform better than others. It is clear that there are limitations for rigid non-polarizable models. Neglecting polarizability prevents an accurate description of virial coefficients, vapour pressures, critical pressure and dielectric constant. Neglecting nuclear quantum effects prevents an accurate description of the structure, the properties of water below 120 K and the heat capacity. It is likely that for rigid non-polarizable models it may not be possible to increase the score in the test proposed here beyond 7.6. To get closer to experiment, incorporating polarization and nuclear quantum effects is absolutely required even though a substantial increase in computer time should be expected. The test proposed here, being quantitative and selecting properties from all phases of water can be useful in the future to identify progress in the modelling of water.

  6. Multiyear Simulations of the Martian Water Cycle with the Ames General Circulation Model

    NASA Technical Reports Server (NTRS)

    Haberle, R. M.; Schaeffer, J. R.; Nelli, S. M.; Murphy, J. R.

    2003-01-01

    Mars atmosphere is carbon dioxide dominated with non-negligible amounts of water vapor and suspended dust particles. The atmospheric dust plays an important role in the heating and cooling of the planet through absorption and emission of radiation. Small dust particles can potentially be carried to great altitudes and affect the temperatures there. Water vapor condensing onto the dust grains can affect the radiative properties of both, as well as their vertical extent. The condensation of water onto a dust grain will change the grain s fall speed and diminish the possibility of dust obtaining high altitudes. In this capacity, water becomes a controlling agent with regard to the vertical distribution of dust. Similarly, the atmosphere s water vapor holding capacity is affected by the amount of dust in the atmosphere. Dust is an excellent green house catalyst; it raises the temperature of the atmosphere, and thus, its water vapor holding capacity. There is, therefore, a potentially significant interplay between the Martian dust and water cycles. Previous research done using global, 3-D computer modeling to better understand the Martian atmosphere treat the dust and the water cycles as two separate and independent processes. The existing Ames numerical model will be employed to simulate the relationship between the Martian dust and water cycles by actually coupling the two cycles. Water will condense onto the dust, allowing the particle's radiative characteristics, fall speeds, and as a result, their vertical distribution to change. Data obtained from the Viking, Mars Pathfinder, and especially the Mars Global Surveyor missions will be used to determine the accuracy of the model results.

  7. El Nino-southern oscillation simulated in an MRI atmosphere-ocean coupled general circulation model

    SciTech Connect

    Nagai, T.; Tokioka, T.; Endoh, M.; Kitamura, Y. )

    1992-11-01

    A coupled atmosphere-ocean general circulation model (GCM) was time integrated for 30 years to study interannual variability in the tropics. The atmospheric component is a global GCM with 5 levels in the vertical and 4[degrees]latitude X 5[degrees] longitude grids in the horizontal including standard physical processes (e.g., interactive clouds). The oceanic component is a GCM for the Pacific with 19 levels in the vertical and 1[degrees]x 2.5[degrees] grids in the horizontal including seasonal varying solar radiation as forcing. The model succeeded in reproducing interannual variations that resemble the El Nino-Southern Oscillation (ENSO) with realistic seasonal variations in the atmospheric and oceanic fields. The model ENSO cycle has a time scale of approximately 5 years and the model El Nino (warm) events are locked roughly in phase to the seasonal cycle. The cold events, however, are less evident in comparison with the El Nino events. The time scale of the model ENSO cycle is determined by propagation time of signals from the central-eastern Pacific to the western Pacific and back to the eastern Pacific. Seasonal timing is also important in the ENSO time scale: wind anomalies in the central-eastern Pacific occur in summer and the atmosphere ocean coupling in the western Pacific operates efficiently in the first half of the year.

  8. Simulating the universe(s) II: phenomenology of cosmic bubble collisions in full general relativity

    SciTech Connect

    Wainwright, Carroll L.; Aguirre, Anthony; Johnson, Matthew C.; Peiris, Hiranya V. E-mail: mjohnson@perimeterinstitute.ca E-mail: h.peiris@ucl.ac.uk

    2014-10-01

    Observing the relics of collisions between bubble universes would provide direct evidence for the existence of an eternally inflating Multiverse; the non-observation of such events can also provide important constraints on inflationary physics. Realizing these prospects requires quantitative predictions for observables from the properties of the possible scalar field Lagrangians underlying eternal inflation. Building on previous work, we establish this connection in detail. We perform a fully relativistic numerical study of the phenomenology of bubble collisions in models with a single scalar field, computing the comoving curvature perturbation produced in a wide variety of models. We also construct a set of analytic predictions, allowing us to identify the phenomenologically relevant properties of the scalar field Lagrangian. The agreement between the analytic predictions and numerics in the relevant regions is excellent, and allows us to generalize our results beyond the models we adopt for the numerical studies. Specifically, the signature is completely determined by the spatial profile of the colliding bubble just before the collision, and the de Sitter invariant distance between the bubble centers. The analytic and numerical results support a power-law fit with an index 1< κ ∼< 2. For collisions between identical bubbles, we establish a lower-bound on the observed amplitude of collisions that is set by the present energy density in curvature.

  9. Simulating the universe(s) II: phenomenology of cosmic bubble collisions in full general relativity

    NASA Astrophysics Data System (ADS)

    Wainwright, Carroll L.; Johnson, Matthew C.; Aguirre, Anthony; Peiris, Hiranya V.

    2014-10-01

    Observing the relics of collisions between bubble universes would provide direct evidence for the existence of an eternally inflating Multiverse; the non-observation of such events can also provide important constraints on inflationary physics. Realizing these prospects requires quantitative predictions for observables from the properties of the possible scalar field Lagrangians underlying eternal inflation. Building on previous work, we establish this connection in detail. We perform a fully relativistic numerical study of the phenomenology of bubble collisions in models with a single scalar field, computing the comoving curvature perturbation produced in a wide variety of models. We also construct a set of analytic predictions, allowing us to identify the phenomenologically relevant properties of the scalar field Lagrangian. The agreement between the analytic predictions and numerics in the relevant regions is excellent, and allows us to generalize our results beyond the models we adopt for the numerical studies. Specifically, the signature is completely determined by the spatial profile of the colliding bubble just before the collision, and the de Sitter invariant distance between the bubble centers. The analytic and numerical results support a power-law fit with an index 1< κ lesssim 2. For collisions between identical bubbles, we establish a lower-bound on the observed amplitude of collisions that is set by the present energy density in curvature.

  10. Radiative, two-temperature simulations of low-luminosity black hole accretion flows in general relativity

    NASA Astrophysics Data System (ADS)

    Sądowski, Aleksander; Wielgus, Maciek; Narayan, Ramesh; Abarca, David; McKinney, Jonathan C.; Chael, Andrew

    2017-04-01

    We present a numerical method that evolves a two-temperature, magnetized, radiative, accretion flow around a black hole, within the framework of general relativistic radiation magnetohydrodynamics. As implemented in the code KORAL, the gas consists of two sub-components - ions and electrons - which share the same dynamics but experience independent, relativistically consistent, thermodynamical evolution. The electrons and ions are heated independently according to a prescription from the literature for magnetohydrodynamical turbulent dissipation. Energy exchange between the particle species via Coulomb collisions is included. In addition, electrons gain and lose energy and momentum by absorbing and emitting synchrotron and bremsstrahlung radiation and through Compton scattering. All evolution equations are handled within a fully covariant framework in the relativistic fixed-metric space-time of the black hole. Numerical results are presented for five models of low-luminosity black hole accretion. In the case of a model with a mass accretion rate dot{M}˜ 4× 10^{-8} dot{M}_Edd, we find that radiation has a negligible effect on either the dynamics or the thermodynamics of the accreting gas. In contrast, a model with a larger dot{M}˜ 4× 10^{-4} dot{M}_Edd behaves very differently. The accreting gas is much cooler and the flow is geometrically less thick, though it is not quite a thin accretion disc.

  11. Simulating the effects of the 1991 Mount Pinatubo volcanic eruption using the ARPEGE atmosphere general circulation model

    NASA Astrophysics Data System (ADS)

    Otterå, Odd Helge

    2008-03-01

    The climate changes that occured following the volcanic eruption of Mount Pinatubo in the Phillippines on 15 June 1991 have been simulated using the ARPEGE atmosphere general circulation model (AGCM). The model was forced by a reconstructed spatial-time distribution of stratospheric aerosols intended for use in long climate simulations. Four statistical ensembles of the AGCM simulations with and without volcanic aerosols over a period of 5 years following the eruption have been made, and the calculated fields have been compared to available observations. The model is able to reproduce some of the observed features after the eruption, such as the winter warming pattern that was observed over the Northern Hemisphere (NH) during the following winters. This pattern was caused by an enhanced Equator-to-pole temperature gradient in the stratosphere that developed due to aerosol heating of the tropics. This in turn led to a strengthening of the polar vortex, which tends to modulate the planetary wave field in such a way that an anomalously positive Arctic Oscillation pattern is produced in the troposphere and at the surface, favouring warm conditions over the NH. During the summer, the model produced a more uniform cooling over the NH.

  12. General-Relativistic Three-Dimensional Multi-group Neutrino Radiation-Hydrodynamics Simulations of Core-Collapse Supernovae

    NASA Astrophysics Data System (ADS)

    Roberts, Luke F.; Ott, Christian D.; Haas, Roland; O'Connor, Evan P.; Diener, Peter; Schnetter, Erik

    2016-11-01

    We report on a set of long-term general-relativistic three-dimensional (3D) multi-group (energy-dependent) neutrino radiation-hydrodynamics simulations of core-collapse supernovae. We employ a full 3D two-moment scheme with the local M1 closure, three neutrino species, and 12 energy groups per species. With this, we follow the post-core-bounce evolution of the core of a nonrotating 27 - {M}⊙ progenitor in full unconstrained 3D and in octant symmetry for ≳380 ms. We find the development of an asymmetric runaway explosion in our unconstrained simulation. We test the resolution dependence of our results and, in agreement with previous work, find that low resolution artificially aids explosion and leads to an earlier runaway expansion of the shock. At low resolution, the octant and full 3D dynamics are qualitatively very similar, but at high resolution, only the full 3D simulation exhibits the onset of explosion.

  13. General dynamic properties of Abeta12-36 amyloid peptide involved in Alzheimer's disease from unfolding simulation.

    PubMed

    Suzuki, Shinya; Galzitskaya, Oxana V; Mitomo, Daisuke; Higo, Junichi

    2004-11-01

    To study the folding/unfolding properties of a beta-amyloid peptide Abeta(12-36) of Alzheimer's disease, five molecular dynamics simulations of Abeta(12-36) in explicit water were done at 450 K starting from a structure that is stable in trifluoroethanol/water at room temperature with two alpha-helices. Due to high temperature, the initial helical structure unfolded during the simulation. The observed aspects of the unfolding were as follows. 1) One helix (helix 1) had a longer life than the other (helix 2), which correlates well with the theoretically computed Phi values. 2) Temporal prolongation of helix 1 was found before unfolding. 3) Hydrophobic cores formed frequently with rearrangement of amino-acid residues in the hydrophobic cores. The formation and rearrangement of the hydrophobic cores may be a general aspect of this peptide in the unfolded state, and the structural changes accompanied by the hydrophobic-core rearrangement may lead the peptide to the most stable structure. 4) Concerted motions (collective modes) appeared to unfold helix 1. The collective modes were similar with those observed in another simulation at 300 K. The analysis implies that the conformation moves according to the collective modes when the peptide is in the initial stage of protein unfolding and in the final stage of protein folding.

  14. Comparisons of spectral thermospheric general circulation model simulations and E and F region chemical release wind observations

    NASA Astrophysics Data System (ADS)

    Mikkelsen, I. S.; Larsen, M. F.

    1993-03-01

    High-latitude chemical release wind measurements were carried out in February and March 1978, in March 1985, and in March 1987. In each of the experiments, wind profiles were obtained covering heights in both the E and the F region. Three of the release experiments were carried out on the evening side of the auroral oval and one on the morning side. Two sets of measurements were carried out in disturbed conditions at solar maximum, while the other two were carried out during quiet periods at solar minimum. The spectral thermospheric general circulation model that has been developed at the Danish Meteorological Institute is used to simulate the conditions appropriate to each of the four experiments and detailed comparisons between the model predictions and the measurements are presented. Considering the uncertainties in the various external sources of forcing, such as the plasma convection patterns, the model adequately reproduces the major features of all the wind profiles. However in the E region the relative wind maxima from the model are, in general, above the heights of the observed wind maxima, possibly due to the oversimplified auroral precipitation used in the model, with the electrons being represented by single Maxwellian energy spectra only. The uncoupled neutral and ionized atmospheric compositions used in the model may also explain part of the unrealistic simulated winds. The upward propagating tides are found to modify the E region winds significantly, even under disturbed conditions when the plasma forcing might be expected to dominate the dynamics. In our results the latter is shown by the sensitivity of the simulated flows to the lower boundary condition which is the imposed tidal oscillation structure.

  15. The atmospheric chemistry general circulation model ECHAM5/MESSy1: consistent simulation of ozone from the surface to the mesosphere

    NASA Astrophysics Data System (ADS)

    Jöckel, P.; Tost, H.; Pozzer, A.; Brühl, C.; Buchholz, J.; Ganzeveld, L.; Hoor, P.; Kerkweg, A.; Lawrence, M. G.; Sander, R.; Steil, B.; Stiller, G.; Tanarhte, M.; Taraborrelli, D.; van Aardenne, J.; Lelieveld, J.

    2006-07-01

    The new Modular Earth Submodel System (MESSy) describes atmospheric chemistry and meteorological processes in a modular framework, following strict coding standards. It has been coupled to the ECHAM5 general circulation model, which has been slightly modified for this purpose. A 90-layer model version up to 0.01 hPa was used at T42 resolution (~2.8 latitude and longitude) to simulate the lower and middle atmosphere. The model meteorology has been tested to check the influence of the changes to ECHAM5 and the radiation interactions with the new representation of atmospheric composition. A Newtonian relaxation technique was applied in the tropospheric part of the domain to weakly nudge the model towards the analysed meteorology during the period 1998-2005. It is shown that the tropospheric wave forcing of the stratosphere in the model suffices to reproduce the Quasi-Biennial Oscillation and major stratospheric warming events leading e.g. to the vortex split over Antarctica in 2002. Characteristic features such as dehydration and denitrification caused by the sedimentation of polar stratospheric cloud particles and ozone depletion during winter and spring are simulated accurately, although ozone loss in the lower polar stratosphere is slightly underestimated. The model realistically simulates stratosphere-troposphere exchange processes as indicated by comparisons with satellite and in situ measurements. The evaluation of tropospheric chemistry presented here focuses on the distributions of ozone, hydroxyl radicals, carbon monoxide and reactive nitrogen compounds. In spite of minor shortcomings, mostly related to the relatively coarse T42 resolution and the neglect of interannual changes in biomass burning emissions, the main characteristics of the trace gas distributions are generally reproduced well. The MESSy submodels and the ECHAM5/MESSy1 model output are available through the internet on request.

  16. Examining the accuracy of astrophysical disk simulations with a generalized hydrodynamical test problem [The role of pressure and viscosity in SPH simulations of astrophysical disks

    SciTech Connect

    Raskin, Cody; Owen, J. Michael

    2016-10-24

    Here, we discuss a generalization of the classic Keplerian disk test problem allowing for both pressure and rotational support, as a method of testing astrophysical codes incorporating both gravitation and hydrodynamics. We argue for the inclusion of pressure in rotating disk simulations on the grounds that realistic, astrophysical disks exhibit non-negligible pressure support. We then apply this test problem to examine the performance of various smoothed particle hydrodynamics (SPH) methods incorporating a number of improvements proposed over the years to address problems noted in modeling the classical gravitation-only Keplerian disk. We also apply this test to a newly developed extension of SPH based on reproducing kernels called CRKSPH. Counterintuitively, we find that pressure support worsens the performance of traditional SPH on this problem, causing unphysical collapse away from the steady-state disk solution even more rapidly than the purely gravitational problem, whereas CRKSPH greatly reduces this error.

  17. Examining the accuracy of astrophysical disk simulations with a generalized hydrodynamical test problem [The role of pressure and viscosity in SPH simulations of astrophysical disks

    DOE PAGES

    Raskin, Cody; Owen, J. Michael

    2016-10-24

    Here, we discuss a generalization of the classic Keplerian disk test problem allowing for both pressure and rotational support, as a method of testing astrophysical codes incorporating both gravitation and hydrodynamics. We argue for the inclusion of pressure in rotating disk simulations on the grounds that realistic, astrophysical disks exhibit non-negligible pressure support. We then apply this test problem to examine the performance of various smoothed particle hydrodynamics (SPH) methods incorporating a number of improvements proposed over the years to address problems noted in modeling the classical gravitation-only Keplerian disk. We also apply this test to a newly developed extensionmore » of SPH based on reproducing kernels called CRKSPH. Counterintuitively, we find that pressure support worsens the performance of traditional SPH on this problem, causing unphysical collapse away from the steady-state disk solution even more rapidly than the purely gravitational problem, whereas CRKSPH greatly reduces this error.« less

  18. Outflow Channels and Martian Climate: General Circulation Model (GCM) Simulations with Emplaced Water and Cloud Physics

    NASA Astrophysics Data System (ADS)

    Santiago, D.; Colaprete, A.; Haberle, R.; Asphaug, E.; Sloan, L.

    2005-12-01

    One of the most intriguing signatures of surface water on Mars is large outflow channels believed to have been carved out by gigantic flood events in the late Noachian or Hesperian. We use the NASA Ames Mars General Circulation Model (MGCM) to study how abrupt eruption of water onto the Martian surface might have affected the early climate of Mars, and to calculate where the water ultimately went as part of a transient hydrologic cycle. Our model includes the emplacement of large amounts of water onto the surface of a cold, dry Mars in the vicinity of Ares Valles, with current day orbital configurations. Specifically, 106 km3 of water was released at a rate of 0.1 km3/s at end of Northern Hemisphere summer. We have begun modeling with the MGCM with outflow water and cloud physics. The current cloud physics include cloud particle nucleation and growth, with radiative effects added at a later date. These results are being compared with a control case with no outflow in the model, and a case with water, but without clouds. In all cases we are examining the radiative effects of water vapor, albedo effects of water ice, and latent heat effects for this large influx of water. Preliminary results show differences between these three cases, but the factors that are causing these differences have not yet been determined. These results will be interesting to compare with studies that suggest significant, but possibly localized or regional, precipitation in the Hesperian, as opposed to the more widely recognized precipitation during the Noachian. Current analyses and longer model runs will allow us to calculate the specific effects of outflow water on past Martian climate, as well as where the water might have ended up.

  19. Outflow Channels and Martian Climate: General Circulation Model (GCM) Simulations with Emplaced Water

    NASA Astrophysics Data System (ADS)

    Santiago, D.; Colaprete, A.; Haberle, R.; Asphaug, E.; Sloan, L.

    2005-08-01

    The existence of past surface water on Mars has been inferred on the basis of geomorphologic interpretation of spacecraft images. Among the most intriguing signatures of surface water are large outflow channels believed to have been carved out by gigantic flood events in the late Noachian or Hesperian. We use the NASA Ames Mars General Circulation Model (MGCM) to study how abrupt eruption of water onto the Martian surface might have affected climate, and to consider where the water ultimately went. Our initial model begins by emplacing large amounts of water onto the surface of Mars in the vicinity of Ares Valley, for current day orbital configurations. Specifically, 10\\^6 km\\^3 of water was released at a rate of 0.1 km\\^3/s at end of Northern summer. The MGCM was run for 10 years; a control version, without water, was run the same length of time, in order to assess the climatic impact from the radiative and thermal effects of the released water. Model modifications for the results that will be presented include (1) a customized sublimation scheme, (2) latent heat effects of water transitions, (3) radiative effects of water vapor, (4) albedo effects, and (5) clouds. Preliminary results indicate slight surface temperature increases due to latent heating is areas of water deposition, and cooling in the outflow formation area. Results also suggest that water vapor is distributed throughout the atmosphere. Results for these and other atmospheric variables, as well as water tracer distribution, will be presented. We acknowledge the University Aligned Research Center and the Mars Fundamental Research Program for their funding contributions.

  20. Winter polar warmings and the meridional transport on Mars simulated with a general circulation model

    NASA Astrophysics Data System (ADS)

    Medvedev, Alexander S.; Hartogh, Paul

    2007-01-01

    Winter polar warmings in the middle atmosphere of Mars occur due to the adiabatic heating associated with the downward branch of the cross-equatorial meridional circulation. Thus, they are the manifestation of the global meridional transport rather than of local radiative effects. We report on a series of numerical experiments with a recently developed general circulation model of the martian atmosphere to examine the relative roles of the mechanical and thermal forcing in the meridional transport. The experiments were focused on answering the question of whether the martian circulation is consistent with the thermally driven nearly inviscid Hadley cell, as was pointed out by some previous studies, or it is forced mainly by zonally asymmetric eddies. It is demonstrated that, under realistic conditions in the middle atmosphere, the meridional transport is maintained primarily by dissipating large-scale planetary waves and solar tides. This mechanism is similar to the "extratropical pump" in the middle atmosphere on Earth. Only in the run with artificially weak zonal disturbances, was the circulation reminiscent of thermally induced Hadley cells. In the experiment with an imposed dust storm, the modified atmospheric refraction changes the vertical propagation of the eddies. As the result, the Eliassen-Palm fluxes convergence increases in high winter latitudes of the middle atmosphere, the meridional transport gets stronger, and the polar temperature rises. Additional numerical experiments demonstrated that insufficient model resolution, increased numerical dissipation, and, especially, neglect of non-LTE effects for the 15 μm CO 2 band could weaken the meridional transport and the magnitude of polar warmings in GCMs.

  1. Interannual Variability of Martian Global Dust Storms: Simulations with a Low-Order Model of the General Circulation

    NASA Technical Reports Server (NTRS)

    Pankine, A. A.; Ingersoll, Andrew P.

    2002-01-01

    We present simulations of the interannual variability of martian global dust storms (GDSs) with a simplified low-order model (LOM) of the general circulation. The simplified model allows one to conduct computationally fast long-term simulations of the martian climate system. The LOM is constructed by Galerkin projection of a 2D (zonally averaged) general circulation model (GCM) onto a truncated set of basis functions. The resulting LOM consists of 12 coupled nonlinear ordinary differential equations describing atmospheric dynamics and dust transport within the Hadley cell. The forcing of the model is described by simplified physics based on Newtonian cooling and Rayleigh friction. The atmosphere and surface are coupled: atmospheric heating depends on the dustiness of the atmosphere, and the surface dust source depends on the strength of the atmospheric winds. Parameters of the model are tuned to fit the output of the NASA AMES GCM and the fit is generally very good. Interannual variability of GDSs is possible in the IBM, but only when stochastic forcing is added to the model. The stochastic forcing could be provided by transient weather systems or some surface process such as redistribution of the sand particles in storm generating zones on the surface. The results are sensitive to the value of the saltation threshold, which hints at a possible feedback between saltation threshold and dust storm activity. According to this hypothesis, erodable material builds up its a result of a local process, whose effect is to lower the saltation threshold until a GDS occurs. The saltation threshold adjusts its value so that dust storms are barely able to occur.

  2. Numerical simulation of 137Cs and (239,240)Pu concentrations by an ocean general circulation model.

    PubMed

    Tsumune, Daisuke; Aoyama, Michio; Hirose, Katsumi

    2003-01-01

    We simulated the spatial distributions and the temporal variations of 137Cs and (239,240)Pu concentrations in the ocean by using the ocean general circulation model which was developed by National Center of Atmospheric Research. These nuclides are introduced into seawaters from global fallout due to atmospheric nuclear weapons tests. The distribution of radioactive deposition on the world ocean is estimated from global precipitation data and observed values of annual deposition of radionuclides at the Meteorological Research Institute in Japan and several observed points in New Zealand. Radionuclides from global fallout have been transported by advection, diffusion and scavenging, and this concentration reduces by radioactive decay in the ocean. We verified the results of the model calculations by comparing simulated values of 137Cs and (239,240)Pu in seawater with the observed values included in the Historical Artificial Radionuclides in the HAM database, which has been constructed by the Meteorological Research Institute. The vertical distributions of the calculated 137Cs concentrations were in good agreement and are in good agreement with the observed profiles in the 1960s up to 250 m, in the 1970s up to 500 m, in the 1980s up to 750 m and in the 1990s up to 750 m. However, the calculated 137Cs concentrations were underestimated compared with the observed 137Cs at the deeper layer. This may suggest other transport processes of 137Cs to deep waters. The horizontal distributions of 137Cs concentrations in surface water could be simulated. A numerical tracer release experiment was performed to explain the horizontal distribution pattern. A maximum (239,240)Pu concentration layer occurs at an intermediate depth for both observed and calculated values, which is formed by particle scavenging. The horizontal distributions of the calculated (239,240)Pu concentrations in surface water could be simulated by considering the scavenging effect.

  3. Nested generalized linear mixed model with ordinal response: Simulation and application on poverty data in Java Island

    NASA Astrophysics Data System (ADS)

    Widyaningsih, Yekti; Saefuddin, Asep; Notodiputro, Khairil A.; Wigena, Aji H.

    2012-05-01

    The objective of this research is to build a nested generalized linear mixed model using an ordinal response variable with some covariates. There are three main jobs in this paper, i.e. parameters estimation procedure, simulation, and implementation of the model for the real data. At the part of parameters estimation procedure, concepts of threshold, nested random effect, and computational algorithm are described. The simulations data are built for 3 conditions to know the effect of different parameter values of random effect distributions. The last job is the implementation of the model for the data about poverty in 9 districts of Java Island. The districts are Kuningan, Karawang, and Majalengka chose randomly in West Java; Temanggung, Boyolali, and Cilacap from Central Java; and Blitar, Ngawi, and Jember from East Java. The covariates in this model are province, number of bad nutrition cases, number of farmer families, and number of health personnel. In this modeling, all covariates are grouped as ordinal scale. Unit observation in this research is sub-district (kecamatan) nested in district, and districts (kabupaten) are nested in province. For the result of simulation, ARB (Absolute Relative Bias) and RRMSE (Relative Root of mean square errors) scale is used. They show that prov parameters have the highest bias, but more stable RRMSE in all conditions. The simulation design needs to be improved by adding other condition, such as higher correlation between covariates. Furthermore, as the result of the model implementation for the data, only number of farmer family and number of medical personnel have significant contributions to the level of poverty in Central Java and East Java province, and only district 2 (Karawang) of province 1 (West Java) has different random effect from the others. The source of the data is PODES (Potensi Desa) 2008 from BPS (Badan Pusat Statistik).

  4. Work Practice Simulation of Complex Human-Automation Systems in Safety Critical Situations: The Brahms Generalized berlingen Model

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Linde, Charlotte; Seah, Chin; Shafto, Michael

    2013-01-01

    The transition from the current air traffic system to the next generation air traffic system will require the introduction of new automated systems, including transferring some functions from air traffic controllers to on­-board automation. This report describes a new design verification and validation (V&V) methodology for assessing aviation safety. The approach involves a detailed computer simulation of work practices that includes people interacting with flight-critical systems. The research is part of an effort to develop new modeling and verification methodologies that can assess the safety of flight-critical systems, system configurations, and operational concepts. The 2002 Ueberlingen mid-air collision was chosen for analysis and modeling because one of the main causes of the accident was one crew's response to a conflict between the instructions of the air traffic controller and the instructions of TCAS, an automated Traffic Alert and Collision Avoidance System on-board warning system. It thus furnishes an example of the problem of authority versus autonomy. It provides a starting point for exploring authority/autonomy conflict in the larger system of organization, tools, and practices in which the participants' moment-by-moment actions take place. We have developed a general air traffic system model (not a specific simulation of Überlingen events), called the Brahms Generalized Ueberlingen Model (Brahms-GUeM). Brahms is a multi-agent simulation system that models people, tools, facilities/vehicles, and geography to simulate the current air transportation system as a collection of distributed, interactive subsystems (e.g., airports, air-traffic control towers and personnel, aircraft, automated flight systems and air-traffic tools, instruments, crew). Brahms-GUeM can be configured in different ways, called scenarios, such that anomalous events that contributed to the Überlingen accident can be modeled as functioning according to requirements or in an

  5. How to Hold a Model Legislature: A Simulation of the Georgia General Assembly, Teacher's Manual [And] The Model Legislature, Student's Kit.

    ERIC Educational Resources Information Center

    Jackson, Edwin L.

    The student's kit and teacher's manual provide a framework for secondary students to simulate the functionings of Georgia's General Assembly. Objectives of the simulation are to help students: (1) experience the forces and conflicts involved in lawmaking, (2) learn about the role of legislators, (3) understand and discuss issues facing citizens,…

  6. Simulating pathways of subsurface oil in the Faroe-Shetland Channel using an ocean general circulation model.

    PubMed

    Main, C E; Yool, A; Holliday, N P; Popova, E E; Jones, D O B; Ruhl, H A

    2017-01-15

    Little is known about the fate of subsurface hydrocarbon plumes from deep-sea oil well blowouts and their effects on processes and communities. As deepwater drilling expands in the Faroe-Shetland Channel (FSC), oil well blowouts are a possibility, and the unusual ocean circulation of this region presents challenges to understanding possible subsurface oil pathways in the event of a spill. Here, an ocean general circulation model was used with a particle tracking algorithm to assess temporal variability of the oil-plume distribution from a deep-sea oil well blowout in the FSC. The drift of particles was first tracked for one year following release. Then, ambient model temperatures were used to simulate temperature-mediated biodegradation, truncating the trajectories of particles accordingly. Release depth of the modeled subsurface plumes affected both their direction of transport and distance travelled from their release location, and there was considerable interannual variability in transport.

  7. A general-purpose framework to simulate musculoskeletal system of human body: using a motion tracking approach.

    PubMed

    Ehsani, Hossein; Rostami, Mostafa; Gudarzi, Mohammad

    2016-02-01

    Computation of muscle force patterns that produce specified movements of muscle-actuated dynamic models is an important and challenging problem. This problem is an undetermined one, and then a proper optimization is required to calculate muscle forces. The purpose of this paper is to develop a general model for calculating all muscle activation and force patterns in an arbitrary human body movement. For this aim, the equations of a multibody system forward dynamics, which is considered for skeletal system of the human body model, is derived using Lagrange-Euler formulation. Next, muscle contraction dynamics is added to this model and forward dynamics of an arbitrary musculoskeletal system is obtained. For optimization purpose, the obtained model is used in computed muscle control algorithm, and a closed-loop system for tracking desired motions is derived. Finally, a popular sport exercise, biceps curl, is simulated by using this algorithm and the validity of the obtained results is evaluated via EMG signals.

  8. Modeling of Compressible Flow with Friction and Heat Transfer Using the Generalized Fluid System Simulation Program (GFSSP)

    NASA Technical Reports Server (NTRS)

    Bandyopadhyay, Alak; Majumdar, Alok

    2007-01-01

    The present paper describes the verification and validation of a quasi one-dimensional pressure based finite volume algorithm, implemented in Generalized Fluid System Simulation Program (GFSSP), for predicting compressible flow with friction, heat transfer and area change. The numerical predictions were compared with two classical solutions of compressible flow, i.e. Fanno and Rayleigh flow. Fanno flow provides an analytical solution of compressible flow in a long slender pipe where incoming subsonic flow can be choked due to friction. On the other hand, Raleigh flow provides analytical solution of frictionless compressible flow with heat transfer where incoming subsonic flow can be choked at the outlet boundary with heat addition to the control volume. Nonuniform grid distribution improves the accuracy of numerical prediction. A benchmark numerical solution of compressible flow in a converging-diverging nozzle with friction and heat transfer has been developed to verify GFSSP's numerical predictions. The numerical predictions compare favorably in all cases.

  9. EARTHQUAKE RESPONSE ANALYSIS OF STEEL PORTAL FRAMES BY PSEUDODYNAMIC SIMULATION TECHNIQUE USING A GENERAL-PURPOSE FINITE ELEMENT ANALYSIS PROGRAM

    NASA Astrophysics Data System (ADS)

    Miki, Toshihiro; Mizusawa, Tomisaku; Yamada, Osamu; Toda, Tomoki

    This paper studies the earthquake response of steel portal frames when the shear collapse occurs at the centre of the beam. The pseudodynamic simulation technique for the earthquake response analysis of the frames is developed in correspondence to the pseudodynamic substructure testing method. For the thin-walled box element under shear force in the middle of beam, the numerical process is utilized by a general-purpose finite element analysis program. The numerical results show the shear collapse behaviour in stiffened box beams and corresponding restoring force - displacement relationship of frames. The advantages of shear collapse of beams for the use in frames during earthquakes are discussed from the point of view of the hysteretic energy dissipated by the column base.

  10. Use of Generalized Fluid System Simulation Program (GFSSP) for Teaching and Performing Senior Design Projects at the Educational Institutions

    NASA Technical Reports Server (NTRS)

    Majumdar, A. K.; Hedayat, A.

    2015-01-01

    This paper describes the experience of the authors in using the Generalized Fluid System Simulation Program (GFSSP) in teaching Design of Thermal Systems class at University of Alabama in Huntsville. GFSSP is a finite volume based thermo-fluid system network analysis code, developed at NASA/Marshall Space Flight Center, and is extensively used in NASA, Department of Defense, and aerospace industries for propulsion system design, analysis, and performance evaluation. The educational version of GFSSP is freely available to all US higher education institutions. The main purpose of the paper is to illustrate the utilization of this user-friendly code for the thermal systems design and fluid engineering courses and to encourage the instructors to utilize the code for the class assignments as well as senior design projects.

  11. Binding constants of membrane-anchored receptors and ligands: A general theory corroborated by Monte Carlo simulations.

    PubMed

    Xu, Guang-Kui; Hu, Jinglei; Lipowsky, Reinhard; Weikl, Thomas R

    2015-12-28

    Adhesion processes of biological membranes that enclose cells and cellular organelles are essential for immune responses, tissue formation, and signaling. These processes depend sensitively on the binding constant K2D of the membrane-anchored receptor and ligand proteins that mediate adhesion, which is difficult to measure in the "two-dimensional" (2D) membrane environment of the proteins. An important problem therefore is to relate K2D to the binding constant K3D of soluble variants of the receptors and ligands that lack the membrane anchors and are free to diffuse in three dimensions (3D). In this article, we present a general theory for the binding constants K2D and K3D of rather stiff proteins whose main degrees of freedom are translation and rotation, along membranes and around anchor points "in 2D," or unconstrained "in 3D." The theory generalizes previous results by describing how K2D depends both on the average separation and thermal nanoscale roughness of the apposing membranes, and on the length and anchoring flexibility of the receptors and ligands. Our theoretical results for the ratio K2D/K3D of the binding constants agree with detailed results from Monte Carlo simulations without any data fitting, which indicates that the theory captures the essential features of the "dimensionality reduction" due to membrane anchoring. In our Monte Carlo simulations, we consider a novel coarse-grained model of biomembrane adhesion in which the membranes are represented as discretized elastic surfaces, and the receptors and ligands as anchored molecules that diffuse continuously along the membranes and rotate at their anchor points.

  12. Dynamical simulation of gravothermal catastrophe.

    PubMed

    Klinko, Peter; Miller, Bruce N

    2004-01-16

    We investigate the dynamical evolution of gravothermal catastrophe in a model of a spherical cluster where, besides the energy and angular momentum, an additional integral of motion is also taken into account. Using dynamical simulation, we study a system of concentric, rotating, spherical shells employing a precise, event-driven, algorithm that permits the controlled exchange of internal angular momentum. Initially the system starts to relax to a locally stable state that is in good agreement with mean field predictions. This is followed by core collapse with the development of a core-halo structure and gravothermal oscillation.

  13. Extension of nTRACER high fidelity transport code for boiling water reactor simulations and general geometry modeling

    NASA Astrophysics Data System (ADS)

    Hader, Jacob S.

    One of the current limitations of high fidelity deterministic codes for performing light water reactor analyses is modeling the detailed and realistic geometry and material distributions within the reactor core. Additionally, as the computational environment continues to evolve, it is expected that these high fidelity codes will become integral to the reactor design process. As a way to facilitate the continued development of nTRACER, a high-fidelity method of characteristics based neutron transport solver, work has been performed to extend both its geometry modeling and simulation capabilities. In this work, a procedure for generalizing the geometry modeling in nTRACER was developed and an automated process for modeling arbitrary boiling water reactor geometry was created. Additionally, a one-dimensional drift-flux model was implemented into the existing nTRACER framework to account for two-phase flow and its effects on the coolant density change within the core. To verify the accuracy of the extended geometry module, the eigenvalues and spatial flux distributions of the 2-D/3-D C5G7 MOX benchmark problems were compared against the pre-existing, built-in nTRACER geometry module. Finally, verification of the boiling water reactor simulations was done by comparing results for a series of 2-D pin cells and 2-D assemblies between nTRACER and MCNP6.

  14. Debris Flow Simulation using FLO-2D on the 2004 Landslide Area of Real, General Nakar, and Infanta, Philippines

    NASA Astrophysics Data System (ADS)

    Llanes, F.; dela Resma, M.; Ferrer, P.; Realino, V.; Aquino, D. T.; Eco, R. C.; Lagmay, A.

    2013-12-01

    From November 14 to December 3, 2004, Luzon Island was ravaged by 4 successive typhoons: Typhoon Mufia, Tropical Storm Merbok, Tropical Depression Winnie, and Super Typhoon Nanmadol. Tropical Depression Winnie was the most destructive of the four when it triggered landslides on November 29 that devastated the municipalities of Infanta, General Nakar, and Real in Quezon Province, southeast Luzon. Winnie formed east of Central Luzon on November 27 before it moved west-northwestward over southeastern Luzon on November 29. A total of 1,068 lives were lost and more than USD 170 million worth of damages to crops and infrastructure were incurred from the landslides triggered by Typhoon Winnie on November 29 and the flooding caused by the 4 typhoons. FLO-2D, a flood routing software for generating flood and debris flow hazard maps, was utilized to simulate the debris flows that could potentially affect the study area. Based from the rainfall intensity-duration-frequency analysis, the cumulative rainfall from typhoon Winnie on November 29 which was approximately 342 mm over a 9-hour period was classified within a 100-year return period. The Infanta station of the Philippine Atmospheric Geophysical and Astronomical Services Administration (PAGASA) was no longer able to measure the amount of rainfall after this period because the rain gauge in that station was washed away by floods. Rainfall data with a 100-year return period was simulated over the watersheds delineated from a SAR-derived digital elevation model. The resulting debris flow hazard map was compared with results from field investigation and previous studies made on the landslide event. The simulation identified 22 barangays (villages) with a total of 45,155 people at risk of turbulent flow and flooding.

  15. General circulation and thermal structure simulated by a Venus AGCM with a two-stream radiative code

    NASA Astrophysics Data System (ADS)

    Yamamoto, Masaru; Ikeda, Kohei; Takahashi, Masaaki

    2016-10-01

    Atmospheric general circulation model (AGCM) is expected to be a powerful tool for understanding Venus climate and atmospheric dynamics. At the present stage, however, the full-physics model is under development. Ikeda (2011) developed a two-stream radiative transfer code, which covers the solar to infrared radiative processes due to the gases and aerosol particles. The radiative code was applied to Venus AGCM (T21L52) at Atmosphere and Ocean Research Institute, Univ. Tokyo. We analyzed the results in a few Venus days simulation that was restarted after nudging zonal wind to a super-rotating state until the equilibrium. The simulated thermal structure has low-stability layer around 105 Pa at low latitudes, and the neutral stability extends from ˜105 Pa to the lower atmosphere at high latitudes. At the equatorial cloud top, the temperature lowers in the region between noon and evening terminator. For zonal and meridional winds, we can see difference between the zonal and day-side means. As was indicated in previous works, the day-side mean meridional wind speed mostly corresponds to the poleward component of the thermal tide and is much higher than the zonal mean. Toward understanding dynamical roles of waves in UV cloud tracking and brightness, we calculated the eddy heat and momentum fluxes averaged over the day-side hemisphere. The eddy heat and momentum fluxes are poleward in the poleward flank of the jet. In contrast, the fluxes are relatively weak and equatorward at low latitudes. The eddy momentum flux becomes equatorward in the dynamical situation that the simulated equatorial wind is weaker than the midlatitude jet. The sensitivity to the zonal flow used for the nudging will be also discussed in the model validation.

  16. The variability, structure and energy conversion of the northern hemisphere traveling waves simulated in a Mars general circulation model

    NASA Astrophysics Data System (ADS)

    Wang, Huiqun; Toigo, Anthony D.

    2016-06-01

    Investigations of the variability, structure and energetics of the m = 1-3 traveling waves in the northern hemisphere of Mars are conducted with the MarsWRF general circulation model. Using a simple, annually repeatable dust scenario, the model reproduces many general characteristics of the observed traveling waves. The simulated m = 1 and m = 3 traveling waves show large differences in terms of their structures and energetics. For each representative wave mode, the geopotential signature maximizes at a higher altitude than the temperature signature, and the wave energetics suggests a mixed baroclinic-barotropic nature. There is a large contrast in wave energetics between the near-surface and higher altitudes, as well as between the lower latitudes and higher latitudes at high altitudes. Both barotropic and baroclinic conversions can act as either sources or sinks of eddy kinetic energy. Band-pass filtered transient eddies exhibit strong zonal variations in eddy kinetic energy and various energy transfer terms. Transient eddies are mainly interacting with the time mean flow. However, there appear to be non-negligible wave-wave interactions associated with wave mode transitions. These interactions include those between traveling waves and thermal tides and those among traveling waves.

  17. Generalized Fluid System Simulation Program, Version 5.0-Educational. Supplemental Information for NASA/TM-2011-216470. Supplement

    NASA Technical Reports Server (NTRS)

    Majumdar, A. K.

    2011-01-01

    The Generalized Fluid System Simulation Program (GFSSP) is a finite-volume based general-purpose computer program for analyzing steady state and time-dependent flow rates, pressures, temperatures, and concentrations in a complex flow network. The program is capable of modeling real fluids with phase changes, compressibility, mixture thermodynamics, conjugate heat transfer between solid and fluid, fluid transients, pumps, compressors and external body forces such as gravity and centrifugal. The thermofluid system to be analyzed is discretized into nodes, branches, and conductors. The scalar properties such as pressure, temperature, and concentrations are calculated at nodes. Mass flow rates and heat transfer rates are computed in branches and conductors. The graphical user interface allows users to build their models using the point, drag and click method; the users can also run their models and post-process the results in the same environment. The integrated fluid library supplies thermodynamic and thermo-physical properties of 36 fluids and 21 different resistance/source options are provided for modeling momentum sources or sinks in the branches. This Technical Memorandum illustrates the application and verification of the code through 12 demonstrated example problems. This supplement gives the input and output data files for the examples.

  18. Streamflow changes in the Sierra Nevada, California, simulated using a statistically downscaled general circulation model scenario of climate change

    USGS Publications Warehouse

    Wilby, Robert L.; Dettinger, Michael D.

    2000-01-01

    Simulations of future climate using general circulation models (GCMs) suggest that rising concentrations of greenhouse gases may have significant consequences for the global climate. Of less certainty is the extent to which regional scale (i.e., sub-GCM grid) environmental processes will be affected. In this chapter, a range of downscaling techniques are critiqued. Then a relatively simple (yet robust) statistical downscaling technique and its use in the modelling of future runoff scenarios for three river basins in the Sierra Nevada, California, is described. This region was selected because GCM experiments driven by combined greenhouse-gas and sulphate-aerosol forcings consistently show major changes in the hydro-climate of the southwest United States by the end of the 21st century. The regression-based downscaling method was used to simulate daily rainfall and temperature series for streamflow modelling in three Californian river basins under current-and future-climate conditions. The downscaling involved just three predictor variables (specific humidity, zonal velocity component of airflow, and 500 hPa geopotential heights) supplied by the U.K. Meteorological Office couple ocean-atmosphere model (HadCM2) for the grid point nearest the target basins. When evaluated using independent data, the model showed reasonable skill at reproducing observed area-average precipitation, temperature, and concomitant streamflow variations. Overall, the downscaled data resulted in slight underestimates of mean annual streamflow due to underestimates of precipitation in spring and positive temperature biases in winter. Differences in the skill of simulated streamflows amongst the three basins were attributed to the smoothing effects of snowpack on streamflow responses to climate forcing. The Merced and American River basins drain the western, windward slope of the Sierra Nevada and are snowmelt dominated, whereas the Carson River drains the eastern, leeward slope and is a mix of

  19. Simulation of a dust episode over Eastern Mediterranean using a high-resolution atmospheric chemistry general circulation model

    NASA Astrophysics Data System (ADS)

    Abdel Kader, Mohamed; Zittis, Georgios; Astitha, Marina; Lelieveld, Jos; Tymvios, Fillipos

    2013-04-01

    An extended episode of low visibility took place over the Eastern Mediterranean in late September 2011, caused by a strong increase in dust concentrations, analyzed from observations of PM10 (Particulate Matter with <10μm in diameter). A high-resolution version of the atmospheric chemistry general circulation model EMAC (ECHAM5/Messy2.41 Atmospheric Chemistry) was used to simulate the emissions, transport and deposition of airborne desert dust. The model configuration involves the spectral resolution of T255 (0.5°, ~50Km) and 31 vertical levels in the troposphere and lower stratosphere. The model was nudged towards ERA40 reanalysis data to represent the actual meteorological conditions. The dust emissions were calculated online at each model time step and the aerosol microphysics using the GMXe submodel (Global Modal-aerosol eXtension). The model includes a sulphur chemistry mechanism to simulate the transformation of the dust particles from the insoluble (at emission) to soluble modes, which promotes dust removal by precipitation. The model successfully reproduces the dust distribution according to observations by the MODIS satellite instruments and ground-based AERONET stations. The PM10 concentration is also compared with in-situ measurements over Cyprus, resulting in good agreement. The model results show two subsequent dust events originating from the Negev and Sahara deserts. The first dust event resulted from the transport of dust from the Sahara on the 21st of September and lasted only briefly (hours) as the dust particles were efficiently removed by precipitation simulated by the model and observed by the TRMM (Tropical Rainfall Measuring Mission) satellites. The second event resulted from dust transport from the Negev desert to the Eastern Mediterranean during the period 26th - 30th September with a peak concentration at 2500m elevation. This event lasted for four days and diminished due to dry deposition. The observed reduced visibility over Cyprus

  20. One-step leapfrog ADI-FDTD method for simulating electromagnetic wave propagation in general dispersive media.

    PubMed

    Wang, Xiang-Hua; Yin, Wen-Yan; Chen, Zhi Zhang David

    2013-09-09

    The one-step leapfrog alternating-direction-implicit finite-difference time-domain (ADI-FDTD) method is reformulated for simulating general electrically dispersive media. It models material dispersive properties with equivalent polarization currents. These currents are then solved with the auxiliary differential equation (ADE) and then incorporated into the one-step leapfrog ADI-FDTD method. The final equations are presented in the form similar to that of the conventional FDTD method but with second-order perturbation. The adapted method is then applied to characterize (a) electromagnetic wave propagation in a rectangular waveguide loaded with a magnetized plasma slab, (b) transmission coefficient of a plane wave normally incident on a monolayer graphene sheet biased by a magnetostatic field, and (c) surface plasmon polaritons (SPPs) propagation along a monolayer graphene sheet biased by an electrostatic field. The numerical results verify the stability, accuracy and computational efficiency of the proposed one-step leapfrog ADI-FDTD algorithm in comparison with analytical results and the results obtained with the other methods.

  1. Extension of the CHARMM General Force Field to Sulfonyl-Containing Compounds and Its Utility in Biomolecular Simulations

    PubMed Central

    Yu, Wenbo; He, Xibing; Vanommeslaeghe, Kenno; MacKerell, Alexander D.

    2012-01-01

    Presented is an extension of the CHARMM General force field (CGenFF) to enable the modeling of sulfonyl-containing compounds. Model compounds containing chemical moieties such as sulfone, sulfonamide, sulfonate and sulfamate were used as the basis for the parameter optimization. Targeting high-level quantum mechanical and experimental crystal data, the new parameters were optimized in a hierarchical fashion designed to maintain compatibility with the remainder of the CHARMM additive force field. The optimized parameters satisfactorily reproduced equilibrium geometries, vibrational frequencies, interactions with water, gas phase dipole moments and dihedral potential energy scans. Validation involved both crystalline and liquid phase calculations showing the newly developed parameters to satisfactorily reproduce experimental unit cell geometries, crystal intramolecular geometries and pure solvent densities. The force field was subsequently applied to study conformational preference of a sulfonamide based peptide system. Good agreement with experimental IR/NMR data further validated the newly developed CGenFF parameters as a tool to investigate the dynamic behavior of sulfonyl groups in a biological environment. CGenFF now covers sulfonyl group containing moieties allowing for modeling and simulation of sulfonyl-containing compounds in the context of biomolecular systems including compounds of medicinal interest. PMID:22821581

  2. New insights into the generalized Rutherford equation for nonlinear neoclassical tearing mode growth from 2D reduced MHD simulations

    NASA Astrophysics Data System (ADS)

    Westerhof, E.; de Blank, H. J.; Pratt, J.

    2016-03-01

    Two dimensional reduced MHD simulations of neoclassical tearing mode growth and suppression by ECCD are performed. The perturbation of the bootstrap current density and the EC drive current density perturbation are assumed to be functions of the perturbed flux surfaces. In the case of ECCD, this implies that the applied power is flux surface averaged to obtain the EC driven current density distribution. The results are consistent with predictions from the generalized Rutherford equation using common expressions for Δ \\text{bs}\\prime and Δ \\text{ECCD}\\prime . These expressions are commonly perceived to describe only the effect on the tearing mode growth of the helical component of the respective current perturbation acting through the modification of Ohm’s law. Our results show that they describe in addition the effect of the poloidally averaged current density perturbation which acts through modification of the tearing mode stability index. Except for modulated ECCD, the largest contribution to the mode growth comes from this poloidally averaged current density perturbation.

  3. Efficacy of human papillomavirus 16 and 18 (HPV-16/18) AS04-adjuvanted vaccine against cervical infection and precancer in young women: final event-driven analysis of the randomized, double-blind PATRICIA trial.

    PubMed

    Apter, Dan; Wheeler, Cosette M; Paavonen, Jorma; Castellsagué, Xavier; Garland, Suzanne M; Skinner, S Rachel; Naud, Paulo; Salmerón, Jorge; Chow, Song-Nan; Kitchener, Henry C; Teixeira, Julio C; Jaisamrarn, Unnop; Limson, Genara; Szarewski, Anne; Romanowski, Barbara; Aoki, Fred Y; Schwarz, Tino F; Poppe, Willy A J; Bosch, F Xavier; Mindel, Adrian; de Sutter, Philippe; Hardt, Karin; Zahaf, Toufik; Descamps, Dominique; Struyf, Frank; Lehtinen, Matti; Dubin, Gary

    2015-04-01

    We report final event-driven analysis data on the immunogenicity and efficacy of the human papillomavirus 16 and 18 ((HPV-16/18) AS04-adjuvanted vaccine in young women aged 15 to 25 years from the PApilloma TRIal against Cancer In young Adults (PATRICIA). The total vaccinated cohort (TVC) included all randomized participants who received at least one vaccine dose (vaccine, n = 9,319; control, n = 9,325) at months 0, 1, and/or 6. The TVC-naive (vaccine, n = 5,822; control, n = 5,819) had no evidence of high-risk HPV infection at baseline, approximating adolescent girls targeted by most HPV vaccination programs. Mean follow-up was approximately 39 months after the first vaccine dose in each cohort. At baseline, 26% of women in the TVC had evidence of past and/or current HPV-16/18 infection. HPV-16 and HPV-18 antibody titers postvaccination tended to be higher among 15- to 17-year-olds than among 18- to 25-year-olds. In the TVC, vaccine efficacy (VE) against cervical intraepithelial neoplasia grade 1 or greater (CIN1+), CIN2+, and CIN3+ associated with HPV-16/18 was 55.5% (96.1% confidence interval [CI], 43.2, 65.3), 52.8% (37.5, 64.7), and 33.6% (-1.1, 56.9). VE against CIN1+, CIN2+, and CIN3+ irrespective of HPV DNA was 21.7% (10.7, 31.4), 30.4% (16.4, 42.1), and 33.4% (9.1, 51.5) and was consistently significant only in 15- to 17-year-old women (27.4% [10.8, 40.9], 41.8% [22.3, 56.7], and 55.8% [19.2, 76.9]). In the TVC-naive, VE against CIN1+, CIN2+, and CIN3+ associated with HPV-16/18 was 96.5% (89.0, 99.4), 98.4% (90.4, 100), and 100% (64.7, 100), and irrespective of HPV DNA it was 50.1% (35.9, 61.4), 70.2% (54.7, 80.9), and 87.0% (54.9, 97.7). VE against 12-month persistent infection with HPV-16/18 was 89.9% (84.0, 94.0), and that against HPV-31/33/45/51 was 49.0% (34.7, 60.3). In conclusion, vaccinating adolescents before sexual debut has a substantial impact on the overall incidence of high-grade cervical abnormalities, and catch-up vaccination up to 18 years

  4. The 0.125 degree finite-volume General Circulation Model on the NASA Columbia Supercomputer: Preliminary Simulations of Mesoscale Vortices

    NASA Technical Reports Server (NTRS)

    Shen, B.-W.; Atlas, R.; Chern, J.-D.; Reale, O.; Lin, S.-J.; Lee, T.; Chang, J.

    2005-01-01

    The NASA Columbia supercomputer was ranked second on the TOP500 List in November, 2004. Such a quantum jump in computing power provides unprecedented opportunities to conduct ultra-high resolution simulations with the finite-volume General Circulation Model (fvGCM). During 2004, the model was run in realtime experimentally at 0.25 degree resolution producing remarkable hurricane forecasts [Atlas et al., 2005]. In 2005, the horizontal resolution was further doubled, which makes the fvGCM comparable to the first mesoscale resolving General Circulation Model at the Earth Simulator Center [Ohfuchi et al., 2004]. Nine 5-day 0.125 degree simulations of three hurricanes in 2004 are presented first for model validation. Then it is shown how the model can simulate the formation of the Catalina eddies and Hawaiian lee vortices, which are generated by the interaction of the synoptic-scale flow with surface forcing, and have never been reproduced in a GCM before.)

  5. Simulator study of the stall departure characteristics of a light general aviation airplane with and without a wing-leading-edge modification

    NASA Technical Reports Server (NTRS)

    Riley, D. R.

    1985-01-01

    A six-degree-of-freedom nonlinear simulation was developed for a two-place, single-engine, low-wing general aviation airplane for the stall and initial departure regions of flight. Two configurations, one with and one without an outboard wing-leading-edge modification, were modeled. The math models developed are presented simulation predictions and flight-test data for validation purposes and simulation results for the two configurations for various maneuvers and power settings are compared to show the beneficial influence of adding the wing-leading-edge modification.

  6. Comparison of tropical pacific temperature and current simulations with two vertical mixing schemes embedded in an ocean general circulation model and reference to observations

    NASA Technical Reports Server (NTRS)

    Halpern, David; Chao, YI; Ma, Chung-Chun; Mechoso, Carlos R.

    1995-01-01

    The Pacanowski-Philander (PP) and Mellor-Yamada (MY) parameterization models of vertical mixing by turbulent processes were embedded in the Geophysical Fluid Dynamics Laboratory high-resolution ocean general circulation model of the tropical Pacific Ocean. All other facets of the numerical simulations were the same. Simulations were made for the 1987-1988 period. At the equator the MY simulation produced near-surface temperatures more uniform with depth, a deeper thermocline, a deeper core speed of the Equatorial Undercurrent, and a South Equatorial Current with greater vertical thickness compared with that computed with the PP method. Along 140 deg W, between 5 deg N and 10 deg N, both simulations were the same. Moored buoy current and temperature observations had been recorded by the Pacific Marine Environmental Laboratory at three sites (165 deg E, 140 deg W, 110 deg W) along the equator and at three sites (5 deg N, 7 deg N, 9 deg N) along 140 deg W. Simulated temperatures were lower than those observed in the near-surface layer and higher than those observed in the thermocline. Temperature simulations were in better agreement with observations compared to current simulations. At the equator, PP current and temperature simulations were more representative of the observations than MY simulations.

  7. Use of Generalized Fluid System Simulation Program (GFSSP) for Teaching and Performing Senior Design Projects at the Educational Institutions

    NASA Technical Reports Server (NTRS)

    Majumdar, A. K.; Hedayat, A.

    2015-01-01

    This paper describes the experience of the authors in using the Generalized Fluid System Simulation Program (GFSSP) in teaching Design of Thermal Systems class at University of Alabama in Huntsville. GFSSP is a finite volume based thermo-fluid system network analysis code, developed at NASA/Marshall Space Flight Center, and is extensively used in NASA, Department of Defense, and aerospace industries for propulsion system design, analysis, and performance evaluation. The educational version of GFSSP is freely available to all US higher education institutions. The main purpose of the paper is to illustrate the utilization of this user-friendly code for the thermal systems design and fluid engineering courses and to encourage the instructors to utilize the code for the class assignments as well as senior design projects. The need for a generalized computer program for thermofluid analysis in a flow network has been felt for a long time in aerospace industries. Designers of thermofluid systems often need to know pressures, temperatures, flow rates, concentrations, and heat transfer rates at different parts of a flow circuit for steady state or transient conditions. Such applications occur in propulsion systems for tank pressurization, internal flow analysis of rocket engine turbopumps, chilldown of cryogenic tanks and transfer lines, and many other applications of gas-liquid systems involving fluid transients and conjugate heat and mass transfer. Computer resource requirements to perform time-dependent, three-dimensional Navier-Stokes computational fluid dynamic (CFD) analysis of such systems are prohibitive and therefore are not practical. Available commercial codes are generally suitable for steady state, single-phase incompressible flow. Because of the proprietary nature of such codes, it is not possible to extend their capability to satisfy the above-mentioned needs. Therefore, the Generalized Fluid System Simulation Program (GFSSP1) has been developed at NASA

  8. Cross-diffusion-driven hydrodynamic instabilities in a double-layer system: General classification and nonlinear simulations

    NASA Astrophysics Data System (ADS)

    Budroni, M. A.

    2015-12-01

    Cross diffusion, whereby a flux of a given species entrains the diffusive transport of another species, can trigger buoyancy-driven hydrodynamic instabilities at the interface of initially stable stratifications. Starting from a simple three-component case, we introduce a theoretical framework to classify cross-diffusion-induced hydrodynamic phenomena in two-layer stratifications under the action of the gravitational field. A cross-diffusion-convection (CDC) model is derived by coupling the fickian diffusion formalism to Stokes equations. In order to isolate the effect of cross-diffusion in the convective destabilization of a double-layer system, we impose a starting concentration jump of one species in the bottom layer while the other one is homogeneously distributed over the spatial domain. This initial configuration avoids the concurrence of classic Rayleigh-Taylor or differential-diffusion convective instabilities, and it also allows us to activate selectively the cross-diffusion feedback by which the heterogeneously distributed species influences the diffusive transport of the other species. We identify two types of hydrodynamic modes [the negative cross-diffusion-driven convection (NCC) and the positive cross-diffusion-driven convection (PCC)], corresponding to the sign of this operational cross-diffusion term. By studying the space-time density profiles along the gravitational axis we obtain analytical conditions for the onset of convection in terms of two important parameters only: the operational cross-diffusivity and the buoyancy ratio, giving the relative contribution of the two species to the global density. The general classification of the NCC and PCC scenarios in such parameter space is supported by numerical simulations of the fully nonlinear CDC problem. The resulting convective patterns compare favorably with recent experimental results found in microemulsion systems.

  9. Role and regulation of sigma S in general resistance conferred by low-shear simulated microgravity in Escherichia coli.

    PubMed

    Lynch, S V; Brodie, E L; Matin, A

    2004-12-01

    Life on Earth evolved in the presence of gravity, and thus it is of interest from the perspective of space exploration to determine if diminished gravity affects biological processes. Cultivation of Escherichia coli under low-shear simulated microgravity (SMG) conditions resulted in enhanced stress resistance in both exponential- and stationary-phase cells, making the latter superresistant. Given that microgravity of space and SMG also compromise human immune response, this phenomenon constitutes a potential threat to astronauts. As low-shear environments are encountered by pathogens on Earth as well, SMG-conferred resistance is also relevant to controlling infectious disease on this planet. The SMG effect resembles the general stress response on Earth, which makes bacteria resistant to multiple stresses; this response is sigma s dependent, irrespective of the growth phase. However, SMG-induced increased resistance was dependent on sigma s only in stationary phase, being independent of this sigma factor in exponential phase. sigma s concentration was some 30% lower in exponential-phase SMG cells than in normal gravity cells but was twofold higher in stationary-phase SMG cells. While SMG affected sigma s synthesis at all levels of control, the main reasons for the differential effect of this gravity condition on sigma s levels were that it rendered the sigma protein less stable in exponential phase and increased rpoS mRNA translational efficiency. Since sigma s regulatory processes are influenced by mRNA and protein-folding patterns, the data suggest that SMG may affect these configurations.

  10. Effects of cloud-radiative heating on atmospheric general circulation model (AGCM) simulations of convectively coupled equatorial waves

    NASA Astrophysics Data System (ADS)

    Lin, Jia-Lin; Kim, Daehyun; Lee, Myong-In; Kang, In-Sik

    2007-12-01

    This study examines the effects of cloud-radiative heating on convectively coupled equatorial waves simulated by the Seoul National University (SNU) atmospheric general circulation model (AGCM). The strength of cloud-radiative heating is adjusted by modifying the autoconversion rate needed for cloud condensates to grow up to raindrops. The results show that increasing the autoconversion rate has little effect on the climatological mean precipitation, but it significantly reduces the time-mean clouds and radiative heating in the upper troposphere and enhances heating due to moist processes in the middle troposphere. These lead to cooling of time-mean upper troposphere temperature and drying of lower-troposphere moisture. Reduction of cloud-radiative heating enhances the prominence of Kelvin and n = 0 eastward inertial gravity (EIG) waves. It also tends to enhance significantly the variance of the Kelvin, equatorial Rossby (ER), mixed Rossby-gravity (MRG), and n = 1 westward inertial gravity (WIG) waves, but not the Madden-Julian Oscillation (MJO) or n = 0 EIG wave. Reduction of cloud-radiative heating has little effect on the phase speed of the waves, which is associated with unchanged effective static stability caused by the near cancellation between reduced dry static stability and reduced diabatic heating. An important implication of this study is that when tuning GCM's top-of-the-atmosphere radiative fluxes to fit the observations, one needs to make sure that the enhancement factor of cloud-radiative heating at the intraseasonal timescale also fits with the observation so that the convectively coupled equatorial waves are not suppressed.

  11. A parallel algorithm for switch-level timing simulation on a hypercube multiprocessor

    NASA Technical Reports Server (NTRS)

    Rao, Hariprasad Nannapaneni

    1989-01-01

    The parallel approach to speeding up simulation is studied, specifically the simulation of digital LSI MOS circuitry on the Intel iPSC/2 hypercube. The simulation algorithm is based on RSIM, an event driven switch-level simulator that incorporates a linear transistor model for simulating digital MOS circuits. Parallel processing techniques based on the concepts of Virtual Time and rollback are utilized so that portions of the circuit may be simulated on separate processors, in parallel for as large an increase in speed as possible. A partitioning algorithm is also developed in order to subdivide the circuit for parallel processing.

  12. Real-time simulation of a spiking neural network model of the basal ganglia circuitry using general purpose computing on graphics processing units.

    PubMed

    Igarashi, Jun; Shouno, Osamu; Fukai, Tomoki; Tsujino, Hiroshi

    2011-11-01

    Real-time simulation of a biologically realistic spiking neural network is necessary for evaluation of its capacity to interact with real environments. However, the real-time simulation of such a neural network is difficult due to its high computational costs that arise from two factors: (1) vast network size and (2) the complicated dynamics of biologically realistic neurons. In order to address these problems, mainly the latter, we chose to use general purpose computing on graphics processing units (GPGPUs) for simulation of such a neural network, taking advantage of the powerful computational capability of a graphics processing unit (GPU). As a target for real-time simulation, we used a model of the basal ganglia that has been developed according to electrophysiological and anatomical knowledge. The model consists of heterogeneous populations of 370 spiking model neurons, including computationally heavy conductance-based models, connected by 11,002 synapses. Simulation of the model has not yet been performed in real-time using a general computing server. By parallelization of the model on the NVIDIA Geforce GTX 280 GPU in data-parallel and task-parallel fashion, faster-than-real-time simulation was robustly realized with only one-third of the GPU's total computational resources. Furthermore, we used the GPU's full computational resources to perform faster-than-real-time simulation of three instances of the basal ganglia model; these instances consisted of 1100 neurons and 33,006 synapses and were synchronized at each calculation step. Finally, we developed software for simultaneous visualization of faster-than-real-time simulation output. These results suggest the potential power of GPGPU techniques in real-time simulation of realistic neural networks.

  13. Using the Variable-Resolution General Circulation Model CAM-SE to Simulate Regional Tropical Cyclone Climatology

    NASA Astrophysics Data System (ADS)

    Zarzycki, C. M.; Jablonowski, C.; Taylor, M. A.

    2012-12-01

    The ability of General Circulation Models (GCMs) to resolve tropical cyclones in the climate system has traditionally been difficult due to issues such as small storm size and the existence of key thermodynamic processes requiring significant parameterization. At traditional GCM grid resolutions of 50-300 km tropical cyclones are severely under-resolved, if not totally unresolved. Recent improvements in computational ability as well as advances in GCM model design now allow for simulations with grid spacings as small as 10-25 km. At these resolutions, models are able to more effectively capture key dynamical features of tropical cyclones. This paper explores a variable-resolution global model approach that allows for high spatial resolutions in areas of interest, such as low-latitude ocean basins where tropical cyclogenesis occurs. Such GCM designs with multi-resolution meshes serve to bridge the gap between globally uniform grids and limited area models and have the potential to become a future tool for regional climate assessments. A statically-nested, variable-resolution option has recently been introduced into the National Center for Atmospheric Research (NCAR) Community Atmosphere Model's (CAM) Spectral Element (SE) dynamical core. The SE dynamical core is also known as the 'High-Order Method Modeling Environment' (HOMME). We present aquaplanet climate experiments which showcase the ability of nested meshes to produce realistic tropical cyclones selectively in high resolution grids embedded within a global domain. We also evaluate model performance when coupled to an active land model and forced with historical sea surface temperatures by comparing multi-year results from variable-resolution CAM-SE to other globally-uniform high resolution tropical cyclone studies recently completed by the climate modeling community. Specific focus is paid to intensity profiles and track densities as well as the interannual variability in storm count in tropical regions of

  14. An adaptive maneuvering logic computer program for the simulation of one-on-one air-to-air combat. Volume 1: General description

    NASA Technical Reports Server (NTRS)

    Burgin, G. H.; Fogel, L. J.; Phelps, J. P.

    1975-01-01

    A technique for computer simulation of air combat is described. Volume 1 decribes the computer program and its development in general terms. Two versions of the program exist. Both incorporate a logic for selecting and executing air combat maneuvers with performance models of specific fighter aircraft. In the batch processing version the flight paths of two aircraft engaged in interactive aerial combat and controlled by the same logic are computed. The realtime version permits human pilots to fly air-to-air combat against the adaptive maneuvering logic (AML) in Langley Differential Maneuvering Simulator (DMS). Volume 2 consists of a detailed description of the computer programs.

  15. Effects of Tropical Cyclones on Ocean Heat Transport as simulated by a High Resolution Coupled General Circulation Model

    NASA Astrophysics Data System (ADS)

    Scoccimarro, E.; Gualdi, S.; Bellucci, A.; Sanna, A.; Vichi, M.; Manzini, E.; Fogli, P.; Navarra, A.; Oddo, P.

    2010-12-01

    Tropical cyclones (TCs) activity and their relationship with the Northern hemispheric Ocean Heat Transport (OHT) is investigated. The analysis has been performed using 20C3M (20th Century) and A1B (21st Century) IPCC scenario climate simulations obtained running a state-of-the-art atmosphere-ocean-seaice coupled global model, with high-resolution in the atmosphere. The capability of the model to reproduce a realistic TC climatology has been assessed by comparing the model results from the simulation of the 20th Century with observations. The model is able to simulate tropical cyclone-like vortices with many features similar to the observed TCs. The simulated TC activity exhibits realistic structure, geographical distribution and interannual variability, indicating that the model is able to reproduce the major basic mechanisms that link the TC activity with the large scale circulation. The TC-induced ocean cooling is well represented and the TCs activity increases significantly the poleward OHT out of the tropics, but also increases the heat transport into the deep tropics. This effect, investigated looking at the 100 most intense Northern hemisphere TCs, is strongly correlated to the TC-induced momentum flux at the surface of the ocean. TCs frequency and intensity appear to be substantially stationary through the whole 1950- 2069 period. Also the effect of the TCs induced OHT) does not significantly change during the simulated period.

  16. Tropical cyclone activity in a warmer climate as simulated by a high-resolution coupled general circulation model: changes in frequency and air-sea interaction.

    NASA Astrophysics Data System (ADS)

    Scoccimarro, Enrico; Gualdi, Silvio; Navarra, Antonio

    2010-05-01

    This study investigates the possible changes that the greenhouse global warming might generate in the characteristics of the tropical cyclones (TCs). The analysis has been performed using climate scenario simulations carried out with a fully coupled high-resolution global general circulation model (INGV-SXG) with a T106 atmospheric resolution. The capability of the model to reproduce a reasonably realistic TC climatology has been assessed by comparing the model results from a simulation of the XX Century with observations. The model appears to be able to simulate tropical cyclone-like vortices with many features similar to the observed TCs. The simulated TC activity exhibits realistic geographical distribution, seasonal modulation and interannual variability, suggesting that the model is able to reproduce the major basic mechanisms that link the TC occurrence with the large scale circulation. The results from the climate scenarios reveal a substantial general reduction of the TC frequency when the atmospheric CO2 concentration is doubled and quadrupled. The reduction appears particularly evident for the tropical north west Pacific (NWP) and north Atlantic (ATL). In the NWP the weaker TC activity seems to be associated with a reduced amount of convective instabilities. In the ATL region the weaker TC activity seems to be due to both the increased stability of the atmosphere and a stronger vertical wind shear. Despite the generally reduced TC activity, there is evidence of increased rainfall associated with the simulated cyclones. Using the new fully coupled CMCC model (CMCC_MED), with a T159 atmospheric resolution, we found a significant modulation of the Ocean Heat Transport (OHT) induced by the TC activity. Thus the possible changes that greenhouse induced global warming during 21st century might generate in the characteristics of the TC-induced OHT have been analyzed.

  17. Gradient Theory simulations of pure fluid interfaces using a generalized expression for influence parameters and a Helmholtz energy equation of state for fundamentally consistent two-phase calculations.

    PubMed

    Dahms, Rainer N

    2015-05-01

    The fidelity of Gradient Theory simulations depends on the accuracy of saturation properties and influence parameters, and require equations of state (EoS) which exhibit a fundamentally consistent behavior in the two-phase regime. Widely applied multi-parameter EoS, however, are generally invalid inside this region. Hence, they may not be fully suitable for application in concert with Gradient Theory despite their ability to accurately predict saturation properties. The commonly assumed temperature-dependence of pure component influence parameters usually restricts their validity to subcritical temperature regimes. This may distort predictions for general multi-component interfaces where temperatures often exceed the critical temperature of vapor phase components. Then, the calculation of influence parameters is not well defined. In this paper, one of the first studies is presented in which Gradient Theory is combined with a next-generation Helmholtz energy EoS which facilitates fundamentally consistent calculations over the entire two-phase regime. Illustrated on pentafluoroethane as an example, reference simulations using this method are performed. They demonstrate the significance of such high-accuracy and fundamentally consistent calculations for the computation of interfacial properties. These reference simulations are compared to corresponding results from cubic PR EoS, widely-applied in combination with Gradient Theory, and mBWR EoS. The analysis reveals that neither of those two methods succeeds to consistently capture the qualitative distribution of obtained key thermodynamic properties in Gradient Theory. Furthermore, a generalized expression of the pure component influence parameter is presented. This development is informed by its fundamental definition based on the direct correlation function of the homogeneous fluid and by presented high-fidelity simulations of interfacial density profiles. The new model preserves the accuracy of previous temperature

  18. Gradient Theory simulations of pure fluid interfaces using a generalized expression for influence parameters and a Helmholtz energy equation of state for fundamentally consistent two-phase calculations

    DOE PAGES

    Dahms, Rainer N.

    2014-12-31

    The fidelity of Gradient Theory simulations depends on the accuracy of saturation properties and influence parameters, and require equations of state (EoS) which exhibit a fundamentally consistent behavior in the two-phase regime. Widely applied multi-parameter EoS, however, are generally invalid inside this region. Hence, they may not be fully suitable for application in concert with Gradient Theory despite their ability to accurately predict saturation properties. The commonly assumed temperature-dependence of pure component influence parameters usually restricts their validity to subcritical temperature regimes. This may distort predictions for general multi-component interfaces where temperatures often exceed the critical temperature of vapor phasemore » components. Then, the calculation of influence parameters is not well defined. In this paper, one of the first studies is presented in which Gradient Theory is combined with a next-generation Helmholtz energy EoS which facilitates fundamentally consistent calculations over the entire two-phase regime. Illustrated on pentafluoroethane as an example, reference simulations using this method are performed. They demonstrate the significance of such high-accuracy and fundamentally consistent calculations for the computation of interfacial properties. These reference simulations are compared to corresponding results from cubic PR EoS, widely-applied in combination with Gradient Theory, and mBWR EoS. The analysis reveals that neither of those two methods succeeds to consistently capture the qualitative distribution of obtained key thermodynamic properties in Gradient Theory. Furthermore, a generalized expression of the pure component influence parameter is presented. This development is informed by its fundamental definition based on the direct correlation function of the homogeneous fluid and by presented high-fidelity simulations of interfacial density profiles. As a result, the new model preserves the accuracy of

  19. Gradient Theory simulations of pure fluid interfaces using a generalized expression for influence parameters and a Helmholtz energy equation of state for fundamentally consistent two-phase calculations

    SciTech Connect

    Dahms, Rainer N.

    2014-12-31

    The fidelity of Gradient Theory simulations depends on the accuracy of saturation properties and influence parameters, and require equations of state (EoS) which exhibit a fundamentally consistent behavior in the two-phase regime. Widely applied multi-parameter EoS, however, are generally invalid inside this region. Hence, they may not be fully suitable for application in concert with Gradient Theory despite their ability to accurately predict saturation properties. The commonly assumed temperature-dependence of pure component influence parameters usually restricts their validity to subcritical temperature regimes. This may distort predictions for general multi-component interfaces where temperatures often exceed the critical temperature of vapor phase components. Then, the calculation of influence parameters is not well defined. In this paper, one of the first studies is presented in which Gradient Theory is combined with a next-generation Helmholtz energy EoS which facilitates fundamentally consistent calculations over the entire two-phase regime. Illustrated on pentafluoroethane as an example, reference simulations using this method are performed. They demonstrate the significance of such high-accuracy and fundamentally consistent calculations for the computation of interfacial properties. These reference simulations are compared to corresponding results from cubic PR EoS, widely-applied in combination with Gradient Theory, and mBWR EoS. The analysis reveals that neither of those two methods succeeds to consistently capture the qualitative distribution of obtained key thermodynamic properties in Gradient Theory. Furthermore, a generalized expression of the pure component influence parameter is presented. This development is informed by its fundamental definition based on the direct correlation function of the homogeneous fluid and by presented high-fidelity simulations of interfacial density profiles. As a result, the new model preserves the accuracy of previous

  20. Generalized Temporal Acceleration Scheme for Kinetic Monte Carlo Simulations of Surface Catalytic Processes by Scaling the Rates of Fast Reactions.

    PubMed

    Dybeck, Eric Christopher; Plaisance, Craig Patrick; Neurock, Matthew

    2017-02-14

    A novel algorithm has been developed to achieve temporal acceleration during kinetic Monte Carlo (KMC) simulations of surface catalytic processes. This algorithm allows for the direct simulation of reaction networks containing kinetic processes occurring on vastly disparate timescales which computationally overburden standard KMC methods. Previously developed methods for temporal acceleration in KMC have been designed for specific systems and often require a priori information from the user such as identifying the fast and slow processes. In the approach presented herein, quasi-equilibrated processes are identified automatically based on previous executions of the forward and reverse reactions. Temporal acceleration is achieved by automatically scaling the intrinsic rate constants of the quasi-equilibrated processes, bringing their rates closer to the timescales of the slow kinetically relevant non-equilibrated processes. All reactions are still simulated directly, although with modified rate constants. Abrupt changes in the underlying dynamics of the reaction network are identified during the simulation and the reaction rate constants are rescaled accordingly. The algorithm has been utilized here to model the Fischer-Tropsch synthesis reaction over ruthenium nanoparticles. This reaction network has multiple timescale-disparate processes which would be intractable to simulate without the aid of temporal acceleration. The accelerated simulations are found to give reaction rates and selectivities indistinguishable from those calculated by an equivalent mean-field kinetic model. The computational savings of the algorithm can span many orders of magnitude in realistic systems and the computational cost is not limited by the magnitude of the timescale disparity in the system processes. Furthermore, the algorithm has been designed in a generic fashion and can easily be applied to other surface catalytic processes of interest.

  1. Baroclinic Waves and CO2 Snowfalls in Martian Winter Polar Atmosphere Simulated by a General Circulation Model

    NASA Astrophysics Data System (ADS)

    Kuroda, T.; Medvedev, A. S.; Kasaba, Y.; Hartogh, P.

    2016-09-01

    The CO2 snowfalls in winter polar atmosphere have been simulated by a MGCM. Our results show that they are strongly modulated by the synoptic dynamical features such as baroclinic planetary waves, as well as by gravity waves in smaller scale.

  2. Responses of the Tropical Pacific to Wind Forcing as Observed by Spaceborne Sensors and Simulated by an Ocean General Circulation Model

    NASA Technical Reports Server (NTRS)

    Liu, W. Timothy; Tang, Qenqing; Atlas, Robert

    1996-01-01

    In this study, satellite observations, in situ measurements, and model simulations are combined to assess the oceanic response to surface wind forcing in the equatorial Pacific. The surface wind fields derived from observations by the spaceborne special sensor microwave imager (SSM/I) and from the operational products of the European Centre for Medium-Range Weather Forecasts (ECMWF) are compared. When SSM/I winds are used to force a primitive-equation ocean general circulation model (OGCM), they produce 3 C more surface cooling than ECMWF winds for the eastern equatorial Pacific during the cool phase of an El Nino-Southern Oscillation event. The stronger cooling by SSM/I winds is in good agreement with measurements at the moored buoys and observations by the advanced very high resolution radiometer, indicating that SSM/I winds are superior to ECMWF winds in forcing the tropical ocean. In comparison with measurements from buoys, tide gauges, and the Geosat altimeter, the OGCM simulates the temporal variations of temperature, steric, and sea level changes with reasonable realism when forced with the satellite winds. There are discrepancies between model simulations and observations that are common to both wind forcing fields, one of which is the simulation of zonal currents; they could be attributed to model deficiencies. By examining model simulations under two winds, vertical heat advection and uplifting of the thermocline are found to be the dominant factors in the anomalous cooling of the ocean mixed layer.

  3. A new technique for simulating composite material. Task 2: Analytical solutions with Generalized Impedance Boundary Conditions (GIBCs)

    NASA Technical Reports Server (NTRS)

    Ricoy, M. A.; Volakis, J. L.

    1989-01-01

    The diffraction problem associated with a multilayer material slab recessed in a perfectly conducting ground plane is formulated and solved via the Generalized Scattering Matrix Formulation (GSMF) in conjunction with the dual integral equation approach. The multilayer slab is replaced by a surface obeying a generalized impedance boundary condition (GIBC) to facilitate the computation of the pertinent Wiener Hopf split functions and their zeros. Both E(sub z) and H(sub z) polarizations are considered and a number of scattering patterns are presented, some of which are compared to exact results available for a homogeneous recessed slab.

  4. Implementation of a generalized actuator disk wind turbine model into the weather research and forecasting model for large-eddy simulation applications

    SciTech Connect

    Mirocha, J. D.; Kosovic, B.; Aitken, M. L.; Lundquist, J. K.

    2014-01-10

    A generalized actuator disk (GAD) wind turbine parameterization designed for large-eddy simulation (LES) applications was implemented into the Weather Research and Forecasting (WRF) model. WRF-LES with the GAD model enables numerical investigation of the effects of an operating wind turbine on and interactions with a broad range of atmospheric boundary layer phenomena. Numerical simulations using WRF-LES with the GAD model were compared with measurements obtained from the Turbine Wake and Inflow Characterization Study (TWICS-2011), the goal of which was to measure both the inflow to and wake from a 2.3-MW wind turbine. Data from a meteorological tower and two light-detection and ranging (lidar) systems, one vertically profiling and another operated over a variety of scanning modes, were utilized to obtain forcing for the simulations, and to evaluate characteristics of the simulated wakes. Simulations produced wakes with physically consistent rotation and velocity deficits. Two surface heat flux values of 20 W m–2 and 100 W m–2 were used to examine the sensitivity of the simulated wakes to convective instability. Simulations using the smaller heat flux values showed good agreement with wake deficits observed during TWICS-2011, whereas those using the larger value showed enhanced spreading and more-rapid attenuation. This study demonstrates the utility of actuator models implemented within atmospheric LES to address a range of atmospheric science and engineering applications. In conclusion, validated implementation of the GAD in a numerical weather prediction code such as WRF will enable a wide range of studies related to the interaction of wind turbines with the atmosphere and surface.

  5. Comparative Analysis of Simulated Annealing (SA) and Simplified Generalized SA (SGSA) for Estimation Optimal of Parametric Functional in CATIVIC

    SciTech Connect

    Freitez, Juan A.; Sanchez, Morella; Ruette, Fernando

    2009-08-13

    Application of simulated annealing (SA) and simplified GSA (SGSA) techniques for parameter optimization of parametric quantum chemistry method (CATIVIC) was performed. A set of organic molecules were selected for test these techniques. Comparison of the algorithms was carried out for error function minimization with respect to experimental values. Results show that SGSA is more efficient than SA with respect to computer time. Accuracy is similar in both methods; however, there are important differences in the final set of parameters.

  6. Alternatives for Mixed-Effects Meta-Regression Models in the Reliability Generalization Approach: A Simulation Study

    ERIC Educational Resources Information Center

    López-López, José Antonio; Botella, Juan; Sánchez-Meca, Julio; Marín-Martínez, Fulgencio

    2013-01-01

    Since heterogeneity between reliability coefficients is usually found in reliability generalization studies, moderator analyses constitute a crucial step for that meta-analytic approach. In this study, different procedures for conducting mixed-effects meta-regression analyses were compared. Specifically, four transformation methods for the…

  7. Generalized DSS shell for developing simulation and optimization hydro-economic models of complex water resources systems

    NASA Astrophysics Data System (ADS)

    Pulido-Velazquez, Manuel; Lopez-Nicolas, Antonio; Harou, Julien J.; Andreu, Joaquin

    2013-04-01

    Hydrologic-economic models allow integrated analysis of water supply, demand and infrastructure management at the river basin scale. These models simultaneously analyze engineering, hydrology and economic aspects of water resources management. Two new tools have been designed to develop models within this approach: a simulation tool (SIM_GAMS), for models in which water is allocated each month based on supply priorities to competing uses and system operating rules, and an optimization tool (OPT_GAMS), in which water resources are allocated optimally following economic criteria. The characterization of the water resource network system requires a connectivity matrix representing the topology of the elements, generated using HydroPlatform. HydroPlatform, an open-source software platform for network (node-link) models, allows to store, display and export all information needed to characterize the system. Two generic non-linear models have been programmed in GAMS to use the inputs from HydroPlatform in simulation and optimization models. The simulation model allocates water resources on a monthly basis, according to different targets (demands, storage, environmental flows, hydropower production, etc.), priorities and other system operating rules (such as reservoir operating rules). The optimization model's objective function is designed so that the system meets operational targets (ranked according to priorities) each month while following system operating rules. This function is analogous to the one used in the simulation module of the DSS AQUATOOL. Each element of the system has its own contribution to the objective function through unit cost coefficients that preserve the relative priority rank and the system operating rules. The model incorporates groundwater and stream-aquifer interaction (allowing conjunctive use simulation) with a wide range of modeling options, from lumped and analytical approaches to parameter-distributed models (eigenvalue approach). Such

  8. Generalized theory and simulation of spontaneous and super-radiant emissions in electron devices and free-electron lasers.

    PubMed

    Pinhasi, Y; Lurie, Yu

    2002-02-01

    A unified formulation of spontaneous (shot-noise) and super-radiant emissions in electron devices is presented. We consider an electron beam with an arbitrary temporal current modulation propagating through the interaction region of the electronic device. The total electromagnetic field is presented as a stochastic process and expanded in terms of transverse eigenmodes of the medium (free space or waveguide), in which the field is excited and propagates. Using the waveguide excitation equations, formulated in the frequency domain, an analytical expression for the power spectral density of the electromagnetic radiation is derived. The spectrum of the excited radiation is shown to be composed of two terms, which are the spontaneous and super-radiant emissions. For a continuous, unmodulated beam, the shot noise produces only incoherent spontaneous emission of a power proportional to the flux eI(0) (DC current) of the particles in the electron beam. When the beam is modulated or prebunched, a partially coherent super-radiant emission is also produced with power proportional to the current spectrum /I(omega)/(2). Based on a three-dimensional model, a numerical particle simulation code was developed. A set of coupled-mode excitation equations in the frequency domain are solved self-consistently with the equations of particles motion. The simulation considers random distributions of density and energy in the electron beam and takes into account the statistical and spectral features of the excited radiation. At present, the code can simulate free-electron lasers (FELs) operation in various modes: spontaneous and self-amplified spontaneous emission, super-radiance and stimulated emission, in the linear and nonlinear Compton or Raman regimes. We employed the code to demonstrate spontaneous and super-radiant emission excited when a prebunched electron beam passes through a wiggler of an FEL.

  9. Global trend analysis of surface CO simulated using the global atmospheric chemistry general circulation model, EMAC (ECHAM5/MESSy)

    NASA Astrophysics Data System (ADS)

    Yoon, Jongmin; Pozzer, Andrea; Lelieveld, Jos

    2013-04-01

    Carbon monoxide (CO) is an important trace gas in tropospheric chemistry. It directly influences the concentration of tropospheric hydroxyl radical (OH), and therefore regulates the lifetimes of various tropospheric trace gases. Since anthropogenic activity produces about 60% of the annual global emission of the tropospheric CO, temporal trend analysis of surface CO is needed to understand the increasing (decreasing) influence of humans on the cleansing capacity of the atmosphere. In this study, the global trend of surface CO from 2001 to 2010 was estimated using the EMAC (ECHAM5/MESSy for Atmospheric Chemistry) model. The simulation is based on the emission scenario based on RCP8.5 (Representative Concentration Pathways). The global EMAC simulations of monthly surface CO are evaluated with monthly MOPITT (Measurements Of Pollution In The Troposphere) observations (i.e. MOP03TM), and the spatial correlations range from 0.87 to 0.97. The simulated trends are compared with the data from a global surface CO monitoring network, the World Data Centre for Greenhouse Gases (WDCGG), which includes also the NOAA/CMDL (Climate Monitoring and Diagnostic Laboratory of the National Oceanic and Atmospheric Administration) Cooperative Air Sampling Network. Over the United States and Western Europe, the significant decreases of surface CO are estimated at -49.7±2.7 and -38.6±2.7 ppbv per decade. In contrast, the surface CO increased by +12.4±10.2 and +7.2±3.7 ppbv per decade over South America and South Africa, respectively.

  10. Assessing the ability of isotope-enabled General Circulation Models to simulate the variability of Iceland water vapor isotopic composition

    NASA Astrophysics Data System (ADS)

    Erla Sveinbjornsdottir, Arny; Steen-Larsen, Hans Christian; Jonsson, Thorsteinn; Ritter, Francois; Riser, Camilla; Messon-Delmotte, Valerie; Bonne, Jean Louis; Dahl-Jensen, Dorthe

    2014-05-01

    During the fall of 2010 we installed an autonomous water vapor spectroscopy laser (Los Gatos Research analyzer) in a lighthouse on the Southwest coast of Iceland (63.83°N, 21.47°W). Despite initial significant problems with volcanic ash, high wind, and attack of sea gulls, the system has been continuously operational since the end of 2011 with limited down time. The system automatically performs calibration every 2 hours, which results in high accuracy and precision allowing for analysis of the second order parameter, d-excess, in the water vapor. We find a strong linear relationship between d-excess and local relative humidity (RH) when normalized to SST. The observed slope of approximately -45 o/oo/% is similar to theoretical predictions by Merlivat and Jouzel [1979] for smooth surface, but the calculated intercept is significant lower than predicted. Despite this good linear agreement with theoretical calculations, mismatches arise between the simulated seasonal cycle of water vapour isotopic composition using LMDZiso GCM nudged to large-scale winds from atmospheric analyses, and our data. The GCM is not able to capture seasonal variations in local RH, nor seasonal variations in d-excess. Based on daily data, the performance of LMDZiso to resolve day-to-day variability is measured based on the strength of the correlation coefficient between observations and model outputs. This correlation coefficient reaches ~0.8 for surface absolute humidity, but decreases to ~0.6 for δD and ~0.45 d-excess. Moreover, the magnitude of day-to-day humidity variations is also underestimated by LMDZiso, which can explain the underestimated magnitude of isotopic depletion. Finally, the simulated and observed d-excess vs. RH has similar slopes. We conclude that the under-estimation of d-excess variability may partly arise from the poor performance of the humidity simulations.

  11. GARROTXA Cosmological Simulations of Milky Way-sized Galaxies: General Properties, Hot-gas Distribution, and Missing Baryons

    NASA Astrophysics Data System (ADS)

    Roca-Fàbrega, Santi; Valenzuela, Octavio; Colín, Pedro; Figueras, Francesca; Krongold, Yair; Velázquez, Héctor; Avila-Reese, Vladimir; Ibarra-Medel, Hector

    2016-06-01

    We introduce a new set of simulations of Milky Way (MW)-sized galaxies using the AMR code ART + hydrodynamics in a Λ cold dark matter cosmogony. The simulation series is called GARROTXA and it follows the formation of a halo/galaxy from z = 60 to z = 0. The final virial mass of the system is ˜7.4 × 1011 M ⊙. Our results are as follows. (a) Contrary to many previous studies, the circular velocity curve shows no central peak and overall agrees with recent MW observations. (b) Other quantities, such as M\\_\\ast (6 × 1010 M ⊙) and R d (2.56 kpc), fall well inside the observational MW range. (c) We measure the disk-to-total ratio kinematically and find that D/T = 0.42. (d) The cold-gas fraction and star formation rate at z = 0, on the other hand, fall short of the values estimated for the MW. As a first scientific exploitation of the simulation series, we study the spatial distribution of hot X-ray luminous gas. We have found that most of this X-ray emitting gas is in a halo-like distribution accounting for an important fraction but not all of the missing baryons. An important amount of hot gas is also present in filaments. In all our models there is not a massive disk-like hot-gas distribution dominating the column density. Our analysis of hot-gas mock observations reveals that the homogeneity assumption leads to an overestimation of the total mass by factors of 3-5 or to an underestimation by factors of 0.7-0.1, depending on the used observational method. Finally, we confirm a clear correlation between the total hot-gas mass and the dark matter halo mass of galactic systems.

  12. Climate and Habitability of Kepler 452b Simulated with a Fully Coupled Atmosphere–Ocean General Circulation Model

    NASA Astrophysics Data System (ADS)

    Hu, Yongyun; Wang, Yuwei; Liu, Yonggang; Yang, Jun

    2017-01-01

    The discovery of Kepler 452b is a milestone in searching for habitable exoplanets. While it has been suggested that Kepler 452b is the first Earth-like exoplanet discovered in the habitable zone of a Sun-like star, its climate states and habitability require quantitative studies. Here, we first use a three-dimensional fully coupled atmosphere–ocean climate model to study the climate and habitability of an exoplanet around a Sun-like star. Our simulations show that Kepler 452b is habitable if CO2 concentrations in its atmosphere are comparable or lower than that in the present-day Earth atmosphere. However, our simulations also suggest that Kepler 452b can become too hot to be habitable if there is the lack of silicate weathering to limit CO2 concentrations in the atmosphere. We also address whether Kepler 452b could retain its water inventory after 6.0 billion years of lifetime. These results in the present Letter will provide insights about climate and habitability for other undiscovered exoplanets similar to Kepler 452b, which may be observable by future observational missions.

  13. Simulation of a hump structure in the optical scattering rate within a generalized Allen formalism and its application to copper oxide systems.

    PubMed

    Hwang, Jungseek

    2013-07-24

    We propose a possible way to simulate a hump structure in the optical scattering rate. The optical scattering rate of correlated charge carriers can be defined within an extended Drude model formalism. When some electron- and hole-doped copper oxide systems are in spin density or charge density wave phases they show hump structures in their optical scattering rates. The hump structures have not yet been simulated or clearly understood. We are able to simulate the hump structure by using a peak followed by a dip feature in the normalized density of states within a generalized Allen formalism. We observe that reversing the order of the dip and peak gives completely different features in the optical scattering rate; a peak-dip (dip-peak) results in a hump (a valley) in the scattering rate. We also obtain the real parts of the optical conductivity and reflectance spectra from the simulated optical scattering rate and compare them with published experimental spectra. From these comparisons we conclude that the peak-dip order can give the hump structure that is observed experimentally in copper oxide systems. Finally we fit two published optical spectra with our new model and discuss our results and the possible origin of the dip or peak features in the normalized density of states.

  14. The balance of kinetic and total energy simulated by the OSU two-level atmospheric general circulation model for January and July

    NASA Technical Reports Server (NTRS)

    Wang, J.-T.; Gates, W. L.; Kim, J.-W.

    1984-01-01

    A three-year simulation which prescribes seasonally varying solar radiation and sea surface temperature is the basis of the present study of the horizontal structure of the balances of kinetic and total energy simulated by Oregon State University's two-level atmospheric general circulation model. Mechanisms responsible for the local energy changes are identified, and the energy balance requirement's fulfilment is examined. In January, the vertical integral of the total energy shows large amounts of external heating over the North Pacific and Atlantic, together with cooling over most of the land area of the Northern Hemisphere. In July, an overall seasonal reversal is found. Both seasons are also characterized by strong energy flux divergence in the tropics, in association with the poleward transport of heat and momentum.

  15. Mariage des maillages: A new 3D general relativistic hydro code for simulation of gravitational waves from core-collapses.

    NASA Astrophysics Data System (ADS)

    Novak, Jerome; Dimmelmeier, Harrald; Font-Roda, Jose A.

    2004-12-01

    We present a new three-dimensional general relativistic hydrodynamics code which can be applied to study stellar core collapses and the resulting gravitational radiation. This code uses two different numerical techniques to solve partial differential equations arising in the model: high-resolution shock capturing (HRSC) schemes for the evolution of hydrodynamic quantities and spectral methods for the solution of Einstein equations. The equations are written and solved using spherical polar coordinates, best suited to stellar topology. Einstein equations are formulated within the 3+1 formalism and conformal flat condition (CFC) for the 3-metric and gravitational radiation is extracted using Newtonian quadrupole formulation.

  16. Multi-objective combined simulation-optimization of Lake Tana multi reservoir system, Ethiopia, using two different generalized reservoir system operation models

    NASA Astrophysics Data System (ADS)

    Müller, R.; Saliha, A. H.; Schütze, N.

    2012-04-01

    Finding optimal management strategies can be a challenging task when water resources systems serve multiple contrary goals. Reasonable trade-offs among these goals have to be found. Multi-objective optimization (MOO) is able to obtain a so-called Pareto front containing multiple trade-off solutions (Pareto-optimal solutions). An attractive and powerful MOO method is multi-objective combined simulation-optimization (MOCSO). A MOCSO model generally consists of two main components: a simulation model and a multi-objective optimization algorithm. Generalized reservoir system operation models (GRSOM) are commonly used as simulation models in water resources planning and management of multi-reservoir systems. The purpose of the GRSOM in MOCSO is to simulate a specific management strategy in order to evaluate the objective functions for the multi-objective optimization algorithm. As the distribution of water in the reservoir system is affected by the particular operation of the GRSOM, the choice of the simulation model is a crucial step in the MOCSO setup which may significantly affect the obtained results. In a case study of the Lake Tana sub-basin (Ethiopia) two MOCSO models are compared. The general reservoir operation simulation models HEC-5 and OASIS (Operational Analysis and Simulation of Integrated Systems) are combined with the Multi-Objective Covariance Matrix-Adaptation Evolution Strategy (MO-CMA-ES). HEC-5 is a pure simulation model which computes the distribution of water in the system sequentially and serially from upstream to downstream following a given algorithm. OASIS, a simulation-optimization model, incorporates a linear or nonlinear solver which distributes the water sequentially in the system according to an objective function defined by the decision maker. Lake Tana is the largest freshwater lake in Ethiopia. Its water resources are controllable due to the Chara Chara weir. For hydropower production water is directly diverted from Lake Tana to Belles sub
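
    The Pareto-dominance step at the heart of such MOCSO setups can be illustrated in isolation. The hypothetical Python sketch below filters a set of objective vectors down to its non-dominated (Pareto-optimal) subset, assuming all objectives are to be maximized; it stands in for neither HEC-5, OASIS, nor MO-CMA-ES, only for the dominance test they all rely on.

```python
import numpy as np

def pareto_front(objectives: np.ndarray) -> np.ndarray:
    """Boolean mask of non-dominated rows; every objective is maximized.

    A solution is dominated if another solution is at least as good in
    every objective and strictly better in at least one.
    """
    n = objectives.shape[0]
    non_dominated = np.ones(n, dtype=bool)
    for i in range(n):
        # Row j dominates row i if it is >= in every objective and > in one.
        dominates_i = (np.all(objectives >= objectives[i], axis=1)
                       & np.any(objectives > objectives[i], axis=1))
        non_dominated[i] = not dominates_i.any()
    return non_dominated

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical objective vectors, e.g. (hydropower yield, irrigation supply).
    objs = rng.random((200, 2))
    mask = pareto_front(objs)
    print(f"{mask.sum()} of {len(objs)} candidate operating policies are Pareto-optimal")
```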

  17. GOOSE, a generalized object-oriented simulation environment for developing and testing reactor models and control strategies

    SciTech Connect

    Ford, C.E.; March-Leuba, C.; Guimaraes, L.; Ugolini, D.

    1991-01-01

    GOOSE, prototype software for a fully interactive, object-oriented simulation environment, is being developed as part of the Advanced Controls Program at Oak Ridge National Laboratory. Dynamic models may easily be constructed and tested; fully interactive capabilities allow the user to alter model parameters and complexity without recompilation. This environment provides access to powerful tools, such as numerical integration packages, graphical displays, and online help. Portability has been an important design goal; the system was written in Objective-C in order to run on a wide variety of computers and operating systems, including UNIX workstations and personal computers. A detailed library of nuclear reactor components, currently under development, will also be described. 5 refs., 4 figs.

  18. Hybrid MPI/OpenMP Implementation of the ORAC Molecular Dynamics Program for Generalized Ensemble and Fast Switching Alchemical Simulations.

    PubMed

    Procacci, Piero

    2016-06-27

    We present a new release (6.0β) of the ORAC program [Marsili et al. J. Comput. Chem. 2010, 31, 1106-1116] with a hybrid OpenMP/MPI (open multiprocessing/message passing interface) multilevel parallelism tailored for generalized ensemble (GE) and fast switching double annihilation (FS-DAM) nonequilibrium technology aimed at evaluating the binding free energy in drug-receptor systems on high-performance computing platforms. The production of the GE or FS-DAM trajectories is handled using a weak scaling parallel approach on the MPI level only, while a strong scaling force decomposition scheme is implemented for intranode computations with shared memory access at the OpenMP level. The efficiency, simplicity, and inherent parallel nature of the ORAC implementation of the FS-DAM algorithm project the code as a possible effective tool for second-generation high-throughput virtual screening in drug discovery and design. The code, along with documentation, testing, and ancillary tools, is distributed under the provisions of the General Public License and can be freely downloaded at www.chim.unifi.it/orac.

  19. Final report for "Development of generalized mapping tools to improve implementation of data driven computer simulations" (LDRD 04-ERD-083)

    SciTech Connect

    Pasyanos, M; Ramirez, A; Franz, G

    2005-02-04

    Probabilistic inverse techniques, like the Markov Chain Monte Carlo (MCMC) algorithm, have had recent success in combining disparate data types into a consistent model. The Stochastic Engine (SE) initiative was a technique that developed this method and applied it to a number of earth science and national security applications. For instance, while the method was originally developed to solve ground flow problems (Aines et al.), it has also been applied to atmospheric modeling and engineering problems. The investigators of this proposal have applied the SE to regional-scale lithospheric earth models, which have applications to hazard analysis and nuclear explosion monitoring. While this broad applicability is appealing, tailoring the method for each application is inefficient and time-consuming. Stochastic methods invert data by probabilistically sampling the model space and comparing observations predicted by the proposed model to observed data and preferentially accepting models that produce a good fit, generating a posterior distribution. In other words, the method ''inverts'' for a model or, more precisely, a distribution of models, by a series of forward calculations. While powerful, the technique is often challenging to implement, as the mapping from model space to data needs to be ''customized'' for each data type. For example, all proposed models might need to be transformed through sensitivity kernels from 3-D models to 2-D models in one step in order to compute path integrals, and transformed in a completely different manner in the next step. We seek technical enhancements that widen the applicability of the Stochastic Engine by generalizing some aspects of the method (i.e. model-to-data transformation types, configuration, model representation). Initially, we wish to generalize the transformations that are necessary to match the observations to proposed models. These transformations are sufficiently general not to pertain to any single application. This
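
    The core loop the abstract describes (propose a model, run a forward calculation, and preferentially accept models that fit the observations) is essentially a Metropolis-type sampler. The sketch below is a minimal, hypothetical version of that accept/reject loop with a placeholder forward model and a Gaussian misfit; it is not the Stochastic Engine implementation itself.

```python
import numpy as np

def forward_model(m: np.ndarray) -> np.ndarray:
    """Placeholder forward calculation mapping model parameters to predicted data."""
    return np.array([m[0] + m[1], m[0] * m[1]])

def log_likelihood(m, d_obs, sigma):
    """Gaussian misfit between predicted and observed data."""
    resid = forward_model(m) - d_obs
    return -0.5 * np.sum((resid / sigma) ** 2)

def metropolis(d_obs, sigma, n_steps=5000, step=0.1, seed=0):
    rng = np.random.default_rng(seed)
    m = np.zeros(2)                       # starting model
    logl = log_likelihood(m, d_obs, sigma)
    samples = []
    for _ in range(n_steps):
        proposal = m + step * rng.standard_normal(m.shape)
        logl_new = log_likelihood(proposal, d_obs, sigma)
        # Accept better-fitting models always, worse ones with Boltzmann-like probability.
        if np.log(rng.random()) < logl_new - logl:
            m, logl = proposal, logl_new
        samples.append(m.copy())
    return np.array(samples)

if __name__ == "__main__":
    d_obs = np.array([3.0, 2.0])          # synthetic "observations"
    post = metropolis(d_obs, sigma=0.1)
    print("posterior mean model (second half of chain):",
          post[len(post) // 2:].mean(axis=0))
```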

  20. Coupling Planet Simulator Mars, a general circulation model of the Martian atmosphere, to the ice sheet model SICOPOLIS

    NASA Astrophysics Data System (ADS)

    Stenzel, O. J.; Grieger, B.; Keller, H. U.; Greve, R.; Fraedrich, K.; Kirk, E.; Lunkeit, F.

    2007-11-01

    A general circulation model of the Martian Atmosphere is coupled with a 3-dimensional polythermal ice-sheet model of the polar ice caps. With this combination a series of experiments is carried out to investigate the impact of long-term obliquity change on the Martian north polar ice cap (NPC). The behaviour of the NPC is tested under obliquities of θ=15∘, 25∘ and 35∘. With increasing obliquity the area covered by the NPC gets smaller but does not vanish. However, when started from an ice-free condition the models develop an ice cap only for low obliquities. The 'critical' obliquity at which a build-up of a new polar cap is possible is θ=22∘.

  1. Effects of surface current-wind interaction in an eddy-rich general ocean circulation simulation of the Baltic Sea

    NASA Astrophysics Data System (ADS)

    Dietze, Heiner; Löptien, Ulrike

    2016-08-01

    Deoxygenation in the Baltic Sea endangers fish yields and favours noxious algal blooms. Yet, vertical transport processes ventilating the oxygen-deprived waters at depth and replenishing nutrient-deprived surface waters (thereby fuelling export of organic matter to depth) are not comprehensively understood. Here, we investigate the effects of the interaction between surface currents and winds on upwelling in an eddy-rich general ocean circulation model of the Baltic Sea. Contrary to expectations, we find that accounting for current-wind effects inhibits the overall vertical exchange between oxygenated surface waters and oxygen-deprived water at depth. At major upwelling sites, however (e.g. off the southern coasts of Sweden and Finland), the reverse holds: the interaction of topographically steered surface currents with winds blowing over the sea results in a climatological sea surface temperature cooling of 0.5 K. This implies that current-wind effects drive substantial local upwelling of cold and nutrient-replete waters.
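
    The current-wind effect studied here is commonly represented by computing the wind stress from the wind velocity relative to the moving sea surface rather than from the absolute wind. The sketch below illustrates that idea only, using a generic bulk formula with textbook values for the air density and drag coefficient; it is not the parameterization of the model configuration used in the study.

```python
import numpy as np

RHO_AIR = 1.25   # air density, kg m^-3 (generic value)
C_D = 1.2e-3     # bulk drag coefficient (generic value)

def wind_stress(u_wind, u_current, relative=True):
    """Bulk wind stress tau = rho_a * C_D * |U| * U, with U the wind either
    taken as-is or reduced by the surface current ("relative wind")."""
    u = np.asarray(u_wind, dtype=float)
    if relative:
        u = u - np.asarray(u_current, dtype=float)
    speed = np.linalg.norm(u)
    return RHO_AIR * C_D * speed * u

if __name__ == "__main__":
    wind = [8.0, 0.0]       # m/s eastward wind
    current = [0.5, 0.0]    # m/s eastward surface current
    tau_abs = wind_stress(wind, current, relative=False)
    tau_rel = wind_stress(wind, current, relative=True)
    # Accounting for the current reduces the stress when wind and current align.
    print("absolute-wind stress :", tau_abs)
    print("relative-wind stress :", tau_rel)
```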

  2. Estimating changes in temperature extremes from millennial-scale climate simulations using generalized extreme value (GEV) distributions

    NASA Astrophysics Data System (ADS)

    Huang, Whitney K.; Stein, Michael L.; McInerney, David J.; Sun, Shanshan; Moyer, Elisabeth J.

    2016-07-01

    Changes in extreme weather may produce some of the largest societal impacts of anthropogenic climate change. However, it is intrinsically difficult to estimate changes in extreme events from the short observational record. In this work we use millennial runs from the Community Climate System Model version 3 (CCSM3) in equilibrated pre-industrial and possible future (700 and 1400 ppm CO2) conditions to examine both how extremes change in this model and how well these changes can be estimated as a function of run length. We estimate changes to distributions of future temperature extremes (annual minima and annual maxima) in the contiguous United States by fitting generalized extreme value (GEV) distributions. Using 1000-year pre-industrial and future time series, we show that warm extremes largely change in accordance with mean shifts in the distribution of summertime temperatures. Cold extremes warm more than mean shifts in the distribution of wintertime temperatures, but changes in GEV location parameters are generally well explained by the combination of mean shifts and reduced wintertime temperature variability. For cold extremes at inland locations, return levels at long recurrence intervals show additional effects related to changes in the spread and shape of GEV distributions. We then examine uncertainties that result from using shorter model runs. In theory, the GEV distribution can allow prediction of infrequent events using time series shorter than the recurrence interval of those events. To investigate how well this approach works in practice, we estimate 20-, 50-, and 100-year extreme events using segments of varying lengths. We find that even using GEV distributions, time series of comparable or shorter length than the return period of interest can lead to very poor estimates. These results suggest caution when attempting to use short observational time series or model runs to infer infrequent extremes.
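
    The block-maxima workflow described above can be reproduced in miniature with SciPy: fit a GEV distribution to a series of annual maxima and read off return levels from the fitted distribution. The sketch below uses synthetic data and scipy.stats.genextreme (whose shape parameter has the opposite sign to the xi of some GEV conventions); it illustrates the method, not the CCSM3 analysis itself.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)

# Synthetic "annual maximum temperature" series (1000 model years).
annual_max = rng.gumbel(loc=38.0, scale=2.0, size=1000)

# Fit a GEV distribution (SciPy's c corresponds to minus the usual shape xi).
c, loc, scale = genextreme.fit(annual_max)

# T-year return level = value exceeded with probability 1/T in any given year.
for T in (20, 50, 100):
    level = genextreme.isf(1.0 / T, c, loc=loc, scale=scale)
    print(f"{T:>3d}-year return level: {level:6.2f}")

# Shorter records give noisier estimates; compare a 100-year subsample.
c2, loc2, scale2 = genextreme.fit(annual_max[:100])
print("100-yr estimate of the 100-year level:",
      round(genextreme.isf(0.01, c2, loc=loc2, scale=scale2), 2))
```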

  3. Simulations of the Atmospheric General Circulation Using a Cloud-Resolving Model as a Superparameterization of Physical Processes.

    NASA Astrophysics Data System (ADS)

    Khairoutdinov, Marat; Randall, David; Demott, Charlotte

    2005-07-01

    Traditionally, the effects of clouds in GCMs have been represented by semiempirical parameterizations. Recently, a cloud-resolving model (CRM) was embedded into each grid column of a realistic GCM, the NCAR Community Atmosphere Model (CAM), to serve as a superparameterization (SP) of clouds. Results of the standard CAM and the SP-CAM are contrasted, both using T42 resolution (2.8° × 2.8° grid), 26 vertical levels, and up to a 500-day-long simulation. The SP was based on a two-dimensional (2D) CRM with 64 grid columns and 24 levels collocated with the 24 lowest levels of CAM. In terms of the mean state, the SP-CAM produces quite reasonable geographical distributions of precipitation, precipitable water, top-of-the-atmosphere radiative fluxes, cloud radiative forcing, and high-cloud fraction for both December-January-February and June-July-August. The most notable and persistent precipitation bias in the western Pacific, during the Northern Hemisphere summer of all the SP-CAM runs with 2D SP, seems to go away through the use of a small-domain three-dimensional (3D) SP with the same number of grid columns as the 2D SP, but arranged in an 8 × 8 square with identical horizontal resolution of 4 km. Two runs with the 3D SP have been carried out, with and without explicit large-scale momentum transport by convection. Interestingly, the double ITCZ feature seems to go away in the run that includes momentum transport. The SP improves the diurnal variability of nondrizzle precipitation frequency over the standard model by precipitating most frequently during late afternoon hours over the land, as observed, while the standard model maximizes its precipitation frequency around local solar noon. Over the ocean, both models precipitate most frequently in the early morning hours as observed. The SP model also reproduces the observed global distribution of the percentage of days with nondrizzle precipitation rather well. In contrast, the standard model tends to precipitate more

  4. Simulating organic species with the global atmospheric chemistry general circulation model ECHAM5/MESSy1: a comparison of model results with observations

    NASA Astrophysics Data System (ADS)

    Pozzer, A.; Jöckel, P.; Tost, H.; Sander, R.; Ganzeveld, L.; Kerkweg, A.; Lelieveld, J.

    2007-01-01

    The atmospheric-chemistry general circulation model ECHAM5/MESSy1 is evaluated with observations of different organic ozone precursors. This study continues a prior analysis which focused primarily on the representation of atmospheric dynamics and ozone. We use the results of the same reference simulation and apply a statistical analysis using data from numerous field campaigns. The results serve as a basis for future improvements of the model system. ECHAM5/MESSy1 generally reproduces the spatial distribution and the seasonal cycle of carbon monoxide (CO) very well. However, for the background in the northern hemisphere we obtain a negative bias (mainly due to an underestimation of emissions from fossil fuel combustion), and in the high latitude southern hemisphere a yet unexplained positive bias. The model results agree well with observations of alkanes, whereas severe problems in the simulation of alkenes are present. For oxygenated compounds the results are ambiguous: The model results are in good agreement with observations of formaldehyde, but systematic biases are present for methanol and acetone. The discrepancies between the model results and the observations are explained (partly) by means of sensitivity studies.

  5. Simulating organic species with the global atmospheric chemistry general circulation model ECHAM5/MESSy1: a comparison of model results with observations

    NASA Astrophysics Data System (ADS)

    Pozzer, A.; Jöckel, P.; Tost, H.; Sander, R.; Ganzeveld, L.; Kerkweg, A.; Lelieveld, J.

    2007-05-01

    The atmospheric-chemistry general circulation model ECHAM5/MESSy1 is evaluated with observations of different organic ozone precursors. This study continues a prior analysis which focused primarily on the representation of atmospheric dynamics and ozone. We use the results of the same reference simulation and apply a statistical analysis using data from numerous field campaigns. The results serve as a basis for future improvements of the model system. ECHAM5/MESSy1 generally reproduces the spatial distribution and the seasonal cycle of carbon monoxide (CO) very well. However, for the background in the Northern Hemisphere we obtain a negative bias (mainly due to an underestimation of emissions from fossil fuel combustion), and in the high latitude Southern Hemisphere a yet unexplained positive bias. The model results agree well with observations of alkanes, whereas severe problems in the simulation of alkenes and isoprene are present. For oxygenated compounds the results are ambiguous: The model results are in good agreement with observations of formaldehyde, but systematic biases are present for methanol and acetone. The discrepancies between the model results and the observations are explained (partly) by means of sensitivity studies.

  6. Molecular dynamics simulations and generalized Lenard-Balescu calculations of electron-ion temperature relaxation in plasmas

    NASA Astrophysics Data System (ADS)

    Benedict, Lorin X.; Surh, Michael P.; Khairallah, Saad A.; Castor, John I.; Whitley, Heather D.; Richards, David F.; Glosli, James N.; Murillo, Michael S.; Graziani, Frank R.

    2011-10-01

    We present classical molecular dynamics (MD) calculations of temperature relaxation in hydrogen, Ar-doped hydrogen, and SF6 plasmas in which the two-particle interactions are represented by statistical potentials of the Dunn-Broyles and modified Kelbg forms. Using a multi-species generalized Lenard-Balescu theory in which the full frequency and wave-vector dependent dielectric response is included, we show that deviations of our hydrogen MD results from the weak-coupling theories such as Landau-Spitzer are due in large part to the use of the statistical potentials which approximate, in a classical way, the effects of quantum diffraction. Classical MD with Kelbg potentials is shown to be better at reproducing intermediate-to-weak-coupling results of true quantum-Coulomb plasmas, but it is also shown that MD with both types of statistical potential yield the correct quantum result in the limit of infinitesimal plasma coupling. Effects of dynamical screening in multi-component plasmas are also discussed.

  7. Molecular dynamics simulations and generalized Lenard-Balescu calculations of electron-ion temperature equilibration in plasmas

    NASA Astrophysics Data System (ADS)

    Benedict, Lorin X.; Surh, Michael P.; Castor, John I.; Khairallah, Saad A.; Whitley, Heather D.; Richards, David F.; Glosli, James N.; Murillo, Michael S.; Scullard, Christian R.; Grabowski, Paul E.; Michta, David; Graziani, Frank R.

    2012-10-01

    We study the problem of electron-ion temperature equilibration in plasmas. We consider pure H at various densities and temperatures and Ar-doped H at temperatures high enough so that the Ar is fully ionized. Two theoretical approaches are used: classical molecular dynamics (MD) with statistical two-body potentials and a generalized Lenard-Balescu (GLB) theory capable of treating multicomponent weakly coupled plasmas. The GLB is used in two modes: (1) with the quantum dielectric response in the random-phase approximation (RPA) together with the pure Coulomb interaction and (2) with the classical (ℏ→0) dielectric response (both with and without local-field corrections) together with the statistical potentials. We find that the MD results are described very well by classical GLB including the statistical potentials and without local-field corrections (RPA only); worse agreement is found when static local-field effects are included, in contradiction to the classical pure-Coulomb case with like charges. The results of the various approaches are all in excellent agreement with pure-Coulomb quantum GLB when the temperature is high enough. In addition, we show that classical calculations with statistical potentials derived from the exact quantum two-body density matrix produce results in far better agreement with pure-Coulomb quantum GLB than classical calculations performed with older existing statistical potentials.

  8. The northern wintertime divergence extrema at 200 hPa and surface cyclones as simulated in the AMIP integration of the ECMWF general circulation model

    SciTech Connect

    Boyle, J.S.

    1994-11-01

    Divergence and convergence centers at 200 hPa and mean sea level pressure (MSLP) cyclones were located every 6 hr for a 10-yr general circulation model (GCM) simulation with the ECMWF (Cycle 36) for the boreal winters from 1980 to 1988. The simulation used the observed monthly mean sea surface temperature (SST) for the decade. Analysis of the frequency, location, and strength of these centers and cyclones gives insight into the dynamical response of the model to the varying SST. The results indicate that (1) the model produces reasonable climatologies of upper-level divergence and MSLP cyclones; (2) the model distribution of anomalies of divergence and convergence centers and MSLP cyclones is consistent with observations for the 1982-83 and 1986-87 El Niño events; (3) the tropical Indian Ocean is the region of greatest divergence activity and interannual variability in the model; (4) the variability of the divergence centers is greater than that of the convergence centers; (5) strong divergence centers occur chiefly over the ocean in the midlatitudes but are more land-based in the tropics, except in the Indian Ocean; and (6) locations of divergence and convergence centers can be a useful tool for the intercomparison of global atmospheric simulations.
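
    Locating divergence and convergence centers in a gridded 200 hPa field amounts to finding local extrema that exceed a strength threshold. The hypothetical sketch below does this for a 2D array with scipy.ndimage; applying it to the negated field would locate convergence centers. It is a generic illustration, not the tracking procedure used in the study.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def find_centers(field: np.ndarray, size: int = 5, threshold: float = 0.0):
    """Return (i, j) indices of grid points that are local maxima of `field`
    within a (size x size) neighbourhood and exceed `threshold`."""
    local_max = maximum_filter(field, size=size, mode="wrap") == field
    return np.argwhere(local_max & (field > threshold))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Synthetic smooth "divergence" field on a lat-lon grid, plus weak noise.
    lon = np.linspace(0, 2 * np.pi, 144)
    lat = np.linspace(-np.pi / 2, np.pi / 2, 73)
    div = (np.sin(3 * lon)[None, :] * np.cos(2 * lat)[:, None]
           + 0.1 * rng.standard_normal((73, 144)))
    centers = find_centers(div, size=9, threshold=0.8)
    print(f"found {len(centers)} divergence centers")
```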

  9. Simulations of Hurricane Katrina (2005) with the 0.125 degree finite-volume General Circulation Model on the NASA Columbia Supercomputer

    NASA Technical Reports Server (NTRS)

    Shen, B.-W.; Atlas, R.; Reale, O.; Lin, S.-J.; Chern, J.-D.; Chang, J.; Henze, C.

    2006-01-01

    Hurricane Katrina was the sixth most intense hurricane in the Atlantic. Katrina's forecast poses major challenges, the most important of which is its rapid intensification. Hurricane intensity forecasting with General Circulation Models (GCMs) is difficult because of their coarse resolution. In this article, six 5-day simulations with the ultra-high resolution finite-volume GCM are conducted on the NASA Columbia supercomputer to show the effects of increased resolution on the intensity predictions of Katrina. It is found that the 0.125 degree runs give tracks comparable to those of the 0.25 degree runs, but provide better intensity forecasts, bringing the center pressure much closer to observations with differences of only ±12 hPa. In the runs initialized at 1200 UTC 25 AUG, the 0.125 degree run simulates a more realistic intensification rate and better near-eye wind distributions. Moreover, the first global 0.125 degree simulation without convection parameterization (CP) produces even better intensity evolution and near-eye winds than the control run with CP.

  10. Kelvin waves and ozone Kelvin waves in the quasi-biennial oscillation and semiannual oscillation: A simulation by a high-resolution chemistry-coupled general circulation model

    NASA Astrophysics Data System (ADS)

    Watanabe, Shingo; Takahashi, Masaaki

    2005-09-01

    Equatorial Kelvin waves and ozone Kelvin waves were simulated by a T63L250 chemistry-coupled general circulation model with a high vertical resolution (300 m). The model produces a realistic quasi-biennial oscillation (QBO) and a semiannual oscillation (SAO) in the equatorial stratosphere. The QBO has a period slightly longer than 2 years, and the SAO shows rapid reversals from westerly to easterly regimes and gradual descents of westerlies. Results for the zonal wave number 1 slow and fast Kelvin waves are discussed. Structure of the waves and phase relationships between temperature and ozone perturbations coincide well with satellite observations made by LIMS, CLAES, and MLS. They are generally in phase (antiphase) in the lower (upper) stratosphere as theoretically expected. The fast Kelvin waves in the temperature and ozone are dominant in the upper stratosphere because the slow Kelvin waves are effectively filtered by the QBO westerly. In this simulation, the fast Kelvin waves encounter their critical levels in the upper stratosphere when zonal asymmetry of the SAO westerly is enhanced by an intrusion of the extratropical planetary waves. In addition to the critical level filtering effect, modulations of wave properties by background winds are evident near easterly and westerly shears associated with the QBO and SAO. Enhancement of wave amplitude in the QBO westerly shear is well coincident with radiosonde observations. Increase/decrease of vertical wavelength in the QBO easterly/westerly is obvious in this simulation, which is consistent with the linear wave theory. Shortening of wave period due to the descending QBO westerly shear zone is demonstrated for the first time. Moreover, dominant periods during the QBO westerly phase are longer than those during the QBO easterly phase for both the slow and fast Kelvin waves.

  11. Simulating Mars' Dust Cycle with a Mars General Circulation Model: Effects of Water Ice Cloud Formation on Dust Lifting Strength and Seasonality

    NASA Technical Reports Server (NTRS)

    Kahre, Melinda A.; Haberle, Robert; Hollingsworth, Jeffery L.

    2012-01-01

    The dust cycle is critically important for the current climate of Mars. The radiative effects of dust impact the thermal and dynamical state of the atmosphere [1,2,3]. Although dust is present in the Martian atmosphere throughout the year, the level of dustiness varies with season. The atmosphere is generally the dustiest during northern fall and winter and the least dusty during northern spring and summer [4]. Dust particles are lifted into the atmosphere by dust storms that range in size from meters to thousands of kilometers across [5]. Regional storm activity is enhanced before northern winter solstice (Ls 200°-240°), and after northern solstice (Ls 305°-340°), which produces elevated atmospheric dust loadings during these periods [5,6,7]. These pre- and post-solstice increases in dust loading are thought to be associated with transient eddy activity in the northern hemisphere with cross-equatorial transport of dust leading to enhanced dust lifting in the southern hemisphere [6]. Interactive dust cycle studies with Mars General Circulation Models (MGCMs) have included the lifting, transport, and sedimentation of radiatively active dust. Although the predicted global dust loadings from these simulations capture some aspects of the observed dust cycle, there are marked differences between the simulated and observed dust cycles [8,9,10]. Most notably, the maximum dust loading is robustly predicted by models to occur near northern winter solstice and is due to dust lifting associated with downslope flows on the flanks of the Hellas basin. Thus far, models have had difficulty simulating the observed pre- and post-solstice peaks in dust loading.

  12. A randomised trial deploying a simulation to investigate the impact of hospital discharge letters on patient care in general practice

    PubMed Central

    Jiwa, Moyez; Meng, Xingqiong; O'Shea, Carolyn; Magin, Parker; Dadich, Ann; Pillai, Vinita

    2014-01-01

    Objective: To determine how the timing and length of hospital discharge letters impact on the number of ongoing patient problems identified by general practitioners (GPs). Trial design: GPs were randomised into four groups. Each viewed a video monologue of an actor-patient as he might present to his GP following a hospital admission with 10 problems. GPs were provided with a medical record as well as a long or short discharge letter, which was available when the video was viewed or 1 week later. GPs indicated if they would prescribe, refer or order tests for the patient's problems. Methods: Setting: Primary care. Participants: Practising Australian GPs. Intervention: A short or long hospital discharge letter enumerating patient problems. Outcome measure: Number of ongoing patient problems out of 10 identified for management by the GPs. Randomisation: 1:1 randomisation. Blinding (masking): Single-blind. Results: Numbers randomised: 59 GPs. Recruitment: GPs were recruited from a network of 102 GPs across Australia. Numbers analysed: 59 GPs. Outcome: GPs who received the long letter immediately were more satisfied with this information (p<0.001). Those who received the letter immediately identified significantly more health problems (p=0.001). GPs who received a short, delayed discharge letter were less satisfied than those who received a longer delayed letter (p=0.03); however, both groups who received the delayed letter identified a similar number of health problems. GPs who were older, who practised in an inner regional area or who offered more patient sessions per week identified fewer health problems (p values <0.01, <0.05 and <0.05, respectively). Harms: Nil. Conclusions: Receiving information during patient consultation, as well as GP characteristics, influences the number of patient problems addressed. Trial registration number: ACTRN12614000403639. PMID:25005597

  13. Monte Carlo Simulation of Markov, Semi-Markov, and Generalized Semi- Markov Processes in Probabilistic Risk Assessment

    NASA Technical Reports Server (NTRS)

    English, Thomas

    2005-01-01

    A standard tool of reliability analysis used at NASA-JSC is the event tree. An event tree is simply a probability tree, with the probabilities determining the next step through the tree specified at each node. The nodal probabilities are determined by a reliability study of the physical system at work for a particular node. The reliability study performed at a node is typically referred to as a fault tree analysis, with the potential of a fault tree existing for each node on the event tree. When examining an event tree, it is obvious why the event tree/fault tree approach has been adopted. Typical event trees are quite complex in nature, and the event tree/fault tree approach provides a systematic and organized approach to reliability analysis. The purpose of this study was twofold. First, we wanted to explore the possibility that a semi-Markov process can create dependencies between sojourn times (the times it takes to transition from one state to the next) that can decrease the uncertainty when estimating times to failure. Using a generalized semi-Markov model, we studied a four-element reliability model and were able to demonstrate such sojourn time dependencies. Second, we wanted to study the use of semi-Markov processes to introduce a time variable into the event tree diagrams that are commonly developed in PRA (Probabilistic Risk Assessment) analyses. Event tree end states which change with time are more representative of failure scenarios than are the usual static probability-derived end states.
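
    The distinguishing feature of a semi-Markov process, arbitrary sojourn-time distributions rather than exponential ones, is easy to Monte Carlo-simulate. The sketch below samples time to failure for a small illustrative reliability model with state-dependent sojourn distributions; the states, transition matrix, and distributions are hypothetical and are not the four-element model of the report.

```python
import numpy as np

rng = np.random.default_rng(7)

STATES = ["nominal", "degraded", "failed"]
# Transition probabilities between states (rows sum to 1; "failed" absorbs).
P = np.array([[0.0, 0.9, 0.1],
              [0.3, 0.0, 0.7],
              [0.0, 0.0, 1.0]])

def sojourn_time(state: int) -> float:
    """State-dependent sojourn distributions (the semi-Markov ingredient)."""
    if state == 0:
        return rng.weibull(1.5) * 100.0              # hours in nominal operation
    if state == 1:
        return rng.lognormal(mean=2.0, sigma=0.5)    # hours while degraded
    return 0.0

def time_to_failure() -> float:
    state, t = 0, 0.0
    while STATES[state] != "failed":
        t += sojourn_time(state)
        state = rng.choice(len(STATES), p=P[state])
    return t

if __name__ == "__main__":
    ttf = np.array([time_to_failure() for _ in range(20000)])
    print(f"mean time to failure: {ttf.mean():.1f} h, "
          f"5th-95th percentile: {np.percentile(ttf, [5, 95]).round(1)}")
```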

  14. Generalization of the paraxial trajectory method for the analysis of non-paraxial rays: simulation program G-optk for electron gun characterization.

    PubMed

    Fujita, Shin; Takebe, Masahiro; Ushio, Wataru; Shimoyama, Hiroshi

    2010-01-01

    The paraxial trajectory method has been generalized for application to the cathode rays inside electron guns. The generalized method can handle rays that initially make a large angle with the optical axis with a satisfactory accuracy. The key to the success of the generalization is the adoption of the trigonometric function sine for the trajectory slope specification, instead of the conventional use of the tangent. Formulas have been derived to relate the ray conditions (position and slope of the ray at reference planes) on the cathode to those at the crossover plane using third-order polynomial functions. Some of the polynomial coefficients can be used as the optical parameters in the characterization of electron sources; the electron gun focal length gives a quantitative estimate of both the crossover size and the angular current intensity. An electron gun simulation program G-optk has been developed based on the mathematical formulations presented in the article. The program calculates the principal paraxial trajectories and the relevant optical parameters from axial potentials and fields. It gives the electron-optical-column designers a clear physical picture of the electron gun much faster than the conventional ray-tracing methods.

  15. A mixed-contact formulation for a dynamics simulation of flexible systems: An integration with model-reduction techniques

    NASA Astrophysics Data System (ADS)

    Starc, Blaž; Čepon, Gregor; Boltežar, Miha

    2017-04-01

    A new numerical procedure for efficient dynamics simulations of linear-elastic systems with unilateral contacts is proposed. The method is based on the event-driven integration of a contact problem with a combination of single- and set-valued force laws together with classical model-reduction techniques. According to the contact state, the developed event-driven integration enables the formulation of reduced system matrices. Moreover, to enable the transition among different reduced spaces the formulation of the initial conditions is also presented. The method has been developed separately for each of the four most popular model-reduction techniques (Craig-Bampton, MacNeal, Rubin and dual Craig-Bampton). The applicability of the newly presented method is demonstrated on a simple clamped-beam structure with a unilateral contact, which is excited with a harmonic force at the free end.
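
    The event-driven integration described here detects the instant at which the contact state changes and restarts the integrator with the matrices of the new state. A minimal, hypothetical one-degree-of-freedom analogue using the event detection of scipy.integrate.solve_ivp is sketched below (a spring-mounted mass that bounces off a rigid stop with a restitution coefficient); it illustrates only the event-driven switching, not the reduced-order formulations of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

M, K, GAP, E_REST = 1.0, 100.0, 0.01, 0.8   # mass, stiffness, gap, restitution

def free_flight(t, y):
    """Mass-spring system while out of contact: y = [x, v]."""
    x, v = y
    return [v, -K / M * x]

def hits_stop(t, y):
    return y[0] - GAP          # zero-crossing marks contact with the stop
hits_stop.terminal = True
hits_stop.direction = 1        # trigger only while moving toward the stop

def simulate(t_end=2.0):
    t0, y0, times, states = 0.0, [0.0, 1.0], [], []
    while t0 < t_end:
        sol = solve_ivp(free_flight, (t0, t_end), y0, events=hits_stop,
                        max_step=1e-3, rtol=1e-8)
        times.append(sol.t); states.append(sol.y)
        if sol.status != 1:            # no more contact events before t_end
            break
        # Event-driven switch: apply an impact law and restart the integrator.
        t0 = sol.t[-1]
        y0 = [sol.y[0, -1], -E_REST * sol.y[1, -1]]
    return np.concatenate(times), np.concatenate(states, axis=1)

if __name__ == "__main__":
    t, y = simulate()
    print(f"simulated {len(t)} steps, final position {y[0, -1]:+.4f} m")
```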

  16. Flight simulation study to determine MLS lateral course width requirements on final approach for general aviation. [runway conditions affecting microwave landing systems

    NASA Technical Reports Server (NTRS)

    Crumrine, R. J.

    1976-01-01

    An investigation of the effects of various lateral course widths and runway lengths for manual CAT I Microwave Landing System instrument approaches was carried out with instrument-rated pilots in a General Aviation simulator. Data are presented on the lateral dispersion at the touchdown zone, and the middle and outer markers, for approaches to 3,000 ft, 8,000 ft (and trial 12,000 ft) runway lengths with full-scale angular lateral course widths of ±1.19°, ±2.35°, and ±3.63°. The distance from touchdown where the localizer deviation went to full scale was also recorded. Pilot acceptance was measured according to the Cooper-Harper rating system.

  17. Parental reflective functioning is associated with tolerance of infant distress but not general distress: evidence for a specific relationship using a simulated baby paradigm.

    PubMed

    Rutherford, Helena J V; Goldberg, Benjamin; Luyten, Patrick; Bridgett, David J; Mayes, Linda C

    2013-12-01

    Parental reflective functioning represents the capacity of a parent to think about their own and their child's mental states and how these mental states may influence behavior. Here we examined whether this capacity as measured by the Parental Reflective Functioning Questionnaire relates to tolerance of infant distress by asking mothers (N = 21) to soothe a life-like baby simulator (BSIM) that was inconsolable, crying for a fixed time period unless the mother chose to stop the interaction. Increasing maternal interest and curiosity in their child's mental states, a key feature of parental reflective functioning, was associated with longer persistence times with the BSIM. Importantly, on a non-parent distress tolerance task, parental reflective functioning was not related to persistence times. These findings suggest that parental reflective functioning may be related to tolerance of infant distress, but not distress tolerance more generally, and thus may reflect specificity to persistence behaviors in parenting contexts.

  18. Mechanisms of Diurnal Precipitation over the United States Great Plains: A Cloud-Resolving Model Simulation

    NASA Technical Reports Server (NTRS)

    Lee, M.-I.; Choi, I.; Tao, W.-K.; Schubert, S. D.; Kang, I.-K.

    2010-01-01

    The mechanisms of summertime diurnal precipitation in the US Great Plains were examined with the two-dimensional (2D) Goddard Cumulus Ensemble (GCE) cloud-resolving model (CRM). The model was constrained by the observed large-scale background state and surface flux derived from the Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Program's Intensive Observing Period (IOP) data at the Southern Great Plains (SGP). The model, when continuously-forced by realistic surface flux and large-scale advection, simulates reasonably well the temporal evolution of the observed rainfall episodes, particularly for the strongly forced precipitation events. However, the model exhibits a deficiency for the weakly forced events driven by diurnal convection. Additional tests were run with the GCE model in order to discriminate between the mechanisms that determine daytime and nighttime convection. In these tests, the model was constrained with the same repeating diurnal variation in the large-scale advection and/or surface flux. The results indicate that it is primarily the surface heat and moisture flux that is responsible for the development of deep convection in the afternoon, whereas the large-scale upward motion and associated moisture advection play an important role in preconditioning nocturnal convection. In the nighttime, high clouds are continuously built up through their interaction and feedback with long-wave radiation, eventually initiating deep convection from the boundary layer. Without these upper-level destabilization processes, the model tends to produce only daytime convection in response to boundary layer heating. This study suggests that the correct simulation of the diurnal variation in precipitation requires that the free-atmospheric destabilization mechanisms resolved in the CRM simulation must be adequately parameterized in current general circulation models (GCMs) many of which are overly sensitive to the parameterized boundary layer heating.

  19. A New Simulation Technique for Study of Collisionless Shocks: Self-Adaptive Simulations

    SciTech Connect

    Karimabadi, H.; Omelchenko, Y.; Driscoll, J.; Krauss-Varban, D.; Fujimoto, R.; Perumalla, K.

    2005-08-01

    The traditional technique for simulating physical systems modeled by partial differential equations is by means of time-stepping methodology where the state of the system is updated at regular discrete time intervals. This method has inherent inefficiencies. In contrast to this methodology, we have developed a new asynchronous type of simulation based on a discrete-event-driven (as opposed to time-driven) approach, where the simulation state is updated on a 'need-to-be-done-only' basis. Here we report on this new technique, show an example of particle acceleration in a fast magnetosonic shockwave, and briefly discuss additional issues that we are addressing concerning algorithm development and parallel execution.
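
    The asynchronous, 'need-to-be-done-only' update strategy contrasted here with time-stepping is the classic discrete-event simulation loop: a priority queue of pending events ordered by timestamp, each of which may schedule further events when processed. A generic, hypothetical sketch of that loop (not the authors' hybrid particle code) is:

```python
import heapq

class EventSimulator:
    """Minimal discrete-event engine: process events in time order,
    letting each handler schedule future events as needed."""

    def __init__(self):
        self.queue = []          # heap of (time, sequence, handler, payload)
        self.now = 0.0
        self._seq = 0            # tie-breaker so the heap never compares handlers

    def schedule(self, delay, handler, payload=None):
        heapq.heappush(self.queue, (self.now + delay, self._seq, handler, payload))
        self._seq += 1

    def run(self, t_end):
        while self.queue and self.queue[0][0] <= t_end:
            self.now, _, handler, payload = heapq.heappop(self.queue)
            handler(self, payload)

def particle_update(sim, particle_id):
    # State is updated only when this particle "needs" it; the next update
    # for the same particle is then scheduled after its own local timestep.
    print(f"t={sim.now:6.3f}  updating particle {particle_id}")
    sim.schedule(delay=0.1 * (particle_id + 1), handler=particle_update,
                 payload=particle_id)

if __name__ == "__main__":
    sim = EventSimulator()
    for pid in range(3):
        sim.schedule(0.0, particle_update, pid)
    sim.run(t_end=0.5)
```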

  20. A generalized crystal-cutting method for modeling arbitrarily oriented crystals in 3D periodic simulation cells with applications to crystal-crystal interfaces

    NASA Astrophysics Data System (ADS)

    Kroonblawd, Matthew P.; Mathew, Nithin; Jiang, Shan; Sewell, Thomas D.

    2016-10-01

    A Generalized Crystal-Cutting Method (GCCM) is developed that automates construction of three-dimensionally periodic simulation cells containing arbitrarily oriented single crystals and thin films, two-dimensionally (2D) infinite crystal-crystal homophase and heterophase interfaces, and nanostructures with intrinsic N-fold interfaces. The GCCM is based on a simple mathematical formalism that facilitates easy definition of constraints on cut crystal geometries. The method preserves the translational symmetry of all Bravais lattices and thus can be applied to any crystal described by such a lattice including complicated, low-symmetry molecular crystals. Implementations are presented with carefully articulated combinations of loop searches and constraints that drastically reduce computational complexity compared to simple loop searches. Orthorhombic representations of monoclinic and triclinic crystals found using the GCCM overcome some limitations in standard distributions of popular molecular dynamics software packages. Stability of grain boundaries in β-HMX was investigated using molecular dynamics and molecular statics simulations with 2D infinite crystal-crystal homophase interfaces created using the GCCM. The order of stabilities for the four grain boundaries studied is predicted to correlate with the relative prominence of particular crystal faces in lab-grown β-HMX crystals. We demonstrate how nanostructures can be constructed through simple constraints applied in the GCCM framework. Example GCCM constructions are shown that are relevant to some current problems in materials science, including shock sensitivity of explosives, layered electronic devices, and pharmaceuticals.

  1. A thermosphere-ionosphere-mesosphere-electrodynamics general circulation model (time-GCM): Equinox solar cycle minimum simulations (30-500 km)

    SciTech Connect

    Roble, R.G.; Ridley, E.C.

    1994-03-15

    A new simulation model of the mesosphere, thermosphere, and ionosphere with coupled electrodynamics has been developed and used to calculate the global circulation, temperature and compositional structure between 30-500 km for equinox, solar cycle minimum, geomagnetic quiet conditions. The model incorporates all of the features of the NCAR thermosphere-ionosphere-electrodynamics general circulation model (TIE-GCM) but the lower boundary has been extended downward from 97 to 30 km (10 mb) and it includes the physical and chemical processes appropriate for the mesosphere and upper stratosphere. The first simulation used Rayleigh friction to represent gravity wave drag in the middle atmosphere and although it was able to close the mesospheric jets it severely damped the diurnal tide. Reduced Rayleigh friction allowed the tide to penetrate to thermospheric heights but did not close the jets. A gravity wave parameterization developed by Fritts and Lu allows both features to exist simultaneously with the structure of tides and mean flow dependent upon the strength of the gravity wave source. The model calculates a changing dynamic structure with the mean flow and diurnal tide dominant in the mesosphere, the in-situ generated semi-diurnal tide dominating the lower thermosphere and an in-situ generated diurnal tide in the upper thermosphere. The results also show considerable interaction between dynamics and composition, especially atomic oxygen between 85 and 120 km. 31 refs., 3 figs.

  2. Delimiting Species Using Single-Locus Data and the Generalized Mixed Yule Coalescent Approach: A Revised Method and Evaluation on Simulated Data Sets

    PubMed Central

    Fujisawa, Tomochika; Barraclough, Timothy G.

    2013-01-01

    DNA barcoding-type studies assemble single-locus data from large samples of individuals and species, and have provided new kinds of data for evolutionary surveys of diversity. An important goal of many such studies is to delimit evolutionarily significant species units, especially in biodiversity surveys from environmental DNA samples. The Generalized Mixed Yule Coalescent (GMYC) method is a likelihood method for delimiting species by fitting within- and between-species branching models to reconstructed gene trees. Although the method has been widely used, it has not previously been described in detail or evaluated fully against simulations of alternative scenarios of true patterns of population variation and divergence between species. Here, we present important reformulations to the GMYC method as originally specified, and demonstrate its robustness to a range of departures from its simplifying assumptions. The main factor affecting the accuracy of delimitation is the mean population size of species relative to divergence times between them. Other departures from the model assumptions, such as varying population sizes among species, alternative scenarios for speciation and extinction, and population growth or subdivision within species, have relatively smaller effects. Our simulations demonstrate that support measures derived from the likelihood function provide a robust indication of when the model performs well and when it leads to inaccurate delimitations. Finally, the so-called single-threshold version of the method outperforms the multiple-threshold version of the method on simulated data: we argue that this might represent a fundamental limit due to the nature of evidence used to delimit species in this approach. Together with other studies comparing its performance relative to other methods, our findings support the robustness of GMYC as a tool for delimiting species when only single-locus information is available. [Clusters; coalescent; DNA; genealogical

  3. A simulation analysis of an extension of one-dimensional speckle correlation method for detection of general in-plane translation.

    PubMed

    Hamarová, Ivana; Smíd, Petr; Horváth, Pavel; Hrabovský, Miroslav

    2014-01-01

    The purpose of the study is to show a proposal of an extension of a one-dimensional speckle correlation method, which is primarily intended for determination of one-dimensional object's translation, for detection of general in-plane object's translation. In that view, a numerical simulation of a displacement of the speckle field as a consequence of general in-plane object's translation is presented. The translation components ax and ay representing the projections of a vector a of the object's displacement onto both x- and y-axes in the object plane (x, y) are evaluated separately by means of the extended one-dimensional speckle correlation method. Moreover, one can perform a distinct optimization of the method by reduction of intensity values representing detected speckle patterns. The theoretical relations between the translation components ax and ay of the object and the displacement of the speckle pattern for selected geometrical arrangement are mentioned and used for the testifying of the proposed method's rightness.
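
    In correlation methods of this kind the displacement is recovered by cross-correlating the speckle pattern recorded before and after the object moves and locating the correlation peak. The sketch below is a generic two-dimensional FFT cross-correlation with integer-pixel accuracy on synthetic data; it is not the authors' one-dimensional formulation.

```python
import numpy as np

def speckle_shift(before: np.ndarray, after: np.ndarray):
    """Estimate the in-plane displacement (dy, dx) in pixels of `after`
    relative to `before` from the peak of their FFT cross-correlation."""
    f1 = np.fft.fft2(before - before.mean())
    f2 = np.fft.fft2(after - after.mean())
    xcorr = np.fft.ifft2(np.conj(f1) * f2).real
    peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    # Wrap indices above N/2 back to negative shifts.
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, xcorr.shape)]
    return tuple(shift)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    img = rng.random((256, 256))                     # synthetic speckle pattern
    dy, dx = 7, -4                                   # known test translation
    moved = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    print("estimated shift (dy, dx):", speckle_shift(img, moved))
```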

  4. A Simulation Analysis of an Extension of One-Dimensional Speckle Correlation Method for Detection of General In-Plane Translation

    PubMed Central

    Hrabovský, Miroslav

    2014-01-01

    The purpose of the study is to show a proposal of an extension of a one-dimensional speckle correlation method, which is primarily intended for determination of one-dimensional object's translation, for detection of general in-plane object's translation. In that view, a numerical simulation of a displacement of the speckle field as a consequence of general in-plane object's translation is presented. The translation components ax and ay representing the projections of a vector a of the object's displacement onto both x- and y-axes in the object plane (x, y) are evaluated separately by means of the extended one-dimensional speckle correlation method. Moreover, one can perform a distinct optimization of the method by reduction of intensity values representing detected speckle patterns. The theoretical relations between the translation components ax and ay of the object and the displacement of the speckle pattern for selected geometrical arrangement are mentioned and used for the testifying of the proposed method's rightness. PMID:24592180

  5. SU-E-T-254: Optimization of GATE and PHITS Monte Carlo Code Parameters for Uniform Scanning Proton Beam Based On Simulation with FLUKA General-Purpose Code

    SciTech Connect

    Kurosu, K; Takashina, M; Koizumi, M; Das, I; Moskvin, V

    2014-06-01

    Purpose: Monte Carlo codes are becoming important tools for proton beam dosimetry. However, the relationships between the customizing parameters of the GATE and PHITS codes and the resulting percentage depth dose (PDD) have not been reported; here they are studied for PDD and proton range and compared against the FLUKA code and experimental data. Methods: The beam delivery system of the Indiana University Health Proton Therapy Center was modeled for the uniform scanning beam in FLUKA and transferred identically into GATE and PHITS. This computational model was built from the blueprint and validated with the commissioning data. The three parameters evaluated are the maximum step size, the cut-off energy, and the physics and transport model. The dependence of the PDDs on the customizing parameters was compared with the published results of previous studies. Results: The optimal parameters for the simulation of the whole beam delivery system were defined by referring to the calculation results obtained with each parameter. Although the PDDs from FLUKA and the experimental data show good agreement, those of GATE and PHITS obtained with our optimal parameters show a minor discrepancy. The measured proton range R90 was 269.37 mm, compared to calculated ranges of 269.63 mm, 268.96 mm, and 270.85 mm with FLUKA, GATE and PHITS, respectively. Conclusion: We evaluated the dependence of the PDDs obtained with the GATE and PHITS general-purpose Monte Carlo codes on the customizing parameters by using the whole computational model of the treatment nozzle. The optimal parameters for the simulation were then defined by referring to the calculation results. The physics model, particle transport mechanics and the different geometry-based descriptions need accurate customization in the three simulation codes to agree with experimental data for artifact-free Monte Carlo simulation. This study was supported by Grants-in-Aid for Cancer Research (H22-3rd Term Cancer Control-General-043) from the Ministry of Health
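
    The proton range R90 quoted here is the depth on the distal falloff at which the dose has dropped to 90% of its maximum. Given a measured or simulated PDD curve it can be extracted by interpolating the distal edge, as in the hypothetical sketch below (synthetic depth-dose data, not output from FLUKA, GATE, or PHITS).

```python
import numpy as np

def r90(depth_mm: np.ndarray, dose: np.ndarray) -> float:
    """Depth at which the dose falls to 90% of maximum on the distal side
    of the Bragg peak, by linear interpolation between bracketing points."""
    dose = np.asarray(dose, dtype=float) / np.max(dose)
    i_peak = int(np.argmax(dose))
    distal_dose = dose[i_peak:]
    distal_depth = np.asarray(depth_mm, dtype=float)[i_peak:]
    j = int(np.argmax(distal_dose < 0.9))        # first point below 90%
    d0, d1 = distal_dose[j - 1], distal_dose[j]
    z0, z1 = distal_depth[j - 1], distal_depth[j]
    return z0 + (0.9 - d0) * (z1 - z0) / (d1 - d0)

if __name__ == "__main__":
    depth = np.linspace(0, 300, 601)             # mm
    # Very rough synthetic Bragg-like curve, for illustration only.
    dose = 0.3 + 0.7 * np.exp(-((depth - 265.0) / 6.0) ** 2)
    dose[depth > 271.0] *= np.exp(-(depth[depth > 271.0] - 271.0) / 2.0)
    print(f"R90 = {r90(depth, dose):.2f} mm")
```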

  6. Highly-Autonomous Event-Driven Spacecraft Control

    NASA Technical Reports Server (NTRS)

    Aljabri, A. S.; Kia, T.; Lai, J. Y.

    1994-01-01

    Future JPL missions will continue to be scientifically and technically more ambitious, and will demand more autonomy to accomplish complex tasks in uncertain environments and in close proximity to extraterrestrial surfaces. A prime example is small body rendezvous and sample return.

  7. Schedule or Event Driven? How Do I Know?

    DTIC Science & Technology

    2014-04-01

    inviolable: “Think of it like a NASA planetary probe that has to rendezvous with the planet in 2017; if you don’t make that date you have to wait another...misunderstandings between the government and contractor concerning exactly what the added content entails. In such cases, the schedule consequences

  8. General mechanism and dynamics of the solar wind interaction with lunar magnetic anomalies from 3-D particle-in-cell simulations

    NASA Astrophysics Data System (ADS)

    Deca, Jan; Divin, Andrey; Lembège, Bertrand; Horányi, Mihály; Markidis, Stefano; Lapenta, Giovanni

    2015-08-01

    We present a general model of the solar wind interaction with a dipolar lunar crustal magnetic anomaly (LMA) using three-dimensional full-kinetic and electromagnetic simulations. We confirm that LMAs may indeed be strong enough to stand off the solar wind from directly impacting the lunar surface, forming a so-called "minimagnetosphere," as suggested by spacecraft observations and theory. We show that the LMA configuration is driven by electron motion because its scale size is small with respect to the gyroradius of the solar wind ions. We identify a population of back-streaming ions, the deflection of magnetized electrons via the E × B drift motion, and the subsequent formation of a halo region of elevated density around the dipole source. Finally, it is shown that the presence and efficiency of the processes are heavily impacted by the upstream plasma conditions and, in turn, influence the overall structure and evolution of the LMA system. Understanding the detailed physics of the solar wind interaction with LMAs, including magnetic shielding, particle dynamics and surface charging is vital to evaluate its implications for lunar exploration.

  9. Implementation of Sub-Cooling of Cryogenic Propellants by Injection of Non-condensing Gas to the Generalized Fluid Systems Simulation Program (GFSSP)

    NASA Technical Reports Server (NTRS)

    Huggett, Daniel J.; Majumdar, Alok

    2013-01-01

    Cryogenic propellants are readily heated when used. This poses a problem for rocket engine efficiency and effective boot-strapping of the engine, as seen in the "hot" LOX (Liquid Oxygen) problem on the S-1 stage of the Saturn vehicle. In order to remedy this issue, cryogenic fluids were found to be sub-cooled by injection of a warm non-condensing gas. Experimental results show that the mechanism behind the sub-cooling is evaporative cooling. It has been shown that a sub-cooled temperature difference of approximately 13 deg F below saturation temperature can be achieved [1]. The capability to model sub-cooling of cryogenic propellants by a non-condensing gas is not readily available in the Generalized Fluid System Simulation Program (GFSSP) [2]. GFSSP is a thermal-fluid program used to analyze a wide variety of systems that are directly impacted by thermodynamics and fluid mechanics. In order to model this phenomenon, additional capabilities had to be added to GFSSP in the form of a FORTRAN-coded subroutine to calculate the temperature of the sub-cooled fluid. Once this was accomplished, the subroutine was implemented in a GFSSP model that was created to replicate an experiment that was conducted to validate the GFSSP results.

  10. Monte Carlo simulation of the kinetics of decomposition and the formation of precipitates at grain boundaries of the general type in dilute BCC Fe-Cu alloys

    NASA Astrophysics Data System (ADS)

    Kar'kin, I. N.; Kar'kina, L. E.; Korzhavyi, P. A.; Gornostyrev, Yu. N.

    2017-01-01

    The kinetics of decomposition of a polycrystalline Fe-Cu alloy and the formation of precipitates at the grain boundaries of the material have been investigated theoretically using the atomistic simulation on different time scales by (i) the Monte Carlo method implementing the diffusion redistribution of Cu atoms and (ii) the molecular dynamics method providing the atomic relaxation of the crystal lattice. It has been shown that, for a small grain size (D ≈ 10 nm), the decomposition in the bulk of the grain is suppressed, whereas the copper-enriched precipitates coherently bound to the matrix are predominantly formed at the grain boundaries of the material. The size and composition of the precipitates depend significantly on the type of grain boundaries: small precipitates (1.2-1.4 nm) have the average composition of Fe-40 at % Cu and arise in the vicinity of low-angle grain boundaries, while larger precipitates that have sizes of up to 4 nm and the average composition of Fe-60 at % Cu are formed near grain boundaries of the general type and triple junctions.
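
    The diffusional redistribution of Cu in Monte Carlo studies of this kind is often approximated by an on-lattice model in which solute atoms exchange with neighbouring matrix sites under a Metropolis rule. The sketch below is a deliberately simplified, hypothetical 2D analogue (square lattice, a single nearest-neighbour mixing energy, illustrative values for concentration and temperature); it is not the vacancy-mediated BCC model of the paper, but it reproduces the basic clustering tendency.

```python
import numpy as np

rng = np.random.default_rng(11)

L, X_CU = 64, 0.05   # lattice size and Cu concentration (illustrative)
E_MIX = 0.1          # energy penalty per unlike (Fe-Cu) bond (illustrative units)
KT = 0.05            # temperature parameter (same units as E_MIX)

# 1 = Cu, 0 = Fe on a periodic square lattice (2D toy analogue of BCC Fe-Cu).
lattice = (rng.random((L, L)) < X_CU).astype(int)
MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def unlike_bonds(i, j):
    """Number of unlike (Fe-Cu) nearest-neighbour bonds around site (i, j)."""
    s = lattice[i, j]
    return sum(lattice[(i + di) % L, (j + dj) % L] != s for di, dj in MOVES)

def cu_cu_neighbours():
    """Average number of Cu nearest neighbours per Cu atom (clustering measure)."""
    nbrs = sum(np.roll(lattice, 1, a) + np.roll(lattice, -1, a) for a in (0, 1))
    return nbrs[lattice == 1].mean()

def metropolis_swap():
    i, j = rng.integers(L, size=2)
    di, dj = MOVES[rng.integers(4)]
    k, l = (i + di) % L, (j + dj) % L
    if lattice[i, j] == lattice[k, l]:
        return
    e_old = E_MIX * (unlike_bonds(i, j) + unlike_bonds(k, l))
    lattice[i, j], lattice[k, l] = lattice[k, l], lattice[i, j]
    e_new = E_MIX * (unlike_bonds(i, j) + unlike_bonds(k, l))
    # Metropolis rule: keep energy-lowering swaps, others with Boltzmann weight.
    if e_new > e_old and rng.random() >= np.exp(-(e_new - e_old) / KT):
        lattice[i, j], lattice[k, l] = lattice[k, l], lattice[i, j]   # reject

if __name__ == "__main__":
    print(f"Cu-Cu neighbours per Cu atom before: {cu_cu_neighbours():.2f}")
    for _ in range(200_000):
        metropolis_swap()
    print(f"Cu-Cu neighbours per Cu atom after : {cu_cu_neighbours():.2f}")
```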

  11. Subsurface Transport Over Reactive Multiphases (STORM): A general, coupled, nonisothermal multiphase flow, reactive transport, and porous medium alteration simulator, Version 2 user's guide

    SciTech Connect

    DH Bacon; MD White; BP McGrail

    2000-03-07

    The Hanford Site, in southeastern Washington State, has been used extensively to produce nuclear materials for the US strategic defense arsenal by the Department of Energy (DOE) and its predecessors, the US Atomic Energy Commission and the US Energy Research and Development Administration. A large inventory of radioactive and mixed waste has accumulated in 177 buried single- and double-shell tanks. Liquid waste recovered from the tanks will be pretreated to separate the low-activity fraction from the high-level and transuranic wastes. Vitrification is the leading option for immobilization of these wastes, expected to produce approximately 550,000 metric tons of Low Activity Waste (LAW) glass. This total tonnage, based on a nominal Na2O loading of 20% by weight, is destined for disposal in a near-surface facility. Before disposal of the immobilized waste can proceed, the DOE must approve a performance assessment, a document that describes the impacts, if any, of the disposal facility on public health and environmental resources. Studies have shown that release rates of radionuclides from the glass waste form by reaction with water determine the impacts of the disposal action more than any other independent parameter. This report describes the latest accomplishments in the development of a computational tool, Subsurface Transport Over Reactive Multiphases (STORM), Version 2, a general, coupled non-isothermal multiphase flow and reactive transport simulator. The underlying mathematics in STORM describe the rate of change of the solute concentrations of pore water in a variably saturated, non-isothermal porous medium, and the alteration of waste forms, packaging materials, backfill, and host rocks.

  12. Using the Flow-3D General Moving Object Model to Simulate Coupled Liquid Slosh - Container Dynamics on the SPHERES Slosh Experiment: Aboard the International Space Station

    NASA Technical Reports Server (NTRS)

    Schulman, Richard; Kirk, Daniel; Marsell, Brandon; Roth, Jacob; Schallhorn, Paul

    2013-01-01

    The SPHERES Slosh Experiment (SSE) is a free floating experimental platform developed for the acquisition of long duration liquid slosh data aboard the International Space Station (ISS). The data sets collected will be used to benchmark numerical models to aid in the design of rocket and spacecraft propulsion systems. Utilizing two SPHERES Satellites, the experiment will be moved through different maneuvers designed to induce liquid slosh in the experiment's internal tank. The SSE has a total of twenty-four thrusters to move the experiment. In order to design slosh generating maneuvers, a parametric study with three maneuver types was conducted using the General Moving Object (GMO) model in Flow-3D. The three types of maneuvers are a translation maneuver, a rotation maneuver and a combined rotation-translation maneuver. The effectiveness of each maneuver to generate slosh is determined by the deviation of the experiment's trajectory as compared to a dry mass trajectory. To fully capture the effect of liquid re-distribution on experiment trajectory, each thruster is modeled as an independent force point in the Flow-3D simulation. This is accomplished by modifying the total number of independent forces in the GMO model from the standard five to twenty-four. Results demonstrate that the most effective slosh-generating maneuvers for all motions occur when SSE thrusters are producing the highest changes in SSE acceleration. The results also demonstrate that several centimeters of trajectory deviation between the dry and slosh cases occur during the maneuvers; while these deviations seem small, they are measurable by SSE instrumentation.

  13. Statistical downscaling of general-circulation-model- simulated average monthly air temperature to the beginning of flowering of the dandelion (Taraxacum officinale) in Slovenia

    NASA Astrophysics Data System (ADS)

    Bergant, Klemen; Kajfež-Bogataj, Lučka; Črepinšek, Zalika

    2002-02-01

    Phenological observations are a valuable source of information for investigating the relationship between climate variation and plant development. Potential climate change in the future will shift the occurrence of phenological phases. Information about future climate conditions is needed in order to estimate this shift. General circulation models (GCM) provide the best information about future climate change. They are able to simulate reliably the most important mean features on a large scale, but they fail on a regional scale because of their low spatial resolution. A common approach to bridging the scale gap is statistical downscaling, which was used to relate the beginning of flowering of Taraxacum officinale in Slovenia with the monthly mean near-surface air temperature for January, February and March in Central Europe. Statistical models were developed and tested with NCAR/NCEP Reanalysis predictor data and EARS predictand data for the period 1960-1999. Prior to developing statistical models, empirical orthogonal function (EOF) analysis was employed on the predictor data. Multiple linear regression was used to relate the beginning of flowering with the expansion coefficients of the first three EOFs for the January, February and March air temperatures, and a strong correlation was found between them. The developed statistical models were employed on the results of two GCM (HadCM3 and ECHAM4/OPYC3) to estimate the potential shifts in the beginning of flowering for the periods 1990-2019 and 2020-2049 in comparison with the period 1960-1989. The HadCM3 model predicts, on average, a 4-day earlier occurrence of flowering and ECHAM4/OPYC3 a 5-day earlier occurrence in the period 1990-2019. The analogous results for the period 2020-2049 are a 10- and 11-day earlier occurrence.
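
    A hedged sketch of the EOF-plus-regression chain described in this record: project gridded temperature anomalies onto their leading EOFs via a singular value decomposition, then regress the flowering date on the first three expansion coefficients. The synthetic data and grid size below are placeholders; the study used NCEP/NCAR Reanalysis predictors and Slovenian phenological records.

```python
# Hedged sketch of EOF analysis followed by multiple linear regression, as used
# for statistical downscaling above. All data here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
n_years, n_grid = 40, 200                     # 40 seasons, 200 grid points
T = rng.normal(size=(n_years, n_grid))        # placeholder temperature anomalies
flowering_doy = rng.normal(110, 7, n_years)   # placeholder day-of-year predictand

# EOF analysis = SVD of the anomaly matrix; rows of Vt are spatial EOFs,
# U*S are the expansion coefficients (principal components).
Tanom = T - T.mean(axis=0)
U, S, Vt = np.linalg.svd(Tanom, full_matrices=False)
pcs = U[:, :3] * S[:3]                        # first three expansion coefficients

# Multiple linear regression of the flowering date on the three PCs.
X = np.column_stack([np.ones(n_years), pcs])
coef, *_ = np.linalg.lstsq(X, flowering_doy, rcond=None)
pred = X @ coef
print("regression coefficients:", np.round(coef, 2))
print("explained variance R^2:",
      round(1 - np.var(flowering_doy - pred) / np.var(flowering_doy), 3))
```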

  14. Global distribution of gravity wave fields and their seasonal dependence in the Martian atmosphere simulated in a high-resolution general circulation model

    NASA Astrophysics Data System (ADS)

    Kuroda, Takeshi; Medvedev, Alexander; Yiğit, Erdal; Hartogh, Paul

    2016-10-01

    Gravity waves (GWs) are small-scale atmospheric waves generated by various geophysical processes, such as topography, convection, and dynamical instability. On Mars, several observations and simulations have revealed that GWs strongly affect temperature and wind fields in the middle and upper atmosphere. We have worked with a high-resolution Martian general circulation model (MGCM), with a spectral resolution of T106 (horizontal grid interval of ~67 km), for investigations of the generation and propagation of GWs. We analyzed three wavelength ranges: (1) horizontal total wavenumber s=21-30 (wavelength λ~700-1000 km), (2) s=31-60 (λ~350-700 km), and (3) s=61-106 (λ~200-350 km). Our results show that shorter-scale harmonics progressively dominate with height during both equinox and solstice. We have detected two main sources of GWs: mountainous regions and the meandering winter polar jet. In both seasons GW energy in the troposphere due to the shorter-scale harmonics is concentrated in the low latitudes, in good agreement with observations. Orographically generated GWs contribute significantly to the total energy of disturbances, and strongly decay with height. Thus, the non-orographic GWs of tropospheric origin dominate near the mesopause. The vertical fluxes of wave horizontal momentum are directed mainly against the larger-scale wind. Mean magnitudes of the drag in the middle atmosphere are tens of m s-1 sol-1, while instantaneously they can reach thousands of m s-1 sol-1, which results in an attenuation of the wind jets in the middle atmosphere and a tendency toward their reversal.

  15. General regression neural network and Monte Carlo simulation model for survival and growth of Salmonella on raw chicken skin as a function of serotype, temperature and time for use in risk assessment

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A general regression neural network and Monte Carlo simulation model for predicting survival and growth of Salmonella on raw chicken skin as a function of serotype (Typhimurium, Kentucky, Hadar), temperature (5 to 50 °C) and time (0 to 8 h) was developed. Poultry isolates of Salmonella with natural r...

  16. PENGEOM-A general-purpose geometry package for Monte Carlo simulation of radiation transport in material systems defined by quadric surfaces

    NASA Astrophysics Data System (ADS)

    Almansa, Julio; Salvat-Pujol, Francesc; Díaz-Londoño, Gloria; Carnicer, Artur; Lallena, Antonio M.; Salvat, Francesc

    2016-02-01

    The Fortran subroutine package PENGEOM provides a complete set of tools to handle quadric geometries in Monte Carlo simulations of radiation transport. The material structure where radiation propagates is assumed to consist of homogeneous bodies limited by quadric surfaces. The PENGEOM subroutines (a subset of the PENELOPE code) track particles through the material structure, independently of the details of the physics models adopted to describe the interactions. Although these subroutines are designed for detailed simulations of photon and electron transport, where all individual interactions are simulated sequentially, they can also be used in mixed (class II) schemes for simulating the transport of high-energy charged particles, where the effect of soft interactions is described by the random-hinge method. The definition of the geometry and the details of the tracking algorithm are tailored to optimize simulation speed. The use of fuzzy quadric surfaces minimizes the impact of round-off errors. The provided software includes a Java graphical user interface for editing and debugging the geometry definition file and for visualizing the material structure. Images of the structure are generated by using the tracking subroutines and, hence, they describe the geometry actually passed to the simulation code.
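
    The geometric kernel behind quadric-based particle tracking is the ray-quadric intersection: along a straight flight path the implicit quadric equation reduces to a quadratic whose smallest positive root is the distance to the next surface crossing. The sketch below illustrates that reduction only; it is not PENGEOM's implementation, which adds the fuzzy-quadric and body bookkeeping described above.

```python
# Minimal sketch of ray-quadric intersection, the geometric kernel behind
# quadric-based particle tracking (not PENGEOM's actual implementation).
# A quadric is F(r) = r.A.r + b.r + c = 0; along a ray r(t) = p + t*d this becomes
# alpha*t^2 + beta*t + gamma = 0, whose smallest positive root is the flight distance.
import numpy as np

def distance_to_quadric(p, d, A, b, c, eps=1e-12):
    p, d = np.asarray(p, float), np.asarray(d, float)
    alpha = d @ A @ d
    beta = 2.0 * (p @ A @ d) + b @ d
    gamma = p @ A @ p + b @ p + c
    if abs(alpha) < eps:                       # effectively linear in t (e.g. planes)
        if abs(beta) < eps:
            return np.inf
        t = -gamma / beta
        return t if t > eps else np.inf
    disc = beta * beta - 4.0 * alpha * gamma
    if disc < 0.0:
        return np.inf                          # no real intersection
    sq = np.sqrt(disc)
    roots = sorted(((-beta - sq) / (2 * alpha), (-beta + sq) / (2 * alpha)))
    for t in roots:
        if t > eps:
            return t
    return np.inf

# Example: unit sphere x^2 + y^2 + z^2 - 1 = 0, ray from the origin along +x.
A = np.eye(3); b = np.zeros(3); c = -1.0
print(distance_to_quadric([0, 0, 0], [1, 0, 0], A, b, c))   # -> 1.0
```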

  17. Why an SO2 emission tax is an unpopular policy instrument: Simulation results from a general equilibrium model of the Norwegian economy

    SciTech Connect

    Hanson, D.A.; Alfsen, K.H.

    1986-01-01

    Norway, together with some twenty other countries, signed the Helsinki treaty in July 1985 for the purpose of reducing SO2 emissions. Hence, it is interesting to analyze the emission reductions that could be achieved using a tax on SO2 emissions, as well as the indirect impacts on the economy. Simulations of the economic impact of the tax (which effectively increases the cost of using energy) were made using the Multi-Sectoral Growth (MSG) model. Results of the simulations indicated a larger than expected reduction in economic output.

  18. Understanding the past to interpret the future: comparison of simulated groundwater recharge in the upper Colorado River basin (USA) using observed and general-circulation-model historical climate data

    NASA Astrophysics Data System (ADS)

    Tillman, Fred D.; Gangopadhyay, Subhrendu; Pruitt, Tom

    2017-03-01

    In evaluating potential impacts of climate change on water resources, water managers seek to understand how future conditions may differ from the recent past. Studies of climate impacts on groundwater recharge often compare simulated recharge from future and historical time periods on an average monthly or overall average annual basis, or compare average recharge from future decades to that from a single recent decade. Baseline historical recharge estimates, which are compared with future conditions, are often from simulations using observed historical climate data. Comparison of average monthly results, average annual results, or even averaging over selected historical decades, may mask the true variability in historical results and lead to misinterpretation of future conditions. Comparison of future recharge results simulated using general circulation model (GCM) climate data to recharge results simulated using actual historical climate data may also result in an incomplete understanding of the likelihood of future changes. In this study, groundwater recharge is estimated in the upper Colorado River basin, USA, using a distributed-parameter soil-water balance groundwater recharge model for the period 1951-2010. Recharge simulations are performed using precipitation, maximum temperature, and minimum temperature data from observed climate data and from 97 CMIP5 (Coupled Model Intercomparison Project, phase 5) projections. Results indicate that average monthly and average annual simulated recharge are similar using observed and GCM climate data. However, 10-year moving-average recharge results show substantial differences between observed and simulated climate data, particularly during the period 1970-2000, with much greater variability seen for results using observed climate data.

  19. Optimization of energy usage in textile finishing operations. Part I. The simulation of batch dyehouse activities with a general purpose computer model

    SciTech Connect

    Beard, J.N. Jr.; Rice, W.T. Jr.

    1980-01-01

    A project to develop a mathematical model capable of simulating the activities in a typical batch dyeing process in the textile industry is described. The model could be used to study the effects of changes in dye-house operations, and to determine effective guidelines for optimal dyehouse performance. The computer model is of a hypothetical dyehouse. The appendices contain a listing of the computer program, sample computer inputs and outputs, and instructions for using the model. (MCW)

  20. The RD53 collaboration's SystemVerilog-UVM simulation framework and its general applicability to design of advanced pixel readout chips

    NASA Astrophysics Data System (ADS)

    Marconi, S.; Conti, E.; Placidi, P.; Christiansen, J.; Hemperek, T.

    2014-10-01

    The foreseen Phase 2 pixel upgrades at the LHC have very challenging requirements for the design of hybrid pixel readout chips. A versatile pixel simulation platform is an essential development tool for the design, verification and optimization of both the system architecture and the pixel chip building blocks (Intellectual Properties, IPs). This work is focused on the implemented simulation and verification environment named VEPIX53, built using the SystemVerilog language and the Universal Verification Methodology (UVM) class library in the framework of the RD53 Collaboration. The environment supports pixel chips at different levels of description: its reusable components feature the generation of different classes of parameterized input hits to the pixel matrix, monitoring of pixel chip inputs and outputs, conformity checks between predicted and actual outputs, and collection of statistics on system performance. The environment has been tested by performing a study of shared architectures of the trigger latency buffering section of pixel chips. A fully shared architecture and a distributed one have been described at behavioral level and simulated; the resulting memory occupancy statistics and hit loss rates have subsequently been compared.

  1. The northern wintertime divergence extrema at 200 hPa and MSLP cyclones as simulated in the AMIP integration by the ECMWF general circulation model

    SciTech Connect

    Boyle, J.S. )

    1994-01-01

    Divergence and convergence centers at 200 hPa and mean sea level pressure (MSLP) cyclones are located every 6 hours for a 10-year GCM simulation for the boreal winters from 1980 to 1988. The simulation used the observed monthly mean SST for the decade. Analysis of the frequency, locations, and strengths of these centers and cyclones gives insight into the dynamical response of the model to the varying SST. It is found that (1) the model produces reasonable climatologies of upper-level divergence and MSLP cyclones. (2) The model distribution of anomalies of divergence/convergence centers and MSLP cyclones is consistent with available observations for the 1982-83 and 1986-87 El Niño events. (3) The tropical Indian Ocean is the region of greatest divergence activity and interannual variability in the model. (4) The variability of the divergence centers is greater than that of the convergence centers. (5) Strong divergence centers are chiefly oceanic events in the midlatitudes but are more land based in the tropics, except in the Indian Ocean. (6) Locations of divergence/convergence centers can be a useful tool for the intercomparison of global atmospheric simulations.

  2. General Anesthesia

    MedlinePlus

    Under general anesthesia, you are completely unconscious and unable to feel pain during medical procedures. General anesthesia usually uses a combination of intravenous drugs and ...

  3. A three-dimensional chemistry/general circulation model simulation of anthropogenically derived ozone in the troposphere and its radiative climate forcing

    NASA Astrophysics Data System (ADS)

    Roelofs, Geert-Jan; Lelieveld, Jos; van Dorland, Rob

    1997-10-01

    We present results from the tropospheric chemistry/climate European Center Hamburg Model by comparing two simulations that consider a preindustrial and a contemporary emission scenario. Photochemical O3 production from anthropogenically emitted precursors contributes about 30% to the present-day tropospheric O3 content, which is roughly equal to the natural photochemical production. Transports of stratospheric O3 into the troposphere contribute about 40%. As a result of anthropogenic emissions, the O3 maximum over remote northern hemisphere (NH) areas has shifted from winter to spring, when photochemical production of O3 is relatively efficient. Over NH continents the preindustrial seasonal variability is relatively weak whereas a distinct surface O3 summer maximum appears in the contemporary simulation. In the (sub)tropical southern hemisphere (SH), anthropogenic biomass burning emissions cause an increase of O3 mixing ratios in the dry season (September-November). We calculate a relative increase in O3 mixing ratios due to anthropogenic emissions of about 30% in the pristine SH middle and high latitudes to about 100% in the polluted NH boundary layer. The model simulations suggest that the absolute increase of tropospheric O3 maximizes in the middle troposphere. Through convection, upper tropospheric O3 mixing ratios are significantly affected in the tropical regions and, during summer, in the middle and high NH latitudes. Under these conditions the radiative forcing of climate by increasing O3 is relatively large. We calculate a global and annual average radiative forcing by tropospheric O3 perturbations of 0.42 W m-2, i.e., 0.51 W m-2 in the NH and 0.33 W m-2 in the SH.

  4. Optimization of GATE and PHITS Monte Carlo code parameters for spot scanning proton beam based on simulation with FLUKA general-purpose code

    NASA Astrophysics Data System (ADS)

    Kurosu, Keita; Das, Indra J.; Moskvin, Vadim P.

    2016-01-01

    Spot scanning, owing to its superior dose-shaping capability, provides unsurpassed dose conformity, in particular for complex targets. However, the robustness of the delivered dose distribution and prescription has to be verified. Monte Carlo (MC) simulation has the potential to generate significant advantages for high-precision particle therapy, especially for media containing inhomogeneities. However, the choice of computational parameters in the MC simulation codes GATE, PHITS and FLUKA, as established for uniform scanning proton beams, needs to be re-evaluated for spot scanning. This means that the relationship between the input parameters and the calculation results should be carefully scrutinized. The objective of this study was, therefore, to determine the optimal parameters for the spot scanning proton beam for both the GATE and PHITS codes by using data from FLUKA simulation as a reference. The proton beam scanning system of the Indiana University Health Proton Therapy Center was modeled in FLUKA, and the geometry was subsequently and identically transferred to GATE and PHITS. Although the beam transport is managed by the spot scanning system, the spot location is always set at the center of a water phantom of 600 × 600 × 300 mm3, which is placed after the treatment nozzle. The percentage depth dose (PDD) is computed along the central axis using 0.5 × 0.5 × 0.5 mm3 voxels in the water phantom. The PDDs and the proton ranges obtained with several computational parameters are then compared to those of FLUKA, and optimal parameters are determined from the accuracy of the proton range, suppressed dose deviation, and computational time minimization. Our results indicate that the optimized parameters are different from those for uniform scanning, suggesting that a gold standard for setting computational parameters for any proton therapy application cannot be determined consistently, since the impact of the parameter settings depends on the proton irradiation technique. We
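
    A hedged sketch of the kind of comparison metric such a study relies on: extract a distal range from a percentage depth-dose curve and a mean dose deviation against the reference curve (FLUKA in the paper). The synthetic Bragg-like curves and the 80%-distal-dose range convention used below are assumptions for illustration, not the paper's exact definitions.

```python
# Sketch of PDD comparison metrics: distal range and mean dose deviation.
# The Bragg-like curves are crude synthetic stand-ins, not Monte Carlo output.
import numpy as np

depth = np.linspace(0.0, 300.0, 601)                       # mm, 0.5 mm steps

def bragg_like(r0, width):
    """Crude synthetic Bragg-like curve peaking near depth r0 (illustration only)."""
    pdd = 20.0 + 80.0 * np.exp(-0.5 * ((depth - r0) / width) ** 2)
    pdd[depth > r0 + 4.0 * width] = 0.0
    return 100.0 * pdd / pdd.max()

def distal_r80(pdd):
    """Depth at which the dose falls to 80% distal to the peak (linear interpolation)."""
    i_peak = int(np.argmax(pdd))
    distal = pdd[i_peak:]
    j = int(np.argmax(distal <= 80.0))                     # first bin at or below 80%
    x0, x1 = depth[i_peak + j - 1], depth[i_peak + j]
    y0, y1 = distal[j - 1], distal[j]
    return x0 + (80.0 - y0) * (x1 - x0) / (y1 - y0)

reference = bragg_like(250.0, 3.0)                         # stand-in for the reference code
candidate = bragg_like(250.6, 3.2)                         # stand-in for a candidate setting
print("range difference (mm):", round(distal_r80(candidate) - distal_r80(reference), 2))
print("mean PDD deviation (%):", round(float(np.mean(np.abs(candidate - reference))), 2))
```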

  5. Conserving the linear momentum in stochastic dynamics: Dissipative particle dynamics as a general strategy to achieve local thermostatization in molecular dynamics simulations.

    PubMed

    Passler, Peter P; Hofer, Thomas S

    2017-02-15

    Stochastic dynamics is a widely employed strategy to achieve local thermostatization in molecular dynamics simulation studies; however, it suffers from an inherent violation of momentum conservation. Although this shortcoming has little impact on structural and short-time dynamic properties, it can be shown that dynamics in the long-time limit such as diffusion is strongly dependent on the respective thermostat setting. Application of the methodically similar dissipative particle dynamics (DPD) provides a simple, effective strategy to ensure the advantages of local, stochastic thermostatization while at the same time the linear momentum of the system remains conserved. In this work, the key parameters for employing the DPD thermostat in the framework of periodic boundary conditions are investigated, in particular the dependence of the system properties on the size of the DPD region as well as the treatment of forces near the cutoff. Structural and dynamical data for light and heavy water as well as a Lennard-Jones fluid have been compared to simulations executed via stochastic dynamics as well as via use of the widely employed Nosé-Hoover chain and Berendsen thermostats. It is demonstrated that a small size of the DPD region is sufficient to achieve local thermalization, while at the same time artifacts in the self-diffusion characteristic for stochastic dynamics are eliminated. © 2016 Wiley Periodicals, Inc.
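
    A minimal sketch of the pairwise dissipative-plus-random force that makes a DPD thermostat momentum-conserving: the force acts along the interparticle axis and is applied with opposite signs to the two particles, and the random amplitude is tied to the friction coefficient via the fluctuation-dissipation relation. Parameter values below are illustrative assumptions, not those of the study.

```python
# Minimal sketch of the dissipative + random pair force used by a DPD thermostat.
# Because the force acts along the interparticle axis and is applied with opposite
# signs to the pair, linear momentum is conserved exactly. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)

def dpd_pair_force(ri, rj, vi, vj, gamma=4.5, kT=1.0, rc=1.0, dt=0.01):
    """Return the DPD thermostat force on particle i (particle j gets the negative)."""
    rij = ri - rj
    r = np.linalg.norm(rij)
    if r >= rc or r == 0.0:
        return np.zeros(3)
    e = rij / r
    w = 1.0 - r / rc                    # a common weight-function choice, w_D = w_R**2
    sigma = np.sqrt(2.0 * gamma * kT)   # fluctuation-dissipation relation
    theta = rng.normal()                # one random number shared by the pair
    f_diss = -gamma * w**2 * np.dot(e, vi - vj) * e
    f_rand = sigma * w * theta * e / np.sqrt(dt)
    return f_diss + f_rand

fi = dpd_pair_force(np.zeros(3), np.array([0.5, 0.0, 0.0]),
                    np.zeros(3), np.array([0.1, 0.0, 0.0]))
print("force on i:", fi, " force on j:", -fi)   # equal and opposite by construction
```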

  6. Simulation study of the effect of golden-angle KWIC with generalized kinetic model analysis on diagnostic accuracy for lesion discrimination

    PubMed Central

    Freed, Melanie; Kim, Sungheon G.

    2014-01-01

    Purpose: To quantitatively evaluate temporal blurring of dynamic contrast-enhanced MRI data generated using a k-space weighted image contrast (KWIC) image reconstruction technique with golden-angle view-ordering. Methods: K-space data were simulated using golden-angle view-ordering and reconstructed using a KWIC algorithm with a Fibonacci number of views enforced for each annulus in k-space. Temporal blurring was evaluated by comparing pharmacokinetic model parameters estimated from the simulated data with the true values. Diagnostic accuracy was quantified using receiver operating characteristic (ROC) curves and the area under the ROC curve (AUC). Results: Estimation errors of pharmacokinetic model parameters were dependent on the true curve type and the lesion size. For 10 mm benign and malignant lesions, estimated AUC values using the true and estimated AIFs were consistent with the true AUC value. For 5 mm benign and 20 mm malignant lesions, estimated AUC values using the true and estimated AIFs were 0.906±0.020 and 0.905±0.021, respectively, as compared with the true AUC value of 0.896. Conclusions: Although the investigated reconstruction algorithm does impose errors in pharmacokinetic model parameter estimation, they are not expected to significantly impact clinical studies of diagnostic accuracy. PMID:25267703
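
    The golden-angle property that KWIC exploits can be illustrated in a few lines: successive radial views are separated by roughly 111.25°, so any window of a Fibonacci number of consecutive views covers k-space nearly uniformly. The sketch below checks that property only; the KWIC annulus weighting and the image reconstruction itself are not reproduced.

```python
# Sketch of golden-angle radial view ordering: spokes separated by ~111.25 deg,
# checked for near-uniform coverage over Fibonacci-length windows (13, 21, 34, 89).
import numpy as np

golden_angle = 180.0 / ((1.0 + np.sqrt(5.0)) / 2.0)   # ~111.246 degrees
n_views = 89                                          # a Fibonacci number of spokes
angles = (np.arange(n_views) * golden_angle) % 180.0  # spoke orientations in [0, 180)

for window in (13, 21, 34, 89):
    a = np.sort(angles[:window])
    gaps = np.diff(np.concatenate([a, [a[0] + 180.0]]))   # include wrap-around gap
    print(f"{window:3d} views: max angular gap = {gaps.max():6.2f} deg")
```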

  7. Generic Methodology for Verification and Validation (GM-VV) to Support Acceptance of Models, Simulations and Data (Methodologie generale de verification et de validation (GM-VV) visant a soutenir l acceptation des modeles, simulations et donnees)

    DTIC Science & Technology

    2015-01-01

    The conceptual framework provides the terminology, concepts and principles to facilitate communication and ... the framework provides fundamental and generally applicable terminology, semantics, concepts and principles for V&V. The purpose of the framework is to

  8. A Steady State and Quasi-Steady Interface Between the Generalized Fluid System Simulation Program and the SINDA/G Thermal Analysis Program

    NASA Technical Reports Server (NTRS)

    Schallhorn, Paul; Majumdar, Alok; Tiller, Bruce

    2001-01-01

    A general purpose, one dimensional fluid flow code is currently being interfaced with the thermal analysis program SINDA/G. The flow code, GFSSP, is capable of analyzing steady state and transient flow in a complex network. The flow code is capable of modeling several physical phenomena including compressibility effects, phase changes, body forces (such as gravity and centrifugal) and mixture thermodynamics for multiple species. The addition of GFSSP to SINDA/G provides a significant improvement in convective heat transfer modeling for SINDA/G. The interface development is conducted in multiple phases. This paper describes the first phase of the interface, which allows for steady and quasi-steady (unsteady solid, steady fluid) conjugate heat transfer modeling.

  9. A DNA sequence evolution analysis generalized by simulation and the Markov chain Monte Carlo method implicates strand slippage in a majority of insertions and deletions.

    PubMed

    Nishizawa, Manami; Nishizawa, Kazuhisa

    2002-12-01

    To study the mechanisms for local evolutionary changes in DNA sequences involving slippage-type insertions and deletions, an alignment approach is explored that can consider the posterior probabilities of alignment models. Various patterns of insertion and deletion that can link the ancestor and descendant sequences are proposed and evaluated by simulation and compared by the Markov chain Monte Carlo (MCMC) method. Analyses of pseudogenes reveal that the introduction of the parameters that control the probability of slippage-type events markedly augments the probability of the observed sequence evolution, arguing that a cryptic involvement of slippage occurrences is manifested as insertions and deletions of short nucleotide segments. Strikingly, approximately 80% of insertions in human pseudogenes and approximately 50% of insertions in murid pseudogenes are likely to be caused by the slippage-mediated process, as represented by BC in ABCD --> ABCBCD. We suggest that, in both humans and murids, even very short repetitive motifs, such as CAGCAG, CACACA, and CCCC, have approximately 10- to 15-fold susceptibility to insertions and deletions, compared to nonrepetitive sequences. Our protocol, namely, indel-MCMC, thus seems to be a reasonable approach for statistical analyses of the early phase of microsatellite evolution.
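
    A toy illustration of the slippage-type insertion that the analysis evaluates, in which a short motif is duplicated in place (ABCD -> ABCBCD) rather than unrelated sequence being inserted. The MCMC proposal and acceptance machinery of the actual method is not reproduced; the sequence and motif length below are arbitrary.

```python
# Toy illustration of a slippage-type insertion: a motif is duplicated in place.
import random

def slippage_insert(seq, start, length):
    """Duplicate seq[start:start+length] immediately after itself."""
    motif = seq[start:start + length]
    return seq[:start + length] + motif + seq[start + length:]

random.seed(0)
seq = "ATGCAGCAGCACACATTGCCCC"
start = random.randrange(len(seq) - 3)
print(seq, "->", slippage_insert(seq, start, 3))
print("ABCD ->", slippage_insert("ABCD", 1, 2))   # reproduces the ABCD -> ABCBCD example
```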

  10. General anesthesia

    MedlinePlus

    ... page: //medlineplus.gov/ency/article/007410.htm General anesthesia. General anesthesia is treatment with certain medicines that puts you ...

  11. Molecular Simulation Of Nonequilibrium Hypersonic Flows

    NASA Astrophysics Data System (ADS)

    Schwartzentruber, T. E.; Valentini, P.; Tump, P.

    2011-05-01

    Large-scale conventional time-driven molecular dynamics (MD) simulations of normal shock waves are performed for monatomic argon and argon-helium mixtures. For pure argon, near perfect agreement between MD and direct simulation Monte Carlo (DSMC) results using the variable-hard-sphere model is found for density and temperature profiles as well as for velocity distribution functions throughout the shock. MD simulation results for argon are also in excellent agreement with experimental shock thickness data. Preliminary MD simulation results for argon-helium mixtures are in qualitative agreement with experimental density and temperature profile data, where separation between argon and helium density profiles due to disparate atomic mass is observed. Since conventional time-driven MD simulation of dilute gases is computationally inefficient, a combined Event-Driven/Time-Driven MD algorithm is presented. The ED/TD-MD algorithm computes impending collisions and advances molecules directly to their next collision while evaluating the collision using conventional time-driven MD with an arbitrary interatomic potential. The method timestep thus approaches the mean collision time in the gas, while also detecting and simulating multi-body collisions with a small approximation. Extension of the method to diatomic and small polyatomic molecules is detailed, where center-of-mass velocities and extended cutoff radii are used to advance molecules to impending collisions. Only atomic positions are integrated during collisions and molecule sorting algorithms are employed to determine if atoms are bound in a molecule after a collision event. Rotational relaxation to equilibrium for a low density diatomic gas is validated by comparison with large-scale conventional time-driven MD simulation, where the final rotational distribution function is verified to be the correct Boltzmann rotational energy distribution.
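
    The core event-driven step that the ED/TD-MD scheme builds on is the analytic prediction of the next hard-sphere collision time for a pair, so that molecules can be advanced directly to that event. A minimal sketch is given below; the hybrid scheme's switch to a short conventional MD integration of the interatomic potential at the collision itself is omitted, and the units are arbitrary.

```python
# Sketch of hard-sphere collision-time prediction, the basic event-driven step.
import numpy as np

def pair_collision_time(r12, v12, d):
    """Time until two spheres of diameter d touch, or inf if they never do."""
    b = np.dot(r12, v12)
    if b >= 0.0:
        return np.inf                       # moving apart
    v2 = np.dot(v12, v12)
    disc = b * b - v2 * (np.dot(r12, r12) - d * d)
    if disc < 0.0:
        return np.inf                       # glancing miss
    return (-b - np.sqrt(disc)) / v2

# Two argon-like spheres on a collision course (arbitrary reduced units).
r12 = np.array([3.0, 0.2, 0.0])             # r_i - r_j
v12 = np.array([-1.0, 0.0, 0.0])            # v_i - v_j
t = pair_collision_time(r12, v12, d=1.0)
print("advance both molecules by t =", round(t, 3), "then resolve the collision")
```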

  12. Numerical simulation on slabs dislocation of Zipingpu concrete faced rockfill dam during the Wenchuan earthquake based on a generalized plasticity model.

    PubMed

    Xu, Bin; Zhou, Yang; Zou, Degao

    2014-01-01

    After the Wenchuan earthquake in 2008, slab dislocations were found in the Zipingpu concrete faced rockfill dam (CFRD) between slabs cast in different construction stages, with the maximum offset reaching 17 cm. This is a new damage pattern that had not been observed in previous seismic damage investigations. Slab dislocation will gravely affect the seepage control system of the CFRD and even the safety of the dam. Therefore, investigations of the mechanism and development of slab dislocation might be meaningful to the engineering design of CFRDs. In this study, based on the previous studies by the authors, the slab dislocation phenomenon of the Zipingpu CFRD was investigated. The procedure and the constitutive models of the materials used for the finite element analysis are consistent with those previous studies. The water elevation, the angle, and the strength of the construction joints were among the major variables of investigation. The results indicate that a finite element procedure based on a modified generalized plasticity model and a perfect elastoplastic interface model can be used to evaluate the dislocation damage of face slabs of a concrete faced rockfill dam during an earthquake. The effects of the water elevation, the angle, and the strength of the construction joints are issues of major design concern under seismic loading.

  13. Numerical Simulation on Slabs Dislocation of Zipingpu Concrete Faced Rockfill Dam during the Wenchuan Earthquake Based on a Generalized Plasticity Model

    PubMed Central

    Xu, Bin; Zou, Degao

    2014-01-01

    After the Wenchuan earthquake in 2008, slab dislocations were found in the Zipingpu concrete faced rockfill dam (CFRD) between slabs cast in different construction stages, with the maximum offset reaching 17 cm. This is a new damage pattern that had not been observed in previous seismic damage investigations. Slab dislocation will gravely affect the seepage control system of the CFRD and even the safety of the dam. Therefore, investigations of the mechanism and development of slab dislocation might be meaningful to the engineering design of CFRDs. In this study, based on the previous studies by the authors, the slab dislocation phenomenon of the Zipingpu CFRD was investigated. The procedure and the constitutive models of the materials used for the finite element analysis are consistent with those previous studies. The water elevation, the angle, and the strength of the construction joints were among the major variables of investigation. The results indicate that a finite element procedure based on a modified generalized plasticity model and a perfect elastoplastic interface model can be used to evaluate the dislocation damage of face slabs of a concrete faced rockfill dam during an earthquake. The effects of the water elevation, the angle, and the strength of the construction joints are issues of major design concern under seismic loading. PMID:25013857

  14. Fast spot-based multiscale simulations of granular drainage

    SciTech Connect

    Rycroft, Chris H.; Wong, Yee Lok; Bazant, Martin Z.

    2009-05-22

    We develop a multiscale simulation method for dense granular drainage, based on the recently proposed spot model, where the particle packing flows by local collective displacements in response to diffusing "spots" of interstitial free volume. By comparing with discrete-element method (DEM) simulations of 55,000 spheres in a rectangular silo, we show that the spot simulation is able to approximately capture many features of drainage, such as packing statistics, particle mixing, and flow profiles. The spot simulation runs two to three orders of magnitude faster than DEM, making it an appropriate method for real-time control or optimization. We demonstrate extensions for modeling particle heaping and avalanching at the free surface, and for simulating the boundary layers of slower flow near walls. We show that the spot simulations are robust and flexible, by demonstrating that they can be used in both event-driven and fixed timestep approaches, and showing that the elastic relaxation step used in the model can be applied much less frequently and still create good results.
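
    A deliberately simplified two-dimensional sketch of the spot-model step described above: a spot of interstitial free volume random-walks upward and every particle within the spot radius receives a small collective displacement opposite to the spot's motion. The elastic relaxation step and the calibration against DEM are omitted, and all parameter values are illustrative assumptions.

```python
# Highly simplified 2-D sketch of a spot-model drainage step; illustrative only.
import numpy as np

rng = np.random.default_rng(3)
particles = rng.uniform(0, 20, size=(2000, 2))     # (x, z) positions in a 20 x 20 silo
initial = particles.copy()
spot = np.array([10.0, 0.0])                       # spot enters at the orifice
spot_radius, influence = 2.5, 0.05                 # spot size and displacement fraction

while spot[1] < 20.0:                              # spot random-walks upward
    step = np.array([rng.normal(0.0, 0.3), abs(rng.normal(0.5, 0.2))])
    spot += step
    inside = np.linalg.norm(particles - spot, axis=1) < spot_radius
    particles[inside] -= influence * step          # small collective displacement downward

drop = initial[:, 1] - particles[:, 1]             # net downward motion of each particle
moved = drop != 0.0
print(f"{moved.mean():.0%} of particles moved; mean drop {drop[moved].mean():.3f}")
```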

  15. Steady flow of smooth, inelastic particles on a bumpy inclined plane: hard and soft particle simulations.

    PubMed

    Tripathi, Anurag; Khakhar, D V

    2010-04-01

    We study smooth, slightly inelastic particles flowing under gravity on a bumpy inclined plane using event-driven and discrete-element simulations. Shallow layers (ten particle diameters) are used to enable simulation using the event-driven method within reasonable computational times. Steady flows are obtained in a narrow range of angles (13°-14.5°); lower angles result in stopping of the flow and higher angles in continuous acceleration. The flow is relatively dense with the solid volume fraction ν ≈ 0.5, and significant layering of particles is observed. We derive expressions for the stress, heat flux, and dissipation for the hard and soft particle models from first principles. The computed mean velocity, temperature, stress, dissipation, and heat flux profiles of hard particles are compared to soft particle results for different values of the stiffness constant (k). The value of the stiffness constant for which results for hard and soft particles are identical is found to be k ≥ 2×10^6 mg/d, where m is the mass of a particle, g is the acceleration due to gravity, and d is the particle diameter. We compare the simulation results to constitutive relations obtained from the kinetic theory of Jenkins and Richman [J. T. Jenkins and M. W. Richman, Arch. Ration. Mech. Anal. 87, 355 (1985)] for pressure, dissipation, viscosity, and thermal conductivity. We find that all the quantities are very well predicted by kinetic theory for volume fractions ν < 0.5. At higher densities, obtained for thicker layers (H=15d and H=20d), the kinetic theory does not give accurate predictions. Deviations of the kinetic theory predictions from simulation results are relatively small for dissipation and heat flux, and the most significant deviations are observed for shear viscosity and pressure. The results indicate the range of applicability of soft particle simulations and kinetic theory for dense flows.

  16. Evaluation of the efficiency and accuracy of new methods for atmospheric opacity and radiative transfer calculations in planetary general circulation model simulations

    NASA Astrophysics Data System (ADS)

    Zube, Nicholas Gerard; Zhang, Xi; Natraj, Vijay

    2016-10-01

    General circulation models often incorporate simple approximations of heating between vertically inhomogeneous layers rather than more accurate but computationally expensive radiative transfer (RT) methods. With the goal of developing a GCM package that can model both solar system bodies and exoplanets, it is vital to examine up-to-date RT models to optimize speed and accuracy for heat transfer calculations. Here, we examine a variety of interchangeable radiative transfer models in conjunction with MITGCM (Hill and Marshall, 1995). First, for atmospheric opacity calculations, we test gray approximation, line-by-line, and correlated-k methods. In combination with these, we also test RT routines using 2-stream DISORT (discrete ordinates RT), N-stream DISORT (Stamnes et al., 1988), and optimized 2-stream (Spurr and Natraj, 2011). Initial tests are run using Jupiter as an example case. The results can be compared in nine possible configurations for running a complete RT routine within a GCM. Each individual combination of opacity and RT methods is contrasted with the "ground truth" calculation provided by the line-by-line opacity and N-stream DISORT, in terms of computation speed and accuracy of the approximation methods. We also examine the effects on accuracy when performing these calculations at different time step frequencies within MITGCM. Ultimately, we will catalog and present the ideal RT routines that can replace commonly used approximations within a GCM for a significant increase in calculation accuracy, and speed comparable to the dynamical time steps of MITGCM. Future work will involve examining whether calculations in the spatial domain can also be reduced by smearing grid points into larger areas, and what effects this will have on overall accuracy.

  17. Altitude distribution of tropospheric ozone over the Northern Hemisphere during 1996, simulated with a chemistry-general circulation model at two different horizontal resolutions

    NASA Astrophysics Data System (ADS)

    Kentarchos, A. S.; Roelofs, G. J.; Lelieveld, J.

    2001-01-01

    The spatial/temporal variability of the vertical distribution of tropospheric ozone in the Northern Hemisphere (NH) over a period of 1 year (1996) is studied with a coupled chemistry-general circulation model. The model is used at two different horizontal resolutions (T30: 3.75°×3.75° and T63: 1.875°×1.875°) and is nudged towards European Centre for Medium Range Weather Forecasts analyses for 1996, using a four-dimensional assimilation technique (Newtonian relaxation), to enable direct comparisons of observations and model results. Overall, the model reproduces satisfactorily the magnitude and seasonal variability of the vertical ozone distribution observed at six selected locations. Discrepancies occur, however, at remote locations in the subtropical Atlantic and tropical Pacific where ozone concentrations throughout the free troposphere are overestimated by the fourth version of the European Centre Hamburg Model (ECHAM4)-T30. A considerable improvement is evident at T63, which can be attributed, at least partially, to less efficient transport of ozone precursors from the polluted continents at higher resolution. In the upper troposphere/tropopause region, short-term ozone variations are better reproduced at higher resolution. The origin of tropospheric ozone is examined by decomposing its seasonal variation in the model into ozone from the stratosphere and ozone produced within the troposphere. Differences in the NH annual tropospheric ozone budget for 1996 between T30 and T63 mean amounts are relatively small. The tropospheric ozone budget is dominated by photochemical production and destruction (2716 and 2684 Tg, respectively), while the net ozone flux from the stratosphere is estimated to be 436 Tg, and dry deposition is estimated to be 487 Tg.

  18. Integrated computer control system CORBA-based simulator FY98 LDRD project final summary report

    SciTech Connect

    Bryant, R M; Holloway, F W; Van Arsdall, P J

    1999-01-15

    The CORBA-based Simulator was a Laboratory Directed Research and Development (LDRD) project that applied simulation techniques to explore critical questions about distributed control architecture. The simulator project used a three-prong approach comprised of a study of object-oriented distribution tools, computer network modeling, and simulation of key control system scenarios. This summary report highlights the findings of the team and provides the architectural context of the study. For the last several years LLNL has been developing the Integrated Computer Control System (ICCS), which is an abstract object-oriented software framework for constructing distributed systems. The framework is capable of implementing large event-driven control systems for mission-critical facilities such as the National Ignition Facility (NIF). Tools developed in this project were applied to the NIF example architecture in order to gain experience with a complex system and derive immediate benefits from this LDRD. The ICCS integrates data acquisition and control hardware with a supervisory system, and reduces the amount of new coding and testing necessary by providing prebuilt components that can be reused and extended to accommodate specific additional requirements. The framework integrates control point hardware with a supervisory system by providing the services needed for distributed control such as database persistence, system start-up and configuration, graphical user interface, status monitoring, event logging, scripting language, alert management, and access control. The design is interoperable among computers of different kinds and provides plug-in software connections by leveraging a common object request brokering architecture (CORBA) to transparently distribute software objects across the network of computers. Because object broker distribution applied to control systems is relatively new and its inherent performance is roughly threefold less than traditional point

  19. Hard-disk equation of state: first-order liquid-hexatic transition in two dimensions with three simulation methods.

    PubMed

    Engel, Michael; Anderson, Joshua A; Glotzer, Sharon C; Isobe, Masaharu; Bernard, Etienne P; Krauth, Werner

    2013-04-01

    We report large-scale computer simulations of the hard-disk system at high densities in the region of the melting transition. Our simulations reproduce the equation of state, previously obtained using the event-chain Monte Carlo algorithm, with a massively parallel implementation of the local Monte Carlo method and with event-driven molecular dynamics. We analyze the relative performance of these simulation methods to sample configuration space and approach equilibrium. Our results confirm the first-order nature of the melting phase transition in hard disks. Phase coexistence is visualized for individual configurations via the orientational order parameter field. The analysis of positional order confirms the existence of the hexatic phase.
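
    Of the three methods compared in this record, the local Monte Carlo move is the simplest to sketch: pick a disk, propose a small random displacement, and accept it only if no overlap is created. The sketch below starts from a dilute square lattice far below the melting density and is orders of magnitude smaller than the massively parallel simulations of the paper.

```python
# Minimal local Monte Carlo move for hard disks with periodic boundaries.
# Starts from a non-overlapping square lattice well below the melting density.
import numpy as np

rng = np.random.default_rng(4)
m, L, sigma = 14, 20.0, 1.0                    # lattice size, box edge, disk diameter
xs = (np.arange(m) + 0.5) * (L / m)            # lattice spacing 20/14 > sigma, no overlaps
pos = np.array([(x, y) for x in xs for y in xs])
n = len(pos)

def overlaps(i, trial):
    """True if placing disk i at `trial` overlaps any other disk (minimum image)."""
    d = pos - trial
    d -= L * np.round(d / L)
    r2 = np.einsum("ij,ij->i", d, d)
    r2[i] = np.inf                             # ignore the disk itself
    return bool(np.any(r2 < sigma * sigma))

accepted = 0
for _ in range(20000):
    i = rng.integers(n)
    trial = (pos[i] + rng.uniform(-0.3, 0.3, 2)) % L
    if not overlaps(i, trial):
        pos[i] = trial
        accepted += 1
print("acceptance ratio:", accepted / 20000)
```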

  20. Simulation of networks of spiking neurons: A review of tools and strategies

    PubMed Central

    Brette, Romain; Rudolph, Michelle; Carnevale, Ted; Hines, Michael; Beeman, David; Bower, James M.; Diesmann, Markus; Morrison, Abigail; Goodman, Philip H.; Harris, Frederick C.; Zirpe, Milind; Natschläger, Thomas; Pecevski, Dejan; Ermentrout, Bard; Djurfeldt, Mikael; Lansner, Anders; Rochel, Olivier; Vieville, Thierry; Muller, Eilif; Davison, Andrew P.; El Boustani, Sami

    2009-01-01

    We review different aspects of the simulation of spiking neural networks. We start by reviewing the different types of simulation strategies and algorithms that are currently implemented. We next review the precision of those simulation strategies, in particular in cases where plasticity depends on the exact timing of the spikes. We overview different simulators and simulation environments presently available (restricted to those freely available, open source and documented). For each simulation tool, its advantages and pitfalls are reviewed, with an aim to allow the reader to identify which simulator is appropriate for a given task. Finally, we provide a series of benchmark simulations of different types of networks of spiking neurons, including Hodgkin–Huxley type, integrate-and-fire models, interacting with current-based or conductance-based synapses, using clock-driven or event-driven integration strategies. The same set of models are implemented on the different simulators, and the codes are made available. The ultimate goal of this review is to provide a resource to facilitate identifying the appropriate integration strategy and simulation tool to use for a given modeling problem related to spiking neural networks. PMID:17629781
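
    The clock-driven versus event-driven distinction reviewed here can be made concrete with a leaky integrate-and-fire neuron: because the subthreshold dynamics decay exponentially, the state can be advanced analytically from one incoming spike to the next, so nothing needs to be computed between events. The sketch below assumes illustrative parameter values and an instantaneous-synapse model; it is not taken from any particular simulator in the review.

```python
# Event-driven update of a leaky integrate-and-fire neuron: the membrane potential
# is decayed analytically between incoming spikes, so no clock ticks are needed.
import math

tau_m, v_thresh, v_reset = 20.0, 1.0, 0.0      # ms; normalized potential units

def deliver_spike(v, t_last, t_spike, weight):
    """Decay v from t_last to t_spike, add the synaptic weight, report firing."""
    v = v * math.exp(-(t_spike - t_last) / tau_m) + weight
    if v >= v_thresh:
        return v_reset, True
    return v, False

v, t_last = 0.0, 0.0
input_spikes = [(2.0, 0.4), (5.0, 0.4), (6.0, 0.4), (30.0, 0.4), (31.0, 0.7)]
for t, w in input_spikes:
    v, fired = deliver_spike(v, t_last, t, w)
    t_last = t
    print(f"t = {t:5.1f} ms  v = {v:5.3f}  {'SPIKE' if fired else ''}")
```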

  1. General Conformity

    EPA Pesticide Factsheets

    The General Conformity requirements ensure that the actions taken by federal agencies in nonattainment and maintenance areas do not interfere with a state’s plans to meet national standards for air quality.

  2. General paresis

    MedlinePlus

    ... due to damage to the brain from untreated syphilis. General paresis is one form of neurosyphilis. ... usually occurs in people who have had untreated syphilis for many years. Syphilis is a bacterial infection that ...

  3. General Dentist

    MedlinePlus

    ... information you need from the Academy of General Dentistry ... Instead of specializing in just one area of dentistry, they can provide plenty of different services for ...

  4. The complete general secretory pathway in gram-negative bacteria.

    PubMed Central

    Pugsley, A P

    1993-01-01

    The unifying feature of all proteins that are transported out of the cytoplasm of gram-negative bacteria by the general secretory pathway (GSP) is the presence of a long stretch of predominantly hydrophobic amino acids, the signal sequence. The interaction between signal sequence-bearing proteins and the cytoplasmic membrane may be a spontaneous event driven by the electrochemical energy potential across the cytoplasmic membrane, leading to membrane integration. The translocation of large, hydrophilic polypeptide segments to the periplasmic side of this membrane almost always requires at least six different proteins encoded by the sec genes and is dependent on both ATP hydrolysis and the electrochemical energy potential. Signal peptidases process precursors with a single, amino-terminal signal sequence, allowing them to be released into the periplasm, where they may remain or whence they may be inserted into the outer membrane. Selected proteins may also be transported across this membrane for assembly into cell surface appendages or for release into the extracellular medium. Many bacteria secrete a variety of structurally different proteins by a common pathway, referred to here as the main terminal branch of the GSP. This recently discovered branch pathway comprises at least 14 gene products. Other, simpler terminal branches of the GSP are also used by gram-negative bacteria to secrete a more limited range of extracellular proteins. PMID:8096622

  5. Simulating Voids

    NASA Astrophysics Data System (ADS)

    Goldberg, David M.; Vogeley, Michael S.

    2004-04-01

    We present a novel method for the simulation of the interior of large cosmic voids, suitable for the study of the formation and evolution of objects lying within such regions. Following Birkhoff's theorem, void regions dynamically evolve as universes with cosmological parameters that depend on the underdensity of the void. We derive the values of ΩM, ΩΛ, and H0 that describe this evolution. We examine how the growth rate of structure and scale factor in a void differ from the background universe. Together with a prescription for the power spectrum of fluctuations, these equations provide the initial conditions for running specialized void simulations. The increased efficiency of such simulations, in comparison to general-purpose simulations, allows an improvement of upward of a factor of 20 in the mass resolution. As a sanity check, we run a moderate-resolution simulation (N=128^3 particles) and confirm that the resulting mass function of void halos is consistent with other theoretical and numerical models.

  6. Meaningful timescales from Monte Carlo simulations of particle systems with hard-core interactions

    NASA Astrophysics Data System (ADS)

    Costa, Liborio I.

    2016-12-01

    A new Markov Chain Monte Carlo method for simulating the dynamics of particle systems characterized by hard-core interactions is introduced. In contrast to traditional Kinetic Monte Carlo approaches, where the state of the system is associated with minima in the energy landscape, in the proposed method, the state of the system is associated with the set of paths traveled by the atoms and the transition probabilities for an atom to be displaced are proportional to the corresponding velocities. In this way, the number of possible state-to-state transitions is reduced to a discrete set, and a direct link between the Monte Carlo time step and true physical time is naturally established. The resulting rejection-free algorithm is validated against event-driven molecular dynamics: the equilibrium and non-equilibrium dynamics of hard disks converge to the exact results with decreasing displacement size.
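
    A loose illustration of the selection rule stated in this record: atoms are picked for a trial displacement with probability proportional to their speed, and hard-core overlaps veto the move. The path-based state description and the exact rejection-free construction and time bookkeeping of the actual method are not reproduced; the time increment used below (step length divided by the selected particle's speed) is only a plausible stand-in.

```python
# Illustration of velocity-proportional particle selection with a hard-core check.
# NOT the rejection-free algorithm of the paper; time bookkeeping is a stand-in.
import numpy as np

rng = np.random.default_rng(5)
n, sigma, L, step = 50, 1.0, 12.0, 0.1
pos = np.array([(1.2 * i % L, 1.2 * (i // 10) + 1.0) for i in range(n)])  # loose packing
vel = rng.normal(size=(n, 2))
speed = np.linalg.norm(vel, axis=1)

t_phys = 0.0
for _ in range(5000):
    i = rng.choice(n, p=speed / speed.sum())          # velocity-proportional selection
    trial = (pos[i] + step * vel[i] / speed[i]) % L   # displace along the velocity
    d = pos - trial
    d -= L * np.round(d / L)                          # periodic minimum image
    r2 = np.einsum("ij,ij->i", d, d)
    r2[i] = np.inf
    if np.all(r2 >= sigma * sigma):                   # hard-core overlap veto
        pos[i] = trial
        t_phys += step / speed[i]                     # assumed time increment
print("accumulated pseudo-time:", round(t_phys, 2))
```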

  7. Generalized synchronization via nonlinear control.

    PubMed

    Juan, Meng; Xingyuan, Wang

    2008-06-01

    In this paper, the generalized synchronization problem of drive-response systems is investigated. Using the drive-response concept and the nonlinear control theory, a control law is designed to achieve the generalized synchronization of chaotic systems. Based on the Lyapunov stability theory, a generalized synchronization condition is derived. Theoretical analyses and numerical simulations further demonstrate the feasibility and effectiveness of the proposed technique.

  8. An algebraic variational multiscale-multigrid method for large-eddy simulation: generalized-α time integration, Fourier analysis and application to turbulent flow past a square-section cylinder

    NASA Astrophysics Data System (ADS)

    Gravemeier, Volker; Kronbichler, Martin; Gee, Michael W.; Wall, Wolfgang A.

    2011-02-01

    This article studies three aspects of the recently proposed algebraic variational multiscale-multigrid method for large-eddy simulation of turbulent flow. First, the method is integrated into a second-order-accurate generalized-α time-stepping scheme. Second, a Fourier analysis of a simplified model problem is performed to assess the impact of scale separation on the overall performance of the method. The analysis reveals that scale separation implemented by projective operators provides modeling effects very close to an ideal small-scale subgrid viscosity, that is, it preserves low frequencies, in contrast to non-projective scale separations. Third, the algebraic variational multiscale-multigrid method is applied to turbulent flow past a square-section cylinder. The computational results obtained with the method reveal, on the one hand, the good accuracy achievable for this challenging test case already at a rather coarse discretization and, on the other hand, the superior computing efficiency, e.g., compared to a traditional dynamic Smagorinsky modeling approach.

  9. Simulation of Physical Experiments in Immersive Virtual Environments

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Wasfy, Tamer M.

    2001-01-01

    An object-oriented, event-driven immersive virtual environment is described for the creation of virtual labs (VLs) for simulating physical experiments. Discussion focuses on a number of aspects of the VLs, including interface devices, software objects, and various applications. The VLs interface with output devices, including immersive stereoscopic screen(s) and stereo speakers, and a variety of input devices, including body tracking (head and hands), haptic gloves, wand, joystick, mouse, microphone, and keyboard. The VL incorporates the following types of primitive software objects: interface objects, support objects, geometric entities, and finite elements. Each object encapsulates a set of properties, methods, and events that define its behavior, appearance, and functions. A container object allows grouping of several objects. Applications of the VLs include viewing the results of the physical experiment, viewing a computer simulation of the physical experiment, simulation of the experiment's procedure, computational steering, and remote control of the physical experiment. In addition, the VL can be used as a risk-free (safe) environment for training. The implementation of virtual structures testing machines, virtual wind tunnels, and a virtual acoustic testing facility is described.

  10. Multipebble Simulations for Alternating Automata

    NASA Astrophysics Data System (ADS)

    Clemente, Lorenzo; Mayr, Richard

    We study generalized simulation relations for alternating Büchi automata (ABA), as well as alternating finite automata. Having multiple pebbles allows the Duplicator to "hedge her bets" and delay decisions in the simulation game, thus yielding a coarser simulation relation. We define (k1, k2)-simulations, with k1/k2 pebbles on the left/right, respectively. This generalizes previous work on ordinary simulation (i.e., (1,1)-simulation) for nondeterministic Büchi automata (NBA) in [4] and ABA in [5], and on (1,k)-simulation for NBA in [3].

  11. Modifications to Axially Symmetric Simulations Using New DSMC (2007) Algorithms

    NASA Technical Reports Server (NTRS)

    Liechty, Derek S.

    2008-01-01

    Several modifications aimed at improving physical accuracy are proposed for solving axially symmetric problems building on the DSMC (2007) algorithms introduced by Bird. Originally developed to solve nonequilibrium, rarefied flows, the DSMC method is now regularly used to solve complex problems over a wide range of Knudsen numbers. These new algorithms include features such as nearest neighbor collisions excluding the previous collision partners, separate collision and sampling cells, automatically adaptive variable time steps, a modified no-time counter procedure for collisions, and discontinuous and event-driven physical processes. Axially symmetric solutions require radial weighting for the simulated molecules since the molecules near the axis represent fewer real molecules than those farther away from the axis due to the difference in volume of the cells. In the present methodology, these radial weighting factors are continuous, linear functions that vary with the radial position of each simulated molecule. It is shown that how one defines the number of tentative collisions greatly influences the mean collision time near the axis. The method by which the grid is treated for axially symmetric problems also plays an important role near the axis, especially for scalar pressure. A new method to treat how the molecules are traced through the grid is proposed to alleviate the decrease in scalar pressure at the axis near the surface. Also, a modification to the duplication buffer is proposed to vary the duplicated molecular velocities while retaining the molecular kinetic energy and axially symmetric nature of the problem.
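
    A sketch of the continuous radial weighting idea described above: each simulated molecule stands for W(r) real molecules, with W varying linearly with radius, and a molecule that changes radius is duplicated or removed so that the expected number of real molecules it represents is conserved. The weight endpoints and domain size are illustrative assumptions, and the duplication-buffer velocity treatment proposed in the record is not addressed.

```python
# Continuous, linear radial weighting for axisymmetric DSMC (illustrative values).
import numpy as np

rng = np.random.default_rng(6)
R = 0.1                                     # outer radius of the domain, m (assumed)
W0, W1 = 50.0, 5000.0                       # assumed weights at the axis and at r = R

def weight(r):
    """Continuous, linear radial weighting factor."""
    return W0 + (W1 - W0) * (r / R)

def clone_or_cull(r_old, r_new):
    """Copies surviving a radial move; expectation equals W(r_old)/W(r_new)."""
    ratio = weight(r_old) / weight(r_new)
    base = int(ratio)
    return base + (rng.random() < (ratio - base))

moves = [(0.02, 0.08), (0.08, 0.02), (0.05, 0.0501)]
for r_old, r_new in moves:
    copies = [clone_or_cull(r_old, r_new) for _ in range(100000)]
    print(f"r: {r_old:.3f} -> {r_new:.4f}   mean copies = {np.mean(copies):.3f} "
          f"(target {weight(r_old) / weight(r_new):.3f})")
```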

  12. Simulation of hard-disk flow in microchannels.

    PubMed

    Shen, Guofei; Ge, Wei

    2010-01-01

    The dynamic flow behavior of a hard-disk fluid under external force field in two-dimensional microchannels is investigated using an event-driven molecular dynamics simulation method. Simulations have been carried out under laminar and subsonic conditions in both slip regime and transition regime, and the effects of three main factors, Knudsen number (Kn), force field intensity, and packing fraction, on flow and heat transfer behavior have been studied. It is shown that all the factors play important roles in the velocity distribution of the flow, and the temperature profile of the gas flow may exhibit a bimodal shape with a local minimum instead of a maximum in the center. These findings verify the predictions of nonequilibrium kinetic theories on the so-called "temperature dip." At high Kn, the two maxima of temperature shift to two walls and the temperature profile changes to a "parabola" opening upward with a minimum in the center. A slight setback of the temperature is also found before the fluid flow eventually arrives at a steady state when the shear rate is high enough.

  13. Nonlocal General Relativity

    NASA Astrophysics Data System (ADS)

    Mashhoon, Bahram

    2014-12-01

    A brief account of the present status of the recent nonlocal generalization of Einstein's theory of gravitation is presented. The main physical assumptions that underlie this theory are described. We clarify the physical meaning and significance of Weitzenböck's torsion and emphasize its intimate relationship with the gravitational field, characterized by the Riemannian curvature of spacetime. In this theory, nonlocality can simulate dark matter; in fact, in the Newtonian regime, we recover the phenomenological Tohline-Kuhn approach to modified gravity. To account for the observational data regarding dark matter, nonlocality is associated with a characteristic length scale of order 1 kpc. The confrontation of nonlocal gravity with observation is briefly discussed.

  14. Optimization Of Simulated Trajectories

    NASA Technical Reports Server (NTRS)

    Brauer, Garry L.; Olson, David W.; Stevenson, Robert

    1989-01-01

    Program To Optimize Simulated Trajectories (POST) provides ability to target and optimize trajectories of point-mass powered or unpowered vehicle operating at or near rotating planet. Used successfully to solve wide variety of problems in mechanics of atmospheric flight and transfer between orbits. Generality of program demonstrated by its capability to simulate up to 900 distinct trajectory phases, including generalized models of planets and vehicles. VAX version written in FORTRAN 77 and CDC version in FORTRAN V.

  15. Simulation: What Is It?

    ERIC Educational Resources Information Center

    Laktasic, Stanley G.

    Simulation is the duplication of the essential characteristics of a task or situation. It represents three elements of the teaching-learning process: (1) stimulus situation; (2) response; and (3) feedback. The uses of simulation may fall into one of three general categories: research--the generation of information about an operational or proposed…

  16. Bridge Crossing Simulator

    DTIC Science & Technology

    2014-10-07

    reaction structure, and a control system, the BCS physically simulates vehicular crossing loads for the expected life span of the bridge undergoing...information provided by the customer. In general, statics will be used to determine equations for moments and deflections based on the specific bridge ...systems. Through the use of hydraulic actuators, a reaction structure, and a control system, the Bridge Crossing Simulator physically simulates vehicular

  17. A Generalization of Generalized Fibonacci and Generalized Pell Numbers

    ERIC Educational Resources Information Center

    Abd-Elhameed, W. M.; Zeyada, N. A.

    2017-01-01

    This paper is concerned with developing a new class of generalized numbers. The main advantage of this class is that it generalizes the two classes of generalized Fibonacci numbers and generalized Pell numbers. Some new identities involving these generalized numbers are obtained. In addition, the two well-known identities of Sury and Marques which…

  18. I. Cognitive and instructional factors relating to students' development of personal models of chemical systems in the general chemistry laboratory II. Solvation in supercritical carbon dioxide/ethanol mixtures studied by molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Anthony, Seth

    Part I. Students' participation in inquiry-based chemistry laboratory curricula, and, in particular, engagement with key thinking processes in conjunction with these experiences, is linked with success at the difficult task of "transfer"---applying their knowledge in new contexts to solve unfamiliar types of problems. We investigate factors related to classroom experiences, student metacognition, and instructor feedback that may affect students' engagement in key aspects of the Model-Observe-Reflect-Explain (MORE) laboratory curriculum - production of written molecular-level models of chemical systems, describing changes to those models, and supporting those changes with reference to experimental evidence---and related behaviors. Participation in introductory activities that emphasize reviewing and critiquing of sample models and peers' models are associated with improvement in several of these key aspects. When students' self-assessments of the quality of aspects of their models are solicited, students are generally overconfident in the quality of their models, but these self-ratings are also sensitive to the strictness of grades assigned by their instructor. Furthermore, students who produce higher-quality models are also more accurate in their self-assessments, suggesting the importance of self-evaluation as part of the model-writing process. While the written feedback delivered by instructors did not have significant impacts on student model quality or self-assessments, students' resubmissions of models were significantly improved when students received "reflective" feedback prompting them to self-evaluate the quality of their models. Analysis of several case studies indicates that the content and extent of molecular-level ideas expressed in students' models are linked with the depth of discussion and content of discussion that occurred during the laboratory period, with ideas developed or personally committed to by students during the laboratory period being

  19. Simulation of the 1986-1987 El Niño and 1988 La Niña events with a free surface tropical Pacific Ocean general circulation model

    NASA Astrophysics Data System (ADS)

    Zhang, Rong-Hua; Endoh, Masahiro

    1994-04-01

    Observed atmospheric forcing fields over the period 1984-1989 force a free surface tropical Pacific Ocean general circulation model. Numerical simulation of the 1986-1987 El Niño and 1988 La Niña events is presented in the paper. Some quantitative comparisons between model time series and corresponding observations of sea level, and upper ocean current and temperature are made to verify the model performance. Diagnostic analyses of heat balance and available energy budget are given as well. The space-time evolution of various model variables demonstrates that the model produces interannual variations with reasonable success. Beginning in mid-1986, westerly wind over the western equatorial Pacific drives strong eastward surface currents which accomplish the massive transfer of warm surface water. The strong westerly wind in late 1986 excites the pronounced equatorial Kelvin waves, which propagate eastward toward the eastern and coastal Pacific where they depress the thermocline and raise sea level twice, and increase sea surface temperature. The eastern Pacific warming occurs primarily from the diminished cooling contribution of vertical advection, whereas in the central Pacific, eastward advection by anomalous zonal flows is the principal mechanism. The El Niño conditions in the eastern Pacific disappear in mid-1987, whereas they remain in the central and western Pacific until early 1988. Subsequently, the tropical Pacific Ocean rebounds to significant La Niña conditions. Available energy (AE) has a good phase relationship with respect to other variables characterized by warm and cold conditions. AE is anomalously high prior to a warm event, accompanying conversion from kinetic energy (KE) to available potential energy (APE). During the development of El Niño, although relaxation of trade wind reduces input of wind energy, the appearance of westerly wind in the western Pacific leads to a sharp increase in KE. This excites excessive conversion from APE to KE

  20. Functional Generalized Additive Models.

    PubMed

    McLean, Mathew W; Hooker, Giles; Staicu, Ana-Maria; Scheipl, Fabian; Ruppert, David

    2014-01-01

    We introduce the functional generalized additive model (FGAM), a novel regression model for association studies between a scalar response and a functional predictor. We model the link-transformed mean response as the integral with respect to t of F{X(t), t} where F(·,·) is an unknown regression function and X(t) is a functional covariate. Rather than having an additive model in a finite number of principal components as in Müller and Yao (2008), our model incorporates the functional predictor directly and thus our model can be viewed as the natural functional extension of generalized additive models. We estimate F(·,·) using tensor-product B-splines with roughness penalties. A pointwise quantile transformation of the functional predictor is also considered to ensure each tensor-product B-spline has observed data on its support. The methods are evaluated using simulated data and their predictive performance is compared with other competing scalar-on-function regression alternatives. We illustrate the usefulness of our approach through an application to brain tractography, where X(t) is a signal from diffusion tensor imaging at position, t, along a tract in the brain. In one example, the response is disease-status (case or control) and in a second example, it is the score on a cognitive test. R code for performing the simulations and fitting the FGAM can be found in supplemental materials available online.

  1. Event-Driven On-Board Software Using Priority-Based Communications Protocols

    NASA Astrophysics Data System (ADS)

    Fowell, S.; Ward, R.; Plummer, C.

    This paper describes current projects being performed by SciSys in the area of the use of software agents, built using CORBA middleware and SOIF communications protocols, to improve operations within autonomous satellite/ground systems. These concepts have been developed and demonstrated in a series of experiments variously funded by ESA's Technology Flight Opportunity Initiative (TFO) and Leading Edge Technology for SMEs (LET-SME), and the British National Space Centre's (BNSC) National Technology Programme. In [1] it is proposed that on-board software should evolve to one that uses an architecture of loosely -coupled software agents, integrated using minimum Real- Time CORBA ORBs such as the SciSys microORB. Building on that, this paper considers the requirements such an architecture and implementation place on the underlying communication protocols (software and hardware) and how these may be met by the emerging CCSDS SOIF recommendations. 2. TRENDS AND ISSUES 2.

  2. Eocene global warming events driven by ventilation of oceanic dissolved organic carbon.

    PubMed

    Sexton, Philip F; Norris, Richard D; Wilson, Paul A; Pälike, Heiko; Westerhold, Thomas; Röhl, Ursula; Bolton, Clara T; Gibbs, Samantha

    2011-03-17

    'Hyperthermals' are intervals of rapid, pronounced global warming known from six episodes within the Palaeocene and Eocene epochs (∼65-34 million years (Myr) ago). The most extreme hyperthermal was the ∼170 thousand year (kyr) interval of 5-7 °C global warming during the Palaeocene-Eocene Thermal Maximum (PETM, 56 Myr ago). The PETM is widely attributed to massive release of greenhouse gases from buried sedimentary carbon reservoirs, and other, comparatively modest, hyperthermals have also been linked to the release of sedimentary carbon. Here we show, using new 2.4-Myr-long Eocene deep ocean records, that the comparatively modest hyperthermals are much more numerous than previously documented, paced by the eccentricity of Earth's orbit and have shorter durations (∼40 kyr) and more rapid recovery phases than the PETM. These findings point to the operation of fundamentally different forcing and feedback mechanisms than for the PETM, involving redistribution of carbon among Earth's readily exchangeable surface reservoirs rather than carbon exhumation from, and subsequent burial back into, the sedimentary reservoir. Specifically, we interpret our records to indicate repeated, large-scale releases of dissolved organic carbon (at least 1,600 gigatonnes) from the ocean by ventilation (strengthened oxidation) of the ocean interior. The rapid recovery of the carbon cycle following each Eocene hyperthermal strongly suggests that carbon was re-sequestered by the ocean, rather than the much slower process of silicate rock weathering proposed for the PETM. Our findings suggest that these pronounced climate warming events were driven not by repeated releases of carbon from buried sedimentary sources, but, rather, by patterns of surficial carbon redistribution familiar from younger intervals of Earth history.

  3. Event-driven sediment flux in Hueneme and Mugu submarine canyons, southern California

    USGS Publications Warehouse

    Xu, J. P.; Swarzenski, P.W.; Noble, M.; Li, A.-C.

    2010-01-01

    Vertical sediment fluxes and their dominant controlling processes in Hueneme and Mugu submarine canyons off south-central California were assessed using data from sediment traps and current meters on two moorings that were deployed for 6 months during the winter of 2007. The maxima of total particulate flux, which reached as high as 300+ g/m2/day in Hueneme Canyon, were recorded during winter storm events when high waves and river floods often coincided. During these winter storms, wave-induced resuspension of shelf sediment was a major source for the elevated sediment fluxes. Canyon rim morphology, rather than physical proximity to an adjacent river mouth, appeared to control the magnitude of sediment fluxes in these two submarine canyon systems. Episodic turbidity currents and internal bores enhanced sediment fluxes, particularly in the lower sediment traps positioned 30 m above the canyon floor. Lower excess 210Pb activities measured in the sediment samples collected during periods of peak total particulate flux further substantiate that reworked shelf-, rather than newly introduced river-borne, sediments supply most of the material entering these canyons during storms.

  4. Rare measurements of a sprite with halo event driven by a negative lightning discharge over Argentina

    USGS Publications Warehouse

    Taylor, M.J.; Bailey, M.A.; Pautet, P.D.; Cummer, S.A.; Jaugey, N.; Thomas, J.N.; Solorzano, N.N.; Sao, Sabbas F.; Holzworth, R.H.; Pinto, O.; Schuch, N.J.

    2008-01-01

    As part of a collaborative campaign to investigate Transient Lummous Events (TLEs) over South America, coordinated optical, ELF/VLF, and lightning measurements were made of a mesoscale thunderstorm observed on February 22-23, 2006 over northern Argentina that produced 445 TLEs within a ???6 hour period. Here, we report comprehensive measurements of one of these events, a sprite with halo that was unambiguously associated with a large negative cloud-to-ground (CG) lightning discharge with an impulsive vertical charge moment change (??MQv) of -503 C.km. This event was similar in its location, morphology and duration to other positive TLEs observed from this storm. However, the downward extent of the negative streamers was limited to 25 km, and their apparent brightness was lower than that of a comparable positive event. Observations of negative CG events are rare, and these measurements provide fin-ther evidence that sprites can be driven by upward as well as downward electric fields, as predicted by the conventional breakdown mechanism. Copyright 2008 by the American Geophysical Union.

  5. On the use of orientation filters for 3D reconstruction in event-driven stereo vision

    PubMed Central

    Camuñas-Mesa, Luis A.; Serrano-Gotarredona, Teresa; Ieng, Sio H.; Benosman, Ryad B.; Linares-Barranco, Bernabe

    2014-01-01

    The recently developed Dynamic Vision Sensors (DVS) sense visual information asynchronously and code it into trains of events with sub-micro second temporal resolution. This high temporal precision makes the output of these sensors especially suited for dynamic 3D visual reconstruction, by matching corresponding events generated by two different sensors in a stereo setup. This paper explores the use of Gabor filters to extract information about the orientation of the object edges that produce the events, therefore increasing the number of constraints applied to the matching algorithm. This strategy provides more reliably matched pairs of events, improving the final 3D reconstruction. PMID:24744694

  6. An Event-driven, Value-based, Pull Systems Engineering Scheduling Approach

    DTIC Science & Technology

    2012-03-01

    organizational structures like other agile methods (e.g. Scrum ). Such systems can be set up in individual projects and allowed to evolve into more...and organizational tools for large-scale Scrum . Pearson Education, 2008. [7] M. Poppendieck and T. Poppendieck, Implementing Lean Software

  7. Ultra-Low Power Event-Driven Wireless Sensor Node Using Piezoelectric Accelerometer for Health Monitoring

    NASA Astrophysics Data System (ADS)

    Okada, Hironao; Kobayashi, Takeshi; Masuda, Takashi; Itoh, Toshihiro

    2009-07-01

    We describe a low power consumption wireless sensor node designed for monitoring the conditions of animals, especially of chickens. The node detects variations in 24-h behavior patterns by acquiring the number of the movement of an animal whose acceleration exceeds a threshold measured in per unit time. Wireless sensor nodes when operated intermittently are likely to miss necessary data during their sleep mode state and waste the power in the case of acquiring useless data. We design the node worked only when required acceleration is detected using a piezoelectric accelerometer and a comparator for wake-up source of micro controller unit.

  8. Relativistic electron precipitation events driven by electromagnetic ion-cyclotron waves

    SciTech Connect

    Khazanov, G. Sibeck, D.; Tel'nikhin, A.; Kronberg, T.

    2014-08-15

    We adopt a canonical approach to describe the stochastic motion of relativistic belt electrons and their scattering into the loss cone by nonlinear EMIC waves. The estimated rate of scattering is sufficient to account for the rate and intensity of bursty electron precipitation. This interaction is shown to result in particle scattering into the loss cone, forming ∼10 s microbursts of precipitating electrons. These dynamics can account for the statistical correlations between processes of energization, pitch angle scattering, and relativistic electron precipitation events, that are manifested on large temporal scales of the order of the diffusion time ∼tens of minutes.

  9. Effects of climate events driven hydrodynamics on dissolved oxygen in a subtropical deep reservoir in Taiwan.

    PubMed

    Fan, Cheng-Wei; Kao, Shuh-Ji

    2008-04-15

    The seasonal concentrations of dissolved oxygen in a subtropical deep reservoir were studied over a period of one year. The study site was the Feitsui Reservoir in Taiwan. It is a dam-constructed reservoir with a surface area of 10.24 km(2) and a mean depth of 39.6 m, with a maximum depth of 113.5 m near the dam. It was found that certain weather and climate events, such as typhoons in summer and autumn, as well as cold fronts in winter, can deliver oxygen-rich water, and consequently have strong impacts on the dissolved oxygen level. The typhoon turbidity currents and winter density currents played important roles in supplying oxygen to the middle and bottom water, respectively. The whole process can be understood by the hydrodynamics driven by weather and climate events. This work provides the primary results of dissolved oxygen in a subtropical deep reservoir, and the knowledge is useful in understanding water quality in subtropical regions.

  10. An event driven hybrid identity management approach to privacy enhanced e-health.

    PubMed

    Sánchez-Guerrero, Rosa; Almenárez, Florina; Díaz-Sánchez, Daniel; Marín, Andrés; Arias, Patricia; Sanvido, Fabio

    2012-01-01

    Credential-based authorization offers interesting advantages for ubiquitous scenarios involving limited devices such as sensors and personal mobile equipment: the verification can be done locally; it offers a more reduced computational cost than its competitors for issuing, storing, and verification; and it naturally supports rights delegation. The main drawback is the revocation of rights. Revocation requires handling potentially large revocation lists, or using protocols to check the revocation status, bringing extra communication costs not acceptable for sensors and other limited devices. Moreover, the effective revocation consent--considered as a privacy rule in sensitive scenarios--has not been fully addressed. This paper proposes an event-based mechanism empowering a new concept, the sleepyhead credentials, which allows to substitute time constraints and explicit revocation by activating and deactivating authorization rights according to events. Our approach is to integrate this concept in IdM systems in a hybrid model supporting delegation, which can be an interesting alternative for scenarios where revocation of consent and user privacy are critical. The delegation includes a SAML compliant protocol, which we have validated through a proof-of-concept implementation. This article also explains the mathematical model describing the event-based model and offers estimations of the overhead introduced by the system. The paper focus on health care scenarios, where we show the flexibility of the proposed event-based user consent revocation mechanism.

  11. Time Series Modeling of Army Mission Command Communication Networks: An Event-Driven Analysis

    DTIC Science & Technology

    2013-06-01

    critical events. In a detailed analysis of the email corpus of the Enron Corporation, Diesner and Carley (2005; see also Murshed et al. 2007) found that...established contacts and formal roles. The Enron crisis is instructive as a network with a critical period of failure. Other researchers have also found...Diesner, J., Frantz, T. L., & Carley, K. M. (2005). Communication networks from the Enron email corpus “It’s always about the people. Enron is no

  12. An event-driven phytoplankton bloom in southern Lake Michigan observed by satellite.

    SciTech Connect

    Lesht, B. M.; Stroud, J. R.; McCormick, M. J.; Fahnenstiel, G. L.; Stein, M. L.; Welty, L. J.; Leshkevich, G. A.; Environmental Research; Univ. of Chicago; Great Lakes Research Lab.

    2002-04-15

    Sea-viewing Wide Field-of-View Sensor (SeaWiFS) images from June 1998 show a surprising early summer phytoplankton bloom in southern Lake Michigan that accounted for approximately 25% of the lake's annual gross offshore algal primary production. By combining the satellite imagery with in situ measurements of water temperature and wind velocity we show that the bloom was triggered by a brief wind event that was sufficient to cause substantial vertical mixing even though the lake was already stratified. We conclude that episodic events can have significant effects on the biological state of large lakes and should be included in biogeochemical process models.

  13. An Event Driven Hybrid Identity Management Approach to Privacy Enhanced e-Health

    PubMed Central

    Sánchez-Guerrero, Rosa; Almenárez, Florina; Díaz-Sánchez, Daniel; Marín, Andrés; Arias, Patricia; Sanvido, Fabio

    2012-01-01

    Credential-based authorization offers interesting advantages for ubiquitous scenarios involving limited devices such as sensors and personal mobile equipment: the verification can be done locally; it offers a more reduced computational cost than its competitors for issuing, storing, and verification; and it naturally supports rights delegation. The main drawback is the revocation of rights. Revocation requires handling potentially large revocation lists, or using protocols to check the revocation status, bringing extra communication costs not acceptable for sensors and other limited devices. Moreover, the effective revocation consent—considered as a privacy rule in sensitive scenarios—has not been fully addressed. This paper proposes an event-based mechanism empowering a new concept, the sleepyhead credentials, which allows to substitute time constraints and explicit revocation by activating and deactivating authorization rights according to events. Our approach is to integrate this concept in IdM systems in a hybrid model supporting delegation, which can be an interesting alternative for scenarios where revocation of consent and user privacy are critical. The delegation includes a SAML compliant protocol, which we have validated through a proof-of-concept implementation. This article also explains the mathematical model describing the event-based model and offers estimations of the overhead introduced by the system. The paper focus on health care scenarios, where we show the flexibility of the proposed event-based user consent revocation mechanism. PMID:22778634

  14. Functional Flow and Event-Driven Methods for Predicting System Performance

    DTIC Science & Technology

    2015-09-01

    21. 2. SAR Mission initiates; SAR Assets conduct search but no objects of interest are found; SAR assets continue to scan but OSC aborts mission...be related to the SAR, so the OSC aborts mission and all Assets RTB. 45 4. SAR Mission initiates; SAR Assets conduct search and find an object of...interest; the object of interest is determined to be wreckage so the OSC aborts mission and all Assets RTB. 5. SAR Mission initiates; SAR Assets

  15. On the use of orientation filters for 3D reconstruction in event-driven stereo vision.

    PubMed

    Camuñas-Mesa, Luis A; Serrano-Gotarredona, Teresa; Ieng, Sio H; Benosman, Ryad B; Linares-Barranco, Bernabe

    2014-01-01

    The recently developed Dynamic Vision Sensors (DVS) sense visual information asynchronously and code it into trains of events with sub-micro second temporal resolution. This high temporal precision makes the output of these sensors especially suited for dynamic 3D visual reconstruction, by matching corresponding events generated by two different sensors in a stereo setup. This paper explores the use of Gabor filters to extract information about the orientation of the object edges that produce the events, therefore increasing the number of constraints applied to the matching algorithm. This strategy provides more reliably matched pairs of events, improving the final 3D reconstruction.

  16. Event-Driven Messaging for Offline Data Quality Monitoring at ATLAS

    NASA Astrophysics Data System (ADS)

    Onyisi, Peter

    2015-12-01

    During LHC Run 1, the information flow through the offline data quality monitoring in ATLAS relied heavily on chains of processes polling each other's outputs for handshaking purposes. This resulted in a fragile architecture with many possible points of failure and an inability to monitor the overall state of the distributed system. We report on the status of a project undertaken during the LHC shutdown to replace the ad hoc synchronization methods with a uniform message queue system. This enables the use of standard protocols to connect processes on multiple hosts; reliable transmission of messages between possibly unreliable programs; easy monitoring of the information flow; and the removal of inefficient polling-based communication.

  17. Piezoelectric MEMS switch to activate event-driven wireless sensor nodes

    NASA Astrophysics Data System (ADS)

    Nogami, H.; Kobayashi, T.; Okada, H.; Makimoto, N.; Maeda, R.; Itoh, T.

    2013-09-01

    We have developed piezoelectric microelectromechanical systems (MEMS) switches and applied them to ultra-low power wireless sensor nodes, to monitor the health condition of chickens. The piezoelectric switches have ‘S’-shaped piezoelectric cantilevers with a proof mass. Since the resonant frequency of the piezoelectric switches is around 24 Hz, we have utilized their superharmonic resonance to detect chicken movements as low as 5-15 Hz. When the vibration frequency is 4, 6 and 12 Hz, the piezoelectric switches vibrate at 0.5 m s-2 and generate 3-5 mV output voltages with superharmonic resonance. In order to detect such small piezoelectric output voltages, we employ comparator circuits that can be driven at low voltages, which can set the threshold voltage (Vth) from 1 to 31 mV with a 1 mV increment. When we set Vth at 4 mV, the output voltages of the piezoelectric MEMS switches vibrate below 15 Hz with amplitudes above 0.3 m s-2 and turn on the comparator circuits. Similarly, by setting Vth at 5 mV, the output voltages turn on the comparator circuits with vibrations above 0.4 m s-2. Furthermore, setting Vth at 10 mV causes vibrations above 0.5 m s-2 that turn on the comparator circuits. These results suggest that we can select small or fast chicken movements