Event-driven simulation in SELMON: An overview of EDSE
NASA Technical Reports Server (NTRS)
Rouquette, Nicolas F.; Chien, Steve A.; Charest, Leonard, Jr.
1992-01-01
We describe EDSE (event-driven simulation engine), a model-based event-driven simulator implemented for SELMON, a tool for sensor selection and anomaly detection in real-time monitoring. The simulator is used in conjunction with a causal model to predict future behavior of the model from observed data. The behavior of the causal model is interpreted as equivalent to the behavior of the physical system being modeled. An overview of the functionality of the simulator and of the model-based event-driven simulation paradigm on which it is based is provided, including high-level descriptions of the following key properties: event consumption and event creation, iterative simulation, and synchronization and filtering of monitoring data from the physical system. Finally, we discuss how EDSE stands with respect to the relevant open issues of discrete-event and model-based simulation.
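The consume/create cycle described above is the core of any event-driven engine: a time-ordered queue from which events are consumed and into which handlers create future events. The loop below is a minimal Python sketch of that cycle, not EDSE's actual API; the handler signature is illustrative.

```python
import heapq

def run_events(initial_events, handlers, t_end):
    """Minimal event-driven loop: pop the earliest event, let its handler
    consume it and possibly create future events.
    Events are (time, kind) tuples; handlers[kind](t) returns a list of
    (delay, new_kind) pairs to schedule."""
    queue = list(initial_events)
    heapq.heapify(queue)            # priority queue ordered by event time
    log = []
    while queue:
        t, kind = heapq.heappop(queue)
        if t > t_end:
            break                   # simulation horizon reached
        log.append((t, kind))       # consume the event
        for dt, new_kind in handlers[kind](t):
            heapq.heappush(queue, (t + dt, new_kind))  # create events
    return log
```

A self-scheduling "tick" handler, for example, reproduces fixed-step behavior as a special case of the event-driven loop.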
Extended event driven molecular dynamics for simulating dense granular matter
NASA Astrophysics Data System (ADS)
González, S.; Risso, D.; Soto, R.
2009-12-01
A new numerical method is presented to efficiently simulate the inelastic hard sphere (IHS) model for granular media, when fluid and frozen regions coexist in the presence of gravity. The IHS model is extended by allowing particles to change their dynamics into either a frozen state or back to the normal collisional state, while computing the dynamics only for the particles in the normal state. Careful criteria, local in time and space, are designed such that particles become frozen only at mechanically stable positions. The homogeneous deposition over a static surface and the dynamics of a rotating drum are studied as test cases. The simulations agree with previous experimental results. The model is much more efficient than the usual event driven method and makes it possible to overcome some of the difficulties of the standard IHS model, such as the existence of a static limit.
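The key bookkeeping idea, computing dynamics only for particles in the normal state and freezing them at mechanically stable positions, can be sketched in one dimension. The freezing criterion below (at rest on the floor with negligible speed) and all parameters are illustrative stand-ins for the paper's local stability criteria.

```python
def step(particles, dt, g=9.8, v_freeze=1e-3):
    """One explicit step over a 1-D column of particles (toy sketch).
    Only particles in the 'normal' state are integrated; a particle is
    frozen when it rests at the floor with negligible speed."""
    for p in particles:
        if p["state"] == "frozen":
            continue                          # no dynamics computed
        p["v"] -= g * dt                      # gravity
        p["x"] += p["v"] * dt
        if p["x"] <= 0.0 and abs(p["v"]) < v_freeze:
            p["x"], p["v"], p["state"] = 0.0, 0.0, "frozen"
        elif p["x"] <= 0.0:                   # inelastic bounce off the floor
            p["x"], p["v"] = 0.0, -0.5 * p["v"]
    return particles
```

Frozen particles cost nothing per step, which is where the efficiency gain over plain event-driven IHS simulation comes from when large static regions form.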
NASA Astrophysics Data System (ADS)
Li, Jun; Fu, Siyao; He, Haibo; Jia, Hongfei; Li, Yanzhong; Guo, Yi
2015-11-01
Large-scale regional evacuation is an important part of national security emergency response planning, and the emergency evacuation of large commercial shopping areas, as typical service systems, is an active research topic. A systematic methodology based on Cellular Automata with a Dynamic Floor Field and an event-driven model is proposed and examined in a case study of evacuation from a commercial shopping mall. Pedestrian movement is modeled with Cellular Automata, while the event-driven model is used to switch the simulated movement patterns between a normal situation and emergency evacuation. The model is composed of four layers: an environment layer, a customer layer, a clerk layer, and a trajectory layer. In simulating pedestrian routes, the model takes into account customers' purchase intentions and pedestrian density. The combined model reflects the behavioral characteristics of customers and clerks in both normal and emergency situations. The distribution of individual evacuation times as a function of initial position and the dynamics of the evacuation process are studied. Our results indicate that the combination of Cellular Automata with a Dynamic Floor Field and event-driven scheduling can be used to simulate pedestrian flows in indoor areas with complicated surroundings and to investigate the layout of shopping malls.
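The floor-field mechanism underlying such CA evacuation models can be sketched as a BFS distance field plus a greedy hop rule. This is a minimal illustration only; the paper's dynamic field, conflict resolution, and behavioral layers are omitted.

```python
from collections import deque

def distance_field(cells, exits):
    """Static floor field: BFS distance (in steps) to the nearest exit."""
    field = {c: 0 for c in exits}
    frontier = deque(exits)
    while frontier:
        x, y = frontier.popleft()
        for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if n in cells and n not in field:
                field[n] = field[(x, y)] + 1
                frontier.append(n)
    return field

def step(pedestrians, field):
    """One CA update: each pedestrian hops to the free 4-neighbour cell
    with the lowest field value (only downhill moves are taken)."""
    occupied = set(pedestrians)
    result = []
    for cell in pedestrians:
        x, y = cell
        free = [n for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
                if n in field and n not in occupied]
        best = min(free, key=field.get, default=cell)
        if free and field[best] < field[cell]:
            occupied.discard(cell)
            occupied.add(best)
            result.append(best)
        else:
            result.append(cell)
    return result
```

Iterating `step` until every pedestrian reaches a zero-field (exit) cell yields per-individual evacuation times as a function of starting position, the quantity studied in the abstract.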
An Event-Driven Hybrid Molecular Dynamics and Direct Simulation Monte Carlo Algorithm
Donev, A; Garcia, A L; Alder, B J
2007-07-30
A novel algorithm is developed for the simulation of polymer chains suspended in a solvent. The polymers are represented as chains of hard spheres tethered by square wells and interact with the solvent particles with hard core potentials. The algorithm uses event-driven molecular dynamics (MD) for the simulation of the polymer chain and the interactions between the chain beads and the surrounding solvent particles. The interactions between the solvent particles themselves are not treated deterministically as in event-driven algorithms; rather, the momentum and energy exchange in the solvent is determined stochastically using the Direct Simulation Monte Carlo (DSMC) method. The coupling between the solvent and the solute is consistently represented at the particle level; however, unlike full MD simulations of both the solvent and the solute, the spatial structure of the solvent is ignored. The algorithm is described in detail and applied to the study of the dynamics of a polymer chain tethered to a hard wall subjected to uniform shear. The algorithm closely reproduces full MD simulations with two orders of magnitude greater efficiency. Results do not confirm the existence of periodic (cycling) motion of the polymer chain.
NEVESIM: event-driven neural simulation framework with a Python interface.
Pecevski, Dejan; Kappel, David; Jonke, Zeno
2014-01-01
NEVESIM is a software package for event-driven simulation of networks of spiking neurons with a fast simulation core in C++, and a scripting user interface in the Python programming language. It supports simulation of heterogeneous networks with different types of neurons and synapses, and can be easily extended by the user with new neuron and synapse types. To enable heterogeneous networks and extensibility, NEVESIM is designed to decouple the simulation logic of communicating events (spikes) between the neurons at a network level from the implementation of the internal dynamics of individual neurons. In this paper we will present the simulation framework of NEVESIM, its concepts and features, as well as some aspects of the object-oriented design approaches and simulation strategies that were utilized to efficiently implement the concepts and functionalities of the framework. We will also give an overview of the Python user interface, its basic commands and constructs, and also discuss the benefits of integrating NEVESIM with Python. One of the valuable capabilities of the simulator is to simulate exactly and efficiently networks of stochastic spiking neurons from the recently developed theoretical framework of neural sampling. This functionality was implemented as an extension on top of the basic NEVESIM framework. Altogether, the intended purpose of the NEVESIM framework is to provide a basis for further extensions that support simulation of various neural network models incorporating different neuron and synapse types that can potentially also use different simulation strategies. PMID:25177291
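The decoupling NEVESIM describes, network-level event routing separated from neuron internals, can be sketched as an engine that only pops timed spikes and asks each target object whether it fires. The neuron class and function names here are illustrative, not NEVESIM's API; any object with the same `receive` method plugs into the same engine.

```python
import heapq

class CounterNeuron:
    """Toy neuron model: fires after receiving `threshold` input spikes."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.count = 0
    def receive(self, t):
        self.count += 1
        if self.count >= self.threshold:
            self.count = 0
            return True                 # emit a spike
        return False

def run(neurons, synapses, stimuli, t_end):
    """Network-level engine: routes timed spike events, knows nothing
    about neuron internals. synapses: {source: [(target, delay), ...]};
    stimuli: [(time, target), ...] external spikes."""
    queue = list(stimuli)
    heapq.heapify(queue)
    fired = []
    while queue:
        t, i = heapq.heappop(queue)
        if t > t_end:
            break
        if neurons[i].receive(t):       # neuron decides whether it fires
            fired.append((t, i))
            for j, d in synapses.get(i, []):
                heapq.heappush(queue, (t + d, j))
    return fired
```

Extending the network with a new neuron type means adding a class with a `receive` method; the engine is untouched, which is the design property the abstract emphasizes.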
Self-Adaptive Event-Driven Simulation of Multi-Scale Plasma Systems
NASA Astrophysics Data System (ADS)
Omelchenko, Yuri; Karimabadi, Homayoun
2005-10-01
Multi-scale plasmas pose a formidable computational challenge. Explicit time-stepping models suffer from the global CFL restriction, and efficient application of adaptive mesh refinement (AMR) to systems with irregular dynamics (e.g. turbulence, diffusion-convection-reaction, particle acceleration, etc.) may be problematic. To address these issues, we developed an alternative approach to time stepping: self-adaptive discrete-event simulation (DES). DES has its origins in operations research, war games, and telecommunications. We combine finite-difference and particle-in-cell techniques with this methodology under two assumptions: (1) a local time increment dt for a discrete quantity f can be expressed in terms of a physically meaningful quantum value df; (2) f is considered to be modified only when its change exceeds df. Event-driven time integration is self-adaptive, as it makes use of causality rules rather than parametric time dependencies. This technique enables asynchronous, flux-conservative updating of the solution in accordance with local temporal scales, removes the curse of the global CFL condition, eliminates unnecessary computation in inactive spatial regions, and results in robust and fast parallelizable codes. It can be naturally combined with various mesh refinement techniques. We discuss applications of this novel technology to diffusion-convection-reaction systems and hybrid simulations of magnetosonic shocks.
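The quantized update rule in assumptions (1) and (2) amounts to scheduling each quantity's next event at the time its value will have drifted by one quantum df. A one-variable sketch for exponential decay df/dt = -rate*f (illustrative, not the authors' code):

```python
def des_decay(f0, rate, df, t_end):
    """Discrete-event integration of df/dt = -rate*f: instead of a fixed
    global time step, the variable schedules its own next event at the
    time its value will have changed by the quantum df."""
    t, f = 0.0, f0
    history = [(0.0, f0)]
    while True:
        slope = -rate * f
        if abs(slope) * (t_end - t) < df:    # change stays below one quantum
            break
        t += df / abs(slope)                 # local, self-adaptive increment
        f += df * (1 if slope > 0 else -1)   # quantized update
        history.append((t, f))
    return history
```

Note how the event spacing stretches automatically as f decays: fast-changing regions get many events, inactive ones get none, which is the CFL-free behavior the abstract claims.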
Comments on event driven animation
NASA Technical Reports Server (NTRS)
Gomez, Julian E.
1987-01-01
Event driven animation provides a general method of describing controlling values for various computer animation techniques. A definition is given, along with comments on generalizing motion description with events and on the implementation of twixt.
NASA Astrophysics Data System (ADS)
Kovalskyy, V.; Henebry, G. M.
2011-05-01
Phenologies of the vegetated land surface are being used increasingly for diagnosis and prognosis of climate change consequences. Current prospective and retrospective phenological models stand far apart in their approaches to the subject. We report on an exploratory attempt to implement a phenological model based on a new event driven concept which has both diagnostic and prognostic capabilities in the same modeling framework. This Event Driven Phenological Model (EDPM) is shown to simulate land surface phenologies and phenophase transition dates in agricultural landscapes based on assimilation of weather data and land surface observations from spaceborne sensors. The model enables growing season phenologies to develop in response to changing environmental conditions and disturbance events. It also has the ability to ingest remotely sensed data to adjust its output to improve representation of the modeled variable. We describe the model and report results of initial testing of the EDPM using Level 2 flux tower records from the AmeriFlux sites at Mead, Nebraska, USA, and at Bondville, Illinois, USA. Simulating the dynamics of normalized difference vegetation index based on flux tower data, the predictions by the EDPM show good agreement (RMSE < 0.08; r² > 0.8) for maize and soybean during several growing seasons at different locations. This study presents the EDPM used in the companion paper (Kovalskyy and Henebry, 2011) in a coupling scheme to estimate daily actual evapotranspiration over multiple growing seasons.
NASA Astrophysics Data System (ADS)
Kovalskyy, V.; Henebry, G. M.
2012-01-01
Phenologies of the vegetated land surface are being used increasingly for diagnosis and prognosis of climate change consequences. Current prospective and retrospective phenological models stand far apart in their approaches to the subject. We report on an exploratory attempt to implement a phenological model based on a new event driven concept which has both diagnostic and prognostic capabilities in the same modeling framework. This Event Driven Phenological Model (EDPM) is shown to simulate land surface phenologies and phenophase transition dates in agricultural landscapes based on assimilation of weather data and land surface observations from spaceborne sensors. The model enables growing season phenologies to develop in response to changing environmental conditions and disturbance events. It also has the ability to ingest remotely sensed data to adjust its output to improve representation of the modeled variable. We describe the model and report results of initial testing of the EDPM using Level 2 flux tower records from the AmeriFlux sites at Mead, Nebraska, USA, and at Bondville, Illinois, USA. Simulating the dynamics of normalized difference vegetation index based on flux tower data, the predictions by the EDPM show good agreement (RMSE < 0.08; r² > 0.8) for maize and soybean during several growing seasons at different locations. This study presents the EDPM used in the companion paper (Kovalskyy and Henebry, 2011) in a coupling scheme to estimate daily actual evapotranspiration over multiple growing seasons.
NASA Astrophysics Data System (ADS)
Valentini, Paolo; Schwartzentruber, Thomas E.
2009-12-01
A novel combined Event-Driven/Time-Driven (ED/TD) algorithm to speed up the Molecular Dynamics simulation of rarefied gases using realistic spherically symmetric soft potentials is presented. Due to the low density regime, the proposed method correctly identifies the time that must elapse before the next interaction occurs, similarly to Event-Driven Molecular Dynamics. However, each interaction is treated using Time-Driven Molecular Dynamics, thereby integrating Newton's Second Law using the sufficiently small time step needed to correctly resolve the atomic motion. Although infrequent, many-body interactions are also accounted for with a small approximation. The combined ED/TD method is shown to correctly reproduce translational relaxation in argon, described using the Lennard-Jones potential. For densities between ρ = 10⁻⁴ kg/m³ and ρ = 10⁻¹ kg/m³, comparisons with kinetic theory, Direct Simulation Monte Carlo, and pure Time-Driven Molecular Dynamics demonstrate that the ED/TD algorithm correctly reproduces the proper collision rates and the evolution toward thermal equilibrium. Finally, the combined ED/TD algorithm is applied to the simulation of a Mach 9 shock wave in rarefied argon. Density and temperature profiles as well as molecular velocity distributions accurately match DSMC results, and the shock thickness is within the experimental uncertainty. For the problems considered, the ED/TD algorithm ranged from several hundred to several thousand times faster than conventional Time-Driven MD. Moreover, the force calculation to integrate the molecular trajectories is found to contribute a negligible amount to the overall ED/TD simulation time. Therefore, this method could pave the way for the application of much more refined and expensive interatomic potentials, either classical or first-principles, to Molecular Dynamics simulations of shock waves in rarefied gases, involving vibrational nonequilibrium and chemical reactivity.
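The ED/TD idea, ballistic free flight up to the predicted encounter followed by small-step integration of the interaction, can be sketched for a single pair in one dimension. The linear repulsion and stiffness below are illustrative stand-ins for the spherically symmetric soft potentials used in the paper.

```python
def edtd_pair(x1, v1, x2, v2, r_cut=0.5, dt=1e-4, k=1000.0):
    """ED/TD sketch in 1-D: advance ballistically to the instant the
    separation shrinks to the cutoff r_cut (event-driven phase), then
    integrate a soft repulsion F = k*(r_cut - separation) with small
    steps until the pair separates again (time-driven phase)."""
    rel = v1 - v2
    gap = (x2 - x1) - r_cut
    if rel <= 0 or gap < 0:
        return x1, v1, x2, v2           # no encounter ahead
    t_free = gap / rel                  # event: time until contact at r_cut
    x1 += v1 * t_free
    x2 += v2 * t_free
    while x2 - x1 <= r_cut:             # time-driven interaction
        f = k * (r_cut - (x2 - x1))     # assumed linear soft repulsion
        v1 -= f * dt                    # equal and opposite kicks (unit mass)
        v2 += f * dt
        x1 += v1 * dt
        x2 += v2 * dt
    return x1, v1, x2, v2
```

For an equal-mass head-on encounter the sketch approximately exchanges the velocities, as an elastic 1-D collision must, while the expensive small-step integration is confined to the brief interaction window.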
NASA Astrophysics Data System (ADS)
Pavlou, L.; Georgoudas, I. G.; Sirakoulis, G. Ch.; Scordilis, E. M.; Andreadis, I.
This paper presents an extensive simulation tool based on a Cellular Automata (CA) system that models fundamental seismic characteristics of a region. The CA-based dynamic model consists of cells-charges and is used for the simulation of the earthquake process. The simulation tool has remarkably accelerated the response of the model by incorporating principles of High Performance Computing (HPC): extensive parallel-computing features have been applied, improving its processing effectiveness. The tool implements an enhanced (or hyper-) 2-dimensional version of the proposed CA model. Regional characteristics that depend on the seismic background of the area under study are assigned to the model through a user-friendly software environment. The model is evaluated with real data corresponding to a circular region around Skyros Island, Greece, for different time periods, for example a 45-year period (1901-1945). The enhanced 2-dimensional version of the model incorporates all principal characteristics of the 2-dimensional one, and also includes groups of CA cells that interact with other cells located at a considerable distance, in an attempt to simulate long-range interaction. The advanced simulation tool has been thoroughly evaluated: several measurements have been made for different critical states, for various cascade (earthquake) sizes, cell activities, and neighbourhood sizes. Simulation results qualitatively approach the Gutenberg-Richter (GR) scaling law and reveal fundamental characteristics of the system.
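The cells-charges cascade dynamic can be illustrated with a sandpile-style relaxation, a common abstraction in CA seismicity models. The threshold and the quarter-charge redistribution rule below are illustrative, not the paper's calibrated model.

```python
def relax(charge, threshold=4.0):
    """One cascade ('earthquake') on a 2-D grid of cell charges: any cell
    at or above threshold topples, passing a quarter of its charge to
    each 4-neighbour (charge leaks off the boundary). Returns the
    cascade size, i.e. the number of topplings."""
    rows, cols = len(charge), len(charge[0])
    size = 0
    unstable = [(r, c) for r in range(rows) for c in range(cols)
                if charge[r][c] >= threshold]
    while unstable:
        r, c = unstable.pop()
        if charge[r][c] < threshold:
            continue                      # already relaxed by a neighbour
        quarter = charge[r][c] / 4.0
        charge[r][c] = 0.0
        size += 1
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                charge[nr][nc] += quarter
                if charge[nr][nc] >= threshold:
                    unstable.append((nr, nc))
    return size
```

Driving such a grid slowly and recording cascade sizes is the standard way these models are compared against Gutenberg-Richter-like frequency-size statistics.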
Asynchronous Event-Driven Particle Algorithms
Donev, A
2007-02-28
We present in a unifying way the main components of three examples of asynchronous event-driven algorithms for simulating physical systems of interacting particles. The first example, hard-particle molecular dynamics (MD), is well-known. We also present a recently-developed diffusion kinetic Monte Carlo (DKMC) algorithm, as well as a novel event-driven algorithm for Direct Simulation Monte Carlo (DSMC). Finally, we describe how to combine MD with DSMC in an event-driven framework, and discuss some promises and challenges for event-driven simulation of realistic physical systems.
NASA Astrophysics Data System (ADS)
Bruzzone, Agostino G.; Revetria, Roberto; Simeoni, Simone; Viazzo, Simone; Orsoni, Alessandra
2004-08-01
In logistics and industrial production, managers must deal with the impact of stochastic events to improve performance and reduce costs. In fact, production and logistics systems are generally designed treating some parameters as deterministic. While this assumption is mostly used for preliminary prototyping, it is sometimes also retained during the final design stage, especially for estimated parameters (i.e., market request). The proposed methodology determines the impact of stochastic events on the system by evaluating the chaotic threshold level. Such an approach, based on the application of a new and innovative methodology, can be used to find the conditions under which chaos makes the system uncontrollable. Starting from problem identification and risk assessment, several classification techniques are used to carry out an effect analysis and contingency plan estimation. In this paper the authors illustrate the methodology with respect to a real industrial case: a production problem related to the logistics of distributed chemical processing.
Asynchronous Event-Driven Particle Algorithms
Donev, A
2007-08-30
We present, in a unifying way, the main components of three asynchronous event-driven algorithms for simulating physical systems of interacting particles. The first example, hard-particle molecular dynamics (MD), is well-known. We also present a recently-developed diffusion kinetic Monte Carlo (DKMC) algorithm, as well as a novel stochastic molecular-dynamics algorithm that builds on the Direct Simulation Monte Carlo (DSMC). We explain how to effectively combine event-driven and classical time-driven handling, and discuss some promises and challenges for event-driven simulation of realistic physical systems.
Naveros, Francisco; Luque, Niceto R; Garrido, Jesús A; Carrillo, Richard R; Anguita, Mancia; Ros, Eduardo
2015-07-01
Time-driven simulation methods in traditional CPU architectures perform well and precisely when simulating small-scale spiking neural networks. Nevertheless, they still have drawbacks when simulating large-scale systems. Conversely, event-driven simulation methods in CPUs and time-driven simulation methods in graphic processing units (GPUs) can outperform CPU time-driven methods under certain conditions. With this performance improvement in mind, we have developed an event-and-time-driven spiking neural network simulator suitable for a hybrid CPU-GPU platform. Our neural simulator is able to efficiently simulate bio-inspired spiking neural networks consisting of different neural models, which can be distributed heterogeneously in both small layers and large layers or subsystems. For the sake of efficiency, the low-activity parts of the neural network can be simulated in CPU using event-driven methods while the high-activity subsystems can be simulated in either CPU (a few neurons) or GPU (thousands or millions of neurons) using time-driven methods. In this brief, we have undertaken a comparative study of these different simulation methods. For benchmarking the different simulation methods and platforms, we have used a cerebellar-inspired neural-network model consisting of a very dense granular layer and a Purkinje layer with a smaller number of cells (according to biological ratios). Thus, this cerebellar-like network includes a dense diverging neural layer (increasing the dimensionality of its internal representation and sparse coding) and a converging neural layer (integration) similar to many other biologically inspired and also artificial neural networks.
Stochastic Event-Driven Molecular Dynamics
Donev, Aleksandar; Garcia, Alejandro L.; Alder, Berni J.
2008-02-01
A novel Stochastic Event-Driven Molecular Dynamics (SEDMD) algorithm is developed for the simulation of polymer chains suspended in a solvent. SEDMD combines event-driven molecular dynamics (EDMD) with the Direct Simulation Monte Carlo (DSMC) method. The polymers are represented as chains of hard-spheres tethered by square wells and interact with the solvent particles with hard-core potentials. The algorithm uses EDMD for the simulation of the polymer chain and the interactions between the chain beads and the surrounding solvent particles. The interactions between the solvent particles themselves are not treated deterministically as in EDMD; rather, the momentum and energy exchange in the solvent is determined stochastically using DSMC. The coupling between the solvent and the solute is consistently represented at the particle level, retaining hydrodynamic interactions and thermodynamic fluctuations. However, unlike full MD simulations of both the solvent and the solute, in SEDMD the spatial structure of the solvent is ignored. The SEDMD algorithm is described in detail and applied to the study of the dynamics of a polymer chain tethered to a hard wall subjected to uniform shear. SEDMD closely reproduces results obtained using traditional EDMD simulations with two orders of magnitude greater efficiency. Results question the existence of periodic (cycling) motion of the polymer chain.
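The event prediction at the heart of EDMD, and hence of SEDMD's deterministic part, is standard: the time at which a pair of hard spheres of diameter σ first touches is the smallest positive root of |Δr + Δv·t| = σ. A direct transcription:

```python
import math

def collision_time(r1, v1, r2, v2, sigma):
    """Time until two hard spheres of diameter sigma touch, or None.
    Solves |dr + dv*t| = sigma for the smallest positive root, the
    standard EDMD pair-event prediction."""
    dr = [b - a for a, b in zip(r1, r2)]
    dv = [b - a for a, b in zip(v1, v2)]
    b = sum(x * v for x, v in zip(dr, dv))   # dr . dv
    if b >= 0:
        return None                          # receding, no collision
    a = sum(v * v for v in dv)               # |dv|^2
    c = sum(x * x for x in dr) - sigma * sigma
    disc = b * b - a * c
    if disc < 0:
        return None                          # glancing miss
    return (-b - math.sqrt(disc)) / a
```

An EDMD engine keeps these predicted times in a priority queue and processes the earliest one; SEDMD uses the same machinery for bead-bead and bead-solvent pairs while handling solvent-solvent exchange stochastically via DSMC.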
Event-Driven Process Chains (EPC)
NASA Astrophysics Data System (ADS)
Mendling, Jan
This chapter provides a comprehensive overview of Event-driven Process Chains (EPCs) and introduces a novel definition of EPC semantics. EPCs became popular in the 1990s as a conceptual business process modeling language in the context of reference modeling. Reference modeling refers to the documentation of generic business operations in a model such as service processes in the telecommunications sector, for example. It is claimed that reference models can be reused and adapted as best-practice recommendations in individual companies (see [230, 168, 229, 131, 400, 401, 446, 127, 362, 126]). The roots of reference modeling can be traced back to the Kölner Integrationsmodell (KIM) [146, 147] that was developed in the 1960s and 1970s. In the 1990s, the Institute of Information Systems (IWi) in Saarbrücken worked on a project with SAP to define a suitable business process modeling language to document the processes of the SAP R/3 enterprise resource planning system. There were two results from this joint effort: the definition of EPCs [210] and the documentation of the SAP system in the SAP Reference Model (see [92, 211]). The extensive database of this reference model contains almost 10,000 sub-models: 604 of them non-trivial EPC business process models. The SAP Reference model had a huge impact with several researchers referring to it in their publications (see [473, 235, 127, 362, 281, 427, 415]) as well as motivating the creation of EPC reference models in further domains including computer integrated manufacturing [377, 379], logistics [229] or retail [52]. The wide-spread application of EPCs in business process modeling theory and practice is supported by their coverage in seminal text books for business process management and information systems in general (see [378, 380, 49, 384, 167, 240]). EPCs are frequently used in practice due to a high user acceptance [376] and extensive tool support. Some examples of tools that support EPCs are ARIS Toolset by IDS
The three-dimensional Event-Driven Graphics Environment (3D-EDGE)
NASA Technical Reports Server (NTRS)
Freedman, Jeffrey; Hahn, Roger; Schwartz, David M.
1993-01-01
Stanford Telecom developed the Three-Dimensional Event-Driven Graphics Environment (3D-EDGE) for NASA GSFC's (GSFC) Communications Link Analysis and Simulation System (CLASS). 3D-EDGE consists of a library of object-oriented subroutines which allow engineers with little or no computer graphics experience to programmatically manipulate, render, animate, and access complex three-dimensional objects.
Feasibility study for a generalized gate logic software simulator
NASA Technical Reports Server (NTRS)
Mcgough, J. G.
1983-01-01
Unit-delay simulation, event driven simulation, zero-delay simulation, simulation techniques, 2-valued versus multivalued logic, network initialization, gate operations and alternate network representations, parallel versus serial mode simulation, fault modelling, extension to multiprocessor systems, and simulation timing are discussed. Functional level networks, gate equivalent circuits, the prototype BDX-930 network model, fault models, and identifying detected faults for BGLOSS are discussed. Preprocessor tasks, postprocessor tasks, executive tasks, and a library of bliss-coded macros for GGLOSS are also discussed.
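The event driven, unit-delay scheme the report compares against zero-delay and time-stepped simulation follows the classic pattern: a gate is re-evaluated only when one of its input nets changes, and a changed output is scheduled one delay unit later. A compact sketch (the data layout is illustrative, not the report's):

```python
import heapq

def simulate(gates, values, stimuli):
    """Unit-delay event-driven logic simulation.
    gates: {out_net: (bool_func, [in_nets])}; values: current net values;
    stimuli: [(time, net, value), ...] external input changes.
    Returns the trace of net changes as (time, net, value) tuples."""
    fanout = {}
    for out, (_, ins) in gates.items():
        for n in ins:
            fanout.setdefault(n, []).append(out)
    queue = list(stimuli)
    heapq.heapify(queue)
    trace = []
    while queue:
        t, net, val = heapq.heappop(queue)
        if values.get(net) == val:
            continue                     # no change, so no new events
        values[net] = val
        trace.append((t, net, val))
        for out in fanout.get(net, []):  # re-evaluate affected gates only
            f, ins = gates[out]
            heapq.heappush(queue, (t + 1, out, f(*(values[n] for n in ins))))
    return trace
```

Because unchanged nets generate no events, quiescent parts of the network cost nothing, which is the usual argument for event driven over fixed-step gate simulation.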
Two-ball problem revisited: limitations of event-driven modeling.
Müller, Patric; Pöschel, Thorsten
2011-04-01
The main precondition of simulating systems of hard particles by means of event-driven modeling is the assumption of instantaneous collisions. The aim of this paper is to quantify the deviation of event-driven modeling from the solution of Newton's equation of motion using a paradigmatic example: If a tennis ball is held above a basketball with their centers vertically aligned, and the balls are released to collide with the floor, the tennis ball may rebound at a surprisingly high speed. We show in this article that the simple textbook explanation of this effect is an oversimplification, even for the limit of perfectly elastic particles. Instead, there may occur a rather complex scenario including multiple collisions which may lead to a very different final velocity as compared with the velocity resulting from the oversimplified model. PMID:21599150
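The "simple textbook explanation" the paper critiques is the event-driven idealization itself: the lower ball first rebounds elastically from the floor, then exchanges momentum with the upper ball in a single instantaneous elastic collision. In the limit of a light upper ball this predicts a rebound at three times the impact speed, which is exactly the prediction the paper shows can break down:

```python
def sequential_rebound(v, m_top, m_bottom):
    """Event-driven textbook model of the two-ball drop: the bottom ball
    rebounds elastically from the floor (speed +v), then collides
    elastically with the still-falling top ball (speed -v).
    Returns the top ball's post-collision velocity (upward positive)."""
    u_bottom = +v                    # bottom ball after bouncing off the floor
    u_top = -v                       # top ball still falling
    m, M = m_top, m_bottom
    # standard 1-D elastic collision formula for the top ball
    return ((m - M) * u_top + 2 * M * u_bottom) / (m + M)
```

The paper's point is that this two-instantaneous-collisions picture is an oversimplification: resolving the finite-duration contacts can produce multiple interleaved collisions and a very different final velocity, even for perfectly elastic balls.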
Two-ball problem revisited: Limitations of event-driven modeling
NASA Astrophysics Data System (ADS)
Müller, Patric; Pöschel, Thorsten
2011-04-01
The main precondition of simulating systems of hard particles by means of event-driven modeling is the assumption of instantaneous collisions. The aim of this paper is to quantify the deviation of event-driven modeling from the solution of Newton’s equation of motion using a paradigmatic example: If a tennis ball is held above a basketball with their centers vertically aligned, and the balls are released to collide with the floor, the tennis ball may rebound at a surprisingly high speed. We show in this article that the simple textbook explanation of this effect is an oversimplification, even for the limit of perfectly elastic particles. Instead, there may occur a rather complex scenario including multiple collisions which may lead to a very different final velocity as compared with the velocity resulting from the oversimplified model.
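The textbook analysis that the paper identifies as an oversimplification treats the bounce as two sequential, instantaneous elastic collisions. A minimal sketch of that event-driven idealization (the masses and speed below are illustrative, not from the paper):

```python
def elastic_1d(m1, v1, m2, v2):
    """Post-collision velocities for a 1-D perfectly elastic collision."""
    v1p = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2p = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1p, v2p

def tennis_rebound(M, m, v):
    """Event-driven idealization: the basketball (mass M) first bounces
    elastically off the floor (-v -> +v), then immediately collides with
    the tennis ball (mass m), which is still falling at -v."""
    _, vt = elastic_1d(M, v, m, -v)
    return vt  # = v * (3*M - m) / (M + m), -> 3*v as m/M -> 0

# Illustrative masses: the tennis ball rebounds at nearly 3x its impact speed.
speedup = tennis_rebound(M=0.6, m=0.06, v=1.0)
```

The paper's point is that this two-collision picture can fail: resolving Newton's equations with finite-stiffness contacts can produce several overlapping collisions and a quite different final velocity.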
General Data Simulation Program.
ERIC Educational Resources Information Center
Burns, Edward
Described is a computer program written in FORTRAN IV which offers considerable flexibility in generating simulated data pertinent to education and educational psychology. The user is allowed to specify the number of samples, data sets, and variables, together with the population means, standard deviations and intercorrelations. In addition the…
Event management for large scale event-driven digital hardware spiking neural networks.
Caron, Louis-Charles; D'Haene, Michiel; Mailhot, Frédéric; Schrauwen, Benjamin; Rouat, Jean
2013-09-01
The interest in brain-like computation has led to the design of a plethora of innovative neuromorphic systems. Individually, spiking neural networks (SNNs), event-driven simulation and digital hardware neuromorphic systems get a lot of attention. Despite the popularity of event-driven SNNs in software, very few digital hardware architectures are found. This is because existing hardware solutions for event management scale badly with the number of events. This paper introduces the structured heap queue, a pipelined digital hardware data structure, and demonstrates its suitability for event management. The structured heap queue scales gracefully with the number of events, allowing the efficient implementation of large scale digital hardware event-driven SNNs. The scaling is linear for memory, logarithmic for logic resources and constant for processing time. The use of the structured heap queue is demonstrated on a field-programmable gate array (FPGA) with an image segmentation experiment and an SNN of 65,536 neurons and 513,184 synapses. Events can be processed at the rate of 1 every 7 clock cycles and a 406×158 pixel image is segmented in 200 ms.
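The ordering contract such an event-management structure must provide can be illustrated in software with a binary heap; this is only a functional sketch of the priority-queue behavior, not the paper's pipelined hardware design:

```python
import heapq

class EventQueue:
    """Software analogue of a hardware event queue: events are
    (time, neuron_id) pairs, always delivered in time order."""
    def __init__(self):
        self._heap = []

    def push(self, time, neuron_id):
        heapq.heappush(self._heap, (time, neuron_id))

    def pop(self):
        return heapq.heappop(self._heap)  # earliest event first

    def __len__(self):
        return len(self._heap)

q = EventQueue()
for t, n in [(5.0, 2), (1.0, 7), (3.0, 1)]:
    q.push(t, n)
order = [q.pop() for _ in range(len(q))]  # time-ordered delivery
```

A software heap costs O(log n) per operation; the pipelined hardware structure additionally achieves constant processing time per event, which is the paper's contribution.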
Intelligent fuzzy controller for event-driven real time systems
NASA Technical Reports Server (NTRS)
Grantner, Janos; Patyra, Marek; Stachowicz, Marian S.
1992-01-01
Most of the known linguistic models are essentially static, that is, time is not a parameter in describing the behavior of the object's model. In this paper we show a model for synchronous finite state machines based on fuzzy logic. Such finite state machines can be used to build both event-driven, time-varying, rule-based systems and the control unit section of a fuzzy logic computer. The architecture of a pipelined intelligent fuzzy controller is presented, and the linguistic model is represented by an overall fuzzy relation stored in a single rule memory. A VLSI integrated circuit implementation of the fuzzy controller is suggested. At a clock rate of 30 MHz, the controller can perform 3 MFLIPS on multi-dimensional fuzzy data.
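Storing the linguistic model as "an overall fuzzy relation in a single rule memory" suggests a state update by max-min composition; the following is an illustrative sketch of that operation only, not the paper's pipelined VLSI design (the state and relation values are made up):

```python
def max_min_compose(state, relation):
    """One step of a fuzzy finite state machine: the next fuzzy state
    vector is the max-min composition of the current state vector with
    the overall fuzzy relation (the rule memory)."""
    n = len(relation[0])
    return [max(min(state[i], relation[i][j]) for i in range(len(state)))
            for j in range(n)]

state = [1.0, 0.2]          # membership degrees over two states
relation = [[0.8, 0.3],     # fuzzy transition relation (rule memory)
            [0.5, 0.9]]
next_state = max_min_compose(state, relation)
```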
A Full Parallel Event Driven Readout Technique for Area Array SPAD FLIM Image Sensors
Nie, Kaiming; Wang, Xinlei; Qiao, Jun; Xu, Jiangtao
2016-01-01
This paper presents a full parallel event-driven readout method implemented in an area array single-photon avalanche diode (SPAD) image sensor for high-speed fluorescence lifetime imaging microscopy (FLIM). The sensor records and reads out only effective time and position information, with the aim of reducing the amount of data. The image sensor includes four 8 × 8 pixel arrays. In each array, four time-to-digital converters (TDCs) quantize photon arrival times, and two address record modules record the column and row information. In this work, Monte Carlo simulations were performed in Matlab to assess the pile-up effect induced by the readout method. The sensor's resolution is 16 × 16. The time resolution of the TDCs is 97.6 ps and the quantization range is 100 ns. The readout frame rate is 10 Mfps, and the maximum imaging frame rate is 100 fps. The chip's output bandwidth is 720 MHz with an average power of 15 mW. The lifetime resolvability range is 5–20 ns, and the average error of the estimated fluorescence lifetimes is below 1% when the center-of-mass method (CMM) is employed to estimate lifetimes. PMID:26828490
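Assuming CMM refers to the common center-of-mass method for lifetime estimation, the estimator for a mono-exponential decay reduces to the mean photon arrival time when the measurement window is much longer than the lifetime; this sketch ignores the finite-window and pile-up corrections a real FLIM system needs:

```python
import random

def cmm_lifetime(arrival_times):
    """Centre-of-mass method (CMM), idealized: for a mono-exponential
    decay measured over a window much longer than the lifetime, the
    mean photon arrival time estimates the fluorescence lifetime."""
    return sum(arrival_times) / len(arrival_times)

random.seed(0)
tau_true = 10.0  # ns, inside the paper's 5-20 ns resolvability range
photons = [random.expovariate(1.0 / tau_true) for _ in range(100_000)]
tau_est = cmm_lifetime(photons)  # close to tau_true
```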
Event-Driven Random-Access-Windowing CCD Imaging System
NASA Technical Reports Server (NTRS)
Monacos, Steve; Portillo, Angel; Ortiz, Gerardo; Alexander, James; Lam, Raymond; Liu, William
2004-01-01
A charge-coupled-device (CCD) based high-speed imaging system, called a random-access, real-time, event-driven (RARE) camera, is undergoing development. This camera is capable of readout from multiple subwindows [also known as regions of interest (ROIs)] within the CCD field of view. Both the sizes and the locations of the ROIs can be controlled in real time and can be changed at the camera frame rate. The predecessor of this camera was described in High-Frame-Rate CCD Camera Having Subwindow Capability (NPO-30564) NASA Tech Briefs, Vol. 26, No. 12 (December 2002), page 26. The architecture of the prior camera requires tight coupling between camera control logic and an external host computer that provides commands for camera operation and processes pixels from the camera. This tight coupling limits the attainable frame rate and functionality of the camera. The design of the present camera loosens this coupling to increase the achievable frame rate and functionality. From a host computer perspective, the readout operation in the prior camera was defined on a per-line basis; in this camera, it is defined on a per-ROI basis. In addition, the camera includes internal timing circuitry. This combination of features enables real-time, event-driven operation for adaptive control of the camera. Hence, this camera is well suited for applications requiring autonomous control of multiple ROIs to track multiple targets moving throughout the CCD field of view. Additionally, by eliminating the need for control intervention by the host computer during the pixel readout, the present design reduces ROI-readout times to attain higher frame rates. This camera (see figure) includes an imager card consisting of a commercial CCD imager and two signal-processor chips. The imager card converts transistor/transistor-logic (TTL)-level signals from a field programmable gate array (FPGA) controller card. These signals are transmitted to the imager card via a low-voltage differential signaling (LVDS) cable
Event Driven Messaging with Role-Based Subscriptions
NASA Technical Reports Server (NTRS)
Bui, Tung; Bui, Bach; Malhotra, Shantanu; Chen, Fannie; Kim, Rachel; Allen, Christopher; Luong, Ivy; Chang, George; Zendejas, Silvino; Sadaqathulla, Syed
2009-01-01
Event Driven Messaging with Role-Based Subscriptions (EDM-RBS) is a framework integrated into the Service Management Database (SMDB) to allow for role-based and subscription-based delivery of synchronous and asynchronous messages over JMS (Java Messaging Service), SMTP (Simple Mail Transfer Protocol), or SMS (Short Messaging Service). This allows for 24/7 operation with users in all parts of the world. The software classifies messages by triggering data type, application source, owner of the data triggering the event (mission), classification, sub-classification, and various other secondary classifying tags. Messages are routed to applications or users based on subscription rules using a combination of the above message attributes. This program provides a framework for identifying connected users and their applications for targeted delivery of messages over JMS to the client applications the user is logged into. EDM-RBS provides the ability to send notifications over e-mail or pager rather than having to rely on a live human to do it. It is implemented as an Oracle application that uses Oracle relational database management system intrinsic functions. It is configurable to use the Oracle AQ JMS API or an external JMS provider for messaging. It fully integrates into the event-logging framework of the SMDB (Service Management Database).
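The attribute-based routing described above can be sketched as matching each message's classifying tags against subscription rules; the attribute names and subscriber ids below are hypothetical, not taken from EDM-RBS:

```python
def match(subscription, message):
    """A subscription matches when every attribute it constrains
    equals the corresponding message attribute."""
    return all(message.get(k) == v for k, v in subscription.items())

def route(subscriptions, message):
    """Return the ids of all subscribers whose rules match the message."""
    return [sid for sid, rule in subscriptions.items() if match(rule, message)]

# Hypothetical subscription rules keyed by subscriber id.
subs = {
    "ops-pager":   {"classification": "alarm"},
    "mission-log": {"mission": "MRO"},
    "dba-email":   {"classification": "alarm", "source": "SMDB"},
}
msg = {"mission": "MRO", "classification": "alarm", "source": "SMDB"}
recipients = route(subs, msg)  # all three rules match this message
```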
Event-driven contrastive divergence for spiking neuromorphic systems
Neftci, Emre; Das, Srinjoy; Pedroni, Bruno; Kreutz-Delgado, Kenneth; Cauwenberghs, Gert
2014-01-01
Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been demonstrated to perform efficiently in a variety of applications, such as dimensionality reduction, feature learning, and classification. Their implementation on neuromorphic hardware platforms emulating large-scale networks of spiking neurons can have significant advantages from the perspectives of scalability, power dissipation and real-time interfacing with the environment. However, the traditional RBM architecture and the commonly used training algorithm known as Contrastive Divergence (CD) are based on discrete updates and exact arithmetic, which do not directly map onto a dynamical neural substrate. Here, we present an event-driven variation of CD to train an RBM constructed with Integrate & Fire (I&F) neurons that is constrained by the limitations of existing and near-future neuromorphic hardware platforms. Our strategy is based on neural sampling, which allows us to synthesize a spiking neural network that samples from a target Boltzmann distribution. The recurrent activity of the network replaces the discrete steps of the CD algorithm, while Spike Time Dependent Plasticity (STDP) carries out the weight updates in an online, asynchronous fashion. We demonstrate our approach by training an RBM composed of leaky I&F neurons with STDP synapses to learn a generative model of the MNIST hand-written digit dataset, and by testing it in recognition, generation and cue integration tasks. Our results contribute to a machine learning-driven approach for synthesizing networks of spiking neurons capable of carrying out practical, high-level functionality. PMID:24574952
General Reactive Atomistic Simulation Program
Thompson, Aidan P.
2004-09-22
GRASP (General Reactive Atomistic Simulation Program) is primarily intended as a molecular dynamics package for complex force fields. The code is designed to provide good performance for large systems, in either parallel or serial execution mode. The primary purpose of the code is to realistically represent the structural and dynamic properties of a large number of atoms on timescales ranging from picoseconds up to a microsecond. Typically the atoms form a representative sample of some material, such as an interface between polycrystalline silicon and amorphous silica. GRASP differs from other parallel molecular dynamics codes primarily in its ability to handle relatively complicated interaction potentials and its ability to use more than one interaction potential in a single simulation. Most of the computational effort goes into the calculation of interatomic forces, which depend in a complicated way on the positions of all the atoms. The forces are used to integrate the equations of motion forward in time using the velocity Verlet integration scheme. Alternatively, the forces can be used to find a minimum-energy configuration, in which case a modified steepest descent algorithm is used.
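The velocity Verlet scheme mentioned above advances positions with the current acceleration, re-evaluates the forces, then advances velocities with the average of the old and new accelerations. A minimal one-particle sketch, checked against a harmonic oscillator:

```python
def velocity_verlet(x, v, force, m, dt, steps):
    """Advance (x, v) with the velocity Verlet scheme."""
    a = force(x) / m
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt * dt   # position update
        a_new = force(x) / m              # single new force evaluation
        v += 0.5 * (a + a_new) * dt       # velocity update with averaged a
        a = a_new
    return x, v

# Check against a harmonic oscillator (k = m = 1): x(t) = cos(t).
x, v = velocity_verlet(1.0, 0.0, lambda q: -q, 1.0, 0.001, 6283)  # t ~ 2*pi
```

The scheme is time-reversible and conserves energy well over long runs, which is why it is the default in most MD codes.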
Pérez-Carrasco, José Antonio; Zhao, Bo; Serrano, Carmen; Acha, Begoña; Serrano-Gotarredona, Teresa; Chen, Shouchun; Linares-Barranco, Bernabé
2013-11-01
Event-driven visual sensors have attracted interest from a number of different research communities. They provide visual information in quite a different way from conventional video systems consisting of sequences of still images rendered at a given "frame rate." Event-driven vision sensors take inspiration from biology. Each pixel sends out an event (spike) when it senses something meaningful is happening, without any notion of a frame. A special type of event-driven sensor is the so-called dynamic vision sensor (DVS) where each pixel computes relative changes of light or "temporal contrast." The sensor output consists of a continuous flow of pixel events that represent the moving objects in the scene. Pixel events become available with microsecond delays with respect to "reality." These events can be processed "as they flow" by a cascade of event (convolution) processors. As a result, input and output event flows are practically coincident in time, and objects can be recognized as soon as the sensor provides enough meaningful events. In this paper, we present a methodology for mapping from a properly trained neural network in a conventional frame-driven representation to an event-driven representation. The method is illustrated by studying event-driven convolutional neural networks (ConvNet) trained to recognize rotating human silhouettes or high speed poker card symbols. The event-driven ConvNet is fed with recordings obtained from a real DVS camera. The event-driven ConvNet is simulated with a dedicated event-driven simulator and consists of a number of event-driven processing modules, the characteristics of which are obtained from individually manufactured hardware modules. PMID:24051730
Perez-Carrasco, J A; Zhao, B; Serrano, C; Acha, B; Serrano-Gotarredona, T; Chen, S; Linares-Barranco, B
2013-04-10
Event-driven visual sensors have attracted interest from a number of different research communities. They provide visual information in quite a different way from conventional video systems consisting of sequences of still images rendered at a given “frame rate”. Event-driven vision sensors take inspiration from biology. A special type of event-driven sensor is the so-called Dynamic Vision Sensor (DVS), where each pixel computes relative changes of light, or “temporal contrast”. Pixel events become available with microsecond delays with respect to “reality”. These events can be processed “as they flow” by a cascade of event (convolution) processors. As a result, input and output event flows are practically coincident, and objects can be recognized as soon as the sensor provides enough meaningful events. In this paper we present a methodology for mapping from a properly trained neural network in a conventional frame-driven representation to an event-driven representation. The method is illustrated by studying event-driven Convolutional Neural Networks (ConvNets) trained to recognize rotating human silhouettes or high-speed poker card symbols. The event-driven ConvNet is fed with recordings obtained from a real DVS camera. The event-driven ConvNet is simulated with a dedicated event-driven simulator, and consists of a number of event-driven processing modules, the characteristics of which are obtained from individually manufactured hardware modules. PMID:23589589
Event-driven management algorithm of an Engineering documents circulation system
NASA Astrophysics Data System (ADS)
Kuzenkov, V.; Zebzeev, A.; Gromakov, E.
2015-04-01
A development methodology for an engineering document circulation system in a design company is reviewed. Discrete event-driven automata models for describing project-management algorithms are proposed, and the use of Petri nets for the dynamic design of projects is suggested.
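The Petri-net dynamics referred to above reduce to a simple firing rule: a transition is enabled when its input places hold enough tokens, and firing moves tokens from input to output places. An illustrative sketch with a hypothetical document-circulation transition:

```python
def enabled(marking, pre):
    """A transition is enabled when every input place holds at least
    as many tokens as the arc weight requires."""
    return all(marking.get(p, 0) >= w for p, w in pre.items())

def fire(marking, pre, post):
    """Fire a transition: consume tokens from input places, produce
    tokens in output places. Returns a new marking."""
    if not enabled(marking, pre):
        raise ValueError("transition not enabled")
    m = dict(marking)
    for p, w in pre.items():
        m[p] -= w
    for p, w in post.items():
        m[p] = m.get(p, 0) + w
    return m

# Hypothetical 'review' transition: a document moves from drafted to approved.
m0 = {"drafted": 1, "approved": 0}
m1 = fire(m0, pre={"drafted": 1}, post={"approved": 1})
```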
Liu, Yiqi; Ganigué, Ramon; Sharma, Keshab; Yuan, Zhiguo
2016-07-01
Chemicals such as Mg(OH)2 and iron salts are widely dosed to sewage for mitigating sulfide-induced corrosion and odour problems in sewer networks. The chemical dosing rate is usually not automatically controlled but profiled based on experience of operators, often resulting in over- or under-dosing. Even though on-line control algorithms for chemical dosing in single pipes have been developed recently, network-wide control algorithms are currently not available. The key challenge is that a sewer network is typically wide-spread comprising many interconnected sewer pipes and pumping stations, making network-wide sulfide mitigation with a relatively limited number of dosing points challenging. In this paper, we propose and demonstrate an Event-driven Model Predictive Control (EMPC) methodology, which controls the flows of sewage streams containing the dosed chemical to ensure desirable distribution of the dosed chemical throughout the pipe sections of interests. First of all, a network-state model is proposed to predict the chemical concentration in a network. An EMPC algorithm is then designed to coordinate sewage pumping station operations to ensure desirable chemical distribution in the network. The performance of the proposed control methodology is demonstrated by applying the designed algorithm to a real sewer network simulated with the well-established SeweX model using real sewage flow and characteristics data. The EMPC strategy significantly improved the sulfide mitigation performance with the same chemical consumption, compared to the current practice.
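The receding-horizon logic of an MPC controller like the one described can be sketched with a toy first-order mixing model and exhaustive search over on/off pump actions; the model, gains, and horizon below are invented for illustration and bear no relation to the SeweX model:

```python
from itertools import product

def empc_step(c0, target, horizon, decay=0.8, dose_gain=1.0):
    """One brute-force model predictive step: enumerate on/off pumping
    sequences over the horizon, score each with a first-order mixing
    model (hypothetical), and apply only the first action of the best."""
    best_seq, best_cost = None, float("inf")
    for seq in product((0, 1), repeat=horizon):
        c, cost = c0, 0.0
        for u in seq:
            c = decay * c + dose_gain * u       # predicted concentration
            cost += (c - target) ** 2           # squared tracking error
        if cost < best_cost:
            best_seq, best_cost = seq, cost
    return best_seq[0]

# Under-dosed network: the controller switches the dosing pump on.
action = empc_step(c0=0.0, target=2.0, horizon=4)
```

A real EMPC solves this optimization only when triggered by an event (e.g., a pump start) rather than at every sampling instant, and over a calibrated network model.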
Field Evaluation of a General Purpose Simulator.
ERIC Educational Resources Information Center
Spangenberg, Ronald W.
The use of a general purpose simulator (GPS) to teach Air Force technicians diagnostic and repair procedures for specialized aircraft radar systems is described. An EC II simulator manufactured by Educational Computer Corporation was adapted to resemble the actual configuration technicians would encounter in the field. Data acquired in the…
Simulations in generalized ensembles through noninstantaneous switches
NASA Astrophysics Data System (ADS)
Giovannelli, Edoardo; Cardini, Gianni; Chelli, Riccardo
2015-10-01
Generalized-ensemble simulations, such as replica exchange and serial generalized-ensemble methods, are powerful simulation tools to enhance sampling of free energy landscapes in systems with high energy barriers. In these methods, sampling is enhanced through instantaneous transitions of replicas, i.e., copies of the system, between different ensembles characterized by some control parameter associated with thermodynamical variables (e.g., temperature or pressure) or collective mechanical variables (e.g., interatomic distances or torsional angles). An interesting evolution of these methodologies has been proposed by replacing the conventional instantaneous (trial) switches of replicas with noninstantaneous switches, realized by varying the control parameter in a finite time and accepting the final replica configuration with a Metropolis-like criterion based on the Crooks nonequilibrium work (CNW) theorem. Here we revise these techniques focusing on their correlation with the CNW theorem in the framework of Markovian processes. An outcome of this report is the derivation of the acceptance probability for noninstantaneous switches in serial generalized-ensemble simulations, where we show that explicit knowledge of the time dependence of the weight factors entering such simulations is not necessary. A generalized relationship of the CNW theorem is also provided in terms of the underlying equilibrium probability distribution at a fixed control parameter. Illustrative calculations on a toy model are performed with serial generalized-ensemble simulations, especially focusing on the different behavior of instantaneous and noninstantaneous replica transition schemes.
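The Metropolis-like criterion based on the Crooks nonequilibrium work theorem accepts a finite-time switch with probability min(1, e^(-βW)); the sketch below shows only this bare acceptance step and omits the ensemble weight factors discussed in the paper:

```python
import math
import random

def accept_switch(work, beta, rng=random.random):
    """Metropolis-like criterion for a noninstantaneous replica switch:
    accept with probability min(1, exp(-beta * W)), where W is the
    nonequilibrium work accumulated while the control parameter is
    varied in finite time. (Weight factors are omitted here.)"""
    return rng() < min(1.0, math.exp(-beta * work))

# Negative work is always accepted; large positive work almost never is.
always = accept_switch(-1.0, beta=1.0, rng=lambda: 0.999)
```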
Connection between Newtonian simulations and general relativity
Chisari, Nora Elisa; Zaldarriaga, Matias
2011-06-15
On large scales, comparable to the horizon, the observable clustering properties of galaxies are affected by various general relativistic effects. To calculate these effects one needs to consistently solve for the metric, densities, and velocities in a specific coordinate system or gauge. The method of choice for simulating large-scale structure is numerical N-body simulations which are performed in the Newtonian limit. Even though one might worry that the use of the Newtonian approximation would make it impossible to use these simulations to compute properties on very large scales, we show that the simulations are still solving the dynamics correctly even for long modes and we give formulas to obtain the position of particles in the conformal Newtonian gauge given the positions computed in the simulation. We also give formulas to convert from the output coordinates of N-body simulations to the observable coordinates of the particles.
ERIC Educational Resources Information Center
Lynch, John Kenneth
2013-01-01
Using an exploratory model of the 9/11 terrorists, this research investigates the linkages between Event Driven Business Process Management (edBPM) and decision making. Although the literature on the role of technology in efficient and effective decision making is extensive, research has yet to quantify the benefit of using edBPM to aid the…
A general software reliability process simulation technique
NASA Technical Reports Server (NTRS)
Tausworthe, Robert C.
1991-01-01
The structure and rationale of the generalized software reliability process, together with the design and implementation of a computer program that simulates this process are described. Given assumed parameters of a particular project, the users of this program are able to generate simulated status timelines of work products, numbers of injected anomalies, and the progress of testing, fault isolation, repair, validation, and retest. Such timelines are useful in comparison with actual timeline data, for validating the project input parameters, and for providing data for researchers in reliability prediction modeling.
Design of an Event-Driven, Random-Access, Windowing CCD-Based Camera
NASA Astrophysics Data System (ADS)
Monacos, S. P.; Lam, R. K.; Portillo, A. A.; Zhu, D. Q.; Ortiz, G. G.
2003-11-01
Commercially available cameras are not designed for a combination of single-frame and high-speed streaming digital video with real-time control of size and location of multiple regions-of-interest (ROIs). A message-passing paradigm is defined to achieve low-level camera control with high-level system operation. This functionality is achieved by asynchronously sending messages to the camera for event-driven operation, where an event is defined as image capture or pixel readout of a ROI, without knowledge of detailed in-camera timing. This methodology provides a random access, real-time, event-driven (RARE) camera for adaptive camera control and is well suited for target-tracking applications requiring autonomous control of multiple ROIs. This methodology additionally provides for reduced ROI readout time and higher frame rates as compared to a predecessor architecture [1] by avoiding external control intervention during the ROI readout process.
Spectral Methods in General Relativistic MHD Simulations
NASA Astrophysics Data System (ADS)
Garrison, David
2012-03-01
In this talk I discuss the use of spectral methods in improving the accuracy of a General Relativistic Magnetohydrodynamic (GRMHD) computer code. I introduce SpecCosmo, a GRMHD code developed as a Cactus arrangement at UHCL, and show simulation results using both Fourier spectral methods and finite differencing. This work demonstrates the use of spectral methods with the FFTW 3.3 Fast Fourier Transform package integrated with the Cactus Framework to perform spectral differencing using MPI.
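Spectral differencing, as contrasted with finite differencing above, computes derivatives by multiplying Fourier coefficients by ik; a minimal periodic 1-D sketch (this uses NumPy's FFT rather than FFTW, purely for illustration):

```python
import numpy as np

def spectral_derivative(f, length):
    """Differentiate a periodic sample by multiplying its Fourier
    coefficients by i*k, then transforming back."""
    n = f.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=length / n)  # angular wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

# d/dx sin(x) = cos(x), recovered to near machine precision.
x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
df = spectral_derivative(np.sin(x), length=2.0 * np.pi)
```

For smooth periodic fields the error decays faster than any power of the grid spacing, which is the accuracy advantage over finite differencing that the talk exploits.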
Simulation of General Physics laboratory exercise
NASA Astrophysics Data System (ADS)
Aceituno, P.; Hernández-Aceituno, J.; Hernández-Cabrera, A.
2015-01-01
Laboratory exercises are an important part of general Physics teaching, both during the last years of high school and the first year of college education. Given the need to acquire enough laboratory equipment for all the students, and the widespread access to computer rooms in teaching, we propose the development of computer-simulated laboratory exercises. A representative exercise in general Physics is the calculation of the gravity acceleration value through the free-fall motion of a metal ball. Using a model of the real exercise, we have developed an interactive system which allows students to alter the starting height of the ball to obtain different fall times. The simulation was programmed in ActionScript 3, so that it can be freely executed on any operating system; to ensure the accuracy of the calculations, all the input parameters of the simulations were modelled using digital measurement units, and to allow statistical treatment of the resulting data, measurement errors are simulated through limited randomization.
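The exercise described can be modelled end to end: simulate noisy fall times from h = ½gt², then recover g by a least-squares fit through the origin. The heights and noise level below are illustrative:

```python
import random

def fall_time(height, g=9.81, noise=0.01, rng=random):
    """Simulated measurement: t = sqrt(2h/g) plus Gaussian timing error."""
    return (2.0 * height / g) ** 0.5 + rng.gauss(0.0, noise)

def estimate_g(heights, times):
    """Least-squares fit of h = (g/2) t^2 through the origin:
    g/2 = sum(h*t^2) / sum(t^4)."""
    num = sum(h * t * t for h, t in zip(heights, times))
    den = sum(t ** 4 for t in times)
    return 2.0 * num / den

random.seed(1)
heights = [0.5, 1.0, 1.5, 2.0]            # drop heights in metres
times = [fall_time(h) for h in heights]   # simulated noisy timings
g_est = estimate_g(heights, times)        # near 9.81 m/s^2
```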
General Relativistic Magnetohydrodynamic Simulations of Collapsars
NASA Technical Reports Server (NTRS)
Mizuno, Yosuke; Yamada, S.; Koide, S.; Shibata, K.
2005-01-01
We have performed 2.5-dimensional general relativistic magnetohydrodynamic (MHD) simulations of collapsars including a rotating black hole. Initially, we assume that the core collapse has failed in this star. A rotating black hole of a few solar masses is inserted by hand into the calculation. The simulation results show the formation of a disklike structure and the generation of a jetlike outflow near the central black hole. The jetlike outflow propagates and is accelerated mainly by the magnetic field. The total jet velocity is approximately 0.3c. When the rotation of the black hole is faster, the magnetic field is twisted strongly owing to the frame-dragging effect. The magnetic energy stored by the twisting magnetic field is directly converted to kinetic energy of the jet rather than propagating as an Alfvén wave. Thus, as the rotation of the black hole becomes faster, the poloidal velocity of the jet becomes faster.
Chen, Jiehui; Salim, Mariam B.; Matsumoto, Mitsuji
2010-01-01
Wireless Sensor Networks (WSNs) designed for mission-critical applications suffer from limited sensing capacities, particularly fast energy depletion. Regarding this, mobile sinks can be used to balance the energy consumption in WSNs, but the frequent location updates of the mobile sinks can lead to data collisions and rapid energy consumption for some specific sensors. This paper explores an optimal barrier coverage based sensor deployment for event driven WSNs where a dual-sink model was designed to evaluate the energy performance of not only static sensors but also the Static Sink (SS) and Mobile Sinks (MSs) simultaneously, based on parameters such as the sensor transmission range r and the velocity of the mobile sink v. Moreover, a MS mobility model was developed to enable the SS and MSs to collaborate effectively, while achieving spatiotemporal energy performance efficiency by using the knowledge of the cumulative density function (cdf), Poisson process and M/G/1 queue. The simulation results clearly demonstrated the improved energy performance of the whole network; our eDSA algorithm is more efficient than the static-sink model, reducing energy consumption by approximately half. Moreover, we demonstrate that our results are robust to realistic sensing models and also validate their correctness through extensive simulations. PMID:22163503
General relativistic screening in cosmological simulations
NASA Astrophysics Data System (ADS)
Hahn, Oliver; Paranjape, Aseem
2016-10-01
We revisit the issue of interpreting the results of large volume cosmological simulations in the context of large-scale general relativistic effects. We look for simple modifications to the nonlinear evolution of the gravitational potential ψ that lead on large scales to the correct, fully relativistic description of density perturbations in the Newtonian gauge. We note that the relativistic constraint equation for ψ can be cast as a diffusion equation, with a diffusion length scale determined by the expansion of the Universe. Exploiting the weak time evolution of ψ in all regimes of interest, this equation can be further accurately approximated as a Helmholtz equation, with an effective relativistic "screening" scale ℓ related to the Hubble radius. We demonstrate that it is thus possible to carry out N-body simulations in the Newtonian gauge by replacing Poisson's equation with this Helmholtz equation, involving a trivial change in the Green's function kernel. Our results also motivate a simple, approximate (but very accurate) gauge transformation, δ_N(k) ≈ δ_sim(k) × (k² + ℓ⁻²)/k², to convert the density field δ_sim of standard collisionless N-body simulations (initialized in the comoving synchronous gauge) into the Newtonian gauge density δ_N at arbitrary times. A similar conversion can also be written in terms of particle positions. Our results can be interpreted in terms of a Jeans stability criterion induced by the expansion of the Universe. The appearance of the screening scale ℓ in the evolution of ψ, in particular, leads to a natural resolution of the "Jeans swindle" in the presence of superhorizon modes.
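The quoted gauge transformation is a simple multiplication in Fourier space; a sketch with NumPy (the grid size, box size, and screening scale below are placeholder values, not from the paper):

```python
import numpy as np

def newtonian_gauge_density(delta_sim, box_size, ell):
    """Convert a synchronous-gauge N-body density field to the
    Newtonian gauge via delta_N(k) = delta_sim(k) * (k^2 + 1/ell^2) / k^2."""
    n = delta_sim.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    dk = np.fft.fftn(delta_sim)
    factor = np.ones_like(k2)
    nz = k2 > 0
    factor[nz] = (k2[nz] + ell**-2) / k2[nz]  # k = 0 mode left unchanged
    return np.real(np.fft.ifftn(dk * factor))

# Placeholder demo: a random field on a small grid, illustrative units.
rng = np.random.default_rng(0)
delta = rng.standard_normal((16, 16, 16))
delta_N = newtonian_gauge_density(delta, box_size=500.0, ell=3000.0)
```

The correction factor tends to 1 for k ≫ 1/ℓ, so only modes near or above the screening scale are modified, as the abstract describes.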
Event-driven charge-coupled device design and applications therefor
NASA Technical Reports Server (NTRS)
Doty, John P. (Inventor); Ricker, Jr., George R. (Inventor); Burke, Barry E. (Inventor); Prigozhin, Gregory Y. (Inventor)
2005-01-01
An event-driven X-ray CCD imager device uses a floating-gate amplifier or other non-destructive readout device to non-destructively sense a charge level in a charge packet associated with a pixel. The output of the floating-gate amplifier is used to identify each pixel that has a charge level above a predetermined threshold. If the charge level is above the threshold, the charge in the triggering charge packet and in the charge packets from neighboring pixels must be measured accurately. A charge delay register is included in the event-driven X-ray CCD imager device to enable recovery of the charge packets from neighboring pixels for accurate measurement. When a charge packet reaches the end of the charge delay register, control logic either dumps the charge packet or steers it to a charge FIFO to preserve it, if the charge packet is determined to be one that needs accurate measurement. A floating-diffusion amplifier or other low-noise output stage device, which converts charge level to a voltage level with high precision, provides final measurement of the charge packets. The voltage level is eventually digitized by a high linearity ADC.
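The dump-or-steer control logic the patent abstract describes can be sketched as a small software model: packets travel through a fixed-depth delay line while a threshold sense marks which pixels (and neighbors) must be preserved. All names and values below are illustrative, not taken from the patent:

```python
from collections import deque

THRESHOLD = 100          # hypothetical trigger level (arbitrary charge units)
DELAY_STAGES = 4         # depth of the charge delay register

def drain_delay_register(pixels):
    """Toy 1-D model of the event-driven readout: pixels above THRESHOLD
    flag themselves and their neighbors; when a packet exits the delay
    register, it is steered to the FIFO if flagged, otherwise dumped."""
    delay = deque(maxlen=DELAY_STAGES)
    fifo = []
    keep = set()
    for i, charge in enumerate(pixels):
        if charge > THRESHOLD:               # non-destructive threshold sense
            keep.update({i - 1, i, i + 1})   # triggering pixel + neighbors
        if len(delay) == DELAY_STAGES:       # packet exits the delay register
            j, q = delay[0]
            if j in keep:
                fifo.append((j, q))          # steer to charge FIFO
            # else: the packet is dumped
        delay.append((i, charge))
    for j, q in delay:                       # flush the remaining stages
        if j in keep:
            fifo.append((j, q))
    return fifo
```

The delay register is what makes neighbor recovery possible: by the time a packet exits, the trigger decision for the pixels that followed it has already been taken.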
Event-driven visual attention for the humanoid robot iCub.
Rea, Francesco; Metta, Giorgio; Bartolozzi, Chiara
2013-01-01
Fast reaction to sudden and potentially interesting stimuli is a crucial feature for safe and reliable interaction with the environment. Here we present a biologically inspired attention system developed for the humanoid robot iCub. It is based on input from unconventional event-driven vision sensors and an efficient computational method. The resulting system shows low latency and fast determination of the location of the focus of attention. The performance is benchmarked against a state-of-the-art artificial attention system used in robotics. Results show that the proposed system is two orders of magnitude faster than the benchmark in selecting a new stimulus to attend. PMID:24379753
Event-driven approach of layered multicast to network adaptation in RED-based IP networks
NASA Astrophysics Data System (ADS)
Nahm, Kitae; Li, Qing; Kuo, C.-C. J.
2003-11-01
In this work, we investigate the congestion control problem for layered video multicast in IP networks with active queue management (AQM), using a simple random early detection (RED) queue model. AQM support from networks improves the visual quality of video streaming but makes network adaptation more difficult for existing layered video multicast protocols that use the event-driven timer-based approach. We perform a simplified analysis of the response of the RED algorithm to burst traffic. The analysis shows that the primary problem lies in the weak correlation between the network feedback and the actual network congestion status when the RED queue is driven by burst traffic. Finally, a design guideline for the layered multicast protocol is proposed to overcome this problem.
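The RED mechanism at the center of the analysis can be sketched in a few lines: an exponentially weighted moving average (EWMA) of the queue length sets the drop probability. Because the average lags the instantaneous queue, burst traffic produces exactly the feedback/congestion decorrelation the abstract describes. Parameter values below are illustrative defaults, not the paper's:

```python
def red_drop_probability(queue_samples, w=0.002, min_th=5.0, max_th=15.0, max_p=0.1):
    """Minimal sketch of the RED algorithm: track an EWMA of the queue
    occupancy and map it to a packet-drop probability between the
    min/max thresholds.  Returns the drop probability at each sample."""
    avg = 0.0
    probs = []
    for q in queue_samples:
        avg = (1.0 - w) * avg + w * q          # EWMA of queue occupancy
        if avg < min_th:
            p = 0.0
        elif avg >= max_th:
            p = 1.0
        else:
            p = max_p * (avg - min_th) / (max_th - min_th)
        probs.append(p)
    return probs
```

With the small averaging weight w, a short burst barely moves `avg`, so the drop feedback reflects the burst only long after it has passed.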
Sanchez, A; Rotstein, G; Alsop, N; Bromberg, J P; Gollain, C; Sorensen, S; Macchietto, S; Jakeman, C
2002-07-01
This paper presents the results of an academia-industry collaborative project whose main objective was to test novel techniques for the development of event-driven control systems in the batch processing (e.g., pharmaceutical, fine chemicals, food) industries. The proposed techniques build upon industrial standards and focus on (i) formal synthesis of phase control logic and its automatic translation into procedural code, and (ii) verification of the complete discrete-event control system via dynamic simulation. In order to test the techniques in an engineering environment, a complete discrete-event control system was produced for a benchmark batch process plant based on a standard development method employed by one of the industrial partners. The control system includes functional process specification, control architecture, distributed control system (DCS) proprietary programming code for procedural control at equipment, unit, and process cell levels, and human-machine interfaces. A technical assessment of the development method and the obtained control system was then carried out. Improvements were suggested using the proposed techniques in the specification, code generation, and verification steps. The project assessed the impact of these techniques from both an engineering and an economic point of view. Results suggest that the introduction of computer aided engineering (CAE) practices based on the benchmarked techniques and a structured approach could effect a 75% reduction in the errors produced in the development process. This translates into estimated overall savings of 7% for green-field projects. Figures were compared with other partners' experience. It is expected that the workload on a given project will shift, increasing the load on process engineers during the specification stage and decreasing the load on the software engineers during code writing. PMID:12160348
GTOSS: Generalized Tethered Object Simulation System
NASA Technical Reports Server (NTRS)
Lang, David D.
1987-01-01
GTOSS represents a tether analysis complex which is described by addressing its family of modules. TOSS is a portable software subsystem specifically designed to be introduced into the environment of any existing vehicle dynamics simulation to add the capability of simulating multiple interacting objects (via multiple tethers). These objects may interact with each other as well as with the vehicle into whose environment TOSS is introduced. GTOSS is a stand-alone tethered system analysis program, representing an example of TOSS having been married to a host simulation. RTOSS is the Results Data Base (RDB) subsystem designed to archive TOSS simulation results for future display processing. DTOSS is a display post-processor designed to utilize the RDB; it extracts data from the RDB for multi-page printed time history displays. CTOSS is similar to DTOSS, but is designed to create ASCII plot files. The same time history data formats provided for DTOSS (for printing) are available via CTOSS for plotting. How these and other modules interact with each other is discussed.
Innovation of IT metasystems by means of event-driven paradigm using QDMS
NASA Astrophysics Data System (ADS)
Nedic, Vladimir; Despotovic, Danijela; Cvetanovic, Slobodan; Despotovic, Milan; Eric, Milan
2016-10-01
Globalisation of the world economy brings new and more complex demands to business systems. In order to respond to these trends, business systems apply new paradigms that inevitably reflect on management metasystems - quality assurance (QA) - as well as on information technology (IT) metasystems. Small and medium enterprises (in particular in the food industry) do not have the ability to access external resources to the extent that would provide adequate keeping up with these trends. That raises the question of how to enhance the synergetic effect of interaction between existing QA and IT metasystems in order to overcome the resource gap and achieve set goals with internal resources. The focus of this article is to propose a methodology for utilising the potential of a quality assurance document management system (QDMS) as a prototypical platform for initiating, developing, testing and improving new functionalities that are required of IT as support for business system management. In that way QDMS plays the role of a catalyst that not only accelerates but could also enhance the selectivity of the reactions of QA and IT metasystems and direct them toward finding new functionalities based on the event-driven paradigm. The article shows the process of modelling, development and implementation of a possible approach to this problem through a conceptual survey and a practical solution in the food industry.
A 300-mV 220-nW event-driven ADC with real-time QRS detection for wearable ECG sensors.
Zhang, Xiaoyang; Lian, Yong
2014-12-01
This paper presents an ultra-low-power event-driven analog-to-digital converter (ADC) with real-time QRS detection for wearable electrocardiogram (ECG) sensors in wireless body sensor network (WBSN) applications. Two QRS detection algorithms, pulse-triggered (PUT) and time-assisted PUT (t-PUT), are proposed based on the level-crossing events generated from the ADC. The PUT detector achieves 97.63% sensitivity and 97.33% positive prediction in simulation on the MIT-BIH Arrhythmia Database. The t-PUT improves the sensitivity and positive prediction to 97.76% and 98.59% respectively. Fabricated in 0.13 μm CMOS technology, the ADC with QRS detector consumes only 220 nW measured under 300 mV power supply, making it the first nanoWatt compact analog-to-information (A2I) converter with embedded QRS detector. PMID:25608283
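The level-crossing sampling underlying the event-driven ADC can be sketched as follows: an event is emitted whenever the signal crosses a quantisation level, so flat stretches of the ECG produce no events while steep QRS slopes produce dense bursts, which is what the PUT/t-PUT detectors trigger on. The function and values below are illustrative, not the fabricated chip's parameters:

```python
def level_crossing_events(samples, lsb=0.05):
    """Emit (index, direction) events whenever the signal crosses a
    quantisation level of height `lsb`.  Flat signal -> no events;
    steep slopes -> bursts of events."""
    events = []
    level = round(samples[0] / lsb)          # current quantisation level
    for i, s in enumerate(samples[1:], start=1):
        while s >= (level + 1) * lsb:
            level += 1
            events.append((i, +1))           # upward crossing
        while s <= (level - 1) * lsb:
            level -= 1
            events.append((i, -1))           # downward crossing
    return events
```

Counting events in a short sliding window then gives a crude stand-in for the pulse-triggered QRS detection idea: the R-wave's rapid swing generates many crossings in quick succession.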
An event-driven distributed processing architecture for image-guided cardiac ablation therapy.
Rettmann, M E; Holmes, D R; Cameron, B M; Robb, R A
2009-08-01
Medical imaging data is becoming increasingly valuable in interventional medicine, not only for preoperative planning, but also for real-time guidance during clinical procedures. Three key components necessary for image-guided intervention are real-time tracking of the surgical instrument, aligning the real-world patient space with image space, and creating a meaningful display that integrates the tracked instrument and patient data. Issues to consider when developing image-guided intervention systems include the communication scheme, the ability to distribute CPU-intensive tasks, and flexibility to allow for new technologies. In this work, we have designed a communication architecture for use in image-guided catheter ablation therapy. Communication between the system components is through a database which contains an event queue and auxiliary data tables. The communication scheme is unique in that each system component is responsible for querying and responding to relevant events from the centralized database queue. An advantage of the architecture is the flexibility to add new system components without affecting existing software code. In addition, the architecture is intrinsically distributed, in that components can run on different machines, and even different operating systems. We refer to this Framework for Image-Guided Navigation using a Distributed Event-Driven Database in Real-Time as the FINDER architecture. This architecture has been implemented for the specific application of image-guided cardiac ablation therapy. We describe our prototype image-guidance system and demonstrate its functionality by emulating a cardiac ablation procedure with a patient-specific phantom. The proposed architecture, designed to be modular, flexible, and intuitive, is a key step towards our goal of developing a complete system for visualization and targeting in image-guided cardiac ablation procedures.
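The database-backed event queue that FINDER uses for inter-component communication can be sketched with SQLite: components publish events into a shared table, and each component independently polls for the events it has not yet consumed, so adding a component never touches existing code. The schema and names below are our illustration, not the FINDER implementation:

```python
import sqlite3

def make_event_db():
    """Create an in-memory database with a minimal event-queue table."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, "
               "topic TEXT, payload TEXT, consumed_by TEXT DEFAULT '')")
    return db

def publish(db, topic, payload):
    """Any component may append an event to the shared queue."""
    db.execute("INSERT INTO events (topic, payload) VALUES (?, ?)",
               (topic, payload))

def poll(db, component, topic):
    """Return events on `topic` not yet seen by `component`.  Each
    component tracks its own consumption in the consumed_by column."""
    rows = db.execute(
        "SELECT id, payload FROM events WHERE topic = ? "
        "AND consumed_by NOT LIKE ?", (topic, f"%[{component}]%")).fetchall()
    for event_id, _ in rows:
        db.execute("UPDATE events SET consumed_by = consumed_by || ? "
                   "WHERE id = ?", (f"[{component}]", event_id))
    return rows
```

Because consumption is tracked per component rather than by deleting events, a new display or logging component can replay the queue without disturbing the tracker or registration components.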
Event-driven time-optimal control for a class of discontinuous bioreactors.
Moreno, Jaime A; Betancur, Manuel J; Buitrón, Germán; Moreno-Andrade, Iván
2006-07-01
Discontinuous bioreactors may be further optimized for processing inhibitory substrates using a convenient fed-batch mode. To do so the filling rate must be controlled in such a way as to push the reaction rate to its maximum value, by increasing the substrate concentration just up to the point where inhibition begins. However, an exact optimal controller requires measuring several variables (e.g., substrate concentrations in the feed and in the tank) and also good model knowledge (e.g., yield and kinetic parameters), requirements rarely satisfied in real applications. An environmentally important case that exemplifies all these handicaps is toxicant wastewater treatment. There the lack of practical online pollutant sensors may allow unforeseen high shock loads to be fed to the bioreactor, causing biomass inhibition that slows down the treatment process and, in extreme cases, even renders the biological process useless. In this work an event-driven time-optimal control (ED-TOC) is proposed to circumvent these limitations. We show how to detect a "there is inhibition" event by using some computable function of the available measurements. This event drives the ED-TOC to stop the filling. Later, by detecting the symmetric event, "there is no inhibition," the ED-TOC may restart the filling. A fill-react cycling then maintains the process safely hovering near its maximum reaction rate, allowing a robust and practically time-optimal operation of the bioreactor. An experimental case study of a wastewater treatment process application is presented. There the dissolved oxygen concentration was used to detect the events needed to drive the controller. PMID:16523521
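The fill-react cycling the abstract describes is a two-state machine driven by a pair of symmetric events. The sketch below abstracts the paper's computable detection function into a boolean inhibition signal; everything else is illustrative:

```python
def ed_toc_fill_schedule(inhibition_signal):
    """Event-driven fill scheduling: filling stays ON until a
    'there is inhibition' event, then OFF until the symmetric
    'no inhibition' event, producing the fill-react cycle.
    `inhibition_signal` is an iterable of booleans (True = inhibited);
    returns the filling state at each step."""
    filling = True
    schedule = []
    for inhibited in inhibition_signal:
        if filling and inhibited:            # event: "there is inhibition"
            filling = False                  # -> stop the filling
        elif not filling and not inhibited:  # event: "no inhibition"
            filling = True                   # -> restart the filling
        schedule.append(filling)
    return schedule
```

In the paper the inhibition events are derived from dissolved oxygen measurements; any other computable function of available measurements would slot into the same two-state structure.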
Real-time gesture interface based on event-driven processing from stereo silicon retinas.
Lee, Jun Haeng; Delbruck, Tobi; Pfeiffer, Michael; Park, Paul K J; Shin, Chang-Woo; Ryu, Hyunsurk Eric; Kang, Byung Chang
2014-12-01
We propose a real-time hand gesture interface based on combining a stereo pair of biologically inspired event-based dynamic vision sensor (DVS) silicon retinas with neuromorphic event-driven postprocessing. Compared with conventional vision or 3-D sensors, the use of DVSs, which output asynchronous and sparse events in response to motion, eliminates the need to extract movements from sequences of video frames, and allows significantly faster and more energy-efficient processing. In addition, the rate of input events depends on the observed movements, and thus provides an additional cue for solving the gesture spotting problem, i.e., finding the onsets and offsets of gestures. We propose a postprocessing framework based on spiking neural networks that can process the events received from the DVSs in real time, and provides an architecture for future implementation in neuromorphic hardware devices. The motion trajectories of moving hands are detected by spatiotemporally correlating the stereoscopically verged asynchronous events from the DVSs by using leaky integrate-and-fire (LIF) neurons. Adaptive thresholds of the LIF neurons achieve the segmentation of trajectories, which are then translated into discrete and finite feature vectors. The feature vectors are classified with hidden Markov models, using a separate Gaussian mixture model for spotting irrelevant transition gestures. The disparity information from stereovision is used to adapt LIF neuron parameters to achieve recognition invariant of the distance of the user to the sensor, and also helps to filter out movements in the background of the user. Exploiting the high dynamic range of DVSs, furthermore, allows gesture recognition over a 60-dB range of scene illuminance. The system achieves recognition rates well over 90% under a variety of variable conditions with static and dynamic backgrounds with naïve users. PMID:25420246
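The leaky integrate-and-fire (LIF) neurons used to correlate DVS events can be sketched in a few lines: the membrane potential decays exponentially between input events and the neuron fires (then resets) only when events arrive close enough together to push it over threshold. The constants below are illustrative, not the paper's tuned values:

```python
import math

def lif_response(event_times, tau=0.02, threshold=1.0, weight=0.4):
    """Leaky integrate-and-fire neuron driven by asynchronous events.
    `event_times` are sorted input-event timestamps (seconds); returns
    the times at which the neuron fires."""
    v, t_prev, spikes = 0.0, None, []
    for t in event_times:
        if t_prev is not None:
            v *= math.exp(-(t - t_prev) / tau)   # leak between events
        v += weight
        if v >= threshold:
            spikes.append(t)
            v = 0.0                              # reset after firing
        t_prev = t
    return spikes
```

This is what makes the neuron a coincidence detector: a dense burst of events (as produced by a moving hand on the DVS) fires it, while temporally sparse background events decay away without crossing threshold.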
Design of a general portable communication signal simulator
NASA Astrophysics Data System (ADS)
Zhu, Qinghua; Ruan, Xiaofen; Jin, Guoqing; Li, Shengyong
2009-12-01
A design scheme for a general portable communication signal simulator is proposed. The simulator can provide VHF fixed-frequency signals, frequency-hopping signals, and spread-spectrum signals. The detailed technical designs of the key modules are presented, along with their engineering implementation methods. Application results demonstrate its capability of producing realistic communication signals as well as its promising prospects for engineering applications.
General Aviation Cockpit Weather Information System Simulation Studies
NASA Technical Reports Server (NTRS)
McAdaragh, Ray; Novacek, Paul
2003-01-01
This viewgraph presentation provides information on two experiments on the effectiveness of a cockpit weather information system on a simulated general aviation flight. The presentation covers the simulation hardware configuration, the display device screen layout, a mission scenario, conclusions, and recommendations. The second experiment, with its own scenario and conclusions, is a follow-on experiment.
Sampling of general correlators in worm-algorithm based simulations
NASA Astrophysics Data System (ADS)
Rindlisbacher, Tobias; Åkerlund, Oscar; de Forcrand, Philippe
2016-08-01
Using the complex ϕ⁴-model as a prototype for a system which is simulated by a worm algorithm, we show that not only the charged correlator ⟨ϕ*(x)ϕ(y)⟩, but also more general correlators such as ⟨|ϕ(x)||ϕ(y)|⟩ or ⟨arg(ϕ(x)) arg(ϕ(y))⟩, as well as condensates like ⟨|ϕ|⟩, can be measured at every step of the Monte Carlo evolution of the worm instead of on closed-worm configurations only. The method generalizes straightforwardly to other systems simulated by worms, such as spin or sigma models.
Simulations of accretion disks in pseudo-complex General Relativity
NASA Astrophysics Data System (ADS)
Hess, P. O.; Algalán B., M.; Schönenbach, T.; Greiner, W.
2015-11-01
After a summary of pseudo-complex General Relativity (pc-GR), circular orbits and stable orbits in general are discussed, including predictions compared to observations. Using a modified version of a model for accretion disks presented by Page and Thorne in 1974, we apply the ray-tracing technique in order to simulate the appearance of an accretion disk as it should be observed in a detector. In pc-GR we predict a dark ring near a very massive, rapidly rotating object.
General purpose symbolic simulation tools for electric networks
Alvarado, F.L.; Lui, Y.
1988-05-01
This paper presents research results on the use of computers to solve simulation problems in a way closer to human thinking. With the aid of techniques from AI (Artificial Intelligence), DB (Data Base Systems), and Computer Graphics, a set of general-purpose LISP- and Pascal-based simulation tools has been developed. Each of these tools solves a specific problem in some stage of simulation. Rule-based and object-oriented symbolic manipulations are extensively used. The tools provide more powerful and accurate modelling capability for complex objects, and permit simplicity and flexibility in implementation. The tools are used to study electrical transient problems, optimal load flow problems, linear control systems, and other simulation problems.
The power of event-driven analytics in Large Scale Data Processing
2016-07-12
FeedZai is a software company specialized in creating high-throughput, low-latency data processing solutions. FeedZai develops a product called "FeedZai Pulse" for continuous event-driven analytics that makes application development easier for end users. It automatically calculates key performance indicators and baselines, showing how current performance differs from previous history, creating timely business intelligence updated to the second. The tool does predictive analytics and trend analysis, displaying data on real-time web-based graphics. In 2010 FeedZai won the European EBN Smart Entrepreneurship Competition, in the Digital Models category, being considered one of the "top-20 smart companies in Europe". The main objective of this seminar/workshop is to explore the topic of large-scale data processing using Complex Event Processing and, in particular, the possible uses of Pulse in the scope of the data processing needs of CERN. Pulse is available as open-source and can be licensed both for non-commercial and commercial applications. FeedZai is interested in exploring possible synergies with CERN in high-volume, low-latency data processing applications. The seminar will be structured in two sessions, the first one being aimed at presenting the general scope of FeedZai's activities, and the second focused on Pulse itself: 10:00-11:00 FeedZai and Large Scale Data Processing Introduction to FeedZai FeedZai Pulse and Complex Event Processing Demonstration Use-Cases and Applications Conclusion and Q&A 11:00-11:15 Coffee break 11:15-12:30 FeedZai Pulse Under the Hood A First FeedZai Pulse Application PulseQL overview Defining KPIs and Baselines Conclusion and Q&A About the speakers Nuno Sebastião is the CEO of FeedZai. Having worked for many years for the European Space Agency (ESA), he was responsible for the overall design and development of the Satellite Simulation Infrastructure of the agency. Having left ESA to found FeedZai, Nuno is
Applications of a general thermal/hydraulic simulation tool
NASA Technical Reports Server (NTRS)
Cullimore, B. A.
1989-01-01
The analytic techniques, sample applications, and development status of a general-purpose computer program called SINDA '85/FLUINT (for systems improved numerical differencing analyzer, 1985 version with fluid integrator), designed for simulating thermal structures and internal fluid systems, are described, with special attention given to the applications of the fluid system capabilities. The underlying assumptions, methodologies, and modeling capabilities of the system are discussed. Sample applications include component-level and system-level simulations. A system-level analysis of a cryogenic storage system is presented.
NASA Technical Reports Server (NTRS)
Majumdar, Alok; Leclair, Andre; Moore, Ric; Schallhorn, Paul
2011-01-01
GFSSP stands for Generalized Fluid System Simulation Program. It is a general-purpose computer program to compute pressure, temperature, and flow distribution in a flow network. GFSSP calculates pressure, temperature, and concentrations at nodes, and calculates flow rates through branches. It was primarily developed to analyze internal flow in a turbopump and transient flow in a propulsion system. GFSSP development started in 1994 with the objective of providing a generalized and easy-to-use flow analysis tool for thermo-fluid systems.
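The node/branch structure described above can be illustrated with a very small linearised analogue: mass conservation at each internal node with flow proportional to the pressure difference across each branch. GFSSP itself solves the full nonlinear, compressible equations; this toy handles only the linear incompressible case, and all names are ours:

```python
import numpy as np

def solve_flow_network(n_nodes, branches, fixed):
    """Solve node pressures in a linear flow network.
    `branches` is a list of (i, j, conductance) tuples, with
    flow = conductance * (p_i - p_j); `fixed` maps boundary nodes
    to known pressures.  Returns the pressure at every node."""
    A = np.zeros((n_nodes, n_nodes))
    b = np.zeros(n_nodes)
    for i, j, c in branches:                 # assemble conservation equations
        A[i, i] += c; A[j, j] += c
        A[i, j] -= c; A[j, i] -= c
    for node, p in fixed.items():            # impose boundary pressures
        A[node, :] = 0.0
        A[node, node] = 1.0
        b[node] = p
    return np.linalg.solve(A, b)
```

For two equal branches in series between a 100-unit inlet and a 0-unit outlet, the interior node settles at the midpoint pressure, as conservation requires.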
A General Mission Independent Simulator (GMIS) and Simulator Control Program (SCP)
NASA Technical Reports Server (NTRS)
Baker, Paul L.; Moore, J. Michael; Rosenberger, John
1994-01-01
GMIS is a general-purpose simulator for testing ground system software. GMIS can be adapted to any mission to simulate changes in the data state maintained by the mission's computers. GMIS was developed in Code 522 NASA Goddard Space Flight Center. The acronym GMIS stands for GOTT Mission Independent Simulator, where GOTT is the Ground Operations Technology Testbed. Within GOTT, GMIS is used to provide simulated data to an installation of TPOCC - the Transportable Payload Operations Control Center. TPOCC was developed by Code 510 as a reusable control center. GOTT uses GMIS and TPOCC to test new technology and new operator procedures.
NASA Technical Reports Server (NTRS)
Kimble, Randy A.; Pain, Bedabrata; Norton, Timothy J.; Haas, J. Patrick; Oegerle, William R. (Technical Monitor)
2002-01-01
Silicon array readouts for microchannel plate intensifiers offer several attractive features. In this class of detector, the electron cloud output of the MCP intensifier is converted to visible light by a phosphor; that light is then fiber-optically coupled to the silicon array. In photon-counting mode, the resulting light splashes on the silicon array are recognized and centroided to fractional pixel accuracy by off-chip electronics. This process can result in very high (MCP-limited) spatial resolution while operating at a modest MCP gain (desirable for dynamic range and long term stability). The principal limitation of intensified CCD systems of this type is their severely limited local dynamic range, as accurate photon counting is achieved only if there are not overlapping event splashes within the frame time of the device. This problem can be ameliorated somewhat by processing events only in pre-selected windows of interest or by using an addressable charge injection device (CID) for the readout array. We are currently pursuing the development of an intriguing alternative readout concept based on using an event-driven CMOS Active Pixel Sensor. APS technology permits the incorporation of discriminator circuitry within each pixel. When coupled with suitable CMOS logic outside the array area, the discriminator circuitry can be used to trigger the readout of small sub-array windows only when and where an event splash has been detected, completely eliminating the local dynamic range problem, while achieving a high global count rate capability and maintaining high spatial resolution. We elaborate on this concept and present our progress toward implementing an event-driven APS readout.
NASA Technical Reports Server (NTRS)
Kimble, Randy A.; Pain, B.; Norton, T. J.; Haas, P.; Fisher, Richard R. (Technical Monitor)
2001-01-01
Silicon array readouts for microchannel plate intensifiers offer several attractive features. In this class of detector, the electron cloud output of the MCP intensifier is converted to visible light by a phosphor; that light is then fiber-optically coupled to the silicon array. In photon-counting mode, the resulting light splashes on the silicon array are recognized and centroided to fractional pixel accuracy by off-chip electronics. This process can result in very high (MCP-limited) spatial resolution for the readout while operating at a modest MCP gain (desirable for dynamic range and long term stability). The principal limitation of intensified CCD systems of this type is their severely limited local dynamic range, as accurate photon counting is achieved only if there are not overlapping event splashes within the frame time of the device. This problem can be ameliorated somewhat by processing events only in pre-selected windows of interest or by using an addressable charge injection device (CID) for the readout array. We are currently pursuing the development of an intriguing alternative readout concept based on using an event-driven CMOS Active Pixel Sensor. APS technology permits the incorporation of discriminator circuitry within each pixel. When coupled with suitable CMOS logic outside the array area, the discriminator circuitry can be used to trigger the readout of small sub-array windows only when and where an event splash has been detected, completely eliminating the local dynamic range problem, while achieving a high global count rate capability and maintaining high spatial resolution. We elaborate on this concept and present our progress toward implementing an event-driven APS readout.
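The windowed, discriminator-triggered readout that both abstracts describe can be modelled in software: per-pixel discriminators flag pixels above threshold, and only small sub-array windows around flagged pixels are read out instead of the full frame. The sketch below is a purely illustrative 2-D model, not the APS circuitry:

```python
def triggered_windows(frame, threshold, half=1):
    """Return the sub-array windows that an event-driven readout would
    extract: for each pixel above `threshold` (the in-pixel
    discriminator firing), read out a (2*half+1)-square window of
    (row, col, value) samples, clipped at the array edges."""
    windows = []
    rows, cols = len(frame), len(frame[0])
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] > threshold:      # in-pixel discriminator fires
                r0, r1 = max(0, r - half), min(rows, r + half + 1)
                c0, c1 = max(0, c - half), min(cols, c + half + 1)
                windows.append([(rr, cc, frame[rr][cc])
                                for rr in range(r0, r1)
                                for cc in range(c0, c1)])
    return windows
```

Reading out only triggered windows is what removes the local dynamic-range limit: two simultaneous splashes in different parts of the array simply produce two independent windows.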
Comparison of Cenozoic atmospheric general circulation model simulations
Barron, E.J.
1985-01-01
Paleocene, Eocene, Miocene and present day (with polar ice) geography are specified as the lower boundary condition in a mean annual, energy balance ocean version of the Community Climate Model (CCM), a spectral General Circulation Model of the Atmosphere developed at the National Center for Atmospheric Research. This version of the CCM has a 4.5° latitudinal and 7.5° longitudinal resolution with 9 vertical levels and includes predictions for pressure, winds, temperature, evaporation, precipitation, cloud cover, snow cover and sea ice. The model simulations indicate little geographically-induced climate change from the Paleocene to the Miocene, but substantial differences between the Miocene and the present simulations. The simulated climate differences between the Miocene and present day include: 1) cooler present temperatures (2 °C in the tropics, 15-35 °C in polar latitudes) with the exception of warmer subtropical desert conditions, 2) a generally weaker present hydrologic cycle, with greater subtropical aridity, 3) strengthened present day westerly jets with a slight poleward displacement, and 4) the largest regional climate changes associated with Antarctica. The results of the climate model sensitivity experiments have considerable implications for understanding how geography influences climate.
Dinelle, Katie; Cheng, Ju-Chieh; Shilov, Mikhail A.; Segars, William P.; Lidstone, Sarah C.; Blinder, Stephan; Rousset, Olivier G.; Vajihollahi, Hamid; Tsui, Benjamin M. W.; Wong, Dean F.; Sossi, Vesna
2010-01-01
With continuing improvements in the spatial resolution of positron emission tomography (PET) scanners, small patient movements during PET imaging become a significant source of resolution degradation. This work develops and investigates a comprehensive formalism for accurate motion-compensated reconstruction that at the same time remains practical in the context of high-resolution PET. In particular, this paper proposes an effective method to incorporate the presence of scattered and random coincidences in the context of motion (which is similarly applicable to various other motion correction schemes). The overall reconstruction framework takes into consideration missing projection data which are not detected due to motion and additionally incorporates information from all detected events, including those which fall outside the field-of-view following motion correction. The proposed approach has been extensively validated using phantom experiments as well as realistic simulations of a new mathematical brain phantom developed in this work, and results for a dynamic patient study are also presented. PMID:18672420
Automatic CT simulation optimization for radiation therapy: A general strategy
Li, Hua; Chen, Hsin-Chen; Tan, Jun; Gay, Hiram; Michalski, Jeff M.; Mutic, Sasa; Yu, Lifeng; Anastasio, Mark A.; Low, Daniel A.
2014-03-15
Purpose: In radiation therapy, x-ray computed tomography (CT) simulation protocol specifications should be driven by the treatment planning requirements rather than duplicating diagnostic CT screening protocols. The purpose of this study was to develop a general strategy that allows for automatically, prospectively, and objectively determining the optimal patient-specific CT simulation protocols based on radiation-therapy goals, namely, maintenance of contouring quality and integrity while minimizing patient CT simulation dose. Methods: The authors proposed a general prediction strategy that provides automatic optimal CT simulation protocol selection as a function of patient size and treatment planning task. The optimal protocol is the one that delivers the minimum dose required to provide a CT simulation scan that yields accurate contours. Accurate treatment plans depend on accurate contours in order to conform the dose to actual tumor and normal organ positions. An image quality index, defined to characterize how simulation scan quality affects contour delineation, was developed and used to benchmark the contouring accuracy and treatment plan quality within the prediction strategy. A clinical workflow was developed to select the optimal CT simulation protocols incorporating patient size, target delineation, and radiation dose efficiency. An experimental study using an anthropomorphic pelvis phantom with added bolus layers was used to demonstrate how the proposed prediction strategy could be implemented and how the optimal CT simulation protocols could be selected for prostate cancer patients based on patient size and treatment planning task. Clinical IMRT prostate treatment plans for seven CT scans with varied image quality indices were separately optimized and compared to verify the target and organ dosimetry coverage. Results: Based on the phantom study, the optimal image quality index for accurate manual prostate contouring was 4.4. The optimal tube
The Speedster-EXD- A New Event-Driven Hybrid CMOS X-ray Detector
NASA Astrophysics Data System (ADS)
Griffith, Christopher V.; Falcone, Abraham D.; Prieskorn, Zachary R.; Burrows, David N.
2016-01-01
The Speedster-EXD is a new 64×64 pixel, 40-μm pixel pitch, 100-μm depletion depth hybrid CMOS x-ray detector with the capability of reading out only those pixels containing event charge, thus enabling fast effective frame rates. A global charge threshold can be specified, and pixels containing charge above this threshold are flagged and read out. The Speedster detector has also been designed with other advanced in-pixel features to improve performance, including a low-noise, high-gain capacitive transimpedance amplifier that eliminates interpixel capacitance crosstalk (IPC), and in-pixel correlated double sampling subtraction to reduce reset noise. We measure the best energy resolution on the Speedster-EXD detector to be 206 eV (3.5%) at 5.89 keV and 172 eV (10.0%) at 1.49 keV. The average IPC to the four adjacent pixels is measured to be 0.25%±0.2% (i.e., consistent with zero). The pixel-to-pixel gain variation is measured to be 0.80%±0.03%, and a Monte Carlo simulation is applied to better characterize the contributions to the energy resolution.
Event-driven Monte Carlo: Exact dynamics at all time scales for discrete-variable models
NASA Astrophysics Data System (ADS)
Mendoza-Coto, Alejandro; Díaz-Méndez, Rogelio; Pupillo, Guido
2016-06-01
We present an algorithm for the simulation of the exact real-time dynamics of classical many-body systems with discrete energy levels. In the same spirit as kinetic Monte Carlo methods, a stochastic solution of the master equation is found, with no need to define any other phase-space construction. However, unlike existing methods, the present algorithm does not assume any particular statistical distribution to perform moves or to advance the time, and thus is a unique tool for the numerical exploration of fast and ultra-fast dynamical regimes. By decomposing the problem into a set of two-level subsystems, we find a natural variable step size that is well defined from the normalization condition of the transition probabilities between the levels. We successfully test the algorithm against known exact solutions for non-equilibrium dynamics and equilibrium thermodynamic properties of Ising-spin models in one and two dimensions, and compare to standard implementations of kinetic Monte Carlo methods. The present algorithm is directly applicable to the study of the real-time dynamics of a large class of classical Markov chains, and particularly to short-time situations where the exact evolution is relevant.
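For context, the standard rejection-free kinetic Monte Carlo scheme that the paper compares against can be sketched for the simplest possible case, a single two-level system with fixed transition rates. This is a generic textbook baseline, not the authors' algorithm; the function name and rate values are invented for the example.

```python
import numpy as np

def kmc_two_level(k01, k10, n_events, rng):
    """Standard rejection-free kinetic Monte Carlo for one two-level
    system: draw an exponential waiting time from the current escape
    rate, then perform the (only) available transition."""
    state, t, time_in_1 = 0, 0.0, 0.0
    for _ in range(n_events):
        rate = k01 if state == 0 else k10      # escape rate of current level
        dt = rng.exponential(1.0 / rate)       # waiting time ~ Exp(rate)
        if state == 1:
            time_in_1 += dt
        t += dt
        state = 1 - state                      # execute the transition
    return time_in_1 / t                       # occupancy of level 1

rng = np.random.default_rng(0)
occ = kmc_two_level(k01=1.0, k10=3.0, n_events=200_000, rng=rng)
```

At equilibrium the time-averaged occupancy of level 1 tends to k01/(k01 + k10) = 0.25 here, which the simulation reproduces; the exponential waiting-time assumption baked into this baseline is exactly what the event-driven algorithm in the abstract avoids.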
Solute transport processes in flow-event-driven stream-aquifer interaction
NASA Astrophysics Data System (ADS)
Xie, Yueqing; Cook, Peter G.; Simmons, Craig T.
2016-07-01
The interaction between streams and groundwater controls key features of the stream hydrograph and chemograph. Since surface runoff is usually less saline than groundwater, flow events are usually accompanied by declines in stream salinity. In this paper, we use numerical modelling to show that, at any particular monitoring location: (i) the increase in stream stage associated with a flow event will precede the decrease in solute concentration (arrival time lag for solutes); and (ii) the decrease in stream stage following the flow peak will usually precede the subsequent return (increase) in solute concentration (return time lag). Both arrival time lag and return time lag increase with increasing wave duration. However, arrival time lag decreases with increasing wave amplitude, whereas return time lag increases. Furthermore, while arrival time lag is most sensitive to parameters that control river velocity (channel roughness and stream slope), return time lag is most sensitive to groundwater parameters (aquifer hydraulic conductivity, recharge rate, and dispersivity). Additionally, the absolute magnitude of the decrease in river concentration is sensitive to both river and groundwater parameters. Our simulations also show that in-stream mixing is dominated by wave propagation and bank storage processes, and in-stream dispersion has a relatively minor effect on solute concentrations. This has important implications for the spreading of contaminants released to streams. Our work also demonstrates that a high contribution of pre-event water (or groundwater) within the flow hydrograph can be caused by the combination of in-stream and bank storage exchange processes, and does not require transport of pre-event water through the catchment.
Characterization and development of an event-driven hybrid CMOS x-ray detector
NASA Astrophysics Data System (ADS)
Griffith, Christopher
2015-06-01
Hybrid CMOS detectors (HCDs) have provided great benefit to the infrared and optical fields of astronomy, and they are poised to do the same for X-ray astronomy. Infrared HCDs have already flown on the Hubble Space Telescope and the Wide-Field Infrared Survey Explorer (WISE) mission and are slated to fly on the James Webb Space Telescope (JWST). Hybrid CMOS X-ray detectors offer low susceptibility to radiation damage, low power consumption, and fast readout time to avoid pile-up. The fast readout time is necessary for future high-throughput X-ray missions. The Speedster-EXD X-ray HCD presented in this dissertation offers new in-pixel features and reduces known noise sources seen on previous-generation HCDs. The Speedster-EXD detector marks a significant step forward in the development of these detectors for future space missions. This dissertation begins with an overview of future X-ray space mission concepts and their detector requirements. The background on the physics of semiconductor devices and an explanation of the detection of X-rays with these devices are discussed, followed by a discussion of CCDs and CMOS detectors. Next, hybrid CMOS X-ray detectors are explained, including their advantages and disadvantages. The Speedster-EXD detector and its new features are outlined, including its ability to read out only those pixels that contain X-ray events. Test stand design and construction for the Speedster-EXD detector is outlined, and the characterization of each parameter on two Speedster-EXD detectors is detailed, including read noise, dark current, interpixel capacitance crosstalk (IPC), and energy resolution. Gain variation is also characterized, and a Monte Carlo simulation of its impact on energy resolution is described. This analysis shows that its effect can be successfully nullified with proper calibration, which would be important for a flight mission. Appendix B contains a study of the extreme tidal disruption event, Swift J1644+57, to search for
An event-driven approach for studying gene block evolution in bacteria
Ream, David C.; Bankapur, Asma R.; Friedberg, Iddo
2015-01-01
Motivation: Gene blocks are genes co-located on the chromosome. In many cases, gene blocks are conserved between bacterial species, sometimes as operons, when genes are co-transcribed. The conservation is rarely absolute: gene loss, gain, duplication, block splitting and block fusion are frequently observed. An open question in bacterial molecular evolution is that of the formation and breakup of gene blocks, for which several models have been proposed. These models, however, are not generally applicable to all types of gene blocks, and consequently cannot be used to broadly compare and study gene block evolution. To address this problem, we introduce an event-based method for tracking gene block evolution in bacteria. Results: We show here that the evolution of gene blocks in proteobacteria can be described by a small set of events. These include the insertion of genes into, or the splitting of genes out of, a gene block, gene loss, and gene duplication. We show how the event-based method of gene block evolution allows us to determine the evolutionary rate and may be used to trace the ancestral states of their formation. We conclude that the event-based method can be used to help us understand the formation of these important bacterial genomic structures. Availability and implementation: The software is available under the GPLv3 license at http://github.com/reamdc1/gene_block_evolution.git. Supplementary online material: http://iddo-friedberg.net/operon-evolution Contact: i.friedberg@miamioh.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25717195
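The idea of describing gene block evolution as a small set of countable events can be illustrated with a toy tally. This is a hypothetical scoring, not the paper's exact model: real analyses must account for orthology, block splitting, and phylogeny, none of which this sketch handles. The gene names are invented for the example.

```python
from collections import Counter

def event_counts(block_a, block_b):
    """Toy event tally between an ancestral block (a) and a derived
    block (b): gene gains, losses, and duplications (hypothetical
    scoring for illustration only)."""
    ca, cb = Counter(block_a), Counter(block_b)
    gains = sum((cb - ca).values())             # gene copies present only in b
    losses = sum((ca - cb).values())            # gene copies present only in a
    dups = sum(max(cb[g] - 1, 0) for g in cb)   # extra copies within b
    return {"gain": gains, "loss": losses, "duplication": dups}

# Invented example blocks: one gene lost, one gained, one duplicated.
counts = event_counts(["paaA", "paaB", "paaC"],
                      ["paaA", "paaC", "paaC", "paaD"])
```

Summing such per-event counts over a tree of species is what lets an event-based method express block evolution as a rate, rather than fitting each block to a single fixed formation model.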
A generalized Poisson solver for first-principles device simulations
NASA Astrophysics Data System (ADS)
Bani-Hashemian, Mohammad Hossein; Brück, Sascha; Luisier, Mathieu; VandeVondele, Joost
2016-01-01
Electronic structure calculations of atomistic systems based on density functional theory involve solving the Poisson equation. In this paper, we present a plane-wave based algorithm for solving the generalized Poisson equation subject to periodic or homogeneous Neumann conditions on the boundaries of the simulation cell and Dirichlet type conditions imposed at arbitrary subdomains. In this way, source, drain, and gate voltages can be imposed across atomistic models of electronic devices. Dirichlet conditions are enforced as constraints in a variational framework giving rise to a saddle point problem. The resulting system of equations is then solved using a stationary iterative method in which the generalized Poisson operator is preconditioned with the standard Laplace operator. The solver can make use of any sufficiently smooth function modelling the dielectric constant, including density dependent dielectric continuum models. For all the boundary conditions, consistent derivatives are available and molecular dynamics simulations can be performed. The convergence behaviour of the scheme is investigated and its capabilities are demonstrated.
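The core numerical idea, a stationary iteration on the generalized Poisson operator preconditioned by the constant-coefficient Laplacian, can be sketched in one dimension with periodic boundaries. This is a minimal illustration under stated assumptions: no Dirichlet subdomains or saddle-point constraints (so no gate regions), a smooth invented dielectric profile, and FFT-based spectral derivatives standing in for the plane-wave machinery.

```python
import numpy as np

N = 128
x = np.linspace(0.0, 1.0, N, endpoint=False)
k = 2j * np.pi * np.fft.fftfreq(N, d=1.0 / N)    # spectral wavenumbers on [0,1)
lap_symbol = np.where(k == 0, 1.0, (k * k).real)  # Laplacian symbol, k=0 guarded

def ddx(u):
    """Spectral first derivative on the periodic unit interval."""
    return np.fft.ifft(k * np.fft.fft(u)).real

def apply_A(u, eps):
    """Generalized Poisson operator: d/dx(eps du/dx)."""
    return ddx(eps * ddx(u))

def solve(eps, f, n_iter=400):
    """Richardson iteration preconditioned by the constant-coefficient
    Laplacian, inverted exactly in Fourier space."""
    omega = 2.0 / (eps.min() + eps.max())         # damping for convergence
    u = np.zeros_like(f)
    for _ in range(n_iter):
        r = f - apply_A(u, eps)
        r -= r.mean()                              # project out the nullspace
        u += omega * np.fft.ifft(np.fft.fft(r) / lap_symbol).real
    return u - u.mean()

eps = 1.0 + 0.5 * np.cos(2 * np.pi * x)           # invented dielectric model
u_exact = np.sin(2 * np.pi * x)
f = apply_A(u_exact, eps)                          # manufactured right-hand side
u = solve(eps, f)
```

Because the Rayleigh quotient of the preconditioned operator lies between min(eps) and max(eps), the damped iteration contracts the error by a factor of at most (max-min)/(max+min) per step, which is the reason the Laplace preconditioner works well for smoothly varying dielectric models.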
Data Albums: An Event Driven Search, Aggregation and Curation Tool for Earth Science
NASA Technical Reports Server (NTRS)
Ramachandran, Rahul; Kulkarni, Ajinkya; Maskey, Manil; Bakare, Rohan; Basyal, Sabin; Li, Xiang; Flynn, Shannon
2014-01-01
One of the largest continuing challenges in any Earth science investigation is the discovery and access of useful science content from the increasingly large volumes of Earth science data and related information available. Approaches used in Earth science research such as case study analysis and climatology studies involve discovering and gathering diverse data sets and information to support the research goals. Research based on case studies involves a detailed description of specific weather events using data from different sources to characterize the physical processes in play for a specific event. Climatology-based research tends to focus on the representativeness of a given event, by studying the characteristics and distribution of a large number of events. This allows researchers to generalize characteristics such as spatio-temporal distribution, intensity, annual cycle, duration, etc. Gathering relevant data and information for case studies and climatology analysis is both tedious and time consuming. Current Earth science data systems are designed with the assumption that researchers access data primarily by instrument or geophysical parameter. Those who know exactly the datasets of interest can obtain the specific files they need using these systems. However, in cases where researchers are interested in studying a significant event, they have to manually assemble a variety of datasets relevant to it by searching the different distributed data systems. In these cases, a search process needs to be organized around the event rather than the observing instruments. In addition, the existing data systems assume users have sufficient knowledge of the domain vocabulary to be able to effectively utilize their catalogs. These systems do not support new or interdisciplinary researchers who may be unfamiliar with the domain terminology. This paper presents a specialized search, aggregation and curation tool for Earth science to address these existing
ERIC Educational Resources Information Center
Apker, Wesley
This school district utilized the generalized academic simulation programs (GASP) to assist in making decisions regarding the kinds of facilities that should be constructed at Pilchuck Senior High School. Modular scheduling was one of the basic educational parameters used in determining the number and type of facilities needed. The objectives of…
A General Simulator for Reaction-Based Biogeochemical Processes
Fang, Yilin; Yabusaki, Steven B.; Yeh, George
2006-02-01
As more complex biogeochemical situations are being investigated (e.g., evolving reactivity, passivation of reactive surfaces, dissolution of sorbates), there is a growing need for biogeochemical simulators to flexibly and facilely address new reaction forms and rate laws. This paper presents an approach that accommodates this need to efficiently simulate general biogeochemical processes, while insulating the user from additional code development. The approach allows for the automatic extraction of fundamental reaction stoichiometry and thermodynamics from a standard chemistry database, and the symbolic entry of arbitrarily complex user-specified reaction forms, rate laws, and equilibria. The user-specified equilibrium and kinetic reactions (i.e., reactions not defined in the format of the standardized database) are interpreted by the Maple symbolic mathematical software package. FORTRAN 90 code is then generated by Maple for (1) the analytical Jacobian matrix (if preferred over the numerical Jacobian matrix) used in the Newton-Raphson solution procedure, and (2) the residual functions for user-specified equilibrium expressions and rate laws. Matrix diagonalization eliminates the need to conceptualize the system of reactions as a tableau, while identifying a minimum rank set of basis species with enhanced numerical convergence properties. The newly generated code, which is designed to operate in the BIOGEOCHEM biogeochemical simulator, is then compiled and linked into the BIOGEOCHEM executable. With these features, users can avoid recoding the simulator to accept new equilibrium expressions or kinetic rate laws, while still taking full advantage of the stoichiometry and thermodynamics provided by an existing chemical database. Thus, the approach introduces efficiencies in the specification of biogeochemical reaction networks and eliminates opportunities for mistakes in preparing input files and coding errors. Test problems are used to demonstrate the features of
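The symbolic-entry workflow described above (Maple deriving analytic Jacobians and emitting FORTRAN 90 for the Newton-Raphson solve) can be mimicked with SymPy in place of Maple. This is an illustrative sketch, not BIOGEOCHEM's actual pipeline: the Monod-style rate law and the species names are invented, and C code is emitted instead of FORTRAN.

```python
import sympy as sp

# Hypothetical user-specified rate law (Monod-style), entered symbolically.
c1, c2, k, K = sp.symbols("c1 c2 k K", positive=True)
rate = k * c1 * c2 / (K + c1)

# Residual functions for a two-species kinetic step: dc1/dt = dc2/dt = -rate.
residuals = sp.Matrix([-rate, -rate])

# Analytic Jacobian with respect to the concentrations, derived symbolically,
# as used in a Newton-Raphson solution procedure.
jacobian = residuals.jacobian([c1, c2])

# Emit compilable source from a Jacobian entry (FORTRAN in the paper; C here)
# and build a callable for numerical evaluation.
code = sp.ccode(jacobian[0, 0])
f = sp.lambdify((c1, c2, k, K), jacobian)
```

The point of the approach is visible even in this toy: the user writes only the rate law, and the Jacobian code needed by the implicit solver is generated, never hand-coded, which removes a whole class of transcription errors.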
Martínez-Espronceda, Miguel; Trigo, Jesús D; Led, Santiago; Barrón-González, H Gilberto; Redondo, Javier; Baquero, Alfonso; Serrano, Luis
2014-11-01
Experiences applying standards in personal health devices (PHDs) show an inherent trade-off between interoperability and costs (in terms of processing load and development time). Therefore, reducing hardware and software costs as well as time-to-market is crucial for standards adoption. The ISO/IEEE 11073 PHD family of standards (also referred to as X73PHD) provides interoperable communication between PHDs and aggregators. Nevertheless, the responsibility of achieving inexpensive implementations of X73PHD in limited-resource microcontrollers falls directly on the developer. Hence, the authors previously presented a methodology based on patterns to implement X73-compliant PHDs on devices with low-voltage low-power constraints. That version was based on multitasking, which required additional features and resources. This paper therefore presents an event-driven evolution of the patterns-based methodology for cost-effective development of standardized PHDs. A comparison between the two versions showed mean reductions of 11.59% in memory consumption and 45.95% in cycles of latency. In addition, several enhancements in terms of cost-effectiveness and development time can be derived from the new version of the methodology. Therefore, the new approach could help in producing cost-effective X73-compliant PHDs, which in turn could foster the adoption of standards.
Tsui, Fu-Chiang; Espino, Jeremy U; Weng, Yan; Choudary, Arvinder; Su, Hoah-Der; Wagner, Michael M
2005-01-01
The National Retail Data Monitor (NRDM) has monitored over-the-counter (OTC) medication sales in the United States since December 2002. The NRDM collects data from over 18,600 retail stores and processes over 0.6 million sales records per day. This paper describes key architectural features that we have found necessary for a data utility component in a national biosurveillance system. These elements include an event-driven architecture to provide analyses of data in near real time, multiple levels of caching to improve query response time, high availability through the use of clustered servers, scalable data storage through the use of storage area networks, and a web-service function for interoperation with affiliated systems. The methods and architectural principles are relevant to the design of any production data utility for public health surveillance: systems that collect data from multiple sources in near real time for use by analytic programs and user interfaces that have substantial requirements for time-series data aggregated in multiple dimensions.
General Relativistic Simulations of Magnetized Binary Neutron Stars
NASA Astrophysics Data System (ADS)
Giacomazzo, Bruno
2011-04-01
Binary neutron stars are among the most important sources of gravitational waves which are expected to be detected by the current or next generation of gravitational wave detectors, such as LIGO and Virgo, and they are also thought to be at the origin of very important astrophysical phenomena, such as short gamma-ray bursts. I will report on some recent results obtained using the fully general relativistic magnetohydrodynamic code Whisky in simulating equal-mass binary neutron star systems during the last phases of inspiral, merger and collapse to black hole surrounded by a torus. I will in particular describe how magnetic fields can affect the gravitational wave signal emitted by these sources and their possible role in powering short gamma-ray bursts.
Amyloid oligomer structure characterization from simulations: A general method
Nguyen, Phuong H.; Li, Mai Suan
2014-03-07
Amyloid oligomers and plaques are composed of multiple chemically identical proteins. Therefore, one of the first fundamental problems in the characterization of structures from simulations is the treatment of the degeneracy, i.e., the permutation of the molecules. Second, the intramolecular and intermolecular degrees of freedom of the various molecules must be taken into account. Currently, the well-known dihedral principal component analysis method only considers the intramolecular degrees of freedom, and other methods employing collective variables can only describe intermolecular degrees of freedom at the global level. With this in mind, we propose a general method that identifies all the structures accurately. The basic idea is that the intramolecular and intermolecular states are described in terms of combinations of single-molecule and double-molecule states, respectively, and the overall structures of oligomers are the product basis of the intramolecular and intermolecular states. This way, the degeneracy is automatically avoided. The method is illustrated on the conformational ensemble of the tetramer of the Alzheimer's peptide Aβ9-40, resulting from two atomistic molecular dynamics simulations in explicit solvent, each of 200 ns, starting from two distinct structures.
Amyloid oligomer structure characterization from simulations: A general method
NASA Astrophysics Data System (ADS)
Nguyen, Phuong H.; Li, Mai Suan; Derreumaux, Philippe
2014-03-01
Amyloid oligomers and plaques are composed of multiple chemically identical proteins. Therefore, one of the first fundamental problems in the characterization of structures from simulations is the treatment of the degeneracy, i.e., the permutation of the molecules. Second, the intramolecular and intermolecular degrees of freedom of the various molecules must be taken into account. Currently, the well-known dihedral principal component analysis method only considers the intramolecular degrees of freedom, and other methods employing collective variables can only describe intermolecular degrees of freedom at the global level. With this in mind, we propose a general method that identifies all the structures accurately. The basic idea is that the intramolecular and intermolecular states are described in terms of combinations of single-molecule and double-molecule states, respectively, and the overall structures of oligomers are the product basis of the intramolecular and intermolecular states. This way, the degeneracy is automatically avoided. The method is illustrated on the conformational ensemble of the tetramer of the Alzheimer's peptide Aβ9-40, resulting from two atomistic molecular dynamics simulations in explicit solvent, each of 200 ns, starting from two distinct structures.
Magnetohydrodynamical simulations of a deep tidal disruption in general relativity
NASA Astrophysics Data System (ADS)
Sądowski, Aleksander; Tejeda, Emilio; Gafton, Emanuel; Rosswog, Stephan; Abarca, David
2016-06-01
We perform hydro- and magnetohydrodynamical general-relativistic simulations of a tidal disruption of a 0.1 M⊙ red dwarf approaching a 105 M⊙ non-rotating massive black hole on a close (impact parameter β = 10) elliptical (eccentricity e = 0.97) orbit. We track the debris self-interaction, circularization and the accompanying accretion through the black hole horizon. We find that the relativistic precession leads to the formation of a self-crossing shock. The dissipated kinetic energy heats up the incoming debris and efficiently generates a quasi-spherical outflow. The self-interaction is modulated because of the feedback exerted by the flow on itself. The debris quickly forms a thick, almost marginally bound disc that remains turbulent for many orbital periods. Initially, the accretion through the black hole horizon results from the self-interaction, while in the later stages it is dominated by the debris originally ejected in the shocked region, as it gradually falls back towards the hole. The effective viscosity in the debris disc stems from the original hydrodynamical turbulence, which dominates over the magnetic component. The radiative efficiency is very low because of low energetics of the gas crossing the horizon and large optical depth that results in photon trapping. Although the parameters of the simulated tidal disruption are probably not representative of most observed events, it is possible to extrapolate some of its properties towards more common configurations.
Generalized Fluid System Simulation Program, Version 5.0-Educational
NASA Technical Reports Server (NTRS)
Majumdar, A. K.
2011-01-01
The Generalized Fluid System Simulation Program (GFSSP) is a finite-volume based general-purpose computer program for analyzing steady state and time-dependent flow rates, pressures, temperatures, and concentrations in a complex flow network. The program is capable of modeling real fluids with phase changes, compressibility, mixture thermodynamics, conjugate heat transfer between solid and fluid, fluid transients, pumps, compressors and external body forces such as gravity and centrifugal. The thermofluid system to be analyzed is discretized into nodes, branches, and conductors. The scalar properties such as pressure, temperature, and concentrations are calculated at nodes. Mass flow rates and heat transfer rates are computed in branches and conductors. The graphical user interface allows users to build their models using the point, drag and click method; the users can also run their models and post-process the results in the same environment. The integrated fluid library supplies thermodynamic and thermo-physical properties of 36 fluids and 21 different resistance/source options are provided for modeling momentum sources or sinks in the branches. This Technical Memorandum illustrates the application and verification of the code through 12 demonstrated example problems.
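The node/branch discretization described above can be illustrated with a drastically simplified flow network. This sketch assumes linearized, incompressible flow (branch flow = conductance times pressure difference), which is not GFSSP's real-fluid model; the node names, conductances, and boundary pressures are all invented for the example.

```python
import numpy as np

# Toy node/branch network: mass balance at interior nodes,
# fixed pressures at boundary nodes.
nodes = ["inlet", "n1", "n2", "outlet"]
fixed = {"inlet": 100.0, "outlet": 0.0}                  # boundary pressures
branches = [("inlet", "n1", 2.0), ("n1", "n2", 1.0),     # (from, to, conductance)
            ("n2", "outlet", 2.0), ("n1", "outlet", 0.5)]

free = [n for n in nodes if n not in fixed]
idx = {n: i for i, n in enumerate(free)}
A = np.zeros((len(free), len(free)))
b = np.zeros(len(free))
for u, v, g in branches:                                  # assemble mass balance
    for a, other in ((u, v), (v, u)):
        if a in idx:
            A[idx[a], idx[a]] += g
            if other in idx:
                A[idx[a], idx[other]] -= g
            else:
                b[idx[a]] += g * fixed[other]

p = np.linalg.solve(A, b)                                 # interior nodal pressures
flows = {(u, v): g * ((fixed[u] if u in fixed else p[idx[u]]) -
                      (fixed[v] if v in fixed else p[idx[v]]))
         for u, v, g in branches}
```

Scalar properties (here just pressure) live at nodes and flow rates live in branches, exactly the division of labor the abstract describes; GFSSP layers real-fluid thermodynamics, heat transfer, and transients on top of this same network structure.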
Generalized Fluid System Simulation Program (GFSSP) - Version 6
NASA Technical Reports Server (NTRS)
Majumdar, Alok; LeClair, Andre; Moore, Ric; Schallhorn, Paul
2015-01-01
The Generalized Fluid System Simulation Program (GFSSP) is a finite-volume based general-purpose computer program for analyzing steady state and time-dependent flow rates, pressures, temperatures, and concentrations in a complex flow network. The program is capable of modeling real fluids with phase changes, compressibility, mixture thermodynamics, conjugate heat transfer between solid and fluid, fluid transients, pumps, compressors, flow control valves and external body forces such as gravity and centrifugal. The thermo-fluid system to be analyzed is discretized into nodes, branches, and conductors. The scalar properties such as pressure, temperature, and concentrations are calculated at nodes. Mass flow rates and heat transfer rates are computed in branches and conductors. The graphical user interface allows users to build their models using the 'point, drag, and click' method; the users can also run their models and post-process the results in the same environment. The integrated fluid library supplies thermodynamic and thermo-physical properties of 36 fluids, and 24 different resistance/source options are provided for modeling momentum sources or sinks in the branches. Users can introduce new physics, non-linear and time-dependent boundary conditions through user-subroutine.
Generalized Fluid System Simulation Program, Version 6.0
NASA Technical Reports Server (NTRS)
Majumdar, A. K.; LeClair, A. C.; Moore, A.; Schallhorn, P. A.
2013-01-01
The Generalized Fluid System Simulation Program (GFSSP) is a finite-volume based general-purpose computer program for analyzing steady state and time-dependent flow rates, pressures, temperatures, and concentrations in a complex flow network. The program is capable of modeling real fluids with phase changes, compressibility, mixture thermodynamics, conjugate heat transfer between solid and fluid, fluid transients, pumps, compressors and external body forces such as gravity and centrifugal. The thermo-fluid system to be analyzed is discretized into nodes, branches, and conductors. The scalar properties such as pressure, temperature, and concentrations are calculated at nodes. Mass flow rates and heat transfer rates are computed in branches and conductors. The graphical user interface allows users to build their models using the 'point, drag, and click' method; the users can also run their models and post-process the results in the same environment. The integrated fluid library supplies thermodynamic and thermo-physical properties of 36 fluids, and 24 different resistance/source options are provided for modeling momentum sources or sinks in the branches. This Technical Memorandum illustrates the application and verification of the code through 25 demonstrated example problems.
Flow simulation system for generalized static and dynamic grids
NASA Astrophysics Data System (ADS)
Koomullil, Roy Paulose
The objective of this study is to develop a flow simulation system using generalized grids that can be used on static geometries and on dynamically moving bodies. In a generalized grid, the physical domain of interest is decomposed into cells with an arbitrary number of sides. The grid can be structured, unstructured, hanging node type, or a combination of the above. An edge-based data structure is used to store the grid information. This makes it easier to handle cells with any number of sides. The full Navier-Stokes equations, in the integral form, are taken as the relations that govern the fluid flow. A cell centered finite volume scheme is used for solving the governing equations. The numerical flux across the cell faces is calculated by an upwind scheme based on Roe's approximate Riemann solver. Taylor's series expansion of a function of multiple variables together with Green's theorem is used for the linear reconstruction of the conserved variables. The accuracy of the computations with first order and higher order schemes is compared. Limiter functions are used to preserve monotonicity, and the effect of two different limiter functions on the convergence history is studied. The skin friction coefficient is used to study the accuracy of the limiter functions. Explicit and implicit schemes are implemented, and the Generalized Minimal Residual (GMRES) method is used to solve the sparse linear system of equations resulting from the implicit scheme. The flux Jacobians for the implicit schemes are evaluated either by an approximate analytical method or by a numerical differentiation procedure. The effect of these Jacobians on the convergence of the solution to steady state is compared. Boundary conditions based on the characteristic variables are implemented for generalized grids. The viscous fluxes are evaluated explicitly. The Spalart-Allmaras one-equation turbulence model is implemented for hybrid grids to evaluate the turbulent viscosity. For dynamically moving bodies, the
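The upwind flux evaluation described above can be sketched in one dimension. The block below is a deliberately simplified stand-in: first-order upwinding for scalar linear advection rather than Roe's approximate Riemann solver for the coupled governing equations.

```python
# 1-D first-order upwind finite-volume step for linear advection
# u_t + a*u_x = 0 with a > 0, periodic boundary. The face flux is taken
# from the upwind cell; Roe's solver generalizes this choice to the
# coupled flux Jacobian of the Euler/Navier-Stokes equations.

def upwind_step(u, a, dt, dx):
    n = len(u)
    left_flux = [a * u[i - 1] for i in range(n)]   # flux through left face
    return [u[i] - dt / dx * (a * u[i] - left_flux[i]) for i in range(n)]

# With CFL = a*dt/dx = 1 the profile shifts exactly one cell per step:
u1 = upwind_step([1.0, 0.0, 0.0, 0.0], 1.0, 1.0, 1.0)
```

The scheme is conservative: with periodic boundaries, the sum of the cell averages is unchanged by the step.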
Hospitable Archean climates simulated by a general circulation model.
Wolf, E T; Toon, O B
2013-07-01
Evidence from ancient sediments indicates that liquid water and primitive life were present during the Archean despite the faint young Sun. To date, studies of Archean climate typically utilize simplified one-dimensional models that ignore clouds and ice. Here, we use an atmospheric general circulation model coupled to a mixed-layer ocean model to simulate the climate circa 2.8 billion years ago when the Sun was 20% dimmer than it is today. Surface properties are assumed to be equal to those of the present day, while ocean heat transport varies as a function of sea ice extent. Present climate is duplicated with 0.06 bar of CO2 or alternatively with 0.02 bar of CO2 and 0.001 bar of CH4. Hot Archean climates, as implied by some isotopic reconstructions of ancient marine cherts, are unattainable even in our warmest simulation having 0.2 bar of CO2 and 0.001 bar of CH4. However, cooler climates with significant polar ice, but still dominated by open ocean, can be maintained with modest greenhouse gas amounts, posing no contradiction with CO2 constraints deduced from paleosols or with practical limitations on CH4 due to the formation of optically thick organic hazes. Our results indicate that a weak version of the faint young Sun paradox, requiring only that some portion of the planet's surface maintain liquid water, may be resolved with moderate greenhouse gas inventories. Thus, hospitable late Archean climates are easily obtained in our climate model.
PMID:23808659
Extension of Generalized Fluid System Simulation Program's Fluid Property Database
NASA Technical Reports Server (NTRS)
Patel, Kishan
2011-01-01
This internship focused on the development of additional capabilities for the Generalized Fluid System Simulation Program (GFSSP). GFSSP is a thermo-fluid code used to evaluate system performance by a finite volume-based network analysis method. The program was developed primarily to analyze the complex internal flow of propulsion systems and is capable of solving many problems related to thermodynamics and fluid mechanics. GFSSP is integrated with thermodynamic programs that provide fluid properties for sub-cooled, superheated, and saturation states. For fluids that are not included in the thermodynamic property program, look-up property tables can be provided. The look-up property tables of the current release version can only handle sub-cooled and superheated states. The primary purpose of the internship was to extend the look-up tables to handle saturated states. This involves a) generation of a property table using REFPROP, a thermodynamic property program that is widely used, and b) modifications of the Fortran source code to read in an additional property table containing saturation data for both saturated liquid and saturated vapor states. Also, a method was implemented to calculate the thermodynamic properties of user-fluids within the saturation region, given values of pressure and enthalpy. These additions required new code to be written, and older code had to be adjusted to accommodate the new capabilities. Ultimately, the changes will lead to the incorporation of this new capability in future versions of GFSSP. This paper describes the development and validation of the new capability.
Sensitivity simulations of superparameterised convection in a general circulation model
NASA Astrophysics Data System (ADS)
Rybka, Harald; Tost, Holger
2015-04-01
Cloud Resolving Models (CRMs) covering a horizontal grid spacing from a few hundred meters up to a few kilometers have been used to explicitly resolve small-scale and mesoscale processes. Special attention has been paid to realistically represent cloud dynamics and cloud microphysics involving cloud droplets, ice crystals, graupel and aerosols. The entire variety of physical processes on the small scale interacts with the larger-scale circulation and has to be parameterised on the coarse grid of a general circulation model (GCM). For more than a decade, an approach to connect these two types of models, which act on different scales, has been developed to resolve cloud processes and their interactions with the large-scale flow. The concept is to use an ensemble of CRM grid cells in a 2D or 3D configuration in each grid cell of the GCM to explicitly represent small-scale processes, avoiding the use of convection and large-scale cloud parameterisations, which are a major source of uncertainties regarding clouds. The idea is commonly known as superparameterisation or cloud-resolving convection parameterisation. This study presents different simulations of an adapted Earth System Model (ESM) connected to a CRM which acts as a superparameterisation. Simulations have been performed with the ECHAM/MESSy atmospheric chemistry (EMAC) model, comparing conventional GCM runs (including convection and large-scale cloud parameterisations) with the superparameterised EMAC (SP-EMAC), modeling one year with prescribed sea surface temperatures and sea ice content. The sensitivity of atmospheric temperature, precipitation patterns, cloud amount and types is examined while changing the embedded CRM representation (orientation, width, number of CRM cells, 2D vs. 3D). Additionally, we also evaluate the radiation balance with the new model configuration, and systematically analyse the impact of tunable parameters on the radiation budget and hydrological cycle. Furthermore, the subgrid
Generalized Fluid System Simulation Program, Version 6.0
NASA Technical Reports Server (NTRS)
Majumdar, A. K.; LeClair, A. C.; Moore, R.; Schallhorn, P. A.
2016-01-01
The Generalized Fluid System Simulation Program (GFSSP) is a general purpose computer program for analyzing steady state and time-dependent flow rates, pressures, temperatures, and concentrations in a complex flow network. The program is capable of modeling real fluids with phase changes, compressibility, mixture thermodynamics, conjugate heat transfer between solid and fluid, fluid transients, pumps, compressors, and external body forces such as gravity and centrifugal. The thermofluid system to be analyzed is discretized into nodes, branches, and conductors. The scalar properties such as pressure, temperature, and concentrations are calculated at nodes. Mass flow rates and heat transfer rates are computed in branches and conductors. The graphical user interface allows users to build their models using the 'point, drag, and click' method; the users can also run their models and post-process the results in the same environment. Two thermodynamic property programs (GASP/WASP and GASPAK) provide required thermodynamic and thermophysical properties for 36 fluids: helium, methane, neon, nitrogen, carbon monoxide, oxygen, argon, carbon dioxide, fluorine, hydrogen, parahydrogen, water, kerosene (RP-1), isobutene, butane, deuterium, ethane, ethylene, hydrogen sulfide, krypton, propane, xenon, R-11, R-12, R-22, R-32, R-123, R-124, R-125, R-134A, R-152A, nitrogen trifluoride, ammonia, hydrogen peroxide, and air. The program also provides the options of using any incompressible fluid with constant density and viscosity or ideal gas. The users can also supply property tables for fluids that are not in the library. Twenty-four different resistance/source options are provided for modeling momentum sources or sinks in the branches. These options include pipe flow, flow through a restriction, noncircular duct, pipe flow with entrance and/or exit losses, thin sharp orifice, thick orifice, square edge reduction, square edge expansion, rotating annular duct, rotating radial duct
NON-SPATIAL CALIBRATIONS OF A GENERAL UNIT MODEL FOR ECOSYSTEM SIMULATIONS. (R827169)
General Unit Models simulate system interactions aggregated within one spatial unit of resolution. For unit models to be applicable to spatial computer simulations, they must be formulated generally enough to simulate all habitat elements within the landscape. We present the d...
NON-SPATIAL CALIBRATIONS OF A GENERAL UNIT MODEL FOR ECOSYSTEM SIMULATIONS. (R825792)
General Unit Models simulate system interactions aggregated within one spatial unit of resolution. For unit models to be applicable to spatial computer simulations, they must be formulated generally enough to simulate all habitat elements within the landscape. We present the d...
Generalized simulation technique for turbojet engine system analysis
NASA Technical Reports Server (NTRS)
Seldner, K.; Mihaloew, J. R.; Blaha, R. J.
1972-01-01
A nonlinear analog simulation of a turbojet engine was developed. The purpose of the study was to establish simulation techniques applicable to propulsion system dynamics and controls research. A schematic model was derived from a physical description of a J85-13 turbojet engine. Basic conservation equations were applied to each component along with their individual performance characteristics to derive a mathematical representation. The simulation was mechanized on an analog computer. The simulation was verified in both steady-state and dynamic modes by comparing analytical results with experimental data obtained from tests performed at the Lewis Research Center with a J85-13 engine. In addition, comparison was also made with performance data obtained from the engine manufacturer. The comparisons established the validity of the simulation technique.
Computer simulation of a general purpose satellite modem
NASA Astrophysics Data System (ADS)
Montgomery, William L., Jr.
1992-12-01
The performance of a digital phase shift keyed satellite modem was modeled and simulated. The probability of bit error (P(sub b)) at different levels of energy per bit to noise power ratio (E(sub b)/N(sub o)) was the performance measure. The channel was assumed to contribute only additive white Gaussian noise. A second order Costas loop performs demodulation in the modem and was the key part of the simulation. The Costas loop with second order Butterworth arm filters was tested by finding the response to a phase or frequency step. The Costas loop response was found to be in agreement with theoretical predictions in the absence of noise. Finally, the effect on P(sub b) of a rate 1/2 constraint length 7 convolutional code with eight level soft Viterbi decoding was demonstrated by the simulation. The simulation results were within 0.7 dB of theoretical. All computer simulations were done at baseband to reduce simulation times. The Monte Carlo error counting technique was used to estimate P(sub b). The effect of increasing the samples per bit in the simulation was demonstrated by the 0.4 dB improvement in P(sub b) caused by doubling the number of samples.
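The Monte Carlo error-counting technique mentioned in the abstract can be sketched for uncoded baseband BPSK in AWGN. No Costas loop, coding, or carrier recovery is modeled here; function names are our own.

```python
# Monte Carlo error-counting sketch for uncoded baseband BPSK in AWGN.
# Unit-energy antipodal bits; the noise standard deviation follows from
# Eb/N0. This is only the bare error-counting idea, not the paper's
# full Costas-loop modem model.
import math
import random

def ber_bpsk(ebno_db, nbits=200_000, seed=1):
    rng = random.Random(seed)            # fixed seed: repeatable estimate
    ebno = 10.0 ** (ebno_db / 10.0)
    sigma = math.sqrt(1.0 / (2.0 * ebno))
    errors = 0
    for _ in range(nbits):
        bit = rng.choice((-1.0, 1.0))
        received = bit + rng.gauss(0.0, sigma)
        errors += (received > 0.0) != (bit > 0.0)
    return errors / nbits

ber = ber_bpsk(4.0)   # theory: Q(sqrt(2*Eb/N0)), roughly 1.2e-2 at 4 dB
```

Accuracy of the estimate improves with the number of bits counted, which is why long runs (and baseband-only simulation) matter in practice.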
AFES (Atmospheric general circulation model For the Earth Simulator) simulation for Venus
NASA Astrophysics Data System (ADS)
Sugimoto, Norihiko; Imamura, Takeshi; Takagi, Masahiro; Matsuda, Yoshihisa; Ando, Hiroki; Kashimura, Hiroki; Ohfuchi, Wataru; Enomoto, Takeshi; Takahashi, Yoshiyuki O.; Hayashi, Yoshi-Yuki
We have developed an atmospheric general circulation model (AGCM) for Venus on the basis of AFES (AGCM For the Earth Simulator) and performed a very high-resolution simulation. The highest model resolution is T159L120; 0.75 degree times 0.75 degree latitude and longitude grids with 120 vertical layers (Δz is about 1 km). In the model, the atmosphere is dry and forced by the solar heating with the diurnal change and Newtonian cooling that relaxes the temperature to the zonally uniform basic temperature which has a virtual static stability of Venus with almost neutral layers. A fast zonal wind in a solid-body rotation is given as the initial state. In this paper, we will report several results newly obtained by this model. 1. Baroclinic instability appears in the cloud layer with small static stability and large vertical shear of the zonal flow. 2. Polar vortex is self-consistently generated by barotropic instability whose horizontal and vertical structure is consistent with the previous observations. 3. Kinetic energy spectra decreases by -5/3 power law in a range from wavenumber 4 to 45, whose range is different from that on Earth. Finally, we are now constructing the accurate radiation model of the Venus atmosphere.
General approach to boat simulation in virtual reality systems
NASA Astrophysics Data System (ADS)
Aranov, Vladislav Y.; Belyaev, Sergey Y.
2002-02-01
The paper is dedicated to real time simulation of sport boats, particularly a kayak and high-speed skimming boat, for training goals. This training is issue of the day, since kayaking and riding a high-speed skimming boat are both extreme sports. Participating in such types of competitions puts sportsmen into danger, particularly due to rapids, waterfalls, different water streams, and other obstacles. In order to make the simulation realistic, it is necessary to calculate data for at least 30 frames per second. These calculations may take not more than 5% CPU time, because very time-consuming 3D rendering process takes the rest - 95% CPU time. This paper describes an approach for creating minimal boat simulator models that satisfy the mentioned requirements. Besides, this approach can be used for other watercraft models of this kind.
Projectile General Motion in a Vacuum and a Spreadsheet Simulation
ERIC Educational Resources Information Center
Benacka, Jan
2015-01-01
This paper gives the solution and analysis of projectile motion in a vacuum if the launch and impact heights are not equal. Formulas for the maximum horizontal range and the corresponding angle are derived. An Excel application that simulates the motion is also presented, and the result of an experiment in which 38 secondary school students…
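The closed-form results referred to in the abstract, for projectile motion in a vacuum with unequal launch and impact heights, can be stated and checked directly. Here h is the height of the launch point above the impact point; this sketch is independent of the paper's Excel application.

```python
# Projectile motion in a vacuum with unequal launch and impact heights.
# h = height of launch point above impact point (h = 0 recovers the
# familiar 45-degree optimum). Standard closed-form kinematics.
import math

G = 9.81  # gravitational acceleration, m/s^2

def horizontal_range(v, theta, h):
    vy = v * math.sin(theta)
    t_flight = (vy + math.sqrt(vy * vy + 2.0 * G * h)) / G
    return v * math.cos(theta) * t_flight

def optimal_angle(v, h):
    # Angle maximizing horizontal range for launch speed v and drop h.
    return math.atan(v / math.sqrt(v * v + 2.0 * G * h))
```

For h = 0 the optimum is 45 degrees and the range reduces to v^2/g; for h > 0 the optimal angle is shallower.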
Verifying Algorithms for Autonomous Aircraft by Simulation Generalities and Example
NASA Technical Reports Server (NTRS)
White, Allan L.
2010-01-01
An open question in Air Traffic Management is what procedures can be validated by simulation, where the simulation shows that the probability of undesirable events is below the required level at some confidence level. The problem is including enough realism to be convincing while retaining enough efficiency to run the large number of trials needed for high confidence. The paper first examines the probabilistic interpretation of a typical requirement by a regulatory agency and computes the number of trials needed to establish the requirement at an equivalent confidence level. Since any simulation is likely to consider only one type of event and there are several types of events, the paper examines under what conditions this separate consideration is valid. The paper establishes a separation algorithm at the required confidence level, where the aircraft operates under feedback control and is subject to perturbations. There is a discussion showing that a scenario three or four orders of magnitude more complex is feasible. The question of what can be validated by simulation remains open, but there is reason to be optimistic.
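The trial-count arithmetic discussed above can be sketched under the simplest assumption: independent trials with zero observed failures, where (1 - p)^n <= alpha gives the number of trials n needed to claim P(event) < p at confidence 1 - alpha. This is an illustration, not the paper's exact computation.

```python
# Trials needed to demonstrate P(undesirable event) < p_req at a given
# confidence, assuming independent trials and ZERO observed failures:
# require (1 - p_req)^n <= alpha. Illustrative arithmetic only.
import math

def trials_required(p_req, confidence=0.95):
    alpha = 1.0 - confidence
    return math.ceil(math.log(alpha) / math.log(1.0 - p_req))

n = trials_required(1e-4)   # roughly 3e4, the familiar "rule of three" scale
```

For small p_req this is approximately -ln(alpha)/p_req, which shows why tight probability requirements drive trial counts, and hence why simulation efficiency matters.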
SimulaTEM: multislice simulations for general objects.
Gómez-Rodríguez, A; Beltrán-Del-Río, L M; Herrera-Becerra, R
2010-01-01
In this work we present the program SimulaTEM for the simulation of high resolution micrographs and diffraction patterns. This is a program based on the multislice approach that does not assume a periodic object. It can calculate images from finite objects, from amorphous samples, from crystals, quasicrystals, grain boundaries, nanoparticles or arbitrary objects provided the coordinates of all the atoms can be supplied.
GENERAL REQUIREMENTS FOR SIMULATION MODELS IN WASTE MANAGEMENT
Miller, Ian; Kossik, Rick; Voss, Charlie
2003-02-27
Most waste management activities are decided upon and carried out in a public or semi-public arena, typically involving the waste management organization, one or more regulators, and often other stakeholders and members of the public. In these environments, simulation modeling can be a powerful tool in reaching a consensus on the best path forward, but only if the models that are developed are understood and accepted by all of the parties involved. These requirements for understanding and acceptance of the models constrain the appropriate software and model development procedures that are employed. This paper discusses requirements for both simulation software and for the models that are developed using the software. Requirements for the software include transparency, accessibility, flexibility, extensibility, quality assurance, ability to do discrete and/or continuous simulation, and efficiency. Requirements for the models that are developed include traceability, transparency, credibility/validity, and quality control. The paper discusses these requirements with specific reference to the requirements for performance assessment models that are used for predicting the long-term safety of waste disposal facilities, such as the proposed Yucca Mountain repository.
A General Simulator for Acid-Base Titrations
NASA Astrophysics Data System (ADS)
de Levie, Robert
1999-07-01
General formal expressions are provided to facilitate the automatic computer calculation of acid-base titration curves of arbitrary mixtures of acids, bases, and salts, without and with activity corrections based on the Davies equation. Explicit relations are also given for the buffer strength of mixtures of acids, bases, and salts.
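A reduced version of such a titration simulator can be sketched for a single weak monoprotic acid titrated with strong base, solving the charge balance by bisection. The activity corrections (Davies equation) and general mixtures treated in the paper are omitted.

```python
# Reduced titration sketch: pH of a weak monoprotic acid (conc ca,
# volume va) titrated with strong base (conc cb, volume vb), from the
# charge balance [H+] + [Na+] = [A-] + [OH-], solved by bisection in
# log space. No activity corrections.
import math

KW = 1.0e-14  # water autoprotolysis constant at 25 C

def ph_weak_acid(ca, va, cb, vb, ka):
    dil = va + vb                             # total volume after mixing
    def balance(h):
        na = cb * vb / dil                    # [Na+] from added strong base
        a = ka * (ca * va / dil) / (ka + h)   # [A-] from acid dissociation
        return h + na - a - KW / h            # > 0 when h is above the root
    lo, hi = 1e-14, 1.0
    for _ in range(100):                      # balance(h) increases with h
        mid = math.sqrt(lo * hi)
        if balance(mid) > 0:
            hi = mid
        else:
            lo = mid
    return -math.log10(math.sqrt(lo * hi))

ph_start = ph_weak_acid(0.1, 50.0, 0.1, 0.0, 1.8e-5)  # acetic acid, no titrant
```

At the half-equivalence point the computed pH approaches pKa, the textbook buffer result.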
A General Simulation Method for Multiple Bodies in Proximate Flight
NASA Technical Reports Server (NTRS)
Meakin, Robert L.
2003-01-01
Methods of unsteady aerodynamic simulation for an arbitrary number of independent bodies flying in close proximity are considered. A novel method to efficiently detect collision contact points is described. A method to compute body trajectories in response to aerodynamic loads, applied loads, and inter-body collisions is also given. The physical correctness of the methods is verified by comparison to a set of analytic solutions. The methods, combined with a Navier-Stokes solver, are used to demonstrate the possibility of predicting the unsteady aerodynamics and flight trajectories of moving bodies that involve rigid-body collisions.
BIRD: A general interface for sparse distributed memory simulators
NASA Technical Reports Server (NTRS)
Rogers, David
1990-01-01
Kanerva's sparse distributed memory (SDM) has now been implemented for at least six different computers, including SUN3 workstations, the Apple Macintosh, and the Connection Machine. A common interface for input of commands would both aid testing of programs on a broad range of computer architectures and assist users in transferring results from research environments to applications. A common interface also allows secondary programs to generate command sequences for a sparse distributed memory, which may then be executed on the appropriate hardware. The BIRD program is an attempt to create such an interface. Simplifying access to different simulators should assist developers in finding appropriate uses for SDM.
Plasma Jet Simulations Using a Generalized Ohm's Law
NASA Technical Reports Server (NTRS)
Ebersohn, Frans; Shebalin, John V.; Girimaji, Sharath S.
2012-01-01
Plasma jets are important physical phenomena in astrophysics and plasma propulsion devices. A currently proposed dual jet plasma propulsion device to be used for ISS experiments strongly resembles a coronal loop and further draws a parallel between these physical systems [1]. To study plasma jets we use numerical methods that solve the compressible MHD equations using the generalized Ohm's law [2]. Here, we will discuss the crucial underlying physics of these systems along with the numerical procedures we utilize to study them. Recent results from our numerical experiments will be presented and discussed.
Optimal generalized multistep integration formulae for real-time digital simulation
NASA Technical Reports Server (NTRS)
Moerder, D. D.; Halyo, N.
1985-01-01
The problem of discretizing a dynamical system for real-time digital simulation is considered. Treating the system and its simulation as stochastic processes leads to a statistical characterization of simulator fidelity. A plant discretization procedure based on an efficient matrix generalization of explicit linear multistep discrete integration formulae is introduced, which minimizes a weighted sum of the mean squared steady-state and transient error between the system and simulator outputs.
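A scalar instance of an explicit linear multistep formula, the two-step Adams-Bashforth method, illustrates the class of integrators the paper generalizes to matrix form for simulator fidelity; this sketch is not the paper's optimized formula.

```python
# Scalar two-step Adams-Bashforth integrator, an explicit linear
# multistep formula: y_{n+1} = y_n + h*(3/2*f(y_n) - 1/2*f(y_{n-1})).
# The second starting value is bootstrapped with one Euler step.
import math

def ab2(f, y0, h, steps):
    ys = [y0, y0 + h * f(y0)]
    for n in range(1, steps):
        ys.append(ys[n] + h * (1.5 * f(ys[n]) - 0.5 * f(ys[n - 1])))
    return ys

# y' = -y integrated to t = 1; compare with exp(-1) ~ 0.368
ys = ab2(lambda y: -y, 1.0, 0.01, 100)
```

The method is second-order accurate, so halving the step size roughly quarters the error; the paper's contribution is choosing such coefficients, in matrix form, to minimize a statistical measure of simulator error rather than local truncation error alone.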
GOOSE, a generalized object-oriented simulation environment
Ford, C.E.; March-Leuba, C.; Guimaraes, L.; Ugolini, D.
1991-01-01
GOOSE, prototype software for a fully interactive, object-oriented simulation environment, is being developed as part of the Advanced Controls Program at Oak Ridge National Laboratory. Dynamic models may easily be constructed and tested; fully interactive capabilities allow the user to alter model parameters and complexity without recompilation. This environment provides access to powerful tools, such as numerical integration packages, graphical displays, and online help. Portability has been an important design goal; the system was written in Objective-C in order to run on a wide variety of computers and operating systems, including UNIX workstations and personal computers. A detailed library of nuclear reactor components, currently under development, will also be described. 5 refs., 4 figs.
Synchronization of autonomous objects in discrete event simulation
NASA Technical Reports Server (NTRS)
Rogers, Ralph V.
1990-01-01
Autonomous objects in event-driven discrete event simulation offer the potential to combine the freedom of unrestricted movement and positional accuracy through Euclidean space of time-driven models with the computational efficiency of event-driven simulation. The principal challenge to autonomous object implementation is object synchronization. The concept of a spatial blackboard is offered as a potential methodology for synchronization. The issues facing implementation of a spatial blackboard are outlined and discussed.
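The event-driven substrate underlying this discussion can be sketched as a time-ordered priority queue of scheduled actions. The spatial-blackboard synchronization proposed in the abstract is not modeled here; this shows only the scheduling core it would sit on, with hypothetical names.

```python
# Minimal event-driven simulation core: a priority queue of
# (time, seq, action) tuples popped in time order.
import heapq

class EventSim:
    def __init__(self):
        self.queue = []
        self.now = 0.0
        self.seq = 0          # tie-breaker so actions are never compared

    def schedule(self, delay, action):
        self.seq += 1
        heapq.heappush(self.queue, (self.now + delay, self.seq, action))

    def run(self, until):
        while self.queue and self.queue[0][0] <= until:
            self.now, _, action = heapq.heappop(self.queue)
            action(self)      # an event may schedule further events

log = []

def first(sim):
    log.append(sim.now)
    sim.schedule(2.0, lambda s: log.append(s.now))

sim = EventSim()
sim.schedule(1.0, first)
sim.run(until=10.0)           # fires at t = 1.0, then at t = 3.0
```

Time advances only at event boundaries, which is the efficiency advantage over time-stepped models that the abstract seeks to combine with free spatial movement.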
Parametrizing linear generalized Langevin dynamics from explicit molecular dynamics simulations
Gottwald, Fabian; Karsten, Sven; Ivanov, Sergei D.; Kühn, Oliver
2015-06-28
Fundamental understanding of complex dynamics in many-particle systems on the atomistic level is of utmost importance. Often the systems of interest are of macroscopic size but can be partitioned into a few important degrees of freedom which are treated most accurately and others which constitute a thermal bath. The linear generalized Langevin equation, which can be rigorously derived by means of a linear projection technique, attracts particular attention in this respect. Within this framework, a complicated interaction with the bath can be reduced to a single memory kernel. This memory kernel in turn is parametrized for a particular system studied, usually by means of time-domain methods based on explicit molecular dynamics data. Here, we argue that this task is more naturally achieved in the frequency domain and develop a Fourier-based parametrization method that outperforms its time-domain analogues. Very surprisingly, the widely used rigid bond method turns out to be inappropriate in general. Importantly, we show that the rigid bond approach leads to a systematic overestimation of relaxation times, unless the system under study consists of a harmonic bath bi-linearly coupled to the relevant degrees of freedom.
An intelligent interactive simulator of clinical reasoning in general surgery.
Wang, S.; el Ayeb, B.; Echavé, V.; Preiss, B.
1993-01-01
We introduce an interactive computer environment for teaching in general surgery and for diagnostic assistance. The environment consists of a knowledge-based system coupled with an intelligent interface that allows users to acquire conceptual knowledge and clinical reasoning techniques. Knowledge is represented internally within a probabilistic framework and externally through an interface inspired by Concept Graphics. Given a set of symptoms, the internal knowledge framework computes the most probable set of diseases as well as the best alternatives. The interface displays CGs illustrating the results and prompting essential facts of a medical situation or a process. The system is then ready to receive additional information or to suggest further investigation. Based on the new information, the system will narrow the solutions with increased belief coefficients. PMID:8130508
General Relativistic Simulations of Binary Neutron Star Mergers
NASA Astrophysics Data System (ADS)
Giacomazzo, Bruno; Rezzolla, Luciano; Baiotti, Luca; Link, David; Font, José A.
2011-08-01
Binary neutron star mergers are one of the possible candidates for the central engine of short gamma-ray bursts (GRBs) and they are also powerful sources of gravitational waves. We have used our fully general relativistic hydrodynamical code Whisky to investigate the merger of binary neutron star systems and we have in particular studied the properties of the tori that can be formed by these systems, their possible connection with the engine of short GRBs and the gravitational wave signals that detectors such as advanced LIGO will be able to detect. We have also shown how the mass of the torus varies as a function of the total mass of the neutron stars composing the binary and of their mass ratio and we have found that tori sufficiently massive to power short GRBs can indeed be formed.
General relativistic magnetohydrodynamical simulations of the jet in M 87
NASA Astrophysics Data System (ADS)
Mościbrodzka, Monika; Falcke, Heino; Shiokawa, Hotaka
2016-02-01
Context. The connection between black hole, accretion disk, and radio jet can be constrained best by fitting models to observations of nearby low-luminosity galactic nuclei, in particular the well-studied sources Sgr A* and M 87. There has been considerable progress in modeling the central engine of active galactic nuclei by an accreting supermassive black hole coupled to a relativistic plasma jet. However, can a single model be applied to a range of black hole masses and accretion rates? Aims: Here we want to compare the latest three-dimensional numerical model, originally developed for Sgr A* in the center of the Milky Way, to radio observations of the much more powerful and more massive black hole in M 87. Methods: We postprocess three-dimensional GRMHD models of a jet-producing radiatively inefficient accretion flow around a spinning black hole using relativistic radiative transfer and ray-tracing to produce model spectra and images. As a key new ingredient in these models, we allow the proton-electron coupling in these simulations to depend on the magnetic properties of the plasma. Results: We find that the radio emission in M 87 is described well by a combination of a two-temperature accretion flow and a hot single-temperature jet. Most of the radio emission in our simulations comes from the jet sheath. The model fits the basic observed characteristics of the M 87 radio core: it is "edge-brightened", starts subluminally, has a flat spectrum, and increases in size with wavelength. The best fit model has a mass-accretion rate of Ṁ ~ 9 × 10^-3 M⊙ yr^-1 and a total jet power of Pj ~ 10^43 erg s^-1. Emission at λ = 1.3 mm is produced by the counter-jet close to the event horizon. Its characteristic crescent shape surrounding the black hole shadow could be resolved by future millimeter-wave VLBI experiments. Conclusions: The model was successfully derived from one for the supermassive black hole in the center of the Milky Way by appropriately scaling mass and
NASA Technical Reports Server (NTRS)
Majumdar, Alok
2013-01-01
The purpose of the paper is to present the analytical capability developed to model no-vent chill and fill of a cryogenic tank to support the CPST (Cryogenic Propellant Storage and Transfer) program. The Generalized Fluid System Simulation Program (GFSSP) was adapted to simulate the charge-hold-vent method of tank chilldown. GFSSP models were developed to simulate chilldown of the LH2 tank in the K-site Test Facility, and numerical predictions were compared with test data. The report also describes the modeling technique for simulating the chilldown of a cryogenic transfer line; GFSSP models were developed to simulate the chilldown of a long transfer line and compared with test data.
NASA Technical Reports Server (NTRS)
Kiteley, G. W.; Harris, R. L., Sr.
1978-01-01
Ten student pilots were given a 1-hour training session in the NASA Langley Research Center's General Aviation Simulator by a certified flight instructor, and a follow-up flight evaluation was performed by each student's own flight instructor, who had also flown the simulator. The students and instructors generally felt that the simulator session had a positive effect on the students. They recommended that a simulator with a visual scene and a motion base would be useful in practicing maneuvers such as landing approaches, level flight, climbs, dives, turns, instrument work, and radio navigation, and suggested that the simulator would be an efficient means of introducing the student to new maneuvers before performing them in flight. The students and instructors estimated that about 8 hours of simulator time could be profitably devoted to private pilot training.
General specifications for the development of a PC-based simulator of the NASA RECON system
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Triantafyllopoulos, Spiros
1984-01-01
The general specifications for the design and implementation of an IBM PC/XT-based simulator of the NASA RECON system, including record designs, file structure designs, command language analysis, program design issues, error recovery considerations, and usage monitoring facilities are discussed. Once implemented, such a simulator will be utilized to evaluate the effectiveness of simulated information system access in addition to actual system usage as part of the total educational programs being developed within the NASA contract.
NASA Technical Reports Server (NTRS)
Boland, J. S., III
1975-01-01
A general simulation program (GSP) involving nonlinear state estimation for space vehicle flight navigation systems is presented. A complete explanation of the iterative guidance mode guidance law, the derivation of the dynamics, the coordinate frames, and the state estimation routines is given to clarify the assumptions and approximations involved, so that simulation results can be placed in their proper perspective. A complete set of computer acronyms and their definitions, as well as explanations of the subroutines used in the GSP simulator, are included. To facilitate input/output, a complete set of compatible numbers, with units, is included to aid in data development. Format specifications, the meanings and purposes of output data phrases, and computer card data input are clearly spelled out. A large number of simulation and analytical studies were used to determine the validity of the simulator itself as well as of the various data runs.
Gleckler, P.J.; Randall, D.A.; Boer, G.
1995-04-01
This paper summarizes the ocean surface net energy flux simulated by fifteen atmospheric general circulation models constrained by realistically-varying sea surface temperatures and sea ice as part of the Atmospheric Model Intercomparison Project. In general, the simulated energy fluxes are within the very large observational uncertainties. However, the annual mean oceanic meridional heat transport that would be required to balance the simulated surface fluxes is shown to be critically sensitive to the radiative effects of clouds, to the extent that even the sign of the Southern Hemisphere ocean heat transport can be affected by the errors in simulated cloud-radiation interactions. It is suggested that improved treatment of cloud radiative effects should help in the development of coupled atmosphere-ocean general circulation models. 16 refs., 3 figs.
NASA Technical Reports Server (NTRS)
Gleckler, P. J.; Randall, D. A.; Boer, G.; Colman, R.; Dix, M.; Galin, V.; Helfand, M.; Kiehl, J.; Kitoh, A.; Lau, W.
1995-01-01
This paper summarizes the ocean surface net energy flux simulated by fifteen atmospheric general circulation models constrained by realistically-varying sea surface temperatures and sea ice as part of the Atmospheric Model Intercomparison Project. In general, the simulated energy fluxes are within the very large observational uncertainties. However, the annual mean oceanic meridional heat transport that would be required to balance the simulated surface fluxes is shown to be critically sensitive to the radiative effects of clouds, to the extent that even the sign of the Southern Hemisphere ocean heat transport can be affected by the errors in simulated cloud-radiation interactions. It is suggested that improved treatment of cloud radiative effects should help in the development of coupled atmosphere-ocean general circulation models.
Potential of using simulated patients to study the performance of general practitioners.
Kinnersley, P; Pill, R
1993-01-01
A review of the literature on the use of simulated patients is presented. While simulated patients have become established for the education of medical undergraduates, international work suggests that they may also be of value for studying the performance of established general practitioners. A preliminary study is described in which simulated patients were used at practices in Cardiff. Roles were developed which would stimulate a discussion focusing on health risks. No particular practical problems were encountered but concerns were expressed about the validity of the data. Suggestions are made for the further development of the use of simulated patients. PMID:8398247
NASA Technical Reports Server (NTRS)
Dehghani, Navid; Tankenson, Michael
2006-01-01
This paper presents an architectural description of the Mission Data Processing and Control System (MPCS), an event-driven, multi-mission ground data processing system providing uplink, downlink, and data management capabilities, which will support the Mars Science Laboratory (MSL) project as its first target mission. MPCS is developed from a set of small reusable components, implemented in Java, each designed with a specific function and well-defined interfaces. An industry-standard messaging bus is used to transfer information among system components. Components generate standard messages, which capture system information and serve as triggers supporting the event-driven architecture of the system. Event-driven systems are highly desirable for processing high-rate telemetry (science and engineering) data and for supporting automation of many mission operations processes.
A Generalized Weight-Based Particle-In-Cell Simulation Scheme
W.W. Lee, T.G. Jenkins and S. Ethier
2010-02-02
A generalized weight-based particle simulation scheme suitable for simulating magnetized plasmas, where the zeroth-order inhomogeneity is important, is presented. The scheme is an extension of the perturbative simulation schemes developed earlier for particle-in-cell (PIC) simulations. The new scheme is designed to simulate both the perturbed distribution (δf) and the full distribution (full-F) within the same code. The development is based on the concept of multiscale expansion, which separates the scale lengths of the background inhomogeneity from those associated with the perturbed distributions. The potential advantage of such an arrangement is to minimize the particle noise by using δf in the linear stage of the simulation, while retaining the flexibility of a full-F capability in the fully nonlinear stage of the development, when signals associated with plasma turbulence are at a much higher level than those from the intrinsic particle noise.
Simulation of the great plains low-level jet and associated clouds by general circulation models
Ghan, S.J.; Bian, X.; Corsetti, L.
1996-07-01
The low-level jet frequently observed in the Great Plains of the United States forms preferentially at night and apparently influences the timing of the thunderstorms in the region. The authors have found that both the European Centre for Medium-Range Weather Forecasts general circulation model and the National Center for Atmospheric Research Community Climate Model simulate the low-level jet rather well, although the spatial distributions of the jet frequency simulated by the two GCMs differ considerably. Sensitivity experiments have demonstrated that the simulated low-level jet is surprisingly robust, with similar simulations at much coarser horizontal and vertical resolutions. However, both GCMs fail to simulate the observed relationship between clouds and the low-level jet. The pronounced nocturnal maximum in thunderstorm frequency associated with the low-level jet is not simulated well by either GCM, with only weak evidence of a nocturnal maximum in the Great Plains. 36 refs., 20 figs.
Deng, Shaozhong; Xue, Changfeng; Baumketner, Andriy; Jacobs, Donald; Cai, Wei
2013-01-01
This paper extends the image charge solvation model (ICSM) [J. Chem. Phys. 131, 154103 (2009)], a hybrid explicit/implicit method to treat electrostatic interactions in computer simulations of biomolecules formulated for spherical cavities, to prolate spheroidal and triaxial ellipsoidal cavities, designed to better accommodate non-spherical solutes in molecular dynamics (MD) simulations. In addition to the utilization of a general truncated octahedron as the MD simulation box, central to the proposed extension is an image approximation method to compute the reaction field for a point charge placed inside such a non-spherical cavity by using a single image charge located outside the cavity. The resulting generalized image charge solvation model (GICSM) is tested in simulations of liquid water, and the results are analyzed in comparison with those obtained from the ICSM simulations as a reference. We find that, with improved computational efficiency due to smaller simulation cells and consequently a smaller number of explicit solvent molecules, the generalized model can still faithfully reproduce known static and dynamic properties of liquid water, at least for the systems considered in the present paper, indicating its great potential to become an accurate but more efficient alternative to the ICSM when bio-macromolecules of irregular shapes are to be simulated. PMID:23913979
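The single-image-charge idea the ICSM family builds on can be sketched for the simplest geometry, a spherical cavity, using Friedman's classic image approximation (the ellipsoidal generalization in the paper is more involved). The function name and argument layout below are illustrative, not the authors' API:

```python
# Sketch of the single-image-charge approximation for the reaction field of a
# charge inside a spherical dielectric cavity (Friedman's approximation):
# the surrounding dielectric's response is mimicked by one image charge
# placed outside the cavity at the Kelvin image position.

def friedman_image(q, d, radius, eps_out):
    """Charge q sits a distance d from the center of a spherical cavity
    (vacuum inside, dielectric constant eps_out outside).
    Returns (image_charge, image_distance_from_center)."""
    gamma = (eps_out - 1.0) / (eps_out + 1.0)   # dielectric contrast factor
    q_img = -gamma * (radius / d) * q           # image charge magnitude
    d_img = radius ** 2 / d                     # Kelvin image position
    return q_img, d_img

# Unit charge 1 A off-center in a 2 A cavity surrounded by water (eps ~ 80):
q_img, d_img = friedman_image(1.0, 1.0, 2.0, 80.0)
```

In the conductor limit (eps_out large, gamma close to 1) this reduces to the textbook grounded-sphere image charge, which is why the approximation works well for high-dielectric solvents such as water.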
NASA Technical Reports Server (NTRS)
Lee, J.-J.
1976-01-01
In anticipation of extremely heavy loading requirements by the Viking mission during the post-landing periods, a GPSS model was developed for the purpose of simulating these requirements on the Viking batch computer system. This paper presents the effort pursued in evaluating such a model and the results thereby obtained. The evaluation effort consists of selecting the evaluation approach, collecting actual test run data, making comparisons, and deriving conclusions.
Fuerst, Steven V.; Mizuno, Yosuke; Nishikawa, Ken-Ichi; Wu, Kinwah; /Mullard Space Sci. Lab.
2007-01-05
We calculate the emission from relativistic flows in black hole systems using a fully general relativistic radiative transfer formulation, with flow structures obtained by general relativistic magneto-hydrodynamic simulations. We consider thermal free-free emission and thermal synchrotron emission. Bright filament-like features protrude (visually) from the accretion disk surface, which are enhancements of synchrotron emission where the magnetic field roughly aligns with the line-of-sight in the co-moving frame. The features move back and forth as the accretion flow evolves, but their visibility and morphology are robust. We propose that variations and drifts of the features produce certain X-ray quasi-periodic oscillations (QPOs) observed in black-hole X-ray binaries.
Using a million cell simulation of the cerebellum: network scaling and task generality
Li, Wen-Ke; Hausknecht, Matthew J.; Stone, Peter H.; Mauk, Michael D.
2012-01-01
Several factors combine to make it feasible to build computer simulations of the cerebellum and to test them in biologically realistic ways. These simulations can be used to help understand the computational contributions of various cerebellar components, including the relevance of the enormous number of neurons in the granule cell layer. In previous work we have used a simulation containing 12000 granule cells to develop new predictions and to account for various aspects of eyelid conditioning, a form of motor learning mediated by the cerebellum. Here we demonstrate the feasibility of scaling up this simulation to over one million granule cells using parallel graphics processing unit (GPU) technology. We observe that this increase in number of granule cells requires only twice the execution time of the smaller simulation on the GPU. We demonstrate that this simulation, like its smaller predecessor, can emulate certain basic features of conditioned eyelid responses, with a slight improvement in performance in one measure. We also use this simulation to examine the generality of the computational properties that we have derived from studying eyelid conditioning. We demonstrate that this scaled-up simulation can learn a high level of performance in a classic machine learning task, the cart-pole balancing task. These results suggest that this parallel GPU technology can be used to build very large-scale simulations whose connectivity ratios match those of the real cerebellum and that these simulations can be used to guide future studies on cerebellar-mediated tasks and on machine learning problems. PMID:23200194
General purpose simulation system of the data management system for Space Shuttle mission 18
NASA Technical Reports Server (NTRS)
Bengtson, N. M.; Mellichamp, J. M.; Smith, O. C.
1976-01-01
A simulation program for the flow of data through the Data Management System of Spacelab and Space Shuttle was presented. The science, engineering, command and guidance, navigation and control data were included. The programming language used was General Purpose Simulation System V (OS). The science and engineering data flow was modeled from its origin at the experiments and subsystems to transmission from Space Shuttle. Command data flow was modeled from the point of reception onboard and from the CDMS Control Panel to the experiments and subsystems. The GN&C data flow model handled data between the General Purpose Computer and the experiments and subsystems. Mission 18 was the particular flight chosen for simulation. The general structure of the program is presented, followed by a user's manual. Input data required to make runs are discussed followed by identification of the output statistics. The appendices contain a detailed model configuration, program listing and results.
NASA Astrophysics Data System (ADS)
Vincze, László; Janssens, Koen; Adams, Fred; Rivers, M. L.; Jones, K. W.
1995-03-01
A general Monte Carlo code for the simulation of X-ray fluorescence spectrometers, described in a previous paper, is extended to predict the spectral response of instruments employing polarized exciting radiation. Details of the calculation method specific to the correct simulation of photon-matter scatter interactions in the case of polarized X-ray beams are presented. Comparisons are made with experimentally collected spectral data obtained from a monochromatic X-ray fluorescence setup installed at a synchrotron radiation source. The use of the simulation code for quantitative analysis of intermediate and massive samples is also demonstrated.
NASA Technical Reports Server (NTRS)
Lutz, R. J.; Spar, J.
1978-01-01
The Hansen atmospheric model was used to compute five monthly forecasts (October 1976 through February 1977). The comparison is based on an energetics analysis, meridional and vertical profiles, error statistics, and prognostic and observed mean maps. The monthly mean model simulations suffer from several defects. There is, in general, no skill in the simulation of the monthly mean sea-level pressure field, and only marginal skill is indicated for the 850 mb temperatures and 500 mb heights. The coarse-mesh model appears to generate a less satisfactory monthly mean simulation than the finer mesh GISS model.
General circulation model simulations of winter and summer sea-level pressures over North America
McCabe, G.J.; Legates, D.R.
1992-01-01
In this paper, observed sea-level pressures were used to evaluate winter and summer sea-level pressures over North America simulated by the Goddard Institute for Space Studies (GISS) and the Geophysical Fluid Dynamics Laboratory (GFDL) general circulation models. The objective of the study is to determine how similar the spatial and temporal distributions of GCM-simulated daily sea-level pressures over North America are to observed distributions. Overall, both models are better at reproducing observed within-season variance of winter and summer sea-level pressures than they are at simulating the magnitude of mean winter and summer sea-level pressures. -from Authors
NASA Technical Reports Server (NTRS)
Harvey, Jason; Moore, Michael
2013-01-01
The General-Use Nodal Network Solver (GUNNS) is a modeling software package that combines nodal analysis and the hydraulic-electric analogy to simulate fluid, electrical, and thermal flow systems. GUNNS is developed by L-3 Communications under the TS21 (Training Systems for the 21st Century) project for NASA Johnson Space Center (JSC), primarily for use in space vehicle training simulators at JSC. It has sufficient compactness and fidelity to model the fluid, electrical, and thermal aspects of space vehicles in real-time simulations running on commodity workstations, for vehicle crew and flight controller training. It has a reusable and flexible component and system design, and a Graphical User Interface (GUI), providing capability for rapid GUI-based simulator development, ease of maintenance, and associated cost savings. GUNNS is optimized for NASA's Trick simulation environment, but can be run independently of Trick.
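The combination of nodal analysis with the hydraulic-electric analogy that GUNNS uses can be illustrated with a toy linear network solver (a hypothetical sketch, not the GUNNS API): pressures map to voltages and flow rates to currents, so a single conductance-matrix solve covers fluid, electrical, and thermal networks alike.

```python
# Toy nodal-analysis solver: assemble the reduced conductance matrix for a
# network of two-terminal conductances and solve G*v = i for node potentials.
# Node 0 is ground (potential fixed at 0); nodes are numbered 1..n.

def solve_nodal(conductances, sources, n):
    """conductances: list of (node_a, node_b, G) links;
    sources: dict node -> injected flow (current, mass flow, heat flow).
    Returns potentials at nodes 1..n."""
    A = [[0.0] * n for _ in range(n)]
    b = [sources.get(k + 1, 0.0) for k in range(n)]
    for a, c, g in conductances:
        for i in (a, c):                  # diagonal (self-conductance) terms
            if i > 0:
                A[i - 1][i - 1] += g
        if a > 0 and c > 0:               # off-diagonal coupling terms
            A[a - 1][c - 1] -= g
            A[c - 1][a - 1] -= g
    # Gaussian elimination without pivoting (fine for a small sketch).
    for col in range(n):
        piv = A[col][col]
        for r in range(col + 1, n):
            f = A[r][col] / piv
            for cc in range(col, n):
                A[r][cc] -= f * A[col][cc]
            b[r] -= f * b[col]
    v = [0.0] * n
    for r in range(n - 1, -1, -1):
        v[r] = (b[r] - sum(A[r][c] * v[c] for c in range(r + 1, n))) / A[r][r]
    return v

# 1 A injected at node 1, two 1-S links in series to ground: V1 = 2, V2 = 1.
v = solve_nodal([(1, 2, 1.0), (2, 0, 1.0)], {1: 1.0}, 2)
```

The same matrix assembly serves a pipe network if G is a linearized flow conductance and b holds mass-flow sources, which is the essence of the hydraulic-electric analogy.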
Development and evaluation of a general aviation real world noise simulator
NASA Technical Reports Server (NTRS)
Galanter, E.; Popper, R.
1980-01-01
An acoustic playback system is described which realistically simulates the sounds experienced by the pilot of a general aviation aircraft during engine idle, take-off, climb, cruise, descent, and landing. The physical parameters of the signal as they appear in the simulator environment are compared to analogous parameters derived from signals recorded during actual flight operations. The acoustic parameters of the simulated and real signals during cruise conditions are within plus or minus two dB in third octave bands from 0.04 to 4 kHz. The overall A-weighted levels of the signals are within one dB of signals generated in the actual aircraft during equivalent maneuvers. Psychoacoustic evaluations of the simulator signal are compared with similar measurements based on transcriptions of actual aircraft signals. The subjective judgments made by human observers support the conclusion that the simulated sound closely approximates transcribed sounds of real aircraft.
The hardware accelerator array for logic simulation
Hansen, N H
1991-05-01
Hardware acceleration exploits the parallelism inherent in large circuit simulations to achieve significant increases in performance. Simulation accelerators have been developed based on the compiled code algorithm or the event-driven algorithm. The greater flexibility of the event-driven algorithm has resulted in several important developments in hardware acceleration architecture. Some popular commercial products have been developed based on the event-driven algorithm and data-flow architectures. Conventional data-flow architectures require complex switching networks to distribute operands among processing elements resulting in considerable overhead. An accelerator array architecture based on a nearest-neighbor communication has been developed in this thesis. The design is simulated in detail at the behavioral level. Its performance is evaluated and shown to be superior to that of a conventional data-flow accelerator. 14 refs., 48 figs., 5 tabs.
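The event-driven algorithm underlying these accelerators can be sketched in a few lines: keep a time-ordered event queue and re-evaluate a gate only when one of its inputs actually changes. The netlist format and two-gate example below are hypothetical, purely for illustration:

```python
import heapq

# Minimal event-driven gate-level simulation: events are (time, signal, value)
# tuples in a priority queue; a gate is re-evaluated only when a signal in its
# fan-in changes, and its output event is scheduled after its delay.

def simulate(gates, fanout, values, events, t_end):
    """gates: name -> (func, delay, input_signals, output_signal);
    fanout: signal -> list of gate names it drives;
    values: initial signal values; events: list of (time, signal, value)."""
    values = dict(values)
    queue = list(events)
    heapq.heapify(queue)
    while queue:
        t, sig, val = heapq.heappop(queue)
        if t > t_end or values.get(sig) == val:
            continue                          # drop late or no-change events
        values[sig] = val
        for name in fanout.get(sig, ()):      # only affected gates re-evaluate
            func, delay, inputs, out = gates[name]
            new = func(*(values[i] for i in inputs))
            heapq.heappush(queue, (t + delay, out, new))
    return values

# Two-gate netlist: AND(a, b) -> c (delay 2), then NOT(c) -> d (delay 1).
gates = {
    "and1": (lambda a, b: a & b, 2, ("a", "b"), "c"),
    "not1": (lambda c: 1 - c,    1, ("c",),     "d"),
}
fanout = {"a": ["and1"], "b": ["and1"], "c": ["not1"]}
init = {"a": 0, "b": 1, "c": 0, "d": 1}
final = simulate(gates, fanout, init, [(0, "a", 1)], t_end=10)
```

The no-change filter is what gives the event-driven approach its advantage over compiled-code simulation: quiescent parts of a large circuit cost nothing, which is also the parallelism hardware accelerators exploit.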
Estimating plant available water for general crop simulations in ALMANAC/APEX/EPIC/SWAT
Technology Transfer Automated Retrieval System (TEKTRAN)
Process-based simulation models ALMANAC/APEX/EPIC/SWAT contain generalized plant growth subroutines to predict biomass and crop yield. Environmental constraints typically restrict plant growth and yield. Water stress is often an important limiting factor; it is calculated as the sum of water use f...
Computer considerations for real time simulation of a generalized rotor model
NASA Technical Reports Server (NTRS)
Howe, R. M.; Fogarty, L. E.
1977-01-01
Scaled equations were developed to meet requirements for real-time computer simulation of the rotor system research aircraft. These equations form the basis for consideration of both digital and hybrid mechanization for real-time simulation. For all-digital simulation, estimates of the required speed in terms of equivalent operations per second are developed based on the complexity of the equations and the required integration frame rates. For both conventional hybrid simulation and hybrid simulation using time-shared analog elements, the amount of required equipment is estimated, along with a consideration of the dynamic errors. Conventional hybrid mechanization using analog simulation of those rotor equations which involve rotor-spin frequencies (these constitute the bulk of the equations) requires too much analog equipment. Hybrid simulation using time-sharing techniques for the analog elements appears possible with a reasonable amount of analog equipment. All-digital simulation with affordable general-purpose computers is not possible because of speed limitations, but specially configured digital computers do have the required speed and constitute the recommended approach.
Chelli, Riccardo; Signorini, Giorgio F
2012-03-13
Serial generalized ensemble simulations, such as simulated tempering, enhance phase space sampling through non-Boltzmann weighting protocols. The most critical aspect of these methods with respect to the popular replica exchange schemes is the difficulty in determining the weight factors which enter the criterion for accepting replica transitions between different ensembles. Recently, a method, called BAR-SGE, was proposed for estimating optimal weight factors by resorting to a self-consistent procedure applied during the simulation (J. Chem. Theory Comput. 2010, 6, 1935-1950). Calculations on model systems have shown that BAR-SGE outperforms other approaches proposed for determining optimal weights in serial generalized ensemble simulations. However, extensive tests on real systems and on convergence features with respect to the replica exchange method are lacking. Here, we report on a thorough analysis of BAR-SGE by performing molecular dynamics simulations of a solvated alanine dipeptide, a system often used as a benchmark to test new computational methodologies, and comparing results to the replica exchange method. To this aim, we have supplemented the ORAC program, a FORTRAN suite for molecular dynamics simulations (J. Comput. Chem. 2010, 31, 1106-1116), with several variants of the BAR-SGE technique. An illustration of the specific BAR-SGE algorithms implemented in the ORAC program is also provided. PMID:26593345
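The acceptance criterion into which these weight factors enter can be sketched in its standard simulated-tempering form (all names below are hypothetical, and the g[m] are fixed constants here rather than BAR-SGE's self-consistent estimates):

```python
import math
import random

# In a serial generalized ensemble run, a single replica attempts transitions
# between ensembles m -> n (e.g., neighboring temperatures). The weight
# factors g[m] enter the Metropolis acceptance probability:
#     P(m -> n) = min(1, exp(-(beta_n - beta_m) * E + (g_n - g_m)))

def attempt_transition(m, energy, betas, g, rng):
    """rng needs .choice(seq) and .random(); returns the new ensemble index."""
    n = m + rng.choice((-1, 1))               # propose a neighboring ensemble
    if not 0 <= n < len(betas):
        return m                              # no such ensemble: stay put
    log_p = -(betas[n] - betas[m]) * energy + (g[n] - g[m])
    if log_p >= 0 or rng.random() < math.exp(log_p):
        return n
    return m

# One attempted move between two temperature ensembles:
new = attempt_transition(0, 2.0, [1.0, 0.8], [0.0, 0.4], random.Random(42))
```

Poorly chosen g[m] leave the replica trapped in a few ensembles, which is why estimating them well, the problem BAR-SGE addresses, is the crux of these methods.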
Xiao, Yong; Holod, Ihor; Wang, Zhixuan; Lin, Zhihong; Zhang, Taige
2015-02-15
Developments in gyrokinetic particle simulation enable the gyrokinetic toroidal code (GTC) to simulate turbulent transport in tokamaks with realistic equilibrium profiles and plasma geometry, which is a critical step in the code–experiment validation process. These new developments include numerical equilibrium representation using B-splines, a new Poisson solver based on finite difference using field-aligned mesh and magnetic flux coordinates, a new zonal flow solver for general geometry, and improvements on the conventional four-point gyroaverage with nonuniform background marker loading. The gyrokinetic Poisson equation is solved in the perpendicular plane instead of the poloidal plane. Exploiting these new features, GTC is able to simulate a typical DIII-D discharge with experimental magnetic geometry and profiles. The simulated turbulent heat diffusivity and its radial profile show good agreement with other gyrokinetic codes. The newly developed nonuniform loading method provides a modified radial transport profile to that of the conventional uniform loading method.
Chiang, P P-C; Glance, D; Walker, J; Walter, F M; Emery, J D
2015-01-01
Background: Reducing diagnostic delays in primary care by improving the assessment of symptoms associated with cancer could have significant impacts on cancer outcomes. Symptom risk assessment tools could improve the diagnostic assessment of patients with symptoms suggestive of cancer in primary care. We aimed to explore the use of a cancer risk tool, which implements the QCancer model, in consultations and its potential impact on clinical decision making. Methods: We implemented an exploratory ‘action design' method with 15 general practitioners (GPs) from Victoria, Australia. General practitioners applied the risk tool in simulated consultations, conducted semi-structured interviews based on the normalisation process theory and explored issues relating to implementation of the tool. Results: The risk tool was perceived as being potentially useful for patients with complex histories. More experienced GPs were distrustful of the risk output, especially when it conflicted with their clinical judgement. Variable interpretation of symptoms meant that there was significant variation in risk assessment. When a risk output was high, GPs were confronted with numerical risk outputs creating challenges in consultation. Conclusions: Significant barriers to implementing electronic cancer risk assessment tools in consultation could limit their uptake. These relate not only to the design and integration of the tool but also to variation in interpretation of clinical histories, and therefore variable risk outputs and strong beliefs in personal clinical intuition. PMID:25734392
The Tropical Subseasonal Variability Simulated in the NASA GISS General Circulation Model
NASA Technical Reports Server (NTRS)
Kim, Daehyun; Sobel, Adam H.; DelGenio, Anthony D.; Chen, Yonghua; Camargo, Suzana J.; Yao, Mao-Sung; Kelley, Maxwell; Nazarenko, Larissa
2012-01-01
The tropical subseasonal variability simulated by the Goddard Institute for Space Studies general circulation model, Model E2, is examined. Several versions of Model E2 were developed with changes to the convective parameterization in order to improve the simulation of the Madden-Julian oscillation (MJO). When the convective scheme is modified to have a greater fractional entrainment rate, Model E2 is able to simulate MJO-like disturbances with proper spatial and temporal scales. Increasing the rate of rain reevaporation has additional positive impacts on the simulated MJO. The improvement in MJO simulation comes at the cost of increased biases in the mean state, consistent in structure and amplitude with those found in other GCMs when tuned to have a stronger MJO. By reinitializing a relatively poor-MJO version with restart files from a relatively better-MJO version, a series of 30-day integrations is constructed to examine the impacts of the parameterization changes on the organization of tropical convection. The poor-MJO version with smaller entrainment rate has a tendency to allow convection to be activated over a broader area and to reduce the contrast between dry and wet regimes so that tropical convection becomes less organized. Besides the MJO, the number of tropical-cyclone-like vortices simulated by the model is also affected by changes in the convection scheme. The model simulates a smaller number of such storms globally with a larger entrainment rate, while the number increases significantly with a greater rain reevaporation rate.
NASA Astrophysics Data System (ADS)
Peng, Zezhong
1992-01-01
A generalized energy transport (G-ET) model is introduced. This model incorporates the effects of non-analytic carrier distribution functions and of the dominant scattering process in the formulation of the energy transport model, and also includes the effects of the intervalley electron transfer that occurs in multivalley semiconductors. A path-integration and slope-weighting Monte Carlo (PSMC) method is introduced to speed up the conventional MC method and to improve its accuracy and smoothness. A stable extended S-G discretization algorithm was developed for the G-ET model. Further, many numerical techniques, including methods for mesh auto-generation, updating, and scaling, a trial solution with 2D extrapolation, a global convergence test, convergence refining, forced damping, and residual-current filtering, were developed to improve the convergence and the computational efficiency. UMDFET2, a general submicron device simulator, was implemented with the G-ET model; an efficient hot-electron injection model, a Fowler-Nordheim tunneling model, an impact ionization model, and a model for band-to-band tunneling have also been added. A discretized gate capacitor (DGC) EPROM model and a post-processing quasi-transient (PPQT) method have been introduced to efficiently and accurately simulate EPROM devices. Deep submicron NMOS devices have been simulated to study velocity overshoot and hot-electron effects. UMDFET2 has been successfully used to predict the V_t, I_ds, I_sub, I_g, and the programming and erasing characteristics V_t(t) of submicron EPROM/Flash devices. A "Virtual Fab", which consists of a statistical analysis tool for experimental design and data analysis, SUPREM3/4 for process simulation, and UMDFET2 for device simulation, has been used successfully for EPROM device design and optimization, and has demonstrated good predictive ability with excellent overall accuracy. The correlation of the ET models and MC models has been studied, and it has been found that
The Early Jurassic climate: General circulation model simulations and the paleoclimate record
Chandler, M.A.
1992-01-01
This thesis presents the results of several general circulation model simulations of the Early Jurassic climate. The general circulation model employed was developed at the Goddard Institute for Space Studies, while most paleoclimate data were provided by the Paleogeographic Atlas Project of the University of Chicago. The first chapter presents an Early Jurassic base simulation, which uses detailed reconstructions of paleogeography, vegetation, and sea surface temperature as boundary condition data sets. The resulting climatology reveals an Earth 5.2°C warmer, globally, than at present and a latitudinal temperature gradient dominated by high-latitude warming (+20°C) and little tropical change (+1°C). Comparisons show a good correlation between simulated results and paleoclimate data. Sensitivity experiments are used to investigate any model-data mismatches. Chapters two and three discuss two important aspects of Early Jurassic climate, continental aridity and global warming. Chapter two focuses on the hydrological capabilities of the general circulation model. The general circulation model's hydrologic diagnostics are evaluated, using the distribution of modern deserts and Early Jurassic paleoclimate data as validating constraints. A new method, based on general circulation model diagnostics and empirical formulae, is proposed for evaluating moisture balance. Chapter three investigates the cause of past global warming, concentrating on the role of increased ocean heat transport. Early Jurassic simulations show that increased ocean heat transports may have been a major factor in past climates. Increased ocean heat transports create latitudinal temperature gradients that closely approximate paleoclimate data and solve the problem of tropical overheating that results from elevated atmospheric carbon dioxide. Increased carbon dioxide cannot duplicate the Jurassic climate without also including increased ocean heat transports.
NASA Astrophysics Data System (ADS)
Paschalidis, Vasileios; Etienne, Zachariah B.; Shapiro, Stuart L.
2013-07-01
We perform the first general relativistic force-free simulations of neutron star magnetospheres in orbit about spinning and nonspinning black holes. We find promising precursor electromagnetic emission: typical Poynting luminosities at, e.g., an orbital separation of r = 6.6 R_NS are L_EM ~ 6×10^42 (B_NS,p/10^13 G)^2 (M_NS/1.4 M_⊙)^2 erg/s. The Poynting flux peaks within a broad beam of ~40° in the azimuthal direction and within ~60° from the orbital plane, establishing a possible lighthouse effect. Our calculations, though preliminary, preview more detailed simulations of these systems that we plan to perform in the future.
Wang, Qin; Wang, Xiang-Bin
2014-01-01
We present a model for the simulation of measurement-device-independent quantum key distribution (MDI-QKD) with phase-randomized general sources. It can be used to predict experimental observations of an MDI-QKD with linear channel loss, simulating corresponding values for the gains, the error rates in different bases, and the final key rates. Our model is applicable to MDI-QKDs with an arbitrary probabilistic mixture of different photon states or using any coding scheme. It is therefore useful in characterizing and evaluating the performance of the MDI-QKD protocol, making it a valuable tool in studying quantum key distribution. PMID:24728000
Generalized math model for simulation of high-altitude balloon systems
NASA Technical Reports Server (NTRS)
Nigro, N. J.; Elkouh, A. F.; Hinton, D. E.; Yang, J. K.
1985-01-01
Balloon systems have proved to be a cost-effective means for conducting research experiments (e.g., infrared astronomy) in the earth's atmosphere. The purpose of this paper is to present a generalized mathematical model that can be used to simulate the motion of these systems once they have attained float altitude. The resulting form of the model is such that the pendulation and spin motions of the system are uncoupled and can be analyzed independently. The model is evaluated by comparing the simulation results with data obtained from an actual balloon system flown by NASA.
Gleckler, P.J.; Randall, D.A.; Boer, G.
1994-03-01
This paper reports on energy fluxes across the surface of the ocean as simulated by fifteen atmospheric general circulation models in which ocean surface temperatures and sea-ice boundaries are prescribed. The oceanic meridional energy transport that would be required to balance these surface fluxes is computed, and is shown to be critically sensitive to the radiative effects of clouds, to the extent that even the sign of the Southern Hemisphere ocean energy transport can be affected by the errors in simulated cloud-radiation interactions.
Simulator Evaluation of Runway Incursion Prevention Technology for General Aviation Operations
NASA Technical Reports Server (NTRS)
Jones, Denise R.; Prinzel, Lawrence J., III
2011-01-01
A Runway Incursion Prevention System (RIPS) has been designed under previous research to enhance airport surface operations situation awareness and provide cockpit alerts of potential runway conflict, during transport aircraft category operations, in order to prevent runway incidents while also improving operations capability. This study investigated an adaptation of RIPS for low-end general aviation operations using a fixed-based simulator at the National Aeronautics and Space Administration (NASA) Langley Research Center (LaRC). The purpose of the study was to evaluate modified RIPS aircraft-based incursion detection algorithms and associated alerting and airport surface display concepts for low-end general aviation operations. This paper gives an overview of the system, simulation study, and test results.
NASA Astrophysics Data System (ADS)
Lombardi, G.; Sarazin, M.
2016-01-01
Recent studies on the comparison between the Multi Aperture Scintillation Sensor (MASS) and Generalized Scintillation Detection and Ranging (G-SCIDAR) profiler techniques have suggested significant discrepancies between the results delivered by the two instruments. MASS has been used extensively in the recent site testing campaigns for the future next-generation giant telescopes [i.e. the European Extremely Large Telescope, the Thirty Meter Telescope (TMT) and the Giant Magellan Telescope (GMT)] and is still used to monitor the conditions of world-class astronomical sites, as well as to deliver free atmosphere turbulence profiles to feed Adaptive Optics performance simulations. In this paper, we explore a different approach to the comparison between MASS and Generalized SCIDAR techniques with respect to previous studies, in order to provide a method for the use of the MASS databases accumulated at the European Southern Observatory's Paranal Observatory in Adaptive Optics simulations.
SIMULATION OF GENERAL ANESTHESIA ON THE "SIMMAN 3G" AND ITS EFFICIENCY.
Potapov, A F; Matveev, A S; Ignatiev, V G; Ivanova, A A; Aprosimov, L A
2015-01-01
In recent years, medical education has made wide use of innovative computer-simulation technologies that reproduce medical interventions and procedures realistically. Practice-oriented training with simulation improves the assimilation of course material by recreating professional activity and connecting theory to practice. The aim of the investigation was to evaluate the efficiency of training Medical Institute students on the topic "General Anesthesia" using the modern simulator "SimMan 3G". The material of the investigation comprised results obtained at the Centre of Practical Skills and Medical Virtual Educational Technologies (Simulation Centre) of the Medical Institute of NEFU named after M.K. Ammosov. The subjects of the investigation were 55 third-year students of the Faculty of General Medicine of the Medical Institute of NEFU. The investigation was held during practical trainings (April-May 2014) of the General Surgery Department on the topic "General Anesthesia". The simulation practical course "General Anesthesia" consisted of 12 academic hours. Practical training was carried out using the instruments, equipment, and facilities needed to administer anesthesia on the SimMan 3G, with video recording of the process and subsequent discussion of the results. The methods of the investigation were assessment of the students' background knowledge before and after the practical training (on a 5-point scale) and analysis of the results. The results of the investigation showed that before the practical course only 23 students (41.8%) had positive marks: "Good" for 7 students (12.7%) and "Satisfactory" for 16 students (29.1%). The remaining 22 students (58.2%) had unsatisfactory results. The practical trainings using real instruments, equipment, and facilities, with simulated administration of preparations for induction anesthesia, main analgesics, and muscle relaxants, showed a patient's reaction to the
Boer, G.J.; McFarlane, N.A.; Lazare, M.
1992-10-01
The Canadian Climate Centre second-generation atmospheric general circulation model coupled to a mixed-layer ocean incorporating thermodynamic sea ice is used to simulate the equilibrium climate response to a doubling of CO2. The results of the simulation indicate a global annual warming of 3.5°C with enhanced warming found over land and at higher latitudes. Precipitation and evaporation rates increase by about 4 percent, and cloud cover decreases by 2.2 percent. Soil moisture decreases over continental Northern Hemisphere land areas in summer. The frozen component of soil moisture decreases and the liquid component increases in association with the increase of temperature at higher latitudes. The simulated accumulation rate of permanent snow cover decreases markedly over Greenland and increases slightly over Antarctica. Seasonal snow and sea ice boundaries retreat, but local decreases in planetary albedo are counteracted by tropical increases, so there is little change in the global average. 39 refs.
Improved Carbohydrate Structure Generalization Scheme for ¹H and ¹³C NMR Simulations.
Kapaev, Roman R; Toukach, Philip V
2015-07-21
The improved Carbohydrate Structure Generalization Scheme has been developed for the simulation of ¹³C and ¹H NMR spectra of oligo- and polysaccharides and their derivatives, including those containing noncarbohydrate constituents found in natural glycans. Besides adding the ¹H NMR calculations, we improved the accuracy and performance of prediction and optimized the mathematical model of the precision estimation. This new approach outperformed other methods of chemical shift simulation, including database-driven, neural net-based, and purely empirical methods and quantum-mechanical calculations at high theory levels. It can process structures with rarely occurring and noncarbohydrate constituents unsupported by the other methods. The algorithm is transparent to users and allows tracking used reference NMR data to original publications. It was implemented in the Glycan-Optimized Dual Empirical Spectrum Simulation (GODESS) web service, which is freely available at the platform of the Carbohydrate Structure Database (CSDB) project ( http://csdb.glycoscience.ru). PMID:26087011
GOOSE 1.4 -- Generalized Object-Oriented Simulation Environment user's manual
Nypaver, D.J.; Abdalla, M.A.; Guimaraes, L. (Inst. de Estudos Avancados, Sao Jose dos Campos, SP)
1992-11-01
The Generalized Object-Oriented Simulation Environment (GOOSE) is a new and innovative simulation tool that is being developed by the Simulation Group of the Advanced Controls Program at Oak Ridge National Laboratory. GOOSE is a fully interactive prototype software package that provides users with the capability of creating sophisticated mathematical models of physical systems. GOOSE uses an object-oriented approach to modeling and combines the concept of modularity (building a complex model easily from a collection of previously written components) with the additional features of allowing precompilation, optimization, and testing and validation of individual modules. Once a library of components has been defined and compiled, models can be built and modified without recompilation. This user's manual provides detailed descriptions of the structure and component features of GOOSE, along with a comprehensive example using a simplified model of a pressurized water reactor.
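The modularity idea described above, defining and validating components once and then assembling models from them without recompiling the pieces, can be sketched in a few lines. This is an illustrative analogue only; the class and function names below are invented and do not reflect GOOSE's actual interface.

```python
class ComponentLibrary:
    """Illustrative component library: register validated step functions
    once, then compose models from them without modifying the components."""

    def __init__(self):
        self._components = {}

    def register(self, name, step_fn):
        # A "component" here is just a state-update function.
        self._components[name] = step_fn

    def build_model(self, names):
        # A model is an ordered composition of library components;
        # building a new model never touches component internals.
        steps = [self._components[n] for n in names]

        def model(state, dt):
            for step in steps:
                state = step(state, dt)
            return state

        return model


lib = ComponentLibrary()
lib.register("heater", lambda T, dt: T + 2.0 * dt)  # heat input
lib.register("loss", lambda T, dt: T - 0.5 * dt)    # ambient loss
plant = lib.build_model(["heater", "loss"])         # assembled model
```

Once the library is populated, alternative models (e.g. omitting the loss term) are built from the same components, mirroring the manual's "build and modify without recompilation" workflow.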
Zhao, L.; Cluggish, B.; Kim, J. S.; Pardo, R.; Vondrasek, R.
2010-02-15
A Monte Carlo charge breeding code (MCBC) is being developed by FAR-TECH, Inc. to model the capture and charge breeding of a 1+ ion beam in an electron cyclotron resonance ion source (ECRIS) device. The ECRIS plasma is simulated using the generalized ECRIS model, which offers two choices of boundary settings: the free boundary condition and the Bohm condition. The charge state distribution of the extracted beam ions is calculated by solving the steady-state ion continuity equations, where the profiles of the captured ions are used as source terms. MCBC simulations of the charge breeding of Rb+ showed good agreement with recent charge breeding experiments at Argonne National Laboratory (ANL). Under the free boundary condition, MCBC correctly predicted the peak of the highly charged ion output; under the Bohm condition it predicted a similar charge state distribution width but a lower peak charge state. The comparisons between the simulation results and the ANL experimental measurements are presented and discussed.
Coimbra, João T S; Sousa, Sérgio F; Fernandes, Pedro A; Rangel, Maria; Ramos, Maria J
2014-01-01
The AMBER family of force fields is one of the most commonly used alternatives to describe proteins and drug-like molecules in molecular dynamics simulations. However, the absence of a specific set of parameters for lipids has been limiting the widespread application of this force field in biomembrane simulations, including membrane protein simulations and drug-membrane simulations. Here, we report the systematic parameterization of 12 common lipid types consistent with the General Amber Force Field (GAFF), with charge parameters determined with RESP at the HF/6-31G(d) level of theory, to be consistent with AMBER. The accuracy of the scheme was evaluated by comparing predicted and experimental values for structural lipid properties in MD simulations in an NPT ensemble with explicit solvent in 100:100 bilayer systems. Globally, a consistent agreement with experimental reference data on membrane structures was achieved for some lipid types when using the typical MD conditions normally employed when handling membrane proteins and drug-membrane simulations (a tensionless NPT ensemble, 310 K), without the application of any of the constraints often used in other biomembrane simulations (such as the surface tension and the total simulation box area). The present set of parameters and the universal approach used in the parameterization of all the lipid types described here, as well as the consistency with the AMBER force field family, together with the tensionless NPT ensemble used, open the door to systematic studies combining lipid components with small drug-like molecules or membrane proteins and show the potential of GAFF in dealing with biomembranes.
NASA Astrophysics Data System (ADS)
Anantua, Richard; Blandford, Roger; McKinney, Jonathan; Tchekhovskoy, Alexander
2016-01-01
We carry out the process of "observing" simulations of active galactic nuclei (AGN) with relativistic jets (hereafter called jet/accretion disk/black hole (JAB) systems) from ray tracing between image plane and source to convolving the resulting images with a point spread function. Images are generated at arbitrary observer angle relative to the black hole spin axis by implementing spatial and temporal interpolation of conserved magnetohydrodynamic flow quantities from a time series of output datablocks from fully general relativistic 3D simulations. We also describe the evolution of simulations of JAB systems' dynamical and kinematic variables, e.g., velocity shear and momentum density, respectively, and the variation of these variables with respect to observer polar and azimuthal angles. We produce, at frequencies from radio to optical, fixed observer time intensity and polarization maps using various plasma physics motivated prescriptions for the emissivity function of physical quantities from the simulation output, and analyze the corresponding light curves. Our hypothesis is that this approach reproduces observed features of JAB systems such as superluminal bulk flow projections and quasi-periodic oscillations in the light curves more closely than extant stylized analytical models, e.g., cannonball bulk flows. Moreover, our development of user-friendly, versatile C++ routines for processing images of state-of-the-art simulations of JAB systems may afford greater flexibility for observing a wide range of sources from high power BL-Lacs to low power quasars (possibly with the same simulation) without requiring years of observation using multiple telescopes. Advantages of observing simulations instead of observing astrophysical sources directly include: the absence of a diffraction limit, panoramic views of the same object and the ability to freely track features. Light travel time effects become significant for high Lorentz factor and small angles between
Using Beowulf clusters to speed up neural simulations.
Smith, Leslie S.
2002-06-01
Simulation of large neural systems on PCs requires large amounts of memory, and takes a long time. Parallel computers can speed them up. A new form of parallel computer, the Beowulf cluster, is an affordable version. Event-driven simulation and processor farming are two ways of exploiting this parallelism in neural simulations.
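The event-driven strategy mentioned in this abstract can be illustrated with a minimal sketch (not the author's code): spike events sit in a priority queue ordered by time, and work is done only when an event is delivered rather than at every clock tick. All names, parameters, and the threshold-and-reset neuron model below are illustrative assumptions.

```python
import heapq

def simulate(events, weights, threshold=1.0, t_max=100.0):
    """Minimal event-driven neural simulation sketch.

    events:  initial list of (time, neuron) spike events
    weights: dict mapping neuron -> list of (target, weight, delay)
    A target neuron fires (and is reset) when its accumulated input
    crosses the threshold; its spike is scheduled after the delay.
    """
    queue = list(events)
    heapq.heapify(queue)          # process events in time order
    potential = {}                # accumulated input per neuron
    spikes = []
    while queue:
        t, n = heapq.heappop(queue)
        if t > t_max:
            break
        spikes.append((t, n))
        for target, w, delay in weights.get(n, []):
            potential[target] = potential.get(target, 0.0) + w
            if potential[target] >= threshold:
                potential[target] = 0.0              # reset after firing
                heapq.heappush(queue, (t + delay, target))
    return spikes
```

Because idle neurons consume no cycles, sparse activity is cheap, which is also what makes the approach attractive to farm out across cluster nodes.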
NASA Astrophysics Data System (ADS)
Dong, S.
2015-02-01
We present a family of physical formulations, and a numerical algorithm, based on a class of general order parameters for simulating the motion of a mixture of N (N ⩾ 2) immiscible incompressible fluids with given densities, dynamic viscosities, and pairwise surface tensions. The N-phase formulations stem from a phase field model we developed in a recent work based on the conservations of mass/momentum, and the second law of thermodynamics. The introduction of general order parameters leads to an extremely strongly-coupled system of (N - 1) phase field equations. On the other hand, the general form enables one to compute the N-phase mixing energy density coefficients in an explicit fashion in terms of the pairwise surface tensions. We show that the increased complexity in the form of the phase field equations associated with general order parameters in actuality does not cause essential computational difficulties. Our numerical algorithm reformulates the (N - 1) strongly-coupled phase field equations for general order parameters into 2 (N - 1) Helmholtz-type equations that are completely de-coupled from one another. This leads to a computational complexity comparable to that for the simplified phase field equations associated with certain special choice of the order parameters. We demonstrate the capabilities of the method developed herein using several test problems involving multiple fluid phases and large contrasts in densities and viscosities among the multitude of fluids. In particular, by comparing simulation results with the Langmuir-de Gennes theory of floating liquid lenses we show that the method using general order parameters produces physically accurate results for multiple fluid phases.
NASA Astrophysics Data System (ADS)
Sugimoto, Norihiko; AFES project team
2016-10-01
We have developed an atmospheric general circulation model (AGCM) for Venus on the basis of AFES (AGCM For the Earth Simulator) and performed high-resolution simulations (e.g., Sugimoto et al., 2014a). The highest resolution is T639L120: 1920 × 960 horizontal grid points (grid intervals of about 20 km) with 120 vertical layers (layer intervals of about 1 km). In the model, the atmosphere is dry and forced by solar heating with diurnal and semi-diurnal components. The infrared radiative process is simplified by adopting a Newtonian cooling approximation. The temperature is relaxed to a prescribed, horizontally uniform temperature distribution in which a layer of almost neutral static stability, as observed in the Venus atmosphere, is present. A fast zonal wind in solid-body rotation is given as the initial state. Starting from this idealized superrotation, the model atmosphere reaches a quasi-equilibrium state within 1 Earth year, and this state is stably maintained for more than 10 Earth years. The zonal-mean zonal flow, with weak midlatitude jets, has an almost constant velocity of 120 m/s at latitudes between 45°S and 45°N at the cloud-top levels, which agrees very well with observations. In the cloud layer, baroclinic waves develop continuously at midlatitudes and generate Rossby-type waves at the cloud top (Sugimoto et al., 2014b). In the polar region, a warm polar vortex surrounded by a cold latitude band (cold collar) is well reproduced (Ando et al., 2016). As for horizontal kinetic energy spectra, the divergent component is larger than the rotational component over a broad range (k > 10) compared with that on Earth (Kashimura et al., in preparation). We will show recent results of the high-resolution run, e.g., small-scale gravity waves attributed to large-scale thermal tides. Sugimoto, N. et al. (2014a), Baroclinic modes in the Venus atmosphere simulated by GCM, Journal of Geophysical Research: Planets, Vol. 119, p. 1950-1968. Sugimoto, N. et al. (2014b), Waves in a Venus general
Binary black-hole mergers in magnetized disks: simulations in full general relativity.
Farris, Brian D; Gold, Roman; Paschalidis, Vasileios; Etienne, Zachariah B; Shapiro, Stuart L
2012-11-30
We present results from the first fully general relativistic, magnetohydrodynamic (MHD) simulations of an equal-mass black-hole binary (BHBH) in a magnetized, circumbinary accretion disk. We simulate both the pre- and postdecoupling phases of a BHBH-disk system and both "cooling" and "no-cooling" gas flows. Prior to decoupling, the competition between the binary tidal torques and the effective viscous torques due to MHD turbulence depletes the disk interior to the binary orbit. However, it also induces a two-stream accretion flow and mildly relativistic polar outflows from the BHs. Following decoupling, but before gas fills the low-density "hollow" surrounding the remnant, the accretion rate is reduced, while there is a prompt electromagnetic luminosity enhancement following merger due to shock heating and accretion onto the spinning BH remnant. This investigation, though preliminary, previews more detailed general relativistic, MHD simulations we plan to perform in anticipation of future, simultaneous detections of gravitational and electromagnetic radiation from a merging BHBH-disk system.
NASA Technical Reports Server (NTRS)
Kovalskyy, V.; Henebry, G. M.; Adusei, B.; Hansen, M.; Roy, D. P.; Senay, G.; Mocko, D. M.
2011-01-01
A new model coupling scheme with remote sensing data assimilation was developed for the estimation of daily actual evapotranspiration (ET). The scheme combines VegET, a physically based model that estimates ET from a water balance, with an event-driven phenology model (EDPM), an empirically derived, crop-specific model capable of producing seasonal trajectories of canopy attributes. In this experiment, the scheme was deployed in a spatially explicit manner within the croplands of the Northern Great Plains. The evaluation was carried out using 2007-2009 land surface forcing data from the North American Land Data Assimilation System (NLDAS) and crop maps derived from remotely sensed data of NASA's Moderate Resolution Imaging Spectroradiometer (MODIS). We compared the canopy parameters produced by the phenology model with normalized difference vegetation index (NDVI) data derived from the MODIS nadir bidirectional reflectance distribution function (BRDF) adjusted reflectance (NBAR) product. The expectations of the EDPM performance in prognostic mode were met, producing a coefficient of determination (r²) of 0.8 ± 0.15. Model estimates of NDVI yielded a root mean square error (RMSE) of 0.1 ± 0.035 for the entire study area. Retrospective correction of canopy dynamics with MODIS NDVI brought the errors down to just below 10% of the observed data range. The ET estimates produced by the coupled scheme were compared with ones from the MODIS land product suite. The expected r² = 0.7 ± 0.15 and RMSE = 11.2 ± 4 mm per 8 days were met and even exceeded by the coupling scheme functioning in both prognostic and retrospective modes. Minor setbacks in EDPM and VegET performance (r² about 0.5 and an additional 30% of RMSE) were found on the peripheries of the study area and were attributed to insufficient EDPM training and to the spatially varying accuracy of the crop maps. Overall the experiment provided sufficient evidence of the soundness and robustness of the EDPM and
NASA Astrophysics Data System (ADS)
Yan, Jiawei; Ke, Youqi
2016-07-01
Electron transport properties of nanoelectronics can be significantly influenced by the inevitable and randomly distributed impurities/defects. For theoretical simulation of disordered nanoscale electronics, one is interested in both the configurationally averaged transport property and its statistical fluctuation that tells device-to-device variability induced by disorder. However, due to the lack of an effective method to do disorder averaging under the nonequilibrium condition, the important effects of disorders on electron transport remain largely unexplored or poorly understood. In this work, we report a general formalism of Green's function based nonequilibrium effective medium theory to calculate the disordered nanoelectronics. In this method, based on a generalized coherent potential approximation for the Keldysh nonequilibrium Green's function, we developed a generalized nonequilibrium vertex correction method to calculate the average of a two-Keldysh-Green's-function correlator. We obtain nine nonequilibrium vertex correction terms, as a complete family, to express the average of any two-Green's-function correlator and find they can be solved by a set of linear equations. As an important result, the averaged nonequilibrium density matrix, averaged current, disorder-induced current fluctuation, and averaged shot noise, which involve different two-Green's-function correlators, can all be derived and computed in an effective and unified way. To test the general applicability of this method, we applied it to compute the transmission coefficient and its fluctuation with a square-lattice tight-binding model and compared with the exact results and other previously proposed approximations. Our results show very good agreement with the exact results for a wide range of disorder concentrations and energies. In addition, to incorporate with density functional theory to realize first-principles quantum transport simulation, we have also derived a general form of
Nonparametric simulation-based statistics for detecting linkage in general pedigrees
Davis, S.; Schroeder, M.; Weeks, D.E.; Goldin, L.R.
1996-04-01
We present here four nonparametric statistics for linkage analysis that test whether pairs of affected relatives share marker alleles more often than expected. These statistics are based on simulating the null distribution of a given statistic conditional on the unaffecteds' marker genotypes. Each statistic uses a different measure of marker sharing: the SimAPM statistic uses the simulation-based affected-pedigree-member measure based on identity-by-state (IBS) sharing. The SimKIN (kinship) measure is 1.0 for identity-by-descent (IBD) sharing, 0.0 for no IBD sharing, and the kinship coefficient when the IBD status is ambiguous. The simulation-based IBD (SimIBD) statistic uses a recursive algorithm to determine the probability of two affecteds sharing a specific allele IBD. The SimISO statistic is identical to SimIBD, except that it also measures marker similarity between unaffected pairs. We evaluated our statistics on data simulated under different two-locus disease models, comparing our results to those obtained with several other nonparametric statistics. Use of IBD information produces dramatic increases in power over the SimAPM method, which uses only IBS information. The power of our best statistic in most cases meets or exceeds the power of the other nonparametric statistics. Furthermore, our statistics perform comparisons between all affected relative pairs within general pedigrees and are not restricted to sib pairs or nuclear families. 32 refs., 5 figs., 6 tabs.
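The common core of these four statistics, comparing an observed sharing measure against a null distribution generated by simulation, can be sketched generically. The toy null below (independent coin-flip sharing indicators across affected pairs) stands in for the paper's conditional gene-dropping simulation and is purely illustrative.

```python
import random

def simulation_pvalue(observed_stat, simulate_null, n_sim=2000, seed=0):
    """Monte Carlo p-value: fraction of null replicates whose statistic is
    at least as extreme as the observed one, with the +1 correction so the
    estimate is never exactly zero."""
    rng = random.Random(seed)
    count = sum(1 for _ in range(n_sim)
                if simulate_null(rng) >= observed_stat)
    return (count + 1) / (n_sim + 1)

# Toy null: total allele sharing over 10 affected relative pairs, each
# pair sharing with probability 1/2 when there is no linkage.
def toy_null(rng):
    return sum(rng.random() < 0.5 for _ in range(10))

p = simulation_pvalue(9, toy_null)  # observed: 9 of 10 pairs share
```

The real statistics differ only in the sharing measure plugged into `observed_stat` and in how the null replicates are generated (conditional on the unaffecteds' genotypes), not in this outer Monte Carlo loop.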
Routine Microsecond Molecular Dynamics Simulations with AMBER on GPUs. 1. Generalized Born.
Götz, Andreas W; Williamson, Mark J; Xu, Dong; Poole, Duncan; Le Grand, Scott; Walker, Ross C
2012-05-01
We present an implementation of generalized Born implicit solvent all-atom classical molecular dynamics (MD) within the AMBER program package that runs entirely on CUDA enabled NVIDIA graphics processing units (GPUs). We discuss the algorithms that are used to exploit the processing power of the GPUs and show the performance that can be achieved in comparison to simulations on conventional CPU clusters. The implementation supports three different precision models in which the contributions to the forces are calculated in single precision floating point arithmetic but accumulated in double precision (SPDP), or everything is computed in single precision (SPSP) or double precision (DPDP). In addition to performance, we have focused on understanding the implications of the different precision models on the outcome of implicit solvent MD simulations. We show results for a range of tests including the accuracy of single point force evaluations and energy conservation as well as structural properties pertaining to protein dynamics. The numerical noise due to rounding errors within the SPSP precision model is sufficiently large to lead to an accumulation of errors which can result in unphysical trajectories for long time scale simulations. We recommend the use of the mixed-precision SPDP model since the numerical results obtained are comparable with those of the full double precision DPDP model and the reference double precision CPU implementation but at significantly reduced computational cost. Our implementation provides performance for GB simulations on a single desktop that is on par with, and in some cases exceeds, that of traditional supercomputers. PMID:22582031
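The trade-off among the three precision models can be reproduced in miniature. This numpy sketch is an assumed illustration of the accumulation issue, not AMBER's GPU kernels: summing many single-precision contributions in a single-precision accumulator drifts, while accumulating the same single-precision values in double precision stays close to the full double-precision result.

```python
import numpy as np

def accumulate(values, compute_dtype, accum_dtype):
    """Round each contribution to compute_dtype, then sum in accum_dtype."""
    vals = values.astype(compute_dtype)
    total = accum_dtype(0.0)
    for v in vals:
        total = accum_dtype(total + accum_dtype(v))
    return float(total)

forces = np.full(100_000, 0.1)                      # 100k equal contributions
spsp = accumulate(forces, np.float32, np.float32)   # all single precision
spdp = accumulate(forces, np.float32, np.float64)   # single compute, double accum
dpdp = accumulate(forces, np.float64, np.float64)   # all double precision

# spsp drifts visibly from 10000 because each addition rounds the running
# sum to float32; spdp differs from dpdp only by the one-time float32
# rounding of 0.1, roughly 1.5e-4 in total.
```

This mirrors the paper's finding: SPDP tracks DPDP closely because rounding errors no longer compound in the accumulator, while SPSP's per-step rounding accumulates.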
Nonparametric simulation-based statistics for detecting linkage in general pedigrees.
Davis, S.; Schroeder, M.; Goldin, L. R.; Weeks, D. E.
1996-01-01
We present here four nonparametric statistics for linkage analysis that test whether pairs of affected relatives share marker alleles more often than expected. These statistics are based on simulating the null distribution of a given statistic conditional on the unaffecteds' marker genotypes. Each statistic uses a different measure of marker sharing: the SimAPM statistic uses the simulation-based affected-pedigree-member measure based on identity-by-state (IBS) sharing. The SimKIN (kinship) measure is 1.0 for identity-by-descent (IBD) sharing, 0.0 for no IBD status sharing, and the kinship coefficient when the IBD status is ambiguous. The simulation-based IBD (SimIBD) statistic uses a recursive algorithm to determine the probability of two affecteds sharing a specific allele IBD. The SimISO statistic is identical to SimIBD, except that it also measures marker similarity between unaffected pairs. We evaluated our statistics on data simulated under different two-locus disease models, comparing our results to those obtained with several other nonparametric statistics. Use of IBD information produces dramatic increases in power over the SimAPM method, which uses only IBS information. The power of our best statistic in most cases meets or exceeds the power of the other nonparametric statistics. Furthermore, our statistics perform comparisons between all affected relative pairs within general pedigrees and are not restricted to sib pairs or nuclear families. PMID:8644751
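The simulation-based idea, comparing an observed sharing statistic against a Monte Carlo null distribution, can be sketched as follows (a toy IBS version with made-up allele frequencies; the actual statistics condition on pedigree structure and the unaffecteds' genotypes, which this sketch omits):

```python
import random

def ibs(g1, g2):
    # Identity-by-state: number of alleles (0, 1, or 2) shared by two genotypes.
    remaining = list(g2)
    shared = 0
    for allele in g1:
        if allele in remaining:
            remaining.remove(allele)
            shared += 1
    return shared

def sharing_statistic(genotypes):
    # Mean pairwise IBS sharing over all affected pairs.
    n = len(genotypes)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(ibs(genotypes[i], genotypes[j]) for i, j in pairs) / len(pairs)

def monte_carlo_p(observed, allele_freqs, n_sim=2000, seed=1):
    # Empirical p-value: how often does sharing simulated under the null
    # reach the observed level?
    rng = random.Random(seed)
    alleles = list(allele_freqs)
    weights = [allele_freqs[a] for a in alleles]
    obs = sharing_statistic(observed)
    hits = 0
    for _ in range(n_sim):
        sim = [tuple(rng.choices(alleles, weights, k=2)) for _ in observed]
        if sharing_statistic(sim) >= obs:
            hits += 1
    return (hits + 1) / (n_sim + 1)

# Four affected relatives, all homozygous for the same allele:
obs_stat = sharing_statistic([('A', 'A')] * 4)
p = monte_carlo_p([('A', 'A')] * 4, {'A': 0.1, 'B': 0.9})
```

The real statistics differ mainly in the sharing measure (IBS, kinship-weighted, or recursively computed IBD probabilities) and in simulating conditional on the pedigree, but the p-value machinery has this shape.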
Holdeman, J.T.; Liepins, G.E.; Murphy, B.D.; Ohr, S.Y.; Sworski, T.J.; Warner, G.E.
1989-06-01
The Generalized Escape System Simulation (GESS) program is a computerized mathematical model for dynamically simulating the performance of existing or developmental aircraft ejection seat systems. The program generates trajectory predictions with 6 degrees of freedom for the aircraft, seat/occupant, occupant alone, and seat alone systems by calculating the forces and torques imposed on these elements by seat catapults, rails, rockets, stabilization and recovery systems included in most escape system configurations. User options are provided to simulate the performance of all conventional escape system designs under most environmental conditions and aircraft attitudes or trajectories. The concept of sensitivity analysis is discussed, as is the usefulness of GESS for retrospective studies, whereby one attempts to determine the aircraft configuration at ejection from the ejection outcome. A very limited and preliminary sensitivity analysis has been done with GESS to study the way the performance of the ejection system changes with certain user-specified options or parameters. A more complete analysis would study correlations, where simultaneous correlated variations of several parameters might affect performance to an extent not predictable from the individual sensitivities. Uncertainty analysis is discussed. Even with this limited analysis, a difficulty with some simulations involving a rolling aircraft has been discovered; the code produces inconsistent trajectories. One explanation is that the integration routine is not able to deal with the stiff differential equations involved. Another possible explanation is that the coding of the coordinate transformations is faulty when large angles are involved. 7 refs., 7 tabs.
General relativistic N-body simulations in the weak field limit
NASA Astrophysics Data System (ADS)
Adamek, Julian; Daverio, David; Durrer, Ruth; Kunz, Martin
2013-11-01
We develop a formalism for general relativistic N-body simulations in the weak field regime, suitable for cosmological applications. The problem is kept tractable by retaining the metric perturbations to first order, the first derivatives to second order, and second derivatives to all orders, thus taking into account the most important nonlinear effects of Einstein gravity. It is also expected that any significant “backreaction” should appear at this order. We show that the simulation scheme is feasible in practice by implementing it for a plane-symmetric situation and running two test cases, one with only cold dark matter, and one which also includes a cosmological constant. For these plane-symmetric situations, the deviations from the usual Newtonian N-body simulations remain small and, apart from a nontrivial correction to the background, can be accurately estimated within the Newtonian framework. The correction to the background scale factor, which is a genuine backreaction effect, can be robustly obtained with our algorithm. Our numerical approach is also naturally suited for the inclusion of extra relativistic fields and thus for dark energy or modified gravity simulations.
NASA Astrophysics Data System (ADS)
Chen, Y.
2015-12-01
High-resolution distributed hydrological models are regarded as having the potential to simulate catchment hydrological processes in fine detail, but challenges remain. This paper presents a generalized catchment flood process simulation system built on the Liuxihe Model, a physically based, process-oriented distributed hydrological model proposed mainly for catchment flood forecasting. The system employs several cutting-edge technologies, including supercomputing, the PSO algorithm for parameter optimization, cloud computing, GIS, and software engineering, and it is deployed on a high-performance computer with free public access. The data used to set up the model structure come from open-access databases, so the system can be applied to catchments worldwide. With parallel computation, the model's spatial resolution can be as fine as a 100 m grid while maintaining high computational efficiency, even for large catchments. With parameter optimization, model performance can be improved considerably. Flood events in several catchments of southern China with different drainage areas have been simulated with this system, and the results show that it has strong capability in simulating catchment flood events even in large river basins.
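The PSO parameter-optimization step can be illustrated with a minimal particle swarm optimizer (a generic sketch calibrating two hypothetical runoff parameters against a synthetic misfit function; not the Liuxihe Model code):

```python
import random

def pso(objective, dim, bounds, n_particles=20, iters=100, seed=0,
        w=0.7, c1=1.5, c2=1.5):
    # Minimal particle swarm optimization: each particle tracks its personal
    # best, and all particles are attracted toward the global best.
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Calibrate two hypothetical parameters against a quadratic misfit surrogate.
best, err = pso(lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.8) ** 2,
                dim=2, bounds=(0.0, 1.0))
```

In a real calibration the objective would be the misfit between simulated and observed hydrographs, with each evaluation requiring a full model run, which is why the paper pairs PSO with parallel computing.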
NASA Astrophysics Data System (ADS)
Cardall, Christian Y.; Budiardja, Reuben D.; Endeve, Eirik; Mezzacappa, Anthony
2014-02-01
GenASiS (General Astrophysical Simulation System) is a new code being developed initially and primarily, though by no means exclusively, for the simulation of core-collapse supernovae on the world's leading capability supercomputers. This paper—the first in a series—demonstrates a centrally refined coordinate patch suitable for gravitational collapse and documents methods for compressible nonrelativistic hydrodynamics. We benchmark the hydrodynamics capabilities of GenASiS against many standard test problems; the results illustrate the basic competence of our implementation, demonstrate the strengths and limitations of the HLLC relative to the HLL Riemann solver in a number of interesting cases, and provide preliminary indications of the code's ability to scale and to function with cell-by-cell fixed-mesh refinement.
NASA Technical Reports Server (NTRS)
Szuch, J. R.; Krosel, S. M.; Bruton, W. M.
1982-01-01
A systematic, computer-aided, self-documenting methodology for developing hybrid computer simulations of turbofan engines is presented. The methodology makes use of a host program that can run on a large digital computer and a machine-dependent target (hybrid) program. The host program performs all the calculations and data manipulations that are needed to transform user-supplied engine design information to a form suitable for the hybrid computer. The host program also trims the self-contained engine model to match specified design-point information. Part I contains a general discussion of the methodology, describes a test case, and presents comparisons between hybrid simulation and specified engine performance data. Part II, a companion document, contains documentation, in the form of computer printouts, for the test case.
NASA Technical Reports Server (NTRS)
Colarco, Peter; daSilva, Arlindo; Ginoux, Paul; Chin, Mian; Lin, S.-J.
2003-01-01
Mineral dust aerosols have radiative impacts on Earth's atmosphere, have been implicated in local and regional air quality issues, and have been identified as vectors for transporting disease pathogens and bringing mineral nutrients to terrestrial and oceanic ecosystems. We present for the first time dust simulations using online transport and meteorological analysis in the NASA Finite-Volume General Circulation Model (FVGCM). Our dust formulation follows the formulation in the offline Georgia Institute of Technology-Goddard Global Ozone Chemistry Aerosol Radiation and Transport Model (GOCART) using a topographical source for dust emissions. We compare results of the FVGCM simulations with GOCART, as well as with in situ and remotely sensed observations. Additionally, we estimate budgets of dust emission and transport into various regions.
2D simulations based on general time-dependent reciprocal relation for LFEIT.
Karadas, Mursel; Gencer, Nevzat Guneri
2015-08-01
Lorentz field electrical impedance tomography (LFEIT) is a newly proposed technique for imaging the conductivity of tissues by measuring the electromagnetic induction under an ultrasound pressure field. In this paper, the theory and numerical simulations of LFEIT are reported based on the general time-dependent formulation. In LFEIT, a phased-array ultrasound probe is used to introduce a current distribution inside a conductive body. A velocity current arises from the movement of the conductive particles under a static magnetic field. To sense this current, a receiver coil configuration that surrounds the volume conductor is utilized. The finite element method (FEM) is used to carry out the simulations of LFEIT. It is shown that LFEIT can be used to reconstruct the conductivity even with up to 50% perturbation of the initial conductivity distribution. PMID:26736569
NASA Astrophysics Data System (ADS)
Shiokawa, Hotaka; Gammie, C. F.; Dolence, J.; Noble, S. C.
2013-01-01
We perform global general relativistic magnetohydrodynamics (GRMHD) simulations of non-radiative, magnetized disks that are initially tilted with respect to the black hole's spin axis. We run the simulations with different torus sizes and tilt angles at two different resolutions. We also perform radiative transfer using a Monte Carlo based code that includes synchrotron emission, absorption, and Compton scattering to obtain spectral energy distributions and light curves. Similar work was done by Fragile et al. (2007) and Dexter & Fragile (2012) to model the supermassive black hole Sgr A* with tilted accretion disks. We compare the results of our fully conservative hydrodynamics code, and spectra that extend into the X-ray, with theirs.
SIMPSON: A General Simulation Program for Solid-State NMR Spectroscopy
NASA Astrophysics Data System (ADS)
Bak, Mads; Rasmussen, Jimmy T.; Nielsen, Niels Chr.
2000-12-01
A computer program for fast and accurate numerical simulation of solid-state NMR experiments is described. The program is designed to emulate an NMR spectrometer by letting the user specify high-level NMR concepts such as spin systems, nuclear spin interactions, RF irradiation, free precession, phase cycling, coherence-order filtering, and implicit/explicit acquisition. These elements are implemented using the Tcl scripting language to ensure a minimum of programming overhead and direct interpretation without the need for compilation, while maintaining the flexibility of a full-featured programming language. Basically, there are no intrinsic limitations to the number of spins, types of interactions, sample conditions (static or spinning, powders, uniaxially oriented molecules, single crystals, or solutions), and the complexity or number of spectral dimensions for the pulse sequence. The applicability ranges from simple 1D experiments to advanced multiple-pulse and multiple-dimensional experiments, series of simulations, parameter scans, complex data manipulation/visualization, and iterative fitting of simulated to experimental spectra. A major effort has been devoted to optimizing the computation speed using state-of-the-art algorithms for the time-consuming parts of the calculations implemented in the core of the program using the C programming language. Modification and maintenance of the program are facilitated by releasing the program as open source software (General Public License) currently at http://nmr.imsb.au.dk. The general features of the program are demonstrated by numerical simulations of various aspects for REDOR, rotational resonance, DRAMA, DRAWS, HORROR, C7, TEDOR, POST-C7, CW decoupling, TPPM, F-SLG, SLF, SEMA-CP, PISEMA, RFDR, QCPMG-MAS, and MQ-MAS experiments.
GENERAL-RELATIVISTIC SIMULATIONS OF THREE-DIMENSIONAL CORE-COLLAPSE SUPERNOVAE
Ott, Christian D.; Abdikamalov, Ernazar; Moesta, Philipp; Haas, Roland; Drasco, Steve; O'Connor, Evan P.; Reisswig, Christian; Meakin, Casey A.; Schnetter, Erik
2013-05-10
We study the three-dimensional (3D) hydrodynamics of the post-core-bounce phase of the collapse of a 27 M☉ star and pay special attention to the development of the standing accretion shock instability (SASI) and neutrino-driven convection. To this end, we perform 3D general-relativistic simulations with a three-species neutrino leakage scheme. The leakage scheme captures the essential aspects of neutrino cooling, heating, and lepton number exchange as predicted by radiation-hydrodynamics simulations. The 27 M☉ progenitor was studied in 2D by Mueller et al., who observed strong growth of the SASI while neutrino-driven convection was suppressed. In our 3D simulations, neutrino-driven convection grows from numerical perturbations imposed by our Cartesian grid. It becomes the dominant instability and leads to large-scale non-oscillatory deformations of the shock front. These will result in strongly aspherical explosions without the need for large-scale SASI shock oscillations. Low-l-mode SASI oscillations are present in our models, but saturate at small amplitudes that decrease with increasing neutrino heating and vigor of convection. Our results, in agreement with simpler 3D Newtonian simulations, suggest that once neutrino-driven convection is started, it is likely to become the dominant instability in 3D. Whether it is the primary instability after bounce will ultimately depend on the physical seed perturbations present in the cores of massive stars. The gravitational wave signal, which we extract and analyze for the first time from 3D general-relativistic models, will serve as an observational probe of the postbounce dynamics and, in combination with neutrinos, may allow us to determine the primary hydrodynamic instability.
NASA Astrophysics Data System (ADS)
Barnes, J. R.; Pollack, J. B.; Haberle, R. M.; Leovy, C. B.; Zurek, R. W.; Lee, H.; Schaeffer, J.
1993-02-01
A large set of experiments performed with the NASA Ames Mars General Circulation Model is analyzed to determine the properties, structure, and dynamics of the simulated transient baroclinic eddies. There is strong transient baroclinic eddy activity in the extratropics of the Northern Hemisphere during the northern autumn, winter, and spring seasons. The eddy activity remains strong for very large dust loadings, though it shifts northward. The eastward propagating eddies are characterized by zonal wavenumbers of 1-4 and periods of about 2-10 days. The properties of the GCM baroclinic eddies in the northern extratropics are compared in detail with analogous properties inferred from Viking Lander meteorology observations.
An in-flight simulation of lateral control nonlinearities. [for general aviation aircraft
NASA Technical Reports Server (NTRS)
Ellis, D. R.; Tilak, N. W.
1975-01-01
An in-flight simulation program was conducted to explore, in a generalized way, the influence of spoiler-type roll-control nonlinearities on handling qualities. The roll responses studied typically featured a dead zone or very small effectiveness for small control inputs, a very high effectiveness for mid-range deflections, and low effectiveness again for large inputs. A linear force gradient with no detectable breakout force was provided. Given otherwise good handling characteristics, it was found that moderate nonlinearities of the types tested might yield acceptable roll control, but the best level of handling qualities is obtained with linear, aileron-like control.
NASA Astrophysics Data System (ADS)
Aschaffenburg, Daniel J.; Williams, Michael R. C.; Schmuttenmaer, Charles A.
2016-05-01
Terahertz time-domain spectroscopic polarimetry has been used to measure the polarization state of all spectral components in a broadband THz pulse upon transmission through generalized anisotropic media consisting of two-dimensional arrays of lithographically defined Archimedean spirals. The technique allows a full determination of the frequency-dependent, complex-valued transmission matrix and eigenpolarizations of the spiral arrays. Measurements were made on a series of spiral array orientations. The frequency-dependent transmission matrix elements as well as the eigenpolarizations were determined, and the eigenpolarizations were found to be elliptically corotating, as expected from their symmetry. Numerical simulations are in quantitative agreement with measured spectra.
A general concurrent algorithm for plasma particle-in-cell simulation codes
NASA Technical Reports Server (NTRS)
Liewer, Paulett C.; Decyk, Viktor K.
1989-01-01
The general concurrent particle-in-cell (GCPIC) algorithm has been used to implement an electrostatic particle-in-cell code on a 32-node hypercube parallel computer. The GCPIC algorithm decomposes the PIC code by dividing the particle simulation physical domain into subdomains that are equal in number to the number of processors; all subdomains will accordingly possess approximately equal numbers of particles. The portion of the code which updates particle positions and velocities is nearly 100 percent efficient when the number of particles increases linearly with that of hypercube processors.
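The decomposition step can be sketched in one dimension (a schematic illustration of the GCPIC idea, not the hypercube implementation): cut the physical domain into as many subdomains as processors and bin particles by position, so a roughly uniform plasma gives each processor a roughly equal particle load.

```python
def decompose(particle_positions, domain_length, n_procs):
    # Assign each particle index to the processor owning its subdomain.
    # Equal-width subdomains balance the load when particles are roughly
    # uniformly distributed, which is the regime the GCPIC paper assumes.
    sub = domain_length / n_procs
    owners = [[] for _ in range(n_procs)]
    for i, x in enumerate(particle_positions):
        p = min(int(x / sub), n_procs - 1)  # clamp the x == domain_length edge case
        owners[p].append(i)
    return owners

owners = decompose([0.1, 0.4, 0.9, 1.0], domain_length=1.0, n_procs=2)
# particles 0 and 1 fall in [0, 0.5); particles 2 and 3 in [0.5, 1.0]
```

The particle push is then embarrassingly parallel within each subdomain; interprocessor communication is needed only for particles that cross subdomain boundaries and for the field solve.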
Menin, O H; Martinez, A S; Costa, A M
2016-05-01
A generalized simulated annealing algorithm, combined with a suitable smoothing regularization function, is used to solve the inverse problem of X-ray spectrum reconstruction from attenuation data. The approach is to set the initial acceptance and visitation temperatures and to standardize the terms of the objective function so as to automate the algorithm and accommodate different spectral ranges. Experiments with both numerical and measured attenuation data are presented. Results show that the algorithm reconstructs spectrum shapes accurately. It should be noted that in this algorithm the regularization function was formulated to guarantee a smooth spectrum; thus, the presented technique does not apply to X-ray spectra where characteristic radiation is present. PMID:26943902
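The flavor of the method, annealing a non-negative spectrum under a data-misfit-plus-smoothness objective, can be sketched with classical Metropolis annealing (note the paper uses generalized, Tsallis-statistics annealing with distinct acceptance and visitation temperatures; the forward model and all numbers here are made up):

```python
import math
import random

def anneal(objective, n, step=0.1, iters=20000, t0=1.0, cooling=0.9995, seed=0):
    # Classical Metropolis annealing over a non-negative vector of n bins.
    rng = random.Random(seed)
    x = [1.0] * n
    e = objective(x)
    t = t0
    for _ in range(iters):
        i = rng.randrange(n)
        old = x[i]
        x[i] = max(0.0, old + rng.gauss(0.0, step))  # keep spectrum non-negative
        e_new = objective(x)
        if e_new < e or rng.random() < math.exp((e - e_new) / t):
            e = e_new   # accept: always downhill, uphill with Boltzmann probability
        else:
            x[i] = old  # reject
        t *= cooling
    return x, e

# Hypothetical forward model: transmitted intensity through absorbers of
# increasing thickness, for three spectral bins with attenuation
# coefficients mus.
mus = [0.2, 0.5, 1.0]

def transmit(x):
    return [sum(xi * math.exp(-m * th) for xi, m in zip(x, mus))
            for th in range(5)]

true_spectrum = [2.0, 3.0, 2.0]
data = transmit(true_spectrum)
lam = 0.01  # smoothing weight

def objective(x):
    misfit = sum((m - d) ** 2 for m, d in zip(transmit(x), data))
    rough = sum((x[i + 1] - x[i]) ** 2 for i in range(len(x) - 1))
    return misfit + lam * rough  # data fit plus smoothness regularization

x_hat, e = anneal(objective, n=3)
```

The squared-difference roughness term is why the technique favors smooth spectra and cannot represent sharp characteristic-radiation peaks.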
General Relativistic Simulations of Magnetized Plasmas around Merging Supermassive Black Holes
NASA Astrophysics Data System (ADS)
Giacomazzo, Bruno; Baker, John G.; Miller, M. Coleman; Reynolds, Christopher S.; van Meter, James R.
2012-06-01
Coalescing supermassive black hole binaries are produced by the mergers of galaxies and are the most powerful sources of gravitational waves accessible to space-based gravitational observatories. Some such mergers may occur in the presence of matter and magnetic fields and hence generate an electromagnetic counterpart. In this Letter, we present the first general relativistic simulations of magnetized plasma around merging supermassive black holes using the general relativistic magnetohydrodynamic code Whisky. By considering different magnetic field strengths, going from non-magnetically dominated to magnetically dominated regimes, we explore how magnetic fields affect the dynamics of the plasma and the possible emission of electromagnetic signals. In particular, we observe a total amplification of the magnetic field of ~2 orders of magnitude, which is driven by the accretion onto the binary and that leads to much stronger electromagnetic signals, more than a factor of 10^4 larger than comparable calculations done in the force-free regime where such amplifications are not possible.
General Relativistic Simulations of Magnetized Plasmas around Merging Supermassive Black Holes
NASA Astrophysics Data System (ADS)
Giacomazzo, Bruno; Baker, John; Miller, M. Coleman; Reynolds, Christopher; van Meter, James
2012-03-01
Coalescing supermassive black hole binaries are produced by the mergers of galaxies and are among the most powerful sources of gravitational waves that can be detected by space-based gravitational observatories. In many cases it is believed that the merger of supermassive black holes may happen in the presence of matter and magnetic fields, in which case the gravitational wave signal may be accompanied by an electromagnetic counterpart. We present the first general relativistic simulations of a magnetized plasma around merging supermassive black holes using the general relativistic magnetohydrodynamic code Whisky. By considering different magnetic field strengths, going from non-magnetically dominated to magnetically dominated regimes, we explore how magnetic fields affect the dynamics of the plasma and the possible emission of electromagnetic signals.
NASA Astrophysics Data System (ADS)
Weller, Robert A.
1999-06-01
This paper describes a suite of computational tools for general-purpose ion-solid calculations, which has been implemented in the platform-independent computational environment Mathematica®. Although originally developed for medium energy work (beam energies < 300 keV), they are suitable for general, classical, non-relativistic calculations. Routines are available for stopping power, Rutherford and Lenz-Jensen (screened) cross sections, sputtering yields, small-angle multiple scattering, and back-scattering-spectrum simulation and analysis. Also included are a full range of supporting functions, as well as easily accessible atomic mass and other data on all the stable isotopes in the periodic table. The functions use common calling protocols, recognize elements and isotopes by symbolic names and, wherever possible, return symbolic results for symbolic inputs, thereby facilitating further computation. A new paradigm for the representation of backscattering spectra is introduced.
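As an example of the kind of routine such a suite provides, the unscreened Rutherford differential cross section can be evaluated directly from its closed form (a sketch in Python rather than Mathematica, using the common e² = 1.44 eV·nm shorthand; this is an illustration, not the package's own code):

```python
import math

def rutherford_dcs(z1, z2, energy_kev, theta_deg):
    # Unscreened Rutherford differential cross section in the CM frame:
    #   dsigma/dOmega = (Z1*Z2*e^2 / (4*E))^2 / sin^4(theta/2)
    # with e^2 = 1.44 eV*nm = 1.44e-3 keV*nm; result in nm^2/sr.
    e2 = 1.44e-3  # keV*nm
    theta = math.radians(theta_deg)
    return (z1 * z2 * e2 / (4.0 * energy_kev)) ** 2 / math.sin(theta / 2.0) ** 4

# The cross section falls steeply with scattering angle:
dcs_10 = rutherford_dcs(2, 79, 300.0, 10.0)    # He on Au at 300 keV, 10 deg
dcs_170 = rutherford_dcs(2, 79, 300.0, 170.0)  # near-backscattering
```

At small angles and low energies the screened (Lenz-Jensen) cross section the suite also provides departs substantially from this bare-Coulomb form.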
Ramirez, A; Pasyanos, M; Franz, G A
2004-09-17
The Stochastic Engine (SE) is a data driven computer simulation tool for predicting the characteristics of complex systems. The SE integrates accurate simulators with the Monte Carlo Markov Chain (MCMC) approach (a stochastic inverse technique) to identify alternative models that are consistent with available data and ranks these alternatives according to their probabilities. Implementation of the SE is currently cumbersome owing to the need to customize the pre-processing and processing steps that are required for a specific application. This project widens the applicability of the Stochastic Engine by generalizing some aspects of the method (i.e. model-to-data transformation types, configuration, model representation). We have generalized several of the transformations that are necessary to match the observations to proposed models. These transformations are sufficiently general not to pertain to any single application. This approach provides a framework that increases the efficiency of the SE implementation. The overall goal is to reduce response time and make the approach as "plug-and-play" as possible, and will result in the rapid accumulation of new data types for a host of both earth science and non-earth science problems. When adapting the SE approach to a specific application, there are various pre-processing and processing steps that are typically needed to run a specific problem. Many of these steps are common to a wide variety of specific applications. Here we list and describe several data transformations that are common to a variety of subsurface inverse problems. A subset of these steps has been developed in a generalized form such that they could be used with little or no modification in a wide variety of specific applications. This work was funded by the LDRD Program (tracking number 04-ERD-083).
Broderick, Avery E.; McKinney, Jonathan C. E-mail: jmckinne@stanford.ed
2010-12-10
It is now possible to compare global three-dimensional general relativistic magnetohydrodynamic (GRMHD) jet formation simulations directly to multi-wavelength polarized VLBI observations of the pc-scale structure of active galactic nucleus (AGN) jets. Unlike the jet emission, which requires post hoc modeling of the nonthermal electrons, the Faraday rotation measures (RMs) depend primarily upon simulated quantities and thus provide a direct way to confront simulations with observations. We compute RM distributions of a three-dimensional global GRMHD jet formation simulation, extrapolated in a self-consistent manner to ~10 pc scales, and explore the dependence upon model and observational parameters, emphasizing the signatures of structures generic to the theory of MHD jets. With typical parameters, we find that it is possible to reproduce the observed magnitudes and many of the structures found in AGN jet RMs, including the presence of transverse RM gradients. In our simulations, the RMs are generated in the circum-jet material, hydrodynamically a smooth extension of the jet itself, containing ordered toroidally dominated magnetic fields. This results in a particular bilateral morphology that is unlikely to arise due to Faraday rotation in distant foreground clouds. However, critical to efforts to probe the Faraday screen will be resolving the transverse jet structure. Therefore, the RMs of radio cores may not be reliable indicators of the properties of the rotating medium. Finally, we are able to constrain the particle content of the jet, finding that at pc scales AGN jets are electromagnetically dominated, with roughly 2% of the comoving energy in nonthermal leptons and much less in baryons.
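The rotation measure itself is a simple line-of-sight integral, RM = 0.812 ∫ n_e B_∥ dl (n_e in cm⁻³, B_∥ in μG, dl in pc, RM in rad m⁻²). A discretized sketch with hypothetical sheath values (not taken from the simulation) shows how a sign flip of the toroidal field across the jet axis produces a transverse RM gradient:

```python
def rotation_measure(n_e, b_par, dl):
    # RM = 0.812 * sum(n_e * B_parallel * dl) over path segments, with
    # n_e in cm^-3, B_parallel in microgauss, dl in pc, RM in rad/m^2.
    return 0.812 * sum(n * b * d for n, b, d in zip(n_e, b_par, dl))

# Hypothetical circum-jet sheath: a 1 pc path split into 4 segments; the
# toroidal field's line-of-sight component flips sign across the jet axis.
rm_left = rotation_measure([10.0] * 4, [+5.0] * 4, [0.25] * 4)
rm_right = rotation_measure([10.0] * 4, [-5.0] * 4, [0.25] * 4)
# opposite-sign RMs on either side of the axis: a transverse RM gradient
```

This is why RMs probe the simulated density and field directly, whereas synchrotron emission additionally requires a model of the nonthermal electron population.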
A Novel Approach for Modeling Chemical Reaction in Generalized Fluid System Simulation Program
NASA Technical Reports Server (NTRS)
Sozen, Mehmet; Majumdar, Alok
2002-01-01
The Generalized Fluid System Simulation Program (GFSSP) is a computer code developed at NASA Marshall Space Flight Center for analyzing steady state and transient flow rates, pressures, temperatures, and concentrations in a complex flow network. The code, which performs system level simulation, can handle compressible and incompressible flows as well as phase change and mixture thermodynamics. Thermodynamic and thermophysical property programs, GASP, WASP and GASPAK provide the necessary data for fluids such as helium, methane, neon, nitrogen, carbon monoxide, oxygen, argon, carbon dioxide, fluorine, hydrogen, water, a hydrogen, isobutane, butane, deuterium, ethane, ethylene, hydrogen sulfide, krypton, propane, xenon, several refrigerants, nitrogen trifluoride and ammonia. The program which was developed out of need for an easy to use system level simulation tool for complex flow networks, has been used for the following purposes to name a few: Space Shuttle Main Engine (SSME) High Pressure Oxidizer Turbopump Secondary Flow Circuits, Axial Thrust Balance of the Fastrac Engine Turbopump, Pressurized Propellant Feed System for the Propulsion Test Article at Stennis Space Center, X-34 Main Propulsion System, X-33 Reaction Control System and Thermal Protection System, and International Space Station Environmental Control and Life Support System design. There has been an increasing demand for implementing a combustion simulation capability into GFSSP in order to increase its system level simulation capability of a liquid rocket propulsion system starting from the propellant tanks up to the thruster nozzle for spacecraft as well as launch vehicles. The present work was undertaken for addressing this need. The chemical equilibrium equations derived from the second law of thermodynamics and the energy conservation equation derived from the first law of thermodynamics are solved simultaneously by a Newton-Raphson method. The numerical scheme was implemented as a User
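The solution strategy, Newton-Raphson iteration on the coupled nonlinear equations, can be sketched on a toy two-variable system (a mass-action-like constraint plus a conservation constraint, as hypothetical stand-ins for the equilibrium and energy equations; illustrative only, not GFSSP's solver):

```python
def newton_raphson(residual, x0, tol=1e-10, max_iter=50, h=1e-7):
    # Newton-Raphson for a two-variable nonlinear system, using a
    # forward-difference Jacobian.
    x, y = x0
    for _ in range(max_iter):
        f0, g0 = residual(x, y)
        if abs(f0) < tol and abs(g0) < tol:
            break
        fx = (residual(x + h, y)[0] - f0) / h
        fy = (residual(x, y + h)[0] - f0) / h
        gx = (residual(x + h, y)[1] - g0) / h
        gy = (residual(x, y + h)[1] - g0) / h
        det = fx * gy - fy * gx
        x -= (f0 * gy - g0 * fy) / det  # Cramer's rule for the Newton step
        y -= (g0 * fx - f0 * gx) / det
    return x, y

# Toy "equilibrium" system: x*y = K (mass-action-like) and x + y = C
# (conservation), solved simultaneously.
K, C = 2.0, 3.0
x, y = newton_raphson(lambda x, y: (x * y - K, x + y - C), (0.5, 2.5))
# converges to the root near (1, 2)
```

The real problem couples many species' equilibrium relations with the energy balance, but each Newton step has the same structure: evaluate residuals, form the Jacobian, and solve the linear system for the update.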
NASA Astrophysics Data System (ADS)
Guzewich, Scott D.; Toigo, Anthony D.; Richardson, Mark I.; Newman, Claire E.; Talaat, Elsayed R.; Waugh, Darryn W.; McConnochie, Timothy H.
2013-05-01
Limb-scanning observations with the Mars Climate Sounder and Thermal Emission Spectrometer (TES) have identified discrete layers of enhanced dust opacity well above the boundary layer and a mean vertical structure of dust opacity very different from the expectation of well-mixed dust in the lowest 1-2 scale heights. To assess the impact of this vertical dust opacity profile on atmospheric properties, we developed a TES limb-scan observation-based three-dimensional and time-evolving dust climatology for use in forcing general circulation models (GCMs). We use this to force the MarsWRF GCM and compare with simulations that use a well-mixed (Conrath-ν) vertical dust profile and Mars Climate Database version 4 (MCD) horizontal distribution dust opacity forcing function. We find that simulated temperatures using the TES-derived forcing yield a 1.18 standard deviation closer match to TES temperature retrievals than a MarsWRF simulation using MCD forcing. The climatological forcing yields significant changes to many large-scale features of the simulated atmosphere. Notably the high-latitude westerly jet speeds are 10-20 m/s higher, polar warming collar temperatures are 20-30 K warmer near northern winter solstice and tilted more strongly poleward, the middle and lower atmospheric meridional circulations are partially decoupled, the migrating diurnal tide exhibits destructive interference and is weakened by 50% outside of equinox, and the southern hemisphere wave number 1 stationary wave is strengthened by up to 4 K (45%). We find the vertical dust distribution is an important factor for Martian lower and middle atmospheric thermal structure and circulation that cannot be neglected in analysis and simulation of the Martian atmosphere.
Numerical simulation of the general circulation of the atmosphere of Titan.
Hourdin, F; Talagrand, O; Sadourny, R; Courtin, R; Gautier, D; McKay, C P
1995-10-01
The atmospheric circulation of Titan is investigated with a general circulation model. The representation of the large-scale dynamics is based on a grid point model developed and used at Laboratoire de Météorologie Dynamique for climate studies. The code also includes an accurate representation of radiative heating and cooling by molecular gases and haze as well as a parametrization of the vertical turbulent mixing of momentum and potential temperature. Long-term simulations of the atmospheric circulation are presented. Starting from a state of rest, the model spontaneously produces a strong superrotation with prograde equatorial winds (i.e., in the same sense as the assumed rotation of the solid body) increasing from the surface to reach 100 m sec-1 near the 1-mbar pressure level. Those equatorial winds are in very good agreement with some indirect observations, especially those of the 1989 occultation of Star 28-Sgr by Titan. On the other hand, the model simulates latitudinal temperature contrasts in the stratosphere that are significantly weaker than those observed by Voyager 1, which, we suggest, may be partly due to the nonrepresentation of the spatial and temporal variations of the abundances of molecular species and haze. We present diagnostics of the simulated atmospheric circulation underlining the importance of the seasonal cycle, and a tentative explanation for the creation and maintenance of the atmospheric superrotation based on a careful angular momentum budget. PMID:11538593
NASA Astrophysics Data System (ADS)
Zhang, Liang; Cheng, Lü; Kiet, Tieu; Zhao, Xing; Pei, Lin-Qing; Guillaume, Michal
2015-08-01
Molecular dynamics (MD) simulations are performed to investigate the effects of stress on the generalized stacking fault (GSF) energy of three fcc metals (Cu, Al, and Ni). The simulation model is deformed by uniaxial tension or compression in each of the [111], [11-2], and [1-10] directions, respectively, before shifting the lattice to calculate the GSF curve. Simulation results show that the values of the unstable stacking fault energy (γusf), stable stacking fault energy (γsf), and unstable twin fault energy (γutf) of the three elements can change with the preloaded tensile or compressive stress in different directions. The ratio γsf/γusf, which is related to the energy barrier for full dislocation nucleation, and the ratio γutf/γusf, which is related to the energy barrier for twin formation, are each plotted as a function of the preloading stress. The results of this study reveal that the stress state can change the energy barrier for defect nucleation in the crystal lattice, and thereby can play an important role in the deformation mechanism of nanocrystalline materials. Project supported by Australia Research Council Discovery Projects (Grant No. DP130103973). L. Zhang, X. Zhao and L. Q. Pei were financially supported by the China Scholarship Council (CSC).
Martian atmospheric gravity waves simulated by a high-resolution general circulation model
NASA Astrophysics Data System (ADS)
Kuroda, Takeshi; Yiǧit, Erdal; Medvedev, Alexander S.; Hartogh, Paul
2016-07-01
Gravity waves (GWs) significantly affect temperature and wind fields in the Martian middle and upper atmosphere. They are also one of the observational targets of the MAVEN mission. We report on the first simulations with a high-resolution general circulation model (GCM) and present global distributions of small-scale GWs in the Martian atmosphere. The simulated GW-induced temperature variances are in good agreement with available radio occultation data in the lower atmosphere between 10 and 30 km. For the northern winter solstice, the model reveals a latitudinal asymmetry with stronger wave generation in the winter hemisphere and two distinctive sources of GWs: mountainous regions and the meandering winter polar jet. Orographic GWs are filtered upon propagating upward, and the mesosphere is primarily dominated by harmonics with faster horizontal phase velocities. Wave fluxes are directed mainly against the local wind. GW dissipation in the upper mesosphere generates a body force per unit mass of tens of m s^{-1} per Martian solar day (sol^{-1}), which tends to close the simulated jets. The results represent a realistic surrogate for missing observations, which can be used for constraining GW parameterizations and validating GCMs.
NASA Astrophysics Data System (ADS)
Marston, Brad; Fox-Kemper, Baylor; Skitka, Joe
Sub-grid turbulence models for planetary boundary layers are typically constructed additively, starting with local flow properties and adding non-local (KPP) or higher-order (Mellor-Yamada) parameters until a desired level of predictive capacity is achieved or a manageable threshold of complexity is surpassed. Such approaches are necessarily limited in general settings, such as global circulation models, because they are optimized for particular flow phenomena. Direct statistical simulation (DSS), based upon an expansion in equal-time cumulants, offers the prospect of a turbulence model and an investigative tool that is equally applicable to all flow types and able to take advantage of the wealth of nonlocal information in any flow. We investigate the feasibility of a second-order closure (CE2) by performing simulations of the ocean boundary layer in a quasi-linear approximation for which CE2 is exact. As oceanographic examples, wind-driven Langmuir turbulence and thermal convection are studied by comparing the statistics of quasi-linear and fully nonlinear simulations. We also characterize the computational advantages and physical uncertainties of CE2 defined on a reduced basis determined via proper orthogonal decomposition (POD) of the flow fields. Supported in part by NSF DMR-1306806.
A simulation study of control and display requirements for zero-experience general aviation pilots
NASA Technical Reports Server (NTRS)
Stewart, Eric C.
1993-01-01
The purpose of this simulation study was to define the basic human factors requirements for operating an airplane in all weather conditions. The basic human factors requirements are defined as those for an operator who is a complete novice at airplane operations but who is assumed to have automobile driving experience. These operators have thus had no piloting experience or training of any kind. The human factors requirements are developed for a practical task which includes all of the basic maneuvers required to go from one airport to another airport in limited visibility conditions. The task was quite demanding, including following a precise path with climbing and descending turns while simultaneously changing airspeed. The ultimate goal of this research is to increase the utility of general aviation airplanes - that is, to make them a practical mode of transportation for a much larger segment of the general population. This can be accomplished by reducing the training and proficiency requirements of pilots while improving the level of safety. It is believed that advanced technologies such as fly-by-wire (or fly-by-light) and head-up pictorial displays can be of much greater benefit to the general aviation pilot than to the full-time, professional pilot.
NASA Astrophysics Data System (ADS)
Vial, Jessica; Osborn, Tim J.
2012-07-01
An assessment of six coupled Atmosphere-Ocean General Circulation Models (AOGCMs) is undertaken in order to evaluate their ability in simulating winter atmospheric blocking highs in the northern hemisphere. The poor representation of atmospheric blocking in climate models is a long-standing problem (e.g. D'Andrea et al. in Clim Dyn 4:385-407, 1998), and despite considerable effort in model development, there is only a moderate improvement in blocking simulation. A modified version of the Tibaldi and Molteni (in Tellus A 42:343-365, 1990) blocking index is applied to daily averaged 500 hPa geopotential fields, from the ERA-40 reanalysis and as simulated by the climate models, during the winter periods from 1957 to 1999. The two preferred regions of blocking development, in the Euro-Atlantic and North Pacific, are relatively well captured by most of the models. However, the prominent error in blocking simulations consists of an underestimation of the total frequency of blocking episodes over both regions. A more detailed analysis revealed that this error was due to an insufficient number of medium spells and long-lasting episodes, and a shift in blocking lifetime distributions towards shorter blocks in the Euro-Atlantic sector. In the Pacific, results are more diverse; the models are equally likely to overestimate or underestimate the frequency at different spell lengths. Blocking spatial signatures are relatively well simulated in the Euro-Atlantic sector, while errors in the intensity and geographical location of the blocks emerge in the Pacific. The impact of models' systematic errors on blocking simulation has also been analysed. The time-mean atmospheric circulation biases affect the frequency of blocking episodes, and the maximum event duration in the Euro-Atlantic region, while they sometimes cause geographical mislocations in the Pacific sector. The analysis of the systematic error in time-variability has revealed a negative relationship between the high
McCabe, G.J.; Dettinger, M.D.
1995-01-01
General circulation model (GCM) simulations of atmospheric circulation are more reliable than GCM simulations of temperature and precipitation. In this study, temporal correlations between 700 hPa height anomalies and simulated winter precipitation at eight locations in the conterminous United States are compared with corresponding correlations in observations. The objectives are to i) characterize the relations between atmospheric circulation and winter precipitation simulated by the GFDL GCM for selected locations in the conterminous United States, ii) determine whether these relations are similar to those found in observations of the actual climate system, and iii) determine if GFDL-simulated precipitation is forced by the same circulation patterns as in the real atmosphere. -from Authors
Venus atmosphere simulated by a high-resolution general circulation model
NASA Astrophysics Data System (ADS)
Sugimoto, Norihiko
2016-07-01
An atmospheric general circulation model (AGCM) for Venus based on AFES (AGCM For the Earth Simulator) has been developed (e.g., Sugimoto et al., 2014a) and a very high-resolution simulation is performed. The highest resolution of the model is T319L120: 960 × 480 horizontal grid points (grid intervals of about 40 km) with 120 vertical layers (layer intervals of about 1 km). In the model, the atmosphere is dry and forced by solar heating with diurnal and semi-diurnal components. The infrared radiative process is simplified by adopting a Newtonian cooling approximation. The temperature is relaxed to a prescribed horizontally uniform temperature distribution, in which a layer with the almost neutral static stability observed in the Venus atmosphere is present. A fast zonal wind in solid-body rotation is given as the initial state. Starting from this idealized superrotation, the model atmosphere reaches a quasi-equilibrium state within 1 Earth year and this state is stably maintained for more than 10 Earth years. The zonal-mean zonal flow with weak midlatitude jets has an almost constant velocity of 120 m/s at latitudes between 45°S and 45°N at the cloud-top levels, which agrees very well with observations. In the cloud layer, baroclinic waves develop continuously at midlatitudes and generate Rossby-type waves at the cloud top (Sugimoto et al., 2014b). At the polar region, a warm polar vortex zonally surrounded by a cold latitude band (cold collar) is well reproduced (Ando et al., 2016). As for horizontal kinetic energy spectra, the divergent component is larger than the rotational component over a broad range (k>10), in contrast to the Earth (Kashimura et al., in preparation). Finally, recent results for thermal tides and small-scale waves will be shown in the presentation. Sugimoto, N. et al. (2014a), Baroclinic modes in the Venus atmosphere simulated by GCM, Journal of Geophysical Research: Planets, Vol. 119, p1950-1968. Sugimoto, N. et al. (2014b), Waves in a Venus general
NASA Technical Reports Server (NTRS)
DiSalvo, Roberto; Deaconu, Stelu; Majumdar, Alok
2006-01-01
One of the goals of this program was to develop the experimental and analytical/computational tools required to predict the flow of non-Newtonian fluids through the various system components of a propulsion system: pipes, valves, pumps, etc. To achieve this goal we chose to augment the capabilities of NASA's Generalized Fluid System Simulation Program (GFSSP) software. GFSSP is a general-purpose computer program designed to calculate steady state and transient pressure and flow distributions in a complex fluid network. While the current version of the GFSSP code is able to handle various system components, the implicit assumption in the code is that the fluids in the system are Newtonian. To extend the capability of the code to non-Newtonian fluids, such as silica gelled fuels and oxidizers, modifications to the momentum equations of the code have been performed. We have successfully implemented in GFSSP flow equations for fluids with power-law behavior. The implementation of power-law fluid behavior into the GFSSP code depends on knowledge of the two fluid coefficients, n and K. The determination of these parameters for the silica gels used in this program was performed experimentally. The n and K parameters for silica-water gels were determined experimentally at CFDRC's Special Projects Laboratory with a constant-shear-rate capillary viscometer. Batches of 8:1 (by weight) water-silica gel were mixed using CFDRC's 10-gallon gelled-propellant mixer. Prior to testing, the gel was allowed to rest in the rheometer tank for at least twelve hours to ensure that the delicate structure of the gel had sufficient time to reform. During the tests silica gel was pressure-fed and discharged through stainless steel pipes ranging from 1" to 36" in length, with three diameters: 0.0237", 0.032", and 0.047". The data collected in these tests included pressure at the tube entrance and volumetric flowrate. From these data the uncorrected shear rate, shear stress, residence time
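The power-law model the abstract describes rests on the constitutive relation tau = K * (shear rate)^n; for laminar tube flow this gives a closed-form flow rate. The sketch below uses the standard capillary-viscometry relations, not GFSSP code, and the variable names are illustrative.

```python
# Hedged sketch: laminar tube flow of a power-law fluid (tau = K * gamma_dot^n),
# the constitutive model the abstract says was added to GFSSP. Standard
# capillary-viscometry relations, not GFSSP source code.
import math

def power_law_flow_rate(delta_p, R, L, n, K):
    """Volumetric flow rate Q for a power-law fluid in a tube of radius R and
    length L under pressure drop delta_p (SI units)."""
    tau_w = delta_p * R / (2.0 * L)  # wall shear stress from a force balance
    return math.pi * R**3 * n / (3.0 * n + 1.0) * (tau_w / K) ** (1.0 / n)
```

Sanity check on the design: with n = 1 and K equal to the viscosity, the model reduces to a Newtonian fluid and Q must match Hagen-Poiseuille, Q = pi * delta_p * R^4 / (8 * mu * L).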
NASA Technical Reports Server (NTRS)
Stone, P. H.; Quirr, W. J.; Chow, S.
1977-01-01
Results are presented for a study directed to evaluate the ability of the global general circulation model of the Goddard Institute for Space Studies (GISS) in simulating seasonal differences as related to an experiment simulating the July climatology which parallels the January simulation presented by Somerville et al. (1974). The July and January simulations are compared with each other and with climatological data on seasonal changes, mainly for the Northern Hemisphere troposphere. The comparison shows that the model-generated energy cycle, distribution of winds, temperature, humidity and pressure, dynamical transports, diabatic heating, evaporation, precipitation and cloud cover are all realistic for the Northern Hemisphere troposphere in July. The model's simulation of seasonal differences is generally quite realistic since the systematic quantitative errors do not affect the simulation of relative changes, to first order. Defects that could seriously bias the model's performance in particular climate experiments are identified and discussed.
Strong scaling of general-purpose molecular dynamics simulations on GPUs
NASA Astrophysics Data System (ADS)
Glaser, Jens; Nguyen, Trung Dac; Anderson, Joshua A.; Lui, Pak; Spiga, Filippo; Millan, Jaime A.; Morse, David C.; Glotzer, Sharon C.
2015-07-01
We describe a highly optimized implementation of MPI domain decomposition in a GPU-enabled, general-purpose molecular dynamics code, HOOMD-blue (Anderson and Glotzer, 2013). Our approach is inspired by a traditional CPU-based code, LAMMPS (Plimpton, 1995), but is implemented within a code that was designed for execution on GPUs from the start (Anderson et al., 2008). The software supports short-ranged pair force and bond force fields and achieves optimal GPU performance using an autotuning algorithm. We are able to demonstrate equivalent or superior scaling on up to 3375 GPUs in Lennard-Jones and dissipative particle dynamics (DPD) simulations of up to 108 million particles. GPUDirect RDMA capabilities in recent GPU generations provide better performance in full double precision calculations. For a representative polymer physics application, HOOMD-blue 1.0 provides an effective GPU vs. CPU node speed-up of 12.5×.
NASA Technical Reports Server (NTRS)
Kaul, Upender K. (Inventor)
2009-01-01
Modeling and simulation of free and forced structural vibrations is essential to an overall structural health monitoring capability. In the various embodiments, a first principles finite-difference approach is adopted in modeling a structural subsystem such as a mechanical gear by solving elastodynamic equations in generalized curvilinear coordinates. Such a capability to generate a dynamic structural response is widely applicable in a variety of structural health monitoring systems. This capability (1) will lead to an understanding of the dynamic behavior of a structural system and hence its improved design, (2) will generate a sufficiently large space of normal and damage solutions that can be used by machine learning algorithms to detect anomalous system behavior and achieve a system design optimization and (3) will lead to an optimal sensor placement strategy, based on the identification of local stress maxima all over the domain.
NASA Astrophysics Data System (ADS)
Raskin, Cody; Owen, J. Michael
2016-11-01
We discuss a generalization of the classic Keplerian disk test problem allowing for both pressure and rotational support, as a method of testing astrophysical codes incorporating both gravitation and hydrodynamics. We argue for the inclusion of pressure in rotating disk simulations on the grounds that realistic, astrophysical disks exhibit non-negligible pressure support. We then apply this test problem to examine the performance of various smoothed particle hydrodynamics (SPH) methods incorporating a number of improvements proposed over the years to address problems noted in modeling the classical gravitation-only Keplerian disk. We also apply this test to a newly developed extension of SPH based on reproducing kernels called CRKSPH. Counterintuitively, we find that pressure support worsens the performance of traditional SPH on this problem, causing unphysical collapse away from the steady-state disk solution even more rapidly than the purely gravitational problem, whereas CRKSPH greatly reduces this error.
Ouari, Kamel; Rekioua, Toufik; Ouhrouche, Mohand
2014-01-01
In order to make wind power generation truly cost-effective and reliable, advanced control techniques must be used. In this paper, we develop a new control strategy, using a nonlinear generalized predictive control (NGPC) approach, for a DFIG-based wind turbine. The proposed control law is based on two points: an NGPC-based torque-current control loop generating the rotor reference voltage, and an NGPC-based speed control loop that provides the torque reference. In order to enhance the robustness of the controller, a disturbance observer is designed to estimate the aerodynamic torque, which is considered an unknown perturbation. Finally, a real-time simulation is carried out to illustrate the performance of the proposed controller.
Large-eddy simulation of airflow and heat transfer in a general ward of hospital
NASA Astrophysics Data System (ADS)
Hasan, Md. Farhad; Himika, Taasnim Ahmed; Molla, Md. Mamun
2016-07-01
In this paper, a popular alternative computational technique, the Lattice Boltzmann Method (LBM), has been used for Large-Eddy Simulation (LES) of airflow and heat transfer in a general ward of a hospital. Different Reynolds numbers have been used to study the airflow pattern. In the LES, the Smagorinsky turbulence model has been considered and is briefly discussed. A code validation has been performed comparing the present results with benchmark results for the lid-driven cavity problem, and the results are found to agree very well. LBM is demonstrated through simulation of forced convection inside a hospital ward with six beds and a partition in the middle, which acts like a wall. Changes in the average rate of heat transfer, in terms of average Nusselt numbers, have also been recorded in tabular format and the necessary comparisons are shown. It was found that the partition narrowed the path for airflow; once the air overcame this barrier, it had free space and turbulence appeared. For higher turbulence, the average rate of heat transfer increased, and patients near the turbulence zone released maximum heat and felt more comfortable.
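The Smagorinsky closure mentioned above enters an LBM solver as a local increase of the BGK relaxation time. A minimal sketch in lattice units (dx = dt = 1, c_s^2 = 1/3), assuming the strain-rate magnitude is computed separately (e.g., by finite differences of the velocity field), and not taken from the authors' code:

```python
# Hedged sketch: Smagorinsky eddy viscosity folded into the LBGK relaxation
# time, the LES closure named in the abstract. Lattice units are assumed
# (dx = dt = 1, c_s^2 = 1/3); the strain-rate magnitude |S| is assumed to be
# available from a separate computation. Illustrative, not the authors' code.
def effective_relaxation_time(tau0, smagorinsky_cs, delta, strain_rate_mag):
    """Total BGK relaxation time tau_eff = tau0 + nu_t / c_s^2,
    with eddy viscosity nu_t = (Cs * delta)^2 * |S| and c_s^2 = 1/3."""
    nu_t = (smagorinsky_cs * delta) ** 2 * strain_rate_mag
    return tau0 + 3.0 * nu_t
```

Where the flow is locally laminar (|S| = 0) the scheme reduces to plain LBGK, so the same solver handles both the quiescent regions and the turbulence behind the partition.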
Wildman, Jack; Repiščák, Peter; Paterson, Martin J; Galbraith, Ian
2016-08-01
We describe a general scheme to obtain force-field parameters for classical molecular dynamics simulations of conjugated polymers. We identify a computationally inexpensive methodology for calculation of accurate intermonomer dihedral potentials and partial charges. Our findings indicate that the use of a two-step methodology of geometry optimization and single-point energy calculations using DFT methods produces potentials which compare favorably to high-level theory calculations. We also report the effects of varying the conjugated backbone length and alkyl side-chain lengths on the dihedral profiles and partial charge distributions and determine the existence of converged lengths above which convergence is achieved in the force-field parameter sets. We thus determine which calculations are required for accurate parametrization and the scope of a given parameter set for variations to a given molecule. We perform simulations of long oligomers of dioctylfluorene and hexylthiophene in explicit solvent and find persistence lengths and end-length distributions consistent with experimental values. PMID:27397762
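Intermonomer dihedral potentials of the kind described above are commonly represented as a truncated cosine series fitted to the quantum-chemical scan. The sketch below assumes an OPLS-style three-term form; the form and coefficients are illustrative, not the paper's parameter set.

```python
# Hedged sketch: least-squares fit of an OPLS-style cosine series to sampled
# dihedral energies, illustrating how a torsional force-field term can be
# parametrized from single-point energy scans. Not the authors' workflow.
import numpy as np

def fit_opls_dihedral(phi, energy):
    """Fit E(phi) ~ 0.5*[V1(1+cos phi) + V2(1-cos 2phi) + V3(1+cos 3phi)]
    and return (V1, V2, V3). phi is in radians."""
    A = 0.5 * np.column_stack([1 + np.cos(phi),
                               1 - np.cos(2 * phi),
                               1 + np.cos(3 * phi)])
    coeffs, *_ = np.linalg.lstsq(A, energy, rcond=None)
    return coeffs

# Round-trip check: energies generated from known (V1, V2, V3) are recovered.
phi = np.linspace(0.0, 2 * np.pi, 73)
true = np.array([2.0, -0.5, 1.2])
E = 0.5 * (true[0] * (1 + np.cos(phi))
           + true[1] * (1 - np.cos(2 * phi))
           + true[2] * (1 + np.cos(3 * phi)))
fitted = fit_opls_dihedral(phi, E)
```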
Comparing four approaches to generalized redirected walking: simulation and live user data.
Hodgson, Eric; Bachmann, Eric
2013-04-01
Redirected walking algorithms imperceptibly rotate a virtual scene and scale movements to guide users of immersive virtual environment systems away from tracking area boundaries. These distortions ideally permit users to explore large and potentially unbounded virtual worlds while walking naturally through a physically limited space. Estimates of the physical space required to perform effective redirected walking have been based largely on the ability of humans to perceive the distortions introduced by redirected walking and have not examined the impact of the overall steering strategy used. This work compares four generalized redirected walking algorithms: Steer-to-Center, Steer-to-Orbit, Steer-to-Multiple-Targets, and Steer-to-Multiple+Center. Two experiments are presented based on simulated navigation as well as live-user navigation carried out in a large immersive virtual environment facility. Simulations were conducted with both synthetic paths and previously logged user data. Primary comparison metrics include mean and maximum distances from the tracking area center for each algorithm, number of wall contacts, and mean rates of redirection. Results indicated that Steer-to-Center out-performed all other algorithms relative to these metrics. Steer-to-Orbit also performed well in some circumstances.
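As a concrete illustration of the Steer-to-Center idea compared above, the sketch below injects a capped scene rotation that turns the user's walking direction toward the tracking-area center. The gain and rate cap are illustrative values, not the authors' settings.

```python
# Hedged sketch of a Steer-to-Center controller: compute a signed, rate-capped
# yaw rotation that steers the user's heading toward the tracking-area center.
# Gain and cap values are illustrative, not the paper's.
import math

def steer_to_center_rotation(pos, heading, max_rate_deg_s=15.0, gain=1.0,
                             center=(0.0, 0.0)):
    """Signed rotation rate (deg/s) to apply to the virtual scene.
    pos: (x, y) user position; heading: walking direction in radians."""
    # Direction from the user toward the center (the steering target).
    to_center = math.atan2(center[1] - pos[1], center[0] - pos[0])
    # Smallest signed angle between heading and the target direction.
    err = math.atan2(math.sin(to_center - heading), math.cos(to_center - heading))
    rate = gain * math.degrees(err)
    # Cap the injected rotation so it stays below perceptual thresholds.
    return max(-max_rate_deg_s, min(max_rate_deg_s, rate))
```

A user at (2, 0) already walking toward the center needs no redirection, while one walking perpendicular to the center direction receives the full capped rotation.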
Eimre, Kristjan; Aabloo, Alvo; Parviainen, Stefan; Djurabekova, Flyura; Zadin, Vahur
2015-07-21
Strong field electron emission from a nanoscale tip can cause a temperature rise at the tip apex due to Joule heating. This becomes particularly important when the current value grows rapidly, as in the pre-breakdown (the electrostatic discharge) condition, which may occur near metal surfaces operating under high electric fields. The high temperatures introduce uncertainties in calculations of the current values when using the Fowler–Nordheim equation, since the thermionic component in such conditions cannot be neglected. In this paper, we analyze the field electron emission currents as a function of the applied electric field, given by both the conventional Fowler–Nordheim field emission and the recently developed generalized thermal field emission formalisms. We also compare the results in two limits: discrete (atomistic simulations) and continuum (finite element calculations). The discrepancies of both implementations and their effect on final results are discussed. In both approaches, the electric field, electron emission currents, and Joule heating processes are simulated concurrently and self-consistently. We show that the conventional Fowler–Nordheim equation results in significant underestimation of electron emission currents. We also show that Fowler–Nordheim plots used to estimate the field enhancement factor may lead to significant overestimation of this parameter especially in the range of relatively low electric fields.
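For reference, the elementary Fowler–Nordheim relation analyzed above can be sketched as follows. This is the uncorrected form, without image-charge, thermal, or field-enhancement corrections; the constants are the standard first and second Fowler–Nordheim constants.

```python
# Hedged sketch: the elementary (uncorrected) Fowler-Nordheim current density.
# No image-charge or thermal corrections, which is exactly the limitation the
# abstract discusses when Joule heating makes the thermionic component matter.
import math

A_FN = 1.541434e-6   # first FN constant, A * eV / V^2
B_FN = 6.830890e9    # second FN constant, eV^(-3/2) * V / m

def fowler_nordheim_j(field_v_per_m, work_function_ev):
    """Elementary FN current density (A/m^2) for local field F (V/m) and
    work function phi (eV)."""
    F, phi = field_v_per_m, work_function_ev
    return (A_FN / phi) * F**2 * math.exp(-B_FN * phi**1.5 / F)
```

For this elementary form, an FN plot of ln(J/F^2) versus 1/F is a straight line with slope -B_FN * phi^(3/2); any field enhancement rescales that slope, which is why such plots can overestimate the enhancement factor when the underlying emission is not purely field-driven.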
Elucidating the general principles of cell adhesion with a coarse-grained simulation model.
Chen, Jiawen; Xie, Zhong-Ru; Wu, Yinghao
2016-01-01
Cell adhesion plays an indispensable role in coordinating physiological functions in multicellular organisms. During this process, specific types of cell adhesion molecules interact with each other from the opposite sides of neighboring cells. Following this trans-interaction, many cell adhesion molecules further aggregate into clusters through cis interactions. Beyond the molecular level, adhesion can be affected by multiple cellular factors due to the complexity of membrane microenvironments, including its interplay with cell signaling. However, despite tremendous advances in experimental developments, little is understood about the general principles of cell adhesion and its functional impacts. Here a mesoscopic simulation method is developed to tackle this problem. We illustrated that specific spatial patterns of membrane protein clustering originate from different geometrical arrangements of their binding interfaces, while the size of clusters is closely regulated by molecular flexibility. Different scenarios of cooperation between trans and cis interactions of cell adhesion molecules were further tested. Additionally, impacts of membrane environments on cell adhesion were evaluated, such as the presence of a cytoskeletal meshwork, the membrane tension, and the size effect of different membrane proteins on cell surfaces. Finally, by simultaneously simulating adhesion and oligomerization of signaling receptors, we found that the interplay between these two systems can be either positive or negative, closely depending on the spatial and temporal patterns of their molecular interactions. Therefore, our computational model paves the way for understanding the molecular mechanisms of cell adhesion and its biological functions in regulating cell signaling pathways.
Phillips, T J; Potter, G L; Williamson, D L; Cederwall, R T; Boyle, J S; Fiorino, M; Hnilo, J J; Olson, J G; Xie, S; Yio, J J
2004-05-06
To significantly improve the simulation of climate by general circulation models (GCMs), systematic errors in representations of relevant processes must first be identified, and then reduced. This endeavor demands that the GCM parameterizations of unresolved processes, in particular, should be tested over a wide range of time scales, not just in climate simulations. Thus, a numerical weather prediction (NWP) methodology for evaluating model parameterizations and gaining insights into their behavior may prove useful, provided that suitable adaptations are made for implementation in climate GCMs. This method entails the generation of short-range weather forecasts by a realistically initialized climate GCM, and the application of six-hourly NWP analyses and observations of parameterized variables to evaluate these forecasts. The behavior of the parameterizations in such a weather-forecasting framework can provide insights on how these schemes might be improved, and modified parameterizations then can be tested in the same framework. In order to further this method for evaluating and analyzing parameterizations in climate GCMs, the U.S. Department of Energy is funding a joint venture of its Climate Change Prediction Program (CCPP) and Atmospheric Radiation Measurement (ARM) Program: the CCPP-ARM Parameterization Testbed (CAPT). This article elaborates the scientific rationale for CAPT, discusses technical aspects of its methodology, and presents examples of its implementation in a representative climate GCM.
NASA Astrophysics Data System (ADS)
Endrizzi, Andrea; Ciolfi, Riccardo; Giacomazzo, Bruno; Kastaun, Wolfgang; Kawamura, Takumu
2016-03-01
We present new results of fully general relativistic magnetohydrodynamic (GRMHD) simulations of binary neutron star (BNS) mergers performed with the Whisky code. All the models use a piecewise polytropic approximation of the APR4 equation of state (EOS) for cold matter, together with a "hybrid" part to incorporate thermal effects during the evolution. We consider both equal and unequal-mass models, with total masses such that either a supramassive NS or a black hole (BH) is formed after merger. Each model is evolved with and without a magnetic field initially confined to the stellar interior. We present the different gravitational wave (GW) signals as well as a detailed description of the matter dynamics (magnetic field evolution, ejected mass, post-merger remnant properties, disk mass). Our new simulations provide a further important step in the understanding of these GW sources and their possible connection with the engine of short gamma-ray bursts (both in the "standard" and in the "time-reversal" scenarios) and with other electromagnetic counterparts.
NASA Astrophysics Data System (ADS)
Endrizzi, A.; Ciolfi, R.; Giacomazzo, B.; Kastaun, W.; Kawamura, T.
2016-08-01
We present new results of fully general relativistic magnetohydrodynamic simulations of binary neutron star (BNS) mergers performed with the Whisky code. All the models use a piecewise polytropic approximation of the APR4 equation of state for cold matter, together with a ‘hybrid’ part to incorporate thermal effects during the evolution. We consider both equal and unequal-mass models, with total masses such that either a supramassive NS or a black hole is formed after merger. Each model is evolved with and without a magnetic field initially confined to the stellar interior. We present the different gravitational wave (GW) signals as well as a detailed description of the matter dynamics (magnetic field evolution, ejected mass, post-merger remnant/disk properties). Our simulations provide new insights into BNS mergers, the associated GW emission and the possible connection with the engine of short gamma-ray bursts (both in the ‘standard’ and in the ‘time-reversal’ scenarios) and other electromagnetic counterparts.
Robinson, Brian S; Song, Dong; Berger, Theodore W
2014-01-01
This paper presents a methodology to estimate a learning rule that governs activity-dependent plasticity from behaviorally recorded spiking events. To demonstrate this framework, we simulate a probabilistic spiking neuron with spike-timing-dependent plasticity (STDP) and estimate all model parameters from the simulated spiking data. In the neuron model, output spiking activity is generated by the combination of noise, feedback from the output, and an input-feedforward component whose magnitude is modulated by synaptic weight. The synaptic weight is calculated with STDP with the following features: (1) weight change based on the relative timing of input-output spike pairs, (2) prolonged plasticity induction, and (3) considerations for system stability. Estimation of all model parameters is achieved iteratively by formulating the model as a generalized linear model with Volterra kernels and basis function expansion. Successful estimation of all model parameters in this study demonstrates the feasibility of this approach for in-vivo experimental studies. Furthermore, the consideration of system stability and prolonged plasticity induction enhances the ability to capture how STDP affects a neural population's signal transformation properties over a realistic time course. Plasticity characterization with this estimation method could yield insights into functional implications of STDP and be incorporated into a cortical prosthesis.
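The pair-based weight update described above (weight change driven by the relative timing of input-output spike pairs, with clipping for stability) can be sketched as follows. The exponential learning window, the rate constants, and the function names are illustrative assumptions, not the paper's estimated model.

```python
import math

def stdp_delta_w(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP: weight change as a function of the spike-time
    difference dt = t_post - t_pre (milliseconds)."""
    if dt >= 0:
        # pre before post -> potentiation
        return a_plus * math.exp(-dt / tau_plus)
    # post before pre -> depression
    return -a_minus * math.exp(dt / tau_minus)

def update_weight(w, pre_spikes, post_spikes, w_min=0.0, w_max=1.0):
    """Accumulate pairwise STDP updates and clip the weight for stability."""
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            w += stdp_delta_w(t_post - t_pre)
    return max(w_min, min(w_max, w))
```

A pre-then-post pairing strengthens the synapse, the reverse ordering weakens it, and the clipping bounds play the role of the stability considerations mentioned in the abstract.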
NASA Astrophysics Data System (ADS)
Hoormann, Janie Katherine
While Albert Einstein's theory of General Relativity (GR) has been tested extensively in our solar system, it is just beginning to be tested in the strong gravitational fields that surround black holes. As a way to study the behavior of gravity in these extreme environments, I have used and added to a ray-tracing code that simulates the X-ray emission from the accretion disks surrounding black holes. In particular, the observational channels which can be simulated include the thermal and reflected spectra, polarization, and reverberation signatures. These calculations can be performed assuming GR as well as four alternative spacetimes. These results can be used to determine whether observations can test the No-Hair theorem of GR, which states that stationary, astrophysical black holes are described only by their mass and spin. Although it proves difficult to distinguish between theories of gravity, it is possible to exclude a large portion of the possible deviations from GR using observations of rapidly spinning stellar-mass black holes such as Cygnus X-1. The ray-tracing simulations can furthermore be used to study the inner regions of black hole accretion flows. I examined the dependence of X-ray reverberation observations on the ionization of the disk photosphere. My results show that X-ray reverberation and X-ray polarization provide a powerful tool to constrain the geometry of accretion disks which are too small to be imaged directly. The second part of my thesis describes the work on the balloon-borne X-Calibur hard X-ray polarimetry mission and on the space-borne PolSTAR polarimeter concept.
TOUGH2: A general-purpose numerical simulator for multiphase nonisothermal flows
Pruess, K.
1991-06-01
Numerical simulators for multiphase fluid and heat flows in permeable media have been under development at Lawrence Berkeley Laboratory for more than 10 years. Real geofluids contain noncondensible gases and dissolved solids in addition to water, and the desire to model such 'compositional' systems led to the development of a flexible multicomponent, multiphase simulation architecture known as MULKOM. The design of MULKOM was based on the recognition that the mass- and energy-balance equations for multiphase fluid and heat flows in multicomponent systems have the same mathematical form, regardless of the number and nature of fluid components and phases present. Application of MULKOM to different fluid mixtures, such as water and air, or water, oil, and gas, is possible by means of appropriate 'equation-of-state' (EOS) modules, which provide all thermophysical and transport parameters of the fluid mixture and the permeable medium as a function of a suitable set of primary thermodynamic variables. Investigations of thermal and hydrologic effects from emplacement of heat-generating nuclear wastes into partially water-saturated formations prompted the development and release of a specialized version of MULKOM for nonisothermal flow of water and air, named TOUGH. TOUGH is an acronym for 'transport of unsaturated groundwater and heat' and is also an allusion to the tuff formations at Yucca Mountain, Nevada. The TOUGH2 code is intended to supersede TOUGH. It offers all the capabilities of TOUGH and includes a considerably more general subset of MULKOM modules with added capabilities. The paper briefly describes the simulation methodology and user features.
General Fluid System Simulation Program to Model Secondary Flows in Turbomachinery
NASA Technical Reports Server (NTRS)
Majumdar, Alok K.; Van Hoosier, Katherine P.
1995-01-01
The complexity and variety of turbomachinery flow circuits created a need for a general fluid system simulation program for test data anomaly resolution as well as design review. The objective of this paper is to present a computer program that has been developed to support Marshall Space Flight Center's turbomachinery internal flow analysis efforts. The computer program solves the mass, energy, and species conservation equations at each node and the flow rate equation at each branch of the network by a novel numerical procedure, a combination of the Newton-Raphson and successive substitution methods, and uses a thermodynamic property program for computing real gas properties. A generalized, robust, modular, and 'user-friendly' computer program has been developed to model internal flow rates, pressures, temperatures, concentrations of gas mixtures, and axial thrusts. The program can be used for any network with compressible and incompressible flows, choked flow, change of phase, and gaseous mixtures. The code has been validated by comparing the predictions with Space Shuttle Main Engine test data.
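The node-and-branch solution strategy described in the abstract can be illustrated with a minimal single-node example: a Newton-Raphson iteration drives the node mass-balance residual to zero, with each branch flow following a square-root pressure-drop law. The toy topology, the flow law, and all names here are illustrative assumptions, not the program's actual formulation.

```python
import math

def branch_flow(c, p_up, p_down):
    """Branch flow with conductance c; magnitude ~ sqrt of the pressure drop."""
    dp = p_up - p_down
    return math.copysign(c * math.sqrt(abs(dp)), dp)

def solve_node_pressure(p1, p2, p3, c1=1.0, c2=1.0, c3=1.0, tol=1e-10):
    """Newton-Raphson on the node mass balance (inflows minus outflow = 0),
    with the derivative approximated by a finite difference."""
    def residual(pn):
        return (branch_flow(c1, p1, pn) + branch_flow(c2, p2, pn)
                - branch_flow(c3, pn, p3))

    p = 0.5 * (max(p1, p2) + p3)  # initial guess between supply and sink
    for _ in range(50):
        f = residual(p)
        if abs(f) < tol:
            break
        h = 1e-6 * max(1.0, abs(p))
        dfdp = (residual(p + h) - f) / h  # finite-difference Jacobian
        p -= f / dfdp
    return p
```

For two supply branches at pressure 2.0 feeding one outflow branch to 0.0 with unit conductances, the balance 2*sqrt(2-p) = sqrt(p) gives a node pressure of 1.6, which the iteration recovers.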
Generalized SIMD algorithm for efficient EM-PIC simulations on modern CPUs
NASA Astrophysics Data System (ADS)
Fonseca, Ricardo; Decyk, Viktor; Mori, Warren; Silva, Luis
2012-10-01
There are several relevant plasma physics scenarios where highly nonlinear and kinetic processes dominate. Further understanding of these scenarios is generally explored through relativistic particle-in-cell codes such as OSIRIS [1], but this algorithm is computationally intensive, and efficient use of high-end parallel HPC systems, exploiting all levels of parallelism available, is required. In particular, most modern CPUs include a single-instruction-multiple-data (SIMD) vector unit that can significantly speed up the calculations. In this work we present a generalized PIC-SIMD algorithm that is shown to work efficiently with different CPU types (AMD, Intel, IBM) and vector unit widths (2-8 way, single/double precision). Details on the algorithm will be given, including the vectorization strategy and memory access. We will also present performance results for the various hardware variants analyzed, focusing on floating point efficiency. Finally, we will discuss the applicability of this type of algorithm for EM-PIC simulations on GPGPU architectures [2]. [1] R. A. Fonseca et al., LNCS 2331, 342 (2002). [2] V. K. Decyk, T. V. Singh, Comput. Phys. Commun. 182, 641-648 (2011).
NASA Technical Reports Server (NTRS)
Dehghani, Navid; Tankenson, Michael
2006-01-01
This viewgraph presentation reviews the architectural description of the Mission Data Processing and Control System (MPCS). MPCS is an event-driven, multi-mission ground data processing system providing uplink, downlink, and data management capabilities, and will support the Mars Science Laboratory (MSL) project as its first target mission. MPCS is designed around these factors: (1) an enabling plug-and-play architecture; (2) strong inheritance from GDS components that have been developed for other flight projects (MER, MRO, DAWN, MSAP) and are currently being used in operations and ATLO; and (3) MPCS components are Java-based, platform independent, and designed to consume and produce XML-formatted data.
KMCLib: A general framework for lattice kinetic Monte Carlo (KMC) simulations
NASA Astrophysics Data System (ADS)
Leetmaa, Mikael; Skorodumova, Natalia V.
2014-09-01
KMCLib is a general framework for lattice kinetic Monte Carlo (KMC) simulations. The program can handle simulations of the diffusion and reaction of millions of particles in one, two, or three dimensions, and is designed to be easily extended and customized by the user to allow for the development of complex custom KMC models for specific systems without having to modify the core functionality of the program. Analysis modules and on-the-fly elementary step diffusion rate calculations can be implemented as plugins following a well-defined API. The plugin modules are loosely coupled to the core KMCLib program via the Python scripting language. KMCLib is written as a Python module with a backend C++ library. After initial compilation of the backend library KMCLib is used as a Python module; input to the program is given as a Python script executed using a standard Python interpreter. We give a detailed description of the features and implementation of the code and demonstrate its scaling behavior and parallel performance with a simple one-dimensional A-B-C lattice KMC model and a more complex three-dimensional lattice KMC model of oxygen-vacancy diffusion in a fluorite structured metal oxide. KMCLib can keep track of individual particle movements and includes tools for mean square displacement analysis, and is therefore particularly well suited for studying diffusion processes at surfaces and in solids. Catalogue identifier: AESZ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AESZ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 49 064 No. of bytes in distributed program, including test data, etc.: 1 575 172 Distribution format: tar.gz Programming language: Python and C++. Computer: Any computer that can run a C++ compiler and a Python interpreter. Operating system: Tested on Ubuntu 12
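The core loop of a lattice KMC simulation of the kind KMCLib implements can be sketched in a few lines: enumerate the elementary processes available from the current configuration, select one with probability proportional to its rate, and advance the clock by an exponentially distributed waiting time. This toy single-particle hopping model illustrates the algorithm only; it is not KMCLib's API.

```python
import math
import random

def kmc_run(n_sites=20, n_steps=1000, rate_hop=1.0, seed=1):
    """Minimal 1-D lattice KMC: one particle hops left/right on a ring."""
    random.seed(seed)
    pos, t = n_sites // 2, 0.0
    for _ in range(n_steps):
        # elementary processes from the current configuration: (target, rate)
        processes = [((pos - 1) % n_sites, rate_hop),
                     ((pos + 1) % n_sites, rate_hop)]
        total_rate = sum(rate for _, rate in processes)
        # rate-weighted process selection
        x = random.random() * total_rate
        for target, rate in processes:
            x -= rate
            if x <= 0.0:
                pos = target
                break
        # advance the clock by an exponentially distributed waiting time
        t += -math.log(random.random()) / total_rate
    return pos, t
```

Tracking `pos` over many runs is exactly the kind of individual-particle bookkeeping that makes mean square displacement analysis possible.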
Cao, Liaoran; Lv, Chao; Yang, Wei
2013-08-13
DNA base extrusion is a crucial component of many biomolecular processes. Elucidating how bases are selectively extruded from the interiors of double-stranded DNA is pivotal to accurately understanding and efficiently sampling this general type of conformational transition. In this work, the on-the-path random walk (OTPRW) method, which is the first generalized ensemble sampling scheme designed for finite-temperature-string path optimizations, was improved and applied to obtain the minimum free energy path (MFEP) and the free energy profile of a classical B-DNA major-groove base extrusion pathway. Along the MFEP, an intermediate state and the corresponding transition state were located and characterized. The MFEP result suggests that a base-plane-elongation event rather than the commonly focused base-flipping event is dominant in the transition state formation portion of the pathway, and that the energetic penalty at the transition state is mainly introduced by the stretching of the Watson-Crick base pair. Moreover, to facilitate the essential base-plane-elongation dynamics, the surrounding environment of the flipped base needs to be intimately involved. Further taking advantage of the extended-dynamics nature of the OTPRW Hamiltonian, an equilibrium generalized ensemble simulation was performed along the optimized path; based on the collected samples, several base-flipping (opening) angle collective variables were evaluated. Consistent with the MFEP result, the collective variable analysis reveals that none of these commonly employed flipping (opening) angles alone can adequately represent the base extrusion pathway, especially in the pre-transition-state portion. As further revealed by the collective variable analysis, the base-pairing partner of the extrusion target undergoes a series of in-plane rotations to facilitate the base-plane-elongation dynamics. A base-plane rotation angle is identified to be a possible reaction coordinate to represent
A general hybrid radiation transport scheme for star formation simulations on an adaptive grid
Klassen, Mikhail; Pudritz, Ralph E.; Kuiper, Rolf; Peters, Thomas; Banerjee, Robi; Buntemeyer, Lars
2014-12-10
Radiation feedback plays a crucial role in the process of star formation. In order to simulate the thermodynamic evolution of disks, filaments, and the molecular gas surrounding clusters of young stars, we require an efficient and accurate method for solving the radiation transfer problem. We describe the implementation of a hybrid radiation transport scheme in the adaptive grid-based FLASH general magnetohydrodynamics code. The hybrid scheme splits the radiative transport problem into a raytracing step and a diffusion step. The raytracer captures the first absorption event, as stars irradiate their environments, while the evolution of the diffuse component of the radiation field is handled by a flux-limited diffusion solver. We demonstrate the accuracy of our method through a variety of benchmark tests including the irradiation of a static disk, subcritical and supercritical radiative shocks, and thermal energy equilibration. We also demonstrate the capability of our method for casting shadows and calculating gas and dust temperatures in the presence of multiple stellar sources. Our method enables radiation-hydrodynamic studies of young stellar objects, protostellar disks, and clustered star formation in magnetized, filamentary environments.
Theoretical analysis and simulations of the generalized Lotka-Volterra model
NASA Astrophysics Data System (ADS)
Malcai, Ofer; Biham, Ofer; Richmond, Peter; Solomon, Sorin
2002-09-01
The dynamics of generalized Lotka-Volterra systems is studied by theoretical techniques and computer simulations. These systems describe the time evolution of the wealth distribution of individuals in a society, as well as of the market values of firms in the stock market. The individual wealths or market values are given by a set of time dependent variables wi, i=1,...,N. The equations include a stochastic autocatalytic term (representing investments), a drift term (representing social security payments), and a time dependent saturation term (due to the finite size of the economy). The wi's turn out to exhibit a power-law distribution of the form P(w) ~ w^(-1-α). It is shown analytically that the exponent α can be expressed as a function of one parameter, which is the ratio between the constant drift component (social security) and the fluctuating component (investments). This result provides a link between the lower and upper cutoffs of this distribution, namely, between the resources available to the poorest and those available to the richest in a given society. The value of α is found to be insensitive to variations in the saturation term, which represent the expansion or contraction of the economy. The results are of much relevance to empirical studies that show that the distribution of the individual wealth in different countries during different periods in the 20th century has followed a power-law distribution with 1<α<2.
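A discrete-time sketch of the generalized Lotka-Volterra dynamics described above (multiplicative random investment factor, drift proportional to the mean wealth, and a saturation term) might look like the following. The specific update rule and parameter values are illustrative assumptions, not the paper's model specification.

```python
import random

def simulate_glv(n=1000, steps=5000, sigma=0.1, drift=0.001,
                 saturation=0.001, seed=0):
    """Discrete-time sketch of generalized Lotka-Volterra wealth dynamics:
        w_i <- lambda_i * w_i + a * wbar - c * wbar * w_i
    lambda_i       : random multiplicative factor (investments)
    a * wbar       : drift proportional to the mean wealth (social security)
    c * wbar * w_i : saturation from the finite size of the economy"""
    random.seed(seed)
    w = [1.0] * n
    for _ in range(steps):
        wbar = sum(w) / n
        w = [max(1e-12,
                 wi * (1.0 + random.gauss(0.0, sigma))
                 + drift * wbar
                 - saturation * wbar * wi)
             for wi in w]
    return w
```

Histogramming the returned wealths on log-log axes is the standard way to inspect the power-law tail the abstract discusses.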
Theoretical analysis and simulations of the generalized Lotka-Volterra model.
Malcai, Ofer; Biham, Ofer; Richmond, Peter; Solomon, Sorin
2002-09-01
The dynamics of generalized Lotka-Volterra systems is studied by theoretical techniques and computer simulations. These systems describe the time evolution of the wealth distribution of individuals in a society, as well as of the market values of firms in the stock market. The individual wealths or market values are given by a set of time dependent variables wi, i=1,...,N. The equations include a stochastic autocatalytic term (representing investments), a drift term (representing social security payments), and a time dependent saturation term (due to the finite size of the economy). The wi's turn out to exhibit a power-law distribution of the form P(w) ~ w^(-1-α). It is shown analytically that the exponent α can be expressed as a function of one parameter, which is the ratio between the constant drift component (social security) and the fluctuating component (investments). This result provides a link between the lower and upper cutoffs of this distribution, namely, between the resources available to the poorest and those available to the richest in a given society. The value of α is found to be insensitive to variations in the saturation term, which represent the expansion or contraction of the economy. The results are of much relevance to empirical studies that show that the distribution of the individual wealth in different countries during different periods in the 20th century has followed a power-law distribution with 1<α<2.
General circulation model simulations of recent cooling in the east-central United States
NASA Astrophysics Data System (ADS)
Robinson, Walter A.; Reudy, Reto; Hansen, James E.
2002-12-01
In ensembles of retrospective general circulation model (GCM) simulations, surface temperatures in the east-central United States cool between 1951 and 1997. This cooling, which is broadly consistent with observed surface temperatures, is present in GCM experiments driven by observed time varying sea-surface temperatures (SSTs) in the tropical Pacific, whether or not increasing greenhouse gases and other time varying climate forcings are included. Here we focus on ensembles with fixed radiative forcing and with observed varying SST in different regions. In these experiments the trend and variability in east-central U.S. surface temperatures are tied to tropical Pacific SSTs. Warm tropical Pacific SSTs cool U.S. temperatures by diminishing solar heating through an increase in cloud cover. These associations are embedded within a year-round response to warm tropical Pacific SST that features tropospheric warming throughout the tropics and regions of tropospheric cooling in midlatitudes. Precipitable water vapor over the Gulf of Mexico and the Caribbean and the tropospheric thermal gradient across the Gulf Coast of the United States increase when the tropical Pacific is warm. In observations, recent warming in the tropical Pacific is also associated with increased precipitable water over the southeast United States. The observed cooling in the east-central United States, relative to the rest of the globe, is accompanied by increased cloud cover, though year-to-year variations in cloud cover, U.S. surface temperatures, and tropical Pacific SST are less tightly coupled in observations than in the GCM.
A General Hybrid Radiation Transport Scheme for Star Formation Simulations on an Adaptive Grid
NASA Astrophysics Data System (ADS)
Klassen, Mikhail; Kuiper, Rolf; Pudritz, Ralph E.; Peters, Thomas; Banerjee, Robi; Buntemeyer, Lars
2014-12-01
Radiation feedback plays a crucial role in the process of star formation. In order to simulate the thermodynamic evolution of disks, filaments, and the molecular gas surrounding clusters of young stars, we require an efficient and accurate method for solving the radiation transfer problem. We describe the implementation of a hybrid radiation transport scheme in the adaptive grid-based FLASH general magnetohydrodynamics code. The hybrid scheme splits the radiative transport problem into a raytracing step and a diffusion step. The raytracer captures the first absorption event, as stars irradiate their environments, while the evolution of the diffuse component of the radiation field is handled by a flux-limited diffusion solver. We demonstrate the accuracy of our method through a variety of benchmark tests including the irradiation of a static disk, subcritical and supercritical radiative shocks, and thermal energy equilibration. We also demonstrate the capability of our method for casting shadows and calculating gas and dust temperatures in the presence of multiple stellar sources. Our method enables radiation-hydrodynamic studies of young stellar objects, protostellar disks, and clustered star formation in magnetized, filamentary environments.
GENERAL RELATIVISTIC SIMULATIONS OF ACCRETION INDUCED COLLAPSE OF NEUTRON STARS TO BLACK HOLES
Giacomazzo, Bruno; Perna, Rosalba
2012-10-10
Neutron stars (NSs) in the astrophysical universe are often surrounded by accretion disks. Accretion of matter onto an NS may increase its mass above the maximum value allowed by its equation of state, inducing its collapse to a black hole (BH). Here we study this process for the first time, in three dimensions, and in full general relativity. By considering three initial NS configurations, each with and without a surrounding disk (of mass ~7% M_NS), we investigate the effect of the accretion disk on the dynamics of the collapse and its imprint on both the gravitational wave (GW) and electromagnetic (EM) signals that can be emitted by these sources. We show in particular that, even if the GW signal is similar for the accretion induced collapse (AIC) and the collapse of an NS in vacuum (and detectable only for Galactic sources), the EM counterpart could allow us to discriminate between these two types of events. In fact, our simulations show that, while the collapse of an NS in vacuum leaves no appreciable baryonic matter outside the event horizon, an AIC is followed by a phase of rapid accretion of the surviving disk onto the newly formed BH. The post-collapse accretion rates, on the order of ~10^-2 M_Sun s^-1, make these events tantalizing candidates as engines of short gamma-ray bursts.
Simulating the Generalized Gibbs Ensemble (GGE): A Hilbert space Monte Carlo approach
NASA Astrophysics Data System (ADS)
Alba, Vincenzo
By combining classical Monte Carlo and Bethe ansatz techniques we devise a numerical method to construct the Truncated Generalized Gibbs Ensemble (TGGE) for the spin-1/2 isotropic Heisenberg (XXX) chain. The key idea is to sample the Hilbert space of the model with the appropriate GGE probability measure. The method can be extended to other integrable systems, such as the Lieb-Liniger model. We benchmark the approach focusing on GGE expectation values of several local observables. As finite-size effects decay exponentially with system size, moderately large chains are sufficient to extract thermodynamic quantities. The Monte Carlo results are in agreement with both the Thermodynamic Bethe Ansatz (TBA) and the Quantum Transfer Matrix approach (QTM). Remarkably, it is possible to extract in a simple way the steady-state Bethe-Gaudin-Takahashi (BGT) roots distributions, which encode complete information about the GGE expectation values in the thermodynamic limit. Finally, it is straightforward to simulate extensions of the GGE, in which, besides the local integral of motion (local charges), one includes arbitrary functions of the BGT roots. As an example, we include in the GGE the first non-trivial quasi-local integral of motion.
FRETmatrix: a general methodology for the simulation and analysis of FRET in nucleic acids
Preus, Søren; Kilså, Kristine; Miannay, Francois-Alexandre; Albinsson, Bo; Wilhelmsson, L. Marcus
2013-01-01
Förster resonance energy transfer (FRET) is a technique commonly used to unravel the structure and conformational changes of biomolecules, which are vital for all living organisms. Typically, FRET is performed using dyes attached externally to nucleic acids through a linker that complicates quantitative interpretation of experiments because of dye diffusion and reorientation. Here, we report a versatile, general methodology for the simulation and analysis of FRET in nucleic acids, and demonstrate its particular power for modelling FRET between probes possessing limited diffusional and rotational freedom, such as our recently developed nucleobase analogue FRET pairs (base-base FRET). These probes are positioned inside the DNA/RNA structures as a replacement for one of the natural bases, thus providing unique control of their position and orientation and the advantage of reporting from inside sites of interest. In demonstration studies, not requiring molecular dynamics modelling, we obtain previously inaccessible insight into the orientation and nanosecond dynamics of the bases inside double-stranded DNA, and we reconstruct high resolution 3D structures of kinked DNA. The reported methodology is accompanied by a freely available software package, FRETmatrix, for the design and analysis of FRET in nucleic acid containing systems. PMID:22977181
Generalized Langevin models of molecular dynamics simulations with applications to ion channels
NASA Astrophysics Data System (ADS)
Gordon, Dan; Krishnamurthy, Vikram; Chung, Shin-Ho
2009-10-01
We present a new methodology, which combines molecular dynamics and stochastic dynamics, for modeling the permeation of ions across biological ion channels. Using molecular dynamics, a free energy profile is determined for the ion(s) in the channel, and the distribution of random and frictional forces is measured over discrete segments of the ion channel. The parameters thus determined are used in stochastic dynamics simulations based on the nonlinear generalized Langevin equation. We first provide the theoretical basis of this procedure, which we refer to as "distributional molecular dynamics," and detail the methods for estimating the parameters from molecular dynamics to be used in stochastic dynamics. We test the technique by applying it to study the dynamics of ion permeation across the gramicidin pore. Given the known difficulty in modeling the conduction of ions in gramicidin using classical molecular dynamics, there is a degree of uncertainty regarding the validity of the MD-derived potential of mean force (PMF) for gramicidin. Using our techniques and systematically changing the PMF, we are able to reverse engineer a modified PMF which gives a current-voltage curve closely matching experimental results.
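The stochastic-dynamics stage of such a scheme can be illustrated in its simplest, memoryless limit: an overdamped Langevin integrator moving an ion in a fixed potential of mean force. The harmonic PMF and all parameter values here are toy assumptions; the paper's method uses a nonlinear generalized Langevin equation with MD-derived friction and random forces.

```python
import math
import random

def langevin_trajectory(n_steps=10000, dt=0.01, gamma=1.0, kT=1.0,
                        k_well=2.0, x0=1.5, seed=0):
    """Overdamped Langevin sketch in a harmonic PMF U(x) = 0.5*k_well*x**2:
        x <- x - (dU/dx) * dt / gamma + sqrt(2*kT*dt/gamma) * xi
    with xi a standard normal deviate. The generalized Langevin equation
    adds a memory kernel to the friction; this is the Markovian limit."""
    random.seed(seed)
    x = x0
    noise_amp = math.sqrt(2.0 * kT * dt / gamma)
    traj = []
    for _ in range(n_steps):
        force = -k_well * x  # -dU/dx for the harmonic PMF
        x += force * dt / gamma + noise_amp * random.gauss(0.0, 1.0)
        traj.append(x)
    return traj
```

At equilibrium the position variance should approach kT/k_well, which is a convenient sanity check on the integrator before feeding in an MD-derived PMF.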
Shen, Wenfeng; Wei, Daming; Xu, Weimin; Zhu, Xin; Yuan, Shizhong
2010-10-01
Biological computations like electrocardiological modelling and simulation usually require high-performance computing environments. This paper introduces an implementation of parallel computation for computer simulation of electrocardiograms (ECGs) in a personal computer environment with an Intel CPU of Core (TM) 2 Quad Q6600 and a GPU of Geforce 8800GT, with software support by OpenMP and CUDA. It was tested in three parallelization device setups: (a) a four-core CPU without a general-purpose GPU, (b) a general-purpose GPU plus 1 core of CPU, and (c) a four-core CPU plus a general-purpose GPU. To effectively take advantage of a multi-core CPU and a general-purpose GPU, an algorithm based on load-prediction dynamic scheduling was developed and applied to setting (c). In the simulation with 1600 time steps, the speedup of the parallel computation as compared to the serial computation was 3.9 in setting (a), 16.8 in setting (b), and 20.0 in setting (c). This study demonstrates that a current PC with a multi-core CPU and a general-purpose GPU provides a good environment for parallel computations in biological modelling and simulation studies.
Carr, J.R. (Dept. of Geological Sciences); Mao, Nai-hsien
1992-01-01
Disjunctive kriging has previously been compared to multigaussian kriging and indicator cokriging for estimation of cumulative distribution functions; it has yet to be compared extensively to probability kriging. Herein, disjunctive kriging and generalized probability kriging are applied to one real and one simulated data set and compared for estimation of the cumulative distribution functions. Generalized probability kriging is an extension, based on generalized cokriging theory, of simple probability kriging for the estimation of the indicator and uniform transforms at each cutoff, Z_k. Disjunctive kriging and generalized probability kriging give similar results for the simulated data, which follow a normal distribution, but differ considerably for the real data set, whose distribution is non-normal.
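The indicator and uniform transforms that probability kriging operates on at each cutoff can be sketched as follows (a hedged illustration; the function names are ours, and the kriging systems themselves are not shown):

```python
def indicator_transform(z, cutoffs):
    """For each cutoff z_k, the indicator i_k(x) = 1 if z(x) <= z_k else 0."""
    return [[1 if v <= zk else 0 for v in z] for zk in cutoffs]

def uniform_transform(z):
    """Rank-based uniform scores in (0, 1]: the empirical CDF value of each datum."""
    order = sorted(range(len(z)), key=lambda i: z[i])
    u = [0.0] * len(z)
    for rank, i in enumerate(order, start=1):
        u[i] = rank / len(z)
    return u
```

Probability kriging then cokriges the indicator with the uniform transform; the sketch stops at producing the transformed variables.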
Seasonal variation of Titan's atmospheric structure simulated by a general circulation model.
Tokano, T; Neubauer, F M; Laube, M; McKay, C P
1999-01-01
The seasonal variation of Titan's atmospheric structure, with emphasis on the stratosphere, is simulated by a three-dimensional general circulation model. The model includes the transport of haze particles by the circulation. The likely pattern of meridional circulation is reconstructed by a comparison of simulated and observed haze and temperature distributions. The GCM produces a weak zonal circulation with a small latitudinal temperature gradient, in conflict with observation. The direct reason is found to be the excessive meridional circulation. Under uniformly distributed opacity sources, the model predicts a pair of symmetric Hadley cells near the equinox and, near the solstice, a single global cell with the rising branch in the summer hemisphere below about z = 230 km, with a thermally indirect cell above the direct cell. The interhemispheric circulation transports haze particles from the summer to the winter hemisphere, causing a maximum haze opacity contrast near the solstice and a smaller contrast near the equinox, contrary to observation. On the other hand, if the GCM is run with a modified cooling rate in order to account for the enhancement in nitriles and some hydrocarbons in the northern hemisphere near the vernal equinox, the meridional cell at the equinox becomes a single cell with rising motions in the autumn hemisphere. A more realistic haze opacity distribution can then be reproduced at the equinox. However, a pure transport effect (without particle growth by microphysics, etc.) would not be able to cause the observed discontinuity of the global haze opacity distribution at any location. The stratospheric temperature asymmetry can be explained by a combination of asymmetric radiative heating rates and adiabatic heating due to vertical motion within the thermally indirect cell. A seasonal variation of haze particle number density is unlikely to be responsible for this asymmetry. It is likely that a thermally indirect cell covers the upper portion of the main haze
McKinney, Jonathan C.; Tchekhovskoy, Alexander; Blandford, Roger D.
2012-04-26
Black hole (BH) accretion flows and jets are qualitatively affected by the presence of ordered magnetic fields. We study fully three-dimensional global general relativistic magnetohydrodynamic (MHD) simulations of radially extended and thick (height H to cylindrical radius R ratio of |H/R| ≈ 0.2-1) accretion flows around BHs with various dimensionless spins (a/M, with BH mass M) and with initially toroidally-dominated (φ-directed) and poloidally-dominated (R-z directed) magnetic fields. Firstly, for toroidal field models and BHs with high enough |a/M|, coherent large-scale (i.e. >> H) dipolar poloidal magnetic flux patches emerge, thread the BH, and generate transient relativistic jets. Secondly, for poloidal field models, poloidal magnetic flux readily accretes through the disk from large radii and builds up to a natural saturation point near the BH. While models with |H/R| ≈ 1 and |a/M| ≤ 0.5 do not launch jets due to quenching by mass infall, for sufficiently high |a/M| or low |H/R| the polar magnetic field compresses the inflow into a geometrically thin, highly non-axisymmetric 'magnetically choked accretion flow' (MCAF) within which the standard linear magneto-rotational instability is suppressed. The condition of a highly-magnetized state over most of the horizon is optimal for the Blandford-Znajek mechanism, which generates persistent relativistic jets with ≈100% efficiency for |a/M| ≳ 0.9. A magnetic Rayleigh-Taylor and Kelvin-Helmholtz unstable magnetospheric interface forms between the compressed inflow and the bulging jet magnetosphere, which drives a new jet-disk oscillation (JDO) type of quasi-periodic oscillation (QPO) mechanism. The high-frequency QPO has a spherical harmonic |m| = 1 mode period of τ ≈ 70GM/c³ for a/M ≈ 0.9 with coherence quality factors Q ≳ 10. Overall, our models are qualitatively distinct from most prior MHD simulations (typically, |H/R| << 1 and poloidal flux is limited by
Huang, Chen-Hsi; Marian, Jaime
2016-10-26
We derive an Ising Hamiltonian for kinetic simulations involving interstitial and vacancy defects in binary alloys. Our model, which we term 'ABVI', incorporates solute transport by both interstitial defects and vacancies into a mathematically-consistent framework, and thus represents a generalization to the widely-used ABV model for alloy evolution simulations. The Hamiltonian captures the three possible interstitial configurations in a binary alloy: A-A, A-B, and B-B, which makes it particularly useful for irradiation damage simulations. All the constants of the Hamiltonian are expressed in terms of bond energies that can be computed using first-principles calculations. We implement our ABVI model in kinetic Monte Carlo simulations and perform a verification exercise by comparing our results to published irradiation damage simulations in simple binary systems with Frenkel pair defect production and several microstructural scenarios, with matching agreement found. PMID:27541350
Simulating Titan's methane cycle with the TitanWRF General Circulation Model
NASA Astrophysics Data System (ADS)
Newman, Claire E.; Richardson, Mark I.; Lian, Yuan; Lee, Christopher
2016-03-01
Observations provide increasing evidence of a methane hydrological cycle on Titan. Earth-based and Cassini-based monitoring has produced data on the seasonal variation in cloud activity and location, with clouds being observed at increasingly low latitudes as Titan moved out of southern summer. Lakes are observed at high latitudes, with far larger lakes and greater areal coverage in the northern hemisphere, where some shorelines extend down as far as 50°N. Rainfall at some point in the past is suggested by the pattern of flow features on the surface at the Huygens landing site, while recent rainfall is suggested by surface change. As with the water cycle on Earth, the methane cycle on Titan is both impacted by tropospheric dynamics and likely able to impact this circulation via feedbacks. Here we use the 3D TitanWRF General Circulation Model (GCM) to simulate Titan's methane cycle. In this initial work we use a simple large-scale condensation scheme with latent heat feedbacks and a finite surface reservoir of methane, and focus on large-scale dynamical interactions between the atmospheric circulation and methane, and how these impact seasonal changes and the long-term (steady-state) behavior of the methane cycle. We note five major conclusions: (1) Condensation and precipitation in the model are sporadic in nature, with interannual variability in their timing and location, but tend to occur in association with both (a) frequent strong polar upwelling during spring and summer in each hemisphere, and (b) the Inter-Tropical Convergence Zone (ITCZ), a region of increased convergence and upwelling due to the seasonally shifting Hadley cells. (2) An active tropospheric methane cycle affects the stratospheric circulation, slightly weakening the stratospheric superrotation produced. (3) Latent heating feedback strongly influences surface and near-surface temperatures, narrowing the latitudinal range of the ITCZ, and changing the distribution - and generally weakening the
Limits to high-speed simulations of spiking neural networks using general-purpose computers.
Zenke, Friedemann; Gerstner, Wulfram
2014-01-01
To understand how the central nervous system performs computations using recurrent neuronal circuitry, simulations have become an indispensable tool for theoretical neuroscience. To study neuronal circuits and their ability to self-organize, increasing attention has been directed toward synaptic plasticity. In particular spike-timing-dependent plasticity (STDP) creates specific demands for simulations of spiking neural networks. On the one hand a high temporal resolution is required to capture the millisecond timescale of typical STDP windows. On the other hand network simulations have to evolve over hours up to days, to capture the timescale of long-term plasticity. To do this efficiently, fast simulation speed is the crucial ingredient rather than large neuron numbers. Using different medium-sized network models consisting of several thousands of neurons and off-the-shelf hardware, we compare the simulation speed of the simulators: Brian, NEST and Neuron as well as our own simulator Auryn. Our results show that real-time simulations of different plastic network models are possible in parallel simulations in which numerical precision is not a primary concern. Even so, the speed-up margin of parallelism is limited and boosting simulation speeds beyond one tenth of real-time is difficult. By profiling simulation code we show that the run times of typical plastic network simulations encounter a hard boundary. This limit is partly due to latencies in the inter-process communications and thus cannot be overcome by increased parallelism. Overall, these results show that to study plasticity in medium-sized spiking neural networks, adequate simulation tools are readily available which run efficiently on small clusters. However, to run simulations substantially faster than real-time, special hardware is a prerequisite. PMID:25309418
A Generalized Fluid System Simulation Program to Model Flow Distribution in Fluid Networks
NASA Technical Reports Server (NTRS)
Majumdar, Alok; Bailey, John W.; Schallhorn, Paul; Steadman, Todd
1998-01-01
This paper describes a general purpose computer program for analyzing steady state and transient flow in a complex network. The program is capable of modeling phase changes, compressibility, mixture thermodynamics and external body forces such as gravity and centrifugal forces. The program's preprocessor allows the user to interactively develop a fluid network simulation consisting of nodes and branches. Mass, energy and species conservation equations are solved at the nodes; the momentum conservation equations are solved in the branches. The program contains subroutines for computing "real fluid" thermodynamic and thermophysical properties for 33 fluids. The fluids are: helium, methane, neon, nitrogen, carbon monoxide, oxygen, argon, carbon dioxide, fluorine, hydrogen, parahydrogen, water, kerosene (RP-1), isobutane, butane, deuterium, ethane, ethylene, hydrogen sulfide, krypton, propane, xenon, R-11, R-12, R-22, R-32, R-123, R-124, R-125, R-134A, R-152A, nitrogen trifluoride and ammonia. The program also provides the options of using any incompressible fluid with constant density and viscosity or an ideal gas. Seventeen different resistance/source options are provided for modeling momentum sources or sinks in the branches. These options include: pipe flow, flow through a restriction, non-circular duct, pipe flow with entrance and/or exit losses, thin sharp orifice, thick orifice, square edge reduction, square edge expansion, rotating annular duct, rotating radial duct, labyrinth seal, parallel plates, common fittings and valves, pump characteristics, pump power, valve with a given loss coefficient, and a Joule-Thomson device. The system of equations describing the fluid network is solved by a hybrid numerical method that is a combination of the Newton-Raphson and successive substitution methods. This paper also illustrates the application and verification of the code by comparison with the Hardy Cross method for steady state flow and an analytical solution for unsteady flow.
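The Newton-Raphson idea applied to a fluid network can be sketched on the smallest possible case: one internal node between two resistive branches, with an assumed branch momentum law Δp = R·Q·|Q| and the node mass balance driven to zero. This is an illustration under assumed resistance laws, not the program's actual solver.

```python
import math

def branch_flow(dp, R):
    """Flow through a quadratic resistance: dp = R*Q*|Q|  =>  Q = sign(dp)*sqrt(|dp|/R)."""
    return math.copysign(math.sqrt(abs(dp) / R), dp)

def solve_node_pressure(p_in, p_out, R1, R2, tol=1e-10):
    """Newton-Raphson on the mass balance at the internal node:
    flow in through branch 1 must equal flow out through branch 2."""
    def residual(p):
        return branch_flow(p_in - p, R1) - branch_flow(p - p_out, R2)

    p = 0.5 * (p_in + p_out)  # initial guess: midpoint pressure
    for _ in range(100):
        f = residual(p)
        if abs(f) < tol:
            break
        h = 1e-7 * max(abs(p), 1.0)
        df = (residual(p + h) - residual(p - h)) / (2 * h)  # numerical Jacobian
        p -= f / df
    return p
```

With equal resistances the node settles at the midpoint pressure; with unequal resistances the split follows the quadratic law.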
Use of a PhET Interactive Simulation in General Chemistry Laboratory: Models of the Hydrogen Atom
ERIC Educational Resources Information Center
Clark, Ted M.; Chamberlain, Julia M.
2014-01-01
An activity supporting the PhET interactive simulation, Models of the Hydrogen Atom, has been designed and used in the laboratory portion of a general chemistry course. This article describes the framework used to successfully accomplish implementation on a large scale. The activity guides students through a comparison and analysis of the six…
NASA Technical Reports Server (NTRS)
Goswami, Kumar K.; Iyer, Ravishankar K.
1990-01-01
Discrete event-driven simulation makes it possible to model a computer system in detail. However, such simulation models can require a significant time to execute. This is especially true when modeling large parallel or distributed systems containing many processors and a complex communication network. One solution is to distribute the simulation over several processors. If enough parallelism is achieved, large simulation models can be efficiently executed. This study proposes a distributed simulator called DSIM which can run on various architectures. A simulated test environment is used to verify and characterize the performance of DSIM. The results of the experiments indicate that speedup is application-dependent and, in DSIM's case, is also dependent on how the simulation model is distributed among the processors. Furthermore, the experiments reveal that the communication overhead of Ethernet-based distributed systems makes it difficult to achieve reasonable speedup unless the simulation model is computation bound.
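The discrete event-driven core that a simulator like DSIM builds on can be sketched with a priority queue of timestamped events (a generic illustration; DSIM's actual interfaces are not shown):

```python
import heapq

class EventSimulator:
    """Minimal discrete event-driven simulator: events are (time, action) pairs
    processed in timestamp order; actions may schedule further events."""
    def __init__(self):
        self.clock = 0.0
        self._queue = []
        self._seq = 0  # tie-breaker so actions themselves are never compared

    def schedule(self, delay, action):
        heapq.heappush(self._queue, (self.clock + delay, self._seq, action))
        self._seq += 1

    def run(self):
        while self._queue:
            self.clock, _, action = heapq.heappop(self._queue)
            action(self)

log = []
sim = EventSimulator()
sim.schedule(2.0, lambda s: log.append(("b", s.clock)))
sim.schedule(1.0, lambda s: (log.append(("a", s.clock)),
                             s.schedule(5.0, lambda s2: log.append(("c", s2.clock)))))
sim.run()
```

Distributing such a simulator means partitioning the event queue across processors and synchronizing their clocks, which is exactly where the communication overhead discussed above enters.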
Le Thiez, P.A.; Pottecher, G.
1996-11-01
This paper presents a general numerical model able to simulate both organic pollutant migration (3-phase compositional flows, mass transfer, transport) in soils and aquifers and decontamination techniques such as pumping, skimming, venting, hot venting, steam injection, surfactant injection and biodegradation. To validate the simulator, a 3-D experiment in a large pilot (25 m x 12 m x 4 m) was carried out. A total of 0.475 m³ of diesel oil was injected into the pilot, and numerous in-situ measurements were performed to determine pollutant locations and concentrations within the vadose and saturated zones. Prior to the pilot test, a predictive simulation computed the extent of the contaminated zone and the oil saturations. Numerical results showed good agreement between experiment and simulation. To demonstrate the simulator's ability to improve remediation operations, a soil vapor extraction (venting) of weathered gasoline in the vadose zone under a service station was simulated. Fourteen wells were drilled on the site and extraction took nine months. The simulation closely matches the field data. Further simulations show the possibility of venting optimization for this site.
Donchev, A. G.; Galkin, N. G.; Illarionov, A. A.; Khoruzhii, O. V.; Olevanov, M. A.; Ozrin, V. D.; Subbotin, M. V.; Tarasov, V. I.
2006-01-01
We have recently introduced a quantum mechanical polarizable force field (QMPFF) fitted solely to high-level quantum mechanical data for simulations of biomolecular systems. Here, we present an improved form of the force field, QMPFF2, and apply it to simulations of liquid water. The results of the simulations show excellent agreement with a variety of experimental thermodynamic and structural data, as good as or better than that provided by specialized water potentials. In particular, QMPFF2 is, to our knowledge, the only ab initio force field to accurately reproduce the anomalous temperature dependence of water density. The ability of the same force field to successfully simulate the properties of both organic molecules and water suggests it will be useful for simulations of proteins and protein–ligand interactions in the aqueous environment. PMID:16723394
A general spectral method for the numerical simulation of one-dimensional interacting fermions
NASA Astrophysics Data System (ADS)
Clason, Christian; von Winckel, Gregory
2012-02-01
This work introduces a general framework for the direct numerical simulation of systems of interacting fermions in one spatial dimension. The approach is based on a specially adapted nodal spectral Galerkin method, where the basis functions are constructed to obey the antisymmetry relations of fermionic wave functions. An efficient MATLAB program for the assembly of the stiffness and potential matrices is presented, which exploits the combinatorial structure of the sparsity pattern arising from this discretization to achieve optimal run-time complexity. This program allows the accurate discretization of systems with multiple fermions subject to arbitrary potentials, e.g., for verifying the accuracy of multi-particle approximations such as Hartree-Fock in the few-particle limit. It can be used for eigenvalue computations or numerical solutions of the time-dependent Schrödinger equation.
Program summary
Program title: assembleFermiMatrix
Catalogue identifier: AEKO_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKO_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 102
No. of bytes in distributed program, including test data, etc.: 2294
Distribution format: tar.gz
Programming language: MATLAB
Computer: Any architecture supported by MATLAB
Operating system: Any supported by MATLAB; tested under Linux (x86-64) and Mac OS X (10.6)
RAM: Depends on the data
Classification: 4.3, 2.2
Nature of problem: The direct numerical solution of the multi-particle one-dimensional Schrödinger equation in a quantum well is challenging due to the exponential growth in the number of degrees of freedom with increasing particles.
Solution method: A nodal spectral Galerkin scheme is used where the basis functions are constructed to obey the antisymmetry relations of the fermionic wave
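The antisymmetry relation that such basis functions must obey can be illustrated for two fermions by a 2x2 Slater determinant (a hedged sketch in Python rather than the paper's MATLAB; the single-particle orbitals below are hypothetical particle-in-a-box states):

```python
import math

def slater2(phi_a, phi_b):
    """Antisymmetrized two-fermion wavefunction built as a 2x2 Slater determinant:
    psi(x1, x2) = (phi_a(x1)*phi_b(x2) - phi_b(x1)*phi_a(x2)) / sqrt(2)."""
    def psi(x1, x2):
        return (phi_a(x1) * phi_b(x2) - phi_b(x1) * phi_a(x2)) / math.sqrt(2)
    return psi

# Hypothetical orbitals: lowest two particle-in-a-box states on [0, 1]
phi1 = lambda x: math.sqrt(2) * math.sin(math.pi * x)
phi2 = lambda x: math.sqrt(2) * math.sin(2 * math.pi * x)

psi = slater2(phi1, phi2)
```

Exchanging the two coordinates flips the sign of psi, and psi vanishes when both fermions share a coordinate (the Pauli principle); the paper's Galerkin basis enforces the same relations at the level of basis functions.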
Axisymmetric general relativistic simulations of the accretion-induced collapse of white dwarfs
Abdikamalov, E. B.; Ott, C. D.; Rezzolla, L.; Dessart, L.; Dimmelmeier, H.; Marek, A.; Janka, H.-T.
2010-02-15
The accretion-induced collapse (AIC) of a white dwarf may lead to the formation of a protoneutron star and a collapse-driven supernova explosion. This process represents a path alternative to thermonuclear disruption of accreting white dwarfs in type Ia supernovae. In the AIC scenario, the supernova explosion energy is expected to be small and the resulting transient short-lived, making it hard to detect by electromagnetic means alone. Neutrino and gravitational-wave (GW) observations may provide crucial information necessary to reveal a potential AIC. Motivated by the need for systematic predictions of the GW signature of AIC, we present results from an extensive set of general-relativistic AIC simulations using a microphysical finite-temperature equation of state and an approximate treatment of deleptonization during collapse. Investigating a set of 114 progenitor models in axisymmetric rotational equilibrium, with a wide range of rotational configurations, temperatures and central densities, and resulting white dwarf masses, we extend previous Newtonian studies and find that the GW signal has a generic shape akin to what is known as a 'type III' signal in the literature. Despite this reduction to a single type of waveform, we show that the emitted GWs carry information that can be used to constrain the progenitor and the postbounce rotation. We discuss the detectability of the emitted GWs, showing that the signal-to-noise ratio for current or next-generation interferometer detectors could be high enough to detect such events in our Galaxy. Furthermore, we contrast the GW signals of AIC and rotating massive star iron core collapse and find that they can be distinguished, but only if the distance to the source is known and a detailed reconstruction of the GW time series from detector data is possible. Some of our AIC models form massive quasi-Keplerian accretion disks after bounce. The disk mass is very sensitive to progenitor mass and angular momentum
NASA Astrophysics Data System (ADS)
Martin, E.; Timbal, B.; Brun, E.
1996-12-01
A downscaling method was developed to simulate the seasonal snow cover of the French Alps from general circulation model outputs under various scenarios. It consists of an analogue procedure, which associates a real meteorological situation to a model output. It is based on the comparison between simulated upper air fields and meteorological analyses from the European Centre for Medium-Range Weather Forecasts. The selection uses a nearest neighbour method at a daily time-step. In a second phase, the snow cover is simulated by the snow model CROCUS at several elevations and in the different regions of the French Alps by using data from the real meteorological situations. The method is tested with real data and applied to various ARPEGE/Climat simulations: the present climate and two climate change scenarios.
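The analogue selection step above, a nearest-neighbour search over archived meteorological situations, can be sketched as follows (fields reduced to flat vectors and compared by RMS distance; a simplification of the actual procedure):

```python
def nearest_analogue(target, archive):
    """Return the index of the archived field closest (Euclidean distance)
    to the model-output field; both are flattened to plain vectors."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(range(len(archive)), key=lambda i: dist(archive[i], target))

# Toy archive of three "analysis days" and one GCM output field
archive = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]]
best = nearest_analogue([0.9, 1.2], archive)
```

The selected day's observed surface data would then drive the snow model, which is how the method couples coarse model output to station-scale forcing.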
Kyoda, K M; Muraki, M; Kitano, H
2000-01-01
In this paper, we report the development of a generalized simulation system based on ordinary differential equations for multi-cellular organisms, and the results of an analysis of a Smad signal transduction cascade. The simulator implements intra-cellular and extra-cellular molecular processes, such as protein diffusion, ligand-receptor reaction, biochemical reaction, and gene expression. It simulates the spatio-temporal patterning in various biological phenomena for single- and multi-cellular organisms. In order to demonstrate the usefulness of the simulator, we constructed a model of Drosophila Smad signal transduction, which includes protein diffusion, biochemical reaction and gene expression. The results suggest that the presence of a negative feedback mechanism in the Smad pathway functions to improve the frequency response of the cascade against changes in signaling.
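The role of negative feedback in such an ODE-based cascade can be sketched with a toy two-species model integrated by forward Euler (the rate laws and constants below are illustrative, not the simulator's):

```python
def simulate_cascade(signal, k_act=1.0, k_fb=2.0, k_deg=1.0, dt=0.01, steps=2000):
    """Forward-Euler integration of a toy negative feedback loop:
       dx/dt = k_act*signal/(1 + k_fb*y) - k_deg*x   (y represses x production)
       dy/dt = x - k_deg*y                           (x induces its repressor y)
    Returns (x, y) after steps*dt time units."""
    x = y = 0.0
    for _ in range(steps):
        dx = k_act * signal / (1.0 + k_fb * y) - k_deg * x
        dy = x - k_deg * y
        x += dt * dx
        y += dt * dy
    return x, y
```

With the default constants and signal = 1, the loop settles at x = y = 0.5 (the root of 2x² + x − 1 = 0); the feedback clamps the output below what the feed-forward rate alone would give.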
Liu, Qing; Shi, Chaowei; Yu, Lu; Zhang, Longhua; Xiong, Ying; Tian, Changlin
2015-02-13
Internal backbone dynamic motions are essential for different protein functions and occur on a wide range of time scales, from femtoseconds to seconds. Molecular dynamic (MD) simulations and nuclear magnetic resonance (NMR) spin relaxation measurements are valuable tools to gain access to fast (nanosecond) internal motions. However, there exist few reports on correlation analysis between MD and NMR relaxation data. Here, backbone relaxation measurements of (15)N-labeled SH3 (Src homology 3) domain proteins in aqueous buffer were used to generate general order parameters (S(2)) using a model-free approach. Simultaneously, 80 ns MD simulations of SH3 domain proteins in a defined hydrated box at neutral pH were conducted and the general order parameters (S(2)) were derived from the MD trajectory. Correlation analysis using the Gromos force field indicated that S(2) values from NMR relaxation measurements and MD simulations were significantly different. MD simulations were performed on models with different charge states for three histidine residues, and with different water models, which were SPC (simple point charge) water model and SPC/E (extended simple point charge) water model. S(2) parameters from MD simulations with charges for all three histidines and with the SPC/E water model correlated well with S(2) calculated from the experimental NMR relaxation measurements, in a site-specific manner.
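The general order parameter obtained from an MD trajectory can be sketched with the standard model-free expression S² = (3 Σ_ab ⟨u_a·u_b⟩² − 1)/2 over unit bond vectors (an illustration; trajectory frame alignment and averaging windows are omitted):

```python
def order_parameter(vectors):
    """Model-free S^2 from a list of bond vectors (one per trajectory frame):
       S^2 = (3 * sum_{a,b} <u_a * u_b>^2 - 1) / 2,
    where u is the unit bond vector and <> averages over frames.
    S^2 = 1 for a rigid bond, 0 for fully isotropic motion."""
    unit = []
    for v in vectors:
        n = sum(c * c for c in v) ** 0.5
        unit.append(tuple(c / n for c in v))
    n_frames = len(unit)
    s = 0.0
    for a in range(3):
        for b in range(3):
            mean_ab = sum(u[a] * u[b] for u in unit) / n_frames
            s += mean_ab ** 2
    return (3.0 * s - 1.0) / 2.0
```

Applied per residue to N-H vectors from the trajectory, this yields the simulated S² values that are correlated against the relaxation-derived ones.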
NASA Astrophysics Data System (ADS)
Im, Wonpil; Roux, Benoît
2001-09-01
A general method has been developed to include the electrostatic reaction field in Brownian dynamics (BD) simulations of ions diffusing through complex molecular channels of arbitrary geometry. Assuming that the solvent is represented as a featureless continuum dielectric medium, a multipolar basis-set expansion is developed to express the reaction field Green's function. A reaction field matrix, which provides the coupling between generalized multipoles, is calculated only once and stored before the BD simulations. The electrostatic energy and forces are calculated at each time step by updating the generalized multipole moments. The method is closely related to the generalized solvent boundary potential [Im et al., J. Chem. Phys. 114, 2924 (2001)] which was recently developed to include the influence of distant atoms on a small region of a large macromolecular system in molecular dynamics simulations. It is shown that the basis-set expansion is accurate and computationally inexpensive for three simple models such as a spherical ionic system, an impermeable membrane system, and a cylindrical pore system as well as a realistic system such as OmpF porin with all atomic details. The influence of the static field and the reaction field on the ion distribution and conductance in the OmpF channel is studied and discussed.
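A single Brownian dynamics step of the kind used in such simulations can be sketched as drift from the systematic force plus a Gaussian random displacement (a generic overdamped update; the reaction-field force evaluation itself is not shown):

```python
import math, random

def bd_step(pos, force, D, kT, dt, rng):
    """One overdamped Brownian dynamics step:
       x(t+dt) = x + (D/kT)*F*dt + sqrt(2*D*dt)*N(0,1)
    The drift and noise amplitudes are linked by fluctuation-dissipation."""
    sigma = math.sqrt(2.0 * D * dt)
    return tuple(x + (D / kT) * f * dt + sigma * rng.gauss(0.0, 1.0)
                 for x, f in zip(pos, force))

# Force-free diffusion of one ion with a seeded generator (illustrative units)
rng = random.Random(0)
pos = (0.0, 0.0, 0.0)
for _ in range(100):
    pos = bd_step(pos, (0.0, 0.0, 0.0), D=1.0, kT=1.0, dt=0.01, rng=rng)
```

In the method above, the force passed in at each step would include the static field plus the reaction-field force rebuilt from the updated generalized multipole moments.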
Nonequilibrium and generalized-ensemble molecular dynamics simulations for amyloid fibril
Okumura, Hisashi
2015-12-31
Amyloids are insoluble, misfolded fibrous protein aggregates and are associated with more than 20 serious human diseases. We perform all-atom molecular dynamics simulations of amyloid fibril assembly and disassembly.
A general kinetic-flow coupling model for FCC riser flow simulation.
Chang, S. L.
1998-05-18
A computational fluid dynamic (CFD) code has been developed for fluid catalytic cracking (FCC) riser flow simulation. Depending on the application of interest, a specific kinetic model is needed for the FCC flow simulation. This paper describes a method to determine a kinetic model based on limited pilot-scale test data. The kinetic model can then be used with the CFD code as a tool to investigate optimum operating condition ranges for a specific FCC unit.
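The idea of determining a kinetic model from limited pilot-scale data can be sketched with a first-order lumped cracking law X = 1 − exp(−k·t) fitted by regression through the origin in the transformed variable −ln(1 − X) (an illustrative stand-in for the paper's actual FCC kinetics):

```python
import math

def fit_first_order_rate(residence_times, conversions):
    """Least-squares estimate of k assuming X = 1 - exp(-k*t), i.e.
    -ln(1 - X) = k*t: linear regression through the origin."""
    ys = [-math.log(1.0 - X) for X in conversions]
    num = sum(t * y for t, y in zip(residence_times, ys))
    den = sum(t * t for t in residence_times)
    return num / den

def predict_conversion(k, t):
    """Conversion predicted by the fitted first-order lump."""
    return 1.0 - math.exp(-k * t)

# Hypothetical pilot data generated from k = 0.5 (illustration only)
ts = [1.0, 2.0, 4.0]
xs = [predict_conversion(0.5, t) for t in ts]
k_est = fit_first_order_rate(ts, xs)
```

The fitted rate constant would then feed the CFD riser model as the reaction source term, which is the coupling the paper describes.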
CAML: a general framework for the development of medical simulation systems
NASA Astrophysics Data System (ADS)
Cotin, Stephane; Shaffer, David W.; Meglan, Dwight A.; Ottensmeyer, Mark P.; Berry, Patrick S.; Dawson, Steven L.
2000-08-01
Medical simulation offers the opportunity to revolutionize the training of medical personnel, from paramedics and corpsmen to physicians, allowing early learning to occur in a no-risk environment, without putting patients at risk during the professional's early learning curve. However, the complexity of the problems involved in the development of the medical training systems as well as the spectrum of scientific fields that need to be covered have been a major limiting factor to the achievement of realistic simulations. We think that success in this effort cannot occur through uncoordinated efforts among domain experts working within their own fields. Success will come through medical personnel working side by side with engineers, computer scientists and designers to develop a simulation system that is useful and relevant. As part of our overall program to develop medical simulation, we have identified a critical infrastructure technology that will enable collaboration among simulation developers. When implemented, this Common Anatomy Modeling Language (CAML) will provide a common architecture for integrating the individual components of a medical simulation system.
A Variable Resolution Stretched Grid General Circulation Model: Regional Climate Simulation
NASA Technical Reports Server (NTRS)
Fox-Rabinovitz, Michael S.; Takacs, Lawrence L.; Govindaraju, Ravi C.; Suarez, Max J.
2000-01-01
The development of, and results obtained with, a variable-resolution stretched-grid GCM for regional climate simulation are presented. The global variable-resolution stretched grid used in the study has enhanced horizontal resolution over the U.S. as the area of interest. The stretched-grid approach is an ideal tool for representing regional-to-global scale interactions. It is an alternative to the widely used nested-grid approach introduced over a decade ago as a pioneering step in regional climate modeling. The major results of the study are presented for the successful stretched-grid GCM simulation of the anomalous climate event of the 1988 U.S. summer drought. The straightforward (with no updates) two-month simulation is performed with 60 km regional resolution. The major drought fields, patterns, and characteristics, such as the time-averaged 500 hPa heights, precipitation, and the low-level jet over the drought area, appear to be close to the verifying analyses for the stretched-grid simulation. In other words, the stretched-grid GCM provides efficient downscaling over the area of interest with enhanced horizontal resolution. It is also shown that the GCM skill is sustained throughout the simulation when extended to one year. The stretched-grid GCM, developed and tested in simulation mode, is a viable tool for regional and subregional climate studies and applications.
Wu, Tai-Hsien; Guo, Rurng-Sheng; He, Guo-Wei; Liu, Ying-Ming; Qi, Dewei
2014-05-21
A generalized lattice-spring lattice-Boltzmann model (GLLM) is introduced by adding a three-body force to the traditional lattice-spring model. This method is able to deal with bending deformation of flexible biological bodies in fluids. The interactions between elastic solids and fluid are treated with the immersed boundary-lattice Boltzmann method. GLLM is validated by comparing the present results with existing theoretical and simulation results. As an application of GLLM, the swimming of a flagellum in fluid is simulated, and the propulsive force as a function of driving frequency, together with the fluid structures at Reynolds numbers 0.15-5.1, are presented in this paper.
Li, Hongzhi; Fajer, Mikolai; Yang, Wei
2007-01-14
A potential scaling version of simulated tempering is presented to efficiently sample configuration space in a localized region. The present "simulated scaling" method is developed with a Wang-Landau type of updating scheme in order to quickly flatten the distributions in the scaling parameter λm space. This proposal is meaningful for a broad range of biophysical problems in which localized sampling is required. Besides its superior capability and robustness in localized conformational sampling, this simulated scaling method can also naturally lead to efficient "alchemical" free energy predictions when a dual-topology alchemical hybrid potential is applied; thereby, both the chemically and conformationally distinct portions of two end-point chemical states can be efficiently sampled simultaneously. As demonstrated in this work, the present method is also feasible for quantum mechanical and quantum mechanical/molecular mechanical simulations.
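The Wang-Landau-style weight updating that simulated scaling relies on can be illustrated with a toy random walk over a handful of discrete λ states. The sketch below is a minimal illustration, not the authors' implementation: the free-energy profile U, the flatness criterion, and all parameter values are invented for demonstration. The adaptive bias g converges to -U plus a constant, which is exactly what flattens the distribution in λ space.

```python
import math
import random

def wang_landau_lambda(U, n_stages=15, flat=0.8, seed=2):
    """Wang-Landau-style weight updating over discrete lambda states.
    U[m] is a dimensionless free-energy profile; the adaptive bias g[m]
    converges to -U[m] + const so the walk visits all states uniformly."""
    rng = random.Random(seed)
    M = len(U)
    g = [0.0] * M
    lnf = 1.0                       # modification factor, halved each stage
    m = 0
    for _ in range(n_stages):
        hist = [0] * M
        while True:
            trial = m + rng.choice((-1, 1))
            if 0 <= trial < M:
                # Metropolis acceptance with the adaptive bias included
                delta = (U[m] + g[m]) - (U[trial] + g[trial])
                if delta >= 0 or rng.random() < math.exp(delta):
                    m = trial
            g[m] += lnf             # penalize the current state
            hist[m] += 1
            if min(hist) > flat * (sum(hist) / M):
                break               # histogram "flat enough": next stage
        lnf *= 0.5
    return g

g = wang_landau_lambda([0.0, 1.0, 2.0, 1.0, 0.0])
```

After convergence, g[m] + U[m] is approximately constant across states, so free-energy differences between λ end points can be read off from g.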
A general spectral method for the numerical simulation of one-dimensional interacting fermions
NASA Astrophysics Data System (ADS)
Clason, Christian; von Winckel, Gregory
2012-08-01
This software implements a general framework for the direct numerical simulation of systems of interacting fermions in one spatial dimension. The approach is based on a specially adapted nodal spectral Galerkin method, where the basis functions are constructed to obey the antisymmetry relations of fermionic wave functions. An efficient Matlab program for the assembly of the stiffness and potential matrices is presented, which exploits the combinatorial structure of the sparsity pattern arising from this discretization to achieve optimal run-time complexity. This program allows the accurate discretization of systems with multiple fermions subject to arbitrary potentials, e.g., for verifying the accuracy of multi-particle approximations such as Hartree-Fock in the few-particle limit. It can be used for eigenvalue computations or numerical solutions of the time-dependent Schrödinger equation. The new version includes a Python implementation of the presented approach.
New version program summary
Program title: assembleFermiMatrix
Catalogue identifier: AEKO_v1_1
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKO_v1_1.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 332
No. of bytes in distributed program, including test data, etc.: 5418
Distribution format: tar.gz
Programming language: MATLAB/GNU Octave, Python
Computer: Any architecture supported by MATLAB, GNU Octave or Python
Operating system: Any supported by MATLAB, GNU Octave or Python
RAM: Depends on the data
Classification: 4.3, 2.2
External routines: Python 2.7+, NumPy 1.3+, SciPy 0.10+
Catalogue identifier of previous version: AEKO_v1_0
Journal reference of previous version: Comput. Phys. Commun. 183 (2012) 405
Does the new version supersede the previous version?: Yes
Nature of problem: The direct numerical
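The antisymmetry constraint the adapted basis functions encode can be checked directly in the simplest case. The snippet below is an independent illustration, not code from the distributed MATLAB/Python package: it uses hypothetical particle-in-a-box orbitals to build a two-fermion Slater determinant and verify that it changes sign under particle exchange.

```python
import math

def phi(n, x):
    """Particle-in-a-box eigenfunctions on [0, 1], used here as a toy
    single-particle basis (an assumption for illustration only)."""
    return math.sqrt(2.0) * math.sin(n * math.pi * x)

def psi_antisym(x1, x2, a=1, b=2):
    """Two-fermion Slater-determinant wavefunction built from phi_a and phi_b.
    Antisymmetry: psi(x1, x2) = -psi(x2, x1), and psi vanishes at x1 = x2."""
    return (phi(a, x1) * phi(b, x2) - phi(b, x1) * phi(a, x2)) / math.sqrt(2.0)
```

A Galerkin basis built from such determinants automatically satisfies the Pauli principle, which is the structural property the package's sparsity pattern exploits.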
A 0.18 μm CMOS low-power radiation sensor for asynchronous event-driven UWB wireless transmission
NASA Astrophysics Data System (ADS)
Bastianini, S.; Crepaldi, M.; Demarchi, D.; Gabrielli, A.; Lolli, M.; Margotti, A.; Villani, G.; Zhang, Z.; Zoccoli, G.
2013-12-01
The paper describes the design of a readout element, proposed as a radiation monitor, which implements an embedded sensor based on a floating-gate transistor. The paper shows the design of a microelectronic circuit composed of a sensor, an oscillator, a modulator, a transmitter and an integrated antenna. A prototype chip has recently been fabricated and tested exploiting a commercial 180 nm, four-metal CMOS technology. Simulation results of the entire behavior of the circuit before submission are presented along with some measurements of the actual chip response. In addition, preliminary tests of the performance of the Ultra-Wide Band transmission via the integrated antenna are summarized. As the complete chip prototype area is less than 1 mm2, the chip fits a large variety of applications, from spot radiation monitoring systems in medicine to point measurements of radiation levels in High-Energy Physics experiments. A sensitivity of 1 mV/rad was estimated within an absorbed dose range up to 10 krad, with a total power consumption of about 165 μW.
Huebener, H; Cubasch, U; Langematz, U; Spangehl, T; Niehörster, F; Fast, I; Kunze, M
2007-08-15
Long-term transient simulations are carried out in an initial condition ensemble mode using a global coupled climate model which includes comprehensive ocean and stratosphere components. This model, which is run for the years 1860-2100, allows the investigation of the troposphere-stratosphere interactions and the importance of representing the middle atmosphere in climate-change simulations. The model simulates the present-day climate (1961-2000) realistically in the troposphere, stratosphere and ocean. The enhanced stratospheric resolution leads to the simulation of sudden stratospheric warmings; however, their frequency is underestimated by a factor of 2 with respect to observations. In projections of the future climate using the Intergovernmental Panel on Climate Change Special Report on Emissions Scenarios A2, an increased tropospheric wave forcing counteracts the radiative cooling in the middle atmosphere caused by the enhanced greenhouse gas concentration. This leads to a more dynamically active, warmer stratosphere compared with present-day simulations, and to the doubling of the number of stratospheric warmings. The associated changes in the mean zonal wind patterns lead to a southward displacement of the Northern Hemisphere storm track in the climate-change signal. PMID:17569652
Cheung, Kit; Schultz, Simon R; Luk, Wayne
2015-01-01
NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation to deliver optimized performance, such as the degree of parallelism to employ. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current or conductance based neuronal models such as integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to ~600,000 neurons and can achieve a real-time performance of 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times the speed of an 8-core processor, or 2.83 times the speed of GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation. PMID:26834542
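One of the neuron models the platform supports, the Izhikevich model, is compact enough to sketch in full. The following is a generic reference implementation in Python, not NeuroFlow's FPGA code; the regular-spiking parameter set and the input current are illustrative assumptions.

```python
def izhikevich_spikes(I=10.0, T=200.0, dt=0.25, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Forward-Euler integration of the Izhikevich neuron model; returns spike
    times in ms. Defaults (a, b, c, d) are the standard regular-spiking values."""
    v, u = c, b * c            # membrane potential (mV) and recovery variable
    spikes = []
    for k in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:          # spike detected: record time and reset state
            spikes.append(k * dt)
            v, u = c, u + d
    return spikes

spikes = izhikevich_spikes()
```

With a constant suprathreshold current the model fires repeatedly with spike-frequency adaptation, the behavior hardware platforms like this one must reproduce per neuron per timestep.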
Simulated Asian-Australian monsoon with a spectral element atmospheric general circulation model
NASA Astrophysics Data System (ADS)
Liu, X. Y.
2016-08-01
A low-top version of SEMANS (Spectral Element Model with Atmospheric Near Space resolved) has been used to carry out numerical simulations of the characteristics of the Asian-Australian Monsoon (A-AM) in this work. The simulation results are validated against the ERA-Interim reanalysis dataset and precipitation data from satellite remote sensing. It is shown that the model can reproduce the major climatic features of the A-AM, albeit with a stronger easterly in the tropical Eastern Pacific and a weaker northerly component in the Northern Hemisphere. The simulated precipitation rate is larger than observed, and the double ITCZ (Inter-Tropical Convergence Zone) in the tropical Eastern Pacific in the northern spring is not reproduced. Also, due to the absence of variations longer than a year in the bottom boundary conditions, the model cannot reproduce the relationships between the monsoon indexes and the surface air temperature in the broad area near the equator.
A study of nucleation and growth of thin films by means of computer simulation: General features
NASA Technical Reports Server (NTRS)
Salik, J.
1984-01-01
Some of the processes involved in the nucleation and growth of thin films were simulated by means of a digital computer. The simulation results were used to study the nucleation and growth kinetics resulting from the various processes. Kinetic results obtained for impingement, surface migration, impingement combined with surface migration, and with reevaporation are presented. A substantial fraction of the clusters may form directly upon impingement. Surface migration results in a decrease in cluster density, and reevaporation of atoms from the surface causes a further reduction in cluster density.
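A minimal version of the impingement-only case (no surface migration, no re-evaporation) can be sketched as a lattice Monte Carlo deposition that counts the resulting clusters. The lattice size, coverage, and 4-connected cluster definition below are illustrative assumptions, not details of the original simulation; the point is that adjacency alone already forms clusters directly upon impingement.

```python
import random

def deposit_and_count(L=64, n_atoms=500, seed=1):
    """Random impingement on an L x L periodic lattice; returns the number of
    4-connected clusters of occupied sites after n_atoms depositions."""
    rng = random.Random(seed)
    occ = [[False] * L for _ in range(L)]
    placed = 0
    while placed < n_atoms:                 # deposit atoms at random empty sites
        x, y = rng.randrange(L), rng.randrange(L)
        if not occ[x][y]:
            occ[x][y] = True
            placed += 1
    seen = [[False] * L for _ in range(L)]
    clusters = 0
    for i in range(L):                      # flood-fill to count clusters
        for j in range(L):
            if occ[i][j] and not seen[i][j]:
                clusters += 1
                stack = [(i, j)]
                seen[i][j] = True
                while stack:
                    x, y = stack.pop()
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        px, py = (x + dx) % L, (y + dy) % L
                        if occ[px][py] and not seen[px][py]:
                            seen[px][py] = True
                            stack.append((px, py))
    return clusters

clusters = deposit_and_count()
```

Adding a migration or re-evaporation move to the same loop reproduces the qualitative trends reported above: migration merges clusters (lower density), re-evaporation thins them further.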
An exploratory simulation study of a head-up display for general aviation lightplanes
NASA Technical Reports Server (NTRS)
Harris, R. L., Sr.; Hewes, D. E.
1973-01-01
The concept of a simplified head-up display referred to as a landing-site indicator (LASI) for use in lightplanes is discussed. Results of a fixed-base simulation study exploring the feasibility of the LASI concept are presented in terms of measurements of pilot performance, control-activity parameters, and subjective comments of four test subjects. These subjects, all of whom had various degrees of piloting experience in this type of aircraft, performed a series of simulated landings both with and without the LASI, starting from different initial conditions in the final approach leg of the landing maneuver.
Two-dimensional simulation of a high-gain, generalized self-filtering, unstable resonator.
Torre, A; Petrucci, C
1997-04-20
The performance of a high-power excimer laser, generalized self-filtering, unstable resonator has been modeled by means of a numerical code. The spectral method and the Rigrod equations are basic to the numerical procedure, which is quite general because it results from an appropriate combination of independent propagation algorithms. The code can be applied to arbitrary resonator geometry and can be used to take account of gain medium inhomogeneities and instability phenomena. PMID:18253235
NASA Astrophysics Data System (ADS)
Poursina, Mohammad; Anderson, Kurt S.
2013-03-01
In this paper, a scheme for the canonical ensemble simulation of coarse-grained articulated polymers is discussed. In this coarse-graining strategy, different subdomains of the system are considered as rigid and/or flexible bodies connected to each other via kinematic joints instead of stiff but elastic bonds. Herein, the temperature of the simulation is controlled by a Nosé-Hoover thermostat. The dynamics of this feedback control system in the context of multibody dynamics may be represented and solved using traditional methods with computational complexity of O(n^3), where n denotes the number of degrees of freedom of the system. In this paper, we extend the divide-and-conquer algorithm (DCA) and apply it to constant-temperature molecular simulations. The DCA in its original form uses spatial forces to formulate the equations of motion. The Generalized-DCA applied here properly accommodates the generalized forces from the thermostat, which control the temperature of the simulation, in the equations of motion. This algorithm can be implemented in serial and in parallel, with computational complexity of O(n) and O(log n), respectively.
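The Nosé-Hoover feedback that controls temperature can be sketched, independently of the DCA machinery, on a single degree of freedom. This is a hedged toy example (unit mass, unit Boltzmann constant, all parameters invented); a quartic well is used rather than a harmonic one because a single harmonic oscillator is a known non-ergodic edge case for this thermostat.

```python
def nose_hoover(T_target=1.0, Q=1.0, dt=0.005, steps=200000):
    """1D particle in a quartic well V(x) = x^4/4 coupled to a Nose-Hoover
    thermostat variable xi. Returns the time-averaged squared velocity, which
    the feedback drives toward T_target (k_B = m = 1)."""
    x, v, xi = 1.0, 0.0, 0.0
    v2_sum = 0.0
    for _ in range(steps):
        v += 0.5 * dt * (-x**3 - xi * v)    # half kick: force + thermostat friction
        x += dt * v
        xi += dt * (v * v - T_target) / Q   # feedback: heats if cold, cools if hot
        v += 0.5 * dt * (-x**3 - xi * v)
        v2_sum += v * v
    return v2_sum / steps

avg_v2 = nose_hoover()
```

The thermostat variable xi enters the equations of motion exactly like an extra generalized force, which is the quantity the Generalized-DCA must accommodate in the multibody setting.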
NASA Astrophysics Data System (ADS)
Cauquoin, A.; Jean-Baptiste, P.; Risi, C.; Fourré, É.; Stenni, B.; Landais, A.
2015-10-01
The description of the hydrological cycle in Atmospheric General Circulation Models (GCMs) can be validated using water isotopes as tracers. Many GCMs now simulate the movement of the stable isotopes of water, but here we present the first GCM simulations modelling the content of natural tritium in water. These simulations were obtained using a version of the LMDZ General Circulation Model enhanced by water isotopes diagnostics, LMDZ-iso. To avoid tritium generated by nuclear bomb testing, the simulations have been evaluated against a compilation of published tritium datasets dating from before 1950, or measured recently. LMDZ-iso correctly captures the observed tritium enrichment in precipitation as oceanic air moves inland (the so-called continental effect) and the observed north-south variations due to the latitudinal dependency of the cosmogenic tritium production rate. The seasonal variability, linked to the stratospheric intrusions of air masses with higher tritium content into the troposphere, is correctly reproduced for Antarctica with a maximum in winter. LMDZ-iso reproduces the spring maximum of tritium over Europe, but underestimates it and produces a peak in winter that is not apparent in the data. This implementation of tritium in a GCM promises to provide a better constraint on: (1) the intrusions and transport of air masses from the stratosphere, and (2) the dynamics of the modelled water cycle. The method complements the existing approach of using stable water isotopes.
Cador, Charlie; Rose, Nicolas; Willem, Lander; Andraud, Mathieu
2016-01-01
Swine Influenza A Viruses (swIAVs) have been shown to persist in farrow-to-finish pig herds with repeated outbreaks in successive batches, increasing the risk of respiratory disorders in affected animals and posing a threat to public health. Although the general routes of swIAV transmission (i.e. direct contact and exposure to aerosols) are clearly identified, the transmission process between batches is still not fully understood. Maternally derived antibodies (MDAs) have been stressed as a possible factor favoring within-herd swIAV persistence. However, an understanding of the relationship between MDAs and the global spread among the different subpopulations in the herds is still lacking. The aim of this study was therefore to understand the mechanisms induced by MDAs in relation to swIAV spread and persistence in farrow-to-finish pig herds. A metapopulation model was developed representing the population dynamics of two subpopulations (breeding sows and growing pigs) managed according to a batch-rearing system. This model was coupled with a swIAV-specific epidemiological model accounting for partial passive immunity protection in neonatal piglets and an immunity boost in re-infected animals. Airborne transmission was included through a between-room transmission rate related to the current prevalence of shedding pigs. Maternally derived partial immunity in piglets was found to extend the duration of epidemics within their batch, allowing for efficient between-batch transmission and resulting in longer swIAV persistence at the herd level. These results should be taken into account in the design of control programmes for the spread and persistence of swIAV in swine herds. PMID:27662592
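The transmission dynamics underlying such models reduce, in their simplest form, to SIR-type compartment equations. The sketch below is a generic single-population SIR integration, not the authors' metapopulation model with maternal immunity; the transmission and recovery rates are invented for illustration (here R0 = beta/gamma = 3).

```python
def sir_step(S, I, R, beta, gamma, dt):
    """One forward-Euler step of a basic SIR model with frequency-dependent
    mixing: new infections beta*S*I/N, recoveries gamma*I."""
    N = S + I + R
    new_inf = beta * S * I / N * dt
    new_rec = gamma * I * dt
    return S - new_inf, I + new_inf - new_rec, R + new_rec

# one infectious animal introduced into a herd of 1000; run 200 days
S, I, R = 999.0, 1.0, 0.0
for _ in range(2000):
    S, I, R = sir_step(S, I, R, beta=0.3, gamma=0.1, dt=0.1)
```

Extensions like the one in the study add compartments for maternally protected piglets and couple several such systems through between-room transmission terms.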
NASA Astrophysics Data System (ADS)
Louri, Ahmed; Major, Michael C.
1995-07-01
Research in the field of free-space optical interconnection networks has reached a point where simulators and other design tools are desirable for reducing development costs and for improving design time. Previously proposed methodologies have only been applicable to simple systems. Our goal was to develop a simulation methodology capable of evaluating the performance characteristics for a variety of different free-space networks under a range of different configurations and operating states. The proposed methodology operates by first establishing the optical signal powers at various locations in the network. These powers are developed through the simulation by diffraction analysis of the light propagation through the network. After this evaluation, characteristics such as bit-error rate, signal-to-noise ratio, and system bandwidth are calculated. Further, the simultaneous evaluation of this process for a set of component misalignments provides a measure of the alignment tolerance of a design. We discuss this simulation process in detail as well as provide models for different optical interconnection network components.
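Once signal powers are established, figures of merit follow from standard formulas; for example, the bit-error rate of a binary receiver with additive Gaussian noise is conventionally computed from the Q-factor. The helper below is the textbook relation, not the paper's specific receiver model.

```python
import math

def ber_from_q(q):
    """BER for binary detection with Gaussian noise and an optimally placed
    threshold: BER = 0.5 * erfc(Q / sqrt(2)). Q = 6 corresponds to ~1e-9."""
    return 0.5 * math.erfc(q / math.sqrt(2.0))
```

In a simulator of this kind, Q is formed from the diffraction-derived signal powers and noise variances at each detector, so misalignments propagate directly into BER.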
NASA Astrophysics Data System (ADS)
Fallahi, Arya; Kärtner, Franz
2014-12-01
We introduce a hybrid technique based on the discontinuous Galerkin time domain (DGTD) and the particle in cell (PIC) simulation methods for the analysis of interaction between light and charged particles. The DGTD algorithm is a three-dimensional, dual-field and fully explicit method for efficiently solving Maxwell equations in the time domain on unstructured grids. On the other hand, the PIC algorithm is a versatile technique for the simulation of charged particles in an electromagnetic field. This paper introduces a novel strategy for combining both methods to solve for the electron motion and field distribution when an optical beam interacts with an electron bunch in a very general geometry. The developed software offers a complete and stable numerical solution of the problem for arbitrary charge and field distributions in the time domain on unstructured grids. For this purpose, an advanced search algorithm is developed for fast calculation of field data at charge points and for later importing to the PIC simulations. In addition, we propose a field-based coupling between the two methods resulting in a stable and precise time marching scheme for both fields and charged particle motion. To benchmark the solver, some examples are numerically solved and compared with analytical solutions. Eventually, the developed software is utilized to simulate the field emission from a flat metal plate and a silicon nano-tip. In the future, we will use this technique for the simulation and design of ultrafast compact x-ray sources.
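A core ingredient of any PIC scheme of this kind is the particle push. The standard Boris rotation is sketched below as a generic illustration, not the authors' DGTD-coupled solver; it conserves kinetic energy exactly in a pure magnetic field, which makes it a convenient self-test.

```python
def cross(a, b):
    """Cross product of two 3-vectors given as lists."""
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def boris_push(v, E, B, qm=1.0, dt=0.1):
    """One Boris velocity update: half electric kick, exact magnetic rotation,
    half electric kick. qm is the charge-to-mass ratio (non-relativistic form)."""
    vm = [v[i] + 0.5 * dt * qm * E[i] for i in range(3)]   # first half kick
    t = [0.5 * dt * qm * B[i] for i in range(3)]
    t2 = sum(c * c for c in t)
    cv = cross(vm, t)
    vprime = [vm[i] + cv[i] for i in range(3)]
    s = [2.0 * c / (1.0 + t2) for c in t]
    cs = cross(vprime, s)
    vplus = [vm[i] + cs[i] for i in range(3)]               # rotated velocity
    return [vplus[i] + 0.5 * dt * qm * E[i] for i in range(3)]
```

In the hybrid scheme described above, the fields E and B at each particle position would come from the DGTD solution on the unstructured grid via the search algorithm the authors mention.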
Atmospheric distribution of Kr-85 simulated with a general circulation model
NASA Technical Reports Server (NTRS)
Jacob, Daniel J.; Wofsy, Steven C.; Mcelroy, Michael B.; Prather, Michael J.
1987-01-01
A three-dimensional chemical tracer model for the troposphere is used to simulate the global distribution of Kr-85, a long-lived radioisotope released at northern midlatitudes by nuclear industry. Simulated distributions for the period 1980-1983 are in excellent agreement with data from six latitudinal profiles measured over the Atlantic. High concentrations of Kr-85 are predicted over the Arctic in winter, advected from European sources, and somewhat smaller enhancements arising from the same sources are predicted over the tropical Atlantic in summer. Latitudinal gradients are steepest in the northern tropics, with distinctly different seasonal variations over the Pacific, as compared to the Atlantic. The global inventory of Kr-85 is reconstructed for the period 1980-1983 by combining the concentrations measured over the Atlantic with the global distributions predicted by the model. The magnitude of the Soviet source is derived. The interhemispheric exchange time is calculated as 1.1 years, with little seasonal dependence.
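The interhemispheric exchange time quoted above can be illustrated with a two-box reduction of such a tracer model. This sketch is not the paper's 3-D model: it assumes a single northern-hemisphere source, the Kr-85 half-life of 10.76 yr, and the 1.1 yr exchange time, and checks the analytic steady-state interhemispheric ratio 1 + lambda*tau.

```python
import math

def two_box_kr85(source=1.0, tau_ex=1.1, half_life=10.76, years=300.0, dt=0.005):
    """Two-box (NH/SH) model of a long-lived tracer: constant emission into the
    northern box, radioactive decay in both boxes, and interhemispheric
    exchange with timescale tau_ex (years). Returns the two burdens."""
    lam = math.log(2.0) / half_life          # Kr-85 decay constant, 1/yr
    n = s = 0.0                              # hemispheric burdens
    for _ in range(int(years / dt)):
        ex = (n - s) / tau_ex                # net NH -> SH transport
        dn = source - lam * n - ex
        ds = -lam * s + ex
        n += dt * dn
        s += dt * ds
    return n, s

n_burden, s_burden = two_box_kr85()
```

At steady state the equations give N/S = 1 + lambda*tau_ex (about 1.07 here), a compact version of the north-south gradient the full 3-D model resolves latitudinally.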
General relativistic simulations of slowly and differentially rotating magnetized neutron stars
Etienne, Zachariah B.; Liu, Yuk Tung; Shapiro, Stuart L.
2006-08-15
We present long-term (~10^4 M) axisymmetric simulations of differentially rotating, magnetized neutron stars in the slow-rotation, weak-magnetic-field limit using a perturbative metric evolution technique. Although this approach yields results comparable to those obtained via nonperturbative (BSSN) evolution techniques, simulations performed with the perturbative metric solver require about 1/4 the computational resources at a given resolution. This computational efficiency enables us to observe and analyze the effects of magnetic braking and the magnetorotational instability (MRI) at very high resolution. Our simulations demonstrate that (1) MRI is not observed unless the fastest-growing mode wavelength is resolved by ≳10 gridpoints; (2) as resolution is improved, the MRI growth rate converges, but due to the small-scale turbulent nature of MRI, the maximum growth amplitude increases and does not exhibit convergence, even at the highest resolution; and (3) independent of resolution, magnetic braking drives the star toward uniform rotation as energy is sapped from differential rotation by winding magnetic fields.
General Relativistic Magnetohydrodynamic Simulations of Jet Formation with a Thin Keplerian Disk
NASA Technical Reports Server (NTRS)
Mizuno, Yosuke; Nishikawa, Ken-Ichi; Koide, Shinji; Hardee, Philip; Gerald, J. Fishman
2006-01-01
We have performed several simulations of black hole systems (non-rotating, black hole spin parameter a = 0.0, and rapidly rotating, a = 0.95) with a geometrically thin Keplerian disk using the newly developed RAISHIN code. The simulation results show the formation of jets driven by the Lorentz force and the gas pressure gradient. The jets have mildly relativistic speed (greater than or equal to 0.4 c). Matter is continuously supplied from the accretion disk, and the jet propagates outward until each applicable terminal simulation time (non-rotating: t/τ_S = 275; rotating: t/τ_S = 200, where τ_S ≡ r_S/c). It appears that a rotating black hole creates an additional, faster, and more collimated inner outflow (greater than or equal to 0.5 c), formed and accelerated by the twisted magnetic field resulting from frame-dragging in the black hole ergosphere. This new result indicates that jet kinematic structure depends on black hole rotation.
Developing extensible lattice-Boltzmann simulators for general-purpose graphics-processing units
Walsh, S C; Saar, M O
2011-12-21
Lattice-Boltzmann methods are versatile numerical modeling techniques capable of reproducing a wide variety of fluid-mechanical behavior. These methods are well suited to parallel implementation, particularly in the single-instruction multiple-data (SIMD) parallel processing environments found in computer graphics processing units (GPUs). Although more recent programming tools dramatically improve the ease with which GPU programs can be written, the programming environment still lacks the flexibility available to more traditional CPU programs. In particular, it may be difficult to develop modular and extensible programs that require variable on-device functionality with current GPU architectures. This paper describes a process of automatic code generation that overcomes these difficulties for lattice-Boltzmann simulations. It details the development of GPU-based modules for an extensible lattice-Boltzmann simulation package, LBHydra. The performance of the automatically generated code is compared to that of equivalent purpose-written codes for single-phase, multiple-phase, and multiple-component flows. The flexibility of the new method is demonstrated by simulating a rising, dissolving droplet in a porous medium with user-generated lattice-Boltzmann models and subroutines.
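For reference, the kernel such GPU modules implement, a D2Q9 BGK collide-and-stream update, can be written compactly in NumPy. This CPU sketch is illustrative, not LBHydra code; the lattice size, relaxation time, and initial condition are arbitrary. Streaming conserves mass exactly and collision conserves it to rounding, which the test exploits.

```python
import numpy as np

# D2Q9 lattice: rest particle, 4 axis-aligned and 4 diagonal velocities
C = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    """Second-order Maxwell-Boltzmann equilibrium distributions."""
    cu = C[:, 0, None, None] * ux + C[:, 1, None, None] * uy
    usq = ux**2 + uy**2
    return W[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def step(f, tau=0.6):
    """One BGK collision followed by periodic streaming."""
    rho = f.sum(axis=0)
    ux = (f * C[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * C[:, 1, None, None]).sum(axis=0) / rho
    f = f - (f - equilibrium(rho, ux, uy)) / tau      # collision
    for i, (cx, cy) in enumerate(C):                  # streaming
        f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
    return f

# small density perturbation at rest on a 32x32 periodic box
nx = ny = 32
rho0 = 1.0 + 0.01 * np.sin(2*np.pi*np.arange(nx)/nx)[:, None] * np.ones((1, ny))
f = equilibrium(rho0, np.zeros((nx, ny)), np.zeros((nx, ny)))
mass0 = f.sum()
for _ in range(100):
    f = step(f)
```

The per-site independence of the collision step and the regular neighbor access of the streaming step are exactly what makes the method map well onto SIMD GPU hardware.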
Generalized Simulation Model for a Switched-Mode Power Supply Design Course Using MATLAB/SIMULINK
ERIC Educational Resources Information Center
Liao, Wei-Hsin; Wang, Shun-Chung; Liu, Yi-Hua
2012-01-01
Switched-mode power supplies (SMPS) are becoming an essential part of many electronic systems as the industry drives toward miniaturization and energy efficiency. However, practical SMPS design courses are seldom offered. In this paper, a generalized MATLAB/SIMULINK modeling technique is first presented. A proposed practical SMPS design course at…
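The kind of behavior such SIMULINK models capture can also be reproduced with a few lines of direct time-stepping. The sketch below is a generic ideal synchronous buck converter in continuous conduction mode, with invented component values and losses ignored; its steady-state output should approach D*Vin = 6 V.

```python
def simulate_buck(vin=12.0, duty=0.5, L=100e-6, C=100e-6, R=10.0,
                  fsw=100e3, dt=1e-7, t_end=0.02):
    """Forward-Euler simulation of an ideal synchronous buck converter:
    the switch node alternates between vin and 0 at duty cycle `duty`,
    feeding an LC filter loaded by resistor R. Returns final capacitor voltage."""
    iL = vC = 0.0                 # inductor current, output capacitor voltage
    period = 1.0 / fsw
    t = 0.0
    while t < t_end:
        vsw = vin if (t % period) < duty * period else 0.0
        iL += dt * (vsw - vC) / L          # inductor: di/dt = (vsw - vC)/L
        vC += dt * (iL - vC / R) / C       # capacitor: dv/dt = (iL - iload)/C
        t += dt
    return vC

vout = simulate_buck()
```

Averaged models of this converter replace the switching waveform with its mean D*Vin, which is the simplification generalized simulation models typically expose to students first.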
Simulation of the Low-Level-Jet by general circulation models
Ghan, S.J.
1996-04-01
To what degree are the low-level jet climatology and its impact on clouds and precipitation being captured by current general circulation models? It is hypothesized that a parameterization is needed. This paper describes this parameterization need.
Gallas, Brandon D; Hillis, Stephen L
2014-10-01
Modeling and simulation are often used to understand and investigate random quantities and estimators. In 1997, Roe and Metz introduced a simulation model to validate analysis methods for the popular endpoint in reader studies to evaluate medical imaging devices, the reader-averaged area under the receiver operating characteristic (ROC) curve. Here, we generalize the notation of the model to allow more flexibility in recognition that variances of ROC ratings depend on modality and truth state. We also derive and validate equations for computing population variances and covariances for reader-averaged empirical AUC estimates under the generalized model. The equations are one-dimensional integrals that can be calculated using standard numerical integration techniques. This work provides the theoretical foundation and validation for a Java application called iRoeMetz that can simulate multireader multicase ROC studies and numerically calculate the corresponding variances and covariances of the empirical AUC. The iRoeMetz application and source code can be found at the "iMRMC" project on the Google Code project hosting site. These results and the application can be used by investigators to investigate ROC endpoints, validate analysis methods, and plan future studies.
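The kind of Monte Carlo trial iRoeMetz performs can be sketched with a stripped-down, single-modality version of the Roe and Metz model. The variance components, case counts, and separation parameter below are illustrative assumptions; with unit total variance per truth state and separation mu, the expected reader-averaged AUC is Phi(mu/sqrt(2)), about 0.76 here.

```python
import numpy as np

def trial_auc(n_readers=5, n0=40, n1=40, mu=1.0, var_case=0.3, seed=0):
    """One trial of a simplified Roe-Metz-style model (single modality):
    rating = truth shift + case effect shared across readers + reader-specific
    noise, with unit total variance per truth state. Returns the
    reader-averaged empirical AUC."""
    rng = np.random.default_rng(seed)
    case0 = rng.normal(0.0, np.sqrt(var_case), n0)   # non-diseased case effects
    case1 = rng.normal(0.0, np.sqrt(var_case), n1)   # diseased case effects
    aucs = []
    for _ in range(n_readers):
        x0 = case0 + rng.normal(0.0, np.sqrt(1 - var_case), n0)
        x1 = mu + case1 + rng.normal(0.0, np.sqrt(1 - var_case), n1)
        diff = x1[:, None] - x0[None, :]
        # empirical AUC: fraction of diseased/non-diseased pairs ranked correctly
        aucs.append((diff > 0).mean() + 0.5 * (diff == 0).mean())
    return float(np.mean(aucs))

mean_auc = sum(trial_auc(seed=s) for s in range(50)) / 50
```

Repeating such trials and taking empirical variances of the AUC is the Monte Carlo route; the paper's contribution is closed-form one-dimensional integrals that replace it.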
NASA Astrophysics Data System (ADS)
Casavant, D.; Brodsky, I.; MacDougall, G. J.
Many important details regarding magnetism in a material can be inferred from the magnetic excitation spectrum, and in this context, general calculations of the classical spinwave spectrum are often necessary. Beyond the simplest of lattices, however, it is difficult to numerically determine the full spinwave spectrum, due primarily to the non-linearity of the problem. In this talk, I will present MATLAB code, developed over the last few years at the University of Illinois, that calculates the dispersions of spinwave excitations out of an arbitrarily defined ordered spin system. The calculation assumes a standard Heisenberg exchange Hamiltonian with the incorporation of a single-ion anisotropy term, which can be varied site-by-site, and can also simulate an applied field. An overview of the calculation method and the structure of the code will be given, with emphasis on its general applicability. Extensions to the code enable the simulation of both single-crystal and powder-averaged neutron scattering intensity patterns. As a specific example, I will present the calculated neutron scattering spectrum for powders of CoV2O4, where good agreement between the simulated and experimental data suggests a self-consistent picture of the low-temperature magnetism.
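For the simplest case (a single-sublattice ferromagnet with no anisotropy or field), the essence of such a calculation reduces to a Fourier sum over exchange couplings. The sketch below is a generic linear spin-wave formula, not the Illinois code: with H = -(1/2) * sum over sites i and neighbor vectors d of J_d S_i . S_(i+d), the dispersion is hbar*omega(k) = S * (J(0) - J(k)).

```python
import numpy as np

def fm_spinwave(k, neighbors, S=1.0):
    """Ferromagnetic linear spin-wave dispersion on a Bravais lattice.
    `neighbors` is a list of (displacement vector, J) pairs, listing both
    +d and -d. Returns hbar*omega(k) = S * (J(0) - J(k)), where
    J(k) = sum over d of J_d * cos(k . d)."""
    J0 = sum(J for d, J in neighbors)
    Jk = sum(J * np.cos(np.dot(k, d)) for d, J in neighbors)
    return S * (J0 - Jk)

# nearest-neighbor chain: recovers the textbook 2JS(1 - cos k)
chain = [(np.array([1.0]), 1.0), (np.array([-1.0]), 1.0)]
```

The general (anisotropic, multi-sublattice) problem the code above addresses replaces this scalar formula with the diagonalization of a Bogoliubov matrix at each k, which is where the numerical difficulty lies.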
NASA Astrophysics Data System (ADS)
Salaway, Richard N.; Zhigilei, Leonid V.
2016-07-01
The contact conductance of carbon nanotube (CNT) junctions is the key factor that controls the collective heat transfer through CNT networks or CNT-based materials. An improved understanding of the dependence of the intertube conductance on the contact structure and local environment is needed for predictive computational modeling or theoretical description of the effective thermal conductivity of CNT materials. To investigate the effect of local structure on the thermal conductance across CNT-CNT contact regions, nonequilibrium molecular dynamics (MD) simulations are performed for different intertube contact configurations (parallel fully or partially overlapping CNTs and CNTs crossing each other at different angles) and local structural environments characteristic of CNT network materials. The results of MD simulations predict a stronger CNT length dependence present over a broader range of lengths than has been previously reported and suggest that the effect of neighboring junctions on the conductance of CNT-CNT junctions is weak and only present when the CNTs that make up the junctions are within the range of direct van der Waals interaction with each other. A detailed analysis of the results obtained for a diverse range of intertube contact configurations reveals a nonlinear dependence of the conductance on the contact area (or number of interatomic intertube interactions) and suggests larger contributions to the conductance from areas of the contact where the density of interatomic intertube interactions is smaller. An empirical relation accounting for these observations and expressing the conductance of an arbitrary contact configuration through the total number of interatomic intertube interactions and the average number of interatomic intertube interactions per atom in the contact region is proposed. The empirical relation is found to provide a good quantitative description of the contact conductance for various CNT configurations investigated in the MD simulations.
D. Fix; J. Estill; L. Wong; R. Rebak
2004-05-28
Boron containing stainless steels are used in the nuclear industry for applications such as spent fuel storage, control rods and shielding. It was of interest to compare the corrosion resistance of three borated stainless steels with standard austenitic alloy materials such as type 304 and 316 stainless steels. Tests were conducted in three simulated concentrated ground waters at 90 °C. Results show that the borated stainless steels were less resistant to corrosion than the witness austenitic materials. An acidic concentrated ground water was more aggressive than an alkaline concentrated ground water.
Estill, J C; Rebak, R B; Fix, D V; Wong, L L
2004-03-11
Boron containing stainless steels are used in the nuclear industry for applications such as spent fuel storage, control rods and shielding. It was of interest to compare the corrosion resistance of three borated stainless steels with standard austenitic alloy materials such as type 304 and 316 stainless steels. Tests were conducted in three simulated concentrated ground waters at 90 °C. Results show that the borated stainless steels were less resistant to corrosion than the witness austenitic materials. An acidic concentrated ground water was more aggressive than an alkaline concentrated ground water.
An order n formulation for the motion simulation of general multi-rigid-body tree systems
NASA Astrophysics Data System (ADS)
Anderson, K. S.
1993-02-01
An algorithm is developed for formulating dynamical equations of motion in a highly efficient concurrent form. The algorithm allows for the determination of system state derivative values in O(n) operations overall for tree structures. When the algorithm is implemented using symbolic manipulation to develop the explicit dynamical equations of motion, and appropriate substitutions are made for the recurring intermediate variables (as is done with the SWIFT program), substantial reductions in operation count per integration step are realized. If the inherent concurrency of the equations is then exploited through application on a parallel processing system, previously unrealizable simulation speed is possible.
Numerical simulation of steady and unsteady flow for generalized Newtonian fluids
NASA Astrophysics Data System (ADS)
Keslerová, Radka; Trdlička, David; Řezníček, Hynek
2016-08-01
This work presents the numerical solution of laminar incompressible viscous flow in a three-dimensional branching channel with circular cross-section for generalized Newtonian fluids. The governing system of equations is based on the balance laws for mass and momentum. The numerical solution is based on a central finite volume method using explicit Runge-Kutta time integration. In the case of unsteady computation, the artificial compressibility method is considered.
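As an illustrative aside (not the authors' solver), the explicit Runge-Kutta time marching mentioned in the abstract can be sketched for a scalar model equation; the decay term here is only a stand-in for the semi-discrete finite-volume residual:

```python
import math

def rk4_step(f, u, t, dt):
    """One classical fourth-order Runge-Kutta step for du/dt = f(t, u)."""
    k1 = f(t, u)
    k2 = f(t + 0.5 * dt, u + 0.5 * dt * k1)
    k3 = f(t + 0.5 * dt, u + 0.5 * dt * k2)
    k4 = f(t + dt, u + dt * k3)
    return u + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# March the model equation du/dt = -u from t = 0 to t = 1.
u, t, dt = 1.0, 0.0, 0.01
for _ in range(100):
    u = rk4_step(lambda time, state: -state, u, t, dt)
    t += dt
```

In a real finite-volume code the scalar `u` would be the vector of cell averages and `f` the discrete flux balance; the stepping logic is unchanged.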
NASA Astrophysics Data System (ADS)
Sądowski, Aleksander; Narayan, Ramesh; Tchekhovskoy, Alexander; Abarca, David; Zhu, Yucong; McKinney, Jonathan C.
2015-02-01
We present a mean-field model that emulates the magnetic dynamo operating in magnetized accretion discs. We have implemented this model in the general relativistic radiation magnetohydrodynamic (GRRMHD) code KORAL, using results from local shearing sheet simulations of the magnetorotational instability to fix the parameters of the dynamo. With the inclusion of this dynamo, we are able to run 2D axisymmetric GRRMHD simulations of accretion discs for arbitrarily long times. The simulated discs exhibit sustained turbulence, with the poloidal and toroidal magnetic field components driven towards a state similar to that seen in 3D studies. Using this dynamo code, we present a set of long-duration global simulations of super-Eddington, optically thick discs around non-spinning and spinning black holes. Super-Eddington discs around non-rotating black holes exhibit a surprisingly large efficiency, η ≈ 0.04, independent of the accretion rate, where we measure efficiency in terms of the total energy output, both radiation and mechanical, flowing out to infinity. This value significantly exceeds the efficiency predicted by slim disc models for these accretion rates. Super-Eddington discs around spinning black holes are even more efficient, and appear to extract black hole rotational energy through a process similar to the Blandford-Znajek mechanism. All the simulated models are characterized by highly super-Eddington radiative fluxes collimated along the rotation axis. We also present a set of simulations that were designed to have Eddington or slightly sub-Eddington accretion rates (Ṁ ≲ 2Ṁ_Edd). None of these models reached a steady state. Instead, the discs collapsed as a result of runaway cooling, presumably because of a thermal instability.
Rauscher, Sarah; Neale, Chris; Pomès, Régis
2009-10-13
Generalized-ensemble algorithms in temperature space have become popular tools to enhance conformational sampling in biomolecular simulations. A random walk in temperature leads to a corresponding random walk in potential energy, which can be used to cross over energetic barriers and overcome the problem of quasi-nonergodicity. In this paper, we introduce two novel methods: simulated tempering distributed replica sampling (STDR) and virtual replica exchange (VREX). These methods are designed to address the practical issues inherent in the replica exchange (RE), simulated tempering (ST), and serial replica exchange (SREM) algorithms. RE requires a large, dedicated, and homogeneous cluster of CPUs to function efficiently when applied to complex systems. ST and SREM both have the drawback of requiring extensive initial simulations, possibly adaptive, for the calculation of weight factors or potential energy distribution functions. STDR and VREX alleviate the need for lengthy initial simulations, and for synchronization and extensive communication between replicas. Both methods are therefore suitable for distributed or heterogeneous computing platforms. We perform an objective comparison of all five algorithms in terms of both implementation issues and sampling efficiency. We use disordered peptides in explicit water as test systems, for a total simulation time of over 42 μs. Efficiency is defined in terms of both structural convergence and temperature diffusion, and we show that these definitions of efficiency are in fact correlated. Importantly, we find that ST-based methods exhibit faster temperature diffusion and correspondingly faster convergence of structural properties compared to RE-based methods. Within the RE-based methods, VREX is superior to both SREM and RE. On the basis of our observations, we conclude that ST is ideal for simple systems, while STDR is well-suited for complex systems.
A general approach to develop reduced order models for simulation of solid oxide fuel cell stacks
Pan, Wenxiao; Bao, Jie; Lo, Chaomei; Lai, Canhai; Agarwal, Khushbu; Koeppel, Brian J.; Khaleel, Mohammad A.
2013-06-15
A reduced order modeling approach based on response surface techniques was developed for solid oxide fuel cell stacks. This approach creates a numerical model that can quickly compute desired performance variables of interest for a stack based on its input parameter set. The approach carefully samples the multidimensional design space based on the input parameter ranges, evaluates a detailed stack model at each of the sampled points, and performs regression for selected performance variables of interest to determine the response surfaces. After error analysis to ensure that sufficient accuracy is established for the response surfaces, they are then implemented in a calculator module for system-level studies. The benefit of this modeling approach is that it is sufficiently fast for integration with system modeling software and simulation of fuel cell-based power systems while still providing high fidelity information about the internal distributions of key variables. This paper describes the sampling, regression, sensitivity, error, and principal component analyses to identify the applicable methods for simulating a planar fuel cell stack.
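A minimal one-dimensional sketch of the response-surface idea (illustrative only; the actual stack model samples a multidimensional design space and uses several regression methods): fit a low-order polynomial to sampled model evaluations, then reuse the cheap fit as a surrogate.

```python
def solve_dense(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_quadratic(xs, ys):
    """Least-squares response surface y ~ w0 + w1*x + w2*x^2
    via the normal equations (A^T A) w = A^T y for the monomial basis."""
    A = [[sum(x ** (i + j) for x in xs) for j in range(3)] for i in range(3)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(3)]
    return solve_dense(A, b)

# Five sampled "detailed model" evaluations of y = 1 + 2x + 3x^2.
coeffs = fit_quadratic([0.0, 1.0, 2.0, 3.0, 4.0], [1.0, 6.0, 17.0, 34.0, 57.0])
```

Once the coefficients are in hand, evaluating the surrogate costs a handful of multiplications, which is what makes the approach attractive for system-level studies.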
Abla, G
2012-11-09
The Center for Simulation of Wave Interactions with Magnetohydrodynamics (SWIM) project is dedicated to conducting research on integrated multi-physics simulations. The Integrated Plasma Simulator (IPS) is a framework that was created by the SWIM team. It provides an integration infrastructure for loosely coupled component-based simulations by facilitating services for code execution coordination, computational resource management, data management, and inter-component communication. The IPS framework's features include improved resource utilization, application-level fault tolerance, and support for a concurrent multi-tasking execution model. The General Atomics (GA) team worked closely with other team members on this contract, and conducted research in the areas of computational code monitoring, meta-data management, interactive visualization, and user interfaces. The original website to monitor SWIM activity was developed at the beginning of the project. Due to amended requirements, the software was redesigned and a revision of the website was deployed into production in April of 2010. Throughout the duration of this project, the SWIM Monitoring Portal (http://swim.gat.com:8080/) has been a critical production tool for supporting the project's physics goals.
Hamilton, S.; Veselka, T.D.; Cirillo, R.R.
1991-01-01
Global warming control strategies which mandate stringent caps on emissions of greenhouse forcing gases can substantially alter a country's demand, production, and imports of energy products. Although there is a large degree of uncertainty when attempting to estimate the potential impact of these strategies, insights into the problem can be acquired through computer model simulations. This paper presents one method of structuring a general equilibrium model, the ENergy and Power Evaluation Program/Global Climate Change (ENPEP/GCC), to simulate changes in a country's energy supply and demand balance in response to global warming control strategies. The equilibrium model presented in this study is based on the principle of decomposition, whereby a large complex problem is divided into a number of smaller submodules. Submodules simulate energy activities and conversion processes such as electricity production. These submodules are linked together to form an energy supply and demand network. Linkages identify energy and fuel flows among various activities. Since global warming control strategies can have wide-reaching effects, a complex network was constructed. The network represents all energy production, conversion, transportation, distribution, and utilization activities. The structure of the network depicts interdependencies within and across economic sectors and was constructed such that energy prices and demand responses can be simulated. Global warming control alternatives represented in the network include: (1) conservation measures through increased efficiency; and (2) substitution of fuels that have high greenhouse gas emission rates with fuels that have lower emission rates. 6 refs., 4 figs., 4 tabs.
A Generalized Fast Frequency Sweep Algorithm for Coupled Circuit-EM Simulations
Ouyang, G; Jandhyala, V; Champagne, N; Sharpe, R; Fasenfest, B J; Rockway, J D
2004-12-14
An asymptotic waveform evaluation (AWE) technique is implemented in the EIGER computational electromagnetics code. The AWE fast frequency sweep is formed by separating the components of the integral equations by frequency dependence, then using this information to find a rational function approximation of the results. The standard AWE method is generalized to work for several integral equations, including the EFIE for conductors and the PMCHWT for dielectrics. The method is also expanded to work for two types of coupled circuit-EM problems as well as lumped load circuit elements. After a simple bisecting adaptive sweep algorithm is developed, dramatic speed improvements are seen for several example problems.
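The core of a fast frequency sweep is replacing many expensive solves with a cheap rational-function model of the response. The toy below fits a first-order rational function through three frequency samples by linearizing the fit; this is a sketch of the idea only, not the moment-matching machinery of AWE or anything from the EIGER implementation:

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Cramer's rule (adequate at this size)."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    return [det([[b[i] if j == k else A[i][j] for j in range(3)]
                 for i in range(3)]) / d for k in range(3)]

def fit_rational(xs, ys):
    """Fit r(x) = (p0 + p1*x) / (1 + q1*x) through three samples by
    solving the linearized equations p0 + p1*x - y*q1*x = y."""
    A = [[1.0, x, -y * x] for x, y in zip(xs, ys)]
    return solve3(A, ys)

# Samples of the exact response y = (1 + 2x) / (1 + 0.5x) at x = 0, 1, 2.
p0, p1, q1 = fit_rational([0.0, 1.0, 2.0], [1.0, 2.0, 2.5])
r = lambda x: (p0 + p1 * x) / (1.0 + q1 * x)
```

The fitted model can then be evaluated densely across the band, with extra sample points added adaptively (e.g. by bisection) where two local fits disagree.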
Christmas: an event driven by our hormones?
Ludwig, M
2011-12-01
No other event in the Christian calendar has such a deep impact on our behaviour as the annual event called Christmas. Christmas is not just 'Christmas Day'; indeed, it is a long developmental rhythm with a period of almost exactly 365 days. Here, I describe the neuronal and hormonal changes and their effects on our behaviour during the preparation and the execution of the event.
Generalized aerodynamic coefficient table storage, checkout and interpolation for aircraft simulation
NASA Technical Reports Server (NTRS)
Neuman, F.; Warner, N.
1973-01-01
The set of programs described has been used for rapidly introducing, checking out and very efficiently using aerodynamic tables in complex aircraft simulations on the IBM 360. The preprocessor program reads in tables with different names and dimensions and stores them on disc according to the specified dimensions. The tables are read in from IBM cards in a format which is convenient for reducing the data from the original graphs. During table processing, new auxiliary tables are generated which are required for table cataloging and for efficient interpolation. In addition, DIMENSION statements for the tables as well as READ statements are punched so that they may be used in other programs for readout of the data from disc without chance of programming errors. A quick data-checking graphical output for all tables is provided in a separate program.
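The table lookup at the heart of such a system can be illustrated with a bilinear interpolation sketch (a modern Python rendering, obviously not the original IBM 360 code; the breakpoint vectors and row-major table layout are assumptions for the example):

```python
import bisect

def bilinear(xs, ys, table, x, y):
    """Bilinear interpolation in a 2D coefficient table.
    xs, ys are sorted breakpoint lists; table[i][j] holds the coefficient
    at (xs[i], ys[j]). Queries outside the grid clamp to the edge cell."""
    i = min(max(bisect.bisect_right(xs, x) - 1, 0), len(xs) - 2)
    j = min(max(bisect.bisect_right(ys, y) - 1, 0), len(ys) - 2)
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    return ((1 - tx) * (1 - ty) * table[i][j]
            + tx * (1 - ty) * table[i + 1][j]
            + (1 - tx) * ty * table[i][j + 1]
            + tx * ty * table[i + 1][j + 1])
```

The auxiliary tables mentioned in the abstract play the role of the `bisect` search here: precomputing cell indices and spacings keeps the per-query cost to a few multiply-adds.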
CO adsorption over Pd nanoparticles: A general framework for IR simulations on nanoparticles
NASA Astrophysics Data System (ADS)
Zeinalipour-Yazdi, Constantinos D.; Willock, David J.; Thomas, Liam; Wilson, Karen; Lee, Adam F.
2016-04-01
CO vibrational spectra over catalytic nanoparticles under high coverages/pressures are discussed from a DFT perspective. Hybrid B3LYP and PBE DFT calculations of CO chemisorbed over Pd4 and Pd13 nanoclusters, and a 1.1 nm Pd38 nanoparticle, have been performed in order to simulate the corresponding coverage dependent infrared (IR) absorption spectra, and hence provide a quantitative foundation for the interpretation of experimental IR spectra of CO over Pd nanocatalysts. B3LYP simulated IR intensities are used to quantify site occupation numbers through comparison with experimental DRIFTS spectra, allowing an atomistic model of CO surface coverage to be created. DFT adsorption energetics for low CO coverage (θ → 0) suggest the CO binding strength follows the order hollow > bridge > linear, even for dispersion-corrected functionals for sub-nanometre Pd nanoclusters. For a Pd38 nanoparticle, hollow and bridge-bound are energetically similar (hollow ≈ bridge > atop). It is well known that this ordering has not been found at the high coverages used experimentally, wherein atop CO has a much higher population than observed over Pd(111), confirmed by our DRIFTS spectra for Pd nanoparticles supported on a KIT-6 silica, and hence site populations were calculated through a comparison of DFT and spectroscopic data. At high CO coverage (θ = 1), all three adsorbed CO species co-exist on Pd38, and their interdiffusion is thermally feasible at STP. Under such high surface coverages, DFT predicts that bridge-bound CO chains are thermodynamically stable and isoenergetic to an entirely hollow bound Pd/CO system. The Pd38 nanoparticle undergoes a linear (3.5%), isotropic expansion with increasing CO coverage, accompanied by 63 and 30 cm⁻¹ blue-shifts of hollow and linear bound CO respectively.
General Relativistic Hydrodynamic Simulation of Accretion Flow from a Stellar Tidal Disruption
NASA Astrophysics Data System (ADS)
Shiokawa, Hotaka; Krolik, Julian H.; Cheng, Roseanne M.; Piran, Tsvi; Noble, Scott C.
2015-05-01
We study how the matter dispersed when a supermassive black hole tidally disrupts a star joins an accretion flow. Combining a relativistic hydrodynamic simulation of the stellar disruption with a relativistic hydrodynamics simulation of the subsequent debris motion, we track the evolution of such a system until ≃ 80% of the stellar mass bound to the black hole has settled into an accretion flow. Shocks near the stellar pericenter and also near the apocenter of the most tightly bound debris dissipate orbital energy, but only enough to make its characteristic radius comparable to the semimajor axis of the most bound material, not the tidal radius as previously envisioned. The outer shocks are caused by post-Newtonian relativistic effects, both on the stellar orbit during its disruption and on the tidal forces. Accumulation of mass into the accretion flow is both non-monotonic and slow, requiring several to 10 times the orbital period of the most tightly bound tidal streams, while the inflow time for most of the mass may be comparable to or longer than the mass accumulation time. Deflection by shocks does, however, cause some mass to lose both angular momentum and energy, permitting it to move inward even before most of the mass is accumulated into the accretion flow. Although the accretion rate still rises sharply and then decays roughly as a power law, its maximum is ≃ 0.1× the previous expectation, and the timescale of the peak is ≃ 5× longer than previously predicted. The geometric mean of the black hole mass and stellar mass inferred from a measured event timescale is therefore ≃ 0.2× the value given by classical theory.
Modelling uncertainty in incompressible flow simulation using Galerkin based generalized ANOVA
NASA Astrophysics Data System (ADS)
Chakraborty, Souvik; Chowdhury, Rajib
2016-11-01
This paper presents a new algorithm, referred to here as Galerkin based generalized analysis of variance decomposition (GG-ANOVA), for modelling input uncertainties and their propagation in incompressible fluid flow. The proposed approach utilizes ANOVA to represent the unknown stochastic response. Further, the unknown component functions of ANOVA are represented using the generalized polynomial chaos expansion (PCE). The resulting functional form obtained by coupling the ANOVA and PCE is substituted into the stochastic Navier-Stokes equation (NSE) and Galerkin projection is employed to decompose it into a set of coupled deterministic 'Navier-Stokes alike' equations. Temporal discretization of the set of coupled deterministic equations is performed by employing the Adams-Bashforth scheme for the convective term and the Crank-Nicolson scheme for the diffusion term. Spatial discretization is performed by employing a finite difference scheme. Implementation of the proposed approach has been illustrated by two examples. In the first example, a stochastic ordinary differential equation has been considered. This example illustrates the performance of the proposed approach with change in the nature of the random variable. Furthermore, the convergence characteristics of GG-ANOVA have also been demonstrated. The second example investigates flow through a micro channel. Two case studies, namely the stochastic Kelvin-Helmholtz instability and the stochastic vortex dipole, have been investigated. For all the problems, results obtained using GG-ANOVA are in excellent agreement with benchmark solutions.
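As a generic illustration of the polynomial chaos machinery underlying such methods (not the authors' implementation), a one-dimensional Hermite PCE for a Gaussian input can be evaluated and sampled as follows; the coefficient values are made up for the example:

```python
import math, random

def hermite(n, x):
    """Probabilists' Hermite polynomial He_n(x) via the recurrence
    He_{n+1}(x) = x*He_n(x) - n*He_{n-1}(x)."""
    h0, h1 = 1.0, x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, x * h1 - k * h0
    return h1

def pce_eval(coeffs, xi):
    """Evaluate u(xi) = sum_k c_k He_k(xi) for a standard normal input xi.
    The mean is c_0 and the variance is sum_{k>=1} c_k^2 * k!."""
    return sum(c * hermite(k, xi) for k, c in enumerate(coeffs))

# Monte Carlo check of the expansion u = 1 + 2*He_1 + 0.5*He_2
# (exact mean 1, exact variance 2^2*1! + 0.5^2*2! = 4.5).
random.seed(11)
draws = [pce_eval([1.0, 2.0, 0.5], random.gauss(0.0, 1.0)) for _ in range(100000)]
```

In a Galerkin formulation these coefficients become the unknowns of the coupled deterministic equations rather than being prescribed, but the orthogonal-polynomial representation is the same.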
Generalized fictitious methods for fluid-structure interactions: Analysis and simulations
NASA Astrophysics Data System (ADS)
Yu, Yue; Baek, Hyoungsu; Karniadakis, George Em
2013-07-01
We present a new fictitious pressure method for fluid-structure interaction (FSI) problems in incompressible flow by generalizing the fictitious mass and damping methods we published previously in [1]. The fictitious pressure method involves modification of the fluid solver whereas the fictitious mass and damping methods modify the structure solver. We analyze all fictitious methods for simplified problems and obtain explicit expressions for the optimal reduction factor (convergence rate index) at the FSI interface [2]. This analysis also demonstrates an apparent similarity of fictitious methods to the FSI approach based on Robin boundary conditions, which have been found to be very effective in FSI problems. We implement all methods, including the semi-implicit Robin based coupling method, in the context of spectral element discretization, which is more sensitive to temporal instabilities than low-order methods. However, the methods we present here are simple and general, and hence applicable to FSI based on any other spatial discretization. In numerical tests, we verify the selection of optimal values for the fictitious parameters for simplified problems and for vortex-induced vibrations (VIV) even at zero mass ratio ("for-ever-resonance"). We also develop an empirical a posteriori analysis for complex geometries and apply it to 3D patient-specific flexible brain arteries with aneurysms for very large deformations. We demonstrate that the fictitious pressure method enhances stability and convergence, and is comparable or better in most cases to the Robin approach or the other fictitious methods.
NASA Technical Reports Server (NTRS)
Adams, M. L.; Padovan, J.; Fertis, D. G.
1980-01-01
A general purpose squeeze-film damper interactive force element was developed, coded into a software package (module) and debugged. This software package was applied to nonlinear dynamic analyses of some simple rotor systems. Results for pressure distributions show that the long bearing (end sealed) is a stronger bearing than the short bearing, as expected. Results of the nonlinear dynamic analysis, using a four degree of freedom simulation model, showed that the orbit of the rotating shaft increases nonlinearly to fill the bearing clearance as the unbalanced weight increases.
The r-process in black hole-neutron star mergers based on a fully general-relativistic simulation
NASA Astrophysics Data System (ADS)
Nishimura, N.; Wanajo, S.; Sekiguchi, Y.; Kiuchi, K.; Kyutoku, K.; Shibata, M.
2016-01-01
We investigate the black hole-neutron star binary merger in the context of r-process nucleosynthesis. Employing a hydrodynamical model simulated in the framework of full general relativity, we perform nuclear reaction network calculations. Extremely neutron-rich matter with a total mass of 0.01 M⊙ is ejected, in which a strong r-process with fission cycling proceeds due to the high neutron number density. We discuss relevant astrophysical issues such as the origin of r-process elements as well as r-process powered electromagnetic transients.
NASA Astrophysics Data System (ADS)
Haberle, R. M.; Pollack, J. B.; Barnes, J. R.; Zurek, R. W.; Leovy, C. B.; Murphy, J. R.; Lee, H.; Schaeffer, J.
1993-02-01
The characteristics of the zonal-mean circulation and how it responds to seasonal variations and dust loading are described. This circulation is the main momentum-containing component of the general circulation, and it plays a dominant role in the budgets of heat and momentum. It is shown that in many ways the zonal-mean circulation on Mars, at least as simulated by the model, is similar to that on earth, having Hadley and Ferrel cells and high-altitude jet streams. However, the Martian systems tend to be deeper, more intense, and much more variable with season. Furthermore, the radiative effects of suspended dust particles, even in small amounts, have a major influence on the general circulation.
NASA Astrophysics Data System (ADS)
Wiß, Felix; Stacke, Tobias; Hagemann, Stefan
2014-05-01
Soil moisture and its memory can have a strong impact on near surface temperature and precipitation and have the potential to promote severe heat waves, dry spells and floods. To analyze how soil moisture is simulated in recent general circulation models (GCMs), soil moisture data from a 23-model ensemble of Atmospheric Model Intercomparison Project (AMIP) type simulations from the Coupled Model Intercomparison Project Phase 5 (CMIP5) are examined for the period 1979 to 2008 with regard to parameterization and statistical characteristics. With respect to soil moisture processes, the models vary in their maximum soil and root depth, the number of soil layers, the water-holding capacity, and the ability to simulate freezing, which together lead to very different soil moisture characteristics. Differences in the water-holding capacity result in deviations in the global median soil moisture of more than one order of magnitude between the models. In contrast, the variance shows similar absolute values when comparing the models to each other. Thus, the input and output rates by precipitation and evapotranspiration, which are computed by the atmospheric component of the models, have to be in the same range. Most models simulate large variances in the monsoon areas of the tropics and the north-western U.S., intermediate variances in Europe and the eastern U.S., and low variances in the Sahara, continental Asia, and central and western Australia. In general, the variance decreases with latitude over the high northern latitudes. As soil moisture trends in the models were found to be negligible, the soil moisture anomalies were calculated by subtracting the 30-year monthly climatology from the data. The length of the memory is determined from the soil moisture anomalies by calculating the first insignificant autocorrelation for ascending monthly lags (insignificant autocorrelation folding time). The models show a great spread of autocorrelation length, from a few months in
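The memory diagnostic described above (first lag at which the anomaly autocorrelation becomes insignificant) can be sketched directly; the 1.96/sqrt(N) significance level used here is the standard white-noise approximation, an assumption of this sketch rather than a detail taken from the study:

```python
import math, random

def memory_length(series):
    """Smallest lag (in samples) at which the lagged autocorrelation of the
    anomaly series first drops below the ~95% white-noise significance level."""
    n = len(series)
    mean = sum(series) / n
    var = sum((v - mean) ** 2 for v in series)
    threshold = 1.96 / math.sqrt(n)
    for lag in range(1, n):
        r = sum((series[t] - mean) * (series[t + lag] - mean)
                for t in range(n - lag)) / var
        if abs(r) < threshold:
            return lag
        lag += 0  # (lag advances via the loop)
    return n

# A smoothed (persistent) series should show a longer memory than raw noise.
random.seed(7)
white = [random.gauss(0.0, 1.0) for _ in range(1000)]
smooth = [sum(white[i:i + 40]) / 40.0 for i in range(960)]
```

Applied per grid cell to monthly soil moisture anomalies, this yields exactly the kind of memory map the ensemble comparison is based on.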
Theoretical Study and Computer Simulation of Generalized Solid-on-solid Models
NASA Astrophysics Data System (ADS)
Fu, Qi
The subject of this thesis is investigation of the morphology of a crystal surface by means of statistical mechanics and Monte Carlo simulations. We employ solid-on-solid models, modified to include the effects of corner and edge energies of faceted surfaces. We also account for the surface configurational entropy associated with various surface configurations (colonies of facets). This is an extension of the work of Herring, who ignored corner and edge energies and effectively treated periodic hill-and-valley structures, which have no configurational entropy. The excess energies from the corners and edges of a surface also affect the equilibrium shape of very small crystals. These and other related effects are studied on solid-on-solid models for nearest-neighbor forces with central symmetry and additive bond energies. We obtain theoretical formulae for configurational entropy and theoretical distributions of the heights and lengths of facets on one-dimensional crystal surfaces (two-dimensional crystals). These results are tested by comparison with simulation data, and good agreement is found. A modified solid-on-solid model with nearest neighbor energy proportional to the nearest neighbor height difference raised to a power p is used to account for effects of corner and edge energies for two-dimensional surfaces (three-dimensional crystals). On an initially flat (100) surface, a slight change of p-value has a significant effect on surface morphology. Especially for p = 0.9, which corresponds to positive corner energies, a "macroscopic smoothing" transition from a faceted surface at low temperatures to a non-faceted surface at high temperatures is observed. This transition is only evident for surfaces that are initially tilted with respect to a close-packed surface. We also develop a symmetric solid-on-solid model that preserves crystal symmetry. For this symmetric model, the "macroscopic smoothing" transition for p = 0.9 is still observed on (111) and (112) surfaces.
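A minimal one-dimensional version of such a model (nearest-neighbour energy J·|Δh|^p with Metropolis dynamics) can be sketched as follows; the unit-column-move proposal, periodic boundary, and parameter values are illustrative assumptions, not the thesis' actual simulation setup:

```python
import math, random

def sos_energy(h, J=1.0, p=1.0):
    """Surface energy of a 1D solid-on-solid height profile with
    nearest-neighbour cost J*|h_i - h_{i+1}|**p (periodic boundary)."""
    n = len(h)
    return J * sum(abs(h[i] - h[(i + 1) % n]) ** p for i in range(n))

def metropolis_sweep(h, beta, J=1.0, p=1.0):
    """One Metropolis sweep: propose unit column moves and accept with
    probability min(1, exp(-beta*dE)) using only the local energy change."""
    n = len(h)
    for _ in range(n):
        i = random.randrange(n)
        dh = random.choice((-1, 1))
        left, right = h[(i - 1) % n], h[(i + 1) % n]
        dE = J * (abs(h[i] + dh - left) ** p + abs(h[i] + dh - right) ** p
                  - abs(h[i] - left) ** p - abs(h[i] - right) ** p)
        if dE <= 0 or random.random() < math.exp(-beta * dE):
            h[i] += dh

# At low temperature an initially flat surface stays smooth.
random.seed(1)
h = [0] * 32
for _ in range(200):
    metropolis_sweep(h, beta=5.0)
```

Varying the exponent p changes the relative cost of tall steps versus many unit steps, which is the knob behind the faceting behaviour discussed above.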
3D Simulations of the Early Mars Climate with a General Circulation Model
NASA Technical Reports Server (NTRS)
Forget, F.; Haberle, R. M.; Montmessin, F.; Cha, S.; Marcq, E.; Schaeffer, J.; Wanherdrick, Y.
2003-01-01
The environmental conditions that existed on Mars during the Noachian period are subject to debate in the community. In any case, there is compelling evidence that these conditions were different from what they became later in the Amazonian and possibly the Hesperian periods. Indeed, most of the old cratered terrains are dissected by valley networks (thought to have been carved by flowing liquid water), whereas younger surfaces are almost devoid of such valleys. In addition, there is evidence that the erosion rate was much higher during the early Noachian than later. Flowing water is surprising on early Mars because the solar luminosity was significantly lower than today, even with a thick atmosphere (up to several bars). To improve our understanding of the early Mars climate, we have developed a 3D general circulation model similar to the ones used on current Earth or Mars to study the details of the climate today. Our first objective is to answer the following questions: how is the Martian climate modified if 1) the surface pressure is increased up to several bars (our baseline: 2 bars) and 2) the solar luminosity is decreased by 25%? We do not take into account the heat possibly released by impacts during short periods, although it may have played a role. For this purpose, we have coupled the Martian general circulation model developed at LMD with a sophisticated correlated-k distribution model developed at NASA Ames Research Center. It is a narrow-band model which computes the radiative transfer at both solar and thermal wavelengths (from 0.3 to 250 microns).
Chen, Yunjie; Roux, Benoît
2015-01-14
A family of hybrid simulation methods that combines the advantages of Monte Carlo (MC) with the strengths of classical molecular dynamics (MD) consists in carrying out short non-equilibrium MD (neMD) trajectories to generate new configurations that are subsequently accepted or rejected via an MC process. In the simplest case where a deterministic dynamic propagator is used to generate the neMD trajectories, the familiar Metropolis acceptance criterion based on the change in the total energy ΔE, min[1, exp( − βΔE)], guarantees that the hybrid algorithm will yield the equilibrium Boltzmann distribution. However, the functional form of the acceptance probability is more complex when the non-equilibrium switching process is generated via a non-deterministic stochastic dissipative propagator coupled to a heat bath. Here, we clarify the conditions under which the Metropolis criterion remains valid to rigorously yield a proper equilibrium Boltzmann distribution within hybrid neMD-MC algorithm.
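The Metropolis criterion min[1, exp(−βΔE)] discussed above can be illustrated with a toy hybrid-style sampler; here a plain random displacement stands in for the short neMD trajectory, a deliberate simplification of the scheme described in the abstract:

```python
import math, random

def metropolis_accept(dE, beta):
    """Accept a proposed move with probability min[1, exp(-beta*dE)]."""
    return dE <= 0 or random.random() < math.exp(-beta * dE)

def sample_harmonic(beta, n_steps, step=1.0):
    """Toy sampler for E(x) = x^2/2: propose a displacement (the stand-in
    for a deterministic neMD segment), then accept/reject via Metropolis.
    The resulting chain samples the Boltzmann distribution exp(-beta*E)."""
    energy = lambda x: 0.5 * x * x
    x, samples = 0.0, []
    for _ in range(n_steps):
        cand = x + random.uniform(-step, step)
        if metropolis_accept(energy(cand) - energy(x), beta):
            x = cand
        samples.append(x)
    return samples

random.seed(3)
samples = sample_harmonic(beta=1.0, n_steps=20000)
```

For β = 1 the equilibrium distribution is a unit Gaussian, so the sampled variance should approach 1; the paper's point is precisely which correction terms keep this guarantee when the proposal is a stochastic, dissipative neMD trajectory rather than a symmetric displacement.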
A Second Law Based Unstructured Finite Volume Procedure for Generalized Flow Simulation
NASA Technical Reports Server (NTRS)
Majumdar, Alok
1998-01-01
An unstructured finite volume procedure has been developed for steady and transient thermo-fluid dynamic analysis of fluid systems and components. The procedure is applicable for a flow network consisting of pipes and various fittings where flow is assumed to be one dimensional. It can also be used to simulate flow in a component by modeling a multi-dimensional flow using the same numerical scheme. The flow domain is discretized into a number of interconnected control volumes located arbitrarily in space. The conservation equations for each control volume account for the transport of mass, momentum and entropy from the neighboring control volumes. In addition, they also include the sources of each conserved variable and time dependent terms. The source term of entropy equation contains entropy generation due to heat transfer and fluid friction. Thermodynamic properties are computed from the equation of state of a real fluid. The system of equations is solved by a hybrid numerical method which is a combination of simultaneous Newton-Raphson and successive substitution schemes. The paper also describes the application and verification of the procedure by comparing its predictions with the analytical and numerical solution of several benchmark problems.
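The Newton-Raphson part of the hybrid solution scheme can be illustrated on a toy one-node flow network; the square-root conductance law and the coefficients below are assumptions for the example, not the paper's formulation:

```python
import math

def newton(F, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration with a finite-difference derivative."""
    x = x0
    for _ in range(max_iter):
        f = F(x)
        if abs(f) < tol:
            return x
        h = 1e-6 * max(abs(x), 1.0)
        dfdx = (F(x + h) - f) / h
        x -= f / dfdx
    return x

# Mass balance at the internal node of two pipes in series: the inflow
# C1*sqrt(p_in - p) must equal the outflow C2*sqrt(p - p_out).
p_in, p_out, C1, C2 = 100.0, 0.0, 1.0, 2.0
residual = lambda p: C1 * math.sqrt(p_in - p) - C2 * math.sqrt(p - p_out)
p = newton(residual, 50.0)
```

For these numbers the balance C1²(p_in − p) = C2²(p − p_out) gives p = 20 exactly. A real network solver assembles one such residual per control volume (mass, momentum, entropy) and solves the coupled system with a full Jacobian, falling back on successive substitution where Newton struggles.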
Simulating water with rigid non-polarizable models: a general perspective.
Vega, Carlos; Abascal, Jose L F
2011-11-28
Over the last forty years many computer simulations of water have been performed using rigid non-polarizable models. Since these models describe water interactions in an approximate way it is evident that they cannot reproduce all of the properties of water. By now many properties for these kinds of models have been determined and it seems useful to compile some of these results and provide a critical view of the successes and failures. In this paper a test is proposed in which 17 properties of water, from the vapour and liquid to the solid phases, are taken into account to evaluate the performance of a water model. A certain number of points between zero (bad agreement) and ten (good agreement) are given for the predictions of each model and property. We applied the test to five rigid non-polarizable models, TIP3P, TIP5P, TIP4P, SPC/E and TIP4P/2005, obtaining an average score of 2.7, 3.7, 4.7, 5.1, and 7.2 respectively. Thus although no model reproduces all properties, some models perform better than others. It is clear that there are limitations for rigid non-polarizable models. Neglecting polarizability prevents an accurate description of virial coefficients, vapour pressures, critical pressure and dielectric constant. Neglecting nuclear quantum effects prevents an accurate description of the structure, the properties of water below 120 K and the heat capacity. It is likely that for rigid non-polarizable models it may not be possible to increase the score in the test proposed here beyond 7.6. To get closer to experiment, incorporating polarization and nuclear quantum effects is absolutely required even though a substantial increase in computer time should be expected. The test proposed here, being quantitative and selecting properties from all phases of water can be useful in the future to identify progress in the modelling of water.
Multiyear Simulations of the Martian Water Cycle with the Ames General Circulation Model
NASA Technical Reports Server (NTRS)
Haberle, R. M.; Schaeffer, J. R.; Nelli, S. M.; Murphy, J. R.
2003-01-01
Mars' atmosphere is carbon dioxide dominated, with non-negligible amounts of water vapor and suspended dust particles. The atmospheric dust plays an important role in the heating and cooling of the planet through absorption and emission of radiation. Small dust particles can potentially be carried to great altitudes and affect the temperatures there. Water vapor condensing onto the dust grains can affect the radiative properties of both, as well as their vertical extent. The condensation of water onto a dust grain changes the grain's fall speed and diminishes the possibility of dust reaching high altitudes. In this capacity, water becomes a controlling agent with regard to the vertical distribution of dust. Similarly, the atmosphere's water vapor holding capacity is affected by the amount of dust in the atmosphere. Dust is an excellent greenhouse catalyst; it raises the temperature of the atmosphere and, thus, its water vapor holding capacity. There is, therefore, a potentially significant interplay between the Martian dust and water cycles. Previous research using global, 3-D computer modeling to better understand the Martian atmosphere treated the dust and water cycles as two separate and independent processes. The existing Ames numerical model will be employed to simulate the relationship between the Martian dust and water cycles by actually coupling the two cycles. Water will condense onto the dust, allowing the particles' radiative characteristics, fall speeds, and, as a result, their vertical distribution to change. Data obtained from the Viking, Mars Pathfinder, and especially the Mars Global Surveyor missions will be used to determine the accuracy of the model results.
A NURBS-based generalized finite element scheme for 3D simulation of heterogeneous materials
NASA Astrophysics Data System (ADS)
Safdari, Masoud; Najafi, Ahmad R.; Sottos, Nancy R.; Geubelle, Philippe H.
2016-08-01
A 3D NURBS-based interface-enriched generalized finite element method (NIGFEM) is introduced to solve problems with complex discontinuous gradient fields observed in the analysis of heterogeneous materials. The method utilizes simple structured meshes of hexahedral elements that do not necessarily conform to the material interfaces in heterogeneous materials. By avoiding the creation of conforming meshes used in conventional FEM, the NIGFEM leads to significant simplification of the mesh generation process. To achieve an accurate solution in elements that are crossed by material interfaces, the NIGFEM utilizes Non-Uniform Rational B-Splines (NURBS) to enrich the solution field locally. The accuracy and convergence of the NIGFEM are tested by solving a benchmark problem. We observe that the NIGFEM preserves an optimal rate of convergence, and provides additional advantages including the accurate capture of the solution fields in the vicinity of material interfaces and the built-in capability for hierarchical mesh refinement. Finally, the use of the NIGFEM in the computational analysis of heterogeneous materials is discussed.
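Convergence tests of the kind mentioned above are typically done by computing the observed order of accuracy from errors on two mesh sizes; a small sketch with illustrative error values, not the paper's benchmark data:

```python
import math

# Sketch of a standard mesh-convergence check: the observed order of
# accuracy p from errors e1, e2 at mesh sizes h1, h2.

def observed_order(h1, e1, h2, e2):
    return math.log(e1 / e2) / math.log(h1 / h2)

# Halving the mesh reduces the error fourfold -> second-order convergence:
p = observed_order(0.1, 4.0e-3, 0.05, 1.0e-3)
print(p)
```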
Aref's chaotic orbits tracked by a general ellipsoid using 3D numerical simulations
NASA Astrophysics Data System (ADS)
Shui, Pei; Popinet, Stéphane; Govindarajan, Rama; Valluri, Prashant
2015-11-01
The motion of an ellipsoidal solid in an ideal fluid has been shown to be chaotic (Aref, 1993) in the limit of non-integrability of Kirchhoff's equations (Kozlov & Oniscenko, 1982). On the other hand, the particle can stop moving when the damping viscous force is strong enough. We present numerical evidence, using our in-house immersed solid solver, for 3D chaotic motion of a general ellipsoidal solid and suggest criteria for triggering such motion. Our immersed solid solver functions under the framework of the Gerris flow package of Popinet et al. (2003). This solver, the Gerris Immersed Solid Solver (GISS), resolves six-degree-of-freedom motion of immersed solids of arbitrary geometry and number. We validate our results against the solution of Kirchhoff's equations. The study also shows that the translational/rotational energy ratio plays the key role in the motion pattern, while the particle geometry and the density ratio between the solid and fluid also have some influence on the chaotic behaviour. Along with several other benchmark cases for viscous flows, we propose prediction of chaotic Aref's orbits as a key benchmark test case for immersed boundary/solid solvers.
Simulating the universe(s) II: phenomenology of cosmic bubble collisions in full general relativity
Wainwright, Carroll L.; Aguirre, Anthony; Johnson, Matthew C.; Peiris, Hiranya V.
2014-10-01
Observing the relics of collisions between bubble universes would provide direct evidence for the existence of an eternally inflating Multiverse; the non-observation of such events can also provide important constraints on inflationary physics. Realizing these prospects requires quantitative predictions for observables from the properties of the possible scalar field Lagrangians underlying eternal inflation. Building on previous work, we establish this connection in detail. We perform a fully relativistic numerical study of the phenomenology of bubble collisions in models with a single scalar field, computing the comoving curvature perturbation produced in a wide variety of models. We also construct a set of analytic predictions, allowing us to identify the phenomenologically relevant properties of the scalar field Lagrangian. The agreement between the analytic predictions and numerics in the relevant regions is excellent, and allows us to generalize our results beyond the models we adopt for the numerical studies. Specifically, the signature is completely determined by the spatial profile of the colliding bubble just before the collision, and the de Sitter invariant distance between the bubble centers. The analytic and numerical results support a power-law fit with an index 1 ≲ κ ≲ 2. For collisions between identical bubbles, we establish a lower bound on the observed amplitude of collisions that is set by the present energy density in curvature.
Simulating extreme-mass-ratio systems in full general relativity: tidal disruption events
NASA Astrophysics Data System (ADS)
East, William; Pretorius, Frans
2014-03-01
Sparked by recent and anticipated observations, there is considerable interest in understanding events where a star is tidally disrupted by a massive black hole. Motivated by this and other applications, we introduce a new method for numerically evolving the full Einstein field equations in situations where the spacetime is dominated by a known background solution. The technique leverages the knowledge of the background solution to subtract off its contribution to the truncation error, thereby more efficiently achieving a desired level of accuracy. We demonstrate how the method can be applied to systems consisting of a solar-type star and a supermassive black hole with mass ratios ≥ 10^6. The self-gravity of the star is thus consistently modelled within the context of general relativity, and the star's interaction with the black hole computed with moderate computational cost, despite the over five orders of magnitude difference in gravitational potential (as defined by the ratio of mass to radius). We study the tidal deformation of the star during infall, as well as the gravitational wave emission, and discuss ongoing work to understand the importance of strong-field gravity effects on tidal disruption events.
El Nino-southern oscillation simulated in an MRI atmosphere-ocean coupled general circulation model
Nagai, T.; Tokioka, T.; Endoh, M.; Kitamura, Y.
1992-11-01
A coupled atmosphere-ocean general circulation model (GCM) was time integrated for 30 years to study interannual variability in the tropics. The atmospheric component is a global GCM with 5 levels in the vertical and 4° latitude × 5° longitude grids in the horizontal, including standard physical processes (e.g., interactive clouds). The oceanic component is a GCM for the Pacific with 19 levels in the vertical and 1° × 2.5° grids in the horizontal, forced by seasonally varying solar radiation. The model succeeded in reproducing interannual variations that resemble the El Nino-Southern Oscillation (ENSO), with realistic seasonal variations in the atmospheric and oceanic fields. The model ENSO cycle has a time scale of approximately 5 years, and the model El Nino (warm) events are locked roughly in phase to the seasonal cycle. The cold events, however, are less evident in comparison with the El Nino events. The time scale of the model ENSO cycle is determined by the propagation time of signals from the central-eastern Pacific to the western Pacific and back to the eastern Pacific. Seasonal timing is also important in the ENSO time scale: wind anomalies in the central-eastern Pacific occur in summer, and the atmosphere-ocean coupling in the western Pacific operates efficiently in the first half of the year.
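The wave-propagation timing that sets such a model's ENSO period is often idealized with the delayed-oscillator conceptual model of Suarez and Schopf (1988); the sketch below is that textbook toy model stepped with forward Euler, with illustrative parameters, not the MRI coupled GCM:

```python
# Delayed-oscillator toy model of the ENSO timing mechanism:
#   dT/dt = T - T**3 - alpha * T(t - delay)
# The delayed term stands for the oceanic wave signal returning from the
# western Pacific. Parameters are illustrative.

def delayed_oscillator(alpha=0.75, delay=3.0, dt=0.01, steps=6000):
    """Return the T(t) series for the delayed-oscillator equation."""
    lag = int(delay / dt)
    T = [0.1] * (lag + 1)          # small initial anomaly held over the lag
    for _ in range(steps):
        Tn = T[-1]
        T.append(Tn + dt * (Tn - Tn**3 - alpha * T[-1 - lag]))
    return T

series = delayed_oscillator()
```

For a sufficiently long delay the delayed negative feedback prevents the anomaly from settling into a steady state, so the solution oscillates instead.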
NASA Astrophysics Data System (ADS)
Roberts, Luke F.; Ott, Christian D.; Haas, Roland; O’Connor, Evan P.; Diener, Peter; Schnetter, Erik
2016-11-01
We report on a set of long-term general-relativistic three-dimensional (3D) multi-group (energy-dependent) neutrino radiation-hydrodynamics simulations of core-collapse supernovae. We employ a full 3D two-moment scheme with the local M1 closure, three neutrino species, and 12 energy groups per species. With this, we follow the post-core-bounce evolution of the core of a nonrotating 27 M⊙ progenitor in full unconstrained 3D and in octant symmetry for ≳380 ms. We find the development of an asymmetric runaway explosion in our unconstrained simulation. We test the resolution dependence of our results and, in agreement with previous work, find that low resolution artificially aids explosion and leads to an earlier runaway expansion of the shock. At low resolution, the octant and full 3D dynamics are qualitatively very similar, but at high resolution, only the full 3D simulation exhibits the onset of explosion.
NASA Astrophysics Data System (ADS)
Bonan, Gordon B.
1995-02-01
CO2 uptake during plant photosynthesis and CO2 loss during plant and microbial respiration were added to a land surface process model to simulate the diurnal and annual cycles of biosphere-atmosphere CO2 exchange. The model was coupled to a modified version of the National Center for Atmospheric Research (NCAR) Community Climate Model version 2 (CCM2), and the coupled model was run for 5 years. The geographic patterns of annual net primary production are qualitatively similar to other models. When compared by vegetation type, annual production and annual microbial respiration are consistent with other models, except for needleleaf evergreen tree vegetation, where production is too high, and semidesert vegetation, where production and microbial respiration are too low. The seasonality of the net CO2 flux agrees with other models in the southern hemisphere and the tropics. The diurnal range is largest for photosynthesis and smaller for plant and microbial respiration, which agrees with qualitative expectations. The simulation of the central United States is poor due to temperature and precipitation biases in the coupled model. Despite these deficiencies, the current approach is a promising means to include terrestrial CO2 fluxes in a climate system model that simulates atmospheric CO2 concentrations, because it alleviates important parameterization discrepancies between standard biogeochemical models and the land surface models typically used in general circulation models, and because the model resolves the diurnal range of CO2 exchange, which can be large (15-45 μmol CO2 m⁻² s⁻¹).
Wang, Hainan; Thiele, Alexander; Pilon, Laurent
2013-11-15
This paper presents a generalized modified Poisson–Nernst–Planck (MPNP) model derived from first principles based on excess chemical potential and Langmuir activity coefficient to simulate electric double-layer dynamics in asymmetric electrolytes. The model accounts simultaneously for (1) asymmetric electrolytes with (2) multiple ion species, (3) finite ion sizes, and (4) Stern and diffuse layers along with Ohmic potential drop in the electrode. It was used to simulate cyclic voltammetry (CV) measurements for binary asymmetric electrolytes. The results demonstrated that the current density increased significantly with decreasing ion diameter and/or increasing valency |z_{i}| of either ion species. By contrast, the ion diffusion coefficients affected the CV curves and capacitance only at large scan rates. Dimensional analysis was also performed, and 11 dimensionless numbers were identified to govern the CV measurements of the electric double layer in binary asymmetric electrolytes between two identical planar electrodes of finite thickness. A self-similar behavior was identified for the electric double-layer integral capacitance estimated from CV measurement simulations. Two regimes were identified by comparing the half cycle period τ_{CV} and the “RC time scale” τ_{RC} corresponding to the characteristic time of ions’ electrodiffusion. For τ_{RC} ≪ τ_{CV}, quasi-equilibrium conditions prevailed and the capacitance was diffusion-independent, while for τ_{RC} ≫ τ_{CV}, the capacitance was diffusion-limited. The effect of the electrode was captured by the dimensionless electrode electrical conductivity representing the ratio of characteristic times associated with charge transport in the electrolyte and that in the electrode. The model developed here will be useful for simulating and designing various practical electrochemical, colloidal, and biological systems for a wide range of applications.
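The two regimes follow from comparing the half-cycle period τ_CV with the RC time τ_RC; a minimal classifier sketch, assuming the common estimate τ_RC = λ_D L / D and an illustrative factor-of-ten separation between regimes (neither is the paper's exact criterion):

```python
# Regime classifier sketch for the time-scale comparison described above.
# tau_RC = lambda_D * L / D and the factor-of-ten thresholds are
# illustrative assumptions.

def cv_regime(potential_window, scan_rate, debye_length, gap, diffusivity):
    tau_cv = potential_window / scan_rate       # s, half-cycle period
    tau_rc = debye_length * gap / diffusivity   # s, ion electrodiffusion time
    if tau_rc < 0.1 * tau_cv:
        return "quasi-equilibrium (diffusion-independent capacitance)"
    if tau_rc > 10.0 * tau_cv:
        return "diffusion-limited capacitance"
    return "transition regime"

# Slow sweep: 0.5 V window at 1 mV/s, lambda_D = 1 nm, L = 1 um, D = 1e-9 m^2/s
print(cv_regime(0.5, 1e-3, 1e-9, 1e-6, 1e-9))
```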
NASA Astrophysics Data System (ADS)
Widyaningsih, Yekti; Saefuddin, Asep; Notodiputro, Khairil A.; Wigena, Aji H.
2012-05-01
The objective of this research is to build a nested generalized linear mixed model with an ordinal response variable and several covariates. The paper addresses three main tasks: the parameter estimation procedure, a simulation study, and implementation of the model for real data. For the parameter estimation procedure, the concepts of threshold, nested random effects, and the computational algorithm are described. The simulation data are built for three conditions to assess the effect of different parameter values of the random effect distributions. The last task is the implementation of the model for data on poverty in 9 districts of Java Island. The districts are Kuningan, Karawang, and Majalengka, chosen randomly in West Java; Temanggung, Boyolali, and Cilacap from Central Java; and Blitar, Ngawi, and Jember from East Java. The covariates in this model are province, number of bad nutrition cases, number of farmer families, and number of health personnel. In this modeling, all covariates are grouped on an ordinal scale. The unit of observation in this research is the sub-district (kecamatan), nested in district, and districts (kabupaten) are nested in province. The simulation results are assessed with ARB (absolute relative bias) and RRMSE (relative root mean square error). They show that the province parameters have the highest bias but the most stable RRMSE in all conditions. The simulation design needs to be improved by adding other conditions, such as higher correlation between covariates. Furthermore, in the implementation of the model for the data, only the number of farmer families and the number of health personnel contribute significantly to the level of poverty in the Central Java and East Java provinces, and only district 2 (Karawang) of province 1 (West Java) has a random effect different from the others. The source of the data is PODES (Potensi Desa) 2008 from BPS (Badan Pusat Statistik).
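The two evaluation metrics named above can be sketched under their common definitions (assumed here, since the abstract does not spell them out): the absolute relative bias of the mean estimate, and the root mean square error relative to the true parameter value.

```python
import math

# ARB: absolute relative bias of the mean of replicate estimates.
# RRMSE: root mean square error relative to the true value.
# Definitions and data are illustrative assumptions.

def arb(estimates, true_value):
    mean_est = sum(estimates) / len(estimates)
    return abs((mean_est - true_value) / true_value)

def rrmse(estimates, true_value):
    mse = sum((e - true_value) ** 2 for e in estimates) / len(estimates)
    return math.sqrt(mse) / abs(true_value)

replicates = [0.9, 1.1, 1.05, 0.95]   # illustrative estimates of a true value 1.0
print(arb(replicates, 1.0), rrmse(replicates, 1.0))
```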
NASA Technical Reports Server (NTRS)
Clancey, William J.; Linde, Charlotte; Seah, Chin; Shafto, Michael
2013-01-01
The transition from the current air traffic system to the next-generation air traffic system will require the introduction of new automated systems, including transferring some functions from air traffic controllers to on-board automation. This report describes a new design verification and validation (V&V) methodology for assessing aviation safety. The approach involves a detailed computer simulation of work practices that includes people interacting with flight-critical systems. The research is part of an effort to develop new modeling and verification methodologies that can assess the safety of flight-critical systems, system configurations, and operational concepts. The 2002 Ueberlingen mid-air collision was chosen for analysis and modeling because one of the main causes of the accident was one crew's response to a conflict between the instructions of the air traffic controller and the instructions of TCAS, the on-board Traffic Alert and Collision Avoidance System. It thus furnishes an example of the problem of authority versus autonomy, and provides a starting point for exploring authority/autonomy conflict in the larger system of organization, tools, and practices in which the participants' moment-by-moment actions take place. We have developed a general air traffic system model (not a specific simulation of the Ueberlingen events), called the Brahms Generalized Ueberlingen Model (Brahms-GUeM). Brahms is a multi-agent simulation system that models people, tools, facilities/vehicles, and geography to simulate the current air transportation system as a collection of distributed, interactive subsystems (e.g., airports, air-traffic control towers and personnel, aircraft, automated flight systems and air-traffic tools, instruments, crew). Brahms-GUeM can be configured in different ways, called scenarios, such that anomalous events that contributed to the Ueberlingen accident can be modeled as functioning according to requirements or in an
ERIC Educational Resources Information Center
Jackson, Edwin L.
The student's kit and teacher's manual provide a framework for secondary students to simulate the functioning of Georgia's General Assembly. Objectives of the simulation are to help students: (1) experience the forces and conflicts involved in lawmaking, (2) learn about the role of legislators, (3) understand and discuss issues facing citizens,…
MoSeS: Modelling and Simulation for e-Social Science.
Townend, Paul; Xu, Jie; Birkin, Mark; Turner, Andy; Wu, Belinda
2009-07-13
MoSeS (Modelling and Simulation for e-Social Science) is a research node of the National Centre for e-Social Science. MoSeS uses e-Science techniques to execute an events-driven model that simulates discrete demographic processes; this allows us to project the UK population 25 years into the future. This paper describes the architecture, simulation methodology and latest results obtained by MoSeS.
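An events-driven demographic projection of the kind described can be sketched as a discrete-event loop over exponentially distributed birth and death events; the rates and 25-year horizon below are illustrative stand-ins, not MoSeS parameters:

```python
import heapq
import random

# Discrete-event sketch of an events-driven population projection: birth and
# death events with exponential waiting times at aggregate per-population
# rates, advanced event by event to a fixed horizon. Rates are illustrative.

def project_population(n0=1000, birth_rate=0.012, death_rate=0.010,
                       horizon=25.0, seed=1):
    rng = random.Random(seed)
    pop = n0
    events = []                                   # heap of (time, +1/-1)
    def schedule(now, rate, delta):
        wait = rng.expovariate(rate * max(pop, 1))
        heapq.heappush(events, (now + wait, delta))
    schedule(0.0, birth_rate, +1)
    schedule(0.0, death_rate, -1)
    while events:
        now, delta = heapq.heappop(events)
        if now > horizon:
            break
        pop += delta                              # apply the birth or death
        schedule(now, birth_rate if delta > 0 else death_rate, delta)
    return pop

print(project_population())
```

With a birth rate slightly above the death rate, the projected count drifts upward from the initial population over the horizon, with event-level stochastic noise.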
NASA Technical Reports Server (NTRS)
Majumdar, A. K.; Hedayat, A.
2015-01-01
This paper describes the authors' experience in using the Generalized Fluid System Simulation Program (GFSSP) to teach the Design of Thermal Systems class at the University of Alabama in Huntsville. GFSSP is a finite-volume-based thermo-fluid system network analysis code, developed at NASA Marshall Space Flight Center, and is extensively used in NASA, the Department of Defense, and the aerospace industry for propulsion system design, analysis, and performance evaluation. The educational version of GFSSP is freely available to all US higher-education institutions. The main purpose of the paper is to illustrate the utilization of this user-friendly code for thermal systems design and fluid engineering courses and to encourage instructors to utilize the code for class assignments as well as senior design projects.
NASA Technical Reports Server (NTRS)
Bandyopadhyay, Alak; Majumdar, Alok
2007-01-01
The present paper describes the verification and validation of a quasi one-dimensional pressure based finite volume algorithm, implemented in the Generalized Fluid System Simulation Program (GFSSP), for predicting compressible flow with friction, heat transfer, and area change. The numerical predictions were compared with two classical solutions of compressible flow, i.e., Fanno and Rayleigh flow. Fanno flow provides an analytical solution of compressible flow in a long slender pipe where the incoming subsonic flow can be choked due to friction. On the other hand, Rayleigh flow provides an analytical solution of frictionless compressible flow with heat transfer, where the incoming subsonic flow can be choked at the outlet boundary with heat addition to the control volume. Nonuniform grid distribution improves the accuracy of the numerical prediction. A benchmark numerical solution of compressible flow in a converging-diverging nozzle with friction and heat transfer has been developed to verify GFSSP's numerical predictions. The numerical predictions compare favorably in all cases.
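The Fanno-flow benchmark referenced above rests on the classical relation for the friction parameter 4fL*/D that chokes an adiabatic pipe flow entering at Mach number M; the block below is a direct transcription of that textbook formula (γ = 1.4 for air):

```python
import math

# Classical Fanno-flow relation: the friction parameter 4 f L*/D needed to
# choke an adiabatic constant-area pipe flow entering at Mach number M.

def fanno_4fLmax_over_D(M, gamma=1.4):
    a = (1.0 - M * M) / (gamma * M * M)
    b = (gamma + 1.0) / (2.0 * gamma) * math.log(
        (gamma + 1.0) * M * M / (2.0 + (gamma - 1.0) * M * M))
    return a + b

print(fanno_4fLmax_over_D(1.0))            # 0.0: the flow is already choked
print(round(fanno_4fLmax_over_D(0.5), 4))  # 1.0691, the tabulated value
```

Such closed-form values are exactly the kind of analytical reference against which a finite volume prediction of choking length can be verified.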
Xu, Guang-Kui; Hu, Jinglei; Lipowsky, Reinhard; Weikl, Thomas R
2015-12-28
Adhesion processes of biological membranes that enclose cells and cellular organelles are essential for immune responses, tissue formation, and signaling. These processes depend sensitively on the binding constant K2D of the membrane-anchored receptor and ligand proteins that mediate adhesion, which is difficult to measure in the "two-dimensional" (2D) membrane environment of the proteins. An important problem therefore is to relate K2D to the binding constant K3D of soluble variants of the receptors and ligands that lack the membrane anchors and are free to diffuse in three dimensions (3D). In this article, we present a general theory for the binding constants K2D and K3D of rather stiff proteins whose main degrees of freedom are translation and rotation, along membranes and around anchor points "in 2D," or unconstrained "in 3D." The theory generalizes previous results by describing how K2D depends both on the average separation and thermal nanoscale roughness of the apposing membranes, and on the length and anchoring flexibility of the receptors and ligands. Our theoretical results for the ratio K2D/K3D of the binding constants agree with detailed results from Monte Carlo simulations without any data fitting, which indicates that the theory captures the essential features of the "dimensionality reduction" due to membrane anchoring. In our Monte Carlo simulations, we consider a novel coarse-grained model of biomembrane adhesion in which the membranes are represented as discretized elastic surfaces, and the receptors and ligands as anchored molecules that diffuse continuously along the membranes and rotate at their anchor points.
NASA Astrophysics Data System (ADS)
Xie, Lianghai; Li, Lei; Zhang, Yiteng; Feng, Yongyong; Wang, Xinyue; Zhang, Aibing; Kong, Linggao
2015-08-01
A lunar minimagnetosphere, formed by the interaction between the solar wind and a local crustal field, often has a scale size comparable to the ion inertia length, for which the Hall effect is very important. In this paper, the general characteristics of a lunar minimagnetosphere are investigated with three-dimensional Hall MHD simulations. It is found that the solar wind ions can penetrate across the magnetopause, reducing the density depletion and causing the merging of the shock and magnetopause, but the electrons are still blocked at the boundary. Besides, asymmetric convection occurs, with the magnetic field piling up on one side while the plasma gathers on the other side. The size of the minimagnetosphere is determined by both the solar zenith angle and the magnetosonic Mach number, while the Hall effect is determined by the ratio of the pressure-balance distance to the ion inertia length. When this ratio gets small, the shock may disappear. Finally, we present a global Hall MHD simulation for comparison with the observation from the Chang'E-2 satellite on 11 October 2010 and confirm that Chang'E-2 flew across the compression regions of two separate minimagnetospheres.
General circulation and thermal structure simulated by a Venus AGCM with a two-stream radiative code
NASA Astrophysics Data System (ADS)
Yamamoto, Masaru; Ikeda, Kohei; Takahashi, Masaaki
2016-10-01
An atmospheric general circulation model (AGCM) is expected to be a powerful tool for understanding the Venus climate and atmospheric dynamics. At the present stage, however, the full-physics model is under development. Ikeda (2011) developed a two-stream radiative transfer code, which covers the solar to infrared radiative processes due to gases and aerosol particles. The radiative code was applied to the Venus AGCM (T21L52) at the Atmosphere and Ocean Research Institute, University of Tokyo. We analyzed the results of a simulation spanning a few Venus days, restarted after nudging the zonal wind to a super-rotating state until equilibrium. The simulated thermal structure has a low-stability layer around 10^5 Pa at low latitudes, and the neutral stability extends from ˜10^5 Pa to the lower atmosphere at high latitudes. At the equatorial cloud top, the temperature is lower in the region between noon and the evening terminator. For the zonal and meridional winds, differences appear between the zonal and day-side means. As indicated in previous works, the day-side mean meridional wind speed mostly corresponds to the poleward component of the thermal tide and is much higher than the zonal mean. Toward understanding the dynamical roles of waves in UV cloud tracking and brightness, we calculated the eddy heat and momentum fluxes averaged over the day-side hemisphere. The eddy heat and momentum fluxes are poleward in the poleward flank of the jet. In contrast, the fluxes are relatively weak and equatorward at low latitudes. The eddy momentum flux becomes equatorward in the dynamical situation where the simulated equatorial wind is weaker than the midlatitude jet. The sensitivity to the zonal flow used for the nudging will also be discussed in the model validation.
NASA Astrophysics Data System (ADS)
Wang, Huiqun; Toigo, Anthony D.
2016-06-01
Investigations of the variability, structure and energetics of the m = 1-3 traveling waves in the northern hemisphere of Mars are conducted with the MarsWRF general circulation model. Using a simple, annually repeatable dust scenario, the model reproduces many general characteristics of the observed traveling waves. The simulated m = 1 and m = 3 traveling waves show large differences in terms of their structures and energetics. For each representative wave mode, the geopotential signature maximizes at a higher altitude than the temperature signature, and the wave energetics suggests a mixed baroclinic-barotropic nature. There is a large contrast in wave energetics between the near-surface and higher altitudes, as well as between the lower latitudes and higher latitudes at high altitudes. Both barotropic and baroclinic conversions can act as either sources or sinks of eddy kinetic energy. Band-pass filtered transient eddies exhibit strong zonal variations in eddy kinetic energy and various energy transfer terms. Transient eddies are mainly interacting with the time mean flow. However, there appear to be non-negligible wave-wave interactions associated with wave mode transitions. These interactions include those between traveling waves and thermal tides and those among traveling waves.
NASA Technical Reports Server (NTRS)
Majumdar, A. K.
2011-01-01
The Generalized Fluid System Simulation Program (GFSSP) is a finite-volume based general-purpose computer program for analyzing steady state and time-dependent flow rates, pressures, temperatures, and concentrations in a complex flow network. The program is capable of modeling real fluids with phase changes, compressibility, mixture thermodynamics, conjugate heat transfer between solid and fluid, fluid transients, pumps, compressors, and external body forces such as gravity and centrifugal force. The thermofluid system to be analyzed is discretized into nodes, branches, and conductors. The scalar properties such as pressure, temperature, and concentrations are calculated at nodes. Mass flow rates and heat transfer rates are computed in branches and conductors. The graphical user interface allows users to build their models using a point, drag, and click method; the users can also run their models and post-process the results in the same environment. The integrated fluid library supplies thermodynamic and thermo-physical properties of 36 fluids, and 21 different resistance/source options are provided for modeling momentum sources or sinks in the branches. This Technical Memorandum illustrates the application and verification of the code through 12 demonstrated example problems. This supplement gives the input and output data files for the examples.
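The node/branch discretization described above can be illustrated with a toy network in which boundary nodes hold fixed pressures and internal nodes are solved for mass conservation; the linear resistance law and all numbers are illustrative assumptions (GFSSP itself offers 21 nonlinear resistance/source options):

```python
# Toy node/branch network sketch: boundary nodes hold fixed pressures,
# internal nodes are iterated (Gauss-Seidel) until the sum of branch flows
# (p_i - p_j)/R vanishes at each internal node. Linear law is illustrative.

def solve_network(fixed, internal, branches, iters=500):
    """fixed: {node: pressure}; internal: list of nodes; branches: {(i, j): R}."""
    p = dict(fixed, **{n: 0.0 for n in internal})
    for _ in range(iters):
        for n in internal:
            num = den = 0.0
            for (i, j), r in branches.items():
                if n in (i, j):
                    other = j if n == i else i
                    num += p[other] / r
                    den += 1.0 / r
            p[n] = num / den          # nodal mass balance for linear branches
    return p

# Two boundary nodes at 100 and 0 units, one internal node, equal resistances:
p = solve_network({"A": 100.0, "C": 0.0}, ["B"],
                  {("A", "B"): 1.0, ("B", "C"): 1.0})
print(p["B"])  # midpoint pressure, 50.0
```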
NASA Astrophysics Data System (ADS)
Raible, Christoph C.; Baerenbold, Oliver; Gomez-Navarro, Juan Jose
2016-04-01
Over the past decades, different drought indices have been suggested in the literature. This study tackles the problem of how to characterize drought by defining a general framework and proposing a generalized family of drought indices that is flexible regarding the use of different water balance models. The sensitivity of various indices and their skill in representing drought conditions are evaluated using a regional model simulation of Europe spanning the last two millennia as a test bed. The framework combines an exponentially damped memory with a normalization method based on quantile mapping. Both approaches are more robust and physically meaningful than the existing methods used to define drought indices. Still, the framework is flexible with respect to the water balance, enabling users to adapt the index formulation to the data availability at different locations. Based on the framework, indices with water balances of different complexity are compared with each other. The comparison shows that a drought index considering only precipitation in the water balance is sufficient for Western to Central Europe. However, in the Mediterranean, temperature effects via evapotranspiration need to be considered in order to produce meaningful indices representative of the actual water deficit. Similarly, our results indicate that in north-eastern Europe and Scandinavia, snow and runoff effects need to be considered in the index definition to obtain accurate results.
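The two framework ingredients named above, an exponentially damped memory and a quantile-mapping normalization, can be sketched as follows; the damping constant and the monthly water-balance series are illustrative assumptions:

```python
import math
from statistics import NormalDist

# Sketch of the framework's two ingredients: an exponentially damped memory
# of the monthly water balance, then quantile-mapping normalization of the
# accumulated series to a standard-normal index.

def damped_memory(balance, tau=6.0):
    """s_t = exp(-1/tau)*s_{t-1} + b_t: older months are damped away."""
    a, s, out = math.exp(-1.0 / tau), 0.0, []
    for b in balance:
        s = a * s + b
        out.append(s)
    return out

def quantile_normalize(series):
    """Map each value to the standard-normal quantile of its empirical rank."""
    nd = NormalDist()
    order = sorted(range(len(series)), key=lambda i: series[i])
    index = [0.0] * len(series)
    for rank, i in enumerate(order):
        index[i] = nd.inv_cdf((rank + 0.5) / len(series))
    return index

balance = [1.0, -2.0, 3.0, 0.0, -1.0, 2.0, -3.0, 1.0]   # illustrative P - E
index = quantile_normalize(damped_memory(balance))
```

Negative index values mark the driest accumulated states in the record, positive values the wettest, by construction of the rank mapping.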
Wilby, Robert L.; Dettinger, Michael D.
2000-01-01
Simulations of future climate using general circulation models (GCMs) suggest that rising concentrations of greenhouse gases may have significant consequences for the global climate. Of less certainty is the extent to which regional scale (i.e., sub-GCM grid) environmental processes will be affected. In this chapter, a range of downscaling techniques is critiqued. Then a relatively simple (yet robust) statistical downscaling technique and its use in the modelling of future runoff scenarios for three river basins in the Sierra Nevada, California, are described. This region was selected because GCM experiments driven by combined greenhouse-gas and sulphate-aerosol forcings consistently show major changes in the hydro-climate of the southwest United States by the end of the 21st century. The regression-based downscaling method was used to simulate daily rainfall and temperature series for streamflow modelling in three Californian river basins under current- and future-climate conditions. The downscaling involved just three predictor variables (specific humidity, zonal velocity component of airflow, and 500 hPa geopotential heights) supplied by the U.K. Meteorological Office coupled ocean-atmosphere model (HadCM2) for the grid point nearest the target basins. When evaluated using independent data, the model showed reasonable skill at reproducing observed area-average precipitation, temperature, and concomitant streamflow variations. Overall, the downscaled data resulted in slight underestimates of mean annual streamflow due to underestimates of precipitation in spring and positive temperature biases in winter. Differences in the skill of simulated streamflows amongst the three basins were attributed to the smoothing effects of snowpack on streamflow responses to climate forcing. The Merced and American River basins drain the western, windward slope of the Sierra Nevada and are snowmelt dominated, whereas the Carson River drains the eastern, leeward slope and is a mix of
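A minimal sketch of regression-based downscaling: a local temperature series is regressed on three large-scale predictors over a calibration period and evaluated on an independent one. All data here are synthetic and the plain least-squares form is only a stand-in for the chapter's actual statistical scheme.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
# synthetic daily large-scale predictors: specific humidity, zonal wind, Z500
X = rng.normal(size=(n, 3))
true_coef = np.array([1.5, -0.8, 0.4])
y = X @ true_coef + 10.0 + rng.normal(0.0, 0.5, n)   # local temperature

# fit on the first half (calibration), evaluate on the second (validation)
Xc, yc = X[:500], y[:500]
A = np.column_stack([np.ones(500), Xc])              # intercept + predictors
coef, *_ = np.linalg.lstsq(A, yc, rcond=None)

Xv = np.column_stack([np.ones(500), X[500:]])
pred = Xv @ coef
r = np.corrcoef(pred, y[500:])[0, 1]                 # skill on held-out data
```

The validation correlation stays high only because the synthetic predictor-predictand link is stationary; in practice that stationarity assumption is the main caveat of statistical downscaling.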
NASA Astrophysics Data System (ADS)
Riasi, M. S.; Huang, G.; Montemagno, C.; Yeghiazarian, L.
2014-12-01
Micro-scale modeling of multiphase flow in porous media is critical to characterize porous materials. Several modeling techniques have been implemented to date, but none can be used as a general strategy for all porous media applications due to challenges presented by non-smooth high-curvature and deformable solid surfaces, and by a wide range of pore sizes and porosities. Finite approaches like the finite volume method require a high quality, problem-dependent mesh, while particle-based approaches like the lattice Boltzmann require too many particles to achieve a stable meaningful solution. Both come at a large computational cost. Other methods such as pore network modeling (PNM) have been developed to accelerate the solution process by simplifying the solution domain, but so far a unique and straightforward methodology to implement PNM is lacking. Pore topology method (PTM) is a new topologically consistent approach developed to simulate multiphase flow in porous media. The core of PTM is to reduce the complexity of the 3-D void space geometry by working with its medial surface as the solution domain. Medial surface is capable of capturing all the corners and surface curvatures in a porous structure, and therefore provides a topologically consistent representative geometry for porous structure. Despite the simplicity and low computational cost, PTM provides a fast and straightforward approach for micro-scale modeling of fluid flow in all types of porous media irrespective of their porosity and pore size distribution. In our previous work, we developed a non-iterative fast medial surface finder algorithm to determine a voxel-wide medial surface of the void space of porous media as well as a set of simple rules to determine the capillary pressure-saturation curves for a porous system assuming quasi-static two-phase flow with a planar w-nw interface. Our simulation results for a highly porous fibrous material and polygonal capillary tubes were in excellent agreement
Object orientated simulation on transputer arrays using time warp
NASA Astrophysics Data System (ADS)
Simpson, P.
1989-12-01
The successful application of transputers to distributed event-driven heterogeneous simulation using the time warp methodology is demonstrated, with transputers and occam providing a natural vehicle for this class of simulation. The simulation technique comprises a number of communicating simulation object processes, with appropriate action taken to ensure the correct chronological sequence of processed simulation events. Time warp is particularly attractive since it permits all parts of a distributed processor network to operate in parallel (although some of the computation may later be undone). No need for hardware control of memory management has been identified, although a requirement for a deadlock-free, random point-to-point communications strategy has.
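The core of time warp, optimistic execution with state saving and rollback when a straggler event arrives, can be sketched for a single logical process as follows. The state-update rule is an arbitrary non-commutative toy, chosen so that event order matters and the rollback is observable.

```python
class TimeWarpLP:
    """Minimal optimistic logical process: executes events as they arrive,
    saves pre-event state, and rolls back and re-executes when a straggler
    (an event timestamped in its past) shows up."""

    def __init__(self):
        self.lvt = 0            # local virtual time
        self.state = 0
        self.history = []       # (timestamp, value, state_before_event)

    def receive(self, ts, value):
        if ts < self.lvt:                           # straggler: roll back
            redo = []
            while self.history and self.history[-1][0] > ts:
                past_ts, past_val, before = self.history.pop()
                self.state = before                  # undo the event
                redo.append((past_ts, past_val))
            self._execute(ts, value)                 # process the straggler
            for rts, rval in reversed(redo):         # re-execute undone events
                self._execute(rts, rval)
        else:
            self._execute(ts, value)

    def _execute(self, ts, value):
        self.history.append((ts, value, self.state))
        self.state = self.state * 2 + value          # order-dependent update
        self.lvt = ts

lp = TimeWarpLP()
for ts, val in [(1, 3), (2, 5), (4, 7), (3, 1)]:     # (3, 1) is a straggler
    lp.receive(ts, val)
```

Processing the same events in strict timestamp order (1, 2, 3, 4) gives state ((0*2+3)*2+5)*2+1 = 23, then 23*2+7 = 53; the optimistic process reaches the same state via rollback.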
NASA Astrophysics Data System (ADS)
Takahashi, Hiroyuki R.; Ohsuga, Ken; Kawashima, Tomohisa; Sekiguchi, Yuichiro
2016-07-01
Using three-dimensional general relativistic radiation-magnetohydrodynamics simulations of accretion flows around stellar mass black holes, we report that the relatively cold disk (≲ 10^7 K) is truncated near the black hole. Hot and less dense regions, in which the gas temperature is ≳ 10^9 K and more than 10 times higher than the radiation temperature (overheated regions), appear within the truncation radius. The overheated regions also appear above as well as below the disk, sandwiching the cold disk and leading to effective Compton upscattering. The truncation radius is ~30 r_g for Ṁ ~ L_Edd/c^2, where r_g, Ṁ, L_Edd, and c are the gravitational radius, mass accretion rate, Eddington luminosity, and speed of light, respectively. Our results are consistent with observations of the very high state, whereby the truncated disk is thought to be embedded in hot rarefied regions. The truncation radius shifts inward to ~10 r_g with increasing mass accretion rate, Ṁ ~ 100 L_Edd/c^2, which is very close to the innermost stable circular orbit. This model corresponds to the slim disk state observed in ultraluminous X-ray sources. Although the overheated regions shrink if Compton cooling effectively reduces the gas temperature, the sandwich structure does not disappear over the range Ṁ ≲ 100 L_Edd/c^2. Our simulations also reveal that the gas temperature in the overheated regions depends on black hole spin, which would be due to efficient energy transport from the black hole to the disk through the Poynting flux, resulting in gas heating.
NASA Astrophysics Data System (ADS)
Cariolle, D.; Teyssèdre, H.
2007-01-01
This article describes the validation of a linear parameterization of the ozone photochemistry for use in upper tropospheric and stratospheric studies. The present work extends a previously developed scheme by improving the 2D model used to derive the coefficients of the parameterization. The chemical reaction rates are updated from a compilation that includes recent laboratory work. Furthermore, the polar ozone destruction due to heterogeneous reactions at the surface of the polar stratospheric clouds is taken into account as a function of the stratospheric temperature and the total chlorine content. Two versions of the parameterization are tested. The first one only requires the solution of a continuity equation for the time evolution of the ozone mixing ratio; the second one uses one additional equation for a cold tracer. The parameterization has been introduced into the chemical transport model MOCAGE. The model is integrated with wind and temperature fields from the ECMWF operational analyses over the period 2000-2004. Overall, the results show a very good agreement between the modelled ozone distribution and the Total Ozone Mapping Spectrometer (TOMS) satellite data and the "in-situ" vertical soundings. During the course of the integration the model does not show any drift and the biases are generally small. The model also reproduces fairly well the polar ozone variability, notably the formation of "ozone holes" in the southern hemisphere with amplitudes and seasonal evolutions that follow the dynamics and time evolution of the polar vortex. The introduction of the cold tracer further improves the model simulation by allowing additional ozone destruction inside air masses exported from the high to the mid-latitudes, and by maintaining low ozone content inside the polar vortex of the southern hemisphere over longer periods in spring time. It is concluded that for the study of climate scenarios or the assimilation of ozone data, the present
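Without the cold tracer, a linear ozone scheme of this family reduces to a relaxation of the ozone mixing ratio toward a climatological state. The sketch below uses illustrative coefficients (a 30-day photochemical relaxation time, no temperature term), not the values actually derived from the 2D model.

```python
def ozone_tendency(r, temp, coeffs):
    """Linearized ozone photochemistry in the spirit of a Cariolle-type
    scheme: dr/dt = A0 + A1*(r - r_bar) + A2*(T - T_bar).
    All coefficients here are illustrative, not the 2D-model values."""
    a0, r_bar, a1, t_bar, a2 = coeffs
    return a0 + a1 * (r - r_bar) + a2 * (temp - t_bar)

# 30-day relaxation toward a 6 ppmv climatology, no temperature sensitivity
coeffs = (0.0, 6e-6, -1.0 / (30 * 86400.0), 220.0, 0.0)

# relax a perturbed mixing ratio with daily explicit-Euler steps for a year
r, temp, dt = 8e-6, 220.0, 86400.0
for _ in range(365):
    r += dt * ozone_tendency(r, temp, coeffs)
```

After a year the 2 ppmv perturbation has decayed by a factor (29/30)^365, i.e. to a few parts in 10^6 of its initial size, while the scheme is exactly in equilibrium at the climatological state.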
Gadermann, Anne M.; Gilman, Stephen E.; McLaughlin, Katie A.; Nock, Matthew K.; Petukhova, Maria; Sampson, Nancy A.; Kessler, Ronald C.
2014-01-01
Limited data are available on lifetime prevalence and age-of-onset distributions of psychological disorders and suicidal behaviors among Army personnel. We used simulation methods to approximate such estimates based on analysis of data from a U.S. national general population survey with the socio-demographic profile of U.S. Army personnel. Estimated lifetime prevalence of any DSM-IV anxiety, mood, behavior, or substance disorder in this sample was 53.1 percent (17.7 percent for mood disorders, 27.2 percent for anxiety disorders, 22.7 percent for behavior disorders, and 14.4 percent for substance disorders). The vast majority of cases had onsets prior to the expected age-of-enlistment if they were in the Army (91.6 percent). Lifetime prevalence was 14.2 percent for suicidal ideation, 5.4 percent for suicide plans, and 4.5 percent for suicide attempts. The proportion of estimated pre-enlistment onsets was between 68.4 percent (suicide plans) and 82.4 percent (suicidal ideation). Externalizing disorders with onsets prior to expected age-of-enlistment and internalizing disorders with onsets after expected age-of-enlistment significantly predicted post-enlistment suicide attempts, with population attributable risk proportions of 41.8 percent and 38.8 percent, respectively. Implications of these findings are discussed for interventions designed to screen, detect, and treat psychological disorders and suicidality in the Army. PMID:23025127
Sato, Hitoshi; Miyashita, Tetsuya; Kawakami, Hiromasa; Nagamine, Yusuke; Takaki, Shunsuke; Goto, Takahisa
2016-01-01
The aim of this study was to assess the effect of mental workload on anesthesiologists during induction of general anesthesia. Twenty-two participants were categorized into anesthesiology residents (RA group, n = 13) and board certified anesthesiologists (CA group, n = 9). Subjects participated in three simulated scenarios (scenario A: baseline, scenario B: simple addition tasks, and scenario C: combination of simple addition tasks and treatment of unexpected arrhythmia). We used simple two-digit integer additions every 5 seconds as a secondary task. Four kinds of key actions were also evaluated in each scenario. In scenario C, the correct answer rate was significantly higher in the CA versus the RA group (RA: 0.370 ± 0.050 versus CA: 0.736 ± 0.051, p < 0.01, 95% CI −0.518 to −0.215), as was the score of key actions (RA: 2.7 ± 1.3 versus CA: 4.0 ± 0.00, p = 0.005). In a serious clinical situation, anesthesiologists might not be able to adequately perform both the primary and secondary tasks. This tendency is more apparent in young anesthesiologists. PMID:27148548
NASA Astrophysics Data System (ADS)
Westerhof, E.; de Blank, H. J.; Pratt, J.
2016-03-01
Two-dimensional reduced MHD simulations of neoclassical tearing mode growth and suppression by ECCD are performed. The perturbation of the bootstrap current density and the EC drive current density perturbation are assumed to be functions of the perturbed flux surfaces. In the case of ECCD, this implies that the applied power is flux surface averaged to obtain the EC driven current density distribution. The results are consistent with predictions from the generalized Rutherford equation using common expressions for Δ′_bs and Δ′_ECCD. These expressions are commonly perceived to describe only the effect on the tearing mode growth of the helical component of the respective current perturbation acting through the modification of Ohm's law. Our results show that they describe in addition the effect of the poloidally averaged current density perturbation which acts through modification of the tearing mode stability index. Except for modulated ECCD, the largest contribution to the mode growth comes from this poloidally averaged current density perturbation.
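The generalized Rutherford equation referred to above can be sketched as a one-line ODE for the island width w. The coefficients below are illustrative only: the bootstrap drive is regularized at small island width, and the ECCD contribution is taken as a constant stabilizing term.

```python
def rutherford_rhs(w, delta_p=-2.0, a_bs=0.2, w_d=0.02, delta_eccd=0.0):
    """dw/dt for a generalized-Rutherford-equation sketch (time constants
    absorbed, coefficients illustrative): classical stability index
    delta_p, bootstrap drive a_bs*w/(w**2 + w_d**2), and an ECCD term."""
    return delta_p + a_bs * w / (w**2 + w_d**2) + delta_eccd

def evolve(w0, delta_eccd, dt=1e-3, nt=4000):
    """Explicit-Euler evolution of the island width, clamped at w >= 0."""
    w = w0
    for _ in range(nt):
        w = max(w + dt * rutherford_rhs(w, delta_eccd=delta_eccd), 0.0)
    return w

w_sat = evolve(0.02, delta_eccd=0.0)    # seed island grows to saturation
w_sup = evolve(0.02, delta_eccd=-4.0)   # strong ECCD: full suppression
w_sub = evolve(0.003, delta_eccd=0.0)   # below the seed-island threshold
```

With these numbers the unstable branch saturates at w ≈ 0.0958 (the larger root of the right-hand side), a sufficiently strong ECCD term makes dw/dt negative for all w so the island vanishes, and a seed below the small-w threshold decays on its own.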
Benighaus, Tobias; Thiel, Walter
2008-10-14
We report the implementation of the generalized solvent boundary potential (GSBP) [Im, W., Bernèche, S., and Roux, B., J. Chem. Phys. 2001, 114, 2924] in the framework of semiempirical hybrid quantum mechanical/molecular mechanical (QM/MM) methods. Application of the GSBP is connected with a significant overhead that is dominated by numerical solutions of the Poisson-Boltzmann equation for continuous charge distributions. Three approaches are presented that accelerate computation of the values at the boundary of the simulation box and in the interior of the macromolecule and solvent. It is shown that these methods reduce the computational overhead of the GSBP significantly with only minimal loss of accuracy. The accuracy of the GSBP to represent long-range electrostatic interactions is assessed for an extensive set of its inherent parameters, and a set of optimal parameters is defined. On this basis, the overhead and the savings of the GSBP are quantified for model systems of different sizes in the range of 7000 to 40 000 atoms. We find that the savings compensate for the overhead in systems larger than 12 500 atoms. Beyond this system size, the GSBP reduces the computational cost significantly, by 70% and more for large systems (>25 000 atoms). PMID:26620166
Wang, Xiang-Hua; Yin, Wen-Yan; Chen, Zhi Zhang David
2013-09-01
The one-step leapfrog alternating-direction-implicit finite-difference time-domain (ADI-FDTD) method is reformulated for simulating general electrically dispersive media. It models material dispersive properties with equivalent polarization currents. These currents are then solved with the auxiliary differential equation (ADE) and then incorporated into the one-step leapfrog ADI-FDTD method. The final equations are presented in the form similar to that of the conventional FDTD method but with second-order perturbation. The adapted method is then applied to characterize (a) electromagnetic wave propagation in a rectangular waveguide loaded with a magnetized plasma slab, (b) transmission coefficient of a plane wave normally incident on a monolayer graphene sheet biased by a magnetostatic field, and (c) surface plasmon polaritons (SPPs) propagation along a monolayer graphene sheet biased by an electrostatic field. The numerical results verify the stability, accuracy and computational efficiency of the proposed one-step leapfrog ADI-FDTD algorithm in comparison with analytical results and the results obtained with the other methods. PMID:24103929
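The auxiliary-differential-equation (ADE) idea, stepping a polarization-current equation alongside the field updates, can be illustrated in a plain 1-D explicit FDTD scheme (not the one-step leapfrog ADI variant of the paper) with a Drude medium. Units (eps0 = mu0 = c = 1), grid sizes, and the plasma parameters are illustrative assumptions.

```python
import numpy as np

def run_fdtd(nx=500, nt=800, slab=(200, 300), wp=0.0, gamma=0.01):
    """1-D FDTD with an ADE Drude current J: dJ/dt + gamma*J = wp^2 * E,
    stepped explicitly alongside the field updates (Courant number 1/2).
    A slab of dispersive medium occupies cells slab[0]:slab[1]."""
    dt, dx = 0.5, 1.0
    ez = np.zeros(nx)
    hy = np.zeros(nx)
    jz = np.zeros(nx)
    wp2 = np.zeros(nx)
    wp2[slab[0]:slab[1]] = wp**2
    for n in range(nt):
        hy[:-1] += dt / dx * (ez[1:] - ez[:-1])
        ez[1:] += dt / dx * (hy[1:] - hy[:-1]) - dt * jz[1:]
        ez[20] += np.exp(-((n - 60) / 20.0) ** 2)   # soft Gaussian source
        jz += dt * (wp2 * ez - gamma * jz)          # ADE current update
    return ez

vac = run_fdtd(wp=0.0)   # reference run: slab is vacuum
pls = run_fdtd(wp=1.0)   # Drude slab: low-frequency pulse is evanescent
```

Because the Gaussian pulse's spectrum lies well below the plasma frequency, the field transmitted past the slab is strongly attenuated relative to the vacuum run, which is the qualitative behavior the ADE current reproduces.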
Yu, Wenbo; He, Xibing; Vanommeslaeghe, Kenno; MacKerell, Alexander D.
2012-01-01
Presented is an extension of the CHARMM General force field (CGenFF) to enable the modeling of sulfonyl-containing compounds. Model compounds containing chemical moieties such as sulfone, sulfonamide, sulfonate and sulfamate were used as the basis for the parameter optimization. Targeting high-level quantum mechanical and experimental crystal data, the new parameters were optimized in a hierarchical fashion designed to maintain compatibility with the remainder of the CHARMM additive force field. The optimized parameters satisfactorily reproduced equilibrium geometries, vibrational frequencies, interactions with water, gas phase dipole moments and dihedral potential energy scans. Validation involved both crystalline and liquid phase calculations showing the newly developed parameters to satisfactorily reproduce experimental unit cell geometries, crystal intramolecular geometries and pure solvent densities. The force field was subsequently applied to study conformational preference of a sulfonamide based peptide system. Good agreement with experimental IR/NMR data further validated the newly developed CGenFF parameters as a tool to investigate the dynamic behavior of sulfonyl groups in a biological environment. CGenFF now covers sulfonyl group containing moieties allowing for modeling and simulation of sulfonyl-containing compounds in the context of biomolecular systems including compounds of medicinal interest. PMID:22821581
Hu, Kan-Nian; Qiang, Wei; Tycko, Robert
2011-01-01
We describe a general computational approach to site-specific resonance assignments in multidimensional NMR studies of uniformly 15N,13C-labeled biopolymers, based on a simple Monte Carlo/simulated annealing (MCSA) algorithm contained in the program MCASSIGN2. Input to MCASSIGN2 includes lists of multidimensional signals in the NMR spectra with their possible residue-type assignments (which need not be unique), the biopolymer sequence, and a table that describes the connections that relate one signal list to another. As output, MCASSIGN2 produces a high-scoring sequential assignment of the multidimensional signals, using a score function that rewards good connections (i.e., agreement between relevant sets of chemical shifts in different signal lists) and penalizes bad connections, unassigned signals, and assignment gaps. Examination of a set of high-scoring assignments from a large number of independent runs allows one to determine whether a unique assignment exists for the entire sequence or parts thereof. We demonstrate the MCSA algorithm using two-dimensional (2D) and three-dimensional (3D) solid state NMR spectra of several model protein samples (α-spectrin SH3 domain and protein G/B1 microcrystals, HET-s(218–289) fibrils), obtained with magic-angle spinning and standard polarization transfer techniques. The MCSA algorithm and MCASSIGN2 program can accommodate arbitrary combinations of NMR spectra with arbitrary dimensionality, and can therefore be applied in many areas of solid state and solution NMR. PMID:21710190
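A toy version of the MCSA idea: synthetic signals carrying a residue type and a pair of chemical shifts are assigned to sequence positions by annealed swap moves, with a score that rewards residue-type matches and good sequential connections. The score weights, cooling schedule, and synthetic data are assumptions for illustration; MCASSIGN2's actual inputs and score function are far richer.

```python
import math
import random

def score(order, sequence, signals):
    """Score an assignment: order[pos] is the signal placed at sequence
    position pos. Penalize residue-type mismatches and sequential
    connections where a signal's ca_prev disagrees with the CA shift of
    the signal assigned to the preceding position."""
    s = 0.0
    for pos, sig_idx in enumerate(order):
        typ, ca, ca_prev = signals[sig_idx]
        if typ != sequence[pos]:
            s -= 50.0                                  # type mismatch
        if pos > 0:
            s -= abs(ca_prev - signals[order[pos - 1]][1])
    return s

def mcsa_assign(sequence, signals, steps=30000, seed=7):
    """Monte Carlo/simulated annealing over swap moves, keeping the best
    assignment seen (Metropolis acceptance with linear cooling)."""
    rng = random.Random(seed)
    n = len(sequence)
    order = list(range(n))
    rng.shuffle(order)
    cur = best = score(order, sequence, signals)
    best_order = list(order)
    for step in range(steps):
        temp = max(0.05, 5.0 * (1.0 - step / steps))
        i, j = rng.randrange(n), rng.randrange(n)
        order[i], order[j] = order[j], order[i]
        new = score(order, sequence, signals)
        if new >= cur or rng.random() < math.exp((new - cur) / temp):
            cur = new
            if new > best:
                best, best_order = new, list(order)
        else:
            order[i], order[j] = order[j], order[i]    # reject the swap
    return best_order, best

# synthetic 10-residue "protein" with exact sequential connections
rng = random.Random(1)
seq = "ACDEFGHIKL"
ca = [rng.uniform(40.0, 65.0) for _ in seq]
true_signals = [(seq[i], ca[i], ca[i - 1] if i else 0.0) for i in range(len(seq))]
perm = list(range(len(seq)))
rng.shuffle(perm)
signals = [true_signals[k] for k in perm]              # scrambled peak list
order, s_best = mcsa_assign(seq, signals)
```

The true assignment scores exactly zero (every type matches and every connection is exact), and the annealer typically recovers it on this toy problem; running many independent seeds, as the paper recommends, reveals whether the optimum is unique.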
NASA Technical Reports Server (NTRS)
Shen, B.-W.; Atlas, R.; Chern, J.-D.; Reale, O.; Lin, S.-J.; Lee, T.; Chang, J.
2005-01-01
The NASA Columbia supercomputer was ranked second on the TOP500 List in November 2004. Such a quantum jump in computing power provides unprecedented opportunities to conduct ultra-high resolution simulations with the finite-volume General Circulation Model (fvGCM). During 2004, the model was run experimentally in real time at 0.25 degree resolution, producing remarkable hurricane forecasts [Atlas et al., 2005]. In 2005, the horizontal resolution was further doubled, which makes the fvGCM comparable to the first mesoscale-resolving General Circulation Model at the Earth Simulator Center [Ohfuchi et al., 2004]. Nine 5-day 0.125 degree simulations of three hurricanes in 2004 are presented first for model validation. Then it is shown how the model can simulate the formation of the Catalina eddies and Hawaiian lee vortices, which are generated by the interaction of the synoptic-scale flow with surface forcing, and have never been reproduced in a GCM before.
Wheeler, Cosette M.; Paavonen, Jorma; Castellsagué, Xavier; Garland, Suzanne M.; Skinner, S. Rachel; Naud, Paulo; Salmerón, Jorge; Chow, Song-Nan; Kitchener, Henry C.; Teixeira, Julio C.; Jaisamrarn, Unnop; Limson, Genara; Szarewski, Anne; Romanowski, Barbara; Aoki, Fred Y.; Schwarz, Tino F.; Poppe, Willy A. J.; Bosch, F. Xavier; Mindel, Adrian; de Sutter, Philippe; Hardt, Karin; Zahaf, Toufik; Descamps, Dominique; Struyf, Frank; Lehtinen, Matti; Dubin, Gary
2015-01-01
We report final event-driven analysis data on the immunogenicity and efficacy of the human papillomavirus 16 and 18 ((HPV-16/18) AS04-adjuvanted vaccine in young women aged 15 to 25 years from the PApilloma TRIal against Cancer In young Adults (PATRICIA). The total vaccinated cohort (TVC) included all randomized participants who received at least one vaccine dose (vaccine, n = 9,319; control, n = 9,325) at months 0, 1, and/or 6. The TVC-naive (vaccine, n = 5,822; control, n = 5,819) had no evidence of high-risk HPV infection at baseline, approximating adolescent girls targeted by most HPV vaccination programs. Mean follow-up was approximately 39 months after the first vaccine dose in each cohort. At baseline, 26% of women in the TVC had evidence of past and/or current HPV-16/18 infection. HPV-16 and HPV-18 antibody titers postvaccination tended to be higher among 15- to 17-year-olds than among 18- to 25-year-olds. In the TVC, vaccine efficacy (VE) against cervical intraepithelial neoplasia grade 1 or greater (CIN1+), CIN2+, and CIN3+ associated with HPV-16/18 was 55.5% (96.1% confidence interval [CI], 43.2, 65.3), 52.8% (37.5, 64.7), and 33.6% (−1.1, 56.9). VE against CIN1+, CIN2+, and CIN3+ irrespective of HPV DNA was 21.7% (10.7, 31.4), 30.4% (16.4, 42.1), and 33.4% (9.1, 51.5) and was consistently significant only in 15- to 17-year-old women (27.4% [10.8, 40.9], 41.8% [22.3, 56.7], and 55.8% [19.2, 76.9]). In the TVC-naive, VE against CIN1+, CIN2+, and CIN3+ associated with HPV-16/18 was 96.5% (89.0, 99.4), 98.4% (90.4, 100), and 100% (64.7, 100), and irrespective of HPV DNA it was 50.1% (35.9, 61.4), 70.2% (54.7, 80.9), and 87.0% (54.9, 97.7). VE against 12-month persistent infection with HPV-16/18 was 89.9% (84.0, 94.0), and that against HPV-31/33/45/51 was 49.0% (34.7, 60.3). In conclusion, vaccinating adolescents before sexual debut has a substantial impact on the overall incidence of high-grade cervical abnormalities, and catch-up vaccination up to 18
NASA Technical Reports Server (NTRS)
Riley, D. R.
1985-01-01
A six-degree-of-freedom nonlinear simulation was developed for a two-place, single-engine, low-wing general aviation airplane for the stall and initial departure regions of flight. Two configurations, one with and one without an outboard wing-leading-edge modification, were modeled. The math models developed are presented, along with simulation predictions and flight-test data used for validation; simulation results for the two configurations for various maneuvers and power settings are compared to show the beneficial influence of adding the wing-leading-edge modification.
NASA Technical Reports Server (NTRS)
Halpern, David; Chao, YI; Ma, Chung-Chun; Mechoso, Carlos R.
1995-01-01
The Pacanowski-Philander (PP) and Mellor-Yamada (MY) parameterization models of vertical mixing by turbulent processes were embedded in the Geophysical Fluid Dynamics Laboratory high-resolution ocean general circulation model of the tropical Pacific Ocean. All other facets of the numerical simulations were the same. Simulations were made for the 1987-1988 period. At the equator the MY simulation produced near-surface temperatures more uniform with depth, a deeper thermocline, a deeper core of the Equatorial Undercurrent, and a South Equatorial Current with greater vertical thickness compared with that computed with the PP method. Along 140 deg W, between 5 deg N and 10 deg N, both simulations were the same. Moored buoy current and temperature observations had been recorded by the Pacific Marine Environmental Laboratory at three sites (165 deg E, 140 deg W, 110 deg W) along the equator and at three sites (5 deg N, 7 deg N, 9 deg N) along 140 deg W. Simulated temperatures were lower than those observed in the near-surface layer and higher than those observed in the thermocline. Temperature simulations were in better agreement with observations compared to current simulations. At the equator, PP current and temperature simulations were more representative of the observations than MY simulations.
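The PP scheme referred to above makes vertical viscosity and diffusivity decrease with the local Richardson number. The sketch below uses the commonly quoted functional form with illustrative constants and a synthetic sheared, stratified profile; it is not the GFDL model's implementation.

```python
import numpy as np

def pp_mixing(ri, nu0=1e-2, alpha=5.0, n=2, nu_b=1e-4, kappa_b=1e-5):
    """Richardson-number-dependent vertical mixing in the spirit of
    Pacanowski & Philander: nu = nu0/(1 + alpha*Ri)^n + nu_b and
    kappa = nu0/(1 + alpha*Ri)^(n+1) + kappa_b (constants illustrative).
    Negative (unstable) Ri is clipped to zero, i.e. maximal mixing."""
    ri = np.maximum(np.asarray(ri, dtype=float), 0.0)
    nu = nu0 / (1.0 + alpha * ri) ** n + nu_b
    kappa = nu0 / (1.0 + alpha * ri) ** (n + 1) + kappa_b
    return nu, kappa

# Richardson number of a synthetic profile: Ri = N^2 / (dU/dz)^2
z = np.linspace(-200.0, 0.0, 101)          # depth (m)
n2 = 1e-4 * np.exp(z / 50.0)               # buoyancy frequency squared (1/s^2)
shear = 5e-3 * np.exp(z / 30.0)            # vertical shear (1/s)
ri = n2 / shear**2
nu, kappa = pp_mixing(ri)
```

Mixing is strongest where shear dominates stratification near the surface and collapses to the small background values at depth, which is the mechanism that shapes the simulated thermocline.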
NASA Technical Reports Server (NTRS)
Majumdar, A. K.; Hedayat, A.
2015-01-01
This paper describes the experience of the authors in using the Generalized Fluid System Simulation Program (GFSSP) in teaching the Design of Thermal Systems class at the University of Alabama in Huntsville. GFSSP is a finite volume based thermo-fluid system network analysis code, developed at NASA/Marshall Space Flight Center, and is extensively used in NASA, Department of Defense, and aerospace industries for propulsion system design, analysis, and performance evaluation. The educational version of GFSSP is freely available to all US higher education institutions. The main purpose of the paper is to illustrate the utilization of this user-friendly code for the thermal systems design and fluid engineering courses and to encourage instructors to utilize the code for class assignments as well as senior design projects. The need for a generalized computer program for thermofluid analysis in a flow network has been felt for a long time in aerospace industries. Designers of thermofluid systems often need to know pressures, temperatures, flow rates, concentrations, and heat transfer rates at different parts of a flow circuit for steady state or transient conditions. Such applications occur in propulsion systems for tank pressurization, internal flow analysis of rocket engine turbopumps, chilldown of cryogenic tanks and transfer lines, and many other applications of gas-liquid systems involving fluid transients and conjugate heat and mass transfer. Computer resource requirements to perform time-dependent, three-dimensional Navier-Stokes computational fluid dynamic (CFD) analysis of such systems are prohibitive and therefore are not practical. Available commercial codes are generally suitable for steady state, single-phase incompressible flow. Because of the proprietary nature of such codes, it is not possible to extend their capability to satisfy the above-mentioned needs. Therefore, the Generalized Fluid System Simulation Program (GFSSP) has been developed at NASA
NASA Astrophysics Data System (ADS)
Cariolle, D.; Teyssèdre, H.
2007-05-01
This article describes the validation of a linear parameterization of the ozone photochemistry for use in upper tropospheric and stratospheric studies. The present work extends a previously developed scheme by improving the 2-D model used to derive the coefficients of the parameterization. The chemical reaction rates are updated from a compilation that includes recent laboratory work. Furthermore, the polar ozone destruction due to heterogeneous reactions at the surface of the polar stratospheric clouds is taken into account as a function of the stratospheric temperature and the total chlorine content. Two versions of the parameterization are tested. The first one only requires the solution of a continuity equation for the time evolution of the ozone mixing ratio, the second one uses one additional equation for a cold tracer. The parameterization has been introduced into the chemical transport model MOCAGE. The model is integrated with wind and temperature fields from the ECMWF operational analyses over the period 2000-2004. Overall, the results from the two versions show a very good agreement between the modelled ozone distribution and the Total Ozone Mapping Spectrometer (TOMS) satellite data and the "in-situ" vertical soundings. During the course of the integration the model does not show any drift and the biases are generally small, of the order of 10%. The model also reproduces fairly well the polar ozone variability, notably the formation of "ozone holes" in the Southern Hemisphere with amplitudes and a seasonal evolution that follow the dynamics and time evolution of the polar vortex. The introduction of the cold tracer further improves the model simulation by allowing additional ozone destruction inside air masses exported from the high to the mid-latitudes, and by maintaining low ozone content inside the polar vortex of the Southern Hemisphere over longer periods in spring time. It is concluded that for the study of climate scenarios or the assimilation of
Budroni, M A
2015-12-01
Cross diffusion, whereby a flux of a given species entrains the diffusive transport of another species, can trigger buoyancy-driven hydrodynamic instabilities at the interface of initially stable stratifications. Starting from a simple three-component case, we introduce a theoretical framework to classify cross-diffusion-induced hydrodynamic phenomena in two-layer stratifications under the action of the gravitational field. A cross-diffusion-convection (CDC) model is derived by coupling the Fickian diffusion formalism to Stokes equations. In order to isolate the effect of cross-diffusion in the convective destabilization of a double-layer system, we impose a starting concentration jump of one species in the bottom layer while the other one is homogeneously distributed over the spatial domain. This initial configuration avoids the concurrence of classic Rayleigh-Taylor or differential-diffusion convective instabilities, and it also allows us to activate selectively the cross-diffusion feedback by which the heterogeneously distributed species influences the diffusive transport of the other species. We identify two types of hydrodynamic modes [the negative cross-diffusion-driven convection (NCC) and the positive cross-diffusion-driven convection (PCC)], corresponding to the sign of this operational cross-diffusion term. By studying the space-time density profiles along the gravitational axis we obtain analytical conditions for the onset of convection in terms of two important parameters only: the operational cross-diffusivity and the buoyancy ratio, giving the relative contribution of the two species to the global density. The general classification of the NCC and PCC scenarios in such parameter space is supported by numerical simulations of the fully nonlinear CDC problem. The resulting convective patterns compare favorably with recent experimental results found in microemulsion systems. PMID:26764804
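The cross-diffusion coupling described in this abstract can be sketched with a minimal explicit finite-difference step for two species. The diffusivities, grid spacing, time step, and no-flux boundary treatment below are illustrative assumptions, and the full CDC model additionally couples these equations to Stokes flow:

```python
import numpy as np

def cross_diffusion_step(a, b, d_a, d_b, d_ab, dz=0.01, dt=1e-5):
    """One explicit step of a two-species cross-diffusion sketch:
    da/dt = D_a a'' + D_ab b'',  db/dt = D_b b''.
    The off-diagonal term D_ab lets the flux of b entrain transport
    of a. Zero-gradient (no-flux) boundaries via edge padding."""
    def lap(u):
        u_pad = np.pad(u, 1, mode="edge")  # replicate edges: no-flux
        return (u_pad[2:] - 2 * u + u_pad[:-2]) / dz**2
    a_new = a + dt * (d_a * lap(a) + d_ab * lap(b))
    b_new = b + dt * d_b * lap(b)
    return a_new, b_new
```

With a homogeneous profile of `a` and a concentration jump in `b`, the off-diagonal term alone perturbs the flat `a` profile at the jump, which is the seed that the buoyancy coupling then amplifies; the sign of `d_ab` selects between the NCC- and PCC-type responses.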
Igarashi, Jun; Shouno, Osamu; Fukai, Tomoki; Tsujino, Hiroshi
2011-11-01
Real-time simulation of a biologically realistic spiking neural network is necessary for evaluation of its capacity to interact with real environments. However, the real-time simulation of such a neural network is difficult due to its high computational costs that arise from two factors: (1) vast network size and (2) the complicated dynamics of biologically realistic neurons. In order to address these problems, mainly the latter, we chose to use general purpose computing on graphics processing units (GPGPUs) for simulation of such a neural network, taking advantage of the powerful computational capability of a graphics processing unit (GPU). As a target for real-time simulation, we used a model of the basal ganglia that has been developed according to electrophysiological and anatomical knowledge. The model consists of heterogeneous populations of 370 spiking model neurons, including computationally heavy conductance-based models, connected by 11,002 synapses. Simulation of the model has not yet been performed in real-time using a general computing server. By parallelization of the model on the NVIDIA Geforce GTX 280 GPU in data-parallel and task-parallel fashion, faster-than-real-time simulation was robustly realized with only one-third of the GPU's total computational resources. Furthermore, we used the GPU's full computational resources to perform faster-than-real-time simulation of three instances of the basal ganglia model; these instances consisted of 1100 neurons and 33,006 synapses and were synchronized at each calculation step. Finally, we developed software for simultaneous visualization of faster-than-real-time simulation output. These results suggest the potential power of GPGPU techniques in real-time simulation of realistic neural networks.
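The one-thread-per-neuron, data-parallel layout this abstract relies on can be sketched in NumPy, using a leaky integrate-and-fire update as a deliberately simplified stand-in for the conductance-based models on the GPU; all parameter values are illustrative:

```python
import numpy as np

def lif_step(v, spikes_in, dt=0.1, tau=10.0, v_rest=-65.0,
             v_thresh=-50.0, v_reset=-70.0, w=None):
    """One data-parallel update of N leaky integrate-and-fire neurons.
    Every neuron is advanced with identical arithmetic over an array,
    the layout that maps naturally onto one GPU thread per neuron."""
    i_syn = w @ spikes_in if w is not None else 0.0   # synaptic input
    v = v + dt * ((v_rest - v) / tau + i_syn)         # Euler integration
    fired = v >= v_thresh
    v = np.where(fired, v_reset, v)                   # reset spiking neurons
    return v, fired
```

On a GPU the same kernel runs once per neuron; the vectorized NumPy form simply makes the absence of data dependencies between neurons explicit.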
A parallel algorithm for switch-level timing simulation on a hypercube multiprocessor
NASA Technical Reports Server (NTRS)
Rao, Hariprasad Nannapaneni
1989-01-01
The parallel approach to speeding up simulation is studied, specifically the simulation of digital LSI MOS circuitry on the Intel iPSC/2 hypercube. The simulation algorithm is based on RSIM, an event driven switch-level simulator that incorporates a linear transistor model for simulating digital MOS circuits. Parallel processing techniques based on the concepts of Virtual Time and rollback are utilized so that portions of the circuit may be simulated on separate processors, in parallel for as large an increase in speed as possible. A partitioning algorithm is also developed in order to subdivide the circuit for parallel processing.
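The Virtual Time and rollback machinery mentioned above can be sketched, for a single logical process, roughly as follows; this omits anti-messages, global-virtual-time computation, and fossil collection, and all names are hypothetical:

```python
import heapq, itertools

class TimeWarpProcess:
    """One logical process in an optimistic (Virtual Time) simulation:
    events are processed eagerly; a straggler event arriving in the
    simulated past triggers a rollback to a saved state."""

    def __init__(self, initial_state):
        self.lvt = 0                         # local virtual time
        self.state = initial_state
        self.queue = []                      # heap of (time, seq, event)
        self.seq = itertools.count()         # tie-breaker for equal times
        self.history = [(0, initial_state, None)]  # saved (time, state, event)

    def schedule(self, timestamp, event):
        if timestamp < self.lvt:             # straggler: undo optimistic work
            self.rollback(timestamp)
        heapq.heappush(self.queue, (timestamp, next(self.seq), event))

    def rollback(self, timestamp):
        # discard saved states newer than the straggler and re-queue
        # the events that produced them, so they are re-executed in order
        while self.history[-1][0] > timestamp:
            t, _, ev = self.history.pop()
            heapq.heappush(self.queue, (t, next(self.seq), ev))
        self.lvt, self.state, _ = self.history[-1]

    def run(self, handler):
        while self.queue:
            t, _, ev = heapq.heappop(self.queue)
            self.state = handler(self.state, ev)
            self.lvt = t
            self.history.append((t, self.state, ev))
        return self.state
```

Partitioned circuit blocks would each run such a process on their own hypercube node, rolling back only when a signal from a neighboring partition arrives with an earlier timestamp.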
Li Jiequan; Li Qibing; Xu Kun
2011-06-01
The generalized Riemann problem (GRP) scheme for the Euler equations and the gas-kinetic scheme (GKS) for the Boltzmann equation are two high-resolution shock-capturing schemes for fluid simulations. The difference is that one is based on the characteristics of the inviscid Euler equations and their wave interactions, and the other is based on particle transport and collisions. The similarity between them is that both methods can use identical MUSCL-type initial reconstructions around a cell interface, and the spatial slopes on both sides of a cell interface are involved in the gas evolution process and the construction of a time-dependent flux function. Although both methods have been applied successfully to inviscid compressible flow computations, their performances have never been compared. Since both methods use the same initial reconstruction, any difference comes solely from the different underlying mechanisms in their flux evaluation. Such a comparison is therefore important for understanding the correspondence between physical modeling and numerical performance. Since the GRP solves the inviscid Euler equations faithfully, the comparison can also be used to probe the validity of solving the Euler equations themselves. The numerical comparison shows that the GRP exhibits slightly better computational efficiency and accuracy comparable to the GKS for the Euler solutions in the 1D case, but the GKS is more robust than the GRP. For the 2D high Mach number flow simulations, the GKS is free of the shock instability and converges to the steady-state solutions faster than the GRP. The GRP suffers from the carbuncle phenomenon, which, like a cloud, hangs over exact Riemann solvers. The GRP and GKS use different physical processes to describe the flow motion starting from a discontinuity. One is based on the assumption of an equilibrium state with an infinite number of particle collisions, and the other starts from the non-equilibrium free transport process to evolve into an
NASA Astrophysics Data System (ADS)
Zarzycki, Colin M.; Jablonowski, Christiane
2014-09-01
Using a variable-resolution option within the National Center for Atmospheric Research/Department of Energy Community Atmosphere Model (CAM) Spectral Element (SE) global model, a refined nest at 0.25° (˜28 km) horizontal resolution located over the North Atlantic is embedded within a global 1° (˜111 km) grid. The grid is designed such that fine grid cells are located where tropical cyclones (TCs) are observed to occur during the Atlantic TC season (June-November). Two simulations are compared, one with refinement and one control case with no refinement (globally uniform 1° grid). Both simulations are integrated for 23 years using Atmospheric Model Intercomparison Protocols. TCs are tracked using an objective detection algorithm. The variable-resolution simulation produces significantly more TCs than the unrefined simulation. Storms that do form in the refined nest are much more intense, with multiple storms strengthening to Saffir-Simpson category 3 intensity or higher. Both count and spatial distribution of TC genesis and tracks in the variable-resolution simulation are well matched to observations and represent significant improvements over the unrefined simulation. Some degree of interannual skill is noted, with the variable-resolution grid able to reproduce the observed connection between Atlantic TCs and the El Niño-Southern Oscillation (ENSO). It is shown that Genesis Potential Index (GPI) is well matched between the refined and unrefined simulations, implying that the introduction of variable-resolution does not affect the synoptic environment. Potential "upscale" effects are noted in the variable-resolution simulation, suggesting stronger TCs in refined nests may play a role in meridional transport of momentum, heat, and moisture.
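An objective TC-detection pass of the kind used for tracking in such studies can be illustrated, in heavily simplified form, as a scan for local sea-level-pressure minima beneath a warm-core anomaly; the thresholds and field names below are illustrative assumptions, not those of the tracker used in the study:

```python
import numpy as np

def detect_tc_candidates(psl, warm_core, psl_thresh=100500.0, dt_thresh=1.0):
    """Toy objective-detection pass: flag interior grid points that are
    a local minimum of sea-level pressure (Pa) below psl_thresh and lie
    under a warm-core temperature anomaly (K) exceeding dt_thresh."""
    ny, nx = psl.shape
    hits = []
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            window = psl[j - 1:j + 2, i - 1:i + 2]   # 3x3 neighborhood
            if (psl[j, i] == window.min()
                    and psl[j, i] < psl_thresh
                    and warm_core[j, i] > dt_thresh):
                hits.append((j, i))
    return hits
```

A full tracker additionally stitches such candidates into trajectories across time steps and applies wind-speed and duration criteria.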
NASA Technical Reports Server (NTRS)
Burgin, G. H.; Fogel, L. J.; Phelps, J. P.
1975-01-01
A technique for computer simulation of air combat is described. Volume 1 describes the computer program and its development in general terms. Two versions of the program exist. Both incorporate a logic for selecting and executing air combat maneuvers with performance models of specific fighter aircraft. In the batch processing version, the flight paths of two aircraft engaged in interactive aerial combat, both controlled by the same logic, are computed. The real-time version permits human pilots to fly air-to-air combat against the adaptive maneuvering logic (AML) in the Langley Differential Maneuvering Simulator (DMS). Volume 2 consists of a detailed description of the computer programs.
Fesen, C. G.; Roble, R. G.; Ridley, E. C.
1993-05-01
The authors use the National Center for Atmospheric Research (NCAR) thermosphere/ionosphere general circulation model (TIGCM) to model tides and dynamics in the thermosphere. This model incorporates the latest advances in the thermosphere general circulation model. Model results emphasized the 70°W longitude region to overlap a series of incoherent scatter radar installations. Data and the model are available in databases. The results of this theoretical modeling are compared with available data and with predictions of more empirical models. In general there is broad agreement within the comparisons.
Fesen, C. G.; Roble, R. G.
1991-02-01
The National Center for Atmospheric Research thermosphere-ionosphere general circulation model (TIGCM) was used to simulate incoherent scatter radar observations of the lower thermosphere tides during the first Lower Thermosphere Coupling Study (LTCS) campaign, September 21-26, 1987. The TIGCM utilized time-varying histories of the model input fields obtained from the World Data Center for the LTCS period. These model inputs included solar flux, total hemispheric power, solar wind data from which the cross-polar-cap potential was derived, and geomagnetic Kp index. Calculations were made for the semidiurnal ion temperatures and horizontal neutral winds at locations representative of Arecibo, Millstone Hill, and Sondrestrom. The diurnal tides at Sondrestrom were also simulated. Tidal inputs to the TIGCM lower boundary were obtained from the middle atmosphere model of Forbes and Vial (1989). The TIGCM tidal structures are in fair general agreement with the observations. The amplitudes tended to be better simulated than the phases, and the mid- and high-latitude locations are simulated better than the low-latitude thermosphere. This may indicate a need to incorporate coupling of the neutral atmosphere and ionosphere with the E region dynamo in the equatorial region to obtain a better representation of low-latitude thermospheric tides. The model simulations were used to investigate the daily variability of the tides due to the geomagnetic activity occurring during this period. In general, the ion temperatures were predicted to be affected more than the winds, and the diurnal components more than the semidiurnal. The effects are typically largest at high latitudes and higher altitudes, but discernible differences were produced at low latitudes.
Dahms, Rainer N.
2014-12-31
The fidelity of Gradient Theory simulations depends on the accuracy of saturation properties and influence parameters, and require equations of state (EoS) which exhibit a fundamentally consistent behavior in the two-phase regime. Widely applied multi-parameter EoS, however, are generally invalid inside this region. Hence, they may not be fully suitable for application in concert with Gradient Theory despite their ability to accurately predict saturation properties. The commonly assumed temperature-dependence of pure component influence parameters usually restricts their validity to subcritical temperature regimes. This may distort predictions for general multi-component interfaces where temperatures often exceed the critical temperature of vapor phase components. Then, the calculation of influence parameters is not well defined. In this paper, one of the first studies is presented in which Gradient Theory is combined with a next-generation Helmholtz energy EoS which facilitates fundamentally consistent calculations over the entire two-phase regime. Illustrated on pentafluoroethane as an example, reference simulations using this method are performed. They demonstrate the significance of such high-accuracy and fundamentally consistent calculations for the computation of interfacial properties. These reference simulations are compared to corresponding results from cubic PR EoS, widely-applied in combination with Gradient Theory, and mBWR EoS. The analysis reveals that neither of those two methods succeeds to consistently capture the qualitative distribution of obtained key thermodynamic properties in Gradient Theory. Furthermore, a generalized expression of the pure component influence parameter is presented. This development is informed by its fundamental definition based on the direct correlation function of the homogeneous fluid and by presented high-fidelity simulations of interfacial density profiles. As a result, the new model preserves the accuracy of previous
NASA Astrophysics Data System (ADS)
Kuroda, T.; Medvedev, A. S.; Kasaba, Y.; Hartogh, P.
2016-09-01
CO2 snowfall in the winter polar atmosphere has been simulated with a Martian general circulation model (MGCM). Our results show that it is strongly modulated by synoptic-scale dynamical features such as baroclinic planetary waves, as well as by smaller-scale gravity waves.
NASA Astrophysics Data System (ADS)
Kurosu, Keita; Takashina, Masaaki; Koizumi, Masahiko; Das, Indra J.; Moskvin, Vadim P.
2014-10-01
Although the three general-purpose Monte Carlo (MC) simulation tools Geant4, FLUKA, and PHITS have been used extensively, differences in their calculation results have been reported. The major causes are the implementation of the physical models, the preset value of the ionization potential, and the definition of the maximum step size. In order to achieve artifact-free MC simulation, an optimized parameter list for each simulation system is required. Several authors have already proposed such optimized lists, but those studies were performed with a simple system such as a water phantom alone. Since particle beams undergo transport, interaction, and electromagnetic processes during beam delivery, establishing an optimized parameter list for the whole beam delivery system is therefore of major importance. The purpose of this study was to determine the optimized parameter list for GATE and PHITS using a computational model of a proton treatment nozzle. The simulation was performed with a broad scanning proton beam. The influences of the customizable parameters on the percentage depth dose (PDD) profile and the proton range were investigated by comparison with the results of FLUKA, and the optimal parameters were then determined. The PDD profile and the proton range obtained from our optimized parameter list showed different characteristics from the results obtained with the simple system. This led to the conclusion that the physical models, particle transport mechanics, and different geometry-based descriptions need accurate customization in planning computational experiments for artifact-free MC simulation.
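As an illustration of one such comparison metric, a proton range can be extracted from a PDD curve as the depth of the distal 80% dose point (R80); whether the study defines range exactly this way is an assumption here, and the curve below is synthetic:

```python
import numpy as np

def proton_range_r80(depth, pdd):
    """Estimate the proton range as the depth of the distal 80% dose
    point (R80), interpolated linearly past the Bragg peak. A common
    figure of merit when comparing PDD curves between MC codes."""
    pdd = np.asarray(pdd, dtype=float)
    pdd = pdd / pdd.max() * 100.0        # normalize to percent depth dose
    peak = int(np.argmax(pdd))
    distal = pdd[peak:]
    # first sample beyond the peak where the dose drops below 80%
    below = np.nonzero(distal < 80.0)[0][0]
    d0, d1 = depth[peak + below - 1], depth[peak + below]
    p0, p1 = distal[below - 1], distal[below]
    return d0 + (80.0 - p0) * (d1 - d0) / (p1 - p0)
```

Comparing R80 between two codes gives a single scalar range difference, complementing point-by-point PDD comparisons.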
NASA Technical Reports Server (NTRS)
Liu, W. Timothy; Tang, Wenqing; Atlas, Robert
1996-01-01
In this study, satellite observations, in situ measurements, and model simulations are combined to assess the oceanic response to surface wind forcing in the equatorial Pacific. The surface wind fields derived from observations by the spaceborne special sensor microwave imager (SSM/I) and from the operational products of the European Centre for Medium-Range Weather Forecasts (ECMWF) are compared. When SSM/I winds are used to force a primitive-equation ocean general circulation model (OGCM), they produce 3°C more surface cooling than ECMWF winds for the eastern equatorial Pacific during the cool phase of an El Nino-Southern Oscillation event. The stronger cooling by SSM/I winds is in good agreement with measurements at the moored buoys and observations by the advanced very high resolution radiometer, indicating that SSM/I winds are superior to ECMWF winds in forcing the tropical ocean. In comparison with measurements from buoys, tide gauges, and the Geosat altimeter, the OGCM simulates the temporal variations of temperature, steric, and sea level changes with reasonable realism when forced with the satellite winds. There are discrepancies between model simulations and observations that are common to both wind forcing fields, one of which is the simulation of zonal currents; they could be attributed to model deficiencies. By examining model simulations under two winds, vertical heat advection and uplifting of the thermocline are found to be the dominant factors in the anomalous cooling of the ocean mixed layer.
NASA Technical Reports Server (NTRS)
Ricoy, M. A.; Volakis, J. L.
1989-01-01
The diffraction problem associated with a multilayer material slab recessed in a perfectly conducting ground plane is formulated and solved via the Generalized Scattering Matrix Formulation (GSMF) in conjunction with the dual integral equation approach. The multilayer slab is replaced by a surface obeying a generalized impedance boundary condition (GIBC) to facilitate the computation of the pertinent Wiener-Hopf split functions and their zeros. Both E_z and H_z polarizations are considered and a number of scattering patterns are presented, some of which are compared to exact results available for a homogeneous recessed slab.
Mirocha, J. D.; Kosovic, B.; Aitken, M. L.; Lundquist, J. K.
2014-01-10
A generalized actuator disk (GAD) wind turbine parameterization designed for large-eddy simulation (LES) applications was implemented into the Weather Research and Forecasting (WRF) model. WRF-LES with the GAD model enables numerical investigation of the effects of an operating wind turbine on and interactions with a broad range of atmospheric boundary layer phenomena. Numerical simulations using WRF-LES with the GAD model were compared with measurements obtained from the Turbine Wake and Inflow Characterization Study (TWICS-2011), the goal of which was to measure both the inflow to and wake from a 2.3-MW wind turbine. Data from a meteorological tower and two light-detection and ranging (lidar) systems, one vertically profiling and another operated over a variety of scanning modes, were utilized to obtain forcing for the simulations, and to evaluate characteristics of the simulated wakes. Simulations produced wakes with physically consistent rotation and velocity deficits. Two surface heat flux values of 20 W m⁻² and 100 W m⁻² were used to examine the sensitivity of the simulated wakes to convective instability. Simulations using the smaller heat flux value showed good agreement with wake deficits observed during TWICS-2011, whereas those using the larger value showed enhanced spreading and more-rapid attenuation. This study demonstrates the utility of actuator models implemented within atmospheric LES to address a range of atmospheric science and engineering applications. In conclusion, validated implementation of the GAD in a numerical weather prediction code such as WRF will enable a wide range of studies related to the interaction of wind turbines with the atmosphere and surface.
Freitez, Juan A.; Sanchez, Morella; Ruette, Fernando
2009-08-13
Application of simulated annealing (SA) and simplified GSA (SGSA) techniques for parameter optimization of the parametric quantum chemistry method CATIVIC was performed. A set of organic molecules was selected to test these techniques. Comparison of the algorithms was carried out for error function minimization with respect to experimental values. Results show that SGSA is more efficient than SA with respect to computer time. Accuracy is similar in both methods; however, there are important differences in the final set of parameters.
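The simulated-annealing side of such a parameter optimization can be sketched with a Metropolis acceptance rule and geometric cooling; every setting below (step size, schedule, iteration count) is an illustrative assumption, not a description of the CATIVIC procedure:

```python
import math, random

def simulated_annealing(error, params, step=0.1, t0=1.0,
                        cooling=0.95, iterations=2000, seed=0):
    """Minimal SA minimizer for a parameter-fitting error function,
    in the spirit of tuning semiempirical parameters against
    experimental reference values."""
    rng = random.Random(seed)
    current, best = list(params), list(params)
    e_cur = e_best = error(current)
    t = t0
    for _ in range(iterations):
        # propose a random perturbation of one parameter
        cand = list(current)
        i = rng.randrange(len(cand))
        cand[i] += rng.uniform(-step, step)
        e_new = error(cand)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if e_new < e_cur or rng.random() < math.exp((e_cur - e_new) / t):
            current, e_cur = cand, e_new
            if e_cur < e_best:
                best, e_best = list(current), e_cur
        t *= cooling   # geometric cooling schedule
    return best, e_best
```

The generalized variants (GSA/SGSA) replace the Boltzmann acceptance and visiting distributions with Tsallis-statistics forms, which is what makes them cheaper per converged fit.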
Cariolle, D.; Lasserre-Bigorry, A.; Royer, J. F.; Geleyn, J. F.
1990-02-20
Ozone is treated as an interactive variable calculated by means of a continuity equation which takes account of advection and photochemical production and loss. The ozone concentration is also used to compute the heating and cooling rates due to the absorption of solar ultraviolet radiation, and the infrared emission in the stratosphere. The daytime ozone decrease due to the perturbed chlorine chemistry found at high southern latitudes is introduced as an extra loss in the ozone continuity equation. Results of the perturbed simulation show a very good agreement with the ozone measurements made during spring 1987. The simulation also shows the development of a high-latitude anomalous circulation, with a warming of the upper stratosphere resulting mainly from dynamical heating. In addition, a substantial ozone decrease is found at mid-latitudes in a thin stratospheric layer located between the 390 and the 470 K θ surfaces. A significant residual ozone decrease is found at the end of the model integration, 7 months after the final warming and the vortex breakdown. If there is a significant residual ozone decrease in the atmosphere, the ozone trends predicted by photochemical models which do not take into account the high-latitude perturbed chemistry are clearly inadequate. Finally, it is concluded that further model simulations at higher horizontal resolution, possibly with a better representation of the heterogeneous chemistry, will be needed to evaluate with more confidence the magnitude of the mid-latitudinal ozone depletion induced by the ozone hole formation.
ERIC Educational Resources Information Center
López-López, José Antonio; Botella, Juan; Sánchez-Meca, Julio; Marín-Martínez, Fulgencio
2013-01-01
Since heterogeneity between reliability coefficients is usually found in reliability generalization studies, moderator analyses constitute a crucial step for that meta-analytic approach. In this study, different procedures for conducting mixed-effects meta-regression analyses were compared. Specifically, four transformation methods for the…
Yang, Chun; Tang, Dalin; Atluri, Satya
2010-01-01
Cardiovascular disease (CVD) is becoming the number one cause of death worldwide. Atherosclerotic plaque rupture and progression are closely related to most severe cardiovascular syndromes such as heart attack and stroke. Mechanisms governing plaque rupture and progression are not well understood. A computational procedure based on a three-dimensional meshless generalized finite difference (MGFD) method and serial magnetic resonance imaging (MRI) data was introduced to quantify patient-specific carotid atherosclerotic plaque growth functions and simulate plaque progression. Participating patients were scanned three times (T1, T2, and T3, at intervals of about 18 months) to obtain plaque progression data. Vessel wall thickness (WT) changes were used as the measure for plaque progression. Since there was insufficient data with the current technology to quantify individual plaque component growth, the whole plaque was assumed to be uniform, homogeneous, isotropic, linear, and nearly incompressible. The linear elastic model was used. The 3D plaque model was discretized and solved using a meshless generalized finite difference (GFD) method. Four growth functions with different combinations of wall thickness, stress, and neighboring point terms were introduced to predict future plaque growth based on previous time point data. Starting from the T2 plaque geometry, plaque progression was simulated by solving the solid model and adjusting wall thickness using the plaque growth functions iteratively until T3 was reached. Numerically simulated plaque progression agreed very well with the target T3 plaque geometry, with errors of 11.56%, 6.39%, 8.24%, and 4.45% for the four growth functions, respectively. We believe this is the first time 3D plaque progression simulation based on multi-year patient-tracking data has been reported. Serial MRI-based progression simulation adds a time dimension to plaque vulnerability assessment and will improve prediction accuracy for potential plaque rupture.
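The iterative progression step can be caricatured as follows; the growth-function form and the coefficients `a`, `b`, `c` are hypothetical stand-ins for the fitted patient-specific functions, and the periodic neighbor stencil is only for brevity:

```python
import numpy as np

def grow_plaque(wt, stress, target_time, dt=1.0, a=0.002, b=-0.001, c=0.1):
    """Illustrative plaque-progression stepping: wall thickness WT is
    adjusted by a growth function combining local thickness, wall
    stress, and a neighboring-point smoothing term, iterated forward
    until the target follow-up time is reached."""
    wt = np.asarray(wt, dtype=float)
    t = 0.0
    while t < target_time:
        # mean of the two neighbors (periodic stencil for brevity)
        neighbor_mean = (np.roll(wt, 1) + np.roll(wt, -1)) / 2.0
        growth = a * wt + b * stress + c * (neighbor_mean - wt)
        wt = wt + dt * growth
        t += dt
    return wt
```

In the actual procedure the stress field is recomputed by solving the solid mechanics model at each iteration rather than held fixed as here.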
NASA Astrophysics Data System (ADS)
Sveinbjornsdottir, Arny Erla; Steen-Larsen, Hans Christian; Jonsson, Thorsteinn; Ritter, Francois; Riser, Camilla; Masson-Delmotte, Valerie; Bonne, Jean Louis; Dahl-Jensen, Dorthe
2014-05-01
During the fall of 2010 we installed an autonomous water vapor spectroscopy laser (Los Gatos Research analyzer) in a lighthouse on the southwest coast of Iceland (63.83°N, 21.47°W). Despite initial significant problems with volcanic ash, high wind, and attacks by sea gulls, the system has been continuously operational since the end of 2011 with limited downtime. The system automatically performs a calibration every 2 hours, which results in high accuracy and precision, allowing for analysis of the second-order parameter, d-excess, in the water vapor. We find a strong linear relationship between d-excess and local relative humidity (RH) when normalized to SST. The observed slope of approximately -45 o/oo/% is similar to the theoretical predictions by Merlivat and Jouzel [1979] for a smooth surface, but the calculated intercept is significantly lower than predicted. Despite this good linear agreement with theoretical calculations, mismatches arise between our data and the simulated seasonal cycle of water vapour isotopic composition from the LMDZiso GCM nudged to large-scale winds from atmospheric analyses. The GCM is able to capture neither the seasonal variations in local RH nor the seasonal variations in d-excess. Based on daily data, the performance of LMDZiso in resolving day-to-day variability is measured by the strength of the correlation coefficient between observations and model outputs. This correlation coefficient reaches ~0.8 for surface absolute humidity, but decreases to ~0.6 for δD and ~0.45 for d-excess. Moreover, the magnitude of day-to-day humidity variations is also underestimated by LMDZiso, which can explain the underestimated magnitude of isotopic depletion. Finally, the simulated and observed d-excess vs. RH relationships have similar slopes. We conclude that the underestimation of d-excess variability may partly arise from the poor performance of the humidity simulations.
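For reference, the d-excess analyzed here is defined as d = δD − 8·δ18O, and the reported slope comes from a straight-line fit of d-excess against normalized RH; the sketch below uses synthetic numbers, not the station data:

```python
import numpy as np

def d_excess(delta_d, delta_18o):
    """Deuterium excess, d = deltaD - 8 * delta18O (per mil)."""
    return np.asarray(delta_d) - 8.0 * np.asarray(delta_18o)

def fit_d_excess_vs_rh(rh, d):
    """Least-squares slope and intercept of d-excess against relative
    humidity normalized to SST, the relationship examined above."""
    slope, intercept = np.polyfit(rh, d, 1)
    return slope, intercept
```

The fitted slope is then directly comparable to the smooth-surface prediction of Merlivat and Jouzel [1979], and the intercept is the quantity the study finds to be lower than predicted.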
NASA Astrophysics Data System (ADS)
Roca-Fàbrega, Santi; Valenzuela, Octavio; Colín, Pedro; Figueras, Francesca; Krongold, Yair; Velázquez, Héctor; Avila-Reese, Vladimir; Ibarra-Medel, Hector
2016-06-01
We introduce a new set of simulations of Milky Way (MW)-sized galaxies using the AMR code ART + hydrodynamics in a Λ cold dark matter cosmogony. The simulation series is called GARROTXA and it follows the formation of a halo/galaxy from z = 60 to z = 0. The final virial mass of the system is ~7.4 × 10^11 M⊙. Our results are as follows. (a) Contrary to many previous studies, the circular velocity curve shows no central peak and overall agrees with recent MW observations. (b) Other quantities, such as M* (6 × 10^10 M⊙) and R_d (2.56 kpc), fall well inside the observational MW range. (c) We measure the disk-to-total ratio kinematically and find that D/T = 0.42. (d) The cold-gas fraction and star formation rate at z = 0, on the other hand, fall short of the values estimated for the MW. As a first scientific exploitation of the simulation series, we study the spatial distribution of hot X-ray luminous gas. We have found that most of this X-ray emitting gas is in a halo-like distribution accounting for an important fraction but not all of the missing baryons. An important amount of hot gas is also present in filaments. In all our models there is not a massive disk-like hot-gas distribution dominating the column density. Our analysis of hot-gas mock observations reveals that the homogeneity assumption leads to an overestimation of the total mass by factors of 3-5 or to an underestimation by factors of 0.7-0.1, depending on the observational method used. Finally, we confirm a clear correlation between the total hot-gas mass and the dark matter halo mass of galactic systems.
Viscous Overstability in Saturn's B-Ring. II. Hydrodynamic Theory and Comparison to Simulations
NASA Astrophysics Data System (ADS)
Schmidt, Jürgen; Salo, Heikki; Spahn, Frank; Petzschmann, Olaf
2001-10-01
We investigate the viscous oscillatory instability (overstability) of an unperturbed dense planetary ring, an instability that might play a role in the formation of radial structure in Saturn's B-ring. We generalize existing hydrodynamic models by including the heat flow equation in the analysis and compare our results to the development of overstable modes in local particle simulations. With the heat flow, in addition to the balance equations for mass and momentum, we take into account the balance law for the energy of the random motion; i.e., we allow for a thermal mode in a stability analysis of the stationary Keplerian flow. We also incorporate the effects of nonlocal transport of momentum and energy on the stability of the ring. In a companion paper (Salo, H., J. Schmidt, and F. Spahn 2001. Icarus, doi:10.1006/icar.2001.6680) we describe the determination of the local and nonlocal parts of the viscosity, the heat conductivity, the pressure, as well as the collisional cooling, together with their dependences on temperature and density, in local event-driven simulations of a planetary ring. The ring's self-gravity is taken into account in these simulations by an enhancement of the frequency of vertical oscillations Ω_z > Ω. We use these values as parameters in our hydrodynamic model for the comparison to overstability in simulated rings of meter-sized inelastic particles of large optical depth with Ω_z/Ω = 3.6. We find that the inclusion of the energy-balance equation has a stabilizing influence on the overstable modes, shifting the stability boundary to higher optical depths, and moderating the growth rates of the instability, as compared to a purely isothermal treatment. The non-isothermal model correctly predicts the growth rates and oscillation frequencies of overstable modes in the simulations, as well as the phase shifts and relative amplitudes of the perturbations in density and radial and tangential velocity.
NASA Technical Reports Server (NTRS)
Wang, J.-T.; Gates, W. L.; Kim, J.-W.
1984-01-01
A three-year simulation which prescribes seasonally varying solar radiation and sea surface temperature is the basis of the present study of the horizontal structure of the balances of kinetic and total energy simulated by Oregon State University's two-level atmospheric general circulation model. Mechanisms responsible for the local energy changes are identified, and the fulfilment of the energy balance requirement is examined. In January, the vertical integral of the total energy shows large amounts of external heating over the North Pacific and Atlantic, together with cooling over most of the land area of the Northern Hemisphere. In July, an overall seasonal reversal is found. Both seasons are also characterized by strong energy flux divergence in the tropics, in association with the poleward transport of heat and momentum.
NASA Astrophysics Data System (ADS)
Fujiwara, Hitoshi; Miyoshi, Yasunobu
2006-10-01
We have investigated characteristics of the large-scale traveling atmospheric disturbances (LS-TADs) generated during geomagnetically quiet and disturbed periods using a whole atmosphere general circulation model (GCM). The GCM simulations show that various TADs appear in association with the passage of regions with large temperature gradients near the solar terminator, the midnight temperature anomaly, and the auroral oval, which move with the Earth's rotation. These TADs, which are superimposed on each other, appear even during geomagnetically quiet periods. The TADs generated during a geomagnetically quiet period show structures extending in the longitudinal direction at high latitudes and in the latitudinal direction at mid- and low latitudes. These structures disappear after short-range propagation. The TADs generated during a geomagnetically disturbed period show structures extending widely in the longitudinal direction and propagate from high to low latitudes. These simulation results suggest that the TADs generated during geomagnetically quiet and disturbed periods differ in their generation mechanisms and features.
Mercado, Eduardo; Church, Barbara A
2016-08-01
Children with autism spectrum disorder (ASD) sometimes have difficulties learning categories. Past computational work suggests that such deficits may result from atypical representations in cortical maps. Here we use neural networks to show that idiosyncratic transformations of inputs can result in the formation of feature maps that impair category learning for some inputs, but not for other closely related inputs. These simulations suggest that large inter- and intra-individual variations in learning capacities shown by children with ASD across similar categorization tasks may similarly result from idiosyncratic perceptual encoding that is resistant to experience-dependent changes. If so, then both feedback- and exposure-based category learning should lead to heterogeneous, stimulus-dependent deficits in children with ASD. PMID:27193184
Ford, C.E.; March-Leuba, C.; Guimaraes, L.; Ugolini, D.
1991-01-01
GOOSE, prototype software for a fully interactive, object-oriented simulation environment, is being developed as part of the Advanced Controls Program at Oak Ridge National Laboratory. Dynamic models may easily be constructed and tested; fully interactive capabilities allow the user to alter model parameters and complexity without recompilation. This environment provides access to powerful tools, such as numerical integration packages, graphical displays, and online help. Portability has been an important design goal; the system was written in Objective-C in order to run on a wide variety of computers and operating systems, including UNIX workstations and personal computers. A detailed library of nuclear reactor components, currently under development, will also be described. 5 refs., 4 figs.
Procacci, Piero
2016-06-27
We present a new release (6.0β) of the ORAC program [Marsili et al. J. Comput. Chem. 2010, 31, 1106-1116] with hybrid OpenMP/MPI (Open Multi-Processing/Message Passing Interface) multilevel parallelism tailored for generalized ensemble (GE) and fast switching double annihilation (FS-DAM) nonequilibrium technology, aimed at evaluating binding free energies in drug-receptor systems on high performance computing platforms. The production of the GE or FS-DAM trajectories is handled using a weak scaling parallel approach on the MPI level only, while a strong scaling force decomposition scheme is implemented for intranode computations with shared memory access at the OpenMP level. The efficiency, simplicity, and inherent parallel nature of the ORAC implementation of the FS-DAM algorithm make the code a potentially effective tool for second-generation high-throughput virtual screening in drug discovery and design. The code, along with documentation, testing, and ancillary tools, is distributed under the provisions of the General Public License and can be freely downloaded at www.chim.unifi.it/orac . PMID:27231982
H. Qin and X. Guan
2008-02-11
A variational symplectic integrator for the guiding-center motion of charged particles in general magnetic fields is developed for long-time simulation studies of magnetized plasmas. Instead of discretizing the differential equations of the guiding-center motion, the action of the guiding-center motion is discretized and minimized to obtain the iteration rules for advancing the dynamics. The variational symplectic integrator exactly conserves a discrete Lagrangian symplectic structure, and has better numerical properties over long integration times than standard integrators, such as the standard and variable time-step fourth-order Runge-Kutta methods.
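The action-discretization idea can be illustrated on a toy problem. The sketch below is an assumption for illustration only: it uses a 1D harmonic-oscillator Lagrangian rather than the guiding-center Lagrangian of the paper, builds the discrete Lagrangian with a midpoint rule, and solves the discrete Euler-Lagrange equation for the next position.

```python
def variational_step(q_prev, q_curr, h, omega=1.0):
    """One step of a midpoint-rule variational integrator for the toy
    Lagrangian L(q, qdot) = 0.5*qdot**2 - 0.5*omega**2*q**2.

    Discrete Lagrangian: L_d(a, b) = h * L((a+b)/2, (b-a)/h).
    The discrete Euler-Lagrange equation
        D2 L_d(q_{k-1}, q_k) + D1 L_d(q_k, q_{k+1}) = 0
    is quadratic here, so q_{k+1} can be solved for explicitly:
        (1+c) q_{k+1} = 2 (1-c) q_k - (1+c) q_{k-1},  c = (h*omega)**2 / 4.
    """
    c = (h * omega) ** 2 / 4.0
    return (2.0 * (1.0 - c) * q_curr - (1.0 + c) * q_prev) / (1.0 + c)
```

Because the update comes from a discrete variational principle rather than from discretizing the equations of motion, the oscillation amplitude stays bounded over arbitrarily long runs instead of drifting as with a non-symplectic scheme.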
Pasyanos, M; Ramirez, A; Franz, G
2005-02-04
Probabilistic inverse techniques, like the Markov Chain Monte Carlo (MCMC) algorithm, have had recent success in combining disparate data types into a consistent model. The Stochastic Engine (SE) initiative developed this method and applied it to a number of earth science and national security applications. For instance, while the method was originally developed to solve ground flow problems (Aines et al.), it has also been applied to atmospheric modeling and engineering problems. The investigators of this proposal have applied the SE to regional-scale lithospheric earth models, which have applications to hazard analysis and nuclear explosion monitoring. While this broad applicability is appealing, tailoring the method for each application is inefficient and time-consuming. Stochastic methods invert data by probabilistically sampling the model space, comparing observations predicted by the proposed model to observed data, and preferentially accepting models that produce a good fit, generating a posterior distribution. In other words, the method "inverts" for a model or, more precisely, a distribution of models, by a series of forward calculations. While powerful, the technique is often challenging to implement, as the mapping from model space to data needs to be "customized" for each data type. For example, all proposed models might need to be transformed through sensitivity kernels from 3-D models to 2-D models in one step in order to compute path integrals, and transformed in a completely different manner in the next step. We seek technical enhancements that widen the applicability of the Stochastic Engine by generalizing some aspects of the method (i.e., model-to-data transformation types, configuration, model representation). Initially, we wish to generalize the transformations that are necessary to match the observations to proposed models. These transformations are sufficiently general not to pertain to any single application. This
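The sample-predict-compare-accept loop described above can be sketched generically. This is a minimal Metropolis sampler, not the Stochastic Engine itself; the `forward` and `proposal` callables and the Gaussian misfit likelihood are illustrative assumptions standing in for the application-specific model-to-data transformations.

```python
import math
import random

def metropolis(forward, data, sigma, proposal, m0, n_steps, seed=0):
    """Minimal Metropolis sampler: propose a model, run the forward
    calculation, and accept with probability min(1, L(new)/L(old)),
    where L is a Gaussian misfit likelihood.  Returns the chain of
    sampled models, an approximation of the posterior distribution."""
    rng = random.Random(seed)

    def log_like(m):
        pred = forward(m)  # the "customized" model-to-data transformation
        return -0.5 * sum((p - d) ** 2 for p, d in zip(pred, data)) / sigma ** 2

    m, ll = m0, log_like(m0)
    samples = []
    for _ in range(n_steps):
        m_new = proposal(m, rng)
        ll_new = log_like(m_new)
        if math.log(rng.random()) < ll_new - ll:  # accept/reject step
            m, ll = m_new, ll_new
        samples.append(m)
    return samples
```

For example, with `forward` predicting y = m*x from a scalar slope m and a Gaussian random-walk proposal, the chain concentrates around the slope that best fits the data; swapping in a different `forward` retargets the same sampler to a different data type, which is exactly the generalization the proposal seeks.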
NASA Astrophysics Data System (ADS)
Dietze, Heiner; Löptien, Ulrike
2016-08-01
Deoxygenation in the Baltic Sea endangers fish yields and favours noxious algal blooms. Yet, vertical transport processes ventilating the oxygen-deprived waters at depth and replenishing nutrient-deprived surface waters (thereby fuelling export of organic matter to depth) are not comprehensively understood. Here, we investigate the effects of the interaction between surface currents and winds on upwelling in an eddy-rich general ocean circulation model of the Baltic Sea. Contrary to expectations, we find that accounting for current-wind effects inhibits the overall vertical exchange between oxygenated surface waters and oxygen-deprived water at depth. At major upwelling sites, however (e.g. off the southern coasts of Sweden and Finland), the reverse holds: the interaction between topographically steered surface currents and winds blowing over the sea results in a climatological sea surface temperature cooling of 0.5 K. This implies that current-wind effects drive substantial local upwelling of cold and nutrient-replete waters.
NASA Astrophysics Data System (ADS)
Huang, Whitney K.; Stein, Michael L.; McInerney, David J.; Sun, Shanshan; Moyer, Elisabeth J.
2016-07-01
Changes in extreme weather may produce some of the largest societal impacts of anthropogenic climate change. However, it is intrinsically difficult to estimate changes in extreme events from the short observational record. In this work we use millennial runs from the Community Climate System Model version 3 (CCSM3) in equilibrated pre-industrial and possible future (700 and 1400 ppm CO2) conditions to examine both how extremes change in this model and how well these changes can be estimated as a function of run length. We estimate changes to distributions of future temperature extremes (annual minima and annual maxima) in the contiguous United States by fitting generalized extreme value (GEV) distributions. Using 1000-year pre-industrial and future time series, we show that warm extremes largely change in accordance with mean shifts in the distribution of summertime temperatures. Cold extremes warm more than mean shifts in the distribution of wintertime temperatures, but changes in GEV location parameters are generally well explained by the combination of mean shifts and reduced wintertime temperature variability. For cold extremes at inland locations, return levels at long recurrence intervals show additional effects related to changes in the spread and shape of GEV distributions. We then examine uncertainties that result from using shorter model runs. In theory, the GEV distribution can allow prediction of infrequent events using time series shorter than the recurrence interval of those events. To investigate how well this approach works in practice, we estimate 20-, 50-, and 100-year extreme events using segments of varying lengths. We find that even using GEV distributions, time series of comparable or shorter length than the return period of interest can lead to very poor estimates. These results suggest caution when attempting to use short observational time series or model runs to infer infrequent extremes.
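The return-level calculation underlying such estimates follows directly from the fitted GEV parameters. The sketch below is standard GEV theory rather than code from the study; `mu`, `sigma`, and `xi` are assumed to come from a separate fit to annual maxima.

```python
import math

def gev_return_level(mu, sigma, xi, m):
    """m-year return level for a GEV(mu, sigma, xi) distribution of
    annual maxima, i.e. the level z_m exceeded on average once every
    m years:  P(Z > z_m) = 1/m  implies
        z_m = mu + (sigma/xi) * (y**(-xi) - 1),  y = -log(1 - 1/m),
    with the xi -> 0 (Gumbel) limit  z_m = mu - sigma * log(y)."""
    y = -math.log(1.0 - 1.0 / m)
    if abs(xi) < 1e-9:  # Gumbel limit
        return mu - sigma * math.log(y)
    return mu + (sigma / xi) * (y ** (-xi) - 1.0)
```

This formula is why, in principle, events rarer than the record length can be estimated: the 100-year level is an extrapolation of the fitted tail, which is also why, as the abstract cautions, short series can give very poor estimates when the tail parameters are themselves uncertain.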
Ammi, Mehdi; Peyron, Christine
2016-12-01
Despite increasing popularity, quality improvement programs (QIP) have had modest and variable impacts on enhancing the quality of physician practice. We investigate the heterogeneity of physicians' preferences as a potential explanation of these mixed results in France, where the national voluntary QIP - the CAPI - has been cancelled due to its unpopularity. We rely on a discrete choice experiment to elicit heterogeneity in physicians' preferences for the financial and non-financial components of QIP. Using mixed and latent class logit models, results show that the two models should be used in concert to shed light on different aspects of the heterogeneity in preferences. In particular, the mixed logit demonstrates that heterogeneity in preferences is concentrated on the pay-for-performance component of the QIP, while the latent class model shows that physicians can be grouped in four homogeneous groups with specific preference patterns. Using policy simulation, we compare the French CAPI with other possible QIPs, and show that the majority of the physician subgroups modelled dislike the CAPI, while favouring a QIP using only non-financial interventions. We underline the importance of modelling preference heterogeneity in designing and implementing QIPs. PMID:27637834
NASA Technical Reports Server (NTRS)
Shen, B.-W.; Atlas, R.; Reale, O.; Lin, S.-J.; Chern, J.-D.; Chang, J.; Henze, C.
2006-01-01
Hurricane Katrina was the sixth most intense hurricane in the Atlantic. Katrina's forecast posed major challenges, the most important of which was its rapid intensification. Hurricane intensity forecasting with General Circulation Models (GCMs) is difficult because of their coarse resolution. In this article, six 5-day simulations with the ultra-high resolution finite-volume GCM are conducted on the NASA Columbia supercomputer to show the effects of increased resolution on the intensity predictions of Katrina. It is found that the 0.125 degree runs give tracks comparable to those of the 0.25 degree runs, but provide better intensity forecasts, bringing the center pressure much closer to observations, with differences of only plus or minus 12 hPa. In the runs initialized at 1200 UTC 25 August, the 0.125 degree run simulates a more realistic intensification rate and better near-eye wind distributions. Moreover, the first global 0.125 degree simulation without convection parameterization (CP) produces even better intensity evolution and near-eye winds than the control run with CP.
Shabaev, Andrew; Lambrakos, Samuel G; Bernstein, Noam; Jacobs, Verne L; Finkenstadt, Daniel
2011-04-01
We have developed a general framework for numerical simulation of various types of scenarios that can occur for the detection of improvised explosive devices (IEDs) through the use of excitation using incident electromagnetic waves. A central component model of this framework is an S-matrix representation of a multilayered composite material system. Each layer of the system is characterized by an average thickness and an effective electric permittivity function. The outputs of this component are the reflectivity and the transmissivity as functions of frequency and angle of the incident electromagnetic wave. The input of the component is a parameterized analytic-function representation of the electric permittivity as a function of frequency, which is provided by another component model of the framework. The permittivity function is constructed by fitting response spectra calculated using density functional theory (DFT) and parameter adjustment according to any additional information that may be available, e.g., experimentally measured spectra or theory-based assumptions concerning spectral features. A prototype simulation is described that considers response characteristics for THz excitation of the high explosive β-HMX. This prototype simulation includes a description of a procedure for calculating response spectra using DFT as input to the S-matrix model. For this purpose, the DFT software NRLMOL was adopted.
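The multilayer reflectivity computation can be sketched with the closely related characteristic (transfer) matrix method; this is an illustrative stand-in for the paper's S-matrix formulation, not its implementation (the two give equivalent results for lossless stacks, the S-matrix form being the more numerically robust for strongly absorbing layers), and it assumes normal incidence with layers given as (refractive index, thickness) pairs.

```python
import cmath
import math

def reflectivity(layers, lam, n0=1.0, ns=1.0):
    """Normal-incidence reflectivity of a multilayer stack via the
    characteristic (transfer) matrix method.  layers: list of
    (refractive_index, thickness) pairs; lam: vacuum wavelength in the
    same length units as thickness; n0/ns: ambient/substrate indices."""
    m11, m12, m21, m22 = 1.0, 0.0, 0.0, 1.0  # start from the identity matrix
    for n, d in layers:
        delta = 2.0 * math.pi * n * d / lam      # phase thickness of the layer
        c, s = cmath.cos(delta), cmath.sin(delta)
        a11, a12, a21, a22 = c, 1j * s / n, 1j * n * s, c
        m11, m12, m21, m22 = (m11 * a11 + m12 * a21, m11 * a12 + m12 * a22,
                              m21 * a11 + m22 * a21, m21 * a12 + m22 * a22)
    num = n0 * (m11 + ns * m12) - (m21 + ns * m22)
    den = n0 * (m11 + ns * m12) + (m21 + ns * m22)
    return abs(num / den) ** 2  # reflected power fraction
```

With no layers this reduces to the Fresnel reflectivity of the bare interface, and a quarter-wave layer with index sqrt(ns) acts as an antireflection coating, both standard sanity checks for this class of solver.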
Boyle, J.S.
1994-11-01
Divergence and convergence centers at 200 hPa and mean sea level pressure (MSLP) cyclones were located every 6 hr for a 10-yr general circulation model (GCM) simulation with the ECMWF (Cycle 36) for the boreal winters from 1980 to 1988. The simulation used the observed monthly mean sea surface temperature (SST) for the decade. Analysis of the frequency, location, and strength of these centers and cyclones gives insight into the dynamical response of the model to the varying SST. The results indicate that (1) the model produces reasonable climatologies of upper-level divergence and MSLP cyclones; (2) the model distribution of anomalies of divergence and convergence centers and MSLP cyclones is consistent with observations for the 1982-83 and 1986-87 El Niño events; (3) the tropical Indian Ocean is the region of greatest divergence activity and interannual variability in the model; (4) the variability of the divergence centers is greater than that of the convergence centers; (5) strong divergence centers occur chiefly over the ocean in the midlatitudes but are more land-based in the tropics, except in the Indian Ocean; and (6) locations of divergence and convergence centers can be a useful tool for the intercomparison of global atmospheric simulations.
Yang, Chun; Tang, Dalin; Atluri, Satya
2011-01-01
Previously, we introduced a computational procedure based on a three-dimensional meshless generalized finite difference (MGFD) method and serial magnetic resonance imaging (MRI) data to quantify patient-specific carotid atherosclerotic plaque growth functions and simulate plaque progression. Structure-only models were used in our previous report. In this paper, fluid-structure interaction (FSI) was added to improve prediction accuracy. One participating patient was scanned three times (T1, T2, and T3, at intervals of about 18 months) to obtain plaque progression data. Blood flow was assumed to be laminar, Newtonian, viscous, and incompressible. The Navier-Stokes equations with arbitrary Lagrangian-Eulerian (ALE) formulation were used as the governing equations. Plaque material was assumed to be uniform, homogeneous, isotropic, linear, and nearly incompressible, and the linear elastic model was used. The 3D FSI plaque model was discretized and solved using a meshless generalized finite difference (GFD) method. Growth functions with (a) morphology alone; (b) morphology and plaque wall stress (PWS); (c) morphology and flow shear stress (FSS); and (d) morphology, PWS, and FSS were introduced to predict future plaque growth based on previous time point data. Starting from the T2 plaque geometry, plaque progression was simulated by solving the FSI model and adjusting plaque geometry using the plaque growth functions iteratively until T3 was reached. Numerically simulated plaque progression agreed very well with the target T3 plaque geometry, with errors of 8.62%, 7.22%, 5.77%, and 4.39% for the four growth functions, respectively; the growth function including morphology, plaque wall stress, and flow shear stress terms gave the best predictions. Adding the flow shear stress term to the growth function improved the prediction error from 7.22% to 4.39%, a 40% improvement. We believe this is the first time a 3D plaque progression FSI simulation based on multi-year patient-tracking data has been reported. Serial MRI-based progression
NASA Technical Reports Server (NTRS)
Kahre, Melinda A.; Haberle, Robert; Hollingsworth, Jeffery L.
2012-01-01
The dust cycle is critically important for the current climate of Mars. The radiative effects of dust impact the thermal and dynamical state of the atmosphere [1,2,3]. Although dust is present in the Martian atmosphere throughout the year, the level of dustiness varies with season. The atmosphere is generally the dustiest during northern fall and winter and the least dusty during northern spring and summer [4]. Dust particles are lifted into the atmosphere by dust storms that range in size from meters to thousands of kilometers across [5]. Regional storm activity is enhanced before northern winter solstice (Ls = 200-240 degrees) and after northern winter solstice (Ls = 305-340 degrees), which produces elevated atmospheric dust loadings during these periods [5,6,7]. These pre- and post-solstice increases in dust loading are thought to be associated with transient eddy activity in the northern hemisphere, with cross-equatorial transport of dust leading to enhanced dust lifting in the southern hemisphere [6]. Interactive dust cycle studies with Mars General Circulation Models (MGCMs) have included the lifting, transport, and sedimentation of radiatively active dust. Although the predicted global dust loadings from these simulations capture some aspects of the observed dust cycle, there are marked differences between the simulated and observed dust cycles [8,9,10]. Most notably, the maximum dust loading is robustly predicted by models to occur near northern winter solstice and is due to dust lifting associated with downslope flows on the flanks of the Hellas basin. Thus far, models have had difficulty simulating the observed pre- and post-solstice peaks in dust loading.
NASA Technical Reports Server (NTRS)
English, Thomas
2005-01-01
A standard tool of reliability analysis used at NASA-JSC is the event tree. An event tree is simply a probability tree, with the probabilities determining the next step through the tree specified at each node. The nodal probabilities are determined by a reliability study of the physical system at work for a particular node. The reliability study performed at a node is typically referred to as a fault tree analysis, and potentially a fault tree exists for each node on the event tree. When examining an event tree it is obvious why the event tree/fault tree approach has been adopted: typical event trees are quite complex in nature, and the event tree/fault tree approach provides a systematic and organized approach to reliability analysis. The purpose of this study was twofold. First, we wanted to explore the possibility that a semi-Markov process can create dependencies between sojourn times (the times it takes to transition from one state to the next) that can decrease the uncertainty when estimating times to failure. Using a generalized semi-Markov model, we studied a four-element reliability model and were able to demonstrate such sojourn time dependencies. Second, we wanted to study the use of semi-Markov processes to introduce a time variable into the event tree diagrams that are commonly developed in PRA (Probabilistic Risk Assessment) analyses. Event tree end states which change with time are more representative of failure scenarios than are the usual static probability-derived end states.
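The semi-Markov idea of attaching sojourn-time distributions to event-tree transitions can be sketched with a small Monte Carlo walk. The two-node tree and exponential sojourn times below are hypothetical illustrations, not the four-element model of the study.

```python
import random

def sample_time_to_end_state(tree, rng):
    """Walk an event tree as a semi-Markov process: at each non-absorbing
    state, draw a sojourn time from that state's distribution, then branch
    to a successor with the given nodal probability.  Returns the total
    elapsed time and the absorbing end state reached."""
    state, t = "start", 0.0
    while state in tree:                     # absorbing states are not keys
        sojourn, branches = tree[state]
        t += sojourn(rng)                    # time spent before transitioning
        r, acc = rng.random(), 0.0
        for nxt, p in branches:              # pick a branch by its probability
            acc += p
            if r < acc:
                state = nxt
                break
    return t, state

# Hypothetical two-node tree; sojourn times are exponential draws.
toy_tree = {
    "start":    (lambda rng: rng.expovariate(1.0), [("degraded", 0.3), ("ok", 0.7)]),
    "degraded": (lambda rng: rng.expovariate(0.5), [("fail", 0.6), ("ok", 0.4)]),
}
```

Repeated sampling yields a distribution of times to each end state rather than a single static end-state probability, which is precisely the time variable the study seeks to introduce; sojourn-time dependencies can be added by letting a state's distribution depend on times drawn earlier in the walk.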
Fujita, Shin; Takebe, Masahiro; Ushio, Wataru; Shimoyama, Hiroshi
2010-01-01
The paraxial trajectory method has been generalized for application to the cathode rays inside electron guns. The generalized method can handle rays that initially make a large angle with the optical axis with satisfactory accuracy. The key to the success of the generalization is the adoption of the trigonometric function sine for the trajectory slope specification, instead of the conventional use of the tangent. Formulas have been derived to relate the ray conditions (position and slope of the ray at reference planes) on the cathode to those at the crossover plane using third-order polynomial functions. Some of the polynomial coefficients can be used as the optical parameters in the characterization of electron sources; the electron gun focal length gives a quantitative estimate of both the crossover size and the angular current intensity. An electron gun simulation program G-optk has been developed based on the mathematical formulations presented in the article. The program calculates the principal paraxial trajectories and the relevant optical parameters from axial potentials and fields. It gives electron-optical-column designers a clear physical picture of the electron gun much faster than conventional ray-tracing methods.
NASA Technical Reports Server (NTRS)
Crumrine, R. J.
1976-01-01
An investigation of the effects of various lateral course widths and runway lengths for manual CAT I Microwave Landing System instrument approaches was carried out with instrument-rated pilots in a General Aviation simulator. Data are presented on the lateral dispersion at the touchdown zone and at the middle and outer markers, for approaches to 3,000- and 8,000-foot (and trial 12,000-foot) runways with full-scale angular lateral course widths of ±1.19 deg, ±2.35 deg, and ±3.63 deg. The distance from touchdown where the localizer deviation went to full scale was also recorded. Pilot acceptance was measured according to the Cooper-Harper rating system.
Rutherford, Helena J.V.; Goldberg, Benjamin; Luyten, Patrick; Bridgett, David J.; Mayes, Linda C.
2013-01-01
Parental reflective functioning represents the capacity of a parent to think about their own and their child’s mental states and how these mental states may influence behavior. Here we examined whether this capacity as measured by the Parental Reflective Functioning Questionnaire relates to tolerance of infant distress by asking mothers (N=21) to soothe a life-like baby simulator (BSIM) that was inconsolable, crying for a fixed time period unless the mother chose to stop the interaction. Increasing maternal interest and curiosity in their child’s mental states, a key feature of parental reflective functioning, was associated with longer persistence times with the BSIM. Importantly, on a non-parent distress tolerance task, parental reflective functioning was not related to persistence times. These findings suggest that parental reflective functioning may be related to tolerance of infant distress, but not distress tolerance more generally, and thus may reflect specificity to parenting-specific persistence behavior. PMID:23906942
NASA Technical Reports Server (NTRS)
Roble, R. G.; Ridley, E. C.
1994-01-01
A new simulation model of the mesosphere, thermosphere, and ionosphere with coupled electrodynamics has been developed and used to calculate the global circulation, temperature and compositional structure between 30-500 km for equinox, solar cycle minimum, geomagnetic quiet conditions. The model incorporates all of the features of the National Center for Atmospheric Research (NCAR) thermosphere-ionosphere-electrodynamics general circulation model (TIE-GCM) but the lower boundary has been extended downward from 97 to 30 km (10 mb) and it includes the physical and chemical processes appropriate for the mesosphere and upper stratosphere. The first simulation used Rayleigh friction to represent gravity wave drag in the middle atmosphere and although it was able to close the mesospheric jets it severely damped the diurnal tide. Reduced Rayleigh friction allowed the tide to penetrate to thermospheric heights but did not close the jets. A gravity wave parameterization developed by Fritts and Lu (1993) allows both features to exist simultaneously with the structure of tides and mean flow dependent upon the strength of the gravity wave source. The model calculates a changing dynamic structure with the mean flow and diurnal tide dominant in the mesosphere, the in-situ generated semi-diurnal tide dominating the lower thermosphere and an in-situ generated diurnal tide in the upper thermosphere. The results also show considerable interaction between dynamics and composition, especially atomic oxygen between 85 and 120 km.
NASA Astrophysics Data System (ADS)
Kroonblawd, Matthew P.; Mathew, Nithin; Jiang, Shan; Sewell, Thomas D.
2016-10-01
A Generalized Crystal-Cutting Method (GCCM) is developed that automates construction of three-dimensionally periodic simulation cells containing arbitrarily oriented single crystals and thin films, two-dimensionally (2D) infinite crystal-crystal homophase and heterophase interfaces, and nanostructures with intrinsic N-fold interfaces. The GCCM is based on a simple mathematical formalism that facilitates easy definition of constraints on cut crystal geometries. The method preserves the translational symmetry of all Bravais lattices and thus can be applied to any crystal described by such a lattice including complicated, low-symmetry molecular crystals. Implementations are presented with carefully articulated combinations of loop searches and constraints that drastically reduce computational complexity compared to simple loop searches. Orthorhombic representations of monoclinic and triclinic crystals found using the GCCM overcome some limitations in standard distributions of popular molecular dynamics software packages. Stability of grain boundaries in β-HMX was investigated using molecular dynamics and molecular statics simulations with 2D infinite crystal-crystal homophase interfaces created using the GCCM. The order of stabilities for the four grain boundaries studied is predicted to correlate with the relative prominence of particular crystal faces in lab-grown β-HMX crystals. We demonstrate how nanostructures can be constructed through simple constraints applied in the GCCM framework. Example GCCM constructions are shown that are relevant to some current problems in materials science, including shock sensitivity of explosives, layered electronic devices, and pharmaceuticals.
Fujisawa, Tomochika; Barraclough, Timothy G
2013-09-01
DNA barcoding-type studies assemble single-locus data from large samples of individuals and species, and have provided new kinds of data for evolutionary surveys of diversity. An important goal of many such studies is to delimit evolutionarily significant species units, especially in biodiversity surveys from environmental DNA samples. The Generalized Mixed Yule Coalescent (GMYC) method is a likelihood method for delimiting species by fitting within- and between-species branching models to reconstructed gene trees. Although the method has been widely used, it has not previously been described in detail or evaluated fully against simulations of alternative scenarios of true patterns of population variation and divergence between species. Here, we present important reformulations to the GMYC method as originally specified, and demonstrate its robustness to a range of departures from its simplifying assumptions. The main factor affecting the accuracy of delimitation is the mean population size of species relative to divergence times between them. Other departures from the model assumptions, such as varying population sizes among species, alternative scenarios for speciation and extinction, and population growth or subdivision within species, have relatively smaller effects. Our simulations demonstrate that support measures derived from the likelihood function provide a robust indication of when the model performs well and when it leads to inaccurate delimitations. Finally, the so-called single-threshold version of the method outperforms the multiple-threshold version on simulated data: we argue that this might represent a fundamental limit due to the nature of evidence used to delimit species in this approach. Together with other studies comparing its performance relative to other methods, our findings support the robustness of GMYC as a tool for delimiting species when only single-locus information is available.
NASA Astrophysics Data System (ADS)
Rast, S.; Fries, P. H.; Belorizky, E.; Borel, A.; Helm, L.; Merbach, A. E.
2001-10-01
The time correlation functions of the electronic spin components of a metal ion without orbital degeneracy in solution are computed. The approach is based on the numerical solution of the time-dependent Schrödinger equation for a stochastic perturbing Hamiltonian which is simulated by a Monte Carlo algorithm using discrete time steps. The perturbing Hamiltonian is quite general, including the superposition of both the static mean crystal field contribution in the molecular frame and the usual transient ligand field term. The Hamiltonian of the static crystal field can involve the terms of all orders, which are invariant under the local group of the average geometry of the complex. In the laboratory frame, the random rotation of the complex is the only source of modulation of this Hamiltonian, whereas an additional Ornstein-Uhlenbeck process is needed to describe the time fluctuations of the Hamiltonian of the transient crystal field. A numerical procedure for computing the electronic paramagnetic resonance (EPR) spectra is proposed and discussed. For the [Gd(H2O)8]3+ octa-aqua ion and the [Gd(DOTA)(H2O)]- complex [DOTA=1,4,7,10-tetrakis(carboxymethyl)-1,4,7,10-tetraazacyclododecane] in water, the predictions of the Redfield relaxation theory are compared with those of the Monte Carlo approach. The Redfield approximation is shown to be accurate for all temperatures and for electronic resonance frequencies at and above X-band, justifying the previous interpretations of EPR spectra. At lower frequencies the transverse and longitudinal relaxation functions derived from the Redfield approximation display significantly faster decays than the corresponding simulated functions. The practical interest of this simulation approach is underlined.
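The transient crystal-field fluctuations above are driven by an Ornstein-Uhlenbeck process. A minimal discrete-time Monte Carlo integration of such a process, the stochastic building block of simulations of this kind, might look as follows; parameter names and the Euler-Maruyama scheme are illustrative, not taken from the paper.

```python
import math
import random

def ou_trajectory(n_steps, dt, tau, sigma, x0=0.0, seed=0):
    """Euler-Maruyama integration of an Ornstein-Uhlenbeck process
    dx = -(x/tau) dt + sigma*sqrt(2/tau) dW, whose stationary standard
    deviation is sigma and correlation time is tau -- a stand-in for a
    fluctuating transient crystal-field amplitude."""
    rng = random.Random(seed)
    x, out = x0, []
    for _ in range(n_steps):
        # Deterministic relaxation toward zero plus a Gaussian kick.
        x += -(x / tau) * dt + sigma * math.sqrt(2.0 * dt / tau) * rng.gauss(0.0, 1.0)
        out.append(x)
    return out
```

A long trajectory fluctuates around zero with sample standard deviation close to sigma, which is how the discrete time steps of the Monte Carlo algorithm reproduce the statistics of the stochastic Hamiltonian term.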
A New Simulation Technique for Study of Collisionless Shocks: Self-Adaptive Simulations
Karimabadi, H.; Omelchenko, Y.; Driscoll, J.; Krauss-Varban, D.; Fujimoto, R.; Perumalla, K.
2005-08-01
The traditional technique for simulating physical systems modeled by partial differential equations is by means of time-stepping methodology where the state of the system is updated at regular discrete time intervals. This method has inherent inefficiencies. In contrast to this methodology, we have developed a new asynchronous type of simulation based on a discrete-event-driven (as opposed to time-driven) approach, where the simulation state is updated on a 'need-to-be-done-only' basis. Here we report on this new technique, show an example of particle acceleration in a fast magnetosonic shockwave, and briefly discuss additional issues that we are addressing concerning algorithm development and parallel execution.
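The 'need-to-be-done-only' update policy is the defining feature of discrete-event simulation: a priority queue of timestamped events replaces the fixed time step. A generic sketch of such a loop (our toy illustration, not the authors' plasma code):

```python
import heapq

def run_des(events, stop_time):
    """Minimal discrete-event loop: state is updated only when an event
    fires, never at fixed intervals. Each event is (time, name, action);
    an action may schedule further events by returning them."""
    queue = list(events)
    heapq.heapify(queue)
    log = []
    while queue:
        t, name, action = heapq.heappop(queue)  # always the earliest event
        if t > stop_time:
            break
        log.append((t, name))
        for new_event in action(t):
            heapq.heappush(queue, new_event)
    return log

# Example workload: a self-rescheduling "ping" every 2.5 time units.
def ping(t):
    return [(t + 2.5, "ping", ping)] if t + 2.5 <= 10 else []
```

Where activity is sparse, the queue stays short and the simulator does no work between events, which is the source of the efficiency gain over time-stepping claimed above.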
NASA Technical Reports Server (NTRS)
Lee, M.-I.; Choi, I.; Tao, W.-K.; Schubert, S. D.; Kang, I.-K.
2010-01-01
The mechanisms of summertime diurnal precipitation in the US Great Plains were examined with the two-dimensional (2D) Goddard Cumulus Ensemble (GCE) cloud-resolving model (CRM). The model was constrained by the observed large-scale background state and surface flux derived from the Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Program's Intensive Observing Period (IOP) data at the Southern Great Plains (SGP). The model, when continuously forced by realistic surface flux and large-scale advection, simulates reasonably well the temporal evolution of the observed rainfall episodes, particularly for the strongly forced precipitation events. However, the model exhibits a deficiency for the weakly forced events driven by diurnal convection. Additional tests were run with the GCE model in order to discriminate between the mechanisms that determine daytime and nighttime convection. In these tests, the model was constrained with the same repeating diurnal variation in the large-scale advection and/or surface flux. The results indicate that it is primarily the surface heat and moisture flux that is responsible for the development of deep convection in the afternoon, whereas the large-scale upward motion and associated moisture advection play an important role in preconditioning nocturnal convection. In the nighttime, high clouds are continuously built up through their interaction and feedback with long-wave radiation, eventually initiating deep convection from the boundary layer. Without these upper-level destabilization processes, the model tends to produce only daytime convection in response to boundary layer heating. This study suggests that the correct simulation of the diurnal variation in precipitation requires that the free-atmospheric destabilization mechanisms resolved in the CRM simulation must be adequately parameterized in current general circulation models (GCMs), many of which are overly sensitive to the parameterized boundary layer heating.
Doostparast Torshizi, Abolfazl; Fazel Zarandi, Mohammad Hossein
2015-09-01
This paper considers microarray gene expression data clustering using a novel two-stage meta-heuristic algorithm based on the concept of α-planes in general type-2 fuzzy sets. The main aim of this research is to present a powerful data clustering approach capable of dealing with highly uncertain environments. In this regard, first, a new objective function using α-planes for the general type-2 fuzzy c-means clustering algorithm is presented. Then, based on the philosophy of the meta-heuristic optimization framework 'Simulated Annealing', a two-stage optimization algorithm is proposed. The first stage of the proposed approach is devoted to the annealing process together with its proposed perturbation mechanisms. After termination of the first stage, its output is passed to the second stage, where it is checked against other possible local optima through a heuristic algorithm. The output of this stage is then re-entered into the first stage until no better solution is obtained. The proposed approach has been evaluated using several synthesized datasets and three microarray gene expression datasets. Extensive experiments demonstrate the capabilities of the proposed approach compared with some of the state-of-the-art techniques in the literature.
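The annealing stage described above follows the standard simulated-annealing template: propose a perturbation, always accept improvements, and accept worsening moves with a temperature-dependent Boltzmann probability. A generic sketch, with the clustering-specific objective and perturbation mechanisms abstracted into caller-supplied functions (the parameter names and cooling schedule are ours):

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.995, n_iter=5000, seed=0):
    """Generic annealing loop: `cost` maps a state to its objective value,
    `neighbor(x, rng)` proposes a perturbed state. Returns the best state
    seen and its cost. Illustrative template, not the paper's algorithm."""
    rng = random.Random(seed)
    x, fx, t = x0, cost(x0), t0
    best, fbest = x, fx
    for _ in range(n_iter):
        y = neighbor(x, rng)
        fy = cost(y)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # geometric cooling schedule
    return best, fbest
```

The early high-temperature phase lets the search escape local optima, which is the role of the first stage above; as the temperature decays, the loop behaves like greedy local refinement.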
Hrabovský, Miroslav
2014-01-01
The purpose of the study is to propose an extension of a one-dimensional speckle correlation method, originally intended for determining a one-dimensional object translation, to the detection of general in-plane object translation. To that end, a numerical simulation of the displacement of the speckle field resulting from a general in-plane object translation is presented. The translation components ax and ay, representing the projections of the object displacement vector a onto the x- and y-axes in the object plane (x, y), are evaluated separately by means of the extended one-dimensional speckle correlation method. Moreover, the method can be further optimized by reducing the intensity values representing the detected speckle patterns. The theoretical relations between the translation components ax and ay and the displacement of the speckle pattern for a selected geometrical arrangement are given and used to verify the validity of the proposed method.
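Evaluating a translation component by one-dimensional speckle correlation amounts to locating the peak of the cross-correlation between a reference intensity profile and the displaced one; applying the same estimator along x and then y yields ax and ay separately. A minimal integer-shift sketch of that estimator (our construction, not the paper's optimized method):

```python
def shift_by_correlation(ref, shifted, max_shift):
    """Estimate an integer 1-D displacement by locating the peak of the
    cross-correlation between a reference and a shifted intensity profile.
    Returns the shift s maximizing sum_i ref[i] * shifted[i + s]."""
    best_s, best_c = 0, float("-inf")
    n = len(ref)
    for s in range(-max_shift, max_shift + 1):
        c = sum(ref[i] * shifted[i + s] for i in range(n) if 0 <= i + s < n)
        if c > best_c:
            best_s, best_c = s, c
    return best_s
```

Running this on row sums and then on column sums of a 2-D speckle image is one way the separate evaluation of ax and ay can be realized; sub-pixel refinements and the intensity-reduction optimization mentioned above are omitted.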
Kurosu, K; Takashina, M; Koizumi, M; Das, I; Moskvin, V
2014-06-01
Purpose: Monte Carlo codes are becoming important tools for proton beam dosimetry. However, the relationships between the customizing parameters and the percentage depth dose (PDD) in the GATE and PHITS codes have not been reported; they are studied here for PDD and proton range in comparison with the FLUKA code and experimental data. Methods: The beam delivery system of the Indiana University Health Proton Therapy Center was modeled for the uniform scanning beam in FLUKA and transferred identically into GATE and PHITS. This computational model was built from the blueprint and validated with the commissioning data. The three parameters evaluated are the maximum step size, the cut-off energy, and the physics and transport model. The dependence of the PDDs on the customizing parameters was compared with the published results of previous studies. Results: The optimal parameters for the simulation of the whole beam delivery system were defined by referring to the calculation results obtained with each parameter. Although the PDDs from FLUKA and the experimental data show good agreement, those of GATE and PHITS obtained with our optimal parameters show a minor discrepancy. The measured proton range R90 was 269.37 mm, compared to calculated ranges of 269.63 mm, 268.96 mm, and 270.85 mm with FLUKA, GATE and PHITS, respectively. Conclusion: We evaluated the dependence of the PDDs obtained with the general-purpose Monte Carlo codes GATE and PHITS on the customizing parameters by using the whole computational model of the treatment nozzle. The optimal parameters for the simulation were then defined by referring to the calculation results. The physics model, particle transport mechanics, and the different geometry-based descriptions need accurate customization in the three simulation codes to agree with experimental data for artifact-free Monte Carlo simulation. This study was supported by Grants-in-Aid for Cancer Research (H22-3rd Term Cancer Control-General-043) from the Ministry of Health
Event tunnel: exploring event-driven business processes.
Suntinger, Martin; Obweger, Hannes; Schiefer, Josef; Gröller, M Eduard
2008-01-01
Event-based systems monitor business processes in real time. The event-tunnel visualization sees the stream of events captured from such systems as a cylindrical tunnel. The tunnel allows for back-tracing business incidents and exploring event patterns' root causes. The authors couple this visualization with tools that let users search for relevant events within a data repository.
NASA Astrophysics Data System (ADS)
Deca, Jan; Divin, Andrey; Lembège, Bertrand; Horányi, Mihály; Markidis, Stefano; Lapenta, Giovanni
2015-08-01
We present a general model of the solar wind interaction with a dipolar lunar crustal magnetic anomaly (LMA) using three-dimensional full-kinetic and electromagnetic simulations. We confirm that LMAs may indeed be strong enough to stand off the solar wind from directly impacting the lunar surface, forming a so-called "minimagnetosphere," as suggested by spacecraft observations and theory. We show that the LMA configuration is driven by electron motion because its scale size is small with respect to the gyroradius of the solar wind ions. We identify a population of back-streaming ions, the deflection of magnetized electrons via the E × B drift motion, and the subsequent formation of a halo region of elevated density around the dipole source. Finally, it is shown that the presence and efficiency of these processes are heavily impacted by the upstream plasma conditions and, in turn, influence the overall structure and evolution of the LMA system. Understanding the detailed physics of the solar wind interaction with LMAs, including magnetic shielding, particle dynamics and surface charging, is vital to evaluate its implications for lunar exploration.
NASA Technical Reports Server (NTRS)
Huggett, Daniel J.; Majumdar, Alok
2013-01-01
Cryogenic propellants are readily heated when used. This poses a problem for rocket engine efficiency and effective boot-strapping of the engine, as seen in the "hot" LOX (Liquid Oxygen) problem on the S-1 stage of the Saturn vehicle. To remedy this issue, cryogenic fluids were found to be sub-cooled by injection of a warm non-condensing gas. Experimental results show that the mechanism behind the sub-cooling is evaporative cooling: a sub-cooled temperature difference of approximately 13 deg F below the saturation temperature has been achieved [1]. The phenomenon of sub-cooling of cryogenic propellants by a non-condensing gas cannot be readily modeled with the General Fluid System Simulation Program (GFSSP) [2], a thermal-fluid program used to analyze a wide variety of systems governed by thermodynamics and fluid mechanics. To model this phenomenon, additional capability had to be added to GFSSP in the form of a FORTRAN-coded subroutine that calculates the temperature of the sub-cooled fluid. Once this was accomplished, the subroutine was incorporated into a GFSSP model created to replicate an experiment conducted to validate the GFSSP results.
DH Bacon; MD White; BP McGrail
2000-03-07
The Hanford Site, in southeastern Washington State, has been used extensively to produce nuclear materials for the US strategic defense arsenal by the Department of Energy (DOE) and its predecessors, the US Atomic Energy Commission and the US Energy Research and Development Administration. A large inventory of radioactive and mixed waste has accumulated in 177 buried single- and double-shell tanks. Liquid waste recovered from the tanks will be pretreated to separate the low-activity fraction from the high-level and transuranic wastes. Vitrification is the leading option for immobilization of these wastes, expected to produce approximately 550,000 metric tons of Low Activity Waste (LAW) glass. This total tonnage, based on nominal Na2O oxide loading of 20% by weight, is destined for disposal in a near-surface facility. Before disposal of the immobilized waste can proceed, the DOE must approve a performance assessment, a document that describes the impacts, if any, of the disposal facility on public health and environmental resources. Studies have shown that release rates of radionuclides from the glass waste form by reaction with water determine the impacts of the disposal action more than any other independent parameter. This report describes the latest accomplishments in the development of a computational tool, Subsurface Transport Over Reactive Multiphases (STORM), Version 2, a general, coupled non-isothermal multiphase flow and reactive transport simulator. The underlying mathematics in STORM describe the rate of change of the solute concentrations of pore water in a variably saturated, non-isothermal porous medium, and the alteration of waste forms, packaging materials, backfill, and host rocks.
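As a much-reduced illustration of the coupled transport-plus-reaction equations a simulator of this class solves, a single explicit upwind step of one-dimensional advection with a first-order dissolution source can be written as follows. The scheme and all parameter names are a toy sketch under our own assumptions, not STORM's actual discretization.

```python
def step_advect_react(c, u, dx, dt, k, c_eq):
    """One explicit upwind step of dc/dt = -u dc/dx + k (c_eq - c):
    advection of a dissolved species plus first-order relaxation toward
    an equilibrium concentration c_eq (e.g. glass-water reaction).
    Stable for u*dt/dx <= 1. Toy reduction of reactive transport."""
    n = len(c)
    out = c[:]
    for i in range(n):
        upstream = c[i - 1] if i > 0 else c[0]  # zero-gradient inflow boundary
        out[i] = c[i] - u * dt / dx * (c[i] - upstream) + k * (c_eq - c[i]) * dt
    return out
```

With no flow (u = 0) every cell relaxes geometrically toward c_eq, the solubility-limited release behavior that makes the glass reaction rate the dominant parameter in the performance assessment discussed above.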
NASA Astrophysics Data System (ADS)
Koval, Andrey; Gavrilov, Nikolai; Pogoreltsev, Alexander; Savenkova, Elena
2016-04-01
One of the important factors in dynamical interactions between the lower and upper atmosphere is the transfer of energy and momentum by atmospheric internal gravity waves. For numerical modeling of the general circulation and thermal regime of the middle and upper atmosphere, it is important to take into account the mean-flow accelerations and heating rates produced by dissipating internal waves. The quasi-biennial oscillations (QBOs) of the zonal mean flow at lower latitudes at stratospheric heights can affect the propagation conditions of planetary waves (PWs). We perform numerical simulations of the global atmospheric circulation for initial conditions corresponding to years with westerly and easterly QBO phases, focusing on the changes in amplitudes of stationary planetary waves (SPWs) and traveling normal atmospheric modes (NAMs) in the atmosphere during sudden stratospheric warming (SSW) events for the different QBO phases. For these experiments, we use the middle and upper atmosphere model (MUAM) of the global circulation. PW waveguide theory describes the atmospheric regions where the background wind and temperature allow wave propagation: a refractive index for PWs has been introduced, and the strongest planetary-wave propagation is found in areas where this index takes large positive values. Another important PW characteristic is the Eliassen-Palm flux (EP flux). These characteristics are useful tools for visualizing PW propagation conditions. An SSW event has a significant influence on the formation of weather anomalies and climate changes in the troposphere, and may also affect dynamical and energy processes in the upper atmosphere. Major SSW events imply significant temperature rises (up to 30 - 40 K) at altitudes of 30 - 50 km, accompanied by corresponding decreases, or reversals, of the climatological eastward zonal winds in the stratosphere.
NASA Astrophysics Data System (ADS)
Kuroda, Takeshi; Medvedev, Alexander; Yiğit, Erdal; Hartogh, Paul
2016-10-01
Gravity waves (GWs) are small-scale atmospheric waves generated by various geophysical processes, such as topography, convection, and dynamical instability. On Mars, several observations and simulations have revealed that GWs strongly affect temperature and wind fields in the middle and upper atmosphere. We have worked with a high-resolution Martian general circulation model (MGCM), with a spectral resolution of T106 (horizontal grid interval of ~67 km), to investigate the generation and propagation of GWs. We analyzed three wavelength ranges: (1) horizontal total wavenumber s=21-30 (wavelength λ~700-1000 km), (2) s=31-60 (λ~350-700 km), and (3) s=61-106 (λ~200-350 km). Our results show that shorter-scale harmonics progressively dominate with height during both equinox and solstice. We have detected two main sources of GWs: mountainous regions and the meandering winter polar jet. In both seasons GW energy in the troposphere due to the shorter-scale harmonics is concentrated in the low latitudes, in good agreement with observations. Orographically generated GWs contribute significantly to the total energy of disturbances and strongly decay with height. Thus, the non-orographic GWs of tropospheric origin dominate near the mesopause. The vertical fluxes of wave horizontal momentum are directed mainly against the larger-scale wind. Mean magnitudes of the drag in the middle atmosphere are tens of m s-1 sol-1, while instantaneously they can reach thousands of m s-1 sol-1, which results in an attenuation of the wind jets in the middle atmosphere and a tendency toward their reversal.
NASA Astrophysics Data System (ADS)
Bergant, Klemen; Kajfež-Bogataj, Lučka; Črepinšek, Zalika
2002-02-01
Phenological observations are a valuable source of information for investigating the relationship between climate variation and plant development. Potential climate change in the future will shift the occurrence of phenological phases. Information about future climate conditions is needed in order to estimate this shift. General circulation models (GCM) provide the best information about future climate change. They are able to simulate reliably the most important mean features on a large scale, but they fail on a regional scale because of their low spatial resolution. A common approach to bridging the scale gap is statistical downscaling, which was used to relate the beginning of flowering of Taraxacum officinale in Slovenia with the monthly mean near-surface air temperature for January, February and March in Central Europe. Statistical models were developed and tested with NCAR/NCEP Reanalysis predictor data and EARS predictand data for the period 1960-1999. Prior to developing statistical models, empirical orthogonal function (EOF) analysis was employed on the predictor data. Multiple linear regression was used to relate the beginning of flowering with expansion coefficients of the first three EOFs for the January, February and March air temperatures, and a strong correlation was found between them. The developed statistical models were applied to the results of two GCM (HadCM3 and ECHAM4/OPYC3) to estimate the potential shifts in the beginning of flowering for the periods 1990-2019 and 2020-2049 in comparison with the period 1960-1989. For the period 1990-2019, the HadCM3 model predicts, on average, a 4-day earlier occurrence of flowering and ECHAM4/OPYC3 a 5-day earlier occurrence. The analogous results for the period 2020-2049 are 10- and 11-day earlier occurrences, respectively.
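The downscaling chain above rests on ordinary least-squares regression of the predictand (flowering date) on predictor time series (EOF expansion coefficients of temperature). The elementary one-predictor version of that building block is:

```python
def fit_linear(x, y):
    """Ordinary least squares for y ~ a + b*x: returns intercept a and
    slope b. In the downscaling application, x would be an EOF expansion
    coefficient of monthly temperature and y the flowering day-of-year."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b
```

Once fitted on the reanalysis period, the same coefficients are applied to GCM-projected predictors; a slope of b days per unit predictor then translates a projected predictor change directly into a shift of the flowering date, which is the sense in which the scenario runs yield earlier-occurrence estimates.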
NASA Technical Reports Server (NTRS)
Schulman, Richard; Kirk, Daniel; Marsell, Brandon; Roth, Jacob; Schallhorn, Paul
2013-01-01
The SPHERES Slosh Experiment (SSE) is a free floating experimental platform developed for the acquisition of long duration liquid slosh data aboard the International Space Station (ISS). The data sets collected will be used to benchmark numerical models to aid in the design of rocket and spacecraft propulsion systems. Utilizing two SPHERES Satellites, the experiment will be moved through different maneuvers designed to induce liquid slosh in the experiment's internal tank. The SSE has a total of twenty-four thrusters to move the experiment. In order to design slosh generating maneuvers, a parametric study with three maneuver types was conducted using the General Moving Object (GMO) model in Flow-3D. The three types of maneuvers are a translation maneuver, a rotation maneuver, and a combined rotation-translation maneuver. The effectiveness of each maneuver to generate slosh is determined by the deviation of the experiment's trajectory as compared to a dry mass trajectory. To fully capture the effect of liquid re-distribution on experiment trajectory, each thruster is modeled as an independent force point in the Flow-3D simulation. This is accomplished by modifying the total number of independent forces in the GMO model from the standard five to twenty-four. Results demonstrate that the most effective slosh generating maneuvers for all motions occur when SSE thrusters are producing the highest changes in SSE acceleration. The results also demonstrate that several centimeters of trajectory deviation between the dry and slosh cases occur during the maneuvers; while these deviations seem small, they are measurable by SSE instrumentation.
Technology Transfer Automated Retrieval System (TEKTRAN)
A general regression neural network and Monte Carlo simulation model for predicting survival and growth of Salmonella on raw chicken skin as a function of serotype (Typhimurium, Kentucky, Hadar), temperature (5 to 50 °C) and time (0 to 8 h) was developed. Poultry isolates of Salmonella with natural r...
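A general regression neural network computes a Gaussian-kernel-weighted average of training outputs, i.e. Nadaraya-Watson kernel regression. A compact sketch of that prediction rule, with made-up feature vectors rather than the study's serotype/temperature/time encoding:

```python
import math

def grnn_predict(train_x, train_y, query, sigma=1.0):
    """General regression neural network prediction: a Gaussian-kernel
    weighted mean of training outputs, with bandwidth sigma. train_x is a
    list of feature tuples, train_y the corresponding outputs."""
    weights = [math.exp(-sum((q - t) ** 2 for q, t in zip(query, tx))
                        / (2.0 * sigma ** 2))
               for tx in train_x]
    total = sum(weights)
    return sum(w * y for w, y in zip(weights, train_y)) / total
```

Because every training pattern contributes through its kernel weight, the GRNN needs no iterative training, only a choice of sigma; in a study like the one above, Monte Carlo sampling of the inputs then propagates their uncertainty through this predictor.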
NASA Astrophysics Data System (ADS)
Almansa, Julio; Salvat-Pujol, Francesc; Díaz-Londoño, Gloria; Carnicer, Artur; Lallena, Antonio M.; Salvat, Francesc
2016-02-01
The Fortran subroutine package PENGEOM provides a complete set of tools to handle quadric geometries in Monte Carlo simulations of radiation transport. The material structure where radiation propagates is assumed to consist of homogeneous bodies limited by quadric surfaces. The PENGEOM subroutines (a subset of the PENELOPE code) track particles through the material structure, independently of the details of the physics models adopted to describe the interactions. Although these subroutines are designed for detailed simulations of photon and electron transport, where all individual interactions are simulated sequentially, they can also be used in mixed (class II) schemes for simulating the transport of high-energy charged particles, where the effect of soft interactions is described by the random-hinge method. The definition of the geometry and the details of the tracking algorithm are tailored to optimize simulation speed. The use of fuzzy quadric surfaces minimizes the impact of round-off errors. The provided software includes a Java graphical user interface for editing and debugging the geometry definition file and for visualizing the material structure. Images of the structure are generated by using the tracking subroutines and, hence, they describe the geometry actually passed to the simulation code.
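Tracking a particle through quadric geometries reduces, per surface, to intersecting a ray with x^T A x + b·x + c = 0, i.e. solving a quadratic in the path length. A toy version of that geometric primitive (PENGEOM's reduced quadric forms, side determination and fuzzy-surface handling are omitted; names are ours):

```python
import math

def quadric_intersect(origin, direction, A, b, c):
    """Smallest positive distance along a ray o + t*d to the quadric
    surface x^T A x + b.x + c = 0, or None if the ray misses it.
    Substituting the ray into the surface gives alpha t^2 + beta t + gamma = 0."""
    def quad(v, w):  # v^T A w
        return sum(v[i] * A[i][j] * w[j] for i in range(3) for j in range(3))
    o, d = origin, direction
    alpha = quad(d, d)
    beta = 2.0 * quad(o, d) + sum(b[i] * d[i] for i in range(3))
    gamma = quad(o, o) + sum(b[i] * o[i] for i in range(3)) + c
    if abs(alpha) < 1e-12:  # degenerate: surface is linear along this ray
        return -gamma / beta if abs(beta) > 1e-12 else None
    disc = beta * beta - 4.0 * alpha * gamma
    if disc < 0.0:
        return None
    r1 = (-beta - math.sqrt(disc)) / (2.0 * alpha)
    r2 = (-beta + math.sqrt(disc)) / (2.0 * alpha)
    pos = [t for t in sorted((r1, r2)) if t > 1e-12]
    return pos[0] if pos else None
```

A full tracker takes the minimum such distance over all surfaces bounding the current body and then switches material, independently of the physics of the interactions, which is exactly the separation of concerns described above.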
NASA Astrophysics Data System (ADS)
Löwe, P.; Hammitzsch, M.; Babeyko, A.; Wächter, J.
2012-04-01
-up and utilize the CCE has been implemented by the project Collaborative, Complex, and Critical Decision Processes in Evolving Crises (TRIDEC) funded under the European Union's FP7. TRIDEC focuses on real-time intelligent information management in Earth management. The addressed challenges include the design and implementation of a robust and scalable service infrastructure supporting the integration and utilisation of existing resources with accelerated generation of large volumes of data. These include sensor systems, geo-information repositories, simulations and data fusion tools. Additionally, TRIDEC adopts enhancements of Service Oriented Architecture (SOA) principles in terms of Event Driven Architecture (EDA) design. As a next step, the implemented CCE's services to generate derived and customized simulation products are foreseen to be provided via an EDA service for on-demand processing for specific threat parameters and to accommodate model improvements.
NASA Astrophysics Data System (ADS)
Edgemon, G. L.; Danielson, M. J.; Bell, G. E. C.
1997-06-01
Underground waste tanks fabricated from mild steel store more than 253 million liters of high level radioactive waste from 50 years of weapons production at the Hanford Site. The probable modes of corrosion failures are reported as nitrate stress corrosion cracking and pitting. In an effort to develop a waste tank corrosion monitoring system, laboratory tests were conducted to characterize electrochemical noise data for both uniform and localized corrosion of mild steel and other materials in simulated waste environments. The simulated waste solutions were primarily composed of ammonium nitrate or sodium nitrate and were held at approximately 97°C. The electrochemical noise of freely corroding specimens was monitored, recorded and analyzed for periods ranging between 10 and 500 h. At the end of each test period, the specimens were examined to correlate electrochemical noise data with corrosion damage. Data characteristic of uniform corrosion and stress corrosion cracking are presented.
Beard, J.N. Jr.; Rice, W.T. Jr.
1980-01-01
A project to develop a mathematical model capable of simulating the activities in a typical batch dyeing process in the textile industry is described. The model could be used to study the effects of changes in dye-house operations, and to determine effective guidelines for optimal dyehouse performance. The computer model is of a hypothetical dyehouse. The appendices contain a listing of the computer program, sample computer inputs and outputs, and instructions for using the model. (MCW)
Boyle, J.S. )
1994-01-01
Divergence and convergence centers at 200 hPa and mean sea level pressure (MSLP) cyclones are located every 6 hours for a 10-year GCM simulation for the boreal winters from 1980 to 1988. The simulation used the observed monthly mean SST for the decade. Analysis of the frequency, locations, and strengths of these centers and cyclones gives insight into the dynamical response of the model to the varying SST. It is found that (1) the model produces reasonable climatologies of upper-level divergence and MSLP cyclones. (2) The model distribution of anomalies of divergence/convergence centers and MSLP cyclones is consistent with available observations for the 1982-83 and 1986-87 El Nino events. (3) The tropical Indian Ocean is the region of greatest divergence activity and interannual variability in the model. (4) The variability of the divergence centers is greater than that of the convergence centers. (5) Strong divergence centers are chiefly oceanic events in the midlatitudes but are more land based in the tropics, except in the Indian Ocean. (6) Locations of divergence/convergence centers can be a useful tool for the intercomparison of global atmospheric simulations.
NASA Astrophysics Data System (ADS)
Kurosu, Keita; Das, Indra J.; Moskvin, Vadim P.
2016-01-01
Spot scanning, owing to its superior dose-shaping capability, provides unsurpassed dose conformity, in particular for complex targets. However, the robustness of the delivered dose distribution and prescription has to be verified. Monte Carlo (MC) simulation has the potential to provide significant advantages for high-precision particle therapy, especially for media containing inhomogeneities. However, the inherent choice of computational parameters in the MC simulation codes GATE, PHITS and FLUKA, as observed for the uniform scanning proton beam, needs to be evaluated; that is, the relationship between the input parameters and the calculation results should be carefully scrutinized. The objective of this study was, therefore, to determine the optimal parameters for the spot scanning proton beam for both the GATE and PHITS codes, using data from a FLUKA simulation as a reference. The proton beam scanning system of the Indiana University Health Proton Therapy Center was modeled in FLUKA, and the geometry was subsequently and identically transferred to GATE and PHITS. Although the beam transport is managed by the spot scanning system, the spot location is always set at the center of a water phantom of 600 × 600 × 300 mm3, which is placed after the treatment nozzle. The percentage depth dose (PDD) is computed along the central axis using 0.5 × 0.5 × 0.5 mm3 voxels in the water phantom. The PDDs and the proton ranges obtained with several computational parameters are then compared to those of FLUKA, and optimal parameters are determined from the accuracy of the proton range, suppressed dose deviation, and computational time minimization. Our results indicate that the optimized parameters are different from those for uniform scanning, suggesting that a gold standard for setting computational parameters for any proton therapy application cannot be determined consistently, since the impact of the parameter settings depends on the proton irradiation technique. We
Reynolds, Robert F.; Bauerle, William L.; Wang, Ying
2009-01-01
Background and Aims Deciduous trees have a seasonal carbon dioxide exchange pattern that is attributed to changes in leaf biochemical properties. However, it is not known if the pattern in leaf biochemical properties – maximum Rubisco carboxylation (Vcmax) and electron transport (Jmax) – differ between species. This study explored whether a general pattern of changes in Vcmax, Jmax, and a standardized soil moisture response accounted for carbon dioxide exchange of deciduous trees throughout the growing season. Methods The model MAESTRA was used to examine Vcmax and Jmax of leaves of five deciduous trees, Acer rubrum ‘Summer Red’, Betula nigra, Quercus nuttallii, Quercus phellos and Paulownia elongata, and their response to soil moisture. MAESTRA was parameterized using data from in situ measurements on organs. Linking the changes in biochemical properties of leaves to the whole tree, MAESTRA integrated the general pattern in Vcmax and Jmax from gas exchange parameters of leaves with a standardized soil moisture response to describe carbon dioxide exchange throughout the growing season. The model estimates were tested against measurements made on the five species under both irrigated and water-stressed conditions. Key Results Measurements and modelling demonstrate that the seasonal pattern of biochemical activity in leaves and soil moisture response can be parameterized with straightforward general relationships. Over the course of the season, differences in carbon exchange between measured and modelled values were within 6–12 % under well-watered conditions and 2–25 % under water stress conditions. Hence, a generalized seasonal pattern in the leaf-level physiological change of Vcmax and Jmax, and a standardized response to soil moisture was sufficient to parameterize carbon dioxide exchange for large-scale evaluations. Conclusions Simplification in parameterization of the seasonal pattern of leaf biochemical activity and soil moisture response of
Sali, A; Blundell, T L
1990-03-20
A protein is defined as an indexed string of elements at each level in the hierarchy of protein structure: sequence, secondary structure, super-secondary structure, etc. The elements, for example, residues or secondary structure segments such as helices or beta-strands, are associated with a series of properties and can be involved in a number of relationships with other elements. Element-by-element dissimilarity matrices are then computed and used in the alignment procedure based on the sequence alignment algorithm of Needleman & Wunsch, expanded by the simulated annealing technique to take into account relationships as well as properties. The utility of this method for exploring the variability of various aspects of protein structure and for comparing distantly related proteins is demonstrated by multiple alignment of serine proteinases, aspartic proteinase lobes and globins.
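The alignment core the abstract builds on is the Needleman & Wunsch dynamic program, run here over element-by-element dissimilarities rather than sequence identities. A minimal sketch of that core (our own illustration; the published method additionally uses simulated annealing and relationship terms, and `dissim` is a hypothetical stand-in for the property-based dissimilarity matrices):

```python
def needleman_wunsch(a, b, dissim, gap=1.0):
    """Global alignment of two element strings a, b under a
    dissimilarity function; returns the minimal total cost.
    Sketch only -- the published method layers simulated
    annealing and inter-element relationships on top of this."""
    n, m = len(a), len(b)
    # cost[i][j] = best cost of aligning a[:i] with b[:j]
    cost = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        cost[i][0] = i * gap
    for j in range(1, m + 1):
        cost[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost[i][j] = min(
                cost[i - 1][j - 1] + dissim(a[i - 1], b[j - 1]),  # pair up elements
                cost[i - 1][j] + gap,                             # gap in b
                cost[i][j - 1] + gap,                             # gap in a
            )
    return cost[n][m]

# toy dissimilarity: 0 for identical elements, 1 otherwise
score = needleman_wunsch("HEAGAWGHEE", "PAWHEAE",
                         lambda x, y: 0.0 if x == y else 1.0)
```

The same recurrence applies unchanged whether the "elements" are residues or secondary-structure segments; only `dissim` changes.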
Fast, Accurate RF Propagation Modeling and Simulation Tool for Highly Cluttered Environments
Kuruganti, Phani Teja
2007-01-01
As network centric warfare and distributed operations paradigms unfold, there is a need for robust, fast wireless network deployment tools. These tools must take into consideration the terrain of the operating theater, and facilitate specific modeling of end-to-end network performance based on accurate RF propagation predictions. It is well known that empirical models cannot provide accurate, site-specific predictions of radio channel behavior. In this paper an event-driven wave propagation simulation is proposed as a computationally efficient technique for predicting critical propagation characteristics of RF signals in cluttered environments. Convincing validation and simulator performance studies confirm the suitability of this method for indoor and urban area RF channel modeling. By integrating our RF propagation prediction tool, RCSIM, with popular packet-level network simulators, we are able to construct an end-to-end network analysis tool for wireless networks operated in built-up urban areas.
General-Aviation Control Loader
NASA Technical Reports Server (NTRS)
Baltrus, Daniel W.; Albang, Leroy F.; Hallinger, John A.; Burge, W. Wayne
1988-01-01
Artificial-feel system designed for general-aviation flight simulators. New system developed to replace directly lateral and longitudinal controls in general-aviation cockpit for use in flight-simulation research. Using Peaucellier's cell to convert linear motion to rotary motion, control-loading system provides realistic control-force feedback to cockpit wheel and column controls.
NASA Astrophysics Data System (ADS)
Utteridge, E. J.
A radar environment simulator (RES) is described which combines a high degree of signal realism with flexible real-time control. The RES features interactive simulation of IF and RF, aircraft echo simulation, active jamming (including simultaneous jamming), passive jamming, and simulator control. The general design and principal components of the RES are briefly described, and its detailed performance characteristics are presented.
NASA Astrophysics Data System (ADS)
Pfefferlé, D.; Graves, J. P.; Cooper, W. A.
2015-05-01
To identify under what conditions guiding-centre or full-orbit tracing should be used, an estimation of the spatial variation of the magnetic field is proposed, not only taking into account gradient and curvature terms but also parallel currents and the local shearing of field-lines. The criterion is derived for general three-dimensional magnetic equilibria including stellarator plasmas. Details are provided on how to implement it in cylindrical coordinates and in flux coordinates that rely on the geometric toroidal angle. A means of switching between guiding-centre and full-orbit equations at first order in Larmor radius with minimal discrepancy is shown. Techniques are applied to a MAST (mega amp spherical tokamak) helical core equilibrium in which the inner kinked flux-surfaces are tightly compressed against the outer axisymmetric mantle and where the parallel current peaks at the nearly rational surface. This is put in relation with the simpler situation B(x, y, z) = B0[sin(kx)ey + cos(kx)ez], for which full orbits and lowest order drifts are obtained analytically. In the kinked equilibrium, the full orbits of NBI fast ions are solved numerically and shown to follow helical drift surfaces. This result partially explains the off-axis redistribution of neutral beam injection fast particles in the presence of MAST long-lived modes (LLM).
Molecular Simulation of Nonequilibrium Hypersonic Flows
NASA Astrophysics Data System (ADS)
Schwartzentruber, T. E.; Valentini, P.; Tump, P.
2011-08-01
Large-scale conventional time-driven molecular dynam- ics (MD) simulations of normal shock waves are performed for monatomic argon and argon-helium mixtures. For pure argon, near perfect agreement between MD and direct simulation Monte Carlo (DSMC) results using the variable-hard-sphere model are found for density and temperature profiles as well as for velocity distribution functions throughout the shock. MD simulation results for argon are also in excellent agreement with experimental shock thickness data. Preliminary MD simulation results for argon-helium mixtures are in qualitative agreement with experimental density and temperature profile data, where separation between argon and helium density profiles due to disparate atomic mass is observed. Since conventional time-driven MD simulation of di- lute gases is computationally inefficient, a combined Event-Driven/Time-Driven MD algorithm is presented. The ED/TD-MD algorithm computes impending collisions and advances molecules directly to their next collision while evaluating the collision using conventional time-driven MD with an arbitrary interatomic potential. The method timestep thus approaches the mean-collision- time in the gas, while also detecting and simulating multi- body collisions with a small approximation. Extension of the method to diatomic and small polyatomic molecules is detailed, where center-of-mass velocities and extended cutoff radii are used to advance molecules to impending collisions. Only atomic positions are integrated during collisions and molecule sorting algorithms are employed to determine if atoms are bound in a molecule after a collision event. Rotational relaxation to equilibrium for a low density diatomic gas is validated by comparison with large-scale conventional time-driven MD simulation, where the final rotational distribution function is verified to be the correct Boltzmann rotational energy distribution.
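The event-driven half of such a scheme rests on predicting the next hard-sphere contact analytically instead of marching time uniformly. A minimal sketch of that prediction step (our illustration of the standard hard-sphere formula, not the authors' ED/TD-MD code, which then resolves the actual collision with time-driven MD):

```python
import math

def collision_time(r_rel, v_rel, d):
    """Time until two hard spheres of diameter d, with relative
    position r_rel and relative velocity v_rel, first touch;
    returns math.inf if they never collide. Solves |r + v*t| = d
    for the smallest positive root."""
    b = sum(ri * vi for ri, vi in zip(r_rel, v_rel))  # r . v
    if b >= 0:          # moving apart -> no collision
        return math.inf
    v2 = sum(vi * vi for vi in v_rel)
    r2 = sum(ri * ri for ri in r_rel)
    disc = b * b - v2 * (r2 - d * d)
    if disc < 0:        # trajectories miss each other
        return math.inf
    return (-b - math.sqrt(disc)) / v2

# head-on approach: centers 3 diameters apart, closing at unit speed
t = collision_time((3.0, 0.0, 0.0), (-1.0, 0.0, 0.0), 1.0)  # touch after 2 time units
```

An event-driven loop keeps such predicted times in a priority queue and advances all molecules directly to the earliest one.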
NASA Astrophysics Data System (ADS)
Zube, Nicholas Gerard; Zhang, Xi; Natraj, Vijay
2016-10-01
General circulation models often incorporate simple approximations of heating between vertically inhomogeneous layers rather than more accurate but computationally expensive radiative transfer (RT) methods. With the goal of developing a GCM package that can model both solar system bodies and exoplanets, it is vital to examine up-to-date RT models to optimize speed and accuracy for heat transfer calculations. Here, we examine a variety of interchangeable radiative transfer models in conjunction with MITGCM (Hill and Marshall, 1995). First, for atmospheric opacity calculations, we test gray approximation, line-by-line, and correlated-k methods. In combination with these, we also test RT routines using 2-stream DISORT (discrete ordinates RT), N-stream DISORT (Stamnes et al., 1988), and optimized 2-stream (Spurr and Natraj, 2011). Initial tests are run using Jupiter as an example case. The results can be compared in nine possible configurations for running a complete RT routine within a GCM. Each individual combination of opacity and RT methods is contrasted with the "ground truth" calculation provided by the line-by-line opacity and N-stream DISORT, in terms of computation speed and accuracy of the approximation methods. We also examine the effects on accuracy when performing these calculations at different time step frequencies within MITGCM. Ultimately, we will catalog and present the ideal RT routines that can replace commonly used approximations within a GCM for a significant increase in calculation accuracy, and speed comparable to the dynamical time steps of MITGCM. Future work will involve examining whether calculations in the spatial domain can also be reduced by smearing grid points into larger areas, and what effects this will have on overall accuracy.
Fast spot-based multiscale simulations of granular drainage
Rycroft, Chris H.; Wong, Yee Lok; Bazant, Martin Z.
2009-05-22
We develop a multiscale simulation method for dense granular drainage, based on the recently proposed spot model, where the particle packing flows by local collective displacements in response to diffusing "spots" of interstitial free volume. By comparing with discrete-element method (DEM) simulations of 55,000 spheres in a rectangular silo, we show that the spot simulation is able to approximately capture many features of drainage, such as packing statistics, particle mixing, and flow profiles. The spot simulation runs two to three orders of magnitude faster than DEM, making it an appropriate method for real-time control or optimization. We demonstrate extensions for modeling particle heaping and avalanching at the free surface, and for simulating the boundary layers of slower flow near walls. We show that the spot simulations are robust and flexible, by demonstrating that they can be used in both event-driven and fixed-timestep approaches, and showing that the elastic relaxation step used in the model can be applied much less frequently and still yield good results.
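The spot mechanism itself is simple to sketch: a spot of free volume drifts upward while diffusing laterally, and every particle inside its radius makes a small correlated move in the opposite direction. The 2D setup and parameter names below are our own illustration of that idea, not the published simulation code (which adds the elastic relaxation step to restore valid packings):

```python
import random

def spot_step(particles, spot_pos, spot_radius, dz, w):
    """One schematic spot event: the spot of interstitial free
    volume moves up by dz (with random lateral diffusion), and
    every particle within spot_radius moves down by the small
    correlated amount w*dz, so free volume and particles swap
    places on average."""
    new_spot = (spot_pos[0] + random.uniform(-dz, dz),  # lateral diffusion
                spot_pos[1] + dz)                        # net upward drift
    moved = []
    for (x, y) in particles:
        if (x - spot_pos[0]) ** 2 + (y - spot_pos[1]) ** 2 <= spot_radius ** 2:
            moved.append((x, y - w * dz))   # collective downward displacement
        else:
            moved.append((x, y))            # untouched by this spot
    return moved, new_spot
```

Because only the handful of particles near each spot is updated per event, the cost per event is far below a full DEM force evaluation, which is the source of the quoted speedup.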
Tripathi, Anurag; Khakhar, D V
2010-04-01
We study smooth, slightly inelastic particles flowing under gravity on a bumpy inclined plane using event-driven and discrete-element simulations. Shallow layers (ten particle diameters) are used to enable simulation using the event-driven method within reasonable computational times. Steady flows are obtained in a narrow range of angles (13°–14.5°); lower angles result in stopping of the flow and higher angles in continuous acceleration. The flow is relatively dense with the solid volume fraction ν ≈ 0.5, and significant layering of particles is observed. We derive expressions for the stress, heat flux, and dissipation for the hard and soft particle models from first principles. The computed mean velocity, temperature, stress, dissipation, and heat flux profiles of hard particles are compared to soft particle results for different values of the stiffness constant (k). The value of the stiffness constant for which results for hard and soft particles are identical is found to be k ≥ 2×10⁶ mg/d, where m is the mass of a particle, g is the acceleration due to gravity, and d is the particle diameter. We compare the simulation results to constitutive relations obtained from the kinetic theory of Jenkins and Richman [J. T. Jenkins and M. W. Richman, Arch. Ration. Mech. Anal. 87, 355 (1985)] for pressure, dissipation, viscosity, and thermal conductivity. We find that all the quantities are very well predicted by kinetic theory for volume fractions ν < 0.5. At higher densities, obtained for thicker layers (H = 15d and H = 20d), the kinetic theory does not give accurate predictions. Deviations of the kinetic theory predictions from simulation results are relatively small for dissipation and heat flux and most significant deviations are observed for shear viscosity and pressure. The results indicate the range of applicability of soft particle simulations and kinetic theory for dense flows.
Sheu, R J; Sheu, R D; Jiang, S H; Kao, C H
2005-01-01
Full-scale Monte Carlo simulations of the cyclotron room of the Buddhist Tzu Chi General Hospital were carried out to improve the original inadequate maze design. Variance reduction techniques are indispensable in this study to facilitate the simulations for testing a variety of configurations of shielding modification. The TORT/MCNP manual coupling approach based on the Consistent Adjoint Driven Importance Sampling (CADIS) methodology has been used throughout this study. The CADIS utilises the source and transport biasing in a consistent manner. With this method, the computational efficiency was increased significantly by more than two orders of magnitude and the statistical convergence was also improved compared to the unbiased Monte Carlo run. This paper describes the shielding problem encountered, the procedure for coupling the TORT and MCNP codes to accelerate the calculations and the calculation results for the original and improved shielding designs. In order to verify the calculation results and seek additional accelerations, sensitivity studies on the space-dependent and energy-dependent parameters were also conducted.
Simulation of networks of spiking neurons: A review of tools and strategies
Brette, Romain; Rudolph, Michelle; Carnevale, Ted; Hines, Michael; Beeman, David; Bower, James M.; Diesmann, Markus; Morrison, Abigail; Goodman, Philip H.; Harris, Frederick C.; Zirpe, Milind; Natschläger, Thomas; Pecevski, Dejan; Ermentrout, Bard; Djurfeldt, Mikael; Lansner, Anders; Rochel, Olivier; Vieville, Thierry; Muller, Eilif; Davison, Andrew P.; El Boustani, Sami
2009-01-01
We review different aspects of the simulation of spiking neural networks. We start by reviewing the different types of simulation strategies and algorithms that are currently implemented. We next review the precision of those simulation strategies, in particular in cases where plasticity depends on the exact timing of the spikes. We overview different simulators and simulation environments presently available (restricted to those freely available, open source and documented). For each simulation tool, its advantages and pitfalls are reviewed, with an aim to allow the reader to identify which simulator is appropriate for a given task. Finally, we provide a series of benchmark simulations of different types of networks of spiking neurons, including Hodgkin–Huxley type, integrate-and-fire models, interacting with current-based or conductance-based synapses, using clock-driven or event-driven integration strategies. The same set of models are implemented on the different simulators, and the codes are made available. The ultimate goal of this review is to provide a resource to facilitate identifying the appropriate integration strategy and simulation tool to use for a given modeling problem related to spiking neural networks. PMID:17629781
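The clock-driven versus event-driven distinction the review centers on is easy to make concrete for a leaky integrate-and-fire neuron: between incoming spikes the membrane equation has a closed-form solution, so an event-driven simulator can jump straight to the next event instead of taking many small clock steps. A minimal sketch (our illustration; parameter names and values are ours, not any particular simulator's API):

```python
import math

def lif_event_update(v, dt, tau=20.0, v_rest=0.0):
    """Exact membrane-potential decay of a leaky integrate-and-fire
    neuron over an inter-spike interval dt (ms): one jump,
    as used by event-driven integration strategies."""
    return v_rest + (v - v_rest) * math.exp(-dt / tau)

def lif_clock_driven(v, dt, steps, tau=20.0, v_rest=0.0):
    """Same decay by forward-Euler with a fixed clock step dt/steps,
    for comparison with the exact event-driven jump."""
    h = dt / steps
    for _ in range(steps):
        v += h * (v_rest - v) / tau
    return v

v_exact = lif_event_update(1.0, 10.0)        # one event-driven jump over 10 ms
v_euler = lif_clock_driven(1.0, 10.0, 1000)  # 1000 clock steps approximate it
```

The trade-off reviewed in the paper follows directly: event-driven updates preserve exact spike timing (important when plasticity depends on it) but require analytically solvable subthreshold dynamics, while clock-driven integration handles arbitrary models at the cost of timing discretization.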
Research through simulation. [simulators and research applications at Langley
NASA Technical Reports Server (NTRS)
Copeland, J. L. (Compiler)
1982-01-01
The design of the computer operating system at Langley Research Center allows for concurrent support of time-critical simulations and background analytical computing on the same machine. Signal path interconnections between computing hardware and flight simulation hardware is provided to allow up to six simulation programs to be in operation at one time. Capabilities and research applications are discussed for the: (1) differential maneuvering simulator; (2) visual motion simulator; (3) terminal configured vehicle simulator; (4) general aviation aircraft simulator; (5) general purpose fixed based simulator; (6) transport simulator; (7) digital fly by wire simulator; (8) general purpose fighter simulator; and (9) the roll-up cockpit. The visual landing display system and graphics display system are described and their simulator support applications are listed.
Generalization of the dynamic clamp concept in neurophysiology and behavior.
Chamorro, Pablo; Muñiz, Carlos; Levi, Rafael; Arroyo, David; Rodríguez, Francisco B; Varona, Pablo
2012-01-01
The idea of closed-loop interaction in in vitro and in vivo electrophysiology has been successfully implemented in the dynamic clamp concept, strongly impacting research on the membrane and synaptic properties of neurons. In this paper we show that this concept can easily be generalized to build other kinds of closed-loop protocols beyond (or in addition to) electrical stimulation and recording in neurophysiology and behavioral studies for neuroethology. In particular, we illustrate three different examples of goal-driven real-time closed-loop interactions with drug microinjectors, mechanical devices and video event-driven stimulation. Modern activity-dependent stimulation protocols can be used to reveal dynamics (otherwise hidden under traditional stimulation techniques), achieve control of natural and pathological states, induce learning, bridge between disparate levels of analysis and further automate experiments. We argue that closed-loop interaction calls for novel real-time analysis, prediction and control tools, and a new perspective for designing stimulus-response experiments, which can have a large impact on neuroscience research.
The complete general secretory pathway in gram-negative bacteria.
Pugsley, A P
1993-01-01
The unifying feature of all proteins that are transported out of the cytoplasm of gram-negative bacteria by the general secretory pathway (GSP) is the presence of a long stretch of predominantly hydrophobic amino acids, the signal sequence. The interaction between signal sequence-bearing proteins and the cytoplasmic membrane may be a spontaneous event driven by the electrochemical energy potential across the cytoplasmic membrane, leading to membrane integration. The translocation of large, hydrophilic polypeptide segments to the periplasmic side of this membrane almost always requires at least six different proteins encoded by the sec genes and is dependent on both ATP hydrolysis and the electrochemical energy potential. Signal peptidases process precursors with a single, amino-terminal signal sequence, allowing them to be released into the periplasm, where they may remain or whence they may be inserted into the outer membrane. Selected proteins may also be transported across this membrane for assembly into cell surface appendages or for release into the extracellular medium. Many bacteria secrete a variety of structurally different proteins by a common pathway, referred to here as the main terminal branch of the GSP. This recently discovered branch pathway comprises at least 14 gene products. Other, simpler terminal branches of the GSP are also used by gram-negative bacteria to secrete a more limited range of extracellular proteins. PMID:8096622
Moore, Alison
Many district general hospitals are likely to lose services, and some may close, as a result of the pressure to centralise specialist services, improve patient outcomes and cope with funding cuts. Nurses are increasingly willing to support intelligent reconfiguration if it will improve patient care, but changes in acute services have to be supported by improvements in the community.
ERIC Educational Resources Information Center
Joseph, Dan; Hartman, Gregory; Gibson, Caleb
2011-01-01
In this article we explore the consequences of modifying the common definition of a parabola by considering the locus of all points equidistant from a focus and (not necessarily linear) directrix. The resulting derived curves, which we call "generalized parabolas," are often quite beautiful and possess many interesting properties. We show that…
Thierauf, Anne; Perez, Gerardo; Maloy, Stanley
2009-01-01
Transduction is the process in which bacterial DNA is transferred from one bacterial cell to another by means of a phage particle. There are two types of transduction, generalized transduction and specialized transduction. In this chapter two of the best-studied systems - Escherichia coli-phage P1, and Salmonella enterica-phage P22 - are discussed from theoretical and practical perspectives.
Vymetal, J
2003-01-01
Theoretical thinking in psychotherapy today develops out of eclectic practice and draws particularly on research into the effective factors of therapy. Such approaches are best characterized as differentiated, synthetic, integrative and school-transcending. This development continues with attempts to create a general model of psychotherapy that could serve as a basis for models of specific psychotherapies. The aim of such a model is to describe everything that acts as an important factor for inducing a desirable change in a person across all psychotherapeutic approaches. Among general models we can mention the generic model of D. E. Orlinski and K. I. Howard, Grawe's cube (by K. Grawe) and the equation of psychotherapy.
McCullough, Louise D.
2015-01-01
Movement disorders have been reported as rare complications of stroke. The basal ganglia have been implicated in the pathophysiology of most post-stroke dyskinesias. We outline different types of post-stroke myoclonus and their possible pathophysiology. A middle-aged man developed generalized myoclonus after an ischemic stroke in the superior midbrain and subthalamic nuclei. Spontaneous resolution was seen by 72 hours. A lesion to the subthalamic nuclei disrupted the normal thalamic inhibition, which likely led to the involuntary movements seen in our patient. In this case, myoclonus was generalized, which, to the best of our knowledge, has not been reported in the literature as a direct consequence of focal stroke. PMID:25553226
Church, R M; Gibbon, J
1982-04-01
Responses of 26 rats were reinforced following a signal of a certain duration, but not following signals of shorter or longer durations. This led to a positive temporal generalization gradient with a maximum at the reinforced duration in six experiments. Spacing of the nonreinforced signals did not influence the gradient, but the location of the maximum and breadth of the gradient increased with the duration of the reinforced signal. Reduction of reinforcement, either by partial reinforcement or reduction in the probability of a positive signal, led to a decrease in the height of the generalization gradient. There were large, reliable individual differences in the height and breadth of the generalization gradient. When the conditions of reinforcement were reversed (responses reinforced following all signals longer or shorter than a single nonreinforced duration), eight additional rats had a negative generalization gradient with a minimum at a signal duration shorter than the single nonreinforced duration. A scalar timing theory is described that provided a quantitative fit of the data. This theory involved a clock that times in linear units with an accurate mean and a negligible variance, a distribution of memory times that is normally distributed with an accurate mean and a scalar standard deviation, and a rule to respond if the clock is "close enough" to a sample of the memory time distribution. This decision is based on a ratio of the discrepancy between the clock time and the remembered time, to the remembered time. When this ratio is below a (variable) threshold, subjects respond. When three timing parameters--coefficient of variation of the memory time, the mean and the standard deviation of the threshold--were set at their median values, a theory with two free parameters accounted for 96% of the variance. The two parameters reflect the probability of attention to time and the probability of a response given inattention. These parameters were not influenced
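The decision rule at the heart of the scalar timing theory above is stated precisely enough to sketch: respond when the relative discrepancy between the clock reading and a sampled remembered time falls below a threshold. The function names below are ours; the parameter roles follow the abstract's description:

```python
import random

def respond(clock_time, remembered_time, threshold):
    """Scalar timing decision rule: respond if
    |t_clock - t_mem| / t_mem < b, i.e. the clock is 'close
    enough' to the sampled memory of the reinforced duration."""
    return abs(clock_time - remembered_time) / remembered_time < threshold

def sample_memory(mean, cv):
    """Remembered durations are normally distributed with an
    accurate mean and a scalar standard deviation: sd = cv * mean,
    which is what makes timing precision proportional to the
    timed duration."""
    return random.gauss(mean, cv * mean)

# near the reinforced duration the ratio is small, so the animal responds
respond(clock_time=11.0, remembered_time=10.0, threshold=0.2)   # ratio 0.1 -> respond
respond(clock_time=20.0, remembered_time=10.0, threshold=0.2)   # ratio 1.0 -> withhold
```

Because the discrepancy is normalized by the remembered time, the generalization gradient broadens in proportion to the reinforced duration, matching the scalar property reported in the experiments.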
Simulation of Physical Experiments in Immersive Virtual Environments
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Wasfy, Tamer M.
2001-01-01
An object-oriented, event-driven immersive virtual environment is described for the creation of virtual labs (VLs) for simulating physical experiments. Discussion focuses on a number of aspects of the VLs, including interface devices, software objects, and various applications. The VLs interface with output devices, including immersive stereoscopic screen(s) and stereo speakers, and a variety of input devices, including body tracking (head and hands), haptic gloves, wand, joystick, mouse, microphone, and keyboard. The VL incorporates the following types of primitive software objects: interface objects, support objects, geometric entities, and finite elements. Each object encapsulates a set of properties, methods, and events that define its behavior, appearance, and functions. A container object allows grouping of several objects. Applications of the VLs include viewing the results of the physical experiment, viewing a computer simulation of the physical experiment, simulation of the experiment's procedure, computational steering, and remote control of the physical experiment. In addition, the VL can be used as a risk-free (safe) environment for training. The implementation of virtual structures testing machines, virtual wind tunnels, and a virtual acoustic testing facility is described.
Smil, V.
1991-01-01
This book is a comprehensive sourcebook for planetary management and strategies for sustainable development. Coupling biospheric and civilizational aspects, the book features thorough treatments of all critical energy storages, flows, and conversions. Measurements of energy and power densities and intensities are used throughout the book to provide an integrated framework of analysis for all segments of energetics, from planetary and bioenergetics to the human energetics of hunting-gathering and agricultural societies through modern industrial civilization. Coverage also examines the environmental and socio-economic implications of the general patterns and trends of modern energy use.
Multipebble Simulations for Alternating Automata
NASA Astrophysics Data System (ADS)
Clemente, Lorenzo; Mayr, Richard
We study generalized simulation relations for alternating Büchi automata (ABA), as well as alternating finite automata. Having multiple pebbles allows the Duplicator to "hedge her bets" and delay decisions in the simulation game, thus yielding a coarser simulation relation. We define (k1, k2)-simulations, with k1/k2 pebbles on the left/right, respectively. This generalizes previous work on ordinary simulation (i.e., (1,1)-simulation) for nondeterministic Büchi automata (NBA) in [4] and for ABA in [5], and (1,k)-simulation for NBA in [3].
The simulation of communication protocols
NASA Astrophysics Data System (ADS)
de Carvalho Viola, F. E.
1985-12-01
Simulators for communication protocols specified in the ESTELLE and LC/1 languages are developed. The general principles of protocol simulation are reviewed; ESTELLE and LC/1 are characterized; a prototype LC/1 simulator based on predicate Petri nets is described; and a detailed specification for a driving interface for an ESTELLE simulator is given.
Modifications to Axially Symmetric Simulations Using New DSMC (2007) Algorithms
NASA Technical Reports Server (NTRS)
Liechty, Derek S.
2008-01-01
Several modifications aimed at improving physical accuracy are proposed for solving axially symmetric problems building on the DSMC (2007) algorithms introduced by Bird. Originally developed to solve nonequilibrium, rarefied flows, the DSMC method is now regularly used to solve complex problems over a wide range of Knudsen numbers. These new algorithms include features such as nearest neighbor collisions excluding the previous collision partners, separate collision and sampling cells, automatically adaptive variable time steps, a modified no-time counter procedure for collisions, and discontinuous and event-driven physical processes. Axially symmetric solutions require radial weighting for the simulated molecules since the molecules near the axis represent fewer real molecules than those farther away from the axis due to the difference in volume of the cells. In the present methodology, these radial weighting factors are continuous, linear functions that vary with the radial position of each simulated molecule. It is shown that how one defines the number of tentative collisions greatly influences the mean collision time near the axis. The method by which the grid is treated for axially symmetric problems also plays an important role near the axis, especially for scalar pressure. A new method to treat how the molecules are traced through the grid is proposed to alleviate the decrease in scalar pressure at the axis near the surface. Also, a modification to the duplication buffer is proposed to vary the duplicated molecular velocities while retaining the molecular kinetic energy and axially symmetric nature of the problem.
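The radial weighting described above can be made concrete: each simulated molecule at radius r stands in for W(r) real molecules, with W a continuous, linear function of r, and molecules that move between radii are duplicated or discarded so the number of real molecules is conserved on average. The weight range and function names below are illustrative assumptions, not Bird's or the paper's actual values:

```python
def radial_weight(r, r_max, w_axis=1.0, w_edge=100.0):
    """Continuous, linear radial weighting factor for an axially
    symmetric DSMC grid: weight grows linearly from w_axis at the
    axis to w_edge at the outer boundary (illustrative range)."""
    return w_axis + (w_edge - w_axis) * (r / r_max)

def clone_or_kill_count(w_old, w_new, rand):
    """When a simulated molecule moves from weight w_old to w_new,
    keep n copies where E[n] = w_old / w_new, so the represented
    number of real molecules is conserved on average (the standard
    duplication-buffer logic; rand is a uniform [0,1) draw)."""
    ratio = w_old / w_new
    n = int(ratio)
    if rand < ratio - n:
        n += 1
    return n
```

The paper's proposed modification to the duplication buffer then perturbs the velocities of such cloned molecules, rather than copying them exactly, while preserving kinetic energy and axial symmetry.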
NASA Technical Reports Server (NTRS)
Bernstrom, G. G.
1986-01-01
Program for communications and control computer simulates pulse-code-modulated data. Software for simulating pulse-code-modulated (PCM) data from Space Shuttle during launch preparations developed for use with checkout, control, and monitor subsystem (CCMS). Facilitates testing of CCMS with data expected from main engines, external fuel tanks, operational instrumentation, general-purpose computer, backup flight system, and payload. Simulator program executes in standard CCMS hardware, requiring no new hardware.
Proceedings of the 1987 winter simulation conference
Thesen, A.; Grant, H.; Kelton, W.D.
1988-01-01
This book contains papers presented at a conference on Computerized Simulation. Topics include the following: Model generating input processes; generalized zero invariance and intelligent random numbers; computerized simulation of array processors; and computerized simulation for traffic control.
NASA Astrophysics Data System (ADS)
Anthony, Seth
Part I. Students' participation in inquiry-based chemistry laboratory curricula, and, in particular, engagement with key thinking processes in conjunction with these experiences, is linked with success at the difficult task of "transfer"---applying their knowledge in new contexts to solve unfamiliar types of problems. We investigate factors related to classroom experiences, student metacognition, and instructor feedback that may affect students' engagement in key aspects of the Model-Observe-Reflect-Explain (MORE) laboratory curriculum - production of written molecular-level models of chemical systems, describing changes to those models, and supporting those changes with reference to experimental evidence---and related behaviors. Participation in introductory activities that emphasize reviewing and critiquing of sample models and peers' models are associated with improvement in several of these key aspects. When students' self-assessments of the quality of aspects of their models are solicited, students are generally overconfident in the quality of their models, but these self-ratings are also sensitive to the strictness of grades assigned by their instructor. Furthermore, students who produce higher-quality models are also more accurate in their self-assessments, suggesting the importance of self-evaluation as part of the model-writing process. While the written feedback delivered by instructors did not have significant impacts on student model quality or self-assessments, students' resubmissions of models were significantly improved when students received "reflective" feedback prompting them to self-evaluate the quality of their models. Analysis of several case studies indicates that the content and extent of molecular-level ideas expressed in students' models are linked with the depth of discussion and content of discussion that occurred during the laboratory period, with ideas developed or personally committed to by students during the laboratory period being
NASA Astrophysics Data System (ADS)
Cheng, Yen-Ming
Simulation of the dynamics of physical systems is an important aspect of the engineering discipline for approximating the dynamics of real life. The simulation of complex multibody systems to an acceptable degree of accuracy involves the mathematical modeling and computer implementation of systems such as mechanisms and vehicles comprised of multiple parts. In this dissertation, new algorithms are developed for multibody simulation using a rather general mathematical model. Both open-tree and closed-loop topologies are implemented. Constraints, specifically, joint constraints, are investigated. A new algorithm is developed that projects the original configuration space into the unconstrained orthogonal subspace, thereby reducing the dimension of the system equations without resorting to complicated transformations. The reduced set of equations not only increases the simulation speed, but also improves the numerical accuracy of the simulation results by reducing the number of calculations performed. Constraint forces can easily be obtained if required for analyzing the multibody system. Algorithms by themselves are not immediately useful to users. A program was developed to implement the algorithms. The program, which was written in C/C++, incorporated the use of Microsoft Windows Application Programming Interfaces (Windows API), Microsoft Foundation Classes (MFC), and OpenGL graphics language. The system states are integrated by applying standard numerical techniques for integrating a set of first-order differential equations. Accelerations and constraint forces are obtained using direct and/or iterative techniques for solving a set of simultaneous equations. With today's powerful computers, a graphical interface becomes feasible to serve as the communicator between the program and the user. The software therefore includes a graphical user interface. Concurrent graphical animations of the motion of the system simulated are created. These are important to the user
Functional Generalized Additive Models.
McLean, Mathew W; Hooker, Giles; Staicu, Ana-Maria; Scheipl, Fabian; Ruppert, David
2014-01-01
We introduce the functional generalized additive model (FGAM), a novel regression model for association studies between a scalar response and a functional predictor. We model the link-transformed mean response as the integral with respect to t of F{X(t), t} where F(·,·) is an unknown regression function and X(t) is a functional covariate. Rather than having an additive model in a finite number of principal components as in Müller and Yao (2008), our model incorporates the functional predictor directly and thus our model can be viewed as the natural functional extension of generalized additive models. We estimate F(·,·) using tensor-product B-splines with roughness penalties. A pointwise quantile transformation of the functional predictor is also considered to ensure each tensor-product B-spline has observed data on its support. The methods are evaluated using simulated data and their predictive performance is compared with other competing scalar-on-function regression alternatives. We illustrate the usefulness of our approach through an application to brain tractography, where X(t) is a signal from diffusion tensor imaging at position, t, along a tract in the brain. In one example, the response is disease-status (case or control) and in a second example, it is the score on a cognitive test. R code for performing the simulations and fitting the FGAM can be found in supplemental materials available online.
Velleux, Mark L; Julien, Pierre Y; Rojas-Sanchez, Rosalia; Clements, William H; England, John F
2006-11-15
The transport and toxicity of metals at the California Gulch, Colorado mine-impacted watershed were simulated with a spatially distributed watershed model. Using a database of observations for the period 1984-2004, hydrology, sediment transport, and metals transport were simulated for a June 2003 calibration event and a September 2003 validation event. Simulated flow volumes were within approximately 10% of observed conditions. Observed ranges of total suspended solids, cadmium, copper, and zinc concentrations were also successfully simulated. The model was then used to simulate the potential impacts of a 1-in-100-year rainfall event. Driven by large flows and the corresponding soil and sediment erosion for the 1-in-100-year event, the estimated export from the watershed is 10,000 metric tons of solids, 215 kg of Cd, 520 kg of Cu, and 15,300 kg of Zn. As expressed by the cumulative criterion unit (CCU) index, metals concentrations far exceed toxic-effects thresholds, suggesting a high probability of toxic effects downstream of the gulch. More detailed Zn source analyses suggest that much of the Zn exported from the gulch originates from slag piles adjacent to the lower gulch floodplain and an old mining site located near the head of the lower gulch. PMID:17154007
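The cumulative criterion unit index mentioned above is simply a sum of concentration-to-criterion ratios across metals. A minimal sketch (the example concentrations and criterion values are illustrative, not taken from the study):

```python
def cumulative_criterion_unit(concentrations, criteria):
    """CCU index: sum over metals of measured concentration divided by
    its toxicity criterion (same units). Values well above 1 suggest a
    likelihood of toxic effects."""
    return sum(concentrations[metal] / criteria[metal] for metal in concentrations)
```

With hypothetical values such as 2.0 for Cd against a criterion of 0.25 and 300.0 for Zn against 120.0, the CCU is 8 + 2.5 = 10.5, well past a threshold of 1.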
Accelerated Molecular Dynamics Simulation of Hypersonic Flow Features in Dilute Gases
NASA Astrophysics Data System (ADS)
Schwartzentruber, Thomas; Valentini, Paolo
2009-11-01
Accurate simulation of high-altitude hypersonic flows requires advanced physical models capable of predicting the transfer of energy between translational, rotational, vibrational, and chemical modes of a gas in strong thermochemical non-equilibrium. A combined Event-Driven / Time-Driven (ED/TD) Molecular Dynamics (MD) algorithm is presented that greatly accelerates the MD simulation of dilute gases. The goal of this research is to utilize advances in computational chemistry to study thermochemical non-equilibrium processes in hypersonic flows. The ED/TD MD method identifies impending collisions (including multi-body collisions) and advances molecules directly to their next interaction, but then integrates each interaction accurately using an arbitrary interatomic potential via conventional MD with small timesteps. First, the ED/TD MD algorithm and its efficiency will be detailed. Next, ED/TD MD simulations of normal shock waves in dilute argon will be validated against experiment and against direct simulation Monte Carlo simulations employing the variable-hard-sphere collision model. Profiling of the code reveals that the relative computational time required for the MD integration of collisions is extremely low, and the potential for incorporating advanced classical and first-principles interatomic potentials within the ED/TD MD method will be discussed.
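The event-driven half of such a scheme rests on predicting the next collision analytically rather than timestepping through empty space. A 1-D toy version of that prediction step (our own simplification for illustration, not the ED/TD MD code):

```python
import math

def time_to_collision(x1, v1, x2, v2, d):
    """Time until two particles moving ballistically in 1-D come within
    interaction distance d (the next 'event'); None if they never do.

    An event-driven scheme advances the pair directly by this time, then
    hands the close-range interaction to a conventional small-timestep
    MD integrator with the full interatomic potential.
    """
    rel_x = x2 - x1
    rel_v = v2 - v1
    gap = abs(rel_x) - d
    if gap <= 0.0:
        return 0.0                       # already interacting
    closing_speed = -rel_v * math.copysign(1.0, rel_x)
    if closing_speed <= 0.0:
        return None                      # separating: no future event
    return gap / closing_speed
```

Two particles at 0 and 10, the first moving at +1 with interaction distance 1, meet after 9.0 time units; if the first moves away instead, no event is ever scheduled.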
General Relativity and Gravitation
NASA Astrophysics Data System (ADS)
Ashtekar, Abhay; Berger, Beverly; Isenberg, James; MacCallum, Malcolm
2015-07-01
Part I. Einstein's Triumph: 1. 100 years of general relativity George F. R. Ellis; 2. Was Einstein right? Clifford M. Will; 3. Cosmology David Wands, Misao Sasaki, Eiichiro Komatsu, Roy Maartens and Malcolm A. H. MacCallum; 4. Relativistic astrophysics Peter Schneider, Ramesh Narayan, Jeffrey E. McClintock, Peter Mészáros and Martin J. Rees; Part II. New Window on the Universe: 5. Receiving gravitational waves Beverly K. Berger, Karsten Danzmann, Gabriela Gonzalez, Andrea Lommen, Guido Mueller, Albrecht Rüdiger and William Joseph Weber; 6. Sources of gravitational waves. Theory and observations Alessandra Buonanno and B. S. Sathyaprakash; Part III. Gravity is Geometry, After All: 7. Probing strong field gravity through numerical simulations Frans Pretorius, Matthew W. Choptuik and Luis Lehner; 8. The initial value problem of general relativity and its implications Gregory J. Galloway, Pengzi Miao and Richard Schoen; 9. Global behavior of solutions to Einstein's equations Stefanos Aretakis, James Isenberg, Vincent Moncrief and Igor Rodnianski; Part IV. Beyond Einstein: 10. Quantum fields in curved space-times Stefan Hollands and Robert M. Wald; 11. From general relativity to quantum gravity Abhay Ashtekar, Martin Reuter and Carlo Rovelli; 12. Quantum gravity via unification Henriette Elvang and Gary T. Horowitz.
NASA Astrophysics Data System (ADS)
Fragile, P. Chris
2014-09-01
As the title suggests, the purpose of this chapter is to review the current status of numerical simulations of black hole accretion disks. This chapter focuses exclusively on global simulations of the accretion process within a few tens of gravitational radii of the black hole. Most of the simulations discussed are performed using general relativistic magnetohydrodynamic (MHD) schemes, although some mention is made of Newtonian radiation MHD simulations and smoothed particle hydrodynamics. The goal is to convey some of the exciting work that has been going on in the past few years and provide some speculation on future directions.
Simulation Framework for Teaching in Modeling and Simulation Areas
ERIC Educational Resources Information Center
De Giusti, Marisa Raquel; Lira, Ariel Jorge; Villarreal, Gonzalo Lujan
2008-01-01
Simulation is the process of executing a model that describes a system with enough detail; this model has its entities, an internal state, some input and output variables and a list of processes bound to these variables. Teaching a simulation language such as general purpose simulation system (GPSS) is always a challenge, because of the way it…
Event-Driven Messaging for Offline Data Quality Monitoring at ATLAS
NASA Astrophysics Data System (ADS)
Onyisi, Peter
2015-12-01
During LHC Run 1, the information flow through the offline data quality monitoring in ATLAS relied heavily on chains of processes polling each other's outputs for handshaking purposes. This resulted in a fragile architecture with many possible points of failure and an inability to monitor the overall state of the distributed system. We report on the status of a project undertaken during the LHC shutdown to replace the ad hoc synchronization methods with a uniform message queue system. This enables the use of standard protocols to connect processes on multiple hosts; reliable transmission of messages between possibly unreliable programs; easy monitoring of the information flow; and the removal of inefficient polling-based communication.
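The core pattern, replacing polling of another process's outputs with blocking reads from a message queue, can be sketched generically with plain Python threads (a generic sketch of the pattern, not the ATLAS system or its protocols):

```python
import queue
import threading

def run_pipeline(items):
    """Producer pushes results onto a queue; the consumer blocks on get()
    instead of polling the producer's output for handshaking."""
    q = queue.Queue()
    done = object()          # sentinel marking end of stream
    processed = []

    def producer():
        for item in items:
            q.put(item)      # announce each finished unit of work
        q.put(done)

    def consumer():
        while True:
            msg = q.get()    # blocks until a message arrives: no polling
            if msg is done:
                break
            processed.append(msg * 2)

    threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return processed
```

The same shape scales to multiple hosts by swapping `queue.Queue` for a broker-backed queue; the consumer code is unchanged, which is what makes the information flow easy to monitor centrally.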
Heinrich events driven by feedback between ocean forcing and glacial isostatic adjustment
NASA Astrophysics Data System (ADS)
Bassis, J. N.; Petersen, S. V.; Cathles, L. M. M., IV
2015-12-01
One of the most puzzling glaciological features of the past ice age is the episodic discharge of large volumes of icebergs from the Laurentide Ice Sheet, known as Heinrich events. It has been suggested that Heinrich events are caused by internal instabilities in the ice sheet (e.g. the binge-purge oscillation). A purely ice dynamic cycle, however, is at odds with the fact that every Heinrich event occurs during the cold phase of a DO cycle, implying some regional climate connection. Recent work has pointed to subsurface water warming as a trigger for Heinrich events through increased basal melting of an ice shelf extending across the Hudson Strait and connecting with the Greenland Ice Sheet. Such a large ice shelf, spanning the deepest part of the Labrador Sea, has no modern analog and limited proxy evidence. Here we use a width averaged "flowline" model of the Hudson Strait ice stream to show that Heinrich events can be triggered by ocean forcing of a grounded terminus without the need for an ice shelf. At maximum ice extent, bed topography is depressed and the terminus is more sensitive to a subsurface thermal forcing. Once triggered, the retreat is rapid, and continues until isostatic rebound of the bed causes local sea level to drop sufficiently to arrest retreat. Topography slowly rebounds, decreasing the sensitivity to ocean forcing and the ice stream re-advances at a rate that is an order of magnitude slower than collapse. This simple feedback cycle between a short-lived ocean trigger and slower isostatic adjustment can reproduce the periodicity and timing of observed Heinrich events under a range of glaciological and solid earth parameters. Our results suggest that not only does the solid Earth play an important role in regulating ice sheet stability, but that grounded marine terminating portions of ice sheets may be more sensitive to ocean forcing than previously thought.
An event driven hybrid identity management approach to privacy enhanced e-health.
Sánchez-Guerrero, Rosa; Almenárez, Florina; Díaz-Sánchez, Daniel; Marín, Andrés; Arias, Patricia; Sanvido, Fabio
2012-01-01
Credential-based authorization offers interesting advantages for ubiquitous scenarios involving limited devices such as sensors and personal mobile equipment: the verification can be done locally; it offers a more reduced computational cost than its competitors for issuing, storing, and verification; and it naturally supports rights delegation. The main drawback is the revocation of rights. Revocation requires handling potentially large revocation lists, or using protocols to check the revocation status, bringing extra communication costs not acceptable for sensors and other limited devices. Moreover, the effective revocation consent--considered as a privacy rule in sensitive scenarios--has not been fully addressed. This paper proposes an event-based mechanism empowering a new concept, the sleepyhead credentials, which allows to substitute time constraints and explicit revocation by activating and deactivating authorization rights according to events. Our approach is to integrate this concept in IdM systems in a hybrid model supporting delegation, which can be an interesting alternative for scenarios where revocation of consent and user privacy are critical. The delegation includes a SAML compliant protocol, which we have validated through a proof-of-concept implementation. This article also explains the mathematical model describing the event-based model and offers estimations of the overhead introduced by the system. The paper focus on health care scenarios, where we show the flexibility of the proposed event-based user consent revocation mechanism.
What can neuromorphic event-driven precise timing add to spike-based pattern recognition?
Akolkar, Himanshu; Meyer, Cedric; Clady, Xavier; Marre, Olivier; Bartolozzi, Chiara; Panzeri, Stefano; Benosman, Ryad
2015-03-01
This letter introduces a study to precisely measure what an increase in spike timing precision can add to spike-driven pattern recognition algorithms. The concept of generating spikes from images by converting gray levels into spike timings is currently at the basis of almost every spike-based modeling of biological visual systems. The use of images naturally leads to generating incorrect, artificial, and redundant spike timings and, more importantly, also contradicts biological findings indicating that visual processing is massively parallel and asynchronous with high temporal resolution. A new concept for acquiring visual information through pixel-individual asynchronous level-crossing sampling has been proposed in a recent generation of asynchronous neuromorphic visual sensors. Unlike conventional cameras, these sensors acquire data not at fixed points in time for the entire array but at fixed amplitude changes of their input, resulting in output that is optimally sparse in space and time, pixel-individual and precisely timed, only when new (previously unknown) information is available (event based). This letter uses the high temporal resolution spiking output of neuromorphic event-based visual sensors to show that lowering time precision degrades performance on several recognition tasks, specifically when reaching the conventional range of machine vision acquisition frequencies (30-60 Hz). The use of information theory to characterize separability between classes for each temporal resolution shows that high temporal acquisition provides up to 70% more information than conventional spikes generated from frame-based acquisition as used in standard artificial vision, thus drastically increasing the separability between classes of objects. Experiments on real data show that the amount of information loss is correlated with temporal precision.
Our information-theoretic study highlights the potentials of neuromorphic asynchronous visual sensors for both practical applications and theoretical investigations. Moreover, it suggests that representing visual information as a precise sequence of spike times as reported in the retina offers considerable advantages for neuro-inspired visual computations.
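Level-crossing sampling of the kind described above can be mimicked offline: emit a signed event each time the signal moves a fixed amount from its value at the last event. A software sketch of the sampling idea (our own simplification, not the sensor's circuit):

```python
def level_crossing_events(samples, delta):
    """Emit (index, +1/-1) events whenever the signal moves by at least
    delta from the reference level set at the last event, mimicking
    per-pixel asynchronous level-crossing (event-based) sampling."""
    events = []
    ref = samples[0]
    for i, s in enumerate(samples[1:], start=1):
        while s - ref >= delta:          # upward crossings: ON events
            ref += delta
            events.append((i, +1))
        while ref - s >= delta:          # downward crossings: OFF events
            ref -= delta
            events.append((i, -1))
    return events
```

Note that a constant signal produces no events at all, which is the source of the sparsity the abstract emphasizes: data are generated only where something changes.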
Data Albums: An Event Driven Search, Aggregation and Curation Tool for Earth Science
NASA Technical Reports Server (NTRS)
Ramachandran, Rahul; Kulkarni, Ajinkya; Maskey, Manil; Bakare, Rohan; Basyal, Sabin; Li, Xiang; Flynn, Shannon
2014-01-01
Approaches used in Earth science research such as case study analysis and climatology studies involve discovering and gathering diverse data sets and information to support the research goals. Gathering relevant data and information for case studies and climatology analysis is both tedious and time consuming. Current Earth science data systems are designed with the assumption that researchers access data primarily by instrument or geophysical parameter. In cases where researchers are interested in studying a significant event, they have to manually assemble a variety of datasets relevant to it by searching the different distributed data systems. This paper presents a specialized search, aggregation and curation tool for Earth science to address these challenges. The search tool automatically creates curated 'Data Albums', aggregated collections of information related to a specific event, containing links to relevant data files [granules] from different instruments, tools and services for visualization and analysis, and information about the event contained in news reports, images or videos to supplement research analysis. Curation in the tool is driven by an ontology-based relevancy ranking algorithm that filters out non-relevant information and data.
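A relevancy ranking of this general kind can be sketched as weighted scoring over ontology terms. This is a toy stand-in: the term weights, the matching rule, and the example documents are hypothetical, not the paper's algorithm:

```python
def rank_documents(docs, ontology_weights):
    """Score each document by summing the weights of ontology terms it
    mentions, then sort by descending score; documents scoring zero
    would be filtered out as non-relevant."""
    def score(text):
        words = set(text.lower().split())
        return sum(w for term, w in ontology_weights.items() if term in words)
    return sorted(docs, key=score, reverse=True)
```

For event-centric curation, the weights would come from an ontology of the event type (e.g. higher weight for terms tightly bound to the event concept), so loosely related material sinks to the bottom of the album.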
On the use of orientation filters for 3D reconstruction in event-driven stereo vision
Camuñas-Mesa, Luis A.; Serrano-Gotarredona, Teresa; Ieng, Sio H.; Benosman, Ryad B.; Linares-Barranco, Bernabe
2014-01-01
The recently developed Dynamic Vision Sensors (DVS) sense visual information asynchronously and code it into trains of events with sub-micro second temporal resolution. This high temporal precision makes the output of these sensors especially suited for dynamic 3D visual reconstruction, by matching corresponding events generated by two different sensors in a stereo setup. This paper explores the use of Gabor filters to extract information about the orientation of the object edges that produce the events, therefore increasing the number of constraints applied to the matching algorithm. This strategy provides more reliably matched pairs of events, improving the final 3D reconstruction. PMID:24744694
Sequence-of-events-driven automation of the deep space network
NASA Technical Reports Server (NTRS)
Hill, R., Jr.; Fayyad, K.; Smyth, C.; Santos, T.; Chen, R.; Chien, S.; Bevan, R.
1996-01-01
In February 1995, sequence-of-events (SOE)-driven automation technology was demonstrated for a Voyager telemetry downlink track at DSS 13. This demonstration entailed automated generation of an operations procedure (in the form of a temporal dependency network) from project SOE information using artificial intelligence planning technology and automated execution of the temporal dependency network using the link monitor and control operator assistant system. This article describes the overall approach to SOE-driven automation that was demonstrated, identifies gaps in SOE definitions and project profiles that hamper automation, and provides detailed measurements of the knowledge engineering effort required for automation.
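Executing a temporal dependency network requires running each operations block only after its prerequisites have completed. A minimal topological-ordering sketch (generic, with hypothetical block names; not the link monitor and control operator assistant implementation):

```python
def execution_order(network):
    """Return an execution order for a dependency network given as
    {block: [prerequisite blocks]}, via depth-first topological sort.
    Assumes the network is acyclic, as a valid procedure must be."""
    order, seen = [], set()

    def visit(node):
        if node in seen:
            return
        seen.add(node)
        for dep in network.get(node, []):
            visit(dep)       # schedule prerequisites first
        order.append(node)

    for node in network:
        visit(node)
    return order
```

A real executor would additionally attach time windows to each block, since the networks here are temporal, but the ordering constraint is the same.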
Piezoelectric MEMS switch to activate event-driven wireless sensor nodes
NASA Astrophysics Data System (ADS)
Nogami, H.; Kobayashi, T.; Okada, H.; Makimoto, N.; Maeda, R.; Itoh, T.
2013-09-01
We have developed piezoelectric microelectromechanical systems (MEMS) switches and applied them to ultra-low-power wireless sensor nodes that monitor the health condition of chickens. The piezoelectric switches have ‘S’-shaped piezoelectric cantilevers with a proof mass. Since the resonant frequency of the piezoelectric switches is around 24 Hz, we have utilized their superharmonic resonance to detect chicken movements as low as 5-15 Hz. When the vibration frequency is 4, 6 or 12 Hz, the piezoelectric switches vibrating at 0.5 m s⁻² generate 3-5 mV output voltages through superharmonic resonance. In order to detect such small piezoelectric output voltages, we employ comparator circuits that can be driven at low voltages and whose threshold voltage (Vth) can be set from 1 to 31 mV in 1 mV increments. When we set Vth at 4 mV, the piezoelectric MEMS switches vibrating below 15 Hz with amplitudes above 0.3 m s⁻² produce output voltages that turn on the comparator circuits. Similarly, setting Vth at 5 mV requires vibrations above 0.4 m s⁻² to turn on the comparator circuits, and setting Vth at 10 mV requires vibrations above 0.5 m s⁻². These results suggest that small or fast chicken movements can be selectively detected by combining piezoelectric MEMS switches with comparator circuits.
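The comparator's role, waking the node only when the piezo output crosses the programmed threshold from below, can be sketched in software. This is illustrative only; in the actual node this is done in analog hardware at Vth settings of 1-31 mV:

```python
def wake_events(voltages_mv, v_th_mv):
    """Indices at which a sampled piezo output crosses the comparator
    threshold from below; each crossing would wake the node's
    microcontroller, which otherwise stays asleep."""
    events = []
    for i in range(1, len(voltages_mv)):
        if voltages_mv[i - 1] < v_th_mv <= voltages_mv[i]:
            events.append(i)
    return events
```

Raising `v_th_mv` makes the node respond only to larger accelerations, which is exactly the selectivity the abstract describes between the 4, 5, and 10 mV settings.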
NASA Astrophysics Data System (ADS)
Okada, Hironao; Kobayashi, Takeshi; Masuda, Takashi; Itoh, Toshihiro
2009-07-01
We describe a low-power-consumption wireless sensor node designed for monitoring the condition of animals, especially chickens. The node detects variations in 24-h behavior patterns by counting, per unit time, the movements of an animal whose acceleration exceeds a threshold. Wireless sensor nodes operated intermittently are likely to miss necessary data while in their sleep state and to waste power acquiring useless data. We therefore design the node to operate only when the required acceleration is detected, using a piezoelectric accelerometer and a comparator as the wake-up source for the microcontroller unit.
Eocene global warming events driven by ventilation of oceanic dissolved organic carbon.
Sexton, Philip F; Norris, Richard D; Wilson, Paul A; Pälike, Heiko; Westerhold, Thomas; Röhl, Ursula; Bolton, Clara T; Gibbs, Samantha
2011-03-17
'Hyperthermals' are intervals of rapid, pronounced global warming known from six episodes within the Palaeocene and Eocene epochs (∼65-34 million years (Myr) ago). The most extreme hyperthermal was the ∼170 thousand year (kyr) interval of 5-7 °C global warming during the Palaeocene-Eocene Thermal Maximum (PETM, 56 Myr ago). The PETM is widely attributed to massive release of greenhouse gases from buried sedimentary carbon reservoirs, and other, comparatively modest, hyperthermals have also been linked to the release of sedimentary carbon. Here we show, using new 2.4-Myr-long Eocene deep ocean records, that the comparatively modest hyperthermals are much more numerous than previously documented, paced by the eccentricity of Earth's orbit and have shorter durations (∼40 kyr) and more rapid recovery phases than the PETM. These findings point to the operation of fundamentally different forcing and feedback mechanisms than for the PETM, involving redistribution of carbon among Earth's readily exchangeable surface reservoirs rather than carbon exhumation from, and subsequent burial back into, the sedimentary reservoir. Specifically, we interpret our records to indicate repeated, large-scale releases of dissolved organic carbon (at least 1,600 gigatonnes) from the ocean by ventilation (strengthened oxidation) of the ocean interior. The rapid recovery of the carbon cycle following each Eocene hyperthermal strongly suggests that carbon was re-sequestered by the ocean, rather than the much slower process of silicate rock weathering proposed for the PETM. Our findings suggest that these pronounced climate warming events were driven not by repeated releases of carbon from buried sedimentary sources, but, rather, by patterns of surficial carbon redistribution familiar from younger intervals of Earth history.
Event-driven sediment flux in Hueneme and Mugu submarine canyons, southern California
Xu, J. P.; Swarzenski, P.W.; Noble, M.; Li, A.-C.
2010-01-01
Vertical sediment fluxes and their dominant controlling processes in Hueneme and Mugu submarine canyons off south-central California were assessed using data from sediment traps and current meters on two moorings that were deployed for 6 months during the winter of 2007. The maxima of total particulate flux, which reached as high as 300+ g/m²/day in Hueneme Canyon, were recorded during winter storm events when high waves and river floods often coincided. During these winter storms, wave-induced resuspension of shelf sediment was a major source for the elevated sediment fluxes. Canyon rim morphology, rather than physical proximity to an adjacent river mouth, appeared to control the magnitude of sediment fluxes in these two submarine canyon systems. Episodic turbidity currents and internal bores enhanced sediment fluxes, particularly in the lower sediment traps positioned 30 m above the canyon floor. Lower excess ²¹⁰Pb activities measured in the sediment samples collected during periods of peak total particulate flux further substantiate that reworked shelf sediments, rather than newly introduced river-borne sediments, supply most of the material entering these canyons during storms.
Relativistic electron precipitation events driven by electromagnetic ion-cyclotron waves
Khazanov, G.; Sibeck, D.; Tel'nikhin, A.; Kronberg, T.
2014-08-15
We adopt a canonical approach to describe the stochastic motion of relativistic belt electrons and their scattering into the loss cone by nonlinear EMIC waves. The estimated rate of scattering is sufficient to account for the rate and intensity of bursty electron precipitation. This interaction is shown to result in particle scattering into the loss cone, forming ∼10 s microbursts of precipitating electrons. These dynamics can account for the statistical correlations between processes of energization, pitch angle scattering, and relativistic electron precipitation events, that are manifested on large temporal scales of the order of the diffusion time ∼tens of minutes.
Sequence-of-Events-Driven Automation of the Deep Space Network
NASA Astrophysics Data System (ADS)
Hill, R., Jr.; Fayyad, K.; Smyth, C.; Santos, T.; Chen, R.; Chien, S.; Bevan, R.
1995-10-01
In February 1995, sequence-of-events (SOE)-driven automation technology was demonstrated for a Voyager telemetry downlink track at DSS 13. This demonstration entailed automated generation of an operations procedure (in the form of a temporal dependency network) from project SOE information using artificial intelligence planning technology and automated execution of the temporal dependency network using the link monitor and control operator assistant system. This article describes the overall approach to SOE-driven automation that was demonstrated, identifies gaps in SOE definitions and project profiles that hamper automation, and provides detailed measurements of the knowledge engineering effort required for automation.
Taylor, M.J.; Bailey, M.A.; Pautet, P.D.; Cummer, S.A.; Jaugey, N.; Thomas, J.N.; Solorzano, N.N.; São Sabbas, F.; Holzworth, R.H.; Pinto, O.; Schuch, N.J.
2008-01-01
As part of a collaborative campaign to investigate Transient Luminous Events (TLEs) over South America, coordinated optical, ELF/VLF, and lightning measurements were made of a mesoscale thunderstorm observed on February 22-23, 2006 over northern Argentina that produced 445 TLEs within a ∼6 hour period. Here, we report comprehensive measurements of one of these events, a sprite with halo that was unambiguously associated with a large negative cloud-to-ground (CG) lightning discharge with an impulsive vertical charge moment change (ΔMQv) of -503 C km. This event was similar in its location, morphology and duration to other positive TLEs observed from this storm. However, the downward extent of the negative streamers was limited to 25 km, and their apparent brightness was lower than that of a comparable positive event. Observations of negative CG events are rare, and these measurements provide further evidence that sprites can be driven by upward as well as downward electric fields, as predicted by the conventional breakdown mechanism. Copyright 2008 by the American Geophysical Union.
Circuit simulation: some humbling thoughts
Wendt, Manfred; /Fermilab
2006-01-01
A short, very personal note on circuit simulation is presented. It neither includes theoretical background on circuit simulation nor offers an overview of available software; it simply gives some general remarks for a discussion of circuit-simulator needs in the context of the design and development of accelerator beam instrumentation circuits and systems.
New Directions in Maintenance Simulation.
ERIC Educational Resources Information Center
Miller, Gary G.
A two-phase effort was conducted to design and evaluate a maintenance simulator which incorporated state-of-the-art information in simulation and instructional technology. The particular equipment selected to be simulated was the 6883 Convert/Flight Controls Test Station. Phase I included a generalized block diagram of the computer-trainer, the…
Epistemology of knowledge based simulation
Reddy, R.
1987-04-01
Combining artificial intelligence concepts with traditional simulation methodologies yields a powerful design support tool known as knowledge-based simulation. This approach turns a descriptive simulation tool into a prescriptive one that recommends specific goals. Much work remains to be done in the areas of general goal processing and explanation of recommendations.
Simulator sickness in an army simulator.
Braithwaite, M G; Braithwaite, B D
1990-01-01
Simulator sickness describes symptoms reported by aircrew during or after flight simulator training. Some features are common to motion sickness, but others, which are unusual during real flight, are believed to result specifically from the simulator environment. This paper describes the results of a questionnaire study examining the incidence of, and factors influencing, simulator sickness in an army training system. Case histories are described and conclusions drawn with respect to health and safety, training, and the effect on flight operations. One hundred and fifteen aircrew were registered in the questionnaire study. Data were collected from a history questionnaire, a post-sortie report, and a delayed report form. Sixty-nine per cent of aircrew gave a history of symptoms in the simulator, and 59.9 per cent experienced at least one symptom during the study period, although few symptoms were rated as being other than slight. Only 3.6 per cent of subjects reported symptoms of disequilibrium. Comparative analysis of the results was performed after scoring symptoms to produce a sickness rating. This showed an association between simulator-induced sickness and greater flying experience; adaptation to the simulator environment; that a history of sea sickness may predict susceptibility to simulator sickness; and no association between crew role and simulator sickness. Although some authorities believe simulator sickness to be a potential flight safety hazard, this study provided little evidence of that. Guidelines for the prevention of the problem are presented now that many factors have been identified. A general policy to 'ground' aircrew for a period following simulator training is not necessary, but severe cases should be assessed individually.
GENERALIZED DOUBLE PARETO SHRINKAGE.
Armagan, Artin; Dunson, David B; Lee, Jaeyong
2013-01-01
We propose a generalized double Pareto prior for Bayesian shrinkage estimation and inferences in linear models. The prior can be obtained via a scale mixture of Laplace or normal distributions, forming a bridge between the Laplace and Normal-Jeffreys' priors. While it has a spike at zero like the Laplace density, it also has a Student's t-like tail behavior. Bayesian computation is straightforward via a simple Gibbs sampling algorithm. We investigate the properties of the maximum a posteriori estimator, as sparse estimation plays an important role in many problems, reveal connections with some well-established regularization procedures, and show some asymptotic results. The performance of the prior is tested through simulations and an application.
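The scale-mixture construction mentioned in the abstract can be illustrated directly. The sketch below draws from a generalized double Pareto density by mixing a Laplace distribution over a Gamma-distributed rate, one standard route to this prior; the specific parameterization and values here are my assumptions for illustration, not the paper's:

```python
import random

def gdp_sample(alpha, eta, rng=random):
    """One draw from the generalized double Pareto distribution via a
    Laplace scale-mixture representation (illustrative parameterization):
        beta | lam ~ Laplace(rate=lam),   lam ~ Gamma(alpha, rate=eta).
    Marginally f(beta) = (alpha / (2*eta)) * (1 + |beta|/eta)**-(alpha+1),
    so P(|beta| < x) = 1 - (1 + x/eta)**-alpha."""
    lam = rng.gammavariate(alpha, 1.0 / eta)  # Gamma(shape, scale=1/rate)
    mag = rng.expovariate(lam)                # |Laplace(rate=lam)| is Exp(lam)
    return mag if rng.random() < 0.5 else -mag

random.seed(0)
alpha, eta = 3.0, 1.0
draws = [gdp_sample(alpha, eta) for _ in range(200_000)]

# Closed-form check of the mixture: P(|beta| < eta) = 1 - 2**(-alpha).
inside = sum(abs(b) < eta for b in draws) / len(draws)
```

The density keeps a Laplace-like spike at zero while the Gamma mixing produces the Student-t-like polynomial tails noted in the abstract.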
Simulating granular materials by energy minimization
NASA Astrophysics Data System (ADS)
Krijgsman, D.; Luding, S.
2016-03-01
Discrete element methods are extremely helpful in understanding the complex behaviors of granular media, as they give valuable insight into all internal variables of the system. In this paper, a novel discrete element method for performing simulations of granular media is presented, based on the minimization of the potential energy in the system. Contrary to most discrete element methods (i.e., soft-particle method, event-driven method, and non-smooth contact dynamics), the system does not evolve by (approximately) integrating Newton's equations of motion in time, but rather by searching for mechanical equilibrium solutions for the positions of all particles in the system, which is mathematically equivalent to locally minimizing the potential energy. The new method allows for the rapid creation of jammed initial conditions (to be used for further studies) and for the simulation of quasi-static deformation problems. The major advantage of the new method is that it allows for truly static deformations. The system does not evolve with time, but rather with the externally applied strain or load, so that there is no kinetic energy in the system, in contrast to other quasi-static methods. The performance of the algorithm is tested for both types of application by examining the number of iterations required for the system to converge to a stable solution. For each single iteration, the required computational effort scales linearly with the number of particles. During the process of creating initial conditions, the required number of iterations for two-dimensional systems scales with the square root of the number of particles in the system. The required number of iterations increases for systems closer to the jamming packing fraction. For a quasi-static pure shear deformation simulation, the results of the new method are validated by regular soft-particle dynamics simulations. The energy minimization algorithm is able to capture the evolution of the
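A minimal sketch of the energy-minimization idea, under simplifying assumptions of my own (a one-dimensional column of grains, linear overlap springs, plain gradient descent rather than the paper's actual minimizer): a static packing is found by relaxing positions until the potential-energy gradient vanishes, with no time integration at all.

```python
def minimize_packing(z, radius=0.5, m=1.0, g=9.81, k=1e4,
                     lr=1e-5, tol=1e-8, max_iter=200_000):
    """Relax a 1-D column of grains onto a floor at z = 0 by gradient
    descent on the potential energy
        E = sum_i m*g*z_i + (k/2) * (floor and pair overlaps)**2.
    Illustrative sketch only; parameters and minimizer are made up."""
    z = list(z)
    n = len(z)
    for _ in range(max_iter):
        grad = [m * g] * n                      # gravity term dE/dz_i
        for i in range(n):
            d_floor = radius - z[i]             # floor overlap, if positive
            if d_floor > 0:
                grad[i] += -k * d_floor         # floor spring pushes grain up
            for j in range(i + 1, n):
                d = 2 * radius - abs(z[j] - z[i])
                if d > 0:                       # grains i and j overlap
                    s = 1 if z[j] > z[i] else -1
                    grad[i] += s * k * d        # contact spring, equal and
                    grad[j] += -s * k * d       # opposite on the pair
        if max(abs(gc) for gc in grad) < tol * k:
            break                               # mechanical equilibrium found
        z = [zi - lr * gi for zi, gi in zip(z, grad)]
    return z

# Two grains dropped roughly in a column settle into static equilibrium:
# the floor spring carries the weight of both grains, and the pair
# contact carries the weight of the top grain alone.
z = minimize_packing([0.6, 1.7])
```

At convergence the overlaps satisfy force balance exactly, which is the static limit that time-stepping inelastic hard-sphere methods struggle to reach.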
49 CFR 572.151 - General description.
Code of Federal Regulations, 2013 CFR
2013-10-01
..., Alpha Version § 572.151 General description. (a) The 12-month-old-infant crash test dummy is described... no contact between metallic elements throughout the range of motion or under simulated crash...
49 CFR 572.151 - General description.
Code of Federal Regulations, 2014 CFR
2014-10-01
..., Alpha Version § 572.151 General description. (a) The 12-month-old-infant crash test dummy is described... no contact between metallic elements throughout the range of motion or under simulated crash...
49 CFR 572.151 - General description.
Code of Federal Regulations, 2011 CFR
2011-10-01
..., Alpha Version § 572.151 General description. (a) The 12-month-old-infant crash test dummy is described... no contact between metallic elements throughout the range of motion or under simulated crash...
49 CFR 572.151 - General description.
Code of Federal Regulations, 2012 CFR
2012-10-01
..., Alpha Version § 572.151 General description. (a) The 12-month-old-infant crash test dummy is described... no contact between metallic elements throughout the range of motion or under simulated crash...
A New Multiscale Technique for Time-Accurate Geophysics Simulations
NASA Astrophysics Data System (ADS)
Omelchenko, Y. A.; Karimabadi, H.
2006-12-01
Large-scale geophysics systems are frequently described by multiscale reactive flow models (e.g., wildfire and climate models, multiphase flows in porous rocks, etc.). Accurate and robust simulations of such systems by traditional time-stepping techniques face a formidable computational challenge. Explicit time integration suffers from global (CFL and accuracy) timestep restrictions due to inhomogeneous convective and diffusion processes, as well as closely coupled physical and chemical reactions. Application of adaptive mesh refinement (AMR) to such systems may not always be sufficient, since its success critically depends on a careful choice of domain refinement strategy. On the other hand, implicit and timestep-splitting integrations may result in a considerable loss of accuracy when fast transients in the solution become important. To address this issue, we developed an alternative explicit approach to time-accurate integration of such systems: Discrete-Event Simulation (DES). DES enables asynchronous computation by automatically adjusting the CPU resources in accordance with local timescales. This is done by encapsulating flux-conservative updates of numerical variables in the form of events, whose execution and synchronization is explicitly controlled by imposing accuracy and causality constraints. As a result, at each time step DES self-adaptively updates only a fraction of the global system state, which eliminates unnecessary computation of inactive elements. DES can be naturally combined with various mesh generation techniques. The event-driven paradigm results in robust and fast simulation codes, which can be efficiently parallelized via a new preemptive event processing (PEP) technique. We discuss applications of this novel technology to time-dependent diffusion-advection-reaction and CFD models representative of various geophysics applications.
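The event-driven paradigm described above can be illustrated on a toy flux-conservative diffusion problem (my sketch, not the authors' code): each cell face schedules its own next update from its local rate, stale events are discarded, and firing a face reschedules its neighbors, which is the causality constraint in miniature.

```python
import heapq

def des_diffusion(u, d, t_end, du_max=0.02):
    """Asynchronous discrete-event sketch of flux-conservative 1-D
    diffusion.  Each cell face is an event: firing it exchanges the flux
    accrued since its last update between its two cells (exactly
    conservative, since the exchange is a paired +/-), then reschedules
    itself and its neighbors.  Active faces fire often; quiescent faces
    sleep.  Illustrative only -- thresholds and capping are my choices."""
    u = list(u)
    n = len(u)
    t_face = [0.0] * (n - 1)   # last update time of each face
    sched = [0.0] * (n - 1)    # currently valid firing time per face
    heap = []

    def dt_face(j):            # local timestep: change each cell <= du_max
        r = d * abs(u[j + 1] - u[j])
        return du_max / r if r > 1e-12 else t_end

    def schedule(j, now):
        if 0 <= j < n - 1:
            sched[j] = now + dt_face(j)
            heapq.heappush(heap, (sched[j], j))   # old entries become stale

    for j in range(n - 1):
        schedule(j, 0.0)
    while heap and heap[0][0] < t_end:
        t, j = heapq.heappop(heap)
        if t != sched[j]:
            continue                         # stale event: face rescheduled
        diff = u[j + 1] - u[j]
        flux = d * diff * (t - t_face[j])    # accrued pairwise exchange
        if abs(flux) > abs(diff) / 2:        # never overshoot local equilibrium
            flux = diff / 2
        u[j] += flux
        u[j + 1] -= flux
        t_face[j] = t
        for k in (j - 1, j, j + 1):          # causality: neighbors feel it
            schedule(k, t)
    return u

# A sharp spike relaxes; only faces near the spike fire frequently,
# and the total is conserved to round-off.
u0 = [0.0] * 10
u0[4] = 1.0
u1 = des_diffusion(u0, d=1.0, t_end=0.5)
```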
ERIC Educational Resources Information Center
Stebbins, Robert C.; Allen, Brockenbrough
1975-01-01
Described are simulations that can be used to illustrate evolution by natural selection. Suggestions for simulating phenomena such as adaptive radiation, color match to background and vision of predators are offered. (BR)
Computer Modeling and Simulation
Pronskikh, V. S.
2014-05-09
Verification and validation of computer codes and models used in simulation are two aspects of scientific practice of high importance that have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to a model's relation to the real world and its intended use. It has been argued that because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model's general inadequacy to its target is to blame when the model fails. I argue that in order to disentangle verification and validation, a clear distinction between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimenting with them) needs to be made. Holding to that distinction, I propose to relate verification (based on theoretical strategies such as inferences) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement, I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviating the Duhem problem for verification and validation that is generally applicable in practice and based on differences in their epistemic strategies and scopes.
Bolometer Simulation Using SPICE
NASA Technical Reports Server (NTRS)
Jones, Hollis H.; Aslam, Shahid; Lakew, Brook
2004-01-01
A general model is presented that assimilates the thermal and electrical properties of the bolometer; this block model demonstrates the effect of Electro-Thermal Feedback (ETF) on the bolometer's performance. This methodology is used to construct a SPICE model that, by way of analogy, combines the thermal and electrical phenomena into one simulation session. The resulting circuit diagram is presented and discussed.
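The thermal-electrical analogy behind such a SPICE model can be sketched numerically: the thermal node behaves like an RC circuit (heat capacity as capacitance, thermal conductance as conductance) driven by Joule power, and the temperature-dependent resistance closes the ETF loop. All parameter values below are illustrative assumptions of mine, not from the paper.

```python
def bolometer_step(t_end, dt=1e-5, c=1e-9, g=1e-7, t_bath=0.1,
                   i_bias=1e-6, r0=1.0, alpha=0.5, t0=0.1):
    """Forward-Euler sketch of a current-biased bolometer's electro-
    thermal feedback loop.  Thermal 'circuit': heat capacity c charges
    through thermal conductance g to the bath, driven by Joule power
    i**2 * R(T); R(T) = r0*(1 + alpha*(T - t0)) couples the electrical
    side back into the thermal one (the ETF effect)."""
    temp = t_bath
    for _ in range(round(t_end / dt)):
        r = r0 * (1.0 + alpha * (temp - t0))   # temperature-dependent resistance
        p_joule = i_bias ** 2 * r              # electrical dissipation
        p_leak = g * (temp - t_bath)           # heat flow to the bath
        temp += dt * (p_joule - p_leak) / c    # thermal node equation
    return temp

# With current bias and alpha > 0, ETF raises the operating point above
# the bath: the steady state solves g*(T - t_bath) = i**2 * R(T).
t_ss = bolometer_step(t_end=0.1)
```

The same two coupled equations are what the SPICE analogy solves with circuit primitives instead of explicit time stepping.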
NASA Astrophysics Data System (ADS)
Polomarov, Oleg; Theodosiou, Constantine; Kaganovich, Igor
2003-10-01
A self-consistent system of equations for the kinetic description of non-local, non-uniform, nearly collisionless plasmas of low-pressure discharges is presented. The system consists of a non-local conductivity operator and a kinetic equation for the electron energy distribution function (EEDF) averaged over fast electron bounce motions. A Fast Fourier Transform (FFT) method was applied to speed up the numerical simulations. The importance of accounting for the non-uniform plasma density profile in computing the current density profile and the EEDF is demonstrated. Effects of plasma non-uniformity on electron heating in an rf electric field have also been studied. An enhancement of the electron heating due to the bounce resonance between the electron bounce motion and the rf electric field has been observed. Additional information on the subject is posted at http://www.pppl.gov/pub_report/2003/PPPL-3814-abs.html and at http://arxiv.org/abs/physics/0211009
Advanced layout parameter extraction and detailed timing simulation of GaAs gate arrays in MagiCAD
NASA Astrophysics Data System (ADS)
Buchs, Kevin J.; Rowlands, David O.; Prentice, Jeffrey A.; Gilbert, Barry K.
1990-10-01
This paper discusses the features and function of three specific computer aided design tools contained in the Mayo Graphical Integrated Computer Aided Design (MagiCAD) system, a complete electronic CAD software package optimized for the design and layout of semicustom (i.e., gate array) Gallium Arsenide (GaAs) integrated circuits. The first design tool, the Layout Extractor, processes data from placed and routed gate arrays. The Extractor verifies that the layout represents the original logic design and calculates the parasitic capacitance of the individual wiring segments in the logic nets after they have been routed. The capacitance information as calculated by the Layout Extractor is significant in GaAs work, since the delay in signals traveling through the routing is often much greater than the delay of the signals traveling through the gates themselves. Once the capacitance data has been processed by the Layout Extractor, it becomes available to the second CAD tool discussed here, the MagiCAD timing simulation program Sting. Sting, a digital event-driven simulator, depends on user generation of C-language-like behavioral models for all root nodes to be simulated. Through the use of delays calculated by the Extractor from the actual routing and input pin capacitances, Sting assures that the entire chip design will operate correctly at the intended clock rate. The third design tool is a set of programs allowing simulation of the electromagnetic behavior of integrated circuit packages circuit
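The event-driven timing-simulation style embodied by Sting can be sketched in miniature (a toy of mine, not Sting's actual engine or its C-like model format): executing an event on a net evaluates the fanout gates and creates new events delayed by each output net's delay, as the layout extractor would supply from routing capacitance.

```python
import heapq

def simulate(netlist, delays, inputs, t_end):
    """Tiny event-driven logic simulator.  `netlist` maps each net to
    (behavioral_func, input_nets); `delays` holds a per-net delay such as
    a layout extractor would compute.  An event is (time, net, value);
    executing it may create delayed events on fanout nets.  Nets start
    unknown (None), so the first computed value always propagates;
    unchanged values are filtered out (no new events)."""
    value = {net: None for net in list(netlist) + list(inputs)}
    fanout = {}
    for out, (_, ins) in netlist.items():
        for net in ins:
            fanout.setdefault(net, []).append(out)
    events = [(t, net, v) for net, seq in inputs.items() for t, v in seq]
    heapq.heapify(events)
    waves = {net: [] for net in netlist}     # recorded transitions per net
    while events and events[0][0] <= t_end:
        t, net, v = heapq.heappop(events)
        if value[net] == v:
            continue                          # no change: event filtered
        value[net] = v
        if net in waves:
            waves[net].append((t, v))
        for out in fanout.get(net, []):       # event creation on fanout
            func, ins = netlist[out]
            heapq.heappush(
                events,
                (t + delays[out], out, func(*(value[i] for i in ins))))
    return waves

# Three-inverter chain: a rising edge on `a` at t=0 ripples through,
# each stage adding its own (made-up) extracted net delay.
inv = lambda x: 1 - x
netlist = {"n1": (inv, ["a"]), "n2": (inv, ["n1"]), "n3": (inv, ["n2"])}
delays = {"n1": 3, "n2": 5, "n3": 2}
waves = simulate(netlist, delays, {"a": [(0, 1)]}, t_end=100)
```

The arrival time at `n3` is the sum of the per-net delays, which is exactly the timing check such a simulator performs against the intended clock period.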
Vertical motion simulator familiarization guide
NASA Technical Reports Server (NTRS)
Danek, George L.
1993-01-01
The Vertical Motion Simulator Familiarization Guide provides a synoptic description of the Vertical Motion Simulator (VMS) and descriptions of the various simulation components and systems. The intended audience is the community of scientists and engineers who employ the VMS for research and development. The concept of a research simulator system is introduced, and the building-block nature of the VMS is emphasized. Individual sections describe all the hardware elements in terms of general properties and capabilities. Also included are an example of a typical VMS simulation, which graphically illustrates the composition of the system and shows the signal flow among the elements, and a glossary of specialized terms, abbreviations, and acronyms.