Science.gov

Sample records for general event-driven simulator

  1. Event-driven simulation in SELMON: An overview of EDSE

    NASA Technical Reports Server (NTRS)

    Rouquette, Nicolas F.; Chien, Steve A.; Charest, Leonard, Jr.

    1992-01-01

    EDSE (event-driven simulation engine), a model-based event-driven simulator implemented for SELMON, a tool for sensor selection and anomaly detection in real-time monitoring, is described. The simulator is used in conjunction with a causal model to predict future behavior of the model from observed data. The behavior of the causal model is interpreted as equivalent to the behavior of the physical system being modeled. An overview of the functionality of the simulator and the model-based event-driven simulation paradigm on which it is based is provided. Included are high-level descriptions of the following key properties: event consumption and event creation, iterative simulation, and synchronization and filtering of monitoring data from the physical system. Finally, how EDSE stands with respect to the relevant open issues of discrete-event and model-based simulation is discussed.
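
    For readers unfamiliar with the paradigm, the event-consumption/event-creation loop that such engines share can be sketched in a few lines of Python; the structure below is a generic illustration, not EDSE code, and all names are hypothetical.

        import heapq

        def run(initial_events, handlers, t_end):
            """Minimal discrete-event loop: pop the earliest event, let its handler
            consume it and possibly create new events, repeat until the horizon.
            Events are (time, kind, payload) tuples; equal-time ties would need an
            extra tie-breaking counter in a production engine."""
            queue = list(initial_events)
            heapq.heapify(queue)
            while queue:
                t, kind, payload = heapq.heappop(queue)   # consume the earliest event
                if t > t_end:
                    break
                for new_event in handlers[kind](t, payload):  # handlers may create events
                    heapq.heappush(queue, new_event)

        # usage: a "tick" event that reschedules itself every 2.5 time units
        trace = []
        def tick(t, payload):
            trace.append(t)
            return [(t + 2.5, "tick", None)]
        run([(0.0, "tick", None)], {"tick": tick}, t_end=10.0)
        print(trace)   # [0.0, 2.5, 5.0, 7.5, 10.0]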

  2. Cellular Dynamic Simulator: An Event Driven Molecular Simulation Environment for Cellular Physiology

    PubMed Central

    Byrne, Michael J.; Waxham, M. Neal; Kubota, Yoshihisa

    2010-01-01

    In this paper, we present the Cellular Dynamic Simulator (CDS) for simulating diffusion and chemical reactions within crowded molecular environments. CDS is based on a novel event-driven algorithm specifically designed for precise calculation of the timing of collisions, reactions and other events for each individual molecule in the environment. Generic mesh-based compartments allow the creation/importation of very simple or detailed cellular structures that exist in a 3D environment. Multiple levels of compartments and static obstacles can be used to create a dense environment to mimic cellular boundaries and the intracellular space. The CDS algorithm takes into account volume exclusion and molecular crowding that may impact signaling cascades in small sub-cellular compartments such as dendritic spines. With the CDS, we can simulate simple enzyme reactions, aggregation, and channel transport, as well as highly complicated chemical reaction networks of both freely diffusing and membrane-bound multi-protein complexes. Components of the CDS are generally defined such that the simulator can be applied to a wide range of environments in terms of scale and level of detail. Through an initialization GUI, a simple simulation environment can be created and populated within minutes, yet is powerful enough to design complex 3D cellular architecture. The initialization tool allows visual confirmation of the environment construction prior to execution by the simulator. This paper describes the CDS algorithm and its design and implementation, and provides an overview of the types of features available; the utility of those features is highlighted in demonstrations. PMID:20361275
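
    The "precise calculation of the timing of collisions" that drives such simulators reduces, for two spheres moving ballistically between events, to solving a quadratic for the instant at which their separation equals the sum of their radii. The following is a generic sketch of that calculation (not CDS code):

        import numpy as np

        def time_to_collision(r1, v1, R1, r2, v2, R2):
            """Earliest future time at which two spheres on straight-line paths touch,
            or None if they never do (positions r and velocities v are numpy arrays)."""
            dr, dv = r2 - r1, v2 - v1
            b = np.dot(dr, dv)
            if b >= 0:                                 # centres are moving apart
                return None
            a = np.dot(dv, dv)
            c = np.dot(dr, dr) - (R1 + R2) ** 2
            disc = b * b - a * c
            if disc < 0:                               # trajectories miss each other
                return None
            return (-b - np.sqrt(disc)) / a

        print(time_to_collision(np.zeros(3), np.array([1.0, 0.0, 0.0]), 0.5,
                                np.array([3.0, 0.0, 0.0]), np.zeros(3), 0.5))
        # -> 2.0: the spheres touch when their centres are 1.0 apart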

  3. Simulating large-scale pedestrian movement using CA and event driven model: Methodology and case study

    NASA Astrophysics Data System (ADS)

    Li, Jun; Fu, Siyao; He, Haibo; Jia, Hongfei; Li, Yanzhong; Guo, Yi

    2015-11-01

    Large-scale regional evacuation is an important part of national security emergency response planning. Emergency evacuation of large commercial shopping areas, a typical service system, is an active research topic. A systematic methodology based on Cellular Automata with a Dynamic Floor Field and an event-driven model has been proposed, and the methodology has been examined within the context of a case study involving evacuation from a commercial shopping mall. Pedestrian walking is based on Cellular Automata and the event-driven model. In this paper, the event-driven model is adopted to simulate pedestrian movement patterns, and the simulation process is divided into a normal situation and emergency evacuation. The model is composed of four layers: an environment layer, a customer layer, a clerk layer and a trajectory layer. For the simulation of pedestrian movement routes, the model takes into account the purchase intentions of customers and the density of pedestrians. Based on the evacuation model combining Cellular Automata with the Dynamic Floor Field and the event-driven model, the behavioral characteristics of customers and clerks under normal conditions and during emergency evacuation can be reflected. The distribution of individual evacuation times as a function of initial positions and the dynamics of the evacuation process are studied. Our results indicate that the evacuation model using the combination of Cellular Automata with a Dynamic Floor Field and event-driven scheduling can be used to simulate the evacuation of pedestrian flows in indoor areas with complicated surroundings and to investigate the layout of a shopping mall.
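
    In floor-field cellular automata of this kind, each pedestrian chooses its next cell with a probability that increases with the static floor field (which encodes distance to the exit) and excludes occupied cells. The fragment below is a minimal, hypothetical illustration of one such update step, not the authors' model:

        import math, random

        def step(pos, floor_field, occupied, k=2.0):
            """One CA update for a pedestrian at `pos`: stay or move to a free
            neighbouring cell with probability proportional to exp(k * floor field)."""
            x, y = pos
            moves = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
            cells = [(x + dx, y + dy) for dx, dy in moves
                     if (x + dx, y + dy) in floor_field
                     and ((dx, dy) == (0, 0) or (x + dx, y + dy) not in occupied)]
            weights = [math.exp(k * floor_field[c]) for c in cells]
            return random.choices(cells, weights=weights, k=1)[0]

        # usage: a 3-cell corridor with the exit on the right
        field = {(0, 0): 0.0, (1, 0): 0.5, (2, 0): 1.0}
        print(step((0, 0), field, occupied={(0, 0)}))   # usually moves to (1, 0)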

  4. An Event-Driven Hybrid Molecular Dynamics and Direct Simulation Monte Carlo Algorithm

    SciTech Connect

    Donev, A; Garcia, A L; Alder, B J

    2007-07-30

    A novel algorithm is developed for the simulation of polymer chains suspended in a solvent. The polymers are represented as chains of hard spheres tethered by square wells and interact with the solvent particles with hard-core potentials. The algorithm uses event-driven molecular dynamics (MD) for the simulation of the polymer chain and the interactions between the chain beads and the surrounding solvent particles. The interactions between the solvent particles themselves are not treated deterministically as in event-driven algorithms; rather, the momentum and energy exchange in the solvent is determined stochastically using the Direct Simulation Monte Carlo (DSMC) method. The coupling between the solvent and the solute is consistently represented at the particle level; however, unlike full MD simulations of both the solvent and the solute, the spatial structure of the solvent is ignored. The algorithm is described in detail and applied to the study of the dynamics of a polymer chain tethered to a hard wall subjected to uniform shear. The algorithm closely reproduces full MD simulations with two orders of magnitude greater efficiency. Results do not confirm the existence of periodic (cycling) motion of the polymer chain.
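
    The stochastic solvent treatment referred to above is the standard DSMC collision step: within a cell, randomly selected pairs exchange momentum by rotating their relative velocity to a random direction while conserving momentum and kinetic energy. A generic sketch of one such collision between equal-mass particles (not the authors' code):

        import numpy as np

        def dsmc_collide(v1, v2, rng=None):
            """One DSMC collision for equal masses: keep the centre-of-mass velocity,
            rotate the relative velocity to a random direction (isotropic scattering),
            so momentum and kinetic energy are conserved exactly."""
            rng = rng or np.random.default_rng()
            v_cm = 0.5 * (v1 + v2)
            g = np.linalg.norm(v1 - v2)                 # relative speed is preserved
            cos_t = rng.uniform(-1.0, 1.0)
            sin_t = np.sqrt(1.0 - cos_t ** 2)
            phi = rng.uniform(0.0, 2.0 * np.pi)
            g_new = g * np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
            return v_cm + 0.5 * g_new, v_cm - 0.5 * g_new

        u1, u2 = dsmc_collide(np.array([1.0, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0]))
        print(u1 + u2, np.dot(u1, u1) + np.dot(u2, u2))   # momentum stays ~0, energy stays 2.0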

  5. NEVESIM: event-driven neural simulation framework with a Python interface.

    PubMed

    Pecevski, Dejan; Kappel, David; Jonke, Zeno

    2014-01-01

    NEVESIM is a software package for event-driven simulation of networks of spiking neurons with a fast simulation core in C++, and a scripting user interface in the Python programming language. It supports simulation of heterogeneous networks with different types of neurons and synapses, and can be easily extended by the user with new neuron and synapse types. To enable heterogeneous networks and extensibility, NEVESIM is designed to decouple the simulation logic of communicating events (spikes) between the neurons at a network level from the implementation of the internal dynamics of individual neurons. In this paper we will present the simulation framework of NEVESIM, its concepts and features, as well as some aspects of the object-oriented design approaches and simulation strategies that were utilized to efficiently implement the concepts and functionalities of the framework. We will also give an overview of the Python user interface, its basic commands and constructs, and also discuss the benefits of integrating NEVESIM with Python. One of the valuable capabilities of the simulator is to simulate exactly and efficiently networks of stochastic spiking neurons from the recently developed theoretical framework of neural sampling. This functionality was implemented as an extension on top of the basic NEVESIM framework. Altogether, the intended purpose of the NEVESIM framework is to provide a basis for further extensions that support simulation of various neural network models incorporating different neuron and synapse types that can potentially also use different simulation strategies. PMID:25177291

  6. NEVESIM: event-driven neural simulation framework with a Python interface

    PubMed Central

    Pecevski, Dejan; Kappel, David; Jonke, Zeno

    2014-01-01

    NEVESIM is a software package for event-driven simulation of networks of spiking neurons with a fast simulation core in C++, and a scripting user interface in the Python programming language. It supports simulation of heterogeneous networks with different types of neurons and synapses, and can be easily extended by the user with new neuron and synapse types. To enable heterogeneous networks and extensibility, NEVESIM is designed to decouple the simulation logic of communicating events (spikes) between the neurons at a network level from the implementation of the internal dynamics of individual neurons. In this paper we will present the simulation framework of NEVESIM, its concepts and features, as well as some aspects of the object-oriented design approaches and simulation strategies that were utilized to efficiently implement the concepts and functionalities of the framework. We will also give an overview of the Python user interface, its basic commands and constructs, and also discuss the benefits of integrating NEVESIM with Python. One of the valuable capabilities of the simulator is to simulate exactly and efficiently networks of stochastic spiking neurons from the recently developed theoretical framework of neural sampling. This functionality was implemented as an extension on top of the basic NEVESIM framework. Altogether, the intended purpose of the NEVESIM framework is to provide a basis for further extensions that support simulation of various neural network models incorporating different neuron and synapse types that can potentially also use different simulation strategies. PMID:25177291

  7. Self-Adaptive Event-Driven Simulation of Multi-Scale Plasma Systems

    NASA Astrophysics Data System (ADS)

    Omelchenko, Yuri; Karimabadi, Homayoun

    2005-10-01

    Multi-scale plasmas pose a formidable computational challenge. Explicit time-stepping models suffer from the global CFL restriction. Efficient application of adaptive mesh refinement (AMR) to systems with irregular dynamics (e.g. turbulence, diffusion-convection-reaction, particle acceleration etc.) may be problematic. To address these issues, we developed an alternative approach to time stepping: self-adaptive discrete-event simulation (DES). DES has its origins in operations research, war games and telecommunications. We combine finite-difference and particle-in-cell techniques with this methodology under two caveats: (1) a local time increment dt for a discrete quantity f can be expressed in terms of a physically meaningful quantum value df; (2) f is considered to be modified only when its change exceeds df. Event-driven time integration is self-adaptive as it makes use of causality rules rather than parametric time dependencies. This technique enables asynchronous, flux-conservative updates of the solution in accordance with local temporal scales, removes the curse of the global CFL condition, eliminates unnecessary computation in inactive spatial regions and results in robust and fast parallelizable codes. It can be naturally combined with various mesh refinement techniques. We discuss applications of this novel technology to diffusion-convection-reaction systems and hybrid simulations of magnetosonic shocks.
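
    The two caveats amount to quantized, asynchronous updating: each quantity carries its own local time and is advanced (and its neighbours notified) only when its predicted change reaches one quantum df. A hedged, single-variable sketch of that rule, with hypothetical names:

        def time_to_next_event(dfdt, df):
            """Local time increment until the quantity changes by one quantum df (caveat 1)."""
            return float("inf") if dfdt == 0.0 else abs(df / dfdt)

        def advance(f, dfdt, df, t_local, t_target):
            """Advance f asynchronously: f is modified only in whole quanta (caveat 2),
            so no events and no recomputation occur between quantum crossings."""
            while t_local + time_to_next_event(dfdt, df) <= t_target:
                t_local += time_to_next_event(dfdt, df)
                f += df if dfdt > 0 else -df            # one quantum per event
            return f, t_local

        print(advance(f=0.0, dfdt=0.25, df=0.1, t_local=0.0, t_target=1.0))
        # two quantum events fire in [0, 1]; output is approximately (0.2, 0.8)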

  8. Efficient event-driven simulations shed new light on microtubule organization in the plant cortical array

    NASA Astrophysics Data System (ADS)

    Tindemans, Simon H.; Deinum, Eva E.; Lindeboom, Jelmer J.; Mulder, Bela M.

    2014-04-01

    The dynamics of the plant microtubule cytoskeleton is a paradigmatic example of the complex spatiotemporal processes characterising life at the cellular scale. This system is composed of large numbers of spatially extended particles, each endowed with its own intrinsic stochastic dynamics, and is capable of non-equilibrium self-organisation through collisional interactions of these particles. To elucidate the behaviour of such a complex system requires not only conceptual advances, but also the development of appropriate computational tools to simulate it. As the number of parameters involved is large and the behaviour is stochastic, it is essential that these simulations be fast enough to allow for an exploration of the phase space and the gathering of sufficient statistics to accurately pin down the average behaviour as well as the magnitude of fluctuations around it. Here we describe a simulation approach that meets this requirement by adopting an event-driven methodology that encompasses both the spontaneous stochastic changes in microtubule state and the deterministic collisions. In contrast with finite-time-step simulations, this technique is intrinsically exact, as well as several orders of magnitude faster, which enables ordinary PC hardware to simulate systems of ~10^3 microtubules on a time scale ~10^3 times faster than real time. In addition we present new tools for the analysis of microtubule trajectories on curved surfaces. We illustrate the use of these methods by addressing a number of outstanding issues regarding the influence of various parameters on the transition from an isotropic to an aligned and oriented state.
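
    The event-driven scheme interleaves two kinds of events: exponentially distributed stochastic changes of microtubule state and deterministically computed collision times of growing tips, executing whichever comes first. A generic illustration of that selection step (rates and names are hypothetical):

        import random

        def next_event(rates, t_collision):
            """Sample one exponential waiting time per stochastic channel and compare
            with the deterministic time-to-collision; the earliest event wins."""
            stochastic = [(random.expovariate(rate), name) for name, rate in rates.items()]
            return min(stochastic + [(t_collision, "collision")])

        random.seed(1)
        print(next_event({"catastrophe": 0.05, "rescue": 0.02}, t_collision=8.0))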

  9. A new concept for simulation of vegetated land surface dynamics - Part 1: The event driven phenology model

    NASA Astrophysics Data System (ADS)

    Kovalskyy, V.; Henebry, G. M.

    2012-01-01

    Phenologies of the vegetated land surface are being used increasingly for diagnosis and prognosis of climate change consequences. Current prospective and retrospective phenological models stand far apart in their approaches to the subject. We report on an exploratory attempt to implement a phenological model based on a new event driven concept which has both diagnostic and prognostic capabilities in the same modeling framework. This Event Driven Phenological Model (EDPM) is shown to simulate land surface phenologies and phenophase transition dates in agricultural landscapes based on assimilation of weather data and land surface observations from spaceborne sensors. The model enables growing season phenologies to develop in response to changing environmental conditions and disturbance events. It also has the ability to ingest remotely sensed data to adjust its output to improve representation of the modeled variable. We describe the model and report results of initial testing of the EDPM using Level 2 flux tower records from the Ameriflux sites at Mead, Nebraska, USA, and at Bondville, Illinois, USA. Simulating the dynamics of normalized difference vegetation index based on flux tower data, the predictions by the EDPM show good agreement (RMSE < 0.08; r2 > 0.8) for maize and soybean during several growing seasons at different locations. This study presents the EDPM used in the companion paper (Kovalskyy and Henebry, 2011) in a coupling scheme to estimate daily actual evapotranspiration over multiple growing seasons.

  10. A new concept for simulation of vegetated land surface dynamics - Part 1: The event driven phenology model

    NASA Astrophysics Data System (ADS)

    Kovalskyy, V.; Henebry, G. M.

    2011-05-01

    Phenologies of the vegetated land surface are being used increasingly for diagnosis and prognosis of climate change consequences. Current prospective and retrospective phenological models stand far apart in their approaches to the subject. We report on an exploratory attempt to implement a phenological model based on a new event driven concept which has both diagnostic and prognostic capabilities in the same modeling framework. This Event Driven Phenological Model (EDPM) is shown to simulate land surface phenologies and phenophase transition dates in agricultural landscapes based on assimilation of weather data and land surface observations from spaceborne sensors. The model enables growing season phenologies to develop in response to changing environmental conditions and disturbance events. It also has the ability to ingest remotely sensed data to adjust its output to improve representation of the modeled variable. We describe the model and report results of initial testing of the EDPM using Level 2 flux tower records from the Ameriflux sites at Mead, Nebraska, USA, and at Bondville, Illinois, USA. Simulating the dynamics of normalized difference vegetation index based on flux tower data, the predictions by the EDPM show good agreement (RMSE < 0.08; r2>0.8) for maize and soybean during several growing seasons at different locations. This study presents the EDPM used in the companion paper (Kovalskyy and Henebry, 2011) in a coupling scheme to estimate daily actual evapotranspiration over multiple growing seasons.

  11. A combined Event-Driven/Time-Driven molecular dynamics algorithm for the simulation of shock waves in rarefied gases

    SciTech Connect

    Valentini, Paolo; Schwartzentruber, Thomas E.

    2009-12-10

    A novel combined Event-Driven/Time-Driven (ED/TD) algorithm to speed up the Molecular Dynamics simulation of rarefied gases using realistic spherically symmetric soft potentials is presented. Due to the low density regime, the proposed method correctly identifies the time that must elapse before the next interaction occurs, similarly to Event-Driven Molecular Dynamics. However, each interaction is treated using Time-Driven Molecular Dynamics, thereby integrating Newton's Second Law using the sufficiently small time step needed to correctly resolve the atomic motion. Although infrequent, many-body interactions are also accounted for with a small approximation. The combined ED/TD method is shown to correctly reproduce translational relaxation in argon, described using the Lennard-Jones potential. For densities between ρ = 10^-4 kg/m^3 and ρ = 10^-1 kg/m^3, comparisons with kinetic theory, Direct Simulation Monte Carlo, and pure Time-Driven Molecular Dynamics demonstrate that the ED/TD algorithm correctly reproduces the proper collision rates and the evolution toward thermal equilibrium. Finally, the combined ED/TD algorithm is applied to the simulation of a Mach 9 shock wave in rarefied argon. Density and temperature profiles as well as molecular velocity distributions accurately match DSMC results, and the shock thickness is within the experimental uncertainty. For the problems considered, the ED/TD algorithm ranged from several hundred to several thousand times faster than conventional Time-Driven MD. Moreover, the force calculation to integrate the molecular trajectories is found to contribute a negligible amount to the overall ED/TD simulation time. Therefore, this method could pave the way for the application of much more refined and expensive interatomic potentials, either classical or first-principles, to Molecular Dynamics simulations of shock waves in rarefied gases, involving vibrational nonequilibrium and chemical reactivity.

  12. Asynchronous Event-Driven Particle Algorithms

    SciTech Connect

    Donev, A

    2007-02-28

    We present in a unifying way the main components of three examples of asynchronous event-driven algorithms for simulating physical systems of interacting particles. The first example, hard-particle molecular dynamics (MD), is well-known. We also present a recently-developed diffusion kinetic Monte Carlo (DKMC) algorithm, as well as a novel event-driven algorithm for Direct Simulation Monte Carlo (DSMC). Finally, we describe how to combine MD with DSMC in an event-driven framework, and discuss some promises and challenges for event-driven simulation of realistic physical systems.

  13. Anticipating the Chaotic Behaviour of Industrial Systems Based on Stochastic, Event-Driven Simulations

    NASA Astrophysics Data System (ADS)

    Bruzzone, Agostino G.; Revetria, Roberto; Simeoni, Simone; Viazzo, Simone; Orsoni, Alessandra

    2004-08-01

    In logistics and industrial production, managers must deal with the impact of stochastic events to improve performance and reduce costs. In fact, production and logistics systems are generally designed considering some parameters as deterministically distributed. While this assumption is mostly used for preliminary prototyping, it is sometimes also retained during the final design stage, and especially for estimated parameters (i.e. Market Request). The proposed methodology can determine the impact of stochastic events in the system by evaluating the chaotic threshold level. Such an approach, based on the application of a new and innovative methodology, can be implemented to find the condition under which chaos makes the system become uncontrollable. Starting from problem identification and risk assessment, several classification techniques are used to carry out an effect analysis and contingency plan estimation. In this paper the authors illustrate the methodology with respect to a real industrial case: a production problem related to the logistics of distributed chemical processing.

  14. Asynchronous Event-Driven Particle Algorithms

    SciTech Connect

    Donev, A

    2007-08-30

    We present, in a unifying way, the main components of three asynchronous event-driven algorithms for simulating physical systems of interacting particles. The first example, hard-particle molecular dynamics (MD), is well-known. We also present a recently-developed diffusion kinetic Monte Carlo (DKMC) algorithm, as well as a novel stochastic molecular-dynamics algorithm that builds on the Direct Simulation Monte Carlo (DSMC). We explain how to effectively combine event-driven and classical time-driven handling, and discuss some promises and challenges for event-driven simulation of realistic physical systems.

  15. Stochastic Event-Driven Molecular Dynamics

    SciTech Connect

    Donev, Aleksandar; Garcia, Alejandro L.; Alder, Berni J.

    2008-02-01

    A novel Stochastic Event-Driven Molecular Dynamics (SEDMD) algorithm is developed for the simulation of polymer chains suspended in a solvent. SEDMD combines event-driven molecular dynamics (EDMD) with the Direct Simulation Monte Carlo (DSMC) method. The polymers are represented as chains of hard spheres tethered by square wells and interact with the solvent particles with hard-core potentials. The algorithm uses EDMD for the simulation of the polymer chain and the interactions between the chain beads and the surrounding solvent particles. The interactions between the solvent particles themselves are not treated deterministically as in EDMD; rather, the momentum and energy exchange in the solvent is determined stochastically using DSMC. The coupling between the solvent and the solute is consistently represented at the particle level, retaining hydrodynamic interactions and thermodynamic fluctuations. However, unlike full MD simulations of both the solvent and the solute, in SEDMD the spatial structure of the solvent is ignored. The SEDMD algorithm is described in detail and applied to the study of the dynamics of a polymer chain tethered to a hard wall subjected to uniform shear. SEDMD closely reproduces results obtained using traditional EDMD simulations with two orders of magnitude greater efficiency. Results question the existence of periodic (cycling) motion of the polymer chain.

  16. A General and Efficient Method for Incorporating Precise Spike Times in Globally Time-Driven Simulations

    PubMed Central

    Hanuschkin, Alexander; Kunkel, Susanne; Helias, Moritz; Morrison, Abigail; Diesmann, Markus

    2010-01-01

    Traditionally, event-driven simulations have been limited to the very restricted class of neuronal models for which the timing of future spikes can be expressed in closed form. Recently, the class of models that is amenable to event-driven simulation has been extended by the development of techniques to accurately calculate firing times for some integrate-and-fire neuron models that do not enable the prediction of future spikes in closed form. The motivation of this development is the general perception that time-driven simulations are imprecise. Here, we demonstrate that a globally time-driven scheme can calculate firing times that cannot be discriminated from those calculated by an event-driven implementation of the same model; moreover, the time-driven scheme incurs lower computational costs. The key insight is that time-driven methods are based on identifying a threshold crossing in the recent past, which can be implemented by a much simpler algorithm than the techniques for predicting future threshold crossings that are necessary for event-driven approaches. As run time is dominated by the cost of the operations performed at each incoming spike, which includes spike prediction in the case of event-driven simulation and retrospective detection in the case of time-driven simulation, the simple time-driven algorithm outperforms the event-driven approaches. Additionally, our method is generally applicable to all commonly used integrate-and-fire neuronal models; we show that a non-linear model employing a standard adaptive solver can reproduce a reference spike train with a high degree of precision. PMID:21031031
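
    The retrospective detection described above — find the threshold crossing inside the step that has already been taken and locate it by interpolation — can be illustrated with a minimal leaky integrate-and-fire sketch (forward Euler, linear interpolation; all parameters are illustrative):

        def lif_step(V, I, t, dt, tau=10.0, V_th=1.0, V_reset=0.0):
            """One time-driven step followed by retrospective threshold detection:
            if the step crossed threshold, the spike time is interpolated within it."""
            V_new = V + dt * (-V / tau + I)
            if V_new >= V_th:                           # crossing lies somewhere in (t, t+dt]
                t_spike = t + dt * (V_th - V) / (V_new - V)
                return V_reset, t_spike
            return V_new, None

        V, t, spikes = 0.0, 0.0, []
        for _ in range(200):
            V, t_spike = lif_step(V, I=0.2, t=t, dt=0.5)
            if t_spike is not None:
                spikes.append(t_spike)
            t += 0.5
        print(spikes[:3])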

  17. Event-Driven Process Chains (EPC)

    NASA Astrophysics Data System (ADS)

    Mendling, Jan

    This chapter provides a comprehensive overview of Event-driven Process Chains (EPCs) and introduces a novel definition of EPC semantics. EPCs became popular in the 1990s as a conceptual business process modeling language in the context of reference modeling. Reference modeling refers to the documentation of generic business operations in a model such as service processes in the telecommunications sector, for example. It is claimed that reference models can be reused and adapted as best-practice recommendations in individual companies (see [230, 168, 229, 131, 400, 401, 446, 127, 362, 126]). The roots of reference modeling can be traced back to the Kölner Integrationsmodell (KIM) [146, 147] that was developed in the 1960s and 1970s. In the 1990s, the Institute of Information Systems (IWi) in Saarbrücken worked on a project with SAP to define a suitable business process modeling language to document the processes of the SAP R/3 enterprise resource planning system. There were two results from this joint effort: the definition of EPCs [210] and the documentation of the SAP system in the SAP Reference Model (see [92, 211]). The extensive database of this reference model contains almost 10,000 sub-models: 604 of them non-trivial EPC business process models. The SAP Reference model had a huge impact with several researchers referring to it in their publications (see [473, 235, 127, 362, 281, 427, 415]) as well as motivating the creation of EPC reference models in further domains including computer integrated manufacturing [377, 379], logistics [229] or retail [52]. The wide-spread application of EPCs in business process modeling theory and practice is supported by their coverage in seminal text books for business process management and information systems in general (see [378, 380, 49, 384, 167, 240]). EPCs are frequently used in practice due to a high user acceptance [376] and extensive tool support. Some examples of tools that support EPCs are ARIS Toolset by IDS

  18. The three-dimensional Event-Driven Graphics Environment (3D-EDGE)

    NASA Technical Reports Server (NTRS)

    Freedman, Jeffrey; Hahn, Roger; Schwartz, David M.

    1993-01-01

    Stanford Telecom developed the Three-Dimensional Event-Driven Graphics Environment (3D-EDGE) for NASA GSFC's (GSFC) Communications Link Analysis and Simulation System (CLASS). 3D-EDGE consists of a library of object-oriented subroutines which allow engineers with little or no computer graphics experience to programmatically manipulate, render, animate, and access complex three-dimensional objects.

  19. Feasibility study for a generalized gate logic software simulator

    NASA Technical Reports Server (NTRS)

    Mcgough, J. G.

    1983-01-01

    Unit-delay simulation, event-driven simulation, zero-delay simulation, simulation techniques, 2-valued versus multivalued logic, network initialization, gate operations and alternate network representations, parallel versus serial mode simulation, fault modelling, extension of multiprocessor systems, and simulation timing are discussed. Functional level networks, gate equivalent circuits, the prototype BDX-930 network model, fault models, and identifying detected faults for BGLOSS are discussed. Preprocessor tasks, postprocessor tasks, executive tasks, and a library of BLISS-coded macros for GGLOSS are also discussed.

  20. An Event Driven Phenology Model: Results from initial testing

    NASA Astrophysics Data System (ADS)

    Kovalskyy, V.; Henebry, G.

    2008-12-01

    A new model was developed to simulate land surface phenology and seasonality dynamics based on assimilation of weather data and operational land surface observations. The model was built to establish a much needed interface between boundary layer dynamics represented in weather models and surface observations and the temporal variability captured in series of remotely sensed images. The Event Driven Phenology Model (EDPM) takes advantage of both ground-based weather and climate records and spaceborne sensors to provide retrospective and predictive power. The new model enables phenologies to unfold in response to changing environmental conditions and disturbance events. It also has the ability to ingest contemporaneous discrete external records of land surface dynamics to adjust its output to achieve a better representation of the observed process. The EDPM presents an alternative to contemporary methods such as retrospective curve-fitting (either to time or to a temporal proxy, such as thermal time), long term averages (or climatologies), and phenologies from look-up tables based on land use/land cover or plant functional types. We describe the model and report results of initial testing of the EDPM using level 1 flux tower records from the Mead, Nebraska Ameriflux sites in conjunction with associated MODIS subsets from the Carbon DAAC at ORNL. We assess the EDPM predictions by comparing and contrasting the results with reserved ground records and with outcomes of other phenology models. Finally, we point out prospects for future use of descriptive and prescriptive EDPM capabilities in the work of climate models, production of continuous remote sensing records, and other scientific applications.

  1. Two-ball problem revisited: Limitations of event-driven modeling

    NASA Astrophysics Data System (ADS)

    Müller, Patric; Pöschel, Thorsten

    2011-04-01

    The main precondition of simulating systems of hard particles by means of event-driven modeling is the assumption of instantaneous collisions. The aim of this paper is to quantify the deviation of event-driven modeling from the solution of Newton’s equation of motion using a paradigmatic example: If a tennis ball is held above a basketball with their centers vertically aligned, and the balls are released to collide with the floor, the tennis ball may rebound at a surprisingly high speed. We show in this article that the simple textbook explanation of this effect is an oversimplification, even for the limit of perfectly elastic particles. Instead, there may occur a rather complex scenario including multiple collisions which may lead to a very different final velocity as compared with the velocity resulting from the oversimplified model.
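
    The textbook (event-driven) idealization that the paper examines treats the outcome as two instantaneous elastic collisions in sequence: basketball with the floor, then tennis ball with the upward-moving basketball. The small sketch below reproduces that oversimplified calculation (masses are illustrative values, not from the paper):

        def elastic_collision(m1, v1, m2, v2):
            """Post-collision velocities of two point masses in a 1-D elastic collision."""
            u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
            u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
            return u1, u2

        v = 1.0                  # common speed just before the floor (downward = -v)
        M, m = 0.6, 0.058        # basketball and tennis ball masses in kg (illustrative)
        vb = +v                                    # basketball rebounds elastically off the floor
        vt, vb = elastic_collision(m, -v, M, vb)   # tennis ball then hits the rising basketball
        print(vt)                # about 2.65*v; the ideal m/M -> 0 limit gives 3*v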

  2. General Data Simulation Program.

    ERIC Educational Resources Information Center

    Burns, Edward

    Described is a computer program written in FORTRAN IV which offers considerable flexibility in generating simulated data pertinent to education and educational psychology. The user is allowed to specify the number of samples, data sets, and variables, together with the population means, standard deviations and intercorrelations. In addition the…

  3. Event management for large scale event-driven digital hardware spiking neural networks.

    PubMed

    Caron, Louis-Charles; D'Haene, Michiel; Mailhot, Frédéric; Schrauwen, Benjamin; Rouat, Jean

    2013-09-01

    The interest in brain-like computation has led to the design of a plethora of innovative neuromorphic systems. Individually, spiking neural networks (SNNs), event-driven simulation and digital hardware neuromorphic systems get a lot of attention. Despite the popularity of event-driven SNNs in software, very few digital hardware architectures are found. This is because existing hardware solutions for event management scale badly with the number of events. This paper introduces the structured heap queue, a pipelined digital hardware data structure, and demonstrates its suitability for event management. The structured heap queue scales gracefully with the number of events, allowing the efficient implementation of large scale digital hardware event-driven SNNs. The scaling is linear for memory, logarithmic for logic resources and constant for processing time. The use of the structured heap queue is demonstrated on a field-programmable gate array (FPGA) with an image segmentation experiment and a SNN of 65,536 neurons and 513,184 synapses. Events can be processed at the rate of 1 every 7 clock cycles and a 406×158 pixel image is segmented in 200 ms. PMID:23522624
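
    The structured heap queue is a pipelined hardware realization of the same abstract data structure used by software event-driven SNN simulators: a priority queue keyed on spike delivery time, with logarithmic insertion and constant-time access to the earliest event. For orientation, a plain software analogue (not the hardware design):

        import heapq

        class EventQueue:
            """Software analogue of a spike event queue: push is O(log n),
            the earliest pending event is always at the front."""
            def __init__(self):
                self._heap = []
            def push(self, time, neuron_id):
                heapq.heappush(self._heap, (time, neuron_id))
            def pop_next(self):
                return heapq.heappop(self._heap)        # earliest spike event

        q = EventQueue()
        q.push(3.2, 17); q.push(1.5, 4); q.push(2.8, 9)
        print(q.pop_next())   # (1.5, 4) is delivered first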

  4. A tutorial on creating logfiles for event-driven applications.

    PubMed

    Breinholt, G; Krueger, H

    1999-08-01

    This paper describes the practical steps necessary to write logfiles for recording user actions in event-driven applications. Data logging has long been used as a reliable method to record all user actions, whether assessing new software or running a behavioral experiment. With the widespread introduction of event-driven software, the logfile must enable accurate recording of all the user's actions, whether with the keyboard or another input device. Logging is only an effective tool when it can accurately and consistently record all actions in a format that aids the extraction of useful information from the mass of data collected. Logfiles are often presented as one of many methods that could be used, and here a technique is proposed for the construction of logfiles for the quantitative assessment of software from the user's point of view. PMID:10502862
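
    In practice, such a logfile is simply one timestamped, machine-parseable record per user event. A minimal illustration (the field layout is our own, not the paper's):

        import time

        def log_event(logfile, event_type, widget, detail=""):
            """Append one tab-separated record: timestamp, event type, target widget, detail."""
            with open(logfile, "a") as f:
                f.write(f"{time.time():.3f}\t{event_type}\t{widget}\t{detail}\n")

        log_event("session.log", "click", "OK-button")
        log_event("session.log", "keypress", "search-field", "q")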

  5. Event-Driven Cyberinfrastructure Technologies Supporting the Disaster Life Cycle

    NASA Astrophysics Data System (ADS)

    Graves, S. J.; Maskey, M.; Keiser, K.

    2014-12-01

    The cyberinfrastructure components to be discussed include Event-Driven Data Delivery (ED3) and Data Albums. These are complementary technologies that combine to provide comprehensive access to timely and relevant data for disaster events. ED3 provides a cyber framework that allows situational awareness and decision systems to prepare data plans consisting of data access, generation, workflows, etc., that meet the users' data needs in the event of a future disaster event. Data Albums provides a resulting container of relevant data and functionality for an overall information package for a specific event. The combination of these technologies provides useful data capabilities as part of the disaster life cycle.

  6. Intelligent fuzzy controller for event-driven real time systems

    NASA Technical Reports Server (NTRS)

    Grantner, Janos; Patyra, Marek; Stachowicz, Marian S.

    1992-01-01

    Most of the known linguistic models are essentially static; that is, time is not a parameter in describing the behavior of the object's model. In this paper we show a model for synchronous finite state machines based on fuzzy logic. Such finite state machines can be used to build both event-driven, time-varying, rule-based systems and the control unit section of a fuzzy logic computer. The architecture of a pipelined intelligent fuzzy controller is presented, and the linguistic model is represented by an overall fuzzy relation stored in a single rule memory. A VLSI integrated circuit implementation of the fuzzy controller is suggested. At a clock rate of 30 MHz, the controller can perform 3 MFLIPS on multi-dimensional fuzzy data.

  7. Multirate and event-driven Kalman filters for helicopter flight

    NASA Technical Reports Server (NTRS)

    Sridhar, Banavar; Smith, Phillip; Suorsa, Raymond E.; Hussien, Bassam

    1993-01-01

    A vision-based obstacle detection system that provides information about objects as a function of azimuth and elevation is discussed. The range map is computed using a sequence of images from a passive sensor, and an extended Kalman filter is used to estimate range to obstacles. The magnitude of the optical flow that provides measurements for each Kalman filter varies significantly over the image depending on the helicopter motion and object location. In a standard Kalman filter, the measurement update takes place at fixed intervals. It may be necessary to use a different measurement update rate in different parts of the image in order to maintain the same signal to noise ratio in the optical flow calculations. A range estimation scheme that accepts the measurement only under certain conditions is presented. The estimation results from the standard Kalman filter are compared with results from a multirate Kalman filter and an event-driven Kalman filter for a sequence of helicopter flight images.
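
    In an event-driven Kalman filter of this kind, the time update runs every frame but the measurement update is applied only when the measurement passes an acceptance test, for example a sufficiently large optical-flow magnitude. A scalar sketch under those assumptions (models and thresholds are hypothetical, not the paper's):

        def predict(x, P, q):
            """Time update for a scalar constant-state model: state kept, variance grows."""
            return x, P + q

        def update(x, P, z, r):
            """Standard scalar Kalman measurement update."""
            K = P / (P + r)
            return x + K * (z - x), (1.0 - K) * P

        def event_driven_step(x, P, z, flow_mag, q=0.01, r=0.5, flow_min=0.2):
            x, P = predict(x, P, q)
            if flow_mag >= flow_min:       # accept the measurement only when the
                x, P = update(x, P, z, r)  # optical flow is informative enough
            return x, P

        x, P = 10.0, 1.0
        x, P = event_driven_step(x, P, z=9.0, flow_mag=0.05)   # skipped: predict only
        x, P = event_driven_step(x, P, z=9.0, flow_mag=0.9)    # accepted: full update
        print(x, P)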

  8. A Full Parallel Event Driven Readout Technique for Area Array SPAD FLIM Image Sensors

    PubMed Central

    Nie, Kaiming; Wang, Xinlei; Qiao, Jun; Xu, Jiangtao

    2016-01-01

    This paper presents a full parallel event driven readout method which is implemented in an area array single-photon avalanche diode (SPAD) image sensor for high-speed fluorescence lifetime imaging microscopy (FLIM). The sensor only records and reads out effective time and position information by adopting full parallel event driven readout method, aiming at reducing the amount of data. The image sensor includes four 8 × 8 pixel arrays. In each array, four time-to-digital converters (TDCs) are used to quantize the time of photons’ arrival, and two address record modules are used to record the column and row information. In this work, Monte Carlo simulations were performed in Matlab in terms of the pile-up effect induced by the readout method. The sensor’s resolution is 16 × 16. The time resolution of TDCs is 97.6 ps and the quantization range is 100 ns. The readout frame rate is 10 Mfps, and the maximum imaging frame rate is 100 fps. The chip’s output bandwidth is 720 MHz with an average power of 15 mW. The lifetime resolvability range is 5–20 ns, and the average error of estimated fluorescence lifetimes is below 1% by employing CMM to estimate lifetimes. PMID:26828490

  9. A Full Parallel Event Driven Readout Technique for Area Array SPAD FLIM Image Sensors.

    PubMed

    Nie, Kaiming; Wang, Xinlei; Qiao, Jun; Xu, Jiangtao

    2016-01-01

    This paper presents a full parallel event driven readout method which is implemented in an area array single-photon avalanche diode (SPAD) image sensor for high-speed fluorescence lifetime imaging microscopy (FLIM). The sensor only records and reads out effective time and position information by adopting full parallel event driven readout method, aiming at reducing the amount of data. The image sensor includes four 8 × 8 pixel arrays. In each array, four time-to-digital converters (TDCs) are used to quantize the time of photons' arrival, and two address record modules are used to record the column and row information. In this work, Monte Carlo simulations were performed in Matlab in terms of the pile-up effect induced by the readout method. The sensor's resolution is 16 × 16. The time resolution of TDCs is 97.6 ps and the quantization range is 100 ns. The readout frame rate is 10 Mfps, and the maximum imaging frame rate is 100 fps. The chip's output bandwidth is 720 MHz with an average power of 15 mW. The lifetime resolvability range is 5-20 ns, and the average error of estimated fluorescence lifetimes is below 1% by employing CMM to estimate lifetimes. PMID:26828490

  10. Event-Driven Random-Access-Windowing CCD Imaging System

    NASA Technical Reports Server (NTRS)

    Monacos, Steve; Portillo, Angel; Ortiz, Gerardo; Alexander, James; Lam, Raymond; Liu, William

    2004-01-01

    A charge-coupled-device (CCD) based high-speed imaging system, called a real-time, event-driven (RARE) camera, is undergoing development. This camera is capable of readout from multiple subwindows [also known as regions of interest (ROIs)] within the CCD field of view. Both the sizes and the locations of the ROIs can be controlled in real time and can be changed at the camera frame rate. The predecessor of this camera was described in High-Frame-Rate CCD Camera Having Subwindow Capability (NPO-30564), NASA Tech Briefs, Vol. 26, No. 12 (December 2002), page 26. The architecture of the prior camera requires tight coupling between camera control logic and an external host computer that provides commands for camera operation and processes pixels from the camera. This tight coupling limits the attainable frame rate and functionality of the camera. The design of the present camera loosens this coupling to increase the achievable frame rate and functionality. From a host computer perspective, the readout operation in the prior camera was defined on a per-line basis; in this camera, it is defined on a per-ROI basis. In addition, the camera includes internal timing circuitry. This combination of features enables real-time, event-driven operation for adaptive control of the camera. Hence, this camera is well suited for applications requiring autonomous control of multiple ROIs to track multiple targets moving throughout the CCD field of view. Additionally, by eliminating the need for control intervention by the host computer during the pixel readout, the present design reduces ROI-readout times to attain higher frame rates. This camera (see figure) includes an imager card consisting of a commercial CCD imager and two signal-processor chips. The imager card converts transistor/transistor-logic (TTL)-level signals from a field programmable gate array (FPGA) controller card. These signals are transmitted to the imager card via a low-voltage differential signaling (LVDS) cable

  11. General Purpose Heat Source Simulator

    NASA Technical Reports Server (NTRS)

    Emrich, William J., Jr.

    2008-01-01

    The General Purpose Heat Source (GPHS) project seeks to combine the development of an electrically heated, single GPHS module simulator with the evaluation of potential nuclear surface power systems. The simulator is designed to match the form, fit, and function of actual GPHS modules which normally generate heat through the radioactive decay of Pu238. The use of electrically heated modules rather than modules containing Pu238 facilitates the testing of the subsystems and systems without sacrificing the quantity and quality of the test data gathered. Current GPHS activities are centered on developing robust heater designs with sizes and weights which closely match those of actual Pu238 fueled GPHS blocks. Designs are being pursued which will allow operation up to 1100 C.

  12. Event Driven Messaging with Role-Based Subscriptions

    NASA Technical Reports Server (NTRS)

    Bui, Tung; Bui, Bach; Malhotra, Shantanu; Chen, Fannie; Kim, Rachel; Allen, Christopher; Luong, Ivy; Chang, George; Zendejas, Silvino; Sadaqathulla, Syed

    2009-01-01

    Event Driven Messaging with Role-Based Subscriptions (EDM-RBS) is a framework integrated into the Service Management Database (SMDB) to allow for role-based and subscription-based delivery of synchronous and asynchronous messages over JMS (Java Messaging Service), SMTP (Simple Mail Transfer Protocol), or SMS (Short Messaging Service). This allows for 24/7 operation with users in all parts of the world. The software classifies messages by triggering data type, application source, owner of data triggering event (mission), classification, sub-classification and various other secondary classifying tags. Messages are routed to applications or users based on subscription rules using a combination of the above message attributes. This program provides a framework for identifying connected users and their applications for targeted delivery of messages over JMS to the client applications the user is logged into. EDM-RBS provides the ability to send notifications over e-mail or pager rather than having to rely on a live human to do it. It is implemented as an Oracle application that uses Oracle relational database management system intrinsic functions. It is configurable to use the Oracle AQ JMS API or an external JMS provider for messaging. It fully integrates into the event-logging framework of SMDB (Subnet Management Database).
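
    The routing step described — matching message attributes against subscription rules and fanning out on the subscriber's preferred channel — can be illustrated with a small sketch; the attribute names and channels below are hypothetical, not the SMDB schema:

        from dataclasses import dataclass, field

        @dataclass
        class Subscription:
            role: str                                   # e.g. "mission-ops", "dba"
            channel: str                                # "jms", "smtp" or "sms"
            filters: dict = field(default_factory=dict) # required attribute values

        def route(message, subscriptions):
            """Return (role, channel) pairs whose filters all match the message attributes."""
            return [(s.role, s.channel) for s in subscriptions
                    if all(message.get(k) == v for k, v in s.filters.items())]

        subs = [Subscription("mission-ops", "sms", {"classification": "alarm"}),
                Subscription("dba", "smtp", {"source": "SMDB"})]
        msg = {"classification": "alarm", "source": "SMDB", "mission": "MRO"}
        print(route(msg, subs))   # both subscriptions match this message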

  13. Event-driven contrastive divergence for spiking neuromorphic systems

    PubMed Central

    Neftci, Emre; Das, Srinjoy; Pedroni, Bruno; Kreutz-Delgado, Kenneth; Cauwenberghs, Gert

    2014-01-01

    Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been demonstrated to perform efficiently in a variety of applications, such as dimensionality reduction, feature learning, and classification. Their implementation on neuromorphic hardware platforms emulating large-scale networks of spiking neurons can have significant advantages from the perspectives of scalability, power dissipation and real-time interfacing with the environment. However, the traditional RBM architecture and the commonly used training algorithm known as Contrastive Divergence (CD) are based on discrete updates and exact arithmetic, which do not directly map onto a dynamical neural substrate. Here, we present an event-driven variation of CD to train an RBM constructed with Integrate & Fire (I&F) neurons that is constrained by the limitations of existing and near-future neuromorphic hardware platforms. Our strategy is based on neural sampling, which allows us to synthesize a spiking neural network that samples from a target Boltzmann distribution. The recurrent activity of the network replaces the discrete steps of the CD algorithm, while Spike Time Dependent Plasticity (STDP) carries out the weight updates in an online, asynchronous fashion. We demonstrate our approach by training an RBM composed of leaky I&F neurons with STDP synapses to learn a generative model of the MNIST hand-written digit dataset, and by testing it in recognition, generation and cue integration tasks. Our results contribute to a machine learning-driven approach for synthesizing networks of spiking neurons capable of carrying out practical, high-level functionality. PMID:24574952

  14. A hybrid adaptive routing algorithm for event-driven wireless sensor networks.

    PubMed

    Figueiredo, Carlos M S; Nakamura, Eduardo F; Loureiro, Antonio A F

    2009-01-01

    Routing is a basic function in wireless sensor networks (WSNs). For these networks, routing algorithms depend on the characteristics of the applications and, consequently, there is no self-contained algorithm suitable for every case. In some scenarios, the network behavior (traffic load) may vary a lot, such as an event-driven application, favoring different algorithms at different instants. This work presents a hybrid and adaptive algorithm for routing in WSNs, called Multi-MAF, that adapts its behavior autonomously in response to the variation of network conditions. In particular, the proposed algorithm applies both reactive and proactive strategies for routing infrastructure creation, and uses an event-detection estimation model to change between the strategies and save energy. To show the advantages of the proposed approach, it is evaluated through simulations. Comparisons with independent reactive and proactive algorithms show improvements on energy consumption. PMID:22423207

  15. Multiagent Attitude Control System for Satellites Based in Momentum Wheels and Event-Driven Synchronization

    NASA Astrophysics Data System (ADS)

    Garcia, Juan L.; Moreno, Jose Sanchez

    2012-12-01

    Attitude control is a requirement always present in spacecraft design. Several kinds of actuators exist to accomplish this control, momentum wheels being among the most widely employed. Satellites usually carry redundant momentum wheels to handle any possible single failure, but the controller remains a single centralized element, posing problems in case of failure. In this work a decentralized, agent-based, event-driven algorithm for attitude control is presented as a possible solution. Several agents based on momentum wheels interact among themselves to accomplish control of the satellite. A simulation environment has been developed to analyze the behavior of this architecture. This environment has been made available through the web page http://www.dia.uned.es.

  16. General Purpose Heat Source Simulator

    NASA Technical Reports Server (NTRS)

    Emrich, Bill

    2008-01-01

    The General Purpose Heat Source (GPHS) simulator project is designed to replicate, through the use of electrical heaters, the form, fit, and function of actual GPHS modules which generate heat through the radioactive decay of Pu238. The use of electrically heated modules rather than modules containing Pu238 facilitates the testing of spacecraft subsystems and systems without sacrificing the quantity and quality of the test data gathered. Previous GPHS activities were centered on developing robust heater designs with sizes and weights that closely matched those of actual Pu238 fueled GPHS blocks. These efforts were successful, although their maximum temperature capabilities were limited to around 850 C. New designs are being pursued which also replicate the sizes and weights of actual Pu238 fueled GPHS blocks but will allow operation up to 1100 C.

  17. Generalized Fluid System Simulation Program

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok Kumar (Inventor); Bailey, John W. (Inventor); Schallhorn, Paul Alan (Inventor); Steadman, Todd E. (Inventor)

    2004-01-01

    A general purpose program implemented on a computer analyzes steady state and transient flow in a complex fluid network, modeling phase changes, compressibility, mixture thermodynamics and external body forces such as gravity and centrifugal force. A preprocessor provides for the interactive development of a fluid network simulation having nodes and branches. Mass, energy, and species conservation equations are solved at the nodes, and momentum conservation equations are solved in the branches. Contained herein are subroutines for computing "real fluid" thermodynamic and thermophysical properties for 12 fluids, and a number of different source options are provided for modeling momentum sources or sinks in the branches. The system of equations describing the fluid network is solved by a hybrid numerical method that is a combination of the Newton-Raphson and successive substitution methods. Application and verification of this invention are provided through an example problem, which demonstrates that the predictions of the present invention compare most reasonably with test data.

  18. General Reactive Atomistic Simulation Program

    Energy Science and Technology Software Center (ESTSC)

    2004-09-22

    GRASP (General Reactive Atomistic Simulation Program) is primarily intended as a molecular dynamics package for complex force fields. The code is designed to provide good performance for large systems, either in parallel or serial execution mode. The primary purpose of the code is to realistically represent the structural and dynamic properties of a large number of atoms on timescales ranging from picoseconds up to a microsecond. Typically the atoms form a representative sample of some material, such as an interface between polycrystalline silicon and amorphous silica. GRASP differs from other parallel molecular dynamics codes primarily due to its ability to handle relatively complicated interaction potentials and its ability to use more than one interaction potential in a single simulation. Most of the computational effort goes into the calculation of interatomic forces, which depend in a complicated way on the positions of all the atoms. The forces are used to integrate the equations of motion forward in time using the so-called velocity Verlet integration scheme. Alternatively, the forces can be used to find a minimum energy configuration, in which case a modified steepest descent algorithm is used.
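
    The velocity Verlet scheme named above advances positions and velocities with a half-step velocity update on each side of the force evaluation. A generic single-particle sketch (not GRASP code):

        import numpy as np

        def velocity_verlet(x, v, force, mass, dt, n_steps):
            """Velocity Verlet: half-kick, drift, recompute the force, half-kick."""
            f = force(x)
            for _ in range(n_steps):
                v = v + 0.5 * dt * f / mass
                x = x + dt * v
                f = force(x)
                v = v + 0.5 * dt * f / mass
            return x, v

        # usage: a harmonic oscillator with unit frequency (period 2*pi)
        x, v = velocity_verlet(np.array([1.0]), np.array([0.0]),
                               force=lambda x: -x, mass=1.0, dt=0.01, n_steps=628)
        print(x, v)   # close to the initial state after one period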

  19. Mapping from frame-driven to frame-free event-driven vision systems by low-rate rate coding and coincidence processing--application to feedforward ConvNets.

    PubMed

    Pérez-Carrasco, José Antonio; Zhao, Bo; Serrano, Carmen; Acha, Begoña; Serrano-Gotarredona, Teresa; Chen, Shouchun; Linares-Barranco, Bernabé

    2013-11-01

    Event-driven visual sensors have attracted interest from a number of different research communities. They provide visual information in quite a different way from conventional video systems consisting of sequences of still images rendered at a given "frame rate." Event-driven vision sensors take inspiration from biology. Each pixel sends out an event (spike) when it senses something meaningful is happening, without any notion of a frame. A special type of event-driven sensor is the so-called dynamic vision sensor (DVS) where each pixel computes relative changes of light or "temporal contrast." The sensor output consists of a continuous flow of pixel events that represent the moving objects in the scene. Pixel events become available with microsecond delays with respect to "reality." These events can be processed "as they flow" by a cascade of event (convolution) processors. As a result, input and output event flows are practically coincident in time, and objects can be recognized as soon as the sensor provides enough meaningful events. In this paper, we present a methodology for mapping from a properly trained neural network in a conventional frame-driven representation to an event-driven representation. The method is illustrated by studying event-driven convolutional neural networks (ConvNet) trained to recognize rotating human silhouettes or high speed poker card symbols. The event-driven ConvNet is fed with recordings obtained from a real DVS camera. The event-driven ConvNet is simulated with a dedicated event-driven simulator and consists of a number of event-driven processing modules, the characteristics of which are obtained from individually manufactured hardware modules. PMID:24051730

  20. Mapping from Frame-Driven to Frame-Free Event-Driven Vision Systems by Low-Rate Rate-Coding and Coincidence Processing. Application to Feed Forward ConvNets.

    PubMed

    Perez-Carrasco, J A; Zhao, B; Serrano, C; Acha, B; Serrano-Gotarredona, T; Chen, S; Linares-Barranco, B

    2013-04-10

    Event-driven visual sensors have attracted interest from a number of different research communities. They provide visual information in quite a different way from conventional video systems consisting of sequences of still images rendered at a “frame rate”. Event-driven vision sensors take inspiration from biology. A special type of Event-driven sensor is the so-called Dynamic-Vision-Sensor (DVS) where each pixel computes relative changes of light, or “temporal contrast”. Pixel events become available with microsecond delays with respect to “reality”. These events can be processed “as they flow” by a cascade of event (convolution) processors. As a result, input and output event flows are practically coincident, and objects can be recognized as soon as the sensor provides enough meaningful events. In this paper we present a methodology for mapping from a properly trained neural network in a conventional Frame-driven representation to an Event-driven representation. The method is illustrated by studying Event-driven Convolutional Neural Networks (ConvNet) trained to recognize rotating human silhouettes or high speed poker card symbols. The Event-driven ConvNet is fed with recordings obtained from a real DVS camera. The Event-driven ConvNet is simulated with a dedicated Event-driven simulator, and consists of a number of Event-driven processing modules, the characteristics of which are obtained from individually manufactured hardware modules. PMID:23589589

  1. Flow simulation on generalized grids

    SciTech Connect

    Koomullil, R.P.; Soni, B.K.; Huang, Chi Ti

    1996-12-31

    A hybrid grid generation methodology and flow simulation on grids composed of polygons with an arbitrary number of sides are presented. A hyperbolic-type marching scheme is used for generating structured grids near the solid boundaries. A local elliptic solver is utilized for smoothing the grid lines and for avoiding grid line crossing. A new method for trimming the overlaid structured grid is presented. Delaunay triangulation is employed to generate an unstructured grid in the regions away from the body. The structured and unstructured grid regions are integrated together to form a single grid for the flow solver. An edge-based data structure is used to store the grid information to ease the handling of general polygons. The integral form of the Navier-Stokes equations makes up the governing equations. A Roe-averaged Riemann solver is utilized to evaluate the numerical flux at cell faces. Higher-order accuracy is achieved by applying Taylor's series expansion to the conserved variables, and the gradient is calculated by using Green's theorem. For the implicit scheme, the sparse matrix resulting from the linearization is solved using the GMRES method. The flux Jacobians are calculated numerically or by an approximate analytic method. Results are presented to validate the current methodology.

  2. Gaming Simulation: A General Classification.

    ERIC Educational Resources Information Center

    Cecchini, Arnaldo; Frisenna, Adriana

    1987-01-01

    Reviews the problems of classifying gaming techniques and suggests a heuristic approach as one solution. Definitions of simulation, models, role, and game and play are discussed to help develop a classification based on a technique called gaming simulation. (Author/LRW)

  3. Event-Driven X-Ray CCD Detectors for High Energy Astrophysics

    NASA Technical Reports Server (NTRS)

    Ricker, George R.

    2004-01-01

    A viewgraph presentation describing the Event-Driven X- Ray CCD (EDCCD) detector system for high energy astrophysics is presented. The topics include: 1) EDCCD: Description and Advantages; 2) Summary of Grant Activity Carried Out; and 3) EDCCD Test System.

  4. Event-driven model predictive control of sewage pumping stations for sulfide mitigation in sewer networks.

    PubMed

    Liu, Yiqi; Ganigué, Ramon; Sharma, Keshab; Yuan, Zhiguo

    2016-07-01

    Chemicals such as Mg(OH)2 and iron salts are widely dosed to sewage for mitigating sulfide-induced corrosion and odour problems in sewer networks. The chemical dosing rate is usually not automatically controlled but profiled based on the experience of operators, often resulting in over- or under-dosing. Even though on-line control algorithms for chemical dosing in single pipes have been developed recently, network-wide control algorithms are currently not available. The key challenge is that a sewer network is typically widespread, comprising many interconnected sewer pipes and pumping stations, making network-wide sulfide mitigation with a relatively limited number of dosing points challenging. In this paper, we propose and demonstrate an Event-driven Model Predictive Control (EMPC) methodology, which controls the flows of sewage streams containing the dosed chemical to ensure desirable distribution of the dosed chemical throughout the pipe sections of interest. First of all, a network-state model is proposed to predict the chemical concentration in a network. An EMPC algorithm is then designed to coordinate sewage pumping station operations to ensure desirable chemical distribution in the network. The performance of the proposed control methodology is demonstrated by applying the designed algorithm to a real sewer network simulated with the well-established SeweX model using real sewage flow and characteristics data. The EMPC strategy significantly improved the sulfide mitigation performance with the same chemical consumption, compared to the current practice. PMID:27124127
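
    The event-driven control idea can be sketched generically (a schematic of event-triggered re-planning, not the authors' EMPC algorithm or the SeweX model): a predictive model is evaluated at every supervisory step, but the expensive optimisation is only re-run when the prediction leaves a target band. The functions predict_concentrations and optimize_pump_schedule, and all parameter values, are hypothetical placeholders.

        def event_driven_mpc_step(predict_concentrations, optimize_pump_schedule,
                                  current_schedule, horizon_steps, band=(0.5, 2.0)):
            """One supervisory step of a generic event-driven MPC loop (sketch).

            predict_concentrations(schedule, horizon_steps) -> predicted chemical
                concentrations over the horizon (hypothetical network model).
            optimize_pump_schedule(horizon_steps) -> new pumping schedule
                (hypothetical wrapper around an MPC solver).
            """
            lo, hi = band
            predicted = predict_concentrations(current_schedule, horizon_steps)
            # Event condition: trigger the optimisation only when the predicted
            # concentration leaves the desired band somewhere on the horizon.
            if any(c < lo or c > hi for c in predicted):
                return optimize_pump_schedule(horizon_steps)   # re-plan
            return current_schedule                            # keep the current plan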

  5. Field Evaluation of a General Purpose Simulator.

    ERIC Educational Resources Information Center

    Spangenberg, Ronald W.

    The use of a general purpose simulator (GPS) to teach Air Force technicians diagnostic and repair procedures for specialized aircraft radar systems is described. An EC II simulator manufactured by Educational Computer Corporation was adapted to resemble the actual configuration technicians would encounter in the field. Data acquired in the…

  6. Simulations in generalized ensembles through noninstantaneous switches.

    PubMed

    Giovannelli, Edoardo; Cardini, Gianni; Chelli, Riccardo

    2015-10-01

    Generalized-ensemble simulations, such as replica exchange and serial generalized-ensemble methods, are powerful simulation tools to enhance sampling of free energy landscapes in systems with high energy barriers. In these methods, sampling is enhanced through instantaneous transitions of replicas, i.e., copies of the system, between different ensembles characterized by some control parameter associated with thermodynamical variables (e.g., temperature or pressure) or collective mechanical variables (e.g., interatomic distances or torsional angles). An interesting evolution of these methodologies has been proposed by replacing the conventional instantaneous (trial) switches of replicas with noninstantaneous switches, realized by varying the control parameter in a finite time and accepting the final replica configuration with a Metropolis-like criterion based on the Crooks nonequilibrium work (CNW) theorem. Here we revise these techniques focusing on their correlation with the CNW theorem in the framework of Markovian processes. An outcome of this report is the derivation of the acceptance probability for noninstantaneous switches in serial generalized-ensemble simulations, where we show that explicit knowledge of the time dependence of the weight factors entering such simulations is not necessary. A generalized relationship of the CNW theorem is also provided in terms of the underlying equilibrium probability distribution at a fixed control parameter. Illustrative calculations on a toy model are performed with serial generalized-ensemble simulations, especially focusing on the different behavior of instantaneous and noninstantaneous replica transition schemes. PMID:26565367
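
    The Metropolis-like criterion mentioned above can be illustrated with the generic work-based acceptance rule of the Crooks/Jarzynski type (a sketch under that assumption, not the specific acceptance probability derived in the paper, which also involves the ensemble weight factors): after driving the control parameter over a finite switching time, the accumulated nonequilibrium work decides whether the final replica configuration is kept. The function name and the free-energy estimate are placeholders.

        import math
        import random

        def accept_noninstantaneous_switch(work, delta_f_estimate, beta):
            """Generic work-based, Metropolis-like acceptance test (sketch).

            work             : nonequilibrium work accumulated along the finite-time switch
            delta_f_estimate : estimated free-energy difference between the two ensembles
            beta             : inverse temperature 1 / (kB * T)
            """
            p_accept = min(1.0, math.exp(-beta * (work - delta_f_estimate)))
            return random.random() < p_accept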

  7. Simulations in generalized ensembles through noninstantaneous switches

    NASA Astrophysics Data System (ADS)

    Giovannelli, Edoardo; Cardini, Gianni; Chelli, Riccardo

    2015-10-01

    Generalized-ensemble simulations, such as replica exchange and serial generalized-ensemble methods, are powerful simulation tools to enhance sampling of free energy landscapes in systems with high energy barriers. In these methods, sampling is enhanced through instantaneous transitions of replicas, i.e., copies of the system, between different ensembles characterized by some control parameter associated with thermodynamical variables (e.g., temperature or pressure) or collective mechanical variables (e.g., interatomic distances or torsional angles). An interesting evolution of these methodologies has been proposed by replacing the conventional instantaneous (trial) switches of replicas with noninstantaneous switches, realized by varying the control parameter in a finite time and accepting the final replica configuration with a Metropolis-like criterion based on the Crooks nonequilibrium work (CNW) theorem. Here we revise these techniques focusing on their correlation with the CNW theorem in the framework of Markovian processes. An outcome of this report is the derivation of the acceptance probability for noninstantaneous switches in serial generalized-ensemble simulations, where we show that explicit knowledge of the time dependence of the weight factors entering such simulations is not necessary. A generalized relationship of the CNW theorem is also provided in terms of the underlying equilibrium probability distribution at a fixed control parameter. Illustrative calculations on a toy model are performed with serial generalized-ensemble simulations, especially focusing on the different behavior of instantaneous and noninstantaneous replica transition schemes.

  8. Connection between Newtonian simulations and general relativity

    SciTech Connect

    Chisari, Nora Elisa; Zaldarriaga, Matias

    2011-06-15

    On large scales, comparable to the horizon, the observable clustering properties of galaxies are affected by various general relativistic effects. To calculate these effects one needs to consistently solve for the metric, densities, and velocities in a specific coordinate system or gauge. The method of choice for simulating large-scale structure is numerical N-body simulations which are performed in the Newtonian limit. Even though one might worry that the use of the Newtonian approximation would make it impossible to use these simulations to compute properties on very large scales, we show that the simulations are still solving the dynamics correctly even for long modes and we give formulas to obtain the position of particles in the conformal Newtonian gauge given the positions computed in the simulation. We also give formulas to convert from the output coordinates of N-body simulations to the observable coordinates of the particles.

  9. Notification Event Architecture for Traveler Screening: Predictive Traveler Screening Using Event Driven Business Process Management

    ERIC Educational Resources Information Center

    Lynch, John Kenneth

    2013-01-01

    Using an exploratory model of the 9/11 terrorists, this research investigates the linkages between Event Driven Business Process Management (edBPM) and decision making. Although the literature on the role of technology in efficient and effective decision making is extensive, research has yet to quantify the benefit of using edBPM to aid the…

  10. A general software reliability process simulation technique

    NASA Technical Reports Server (NTRS)

    Tausworthe, Robert C.

    1991-01-01

    The structure and rationale of the generalized software reliability process, together with the design and implementation of a computer program that simulates this process are described. Given assumed parameters of a particular project, the users of this program are able to generate simulated status timelines of work products, numbers of injected anomalies, and the progress of testing, fault isolation, repair, validation, and retest. Such timelines are useful in comparison with actual timeline data, for validating the project input parameters, and for providing data for researchers in reliability prediction modeling.

  11. Spectral Methods in General Relativistic MHD Simulations

    NASA Astrophysics Data System (ADS)

    Garrison, David

    2012-03-01

    In this talk I discuss the use of spectral methods in improving the accuracy of a General Relativistic Magnetohydrodynamic (GRMHD) computer code. I introduce SpecCosmo, a GRMHD code developed as a Cactus arrangement at UHCL, and show simulation results using both Fourier spectral methods and finite differencing. This work demonstrates the use of spectral methods with the FFTW 3.3 Fast Fourier Transform package integrated with the Cactus Framework to perform spectral differencing using MPI.

  12. Simulation of General Physics laboratory exercise

    NASA Astrophysics Data System (ADS)

    Aceituno, P.; Hernández-Aceituno, J.; Hernández-Cabrera, A.

    2015-01-01

    Laboratory exercises are an important part of general Physics teaching, both during the last years of high school and the first year of college education. Due to the need to acquire enough laboratory equipment for all students, and the widespread access to computer rooms in teaching, we propose the development of computer-simulated laboratory exercises. A representative exercise in general Physics is the calculation of the gravity acceleration value, through the free fall motion of a metal ball. Using a model of the real exercise, we have developed an interactive system which allows students to alter the starting height of the ball to obtain different fall times. The simulation was programmed in ActionScript 3, so that it can be freely executed on any operating system; to ensure the accuracy of the calculations, all the input parameters of the simulations were modelled using digital measurement units, and to allow a statistical treatment of the resulting data, measurement errors are simulated through limited randomization.
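
    A minimal numerical sketch of the exercise described above (not the ActionScript implementation): generate noisy fall-time measurements for a chosen drop height, recover g from h = g t^2 / 2 for each trial, and report the mean and spread. All parameter values are illustrative.

        import random
        import statistics

        G_TRUE = 9.81                      # m/s^2, used only to generate synthetic fall times

        def simulated_fall_time(height_m, timing_error_s=0.01):
            """Return one simulated fall time with a small Gaussian timing error."""
            t_ideal = (2.0 * height_m / G_TRUE) ** 0.5
            return t_ideal + random.gauss(0.0, timing_error_s)

        def estimate_g(height_m, n_trials=20):
            """Estimate g and its spread from repeated simulated drops (h = g t^2 / 2)."""
            estimates = [2.0 * height_m / simulated_fall_time(height_m) ** 2
                         for _ in range(n_trials)]
            return statistics.mean(estimates), statistics.stdev(estimates)

        g_mean, g_std = estimate_g(height_m=1.5)
        print(f"g = {g_mean:.2f} +/- {g_std:.2f} m/s^2")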

  13. General Relativistic Magnetohydrodynamic Simulations of Collapsars

    NASA Technical Reports Server (NTRS)

    Mizuno, Yosuke; Yamada, S.; Koide, S.; Shibata, K.

    2005-01-01

    We have performed 2.5-dimensional general relativistic magnetohydrodynamic (MHD) simulations of collapsars including a rotating black hole. Initially, we assume that the core collapse has failed in this star. A rotating black hole of a few solar masses is inserted by hand into the calculation. The simulation results show the formation of a disklike structure and the generation of a jetlike outflow near the central black hole. The jetlike outflow propagates and is accelerated mainly by the magnetic field. The total jet velocity is approximately 0.3c. When the rotation of the black hole is faster, the magnetic field is twisted strongly owing to the frame-dragging effect. The magnetic energy stored by the twisting magnetic field is directly converted to kinetic energy of the jet rather than propagating as an Alfvén wave. Thus, as the rotation of the black hole becomes faster, the poloidal velocity of the jet becomes faster.

  14. A general formulation for compositional reservoir simulation

    SciTech Connect

    Rodriguez, F.; Guzman, J.; Galindo-Nava, A.

    1994-12-31

    In this paper the authors present a general formulation to solve the non-linear difference equations that arise in compositional reservoir simulation. The general approach here presented is based on Newton's method and provides a systematic approach to generate several formulations to solve the compositional problem, each possessing a different degree of implicitness and stability characteristics. The Fully-Implicit method is at the higher end of the implicitness spectrum, while the IMPECS method, implicit in pressure and explicit in composition and saturation, is at the lower end. They show that all methods may be obtained as particular cases of the fully-implicit method. Regarding the matrix problem, all methods have a similar matrix structure; the composition of the Jacobian matrix is however unique in each case, being in some instances amenable to reductions for optimal solution of the matrix problem. Based on this, a different approach to derive IMPECS-type methods is proposed; in this case, the whole set of 2nc + 6 equations that apply in each gridblock is reduced to a single pressure equation through matrix reduction operations; this provides a more stable numerical scheme, compared to other published IMPECS methods, in which the subset of thermodynamic equilibrium equations is arbitrarily decoupled from the set of gridblock equations to perform such reduction. The authors discuss how the general formulation here presented can be used to formulate and construct an adaptive-implicit compositional simulator. They also present results on the numerical performance of the FI, IMPSEC, and IMPECS methods on some test problems.
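
    Independently of the particular compositional formulation, the Newton step on which the variants rest can be sketched generically: linearise the residuals of the nonlinear difference equations about the current iterate, solve the Jacobian system, and repeat until the residual norm is small. The code below is a generic dense Newton solver, not the authors' simulator; the residual and Jacobian functions are user-supplied placeholders, and a real simulator would use a sparse solver such as GMRES.

        import numpy as np

        def newton_solve(residual, jacobian, x0, tol=1e-8, max_iter=25):
            """Generic Newton iteration for R(x) = 0 (sketch).

            residual(x) -> residual vector R(x)
            jacobian(x) -> Jacobian matrix dR/dx (dense here; sparse in practice)
            """
            x = np.asarray(x0, dtype=float)
            for _ in range(max_iter):
                r = residual(x)
                if np.linalg.norm(r) < tol:
                    break
                dx = np.linalg.solve(jacobian(x), -r)   # Newton update: J dx = -R
                x = x + dx
            return x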

  15. Modeling the Energy Performance of Event-Driven Wireless Sensor Network by Using Static Sink and Mobile Sink

    PubMed Central

    Chen, Jiehui; Salim, Mariam B.; Matsumoto, Mitsuji

    2010-01-01

    Wireless Sensor Networks (WSNs) designed for mission-critical applications suffer from limited sensing capacities, particularly fast energy depletion. Regarding this, mobile sinks can be used to balance the energy consumption in WSNs, but the frequent location updates of the mobile sinks can lead to data collisions and rapid energy consumption for some specific sensors. This paper explores an optimal barrier-coverage-based sensor deployment for event-driven WSNs, where a dual-sink model was designed to evaluate the energy performance of not only the static sensors but also the Static Sink (SS) and Mobile Sinks (MSs) simultaneously, based on parameters such as the sensor transmission range r and the velocity of the mobile sink v. Moreover, a MS mobility model was developed to enable the SS and MSs to collaborate effectively, while achieving spatiotemporal energy-performance efficiency by using the knowledge of the cumulative density function (cdf), the Poisson process, and the M/G/1 queue. The simulation results clearly demonstrated the improved energy performance of the whole network and showed that our eDSA algorithm is more efficient than the static-sink model, reducing energy consumption by approximately half. Moreover, we demonstrate that our results are robust to realistic sensing models and also validate the correctness of our results through extensive simulations. PMID:22163503

  16. Event-driven charge-coupled device design and applications therefor

    NASA Technical Reports Server (NTRS)

    Doty, John P. (Inventor); Ricker, Jr., George R. (Inventor); Burke, Barry E. (Inventor); Prigozhin, Gregory Y. (Inventor)

    2005-01-01

    An event-driven X-ray CCD imager device uses a floating-gate amplifier or other non-destructive readout device to non-destructively sense the charge level in a charge packet associated with a pixel. The output of the floating-gate amplifier is used to identify each pixel that has a charge level above a predetermined threshold. If the charge level is above the predetermined threshold, the charge in the triggering charge packet and in the charge packets from neighboring pixels needs to be measured accurately. A charge delay register is included in the event-driven X-ray CCD imager device to enable recovery of the charge packets from neighboring pixels for accurate measurement. When a charge packet reaches the end of the charge delay register, control logic either dumps the charge packet or steers the charge packet to a charge FIFO to preserve it if the charge packet is determined to be a packet that needs accurate measurement. A floating-diffusion amplifier or other low-noise output stage device, which converts charge level to a voltage level with high precision, provides final measurement of the charge packets. The voltage level is eventually digitized by a high-linearity ADC.
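
    A behavioural sketch of the triggering idea described above (an illustration only, not the patented circuit): each non-destructively sensed charge level in a register row is compared with the threshold, and packets that trigger, or that neighbour a trigger, are flagged for accurate measurement while the rest are dumped. The function name and the neighbourhood size are illustrative assumptions.

        def route_charge_packets(charge_row, threshold, neighborhood=1):
            """Decide, per pixel in one register row, whether its packet is measured or dumped.

            charge_row : list of non-destructively sensed charge levels
            Returns a list of 'measure' / 'dump' decisions (sketch of the steering logic).
            """
            n = len(charge_row)
            keep = [False] * n
            for i, q in enumerate(charge_row):
                if q >= threshold:                       # triggering pixel
                    lo = max(0, i - neighborhood)
                    hi = min(n, i + neighborhood + 1)
                    for j in range(lo, hi):              # keep neighbours for accurate measurement
                        keep[j] = True
            return ["measure" if k else "dump" for k in keep]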

  17. Event-Driven Observations and Comprehensive Evaluation for Natural Disaster Assessment in China

    NASA Astrophysics Data System (ADS)

    Wang, Z.; Li, Z.; Shen, Y.; Wu, L.; Li, H.

    2012-08-01

    The Chinese event-driven observations and disaster assessment system has been established to make information related to environmental risk and vulnerability easily accessible to decision-makers through a centralized platform. At 7:49 AM on April 14, 2010, a magnitude-7.1 earthquake collapsed buildings in Yushu County, Yushu Tibetan Autonomous Prefecture, in Qinghai Province. For emergency response, we present a method for generating a seismic intensity map based on seismological mechanism solutions. The disaster assessment system automatically drew the distribution map of the affected population 1 hour after the Yushu earthquake. Based on this distribution map, we judged that Gyêgu town was likely the worst-hit town in the Yushu earthquake, because it is not only near the epicenter but also the capital of Yushu Tibetan Autonomous Prefecture. Event-driven observations were then made of Gyêgu town. Referring to the available data, the chain of rapid earthquake-disaster assessment was analyzed, and different models were established for assessing the affected population, damaged houses and lifelines, and for comprehensive earthquake loss evaluation.

  18. EVENT DRIVEN AUTOMATIC STATE MODIFICATION OF BNL'S BOOSTER FOR NASA SPACE RADIATION LABORATORY SOLAR PARTICLE SIMULATOR.

    SciTech Connect

    BROWN, D.; BINELLO, S.; HARVEY, M.; MORRIS, J.; RUSEK, A.; TSOUPAS, N.

    2005-05-16

    The NASA Space Radiation Laboratory (NSRL) was constructed in collaboration with NASA for the purpose of performing radiation effect studies for the NASA space program. The NSRL makes use of heavy ions in the range of 0.05 to 3 GeV/n slow extracted from BNL's AGS Booster. NASA is interested in reproducing the energy spectrum from a solar flare in the space environment for a single ion species. To do this we have built and tested a set of software tools which allow the state of the Booster and the NSRL beam line to be changed automatically. In this report we will describe the system and present results of beam tests.

  19. Event-driven visual attention for the humanoid robot iCub.

    PubMed

    Rea, Francesco; Metta, Giorgio; Bartolozzi, Chiara

    2013-01-01

    Fast reaction to sudden and potentially interesting stimuli is a crucial feature for safe and reliable interaction with the environment. Here we present a biologically inspired attention system developed for the humanoid robot iCub. It is based on input from unconventional event-driven vision sensors and an efficient computational method. The resulting system shows low latency and fast determination of the location of the focus of attention. The performance is benchmarked against a state-of-the-art artificial attention system used in robotics. Results show that the proposed system is two orders of magnitude faster than the benchmark in selecting a new stimulus to attend. PMID:24379753

  20. Event-driven approach of layered multicast to network adaptation in RED-based IP networks

    NASA Astrophysics Data System (ADS)

    Nahm, Kitae; Li, Qing; Kuo, C.-C. J.

    2003-11-01

    In this work, we investigate the congestion control problem for layered video multicast in IP networks with active queue management (AQM), using a simple random early detection (RED) queue model. AQM support from networks improves the visual quality of video streaming but makes network adaptation more difficult for existing layered video multicast protocols that use the event-driven timer-based approach. We perform a simplified analysis on the response of the RED algorithm to burst traffic. The analysis shows that the primary problem lies in the weak correlation between the network feedback and the actual network congestion status when the RED queue is driven by burst traffic. Finally, a design guideline for the layered multicast protocol is proposed to overcome this problem.
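
    For reference, the RED queue behaviour that the analysis relies on can be sketched in a few lines: the router keeps an exponentially weighted moving average of the queue length and drops (or marks) packets with a probability that ramps up linearly between a minimum and a maximum threshold. This is the textbook RED rule in simplified form, not the paper's analysis; the parameter values are illustrative.

        def red_update(avg_queue, current_queue, w=0.002,
                       min_th=5.0, max_th=15.0, max_p=0.1):
            """One RED step: update the averaged queue length and return (new_avg, drop_prob)."""
            avg = (1.0 - w) * avg_queue + w * current_queue      # EWMA of the queue length
            if avg < min_th:
                p = 0.0
            elif avg >= max_th:
                p = 1.0
            else:
                p = max_p * (avg - min_th) / (max_th - min_th)   # linear ramp between thresholds
            return avg, p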

  1. Design of an Event-Driven Random-Access-Windowing CCD-Based Camera

    NASA Technical Reports Server (NTRS)

    Monacos, Steve P.; Lam, Raymond K.; Portillo, Angel A.; Ortiz, Gerardo G.

    2003-01-01

    Commercially available cameras are not designed for the combination of single-frame and high-speed streaming digital video with real-time control of the size and location of multiple regions of interest (ROIs). A new control paradigm is defined to eliminate the tight coupling between the camera logic and the host controller. This functionality is achieved by defining the indivisible pixel readout operation on a per-ROI basis with in-camera time-keeping capability. This methodology provides a Random Access, Real-Time, Event-driven (RARE) camera for adaptive camera control and is well suited for target tracking applications requiring autonomous control of multiple ROIs. This methodology additionally provides for reduced ROI readout time and higher frame rates compared to the original architecture by avoiding external control intervention during the ROI readout process.

  2. Event-driven visual attention for the humanoid robot iCub

    PubMed Central

    Rea, Francesco; Metta, Giorgio; Bartolozzi, Chiara

    2013-01-01

    Fast reaction to sudden and potentially interesting stimuli is a crucial feature for safe and reliable interaction with the environment. Here we present a biologically inspired attention system developed for the humanoid robot iCub. It is based on input from unconventional event-driven vision sensors and an efficient computational method. The resulting system shows low latency and fast determination of the location of the focus of attention. The performance is benchmarked against a state-of-the-art artificial attention system used in robotics. Results show that the proposed system is two orders of magnitude faster than the benchmark in selecting a new stimulus to attend. PMID:24379753

  3. FusionAnalyser: a new graphical, event-driven tool for fusion rearrangements discovery.

    PubMed

    Piazza, Rocco; Pirola, Alessandra; Spinelli, Roberta; Valletta, Simona; Redaelli, Sara; Magistroni, Vera; Gambacorti-Passerini, Carlo

    2012-09-01

    Gene fusions are common driver events in leukaemias and solid tumours; here we present FusionAnalyser, a tool dedicated to the identification of driver fusion rearrangements in human cancer through the analysis of paired-end high-throughput transcriptome sequencing data. We initially tested FusionAnalyser by using a set of in silico randomly generated sequencing data from 20 known human translocations occurring in cancer, and subsequently using transcriptome data from three chronic and three acute myeloid leukaemia samples. In all cases our tool was able to detect the presence of the correct driver fusion event(s) with high specificity. In one of the acute myeloid leukaemia samples, FusionAnalyser identified a novel, cryptic, in-frame ETS2-ERG fusion. A fully event-driven graphical interface and a flexible filtering system allow complex analyses to be run in the absence of any a priori programming or scripting knowledge. Therefore, we propose FusionAnalyser as an efficient and robust graphical tool for the identification of functional rearrangements in the context of high-throughput transcriptome sequencing data. PMID:22570408

  4. GTOSS: Generalized Tethered Object Simulation System

    NASA Technical Reports Server (NTRS)

    Lang, David D.

    1987-01-01

    GTOSS represents a tether analysis complex which is described by addressing its family of modules. TOSS is a portable software subsystem specifically designed to be introduced into the environment of any existing vehicle dynamics simulation to add the capability of simulating multiple interacting objects (via multiple tethers). These objects may interact with each other as well as with the vehicle into whose environment TOSS is introduced. GTOSS is a stand-alone tethered system analysis program, representing an example of TOSS having been married to a host simulation. RTOSS is the Results Data Base (RDB) subsystem designed to archive TOSS simulation results for future display processing. DTOSS is a display post-processor designed to utilize the RDB. DTOSS extracts data from the RDB for multi-page printed time history displays. CTOSS is similar to DTOSS, but is designed to create ASCII plot files. The same time history data formats provided for DTOSS (for printing) are available via CTOSS for plotting. How these and other modules interact with each other is discussed.

  5. A 300-mV 220-nW event-driven ADC with real-time QRS detection for wearable ECG sensors.

    PubMed

    Zhang, Xiaoyang; Lian, Yong

    2014-12-01

    This paper presents an ultra-low-power event-driven analog-to-digital converter (ADC) with real-time QRS detection for wearable electrocardiogram (ECG) sensors in wireless body sensor network (WBSN) applications. Two QRS detection algorithms, pulse-triggered (PUT) and time-assisted PUT (t-PUT), are proposed based on the level-crossing events generated from the ADC. The PUT detector achieves 97.63% sensitivity and 97.33% positive prediction in simulation on the MIT-BIH Arrhythmia Database. The t-PUT improves the sensitivity and positive prediction to 97.76% and 98.59% respectively. Fabricated in 0.13 μm CMOS technology, the ADC with QRS detector consumes only 220 nW measured under 300 mV power supply, making it the first nanoWatt compact analog-to-information (A2I) converter with embedded QRS detector. PMID:25608283
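
    The level-crossing principle on which such event-driven ADCs are based can be sketched as follows (a behavioural illustration, not the reported circuit or the PUT/t-PUT detectors): an event is generated whenever the input crosses one of a set of uniformly spaced levels, and only the crossing times and directions are kept. The function name and the level spacing are illustrative assumptions.

        def level_crossing_events(samples, times, delta=0.05):
            """Convert a densely sampled signal into level-crossing events (sketch).

            samples, times : equal-length sequences describing the input signal
            delta          : spacing between quantisation levels
            Returns a list of (time, direction) events, direction = +1 (up) or -1 (down).
            """
            events = []
            ref = samples[0]                      # last level that was crossed
            for t, v in zip(times, samples):
                while v >= ref + delta:           # upward crossings
                    ref += delta
                    events.append((t, +1))
                while v <= ref - delta:           # downward crossings
                    ref -= delta
                    events.append((t, -1))
            return events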

  6. Real-time gesture interface based on event-driven processing from stereo silicon retinas.

    PubMed

    Lee, Jun Haeng; Delbruck, Tobi; Pfeiffer, Michael; Park, Paul K J; Shin, Chang-Woo; Ryu, Hyunsurk Eric; Kang, Byung Chang

    2014-12-01

    We propose a real-time hand gesture interface based on combining a stereo pair of biologically inspired event-based dynamic vision sensor (DVS) silicon retinas with neuromorphic event-driven postprocessing. Compared with conventional vision or 3-D sensors, the use of DVSs, which output asynchronous and sparse events in response to motion, eliminates the need to extract movements from sequences of video frames, and allows significantly faster and more energy-efficient processing. In addition, the rate of input events depends on the observed movements, and thus provides an additional cue for solving the gesture spotting problem, i.e., finding the onsets and offsets of gestures. We propose a postprocessing framework based on spiking neural networks that can process the events received from the DVSs in real time, and provides an architecture for future implementation in neuromorphic hardware devices. The motion trajectories of moving hands are detected by spatiotemporally correlating the stereoscopically verged asynchronous events from the DVSs by using leaky integrate-and-fire (LIF) neurons. Adaptive thresholds of the LIF neurons achieve the segmentation of trajectories, which are then translated into discrete and finite feature vectors. The feature vectors are classified with hidden Markov models, using a separate Gaussian mixture model for spotting irrelevant transition gestures. The disparity information from stereovision is used to adapt LIF neuron parameters to achieve recognition invariant of the distance of the user to the sensor, and also helps to filter out movements in the background of the user. Exploiting the high dynamic range of DVSs, furthermore, allows gesture recognition over a 60-dB range of scene illuminance. The system achieves recognition rates well over 90% under a variety of variable conditions with static and dynamic backgrounds with naïve users. PMID:25420246
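
    The spatiotemporal correlation step can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron driven by timestamped events: each incoming event adds a weighted contribution after the membrane potential has decayed for the elapsed time, and the neuron spikes when a threshold is reached. This is a generic LIF sketch, not the authors' network; the class name and parameter values are illustrative.

        import math

        class LIFNeuron:
            """Minimal leaky integrate-and-fire neuron driven by timestamped events (sketch)."""

            def __init__(self, tau=0.02, threshold=1.0):
                self.tau = tau                # membrane time constant (s)
                self.threshold = threshold
                self.v = 0.0                  # membrane potential
                self.last_t = 0.0

            def receive(self, t, weight):
                """Integrate one event at time t with synaptic weight; return True on a spike."""
                self.v *= math.exp(-(t - self.last_t) / self.tau)   # exponential leak
                self.last_t = t
                self.v += weight
                if self.v >= self.threshold:
                    self.v = 0.0              # reset after firing
                    return True
                return False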

  7. General Case Simulation Instruction of Generalized Housekeeping Skills in Blind, Multihandicapped Adults.

    ERIC Educational Resources Information Center

    Lengyel, L. M.; And Others

    1990-01-01

    The study with three blind and mentally retarded adults with additional disabilities found that general case simulation instruction in housekeeping skills led to generalization to untrained settings. Degree of generalization was inversely related to the severity and complexity of participant disability. (Author/DB)

  8. General Aviation Cockpit Weather Information System Simulation Studies

    NASA Technical Reports Server (NTRS)

    McAdaragh, Ray; Novacek, Paul

    2003-01-01

    This viewgraph presentation provides information on two experiments on the effectiveness of a cockpit weather information system on a simulated general aviation flight. The presentation covers the simulation hardware configuration, the display device screen layout, a mission scenario, conclusions, and recommendations. The second experiment, with its own scenario and conclusions, is a follow-on experiment.

  9. Test and evaluation of the generalized gate logic system simulator

    NASA Technical Reports Server (NTRS)

    Miner, Paul S.

    1991-01-01

    The results of the initial testing of the Generalized Gate Level Logic Simulator (GGLOSS) are discussed. The simulator is a special purpose fault simulator designed to assist in the analysis of the effects of random hardware failures on fault tolerant digital computer systems. The testing of the simulator covers two main areas. First, the simulation results are compared with data obtained by monitoring the behavior of hardware. The circuit used for these comparisons is an incomplete microprocessor design based upon the MIL-STD-1750A Instruction Set Architecture. In the second area of testing, current simulation results are compared with experimental data obtained using precursors of the current tool. In each case, a portion of the earlier experiment is confirmed. The new results are then viewed from a different perspective in order to evaluate the usefulness of this simulation strategy.

  10. A Comparison of General Case In Vivo and General Case Simulation Plus In Vivo Training.

    ERIC Educational Resources Information Center

    McDonnell, John J.; Ferguson, Brad

    1988-01-01

    The study examined the relative effectiveness and efficiency of general case in vivo and general case simulation plus in vivo training in teaching six students with moderate and severe disabilities to purchase items in fast-food restaurants. Although both strategies led to reliable performance in nontrained settings, the in vivo instruction…

  11. Sampling of general correlators in worm-algorithm based simulations

    NASA Astrophysics Data System (ADS)

    Rindlisbacher, Tobias; Åkerlund, Oscar; de Forcrand, Philippe

    2016-08-01

    Using the complex φ⁴ model as a prototype for a system which is simulated by a worm algorithm, we show that not only the charged correlator ⟨φ*(x) φ(y)⟩, but also more general correlators such as ⟨|φ(x)| |φ(y)|⟩ or ⟨arg(φ(x)) arg(φ(y))⟩, as well as condensates like ⟨|φ|⟩, can be measured at every step of the Monte Carlo evolution of the worm instead of on closed-worm configurations only. The method generalizes straightforwardly to other systems simulated by worms, such as spin or sigma models.

  12. The development of an interim generalized gate logic software simulator

    NASA Technical Reports Server (NTRS)

    Mcgough, J. G.; Nemeroff, S.

    1985-01-01

    A proof-of-concept computer program called IGGLOSS (Interim Generalized Gate Logic Software Simulator) was developed and is discussed. The simulator engine was designed to perform stochastic estimation of self-test coverage (fault-detection latency times) of digital computers or systems. A major attribute of the IGGLOSS is its high-speed simulation: 9.5 × 10⁶ gates/CPU second for nonfaulted circuits and 4.4 × 10⁶ gates/CPU second for faulted circuits on a VAX 11/780 host computer.

  13. Simulations of accretion disks in pseudo-complex General Relativity

    NASA Astrophysics Data System (ADS)

    Hess, P. O.; Algalán B., M.; Schönenbach, T.; Greiner, W.

    2015-11-01

    After a summary of pseudo-complex General Relativity (pc-GR), circular orbits and stable orbits in general are discussed, including predictions compared to observations. Using a modified version of a model for accretion disks presented by Page and Thorne in 1974, we apply the ray-tracing technique in order to simulate the appearance of an accretion disk as it should be observed in a detector. In pc-GR we predict a dark ring near a very massive, rapidly rotating object.

  14. The power of event-driven analytics in Large Scale Data Processing

    ScienceCinema

    None

    2011-04-25

    FeedZai is a software company specialized in creating high-throughput, low-latency data processing solutions. FeedZai develops a product called "FeedZai Pulse" for continuous event-driven analytics that makes application development easier for end users. It automatically calculates key performance indicators and baselines, showing how current performance differs from previous history, creating timely business intelligence updated to the second. The tool does predictive analytics and trend analysis, displaying data on real-time web-based graphics. In 2010 FeedZai won the European EBN Smart Entrepreneurship Competition, in the Digital Models category, being considered one of the "top-20 smart companies in Europe". The main objective of this seminar/workshop is to explore the topic of large-scale data processing using Complex Event Processing and, in particular, the possible uses of Pulse in the scope of the data processing needs of CERN. Pulse is available as open-source and can be licensed both for non-commercial and commercial applications. FeedZai is interested in exploring possible synergies with CERN in high-volume, low-latency data processing applications. The seminar will be structured in two sessions, the first one being aimed to expose the general scope of FeedZai's activities, and the second focused on Pulse itself: 10:00-11:00 FeedZai and Large Scale Data Processing Introduction to FeedZai FeedZai Pulse and Complex Event Processing Demonstration Use-Cases and Applications Conclusion and Q&A 11:00-11:15 Coffee break 11:15-12:30 FeedZai Pulse Under the Hood A First FeedZai Pulse Application PulseQL overview Defining KPIs and Baselines Conclusion and Q&A About the speakers Nuno Sebastião is the CEO of FeedZai. Having worked for many years for the European Space Agency (ESA), he was responsible for the overall design and development of the Satellite Simulation Infrastructure of the agency. Having left ESA to found FeedZai, Nuno is

  15. The power of event-driven analytics in Large Scale Data Processing

    SciTech Connect

    2011-02-24

    FeedZai is a software company specialized in creating high-throughput, low-latency data processing solutions. FeedZai develops a product called "FeedZai Pulse" for continuous event-driven analytics that makes application development easier for end users. It automatically calculates key performance indicators and baselines, showing how current performance differs from previous history, creating timely business intelligence updated to the second. The tool does predictive analytics and trend analysis, displaying data on real-time web-based graphics. In 2010 FeedZai won the European EBN Smart Entrepreneurship Competition, in the Digital Models category, being considered one of the "top-20 smart companies in Europe". The main objective of this seminar/workshop is to explore the topic of large-scale data processing using Complex Event Processing and, in particular, the possible uses of Pulse in the scope of the data processing needs of CERN. Pulse is available as open-source and can be licensed both for non-commercial and commercial applications. FeedZai is interested in exploring possible synergies with CERN in high-volume, low-latency data processing applications. The seminar will be structured in two sessions, the first one being aimed to expose the general scope of FeedZai's activities, and the second focused on Pulse itself: 10:00-11:00 FeedZai and Large Scale Data Processing Introduction to FeedZai FeedZai Pulse and Complex Event Processing Demonstration Use-Cases and Applications Conclusion and Q&A 11:00-11:15 Coffee break 11:15-12:30 FeedZai Pulse Under the Hood A First FeedZai Pulse Application PulseQL overview Defining KPIs and Baselines Conclusion and Q&A About the speakers Nuno Sebastião is the CEO of FeedZai. Having worked for many years for the European Space Agency (ESA), he was responsible for the overall design and development of the Satellite Simulation Infrastructure of the agency. Having left ESA to found FeedZai, Nuno is

  16. The architecture of Newton, a general-purpose dynamics simulator

    NASA Technical Reports Server (NTRS)

    Cremer, James F.; Stewart, A. James

    1989-01-01

    The architecture for Newton, a general-purpose system for simulating the dynamics of complex physical objects, is described. The system automatically formulates and analyzes equations of motion, and performs automatic modification of the system equations when necessitated by changes in kinematic relationships between objects. Impact and temporary contact are handled, although only using simple models. User-directed influence of simulations is achieved using Newton's module, which can be used to experiment with the control of many-degree-of-freedom articulated objects.

  17. Applications of a general thermal/hydraulic simulation tool

    NASA Technical Reports Server (NTRS)

    Cullimore, B. A.

    1989-01-01

    The analytic techniques, sample applications, and development status of a general-purpose computer program called SINDA '85/FLUINT (for systems improved numerical differencing analyzer, 1985 version with fluid integrator), designed for simulating thermal structures and internal fluid systems, are described, with special attention given to the applications of the fluid system capabilities. The underlying assumptions, methodologies, and modeling capabilities of the system are discussed. Sample applications include component-level and system-level simulations. A system-level analysis of a cryogenic storage system is presented.

  18. Generalized Fluid System Simulation Program (GFSSP) Version 6 - General Purpose Thermo-Fluid Network Analysis Software

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok; Leclair, Andre; Moore, Ric; Schallhorn, Paul

    2011-01-01

    GFSSP stands for Generalized Fluid System Simulation Program. It is a general-purpose computer program to compute pressure, temperature, and flow distribution in a flow network. GFSSP calculates pressure, temperature, and concentrations at nodes, and calculates flow rates through branches. It was primarily developed to perform internal flow analysis of a turbopump and transient flow analysis of a propulsion system. GFSSP development started in 1994 with the objective of providing a generalized and easy-to-use flow analysis tool for thermo-fluid systems.

  19. A General Relativistic Magnetohydrodynamic Simulation of Jet Formation

    NASA Astrophysics Data System (ADS)

    Nishikawa, K.-I.; Richardson, G.; Koide, S.; Shibata, K.; Kudoh, T.; Hardee, P.; Fishman, G. J.

    2005-05-01

    We have performed a fully three-dimensional general relativistic magnetohydrodynamic (GRMHD) simulation of jet formation from a thin accretion disk around a Schwarzschild black hole with a free-falling corona. The initial simulation results show that a bipolar jet (velocity ~0.3c) is created, as shown by previous two-dimensional axisymmetric simulations with mirror symmetry at the equator. The three-dimensional simulation ran over 100 light-crossing time units (τ_S = r_S/c, where r_S ≡ 2GM/c²), which is considerably longer than the previous simulations. We show that the jet is initially formed as predicted, owing in part to magnetic pressure from the twisting of the initially uniform magnetic field and from gas pressure associated with shock formation in the region around r = 3 r_S. At later times, the accretion disk becomes thick and the jet fades, resulting in a wind that is ejected from the surface of the thickened (torus-like) disk. It should be noted that no streaming matter from a donor is included at the outer boundary in the simulation (an isolated black hole, not a binary black hole). The wind flows outward with a wider angle than the initial jet. The widening of the jet is consistent with the outward-moving torsional Alfvén waves. This evolution of disk-jet coupling suggests that the jet fades with a thickened accretion disk because of the lack of streaming material from an accompanying star.

  20. A General Mission Independent Simulator (GMIS) and Simulator Control Program (SCP)

    NASA Technical Reports Server (NTRS)

    Baker, Paul L.; Moore, J. Michael; Rosenberger, John

    1994-01-01

    GMIS is a general-purpose simulator for testing ground system software. GMIS can be adapted to any mission to simulate changes in the data state maintained by the mission's computers. GMIS was developed in Code 522 at NASA Goddard Space Flight Center. The acronym GMIS stands for GOTT Mission Independent Simulator, where GOTT is the Ground Operations Technology Testbed. Within GOTT, GMIS is used to provide simulated data to an installation of TPOCC - the Transportable Payload Operations Control Center. TPOCC was developed by Code 510 as a reusable control center. GOTT uses GMIS and TPOCC to test new technology and new operator procedures.

  1. A High-Speed, Event-Driven, Active Pixel Sensor Readout for Photon-Counting Microchannel Plate Detectors

    NASA Technical Reports Server (NTRS)

    Kimble, Randy A.; Pain, Bedabrata; Norton, Timothy J.; Haas, J. Patrick; Oegerle, William R. (Technical Monitor)

    2002-01-01

    Silicon array readouts for microchannel plate intensifiers offer several attractive features. In this class of detector, the electron cloud output of the MCP intensifier is converted to visible light by a phosphor; that light is then fiber-optically coupled to the silicon array. In photon-counting mode, the resulting light splashes on the silicon array are recognized and centroided to fractional pixel accuracy by off-chip electronics. This process can result in very high (MCP-limited) spatial resolution while operating at a modest MCP gain (desirable for dynamic range and long term stability). The principal limitation of intensified CCD systems of this type is their severely limited local dynamic range, as accurate photon counting is achieved only if there are not overlapping event splashes within the frame time of the device. This problem can be ameliorated somewhat by processing events only in pre-selected windows of interest or by using an addressable charge injection device (CID) for the readout array. We are currently pursuing the development of an intriguing alternative readout concept based on using an event-driven CMOS Active Pixel Sensor. APS technology permits the incorporation of discriminator circuitry within each pixel. When coupled with suitable CMOS logic outside the array area, the discriminator circuitry can be used to trigger the readout of small sub-array windows only when and where an event splash has been detected, completely eliminating the local dynamic range problem, while achieving a high global count rate capability and maintaining high spatial resolution. We elaborate on this concept and present our progress toward implementing an event-driven APS readout.

  2. A High-Speed, Event-Driven, Active Pixel Sensor Readout for Photon-Counting Microchannel Plate Detectors

    NASA Technical Reports Server (NTRS)

    Kimble, Randy A.; Pain, B.; Norton, T. J.; Haas, P.; Fisher, Richard R. (Technical Monitor)

    2001-01-01

    Silicon array readouts for microchannel plate intensifiers offer several attractive features. In this class of detector, the electron cloud output of the MCP intensifier is converted to visible light by a phosphor; that light is then fiber-optically coupled to the silicon array. In photon-counting mode, the resulting light splashes on the silicon array are recognized and centroided to fractional pixel accuracy by off-chip electronics. This process can result in very high (MCP-limited) spatial resolution for the readout while operating at a modest MCP gain (desirable for dynamic range and long term stability). The principal limitation of intensified CCD systems of this type is their severely limited local dynamic range, as accurate photon counting is achieved only if there are not overlapping event splashes within the frame time of the device. This problem can be ameliorated somewhat by processing events only in pre-selected windows of interest or by using an addressable charge injection device (CID) for the readout array. We are currently pursuing the development of an intriguing alternative readout concept based on using an event-driven CMOS Active Pixel Sensor. APS technology permits the incorporation of discriminator circuitry within each pixel. When coupled with suitable CMOS logic outside the array area, the discriminator circuitry can be used to trigger the readout of small sub-array windows only when and where an event splash has been detected, completely eliminating the local dynamic range problem, while achieving a high global count rate capability and maintaining high spatial resolution. We elaborate on this concept and present our progress toward implementing an event-driven APS readout.

  3. Generalized directed loop method for quantum Monte Carlo simulations.

    PubMed

    Alet, Fabien; Wessel, Stefan; Troyer, Matthias

    2005-03-01

    Efficient quantum Monte Carlo update schemes called directed loops have recently been proposed, which improve the efficiency of simulations of quantum lattice models. We propose to generalize the detailed balance equations at the local level during the loop construction by accounting for the matrix elements of the operators associated with open world-line segments. Using linear programming techniques to solve the generalized equations, we look for optimal construction schemes for directed loops. This also allows for an extension of the directed loop scheme to general lattice models, such as high-spin or bosonic models. The resulting algorithms are bounce free in larger regions of parameter space than the original directed loop algorithm. The generalized directed loop method is applied to the magnetization process of spin chains in order to compare its efficiency to that of previous directed loop schemes. In contrast to general expectations, we find that minimizing bounces alone does not always lead to more efficient algorithms in terms of autocorrelations of physical observables, because of the nonuniqueness of the bounce-free solutions. We therefore propose different general strategies to further minimize autocorrelations, which can be used as supplementary requirements in any directed loop scheme. We show by calculating autocorrelation times for different observables that such strategies indeed lead to improved efficiency; however, we find that the optimal strategy depends not only on the model parameters but also on the observable of interest. PMID:15903632

  4. Architecture for event-driven real-time distributed computer systems

    SciTech Connect

    McDonald, J.E.

    1983-01-01

    The author describes a proposed preliminary system design that includes hardware and software for real-time distributed computer systems. This new system is appropriate as a digital avionics architecture or as a real-time multi-computer simulation system using a mixture of computers, mainframes to micros. The hardware contains a network that employs high-speed serial data transmission concepts in emulating a multicomputer shared memory system. The distributed multicomputer system then capitalizes on the attributes of the hardware by structuring the real-time software as the data-driven input-output system. The real-time software executes only on demand and not synchronously as in conventional real-time systems. Background information concerning multi-computer systems using serial and parallel data transmission networks is given. This information supports the design rationale of the proposed hardware system which is basically a technology blend of conventional serial and parallel transmission schemes. 2 references.

  5. Comparison of Cenozoic atmospheric general circulation model simulations

    SciTech Connect

    Barron, E.J.

    1985-01-01

    Paleocene, Eocene, Miocene, and present-day (with polar ice) geography are specified as the lower boundary condition in a mean annual, energy-balance ocean version of the Community Climate Model (CCM), a spectral General Circulation Model of the Atmosphere developed at the National Center for Atmospheric Research. This version of the CCM has a 4.5° latitudinal and 7.5° longitudinal resolution with 9 vertical levels and includes predictions for pressure, winds, temperature, evaporation, precipitation, cloud cover, snow cover, and sea ice. The model simulations indicate little geographically induced climate change from the Paleocene to the Miocene, but substantial differences between the Miocene and the present simulations. The simulated climate differences between the Miocene and present day include: 1) cooler present temperatures (2°C in the tropics, 15-35°C in polar latitudes), with the exception of warmer subtropical desert conditions, 2) a generally weaker present hydrologic cycle, with greater subtropical aridity, 3) strengthened present-day westerly jets with a slight poleward displacement, and 4) the largest regional climate changes associated with Antarctica. The results of the climate model sensitivity experiments have considerable implications for understanding how geography influences climate.

  6. Automatic CT simulation optimization for radiation therapy: A general strategy

    SciTech Connect

    Li, Hua Chen, Hsin-Chen; Tan, Jun; Gay, Hiram; Michalski, Jeff M.; Mutic, Sasa; Yu, Lifeng; Anastasio, Mark A.; Low, Daniel A.

    2014-03-15

    Purpose: In radiation therapy, x-ray computed tomography (CT) simulation protocol specifications should be driven by the treatment planning requirements in lieu of duplicating diagnostic CT screening protocols. The purpose of this study was to develop a general strategy that allows for automatically, prospectively, and objectively determining the optimal patient-specific CT simulation protocols based on radiation-therapy goals, namely, maintenance of contouring quality and integrity while minimizing patient CT simulation dose. Methods: The authors proposed a general prediction strategy that provides automatic optimal CT simulation protocol selection as a function of patient size and treatment planning task. The optimal protocol is the one that delivers the minimum dose required to provide a CT simulation scan that yields accurate contours. Accurate treatment plans depend on accurate contours in order to conform the dose to actual tumor and normal organ positions. An image quality index, defined to characterize how simulation scan quality affects contour delineation, was developed and used to benchmark the contouring accuracy and treatment plan quality within the predication strategy. A clinical workflow was developed to select the optimal CT simulation protocols incorporating patient size, target delineation, and radiation dose efficiency. An experimental study using an anthropomorphic pelvis phantom with added-bolus layers was used to demonstrate how the proposed prediction strategy could be implemented and how the optimal CT simulation protocols could be selected for prostate cancer patients based on patient size and treatment planning task. Clinical IMRT prostate treatment plans for seven CT scans with varied image quality indices were separately optimized and compared to verify the trace of target and organ dosimetry coverage. Results: Based on the phantom study, the optimal image quality index for accurate manual prostate contouring was 4.4. The optimal tube

  7. A General Relativistic Magnetohydrodynamic Simulation of Jet Formation

    NASA Technical Reports Server (NTRS)

    Nishikawa, K.-I.; Richardson, G.; Koide, S.; Shibata, K.; Kudoh, T.; Hardee, P.; Fishman, G. J.

    2005-01-01

    We have performed a fully three-dimensional general relativistic magnetohydrodynamic (GRMHD) simulation of jet formation from a thin accretion disk around a Schwarzschild black hole with a free-falling corona. The initial simulation results show that a bipolar jet (velocity approximately 0.3c) is created, as shown by previous two-dimensional axisymmetric simulations with mirror symmetry at the equator. The three-dimensional simulation ran over 100 light-crossing time units (T_s = r_s/c, where r_s = 2GM/c²), which is considerably longer than the previous simulations. We show that the jet is initially formed as predicted, owing in part to magnetic pressure from the twisting of the initially uniform magnetic field and from gas pressure associated with shock formation in the region around r = 3 r_s. At later times, the accretion disk becomes thick and the jet fades, resulting in a wind that is ejected from the surface of the thickened (torus-like) disk. It should be noted that no streaming matter from a donor is included at the outer boundary in the simulation (an isolated black hole, not a binary black hole). The wind flows outward with a wider angle than the initial jet. The widening of the jet is consistent with the outward-moving torsional Alfvén waves. This evolution of disk-jet coupling suggests that the jet fades with a thickened accretion disk because of the lack of streaming material from an accompanying star.

  8. Localized and generalized simulated wear of resin composites.

    PubMed

    Barkmeier, W W; Takamizawa, T; Erickson, R L; Tsujimoto, A; Latta, M; Miyazaki, M

    2015-01-01

    A laboratory study was conducted to examine the wear of resin composite materials using both a localized and generalized wear simulation model. Twenty specimens each of seven resin composites (Esthet•X HD [HD], Filtek Supreme Ultra [SU], Herculite Ultra [HU], SonicFill [SF], Tetric EvoCeram Bulk Fill [TB], Venus Diamond [VD], and Z100 Restorative [Z]) were subjected to a wear challenge of 400,000 cycles for both localized and generalized wear in a Leinfelder-Suzuki wear simulator (Alabama machine). The materials were placed in custom cylinder-shaped stainless steel fixtures. A stainless steel ball bearing (r=2.387 mm) was used as the antagonist for localized wear, and a stainless steel, cylindrical antagonist with a flat tip was used for generalized wear. A water slurry of polymethylmethacrylate (PMMA) beads was used as the abrasive media. A noncontact profilometer (Proscan 2100) with Proscan software was used to digitize the surface contours of the pretest and posttest specimens. AnSur 3D software was used for wear assessment. For localized testing, maximum facet depth (μm) and volume loss (mm(3)) were used to compare the materials. The mean depth of the facet surface (μm) and volume loss (mm(3)) were used for comparison of the generalized wear specimens. A one-way analysis of variance (ANOVA) and Tukey post hoc test were used for data analysis of volume loss for both localized and generalized wear, maximum facet depth for localized wear, and mean depth of the facet for generalized wear. The results for localized wear simulation were as follows [mean (standard deviation)]: maximum facet depth (μm)--Z, 59.5 (14.7); HU, 99.3 (16.3); SU, 102.8 (13.8); HD, 110.2 (13.3); VD, 114.0 (10.3); TB, 125.5 (12.1); SF, 195.9 (16.9); volume loss (mm(3))--Z, 0.013 (0.002); SU, 0.026 (0.006); HU, 0.043 (0.008); VD, 0.057 (0.009); HD, 0.058 (0.014); TB, 0.061 (0.010); SF, 0.135 (0.024). Generalized wear simulation results were as follows: mean depth of facet (μm)--Z, 9.3 (3

  9. The Speedster-EXD- A New Event-Driven Hybrid CMOS X-ray Detector

    NASA Astrophysics Data System (ADS)

    Griffith, Christopher V.; Falcone, Abraham D.; Prieskorn, Zachary R.; Burrows, David N.

    2016-01-01

    The Speedster-EXD is a new 64×64 pixel, 40-μm pixel pitch, 100-μm depletion depth hybrid CMOS x-ray detector with the capability of reading out only those pixels containing event charge, thus enabling fast effective frame rates. A global charge threshold can be specified, and pixels containing charge above this threshold are flagged and read out. The Speedster detector has also been designed with other advanced in-pixel features to improve performance, including a low-noise, high-gain capacitive transimpedance amplifier that eliminates interpixel capacitance crosstalk (IPC), and in-pixel correlated double sampling subtraction to reduce reset noise. We measure the best energy resolution on the Speedster-EXD detector to be 206 eV (3.5%) at 5.89 keV and 172 eV (10.0%) at 1.49 keV. The average IPC to the four adjacent pixels is measured to be 0.25%±0.2% (i.e., consistent with zero). The pixel-to-pixel gain variation is measured to be 0.80%±0.03%, and a Monte Carlo simulation is applied to better characterize the contributions to the energy resolution.
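    The event-driven readout concept described above can be illustrated with a short sketch: only pixels whose charge exceeds a global threshold are flagged and read out, instead of clocking out the full frame. All numbers below (noise level, threshold, injected charges) are hypothetical; this is not the Speedster-EXD readout implementation.

      # Toy illustration of threshold-based, event-driven pixel readout (hypothetical values).
      import numpy as np

      rng = np.random.default_rng(1)
      frame = rng.normal(0.0, 5.0, size=(64, 64))     # read-noise-only baseline frame (ADU)
      frame[10, 20] += 300.0                          # injected X-ray event
      frame[40, 41] += 150.0                          # second injected event

      threshold = 50.0                                # global charge threshold (ADU)
      rows, cols = np.nonzero(frame > threshold)      # flag only pixels above threshold
      events = [(int(r), int(c), float(frame[r, c])) for r, c in zip(rows, cols)]
      print(f"read out {len(events)} of {frame.size} pixels:", events)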

  10. Event-driven Monte Carlo: Exact dynamics at all time scales for discrete-variable models

    NASA Astrophysics Data System (ADS)

    Mendoza-Coto, Alejandro; Díaz-Méndez, Rogelio; Pupillo, Guido

    2016-06-01

    We present an algorithm for the simulation of the exact real-time dynamics of classical many-body systems with discrete energy levels. In the same spirit as kinetic Monte Carlo methods, a stochastic solution of the master equation is found, with no need to define any other phase-space construction. However, unlike existing methods, the present algorithm does not assume any particular statistical distribution to perform moves or to advance the time, and thus is a unique tool for the numerical exploration of fast and ultra-fast dynamical regimes. By decomposing the problem into a set of two-level subsystems, we find a natural variable step size that is well defined by the normalization condition of the transition probabilities between the levels. We successfully test the algorithm with known exact solutions for non-equilibrium dynamics and equilibrium thermodynamical properties of Ising-spin models in one and two dimensions, and compare to standard implementations of kinetic Monte Carlo methods. The present algorithm is directly applicable to the study of the real-time dynamics of a large class of classical Markovian chains, and particularly to short-time situations where the exact evolution is relevant.
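    For orientation, the sketch below shows a standard rejection-free kinetic Monte Carlo step of the kind the abstract compares against (a Gillespie-type update for a small Ising chain with exponentially distributed waiting times); it is not the authors' exact event-driven algorithm, which avoids assuming that waiting-time distribution.

      # Baseline rejection-free kinetic Monte Carlo for a periodic 1D Ising chain (illustrative).
      import numpy as np

      def kmc_step(spins, beta, coupling, rng):
          """Advance the chain by one spin-flip event and return the elapsed time."""
          d_energy = 2.0 * coupling * spins * (np.roll(spins, 1) + np.roll(spins, -1))
          rates = np.minimum(1.0, np.exp(-beta * d_energy))   # Metropolis flip rates
          total = rates.sum()
          i = rng.choice(len(spins), p=rates / total)         # pick the flipping spin
          spins[i] *= -1
          return rng.exponential(1.0 / total)                 # the assumption the paper relaxes

      rng = np.random.default_rng(2)
      spins = rng.choice([-1, 1], size=64)
      t = 0.0
      for _ in range(1000):
          t += kmc_step(spins, beta=0.5, coupling=1.0, rng=rng)
      print("simulated time:", t, "magnetization:", spins.mean())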

  11. Solute transport processes in flow-event-driven stream-aquifer interaction

    NASA Astrophysics Data System (ADS)

    Xie, Yueqing; Cook, Peter G.; Simmons, Craig T.

    2016-07-01

    The interaction between streams and groundwater controls key features of the stream hydrograph and chemograph. Since surface runoff is usually less saline than groundwater, flow events are usually accompanied by declines in stream salinity. In this paper, we use numerical modelling to show that, at any particular monitoring location: (i) the increase in stream stage associated with a flow event will precede the decrease in solute concentration (arrival time lag for solutes); and (ii) the decrease in stream stage following the flow peak will usually precede the subsequent return (increase) in solute concentration (return time lag). Both arrival time lag and return time lag increase with increasing wave duration. However, arrival time lag decreases with increasing wave amplitude, whereas return time lag increases. Furthermore, while arrival time lag is most sensitive to parameters that control river velocity (channel roughness and stream slope), return time lag is most sensitive to groundwater parameters (aquifer hydraulic conductivity, recharge rate, and dispersivity). Additionally, the absolute magnitude of the decrease in river concentration is sensitive to both river and groundwater parameters. Our simulations also show that in-stream mixing is dominated by wave propagation and bank storage processes, and in-stream dispersion has a relatively minor effect on solute concentrations. This has important implications for the spreading of contaminants released to streams. Our work also demonstrates that a high contribution of pre-event water (or groundwater) within the flow hydrograph can be caused by the combination of in-stream and bank storage exchange processes, and does not require transport of pre-event water through the catchment.

  12. An event-driven approach for studying gene block evolution in bacteria

    PubMed Central

    Ream, David C.; Bankapur, Asma R.; Friedberg, Iddo

    2015-01-01

    Motivation: Gene blocks are genes co-located on the chromosome. In many cases, gene blocks are conserved between bacterial species, sometimes as operons, when genes are co-transcribed. The conservation is rarely absolute: gene loss, gain, duplication, block splitting and block fusion are frequently observed. An open question in bacterial molecular evolution is that of the formation and breakup of gene blocks, for which several models have been proposed. These models, however, are not generally applicable to all types of gene blocks, and consequently cannot be used to broadly compare and study gene block evolution. To address this problem, we introduce an event-based method for tracking gene block evolution in bacteria. Results: We show here that the evolution of gene blocks in proteobacteria can be described by a small set of events. Those include the insertion of genes into, or the splitting of genes out of, a gene block, gene loss, and gene duplication. We show how the event-based method of gene block evolution allows us to determine the evolutionary rate and may be used to trace the ancestral states of their formation. We conclude that the event-based method can be used to help us understand the formation of these important bacterial genomic structures. Availability and implementation: The software is available under GPLv3 license on http://github.com/reamdc1/gene_block_evolution.git. Supplementary online material: http://iddo-friedberg.net/operon-evolution Contact: i.friedberg@miamioh.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25717195
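    To make the event vocabulary concrete, the following toy sketch counts gene gain, loss, duplication, and block splitting between an ancestral and a derived gene block; it illustrates the event types only, is not the published implementation linked above, and uses hypothetical gene names.

      # Toy event counting between two gene blocks (illustrative, not the paper's method).
      def block_events(ancestral, derived):
          """Each block is a list of sub-blocks (tuples of gene names assumed co-located)."""
          anc = {g for sub in ancestral for g in sub}
          der = [g for sub in derived for g in sub]
          return {
              "loss": len(anc - set(der)),                            # genes that disappeared
              "gain": len(set(der) - anc),                            # genes inserted into the block
              "duplication": sum(der.count(g) - 1 for g in set(der)), # extra copies of any gene
              "split": max(len(derived) - len(ancestral), 0),         # block broke into more pieces
          }

      ancestral = [("paaA", "paaB", "paaC", "paaD")]
      derived = [("paaA", "paaB"), ("paaD", "paaD")]
      print(block_events(ancestral, derived))   # {'loss': 1, 'gain': 0, 'duplication': 1, 'split': 1}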

  13. Characterization and development of an event-driven hybrid CMOS x-ray detector

    NASA Astrophysics Data System (ADS)

    Griffith, Christopher

    2015-06-01

    Hybrid CMOS detectors (HCDs) have provided great benefit to the infrared and optical fields of astronomy, and they are poised to do the same for X-ray astronomy. Infrared HCDs have already flown on the Hubble Space Telescope and the Wide-Field Infrared Survey Explorer (WISE) mission and are slated to fly on the James Webb Space Telescope (JWST). Hybrid CMOS X-ray detectors offer low susceptibility to radiation damage, low power consumption, and fast readout time to avoid pile-up. The fast readout time is necessary for future high-throughput X-ray missions. The Speedster-EXD X-ray HCD presented in this dissertation offers new in-pixel features and reduces known noise sources seen on previous generation HCDs. The Speedster-EXD detector makes a great step forward in the development of these detectors for future space missions. This dissertation begins with an overview of future X-ray space mission concepts and their detector requirements. The background on the physics of semiconductor devices and an explanation of the detection of X-rays with these devices will be discussed, followed by a discussion of CCDs and CMOS detectors. Next, hybrid CMOS X-ray detectors will be explained, including their advantages and disadvantages. The Speedster-EXD detector and its new features will be outlined, including its ability to read out only those pixels that contain X-ray events. Test stand design and construction for the Speedster-EXD detector are outlined, and the characterization of each parameter on two Speedster-EXD detectors is detailed, including read noise, dark current, interpixel capacitance crosstalk (IPC), and energy resolution. Gain variation is also characterized, and a Monte Carlo simulation of its impact on energy resolution is described. This analysis shows that its effect can be successfully nullified with proper calibration, which would be important for a flight mission. Appendix B contains a study of the extreme tidal disruption event, Swift J1644+57, to search for

  14. A Distributed Laboratory for Event-Driven Coastal Prediction and Hazard Planning

    NASA Astrophysics Data System (ADS)

    Bogden, P.; Allen, G.; MacLaren, J.; Creager, G. J.; Flournoy, L.; Sheng, Y. P.; Graber, H.; Graves, S.; Conover, H.; Luettich, R.; Perrie, W.; Ramakrishnan, L.; Reed, D. A.; Wang, H. V.

    2006-12-01

    The 2005 Atlantic hurricane season was the most active in recorded history. Collectively, 2005 hurricanes caused more than 2,280 deaths and record damages of over 100 billion dollars. Of the storms that made landfall, Dennis, Emily, Katrina, Rita, and Wilma caused most of the destruction. Accurate predictions of storm-driven surge, wave height, and inundation can save lives and help keep recovery costs down, provided the information gets to emergency response managers in time. The information must be available well in advance of landfall so that responders can weigh the costs of unnecessary evacuation against the costs of inadequate preparation. The SURA Coastal Ocean Observing and Prediction (SCOOP) Program is a multi-institution collaboration implementing a modular, distributed service-oriented architecture for real time prediction and visualization of the impacts of extreme atmospheric events. The modular infrastructure enables real-time prediction of multi-scale, multi-model, dynamic, data-driven applications. SURA institutions are working together to create a virtual and distributed laboratory integrating coastal models, simulation data, and observations with computational resources and high speed networks. The loosely coupled architecture allows teams of computer and coastal scientists at multiple institutions to innovate complex system components that are interconnected with relatively stable interfaces. The operational system standardizes at the interface level to enable substantial innovation by complementary communities of coastal and computer scientists. This architectural philosophy solves a long-standing problem associated with the transition from research to operations. The SCOOP Program thereby implements a prototype laboratory consistent with the vision of a national, multi-agency initiative called the Integrated Ocean Observing System (IOOS). Several service-oriented components of the SCOOP enterprise architecture have already been designed and

  15. Data Albums: An Event Driven Search, Aggregation and Curation Tool for Earth Science

    NASA Technical Reports Server (NTRS)

    Ramachandran, Rahul; Kulkarni, Ajinkya; Maskey, Manil; Bakare, Rohan; Basyal, Sabin; Li, Xiang; Flynn, Shannon

    2014-01-01

    One of the largest continuing challenges in any Earth science investigation is the discovery and access of useful science content from the increasingly large volumes of Earth science data and related information available. Approaches used in Earth science research such as case study analysis and climatology studies involve discovering and gathering diverse data sets and information to support the research goals. Research based on case studies involves a detailed description of specific weather events using data from different sources, to characterize physical processes in play for a specific event. Climatology-based research tends to focus on the representativeness of a given event, by studying the characteristics and distribution of a large number of events. This allows researchers to generalize characteristics such as spatio-temporal distribution, intensity, annual cycle, duration, etc. To gather relevant data and information for case studies and climatology analysis is both tedious and time consuming. Current Earth science data systems are designed with the assumption that researchers access data primarily by instrument or geophysical parameter. Those who know exactly the datasets of interest can obtain the specific files they need using these systems. However, in cases where researchers are interested in studying a significant event, they have to manually assemble a variety of datasets relevant to it by searching the different distributed data systems. In these cases, a search process needs to be organized around the event rather than observing instruments. In addition, the existing data systems assume users have sufficient knowledge regarding the domain vocabulary to be able to effectively utilize their catalogs. These systems do not support new or interdisciplinary researchers who may be unfamiliar with the domain terminology. This paper presents a specialized search, aggregation and curation tool for Earth science to address these existing

  16. A generalized Poisson solver for first-principles device simulations

    NASA Astrophysics Data System (ADS)

    Bani-Hashemian, Mohammad Hossein; Brück, Sascha; Luisier, Mathieu; VandeVondele, Joost

    2016-01-01

    Electronic structure calculations of atomistic systems based on density functional theory involve solving the Poisson equation. In this paper, we present a plane-wave based algorithm for solving the generalized Poisson equation subject to periodic or homogeneous Neumann conditions on the boundaries of the simulation cell and Dirichlet type conditions imposed at arbitrary subdomains. In this way, source, drain, and gate voltages can be imposed across atomistic models of electronic devices. Dirichlet conditions are enforced as constraints in a variational framework giving rise to a saddle point problem. The resulting system of equations is then solved using a stationary iterative method in which the generalized Poisson operator is preconditioned with the standard Laplace operator. The solver can make use of any sufficiently smooth function modelling the dielectric constant, including density dependent dielectric continuum models. For all the boundary conditions, consistent derivatives are available and molecular dynamics simulations can be performed. The convergence behaviour of the scheme is investigated and its capabilities are demonstrated.
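    The core numerical idea above (a stationary iteration in which the generalized Poisson operator is preconditioned by the standard Laplace operator) can be sketched in one dimension as follows. The sketch assumes fully periodic boundaries and omits the Dirichlet constraints and saddle-point structure described in the abstract, so it only illustrates the preconditioning strategy, not the paper's full solver.

      # 1D sketch: damped Richardson iteration for div(eps grad phi) = -rho,
      # preconditioned by a spectrally inverted constant-coefficient Laplacian.
      import numpy as np

      n, length = 256, 1.0
      dx = length / n
      x = np.arange(n) * dx
      k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)

      eps = 2.0 + np.cos(2.0 * np.pi * x)        # smooth position-dependent dielectric (assumed)
      rho = np.sin(2.0 * np.pi * x)              # mean-zero source term (assumed)

      def gen_poisson(phi):
          """Apply div(eps grad phi) with a compact flux-form stencil on the periodic grid."""
          eps_half = 0.5 * (eps + np.roll(eps, -1))          # dielectric at cell faces
          flux = eps_half * (np.roll(phi, -1) - phi) / dx
          return (flux - np.roll(flux, 1)) / dx

      def inv_laplace(r):
          """Invert the standard periodic Laplacian spectrally (zero-mean solution)."""
          rk = np.fft.fft(r)
          pk = np.zeros_like(rk)
          nonzero = k != 0.0
          pk[nonzero] = rk[nonzero] / (-(k[nonzero] ** 2))
          return np.fft.ifft(pk).real

      phi = np.zeros(n)
      omega = 0.5                                # damping ~ 2 / (eps_min + eps_max)
      for _ in range(200):
          phi += omega * inv_laplace(-rho - gen_poisson(phi))
      print("residual norm:", np.linalg.norm(-rho - gen_poisson(phi)))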

  17. A generalized Poisson solver for first-principles device simulations.

    PubMed

    Bani-Hashemian, Mohammad Hossein; Brück, Sascha; Luisier, Mathieu; VandeVondele, Joost

    2016-01-28

    Electronic structure calculations of atomistic systems based on density functional theory involve solving the Poisson equation. In this paper, we present a plane-wave based algorithm for solving the generalized Poisson equation subject to periodic or homogeneous Neumann conditions on the boundaries of the simulation cell and Dirichlet type conditions imposed at arbitrary subdomains. In this way, source, drain, and gate voltages can be imposed across atomistic models of electronic devices. Dirichlet conditions are enforced as constraints in a variational framework giving rise to a saddle point problem. The resulting system of equations is then solved using a stationary iterative method in which the generalized Poisson operator is preconditioned with the standard Laplace operator. The solver can make use of any sufficiently smooth function modelling the dielectric constant, including density dependent dielectric continuum models. For all the boundary conditions, consistent derivatives are available and molecular dynamics simulations can be performed. The convergence behaviour of the scheme is investigated and its capabilities are demonstrated. PMID:26827208

  18. Better Space Construction Decisions by Instructional Program Simulation Utilizing the Generalized Academic Simulation Programs.

    ERIC Educational Resources Information Center

    Apker, Wesley

    This school district utilized the generalized academic simulation programs (GASP) to assist in making decisions regarding the kinds of facilities that should be constructed at Pilchuck Senior High School. Modular scheduling was one of the basic educational parameters used in determining the number and type of facilities needed. The objectives of…

  19. A General Simulator for Reaction-Based Biogeochemical Processes

    SciTech Connect

    Fang, Yilin; Yabusaki, Steven B.; Yeh, George

    2006-02-01

    As more complex biogeochemical situations are being investigated (e.g., evolving reactivity, passivation of reactive surfaces, dissolution of sorbates), there is a growing need for biogeochemical simulators to flexibly and facilely address new reaction forms and rate laws. This paper presents an approach that accommodates this need to efficiently simulate general biogeochemical processes, while insulating the user from additional code development. The approach allows for the automatic extraction of fundamental reaction stoichiometry and thermodynamics from a standard chemistry database, and the symbolic entry of arbitrarily complex user-specified reaction forms, rate laws, and equilibria. The user-specified equilibrium and kinetic reactions (i.e., reactions not defined in the format of the standardized database) are interpreted by the Maple symbolic mathematical software package. FORTRAN 90 code is then generated by Maple for (1) the analytical Jacobian matrix (if preferred over the numerical Jacobian matrix) used in the Newton-Raphson solution procedure, and (2) the residual functions for user-specified equilibrium expressions and rate laws. Matrix diagonalization eliminates the need to conceptualize the system of reactions as a tableau, while identifying a minimum rank set of basis species with enhanced numerical convergence properties. The newly generated code, which is designed to operate in the BIOGEOCHEM biogeochemical simulator, is then compiled and linked into the BIOGEOCHEM executable. With these features, users can avoid recoding the simulator to accept new equilibrium expressions or kinetic rate laws, while still taking full advantage of the stoichiometry and thermodynamics provided by an existing chemical database. Thus, the approach introduces efficiencies in the specification of biogeochemical reaction networks and eliminates opportunities for mistakes in preparing input files and coding errors. Test problems are used to demonstrate the features of

  20. Event-driven, pattern-based methodology for cost-effective development of standardized personal health devices.

    PubMed

    Martínez-Espronceda, Miguel; Trigo, Jesús D; Led, Santiago; Barrón-González, H Gilberto; Redondo, Javier; Baquero, Alfonso; Serrano, Luis

    2014-11-01

    Experiences applying standards in personal health devices (PHDs) show an inherent trade-off between interoperability and costs (in terms of processing load and development time). Therefore, reducing hardware and software costs as well as time-to-market is crucial for standards adoption. The ISO/IEEE11073 PHD family of standards (also referred to as X73PHD) provides interoperable communication between PHDs and aggregators. Nevertheless, the responsibility of achieving inexpensive implementations of X73PHD in limited resource microcontrollers falls directly on the developer. Hence, the authors previously presented a methodology based on patterns to implement X73-compliant PHDs into devices with low-voltage low-power constraints. That version was based on multitasking, which required additional features and resources. This paper therefore presents an event-driven evolution of the patterns-based methodology for cost-effective development of standardized PHDs. The results of comparing between the two versions showed that the mean values of decrease in memory consumption and cycles of latency are 11.59% and 45.95%, respectively. In addition, several enhancements in terms of cost-effectiveness and development time can be derived from the new version of the methodology. Therefore, the new approach could help in producing cost-effective X73-compliant PHDs, which in turn could foster the adoption of standards. PMID:25123101

  1. Event driven executive

    NASA Technical Reports Server (NTRS)

    Tulpule, Bhalchandra R. (Inventor); Collins, Robert E. (Inventor); Cheetham, John (Inventor); Cornwell, Smith (Inventor)

    1990-01-01

    Tasks may be planned for execution on a single processor or split up by the designer for execution among a plurality of signal processors. The tasks are modeled using a design aid called a precedence graph, from which a dependency table and a prerequisite table are established for reference within each processor. During execution, at the completion of a given task, an end of task interrupt is provided from any processor which has completed a task to any and all other processors, including itself, in which completion of that task is a prerequisite for commencement of any dependent tasks. The relevant updated data may be transferred by the processor either before or after signalling task completion to the processors needing the updated data prior to commencing execution of the dependent tasks. Coherency may be ensured, however, by sending the data before the interrupt. When the end of task interrupt is received in a processor, its dependency table is consulted to determine those tasks dependent upon completion of the task which has just been signalled as completed, and task dependency signals indicative thereof are provided and stored in a current status list of a prerequisite table. The current status of all current prerequisites is compared to the complete prerequisites listed for all affected tasks, and those tasks for which the comparison indicates that all prerequisites have been met are queued for execution in a selected order.
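    The bookkeeping described above (a dependency table consulted on each end-of-task interrupt, and a prerequisite table compared against a current status list) can be sketched as follows. The task names and table contents are hypothetical, and a real multiprocessor implementation would dispatch tasks to separate processors rather than run them in a single loop.

      # Hedged sketch of dependency/prerequisite bookkeeping; illustrative only.
      from collections import deque

      dependency = {"A": ["B", "C"], "B": ["D"], "C": ["D"]}      # task -> dependent tasks
      prerequisite = {"B": {"A"}, "C": {"A"}, "D": {"B", "C"}}    # task -> complete prerequisite set
      completed = set()                                           # current status list
      ready = deque(["A"])                                        # tasks with no prerequisites

      def end_of_task_interrupt(task):
          """Queue any dependent task whose prerequisites are now all met."""
          completed.add(task)
          for dependent in dependency.get(task, []):
              if prerequisite[dependent] <= completed and dependent not in completed:
                  ready.append(dependent)

      while ready:
          task = ready.popleft()
          print("executing", task)              # stand-in for dispatch to a signal processor
          end_of_task_interrupt(task)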

  2. MPSim: A Massively Parallel General Simulation Program for Materials

    NASA Astrophysics Data System (ADS)

    Iotov, Mihail; Gao, Guanghua; Vaidehi, Nagarajan; Cagin, Tahir; Goddard, William A., III

    1997-08-01

    In this talk, we describe a general purpose Massively Parallel Simulation (MPSim) program used for computational materials science and the life sciences. We also present scaling aspects of the program along with several case studies. The program incorporates the highly efficient CMM method to accurately calculate the interactions. For studying bulk materials, the program uses the Reduced CMM to account for infinite range sums. The software embodies various advanced molecular dynamics algorithms and energy and structure optimization techniques, with a set of analysis tools suitable for large scale structures. Applications of the program range from amorphous polymers, liquid-polymer interfaces, large viruses, million-atom clusters, and surfaces to gas diffusion in polymers. The program was originally developed on the KSR in an object-oriented fashion and has been ported to the SGI-PC and HP-Exemplar. The message-passing version was originally implemented on the Intel Paragon using NX, then MPI, and was later tested on Cray T3D and IBM SP2 platforms.

  3. Amyloid oligomer structure characterization from simulations: A general method

    SciTech Connect

    Nguyen, Phuong H.; Li, Mai Suan

    2014-03-07

    Amyloid oligomers and plaques are composed of multiple chemically identical proteins. Therefore, one of the first fundamental problems in the characterization of structures from simulations is the treatment of the degeneracy, i.e., the permutation of the molecules. Second, the intramolecular and intermolecular degrees of freedom of the various molecules must be taken into account. Currently, the well-known dihedral principal component analysis method only considers the intramolecular degrees of freedom, and other methods employing collective variables can only describe intermolecular degrees of freedom at the global level. With this in mind, we propose a general method that identifies all the structures accurately. The basic idea is that the intramolecular and intermolecular states are described in terms of combinations of single-molecule and double-molecule states, respectively, and the overall structures of oligomers are the product basis of the intramolecular and intermolecular states. This way, the degeneracy is automatically avoided. The method is illustrated on the conformational ensemble of the tetramer of the Alzheimer's peptide Aβ{sub 9−40}, resulting from two atomistic molecular dynamics simulations in explicit solvent, each of 200 ns, starting from two distinct structures.

  4. Magnetohydrodynamical simulations of a deep tidal disruption in general relativity

    NASA Astrophysics Data System (ADS)

    Sądowski, Aleksander; Tejeda, Emilio; Gafton, Emanuel; Rosswog, Stephan; Abarca, David

    2016-06-01

    We perform hydro- and magnetohydrodynamical general-relativistic simulations of a tidal disruption of a 0.1 M⊙ red dwarf approaching a 105 M⊙ non-rotating massive black hole on a close (impact parameter β = 10) elliptical (eccentricity e = 0.97) orbit. We track the debris self-interaction, circularization and the accompanying accretion through the black hole horizon. We find that the relativistic precession leads to the formation of a self-crossing shock. The dissipated kinetic energy heats up the incoming debris and efficiently generates a quasi-spherical outflow. The self-interaction is modulated because of the feedback exerted by the flow on itself. The debris quickly forms a thick, almost marginally bound disc that remains turbulent for many orbital periods. Initially, the accretion through the black hole horizon results from the self-interaction, while in the later stages it is dominated by the debris originally ejected in the shocked region, as it gradually falls back towards the hole. The effective viscosity in the debris disc stems from the original hydrodynamical turbulence, which dominates over the magnetic component. The radiative efficiency is very low because of low energetics of the gas crossing the horizon and large optical depth that results in photon trapping. Although the parameters of the simulated tidal disruption are probably not representative of most observed events, it is possible to extrapolate some of its properties towards more common configurations.

  5. Generalized Fluid System Simulation Program, Version 6.0

    NASA Technical Reports Server (NTRS)

    Majumdar, A. K.; LeClair, A. C.; Moore, A.; Schallhorn, P. A.

    2013-01-01

    The Generalized Fluid System Simulation Program (GFSSP) is a finite-volume based general-purpose computer program for analyzing steady state and time-dependent flow rates, pressures, temperatures, and concentrations in a complex flow network. The program is capable of modeling real fluids with phase changes, compressibility, mixture thermodynamics, conjugate heat transfer between solid and fluid, fluid transients, pumps, compressors and external body forces such as gravity and centrifugal. The thermo-fluid system to be analyzed is discretized into nodes, branches, and conductors. The scalar properties such as pressure, temperature, and concentrations are calculated at nodes. Mass flow rates and heat transfer rates are computed in branches and conductors. The graphical user interface allows users to build their models using the 'point, drag, and click' method; the users can also run their models and post-process the results in the same environment. The integrated fluid library supplies thermodynamic and thermo-physical properties of 36 fluids, and 24 different resistance/source options are provided for modeling momentum sources or sinks in the branches. This Technical Memorandum illustrates the application and verification of the code through 25 demonstrated example problems.
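    The node-and-branch discretization described above can be illustrated with a heavily simplified, linearized sketch: boundary nodes hold fixed pressures, each internal node enforces mass conservation, and each branch carries a flow proportional to its pressure difference. The resistances and pressures below are hypothetical, and GFSSP itself solves the full nonlinear equations rather than this linear toy system.

      # Toy linear node/branch network: two branches in series between fixed-pressure nodes.
      import numpy as np

      branches = [(0, 1, 2.0e3), (1, 2, 3.0e3)]    # (upstream node, downstream node, resistance)
      fixed = {0: 100.0e3, 2: 20.0e3}              # boundary pressures (Pa)
      internal = [1]
      index = {node: i for i, node in enumerate(internal)}

      # Assemble sum_j (P_i - P_j) / R_ij = 0 for every internal node i.
      a = np.zeros((len(internal), len(internal)))
      b = np.zeros(len(internal))
      for up, dn, res in branches:
          for this, other in ((up, dn), (dn, up)):
              if this in index:
                  a[index[this], index[this]] += 1.0 / res
                  if other in index:
                      a[index[this], index[other]] -= 1.0 / res
                  else:
                      b[index[this]] += fixed[other] / res

      pressure = dict(fixed)
      pressure[1] = float(np.linalg.solve(a, b)[0])
      flows = [(pressure[up] - pressure[dn]) / res for up, dn, res in branches]
      print("node 1 pressure (Pa):", pressure[1], "branch flows:", flows)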

  6. Generalized Fluid System Simulation Program (GFSSP) - Version 6

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok; LeClair, Andre; Moore, Ric; Schallhorn, Paul

    2015-01-01

    The Generalized Fluid System Simulation Program (GFSSP) is a finite-volume based general-purpose computer program for analyzing steady state and time-dependent flow rates, pressures, temperatures, and concentrations in a complex flow network. The program is capable of modeling real fluids with phase changes, compressibility, mixture thermodynamics, conjugate heat transfer between solid and fluid, fluid transients, pumps, compressors, flow control valves and external body forces such as gravity and centrifugal. The thermo-fluid system to be analyzed is discretized into nodes, branches, and conductors. The scalar properties such as pressure, temperature, and concentrations are calculated at nodes. Mass flow rates and heat transfer rates are computed in branches and conductors. The graphical user interface allows users to build their models using the 'point, drag, and click' method; the users can also run their models and post-process the results in the same environment. The integrated fluid library supplies thermodynamic and thermo-physical properties of 36 fluids, and 24 different resistance/source options are provided for modeling momentum sources or sinks in the branches. Users can introduce new physics, non-linear and time-dependent boundary conditions through user-subroutine.

  7. Generalized Fluid System Simulation Program, Version 5.0-Educational

    NASA Technical Reports Server (NTRS)

    Majumdar, A. K.

    2011-01-01

    The Generalized Fluid System Simulation Program (GFSSP) is a finite-volume based general-purpose computer program for analyzing steady state and time-dependent flow rates, pressures, temperatures, and concentrations in a complex flow network. The program is capable of modeling real fluids with phase changes, compressibility, mixture thermodynamics, conjugate heat transfer between solid and fluid, fluid transients, pumps, compressors and external body forces such as gravity and centrifugal. The thermofluid system to be analyzed is discretized into nodes, branches, and conductors. The scalar properties such as pressure, temperature, and concentrations are calculated at nodes. Mass flow rates and heat transfer rates are computed in branches and conductors. The graphical user interface allows users to build their models using the point, drag and click method; the users can also run their models and post-process the results in the same environment. The integrated fluid library supplies thermodynamic and thermo-physical properties of 36 fluids and 21 different resistance/source options are provided for modeling momentum sources or sinks in the branches. This Technical Memorandum illustrates the application and verification of the code through 12 demonstrated example problems.

  8. Alternative methods to predict actual evapotranspiration illustrate the importance of accounting for phenology - Part 2: The event driven phenology model

    NASA Astrophysics Data System (ADS)

    Kovalskyy, V.; Henebry, G. M.

    2011-05-01

    Evapotranspiration (ET) flux constitutes a major component of both the water and energy balances at the land surface. Among the many factors that control evapotranspiration, phenology poses a major source of uncertainty in attempts to predict ET. Contemporary approaches to ET modeling and monitoring frequently summarize the complexity of the seasonal development of vegetation cover into static phenological trajectories (or climatologies) that lack sensitivity to changing environmental conditions. The Event Driven Phenology Model (EDPM) offers an alternative, interactive approach to representing phenology. This study presents the results of an experiment designed to illustrate the differences in ET arising from various techniques used to mimic phenology in models of land surface processes. The experiment compares and contrasts two realizations of static phenologies derived from long-term satellite observations of the Normalized Difference Vegetation Index (NDVI) against canopy trajectories produced by the interactive EDPM trained on flux tower observations. The assessment was carried out through validation of predicted ET against records collected by flux tower instruments. The VegET model (Senay, 2008) was used as a framework to estimate daily actual evapotranspiration and supplied with seasonal canopy trajectories produced by the EDPM and traditional techniques. The interactive approach presented the following advantages over phenology modeled with static climatologies: (a) lower prediction bias in crops; (b) smaller root mean square error in daily ET - 0.5 mm per day on average; (c) stable level of errors throughout the season similar among different land cover types and locations; and (d) better estimation of season duration and total seasonal ET.

  9. Alternative methods to predict actual evapotranspiration illustrate the importance of accounting for phenology - Part 2: The event driven phenology model

    NASA Astrophysics Data System (ADS)

    Kovalskyy, V.; Henebry, G. M.

    2012-01-01

    Evapotranspiration (ET) flux constitutes a major component of both the water and energy balances at the land surface. Among the many factors that control evapotranspiration, phenology poses a major source of uncertainty in attempts to predict ET. Contemporary approaches to ET modeling and monitoring frequently summarize the complexity of the seasonal development of vegetation cover into static phenological trajectories (or climatologies) that lack sensitivity to changing environmental conditions. The Event Driven Phenology Model (EDPM) offers an alternative, interactive approach to representing phenology. This study presents the results of an experiment designed to illustrate the differences in ET arising from various techniques used to mimic phenology in models of land surface processes. The experiment compares and contrasts two realizations of static phenologies derived from long-term satellite observations of the Normalized Difference Vegetation Index (NDVI) against canopy trajectories produced by the interactive EDPM trained on flux tower observations. The assessment was carried out through validation of predicted ET against records collected by flux tower instruments. The VegET model (Senay, 2008) was used as a framework to estimate daily actual evapotranspiration and supplied with seasonal canopy trajectories produced by the EDPM and traditional techniques. The interactive approach presented the following advantages over phenology modeled with static climatologies: (a) lower prediction bias in crops; (b) smaller root mean square error in daily ET - 0.5 mm per day on average; (c) stable level of errors throughout the season similar among different land cover types and locations; and (d) better estimation of season duration and total seasonal ET.

  10. Hospitable archean climates simulated by a general circulation model.

    PubMed

    Wolf, E T; Toon, O B

    2013-07-01

    Evidence from ancient sediments indicates that liquid water and primitive life were present during the Archean despite the faint young Sun. To date, studies of Archean climate typically utilize simplified one-dimensional models that ignore clouds and ice. Here, we use an atmospheric general circulation model coupled to a mixed-layer ocean model to simulate the climate circa 2.8 billion years ago when the Sun was 20% dimmer than it is today. Surface properties are assumed to be equal to those of the present day, while ocean heat transport varies as a function of sea ice extent. Present climate is duplicated with 0.06 bar of CO2 or alternatively with 0.02 bar of CO2 and 0.001 bar of CH4. Hot Archean climates, as implied by some isotopic reconstructions of ancient marine cherts, are unattainable even in our warmest simulation having 0.2 bar of CO2 and 0.001 bar of CH4. However, cooler climates with significant polar ice, but still dominated by open ocean, can be maintained with modest greenhouse gas amounts, posing no contradiction with CO2 constraints deduced from paleosols or with practical limitations on CH4 due to the formation of optically thick organic hazes. Our results indicate that a weak version of the faint young Sun paradox, requiring only that some portion of the planet's surface maintain liquid water, may be resolved with moderate greenhouse gas inventories. Thus, hospitable late Archean climates are easily obtained in our climate model. PMID:23808659

  11. Extension of Generalized Fluid System Simulation Program's Fluid Property Database

    NASA Technical Reports Server (NTRS)

    Patel, Kishan

    2011-01-01

    This internship focused on the development of additional capabilities for the General Fluid Systems Simulation Program (GFSSP). GFSSP is a thermo-fluid code used to evaluate system performance by a finite volume-based network analysis method. The program was developed primarily to analyze the complex internal flow of propulsion systems and is capable of solving many problems related to thermodynamics and fluid mechanics. GFSSP is integrated with thermodynamic programs that provide fluid properties for sub-cooled, superheated, and saturation states. For fluids that are not included in the thermodynamic property program, look-up property tables can be provided. The look-up property tables of the current release version can only handle sub-cooled and superheated states. The primary purpose of the internship was to extend the look-up tables to handle saturated states. This involves a) generation of a property table using REFPROP, a thermodynamic property program that is widely used, and b) modifications of the Fortran source code to read in an additional property table containing saturation data for both saturated liquid and saturated vapor states. Also, a method was implemented to calculate the thermodynamic properties of user-fluids within the saturation region, given values of pressure and enthalpy. These additions required new code to be written, and older code had to be adjusted to accommodate the new capabilities. Ultimately, the changes will lead to the incorporation of this new capability in future versions of GFSSP. This paper describes the development and validation of the new capability.
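    The saturation-region capability described above amounts to the following kind of lookup: interpolate saturated-liquid and saturated-vapor properties at the given pressure, compute the quality from the enthalpy, and mix the two phase properties. The short sketch below uses a tiny illustrative water table with rounded values; it is not the REFPROP-generated table or the GFSSP Fortran implementation.

      # Sketch of a (pressure, enthalpy) lookup inside the two-phase dome (rounded water values).
      import numpy as np

      p_tab     = np.array([100.0, 200.0, 500.0, 1000.0])     # saturation pressure (kPa)
      h_f_tab   = np.array([417.0, 505.0, 640.0, 763.0])      # saturated-liquid enthalpy (kJ/kg)
      h_g_tab   = np.array([2675.0, 2707.0, 2749.0, 2778.0])  # saturated-vapor enthalpy (kJ/kg)
      rho_f_tab = np.array([958.0, 943.0, 915.0, 887.0])      # saturated-liquid density (kg/m^3)
      rho_g_tab = np.array([0.59, 1.13, 2.67, 5.15])          # saturated-vapor density (kg/m^3)

      def saturation_state(p, h):
          """Return quality and mixture density for a (p, h) point inside the dome."""
          h_f, h_g = np.interp(p, p_tab, h_f_tab), np.interp(p, p_tab, h_g_tab)
          quality = (h - h_f) / (h_g - h_f)
          if not 0.0 <= quality <= 1.0:
              raise ValueError("state lies outside the saturation region")
          rho_f, rho_g = np.interp(p, p_tab, rho_f_tab), np.interp(p, p_tab, rho_g_tab)
          v_mix = (1.0 - quality) / rho_f + quality / rho_g   # mix specific volumes, not densities
          return quality, 1.0 / v_mix

      print(saturation_state(p=300.0, h=1500.0))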

  12. GLoBES: General Long Baseline Experiment Simulator

    NASA Astrophysics Data System (ADS)

    Huber, Patrick; Kopp, Joachim; Lindner, Manfred; Rolinec, Mark; Winter, Walter

    2007-09-01

    GLoBES (General Long Baseline Experiment Simulator) is a flexible software package to simulate neutrino oscillation long baseline and reactor experiments. On the one hand, it contains a comprehensive abstract experiment definition language (AEDL), which allows one to describe most classes of long baseline experiments at an abstract level. On the other hand, it provides a C-library to process the experiment information in order to obtain oscillation probabilities, rate vectors, and Δχ2-values. Currently, GLoBES is available for GNU/Linux. Since the source code is included, the port to other operating systems is in principle possible. GLoBES is an open source code that has previously been described in Computer Physics Communications 167 (2005) 195 and in Ref. [7]. The source code and a comprehensive User Manual for GLoBES v3.0.8 are now available from the CPC Program Library as described in the Program Summary below. The home of GLoBES is http://www.mpi-hd.mpg.de/~globes/. Program summary: Program title: GLoBES version 3.0.8 Catalogue identifier: ADZI_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZI_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 145 295 No. of bytes in distributed program, including test data, etc.: 1 811 892 Distribution format: tar.gz Programming language: C Computer: GLoBES builds and installs on 32bit and 64bit Linux systems Operating system: 32bit or 64bit Linux RAM: Typically a few MBs Classification: 11.1, 11.7, 11.10 External routines: GSL—The GNU Scientific Library, www.gnu.org/software/gsl/ Nature of problem: Neutrino oscillations are now established as the leading flavor transition mechanism for neutrinos. In a long history of many experiments, see, e.g., [1], two oscillation frequencies have been identified: The fast atmospheric

  13. Sensitivity simulations of superparameterised convection in a general circulation model

    NASA Astrophysics Data System (ADS)

    Rybka, Harald; Tost, Holger

    2015-04-01

    Cloud Resolving Models (CRMs) covering a horizontal grid spacing from a few hundred meters up to a few kilometers have been used to explicitly resolve small-scale and mesoscale processes. Special attention has been paid to realistically represent cloud dynamics and cloud microphysics involving cloud droplets, ice crystals, graupel and aerosols. The entire variety of physical processes on the small scale interacts with the larger-scale circulation and has to be parameterised on the coarse grid of a general circulation model (GCM). For more than a decade, an approach to connect these two types of models, which act on different scales, has been developed to resolve cloud processes and their interactions with the large-scale flow. The concept is to use an ensemble of CRM grid cells in a 2D or 3D configuration in each grid cell of the GCM to explicitly represent small-scale processes, avoiding the use of convection and large-scale cloud parameterisations, which are a major source of uncertainties regarding clouds. The idea is commonly known as superparameterisation or cloud-resolving convection parameterisation. This study presents different simulations of an adapted Earth System Model (ESM) connected to a CRM which acts as a superparameterisation. Simulations have been performed with the ECHAM/MESSy atmospheric chemistry (EMAC) model, comparing conventional GCM runs (including convection and large-scale cloud parameterisations) with the improved superparameterised EMAC (SP-EMAC), modeling one year with prescribed sea surface temperatures and sea ice content. The sensitivity of atmospheric temperature, precipitation patterns, and cloud amount and types is examined by changing the embedded CRM representation (orientation, width, number of CRM cells, 2D vs. 3D). Additionally, we evaluate the radiation balance with the new model configuration, and systematically analyse the impact of tunable parameters on the radiation budget and hydrological cycle. Furthermore, the subgrid

  14. Generalized Fluid System Simulation Program, Version 6.0

    NASA Technical Reports Server (NTRS)

    Majumdar, A. K.; LeClair, A. C.; Moore, R.; Schallhorn, P. A.

    2016-01-01

    The Generalized Fluid System Simulation Program (GFSSP) is a general purpose computer program for analyzing steady state and time-dependent flow rates, pressures, temperatures, and concentrations in a complex flow network. The program is capable of modeling real fluids with phase changes, compressibility, mixture thermodynamics, conjugate heat transfer between solid and fluid, fluid transients, pumps, compressors, and external body forces such as gravity and centrifugal. The thermofluid system to be analyzed is discretized into nodes, branches, and conductors. The scalar properties such as pressure, temperature, and concentrations are calculated at nodes. Mass flow rates and heat transfer rates are computed in branches and conductors. The graphical user interface allows users to build their models using the 'point, drag, and click' method; the users can also run their models and post-process the results in the same environment. Two thermodynamic property programs (GASP/WASP and GASPAK) provide required thermodynamic and thermophysical properties for 36 fluids: helium, methane, neon, nitrogen, carbon monoxide, oxygen, argon, carbon dioxide, fluorine, hydrogen, parahydrogen, water, kerosene (RP-1), isobutene, butane, deuterium, ethane, ethylene, hydrogen sulfide, krypton, propane, xenon, R-11, R-12, R-22, R-32, R-123, R-124, R-125, R-134A, R-152A, nitrogen trifluoride, ammonia, hydrogen peroxide, and air. The program also provides the options of using any incompressible fluid with constant density and viscosity or ideal gas. The users can also supply property tables for fluids that are not in the library. Twenty-four different resistance/source options are provided for modeling momentum sources or sinks in the branches. These options include pipe flow, flow through a restriction, noncircular duct, pipe flow with entrance and/or exit losses, thin sharp orifice, thick orifice, square edge reduction, square edge expansion, rotating annular duct, rotating radial duct

  15. Evaluation of a coupled event-driven phenology and evapotranspiration model for croplands in the United States northern Great Plains

    NASA Astrophysics Data System (ADS)

    Kovalskyy, V.; Henebry, G. M.; Roy, D. P.; Adusei, B.; Hansen, M.; Senay, G.; Mocko, D. M.

    2013-06-01

    A new model coupling scheme with remote sensing data assimilation was developed for estimation of daily actual evapotranspiration (ET). The scheme consists of the VegET, a model to estimate ET from meteorological and water balance data, and an Event Driven Phenology Model (EDPM), an empirical crop specific model trained on multiple years of flux tower data transformed into six types of environmental forcings that are called "events" to emphasize their temporally discrete character, which has advantages for modeling multiple contingent influences. The EDPM in prognostic mode supplies seasonal trajectories of normalized difference vegetation index (NDVI); whereas in diagnostic mode, it can adjust the NDVI prediction with assimilated remotely sensed observations. The scheme was deployed within the croplands of the Northern Great Plains. The evaluation used 2007-2009 land surface forcing data from the North American Land Data Assimilation System and crop maps derived from remotely sensed data of NASA's Moderate Resolution Imaging Spectroradiometer (MODIS). We compared the NDVI produced by the EDPM with NDVI data derived from the MODIS nadir bidirectional reflectance distribution function adjusted reflectance product. The EDPM performance in prognostic mode yielded a coefficient of determination (r2) of 0.8 ± 0.15 and a root mean square error (RMSE) of 0.1 ± 0.035 across the entire study area. Retrospective correction of canopy attributes using assimilated MODIS NDVI values improved EDPM NDVI estimates, bringing the errors down to the average level of 0.1. The ET estimates produced by the coupled scheme were compared with the MODIS evapotranspiration product and with ET from NASA's Mosaic land surface model. The expected r2 = 0.7 ± 0.15 and RMSE = 11.2 ± 4 mm per 8 days achieved in earlier point-based validations were met in this study by the coupling scheme functioning in both prognostic and retrospective modes. Coupled model performance was diminished at the

  16. NON-SPATIAL CALIBRATIONS OF A GENERAL UNIT MODEL FOR ECOSYSTEM SIMULATIONS. (R827169)

    EPA Science Inventory

    General Unit Models simulate system interactions aggregated within one spatial unit of resolution. For unit models to be applicable to spatial computer simulations, they must be formulated generally enough to simulate all habitat elements within the landscape. We present the d...

  17. NON-SPATIAL CALIBRATIONS OF A GENERAL UNIT MODEL FOR ECOSYSTEM SIMULATIONS. (R825792)

    EPA Science Inventory

    General Unit Models simulate system interactions aggregated within one spatial unit of resolution. For unit models to be applicable to spatial computer simulations, they must be formulated generally enough to simulate all habitat elements within the landscape. We present the d...

  18. Generalized simulation technique for turbojet engine system analysis

    NASA Technical Reports Server (NTRS)

    Seldner, K.; Mihaloew, J. R.; Blaha, R. J.

    1972-01-01

    A nonlinear analog simulation of a turbojet engine was developed. The purpose of the study was to establish simulation techniques applicable to propulsion system dynamics and controls research. A schematic model was derived from a physical description of a J85-13 turbojet engine. Basic conservation equations were applied to each component along with their individual performance characteristics to derive a mathematical representation. The simulation was mechanized on an analog computer. The simulation was verified in both steady-state and dynamic modes by comparing analytical results with experimental data obtained from tests performed at the Lewis Research Center with a J85-13 engine. In addition, comparison was also made with performance data obtained from the engine manufacturer. The comparisons established the validity of the simulation technique.

  19. Simulated patients in general practice: a different look at the consultation.

    PubMed Central

    Rethans, J J; van Boven, C P

    1987-01-01

    To develop a better empirical basis for developing quality assessment in general practice, three simulated patients made appointments with 48 general practitioners during actual surgery hours and collected facts about their performance. The simulated patients were indistinguishable from real patients and presented a standardised story of a symptomatic urinary tract infection. Two months later the same general practitioners received a written simulation about a patient who had the same urinary tract infection and were asked how they would handle this in real practice. Both results were scored against an existing consensus standard. The overall score for both methods did not show any substantial differences. A more differentiated analysis, however, showed that general practitioners performed significantly better with simulated patients. It also showed that general practitioners answering the written simulation performed significantly more unnecessary and superfluous actions. The results of this study show that the use of simulated patients seems to reveal the efficient performance of general practitioners in practice. PMID:3105753

  20. Seasonal changes in the atmospheric heat balance simulated by the GISS general circulation model

    NASA Technical Reports Server (NTRS)

    Stone, P. H.; Chow, S.; Helfand, H. M.; Quirk, W. J.; Somerville, R. C. J.

    1975-01-01

    Tests of the ability of numerical general circulation models to simulate the atmosphere have focussed so far on simulations of the January climatology. These models generally prescribe boundary conditions such as sea surface temperature, but this does not prevent testing their ability to simulate seasonal changes in atmospheric processes that accompany prescribed seasonal changes in boundary conditions. Experiments to simulate changes in the zonally averaged heat balance are discussed, since many simplified models of climatic processes are based solely on this balance.

  1. Predicting analysis time in events-driven clinical trials using accumulating time-to-event surrogate information.

    PubMed

    Wang, Jianming; Ke, Chunlei; Yu, Zhinuan; Fu, Lei; Dornseif, Bruce

    2016-05-01

    For clinical trials with time-to-event endpoints, predicting the accrual of the events of interest with precision is critical in determining the timing of interim and final analyses. For example, overall survival (OS) is often chosen as the primary efficacy endpoint in oncology studies, with planned interim and final analyses at a pre-specified number of deaths. Often, correlated surrogate information, such as time-to-progression (TTP) and progression-free survival, are also collected as secondary efficacy endpoints. It would be appealing to borrow strength from the surrogate information to improve the precision of the analysis time prediction. Currently available methods in the literature for predicting analysis timings do not consider utilizing the surrogate information. In this article, using OS and TTP as an example, a general parametric model for OS and TTP is proposed, with the assumption that disease progression could change the course of the overall survival. Progression-free survival, related both to OS and TTP, will be handled separately, as it can be derived from OS and TTP. The authors seek to develop a prediction procedure using a Bayesian method and provide detailed implementation strategies under certain assumptions. Simulations are performed to evaluate the performance of the proposed method. An application to a real study is also provided. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26689725
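    The timing problem described above can be illustrated with a much-simplified Monte Carlo sketch: each patient's death hazard changes after progression, and the analysis time is predicted as the calendar time at which the target number of deaths has accrued. All rates and sample sizes are hypothetical, and this is a frequentist toy example, not the authors' Bayesian prediction procedure.

      # Toy accrual-time prediction with a hazard change at progression (hypothetical rates).
      import numpy as np

      rng = np.random.default_rng(3)
      n_patients, target_events = 400, 300
      enrolment = rng.uniform(0.0, 12.0, n_patients)        # staggered enrolment (months)

      def death_time(rng):
          """Time from enrolment to death; the hazard increases after progression."""
          ttp = rng.exponential(10.0)                       # time to progression
          pre = rng.exponential(24.0)                       # death under pre-progression hazard
          return pre if pre < ttp else ttp + rng.exponential(12.0)

      deaths = enrolment + np.array([death_time(rng) for _ in range(n_patients)])
      predicted = np.sort(deaths)[target_events - 1]        # calendar time of the 300th death
      print(f"predicted analysis time: {predicted:.1f} months after study start")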

  2. General approach to boat simulation in virtual reality systems

    NASA Astrophysics Data System (ADS)

    Aranov, Vladislav Y.; Belyaev, Sergey Y.

    2002-02-01

    The paper is dedicated to real-time simulation of sport boats, particularly a kayak and a high-speed skimming boat, for training purposes. Such training is topical, since kayaking and riding a high-speed skimming boat are both extreme sports. Participating in these types of competitions puts athletes in danger, particularly from rapids, waterfalls, varying water currents, and other obstacles. To make the simulation realistic, it is necessary to calculate data for at least 30 frames per second, and these calculations may take no more than 5% of CPU time, because the very time-consuming 3D rendering process takes the remaining 95%. This paper describes an approach for creating minimal boat simulator models that satisfy these requirements. This approach can also be used for other watercraft models of this kind.

  3. Verifying Algorithms for Autonomous Aircraft by Simulation Generalities and Example

    NASA Technical Reports Server (NTRS)

    White, Allan L.

    2010-01-01

    An open question in Air Traffic Management is what procedures can be validated by simulation, where the simulation shows that the probability of undesirable events is below the required level at some confidence level. The problem is including enough realism to be convincing while retaining enough efficiency to run the large number of trials needed for high confidence. The paper first examines the probabilistic interpretation of a typical requirement by a regulatory agency and computes the number of trials needed to establish the requirement at an equivalent confidence level. Since any simulation is likely to consider only one type of event and there are several types of events, the paper examines under what conditions this separate consideration is valid. The paper establishes a separation algorithm at the required confidence level where the aircraft operates under feedback control and is subject to perturbations. There is a discussion in which it is shown that a scenario three or four orders of magnitude more complex is feasible. The question of what can be validated by simulation remains open, but there is reason to be optimistic.
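    A concrete instance of the trial-count computation mentioned above, under the simple assumption of independent Bernoulli trials with zero observed failures: requiring (1 - p_max)^n <= alpha gives the smallest number of trials n that demonstrates a failure probability at or below p_max with confidence 1 - alpha. The requirement value used below (10^-9 at 95% confidence) is a hypothetical placeholder, not a figure from the paper.

      # Number of failure-free trials needed to demonstrate p <= p_max at confidence 1 - alpha.
      import math

      def trials_needed(p_max, alpha):
          return math.ceil(math.log(alpha) / math.log(1.0 - p_max))

      print(trials_needed(p_max=1e-9, alpha=0.05))   # roughly 3.0e9 trials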

  4. Projectile General Motion in a Vacuum and a Spreadsheet Simulation

    ERIC Educational Resources Information Center

    Benacka, Jan

    2015-01-01

    This paper gives the solution and analysis of projectile motion in a vacuum if the launch and impact heights are not equal. Formulas for the maximum horizontal range and the corresponding angle are derived. An Excel application that simulates the motion is also presented, and the result of an experiment in which 38 secondary school students…

  5. New techniques for meshless flow simulation generalizing moving least squares

    NASA Astrophysics Data System (ADS)

    Trask, Nathaniel; Maxey, Martin

    2015-11-01

    While the Lagrangian nature of SPH offers unique flexibility in application problems, practitioners are forced to choose between compatibility of the div/grad operators and low accuracy, limiting the scope of the method. In this work, two new discretization frameworks are introduced that extend concepts from finite difference methods to a meshless context: one generalizing the high-order convergence of compact finite differences and another generalizing the enhanced stability of staggered marker-and-cell schemes. The discretizations are based on a novel polynomial reconstruction process that allows arbitrary-order polynomial accuracy for both the differential operators and general boundary conditions while maintaining stability and computational efficiency. We demonstrate how the method fits neatly into the ISPH framework and offers a new degree of fidelity and accuracy in Lagrangian particle methods. Supported by the Collaboratory on Mathematics for Mesoscopic Modeling of Materials (CM4), DOE Award DE-SC0009247.

  6. Large eddy simulation using the general circulation model ICON

    NASA Astrophysics Data System (ADS)

    Dipankar, Anurag; Stevens, Bjorn; Heinze, Rieke; Moseley, Christopher; Zängl, Günther; Giorgetta, Marco; Brdar, Slavko

    2015-09-01

    ICON (ICOsahedral Nonhydrostatic) is a unified modeling system for global numerical weather prediction (NWP) and climate studies. Validation of its dynamical core against a test suite for numerical weather forecasting has been recently published by Zängl et al. (2014). In the present work, an extension of ICON is presented that enables it to perform as a large eddy simulation (LES) model. The details of the implementation of the LES turbulence scheme in ICON are explained and test cases are performed to validate it against two standard LES models. Despite the limitations that ICON inherits from being a unified modeling system, it performs well in capturing the mean flow characteristics and the turbulent statistics of two simulated flow configurations—one being a dry convective boundary layer and the other a cumulus-topped planetary boundary layer.

  7. Projectile general motion in a vacuum and a spreadsheet simulation

    NASA Astrophysics Data System (ADS)

    Benacka, Jan

    2015-01-01

    This paper gives the solution and analysis of projectile motion in a vacuum if the launch and impact heights are not equal. Formulas for the maximum horizontal range and the corresponding angle are derived. An Excel application that simulates the motion is also presented, and the result of an experiment in which 38 secondary school students developed the application and investigated the system is given. A questionnaire survey was carried out to find out whether the students found the lessons interesting, learned new skills and wanted to model projectile motion in the air as an example of more realistic motion. The results are discussed.
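    The vacuum formulas the paper derives for unequal launch and impact heights follow from elementary kinematics. Below is a minimal Python sketch of the standard textbook expressions (range for a given angle and the range-maximizing angle); the numeric inputs are illustrative and the Excel application itself is not reproduced:

```python
import math

G = 9.81  # m/s^2

def range_unequal_heights(v, theta, h):
    """Horizontal range for launch speed v, elevation angle theta (radians),
    launch point h metres above the impact level, motion in a vacuum."""
    vx, vy = v * math.cos(theta), v * math.sin(theta)
    return (vx / G) * (vy + math.sqrt(vy ** 2 + 2.0 * G * h))

def optimal_angle(v, h):
    """Angle that maximises the range when launching h metres above impact level;
    reduces to 45 degrees when h = 0."""
    return math.atan(v / math.sqrt(v ** 2 + 2.0 * G * h))

v, h = 20.0, 1.5  # illustrative launch speed (m/s) and height difference (m)
theta_star = optimal_angle(v, h)
print(math.degrees(theta_star), range_unequal_heights(v, theta_star, h))
```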

  8. BIRD: A general interface for sparse distributed memory simulators

    NASA Technical Reports Server (NTRS)

    Rogers, David

    1990-01-01

    Kanerva's sparse distributed memory (SDM) has now been implemented for at least six different computers, including SUN3 workstations, the Apple Macintosh, and the Connection Machine. A common interface for input of commands would both aid testing of programs on a broad range of computer architectures and assist users in transferring results from research environments to applications. A common interface also allows secondary programs to generate command sequences for a sparse distributed memory, which may then be executed on the appropriate hardware. The BIRD program is an attempt to create such an interface. Simplifying access to different simulators should assist developers in finding appropriate uses for SDM.

  9. A General Simulation Method for Multiple Bodies in Proximate Flight

    NASA Technical Reports Server (NTRS)

    Meakin, Robert L.

    2003-01-01

    Methods of unsteady aerodynamic simulation for an arbitrary number of independent bodies flying in close proximity are considered. A novel method to efficiently detect collision contact points is described. A method to compute body trajectories in response to aerodynamic loads, applied loads, and inter-body collisions is also given. The physical correctness of the methods is verified by comparison to a set of analytic solutions. The methods, combined with a Navier-Stokes solver, are used to demonstrate the possibility of predicting the unsteady aerodynamics and flight trajectories of moving bodies that involve rigid-body collisions.

  10. Plasma Jet Simulations Using a Generalized Ohm's Law

    NASA Technical Reports Server (NTRS)

    Ebersohn, Frans; Shebalin, John V.; Girimaji, Sharath S.

    2012-01-01

    Plasma jets are important physical phenomena in astrophysics and plasma propulsion devices. A currently proposed dual jet plasma propulsion device to be used for ISS experiments strongly resembles a coronal loop and further draws a parallel between these physical systems [1]. To study plasma jets we use numerical methods that solve the compressible MHD equations using the generalized Ohm's law [2]. Here, we will discuss the crucial underlying physics of these systems along with the numerical procedures we utilize to study them. Recent results from our numerical experiments will be presented and discussed.

  11. Optimal generalized multistep integration formulae for real-time digital simulation

    NASA Technical Reports Server (NTRS)

    Moerder, D. D.; Halyo, N.

    1985-01-01

    The problem of discretizing a dynamical system for real-time digital simulation is considered. Treating the system and its simulation as stochastic processes leads to a statistical characterization of simulator fidelity. A plant discretization procedure based on an efficient matrix generalization of explicit linear multistep discrete integration formulae is introduced, which minimizes a weighted sum of the mean squared steady-state and transient error between the system and simulator outputs.
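    As an illustration of the class of explicit linear multistep formulae discussed above, the sketch below applies the standard two-step Adams-Bashforth rule to a generic ordinary differential equation. It is not the paper's optimized matrix formulation or its error-weighting procedure, only a minimal example of the underlying integrator family:

```python
def adams_bashforth2(f, x0, t0, dt, steps):
    """Explicit 2-step Adams-Bashforth integration of dx/dt = f(t, x).
    Illustrates the family of explicit linear multistep formulae; the paper's
    optimized, fidelity-weighted coefficients are not reproduced here."""
    xs = [x0]
    # Bootstrap the multistep method with a single forward-Euler step.
    f_prev = f(t0, x0)
    x = x0 + dt * f_prev
    xs.append(x)
    t = t0 + dt
    for _ in range(steps - 1):
        f_curr = f(t, x)
        x = x + dt * (1.5 * f_curr - 0.5 * f_prev)   # AB2 update
        f_prev = f_curr
        t += dt
        xs.append(x)
    return xs

# Example: first-order lag dx/dt = -x with x(0) = 1.
print(adams_bashforth2(lambda t, x: -x, 1.0, 0.0, 0.01, 5)[-1])
```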

  12. GOOSE, a generalized object-oriented simulation environment

    SciTech Connect

    Ford, C.E.; March-Leuba, C.; Guimaraes, L.; Ugolini, D. (Dept. of Nuclear Engineering)

    1991-01-01

    GOOSE, prototype software for a fully interactive, object-oriented simulation environment, is being developed as part of the Advanced Controls Program at Oak Ridge National Laboratory. Dynamic models may easily be constructed and tested; fully interactive capabilities allow the user to alter model parameters and complexity without recompilation. This environment provides access to powerful tools, such as numerical integration packages, graphical displays, and online help. Portability has been an important design goal; the system was written in Objective-C in order to run on a wide variety of computers and operating systems, including UNIX workstations and personal computers. A detailed library of nuclear reactor components, currently under development, will also be described. 5 refs., 4 figs.

  13. Parametrizing linear generalized Langevin dynamics from explicit molecular dynamics simulations

    SciTech Connect

    Gottwald, Fabian; Karsten, Sven; Ivanov, Sergei D.; Kühn, Oliver

    2015-06-28

    Fundamental understanding of complex dynamics in many-particle systems on the atomistic level is of utmost importance. Often the systems of interest are of macroscopic size but can be partitioned into a few important degrees of freedom which are treated most accurately and others which constitute a thermal bath. In this respect, the linear generalized Langevin equation attracts particular attention; it can be rigorously derived by means of a linear projection technique. Within this framework, a complicated interaction with the bath can be reduced to a single memory kernel. This memory kernel in turn is parametrized for the particular system studied, usually by means of time-domain methods based on explicit molecular dynamics data. Here, we argue that this task is more naturally achieved in the frequency domain and develop a Fourier-based parametrization method that outperforms its time-domain analogues. Very surprisingly, the widely used rigid bond method turns out to be inappropriate in general. Importantly, we show that the rigid bond approach leads to a systematic overestimation of relaxation times, unless the system under study consists of a harmonic bath bi-linearly coupled to the relevant degrees of freedom.

  14. General relativistic magnetohydrodynamical simulations of the jet in M 87

    NASA Astrophysics Data System (ADS)

    Mościbrodzka, Monika; Falcke, Heino; Shiokawa, Hotaka

    2016-02-01

    Context. The connection between black hole, accretion disk, and radio jet can be constrained best by fitting models to observations of nearby low-luminosity galactic nuclei, in particular the well-studied sources Sgr A* and M 87. There has been considerable progress in modeling the central engine of active galactic nuclei by an accreting supermassive black hole coupled to a relativistic plasma jet. However, can a single model be applied to a range of black hole masses and accretion rates? Aims: Here we want to compare the latest three-dimensional numerical model, originally developed for Sgr A* in the center of the Milky Way, to radio observations of the much more powerful and more massive black hole in M 87. Methods: We postprocess three-dimensional GRMHD models of a jet-producing radiatively inefficient accretion flow around a spinning black hole using relativistic radiative transfer and ray-tracing to produce model spectra and images. As a key new ingredient in these models, we allow the proton-electron coupling in these simulations to depend on the magnetic properties of the plasma. Results: We find that the radio emission in M 87 is described well by a combination of a two-temperature accretion flow and a hot single-temperature jet. Most of the radio emission in our simulations comes from the jet sheath. The model fits the basic observed characteristics of the M 87 radio core: it is "edge-brightened", starts subluminally, has a flat spectrum, and increases in size with wavelength. The best fit model has a mass-accretion rate of Ṁ ~ 9 × 10^-3 M⊙ yr^-1 and a total jet power of P_j ~ 10^43 erg s^-1. Emission at λ = 1.3 mm is produced by the counter-jet close to the event horizon. Its characteristic crescent shape surrounding the black hole shadow could be resolved by future millimeter-wave VLBI experiments. Conclusions: The model was successfully derived from one for the supermassive black hole in the center of the Milky Way by appropriately scaling mass and accretion rate.

  15. A general Kirchhoff approximation for echo simulation in ultrasonic NDT

    NASA Astrophysics Data System (ADS)

    Dorval, V.; Chatillon, S.; Lu, B.; Darmon, M.; Mahaut, S.

    2012-05-01

    The Kirchhoff approximation is commonly used for the modeling of echoes in ultrasonic NDE. It consists in locally approximating the illuminated surface by an infinite plane to compute elastic fields. A model based on this approximation is used in the CIVA software, developed at CEA LIST, to compute echoes from cracks and backwalls. In its current version, it is limited to stress-free surfaces. A new model using a more general formalism has been developed. It is based on reciprocity principles and is valid for any host and flaw materials (liquids, isotropic and anisotropic solids). Experimental validations confirm that this new model can be used for a wider range of applications than the previous one. A second part of this communication deals with the improvement of the Kirchhoff approximation with the aim of predicting diffraction echoes. It is based on an approach called refined Kirchhoff, which combines the Kirchhoff and Geometrical Theory of Diffraction (GTD) models. An illustration of this method for the case of a rigid obstacle in a fluid is given.

  16. Mars' Thermal Structure From The Lower To Middle Atmosphere: NASA Ames Mars General Circulation Simulations

    NASA Astrophysics Data System (ADS)

    Brecht, A. S.; Hollingsworth, J. L.; Kahre, M. A.

    2014-07-01

    The NASA Ames Mars General Circulation Model (MGCM) has been extended to incorporate the middle atmosphere (~80 km to ~120 km). The thermal structure simulated by the extended MGCM will be compared to MRO-MCS and MEx-SPICAM observations.

  17. No Vent Tank Fill and Transfer Line Chilldown Analysis by Generalized Fluid System Simulation Program (GFSSP)

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok

    2013-01-01

    The purpose of the paper is to present the analytical capability developed to model no-vent chill and fill of a cryogenic tank in support of the CPST (Cryogenic Propellant Storage and Transfer) program. The Generalized Fluid System Simulation Program (GFSSP) was adapted to simulate the charge-hold-vent method of tank chilldown. GFSSP models were developed to simulate chilldown of the LH2 tank in the K-site Test Facility, and numerical predictions were compared with test data. The report also describes the modeling technique for simulating the chilldown of a cryogenic transfer line; GFSSP models of a long transfer line were developed and compared with test data.

  18. Instructor and student pilots' subjective evaluation of a general aviation simulator with a terrain visual system

    NASA Technical Reports Server (NTRS)

    Kiteley, G. W.; Harris, R. L., Sr.

    1978-01-01

    Ten student pilots were given a 1-hour training session in the NASA Langley Research Center's General Aviation Simulator by a certified flight instructor, and a follow-up flight evaluation was performed by each student's own flight instructor, who had also flown the simulator. The students and instructors generally felt that the simulator session had a positive effect on the students. They recommended that a simulator with a visual scene and a motion base would be useful in performing such maneuvers as landing approaches, level flight, climbs, dives, turns, instrument work, and radio navigation, and noted that the simulator would be an efficient means of introducing the student to new maneuvers before doing them in flight. The students and instructors estimated that about 8 hours of simulator time could be profitably devoted to private pilot training.

  19. A Comparative Analysis of General Case Simulation Instruction and Naturalistic Instruction.

    ERIC Educational Resources Information Center

    Domaracki, Joseph W.; Lyon, Steven R.

    1992-01-01

    This study, which involved training four young adults with moderate or severe mental retardation on housekeeping and janitorial work skills, found that simulation instruction based on general case methodology can be used to teach complex sequences, that naturalistic instruction seemed more efficient than simulation instruction, and that neither…

  20. General specifications for the development of a PC-based simulator of the NASA RECON system

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Triantafyllopoulos, Spiros

    1984-01-01

    The general specifications for the design and implementation of an IBM PC/XT-based simulator of the NASA RECON system, including record designs, file structure designs, command language analysis, program design issues, error recovery considerations, and usage monitoring facilities are discussed. Once implemented, such a simulator will be utilized to evaluate the effectiveness of simulated information system access in addition to actual system usage as part of the total educational programs being developed within the NASA contract.

  1. A General Simulator Using State Estimation for a Space Tug Navigation System. [computerized simulation, orbital position estimation and flight mechanics

    NASA Technical Reports Server (NTRS)

    Boland, J. S., III

    1975-01-01

    A general simulation program (GSP) involving nonlinear state estimation for space vehicle flight navigation systems is presented. A complete explanation of the iterative guidance mode guidance law, derivation of the dynamics, coordinate frames, and state estimation routines is given so as to fully clarify the assumptions and approximations involved, so that simulation results can be placed in their proper perspective. A complete set of computer acronyms and their definitions, as well as explanations of the subroutines used in the GSP simulator, are included. To facilitate input/output, a complete set of compatible numbers, with units, is included to aid in data development. Format specifications, output data phrase meanings and purposes, and computer card data input are clearly spelled out. A large number of simulation and analytical studies were used to determine the validity of the simulator itself as well as various data runs.

  2. General purpose simulation system of the data management system for space shuttle mission 18

    NASA Technical Reports Server (NTRS)

    Bengtson, N. M.; Mellichamp, J. M.; Crenshaw, J.

    1975-01-01

    The simulation program of the science and engineering data management system for the space shuttle is presented. The programming language used was General Purpose Simulation System V (OS). The data flow was modeled from its origin at the experiments or subsystems to transmission from the space shuttle. Mission 18 was the particular flight chosen for simulation. First, the general structure of the program is presented and the trade studies which were performed are identified. Inputs required to make runs are discussed followed by identification of the output statistics. Some areas for model modifications are pointed out. A detailed model configuration, program listing and results are included.

  3. Cloud-radiative effects on implied oceanic energy transport as simulated by atmospheric general circulation models

    NASA Technical Reports Server (NTRS)

    Gleckler, P. J.; Randall, D. A.; Boer, G.; Colman, R.; Dix, M.; Galin, V.; Helfand, M.; Kiehl, J.; Kitoh, A.; Lau, W.

    1995-01-01

    This paper summarizes the ocean surface net energy flux simulated by fifteen atmospheric general circulation models constrained by realistically-varying sea surface temperatures and sea ice as part of the Atmospheric Model Intercomparison Project. In general, the simulated energy fluxes are within the very large observational uncertainties. However, the annual mean oceanic meridional heat transport that would be required to balance the simulated surface fluxes is shown to be critically sensitive to the radiative effects of clouds, to the extent that even the sign of the Southern Hemisphere ocean heat transport can be affected by the errors in simulated cloud-radiation interactions. It is suggested that improved treatment of cloud radiative effects should help in the development of coupled atmosphere-ocean general circulation models.

  4. Generalized image charge solvation model for electrostatic interactions in molecular dynamics simulations of aqueous solutions

    NASA Astrophysics Data System (ADS)

    Deng, Shaozhong; Xue, Changfeng; Baumketner, Andriy; Jacobs, Donald; Cai, Wei

    2013-07-01

    This paper extends the image charge solvation model (ICSM) [Y. Lin, A. Baumketner, S. Deng, Z. Xu, D. Jacobs, W. Cai, An image-based reaction field method for electrostatic interactions in molecular dynamics simulations of aqueous solutions, J. Chem. Phys. 131 (2009) 154103], a hybrid explicit/implicit method to treat electrostatic interactions in computer simulations of biomolecules formulated for spherical cavities, to prolate spheroidal and triaxial ellipsoidal cavities, designed to better accommodate non-spherical solutes in molecular dynamics (MD) simulations. In addition to the utilization of a general truncated octahedron as the MD simulation box, central to the proposed extension is an image approximation method to compute the reaction field for a point charge placed inside such a non-spherical cavity by using a single image charge located outside the cavity. The resulting generalized image charge solvation model (GICSM) is tested in simulations of liquid water, and the results are analyzed in comparison with those obtained from ICSM simulations as a reference. We find that, while improving computational efficiency through smaller simulation cells and consequently fewer explicit solvent molecules, the generalized model can still faithfully reproduce known static and dynamic properties of liquid water, at least for the systems considered in the present paper, indicating its great potential to become an accurate but more efficient alternative to the ICSM when bio-macromolecules of irregular shapes are to be simulated.
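    The single-image idea behind the ICSM can be illustrated with the classic Friedman-type image approximation for a spherical cavity: one image charge placed outside the cavity approximates the reaction field of a charge inside it. The sketch below shows only that textbook construction with illustrative parameters, not the ellipsoidal image approximation developed in the paper:

```python
import math

def friedman_image(q, src, a, eps_in=1.0, eps_out=80.0):
    """Single image charge approximating the reaction field of a point charge q
    located at position src (3-vector) inside a spherical cavity of radius a.
    This is the textbook Friedman-type approximation; the ICSM/GICSM refine it."""
    s = math.sqrt(sum(c * c for c in src))           # distance of charge from center
    gamma = (eps_in - eps_out) / (eps_in + eps_out)
    q_im = gamma * q * a / s                         # image charge magnitude
    r_im = tuple(c * (a * a / s ** 2) for c in src)  # image sits at radius a^2/s
    return q_im, r_im

def reaction_potential(q, src, a, obs):
    """Approximate reaction-field potential at observation point obs
    (Gaussian-style units, purely illustrative)."""
    q_im, r_im = friedman_image(q, src, a)
    return q_im / math.dist(obs, r_im)

# Illustrative: unit charge 2 A from the center of a 10 A cavity.
print(reaction_potential(1.0, (0.0, 0.0, 2.0), 10.0, (0.0, 0.0, -1.0)))
```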

  5. A Multi-mission Event-Driven Component-Based System for Support of Flight Software Development, ATLO, and Operations first used by the Mars Science Laboratory (MSL) Project

    NASA Technical Reports Server (NTRS)

    Dehghani, Navid; Tankenson, Michael

    2006-01-01

    This paper details an architectural description of the Mission Data Processing and Control System (MPCS), an event-driven, multi-mission set of ground data processing components providing uplink, downlink, and data management capabilities, which will support the Mars Science Laboratory (MSL) project as its first target mission. MPCS is developed based on a set of small reusable components, implemented in Java, each designed with a specific function and well-defined interfaces. An industry-standard messaging bus is used to transfer information among system components. Components generate standard messages which are used to capture system information, as well as triggers to support the event-driven architecture of the system. Event-driven systems are highly desirable for processing high-rate telemetry (science and engineering) data, and for supporting automation for many mission operations processes.
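    MPCS itself is a Java system built on an industry-standard messaging bus; the following sketch is only a generic, in-process illustration of the event-driven publish/subscribe pattern described above, with hypothetical topic and component names (none of them actual MPCS interfaces):

```python
from collections import defaultdict

class MessageBus:
    """Minimal in-process publish/subscribe bus illustrating the event-driven
    component pattern described above (not the actual MPCS messaging layer)."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        # Every subscribed component reacts to the same event independently.
        for handler in self._subscribers[topic]:
            handler(message)

# Hypothetical components: an archiver and a display both triggered by one event.
bus = MessageBus()
bus.subscribe("telemetry.frame", lambda m: print("archive:", m))
bus.subscribe("telemetry.frame", lambda m: print("display:", m))
bus.publish("telemetry.frame", {"id": 42, "payload": "..."})
```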

  6. Tryout of a General Purpose Simulator in an Air National Guard Training Environment. Interim Report, June 1974-August 1974.

    ERIC Educational Resources Information Center

    Spangenberg, Ronald W.

    An evaluation of the usability, effectiveness, and acceptance in a job environment was performed on a general purpose simulator using a simulation of a radar system. General purpose simulators permit sharing of a programable capacity among simulations, thus providing economical hands-on training and training not usually economically available by…

  7. A Generalized Weight-Based Particle-In-Cell Simulation Scheme

    SciTech Connect

    W.W. Lee, T.G. Jenkins and S. Ethier

    2010-02-02

    A generalized weight-based particle simulation scheme suitable for simulating magnetized plasmas, where the zeroth-order inhomogeneity is important, is presented. The scheme is an extension of the perturbative simulation schemes developed earlier for particle-in-cell (PIC) simulations. The new scheme is designed to simulate both the perturbed distribution (δf) and the full distribution (full-F) within the same code. The development is based on the concept of multiscale expansion, which separates the scale lengths of the background inhomogeneity from those associated with the perturbed distributions. The potential advantage of such an arrangement is to minimize the particle noise by using δf in the linear stage of the simulation, while retaining the flexibility of a full-F capability in the fully nonlinear stage of the development when signals associated with plasma turbulence are at a much higher level than those from the intrinsic particle noise.

  8. Simulation of the great plains low-level jet and associated clouds by general circulation models

    SciTech Connect

    Ghan, S.J.; Bian, X.; Corsetti, L.

    1996-07-01

    The low-level jet frequently observed in the Great Plains of the United States forms preferentially at night and apparently influences the timing of the thunderstorms in the region. The authors have found that both the European Centre for Medium-Range Weather Forecasts general circulation model and the National Center for Atmospheric Research Community Climate Model simulate the low-level jet rather well, although the spatial distributions of the jet frequency simulated by the two GCMs differ considerably. Sensitivity experiments have demonstrated that the simulated low-level jet is surprisingly robust, with similar simulations at much coarser horizontal and vertical resolutions. However, both GCMs fail to simulate the observed relationship between clouds and the low-level jet. The pronounced nocturnal maximum in thunderstorm frequency associated with the low-level jet is not simulated well by either GCM, with only weak evidence of a nocturnal maximum in the Great Plains. 36 refs., 20 figs.

  9. Evaluating the GPSS simulation model for the Viking batch computer system. [General Purpose Simulation System

    NASA Technical Reports Server (NTRS)

    Lee, J.-J.

    1976-01-01

    In anticipation of extremely heavy loading requirements by the Viking mission during the post-landing periods, a GPSS model has been developed for the purpose of simulating these requirements on the Viking batch computer system. This paper presents the effort pursued in evaluating such a model and results thereby obtained. The evaluation effort consists of selecting the evaluation approach, collecting actual test run data, making comparisons and deriving conclusions.

  10. General Relativistic Radiative Transfer and General Relativistic MHD Simulations of Accretion and Outflows of Black Holes

    SciTech Connect

    Fuerst, Steven V.; Mizuno, Yosuke; Nishikawa, Ken-Ichi; Wu, Kinwah (Mullard Space Sci. Lab.)

    2007-01-05

    We calculate the emission from relativistic flows in black hole systems using a fully general relativistic radiative transfer formulation, with flow structures obtained by general relativistic magneto-hydrodynamic simulations. We consider thermal free-free emission and thermal synchrotron emission. Bright filament-like features protrude (visually) from the accretion disk surface, which are enhancements of synchrotron emission where the magnetic field roughly aligns with the line-of-sight in the co-moving frame. The features move back and forth as the accretion flow evolves, but their visibility and morphology are robust. We propose that variations and drifts of the features produce certain X-ray quasi-periodic oscillations (QPOs) observed in black-hole X-ray binaries.

  11. Simulating the Illuminance and Efficiency of the LEDs Used in General Household Lighting

    NASA Astrophysics Data System (ADS)

    Sun, Wen-Shing; Tsuei, Chih-Hsuan; Huang, Yi-Han

    The advantages of LED illumination in general household lighting are examined. High-efficiency white LEDs are used as the light source to provide energy-saving illumination for general household lighting. Different household spaces with different standards of average illuminance were designed and simulated with the LightTools and DIALux software. The power consumption and efficiency of traditional light sources and of LED light sources in lighting the household environment were analyzed and compared at the same standard of average illuminance. The results demonstrate the advantage of using white LEDs in the different spaces of general household lighting.
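    The comparison described above amounts to meeting an average-illuminance standard while tallying electrical power. A minimal lumen-method sketch of that arithmetic is shown below; the efficacy, flux, and utilization values are assumptions for illustration, not figures from the paper:

```python
import math

def fixtures_and_power(target_lux, area_m2, flux_per_fixture_lm,
                       efficacy_lm_per_w, utilization=0.6):
    """Lumen-method estimate of how many fixtures and how much electrical power
    are needed to reach an average illuminance target (all inputs are assumptions)."""
    required_flux_lm = target_lux * area_m2 / utilization
    n_fixtures = math.ceil(required_flux_lm / flux_per_fixture_lm)
    power_w = n_fixtures * flux_per_fixture_lm / efficacy_lm_per_w
    return n_fixtures, power_w

# Illustrative comparison for a 20 m^2 room at 300 lx using 800 lm fixtures:
print("LED (~100 lm/W assumed):        ", fixtures_and_power(300, 20, 800, 100))
print("Incandescent (~14 lm/W assumed):", fixtures_and_power(300, 20, 800, 14))
```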

  12. General purpose simulation system of the data management system for Space Shuttle mission 18

    NASA Technical Reports Server (NTRS)

    Bengtson, N. M.; Mellichamp, J. M.; Smith, O. C.

    1976-01-01

    A simulation program for the flow of data through the Data Management System of Spacelab and Space Shuttle was presented. The science, engineering, command and guidance, navigation and control data were included. The programming language used was General Purpose Simulation System V (OS). The science and engineering data flow was modeled from its origin at the experiments and subsystems to transmission from Space Shuttle. Command data flow was modeled from the point of reception onboard and from the CDMS Control Panel to the experiments and subsystems. The GN&C data flow model handled data between the General Purpose Computer and the experiments and subsystems. Mission 18 was the particular flight chosen for simulation. The general structure of the program is presented, followed by a user's manual. Input data required to make runs are discussed followed by identification of the output statistics. The appendices contain a detailed model configuration, program listing and results.

  13. Using a million cell simulation of the cerebellum: network scaling and task generality

    PubMed Central

    Li, Wen-Ke; Hausknecht, Matthew J.; Stone, Peter H.; Mauk, Michael D.

    2012-01-01

    Several factors combine to make it feasible to build computer simulations of the cerebellum and to test them in biologically realistic ways. These simulations can be used to help understand the computational contributions of various cerebellar components, including the relevance of the enormous number of neurons in the granule cell layer. In previous work we have used a simulation containing 12000 granule cells to develop new predictions and to account for various aspects of eyelid conditioning, a form of motor learning mediated by the cerebellum. Here we demonstrate the feasibility of scaling up this simulation to over one million granule cells using parallel graphics processing unit (GPU) technology. We observe that this increase in number of granule cells requires only twice the execution time of the smaller simulation on the GPU. We demonstrate that this simulation, like its smaller predecessor, can emulate certain basic features of conditioned eyelid responses, with a slight improvement in performance in one measure. We also use this simulation to examine the generality of the computational properties that we have derived from studying eyelid conditioning. We demonstrate that this scaled-up simulation can learn a high level of performance in a classic machine learning task, the cart-pole balancing task. These results suggest that this parallel GPU technology can be used to build very large-scale simulations whose connectivity ratios match those of the real cerebellum and that these simulations can be used to guide future studies on cerebellar-mediated tasks and on machine learning problems. PMID:23200194

  14. General circulation model simulations of winter and summer sea-level pressures over North America

    USGS Publications Warehouse

    McCabe, G.J., Jr.; Legates, D.R.

    1992-01-01

    In this paper, observed sea-level pressures were used to evaluate winter and summer sea-level pressures over North America simulated by the Goddard Institute for Space Studies (GISS) and the Geophysical Fluid Dynamics Laboratory (GFDL) general circulation models. The objective of the study is to determine how similar the spatial and temporal distributions of GCM-simulated daily sea-level pressures over North America are to observed distributions. Overall, both models are better at reproducing observed within-season variance of winter and summer sea-level pressures than they are at simulating the magnitude of mean winter and summer sea-level pressures. -from Authors

  15. Experiments in monthly mean simulation of the atmosphere with a coarse-mesh general circulation model

    NASA Technical Reports Server (NTRS)

    Lutz, R. J.; Spar, J.

    1978-01-01

    The Hansen atmospheric model was used to compute five monthly forecasts (October 1976 through February 1977). The comparison is based on an energetics analysis, meridional and vertical profiles, error statistics, and prognostic and observed mean maps. The monthly mean model simulations suffer from several defects. There is, in general, no skill in the simulation of the monthly mean sea-level pressure field, and only marginal skill is indicated for the 850 mb temperatures and 500 mb heights. The coarse-mesh model appears to generate a less satisfactory monthly mean simulation than the finer mesh GISS model.

  16. Development and evaluation of a general aviation real world noise simulator

    NASA Technical Reports Server (NTRS)

    Galanter, E.; Popper, R.

    1980-01-01

    An acoustic playback system is described which realistically simulates the sounds experienced by the pilot of a general aviation aircraft during engine idle, take-off, climb, cruise, descent, and landing. The physical parameters of the signal as they appear in the simulator environment are compared to analogous parameters derived from signals recorded during actual flight operations. The acoustic parameters of the simulated and real signals during cruise conditions are within plus or minus two dB in third octave bands from 0.04 to 4 kHz. The overall A-weighted levels of the signals are within one dB of signals generated in the actual aircraft during equivalent maneuvers. Psychoacoustic evaluations of the simulator signal are compared with similar measurements based on transcriptions of actual aircraft signals. The subjective judgments made by human observers support the conclusion that the simulated sound closely approximates transcribed sounds of real aircraft.

  17. Development and evaluation of a general aviation real world noise simulator

    NASA Astrophysics Data System (ADS)

    Galanter, E.; Popper, R.

    1980-03-01

    An acoustic playback system is described which realistically simulates the sounds experienced by the pilot of a general aviation aircraft during engine idle, take-off, climb, cruise, descent, and landing. The physical parameters of the signal as they appear in the simulator environment are compared to analogous parameters derived from signals recorded during actual flight operations. The acoustic parameters of the simulated and real signals during cruise conditions are within plus or minus two dB in third octave bands from 0.04 to 4 kHz. The overall A-weighted levels of the signals are within one dB of signals generated in the actual aircraft during equivalent maneuvers. Psychoacoustic evaluations of the simulator signal are compared with similar measurements based on transcriptions of actual aircraft signals. The subjective judgments made by human observers support the conclusion that the simulated sound closely approximates transcribed sounds of real aircraft.

  18. The General-Use Nodal Network Solver (GUNNS) Modeling Package for Space Vehicle Flow System Simulation

    NASA Technical Reports Server (NTRS)

    Harvey, Jason; Moore, Michael

    2013-01-01

    The General-Use Nodal Network Solver (GUNNS) is a modeling software package that combines nodal analysis and the hydraulic-electric analogy to simulate fluid, electrical, and thermal flow systems. GUNNS is developed by L-3 Communications under the TS21 (Training Systems for the 21st Century) project for NASA Johnson Space Center (JSC), primarily for use in space vehicle training simulators at JSC. It has sufficient compactness and fidelity to model the fluid, electrical, and thermal aspects of space vehicles in real-time simulations running on commodity workstations, for vehicle crew and flight controller training. It has a reusable and flexible component and system design, and a Graphical User Interface (GUI), providing capability for rapid GUI-based simulator development, ease of maintenance, and associated cost savings. GUNNS is optimized for NASA's Trick simulation environment, but can be run independently of Trick.
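    The core of a nodal-network solver of this kind is the assembly and solution of a conductance matrix relating node potentials (pressures, voltages, or temperatures under the hydraulic-electric analogy) to injected flows. The sketch below is a generic linear nodal-analysis step on a hypothetical three-node network; it does not reflect GUNNS's actual data structures or API:

```python
import numpy as np

def solve_nodal_network(n_nodes, links, sources):
    """Generic linear nodal analysis. links are (i, j, conductance) tuples,
    with j = -1 meaning a fixed reference ('ground') node; sources[i] is the
    flow injected at node i. Returns the node potentials."""
    G = np.zeros((n_nodes, n_nodes))
    for i, j, g in links:
        G[i, i] += g
        if j >= 0:
            G[j, j] += g
            G[i, j] -= g
            G[j, i] -= g
    return np.linalg.solve(G, np.asarray(sources, dtype=float))

# Hypothetical network: source feeds node 0, node 2 is tied to the reference.
links = [(0, 1, 2.0), (1, 2, 1.0), (2, -1, 0.5)]
print(solve_nodal_network(3, links, [1.0, 0.0, 0.0]))
```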

  19. Estimating plant available water for general crop simulations in ALMANAC/APEX/EPIC/SWAT

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Process-based simulation models ALMANAC/APEX/EPIC/SWAT contain generalized plant growth subroutines to predict biomass and crop yield. Environmental constraints typically restrict plant growth and yield. Water stress is often an important limiting factor; it is calculated as the sum of water use f...

  20. Computer considerations for real time simulation of a generalized rotor model

    NASA Technical Reports Server (NTRS)

    Howe, R. M.; Fogarty, L. E.

    1977-01-01

    Scaled equations were developed to meet requirements for real-time computer simulation of the rotor system research aircraft. These equations form the basis for consideration of both digital and hybrid mechanization for real-time simulation. For all-digital simulation, estimates of the required speed in terms of equivalent operations per second are developed based on the complexity of the equations and the required integration frame rates. For both conventional hybrid simulation and hybrid simulation using time-shared analog elements, the amount of required equipment is estimated, along with a consideration of the dynamic errors. Conventional hybrid mechanization using analog simulation of those rotor equations which involve rotor-spin frequencies (these constitute the bulk of the equations) requires too much analog equipment. Hybrid simulation using time-sharing techniques for the analog elements appears possible with a reasonable amount of analog equipment. All-digital simulation with affordable general-purpose computers is not possible because of speed limitations, but specially configured digital computers do have the required speed and constitute the recommended approach.

  1. Hybrid General Pattern Search and Simulated Annealing for Industrial Production Planning Problems

    NASA Astrophysics Data System (ADS)

    Vasant, P.; Barsoum, N.

    2010-06-01

    In this paper, the GPS (General Pattern Search) method and SA (Simulated Annealing) are hybridized in the optimization process in order to search for the global optimal solution of the fitness function and decision variables with minimum computational CPU time. The real strength of the SA approach is tested on this case-study problem of industrial production planning. This is due to the great advantage of SA in easily escaping from local minima by accepting up-hill moves through a probabilistic procedure in the final stages of the optimization process. In his Ph.D. thesis, Vasant [1] provided 16 different heuristic and meta-heuristic techniques for solving industrial production problems with non-linear cubic objective functions, eight decision variables, and 29 constraints. In this paper, fuzzy technological problems have been solved using hybrid techniques of general pattern search and simulated annealing. The simulated and computational results are compared to various other evolutionary techniques.
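    The simulated annealing component of the hybrid method can be illustrated with a minimal generic implementation: up-hill moves are accepted with a temperature-dependent probability, which is the escape mechanism noted above. The sketch below uses a toy quadratic objective, not the paper's cubic production-planning model with eight variables and 29 constraints:

```python
import math, random

def simulated_annealing(f, x0, step=0.1, t0=1.0, cooling=0.995, iters=5000):
    """Minimal simulated annealing for a scalar objective f on a real vector.
    Up-hill moves are accepted with probability exp(-delta/T), which is what
    lets the search escape local minima."""
    x, fx = list(x0), f(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(iters):
        cand = [xi + random.uniform(-step, step) for xi in x]
        fc = f(cand)
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling          # geometric cooling schedule
    return best, fbest

# Toy objective (illustrative only): minimum at (1, -2).
print(simulated_annealing(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2, [5.0, 5.0]))
```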

  2. Serial Generalized Ensemble Simulations of Biomolecules with Self-Consistent Determination of Weights.

    PubMed

    Chelli, Riccardo; Signorini, Giorgio F

    2012-03-13

    Serial generalized ensemble simulations, such as simulated tempering, enhance phase space sampling through non-Boltzmann weighting protocols. The most critical aspect of these methods with respect to the popular replica exchange schemes is the difficulty in determining the weight factors which enter the criterion for accepting replica transitions between different ensembles. Recently, a method, called BAR-SGE, was proposed for estimating optimal weight factors by resorting to a self-consistent procedure applied during the simulation (J. Chem. Theory Comput.2010, 6, 1935-1950). Calculations on model systems have shown that BAR-SGE outperforms other approaches proposed for determining optimal weights in serial generalized ensemble simulations. However, extensive tests on real systems and on convergence features with respect to the replica exchange method are lacking. Here, we report on a thorough analysis of BAR-SGE by performing molecular dynamics simulations of a solvated alanine dipeptide, a system often used as a benchmark to test new computational methodologies, and comparing results to the replica exchange method. To this aim, we have supplemented the ORAC program, a FORTRAN suite for molecular dynamics simulations (J. Comput. Chem.2010, 31, 1106-1116), with several variants of the BAR-SGE technique. An illustration of the specific BAR-SGE algorithms implemented in the ORAC program is also provided. PMID:26593345
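    In serial generalized ensemble methods such as simulated tempering, the weight factors enter the acceptance test for switching the single replica between ensembles; BAR-SGE's contribution is estimating those weights self-consistently during the run. The sketch below shows only the standard acceptance rule with the weights taken as given, and all numbers are illustrative:

```python
import math, random

def accept_temperature_move(E, beta_cur, beta_new, g_cur, g_new):
    """Simulated-tempering (serial generalized ensemble) acceptance test for
    switching a configuration with potential energy E from inverse temperature
    beta_cur to beta_new, given the weight factors g of the two ensembles."""
    log_ratio = -(beta_new - beta_cur) * E + (g_new - g_cur)
    return log_ratio >= 0.0 or random.random() < math.exp(log_ratio)

# Illustrative numbers only (energies in arbitrary units):
print(accept_temperature_move(E=-120.0, beta_cur=1.0, beta_new=0.9,
                              g_cur=0.0, g_new=-12.5))
```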

  3. Gyrokinetic particle simulation of microturbulence for general magnetic geometry and experimental profiles

    SciTech Connect

    Xiao, Yong; Holod, Ihor; Wang, Zhixuan; Lin, Zhihong; Zhang, Taige

    2015-02-15

    Developments in gyrokinetic particle simulation enable the gyrokinetic toroidal code (GTC) to simulate turbulent transport in tokamaks with realistic equilibrium profiles and plasma geometry, which is a critical step in the code–experiment validation process. These new developments include numerical equilibrium representation using B-splines, a new Poisson solver based on finite difference using field-aligned mesh and magnetic flux coordinates, a new zonal flow solver for general geometry, and improvements on the conventional four-point gyroaverage with nonuniform background marker loading. The gyrokinetic Poisson equation is solved in the perpendicular plane instead of the poloidal plane. Exploiting these new features, GTC is able to simulate a typical DIII-D discharge with experimental magnetic geometry and profiles. The simulated turbulent heat diffusivity and its radial profile show good agreement with other gyrokinetic codes. The newly developed nonuniform loading method provides a modified radial transport profile to that of the conventional uniform loading method.

  4. The Tropical Subseasonal Variability Simulated in the NASA GISS General Circulation Model

    NASA Technical Reports Server (NTRS)

    Kim, Daehyun; Sobel, Adam H.; DelGenio, Anthony D.; Chen, Yonghua; Camargo, Suzana J.; Yao, Mao-Sung; Kelley, Maxwell; Nazarenko, Larissa

    2012-01-01

    The tropical subseasonal variability simulated by the Goddard Institute for Space Studies general circulation model, Model E2, is examined. Several versions of Model E2 were developed with changes to the convective parameterization in order to improve the simulation of the Madden-Julian oscillation (MJO). When the convective scheme is modified to have a greater fractional entrainment rate, Model E2 is able to simulate MJO-like disturbances with proper spatial and temporal scales. Increasing the rate of rain reevaporation has additional positive impacts on the simulated MJO. The improvement in MJO simulation comes at the cost of increased biases in the mean state, consistent in structure and amplitude with those found in other GCMs when tuned to have a stronger MJO. By reinitializing a relatively poor-MJO version with restart files from a relatively better-MJO version, a series of 30-day integrations is constructed to examine the impacts of the parameterization changes on the organization of tropical convection. The poor-MJO version with smaller entrainment rate has a tendency to allow convection to be activated over a broader area and to reduce the contrast between dry and wet regimes so that tropical convection becomes less organized. Besides the MJO, the number of tropical-cyclone-like vortices simulated by the model is also affected by changes in the convection scheme. The model simulates a smaller number of such storms globally with a larger entrainment rate, while the number increases significantly with a greater rain reevaporation rate.

  5. Robustness of a high-resolution central scheme for hydrodynamic simulations in full general relativity

    NASA Astrophysics Data System (ADS)

    Shibata, Masaru; Font, José A.

    2005-08-01

    A recent paper by Lucas-Serrano et al. [A. Lucas-Serrano, J. A. Font, J. M. Ibánez, and J. M. Martí, Astron. Astrophys. 428, 703 (2004)] indicates that a high-resolution central (HRC) scheme is robust enough to yield accurate hydrodynamical simulations of special relativistic flows in the presence of ultrarelativistic speeds and strong shock waves. In this paper we apply this scheme in full general relativity (involving dynamical spacetimes), and assess its suitability by performing test simulations for oscillations of rapidly rotating neutron stars and merger of binary neutron stars. It is demonstrated that this HRC scheme can yield results as accurate as those by the so-called high-resolution shock-capturing (HRSC) schemes based upon Riemann solvers. Furthermore, the adopted HRC scheme has increased computational efficiency as it avoids the costly solution of Riemann problems and has practical advantages in the modeling of neutron star spacetimes. Namely, it allows simulations with stiff equations of state by successfully dealing with very low-density unphysical atmospheres. These facts not only suggest that such a HRC scheme may be a desirable tool for hydrodynamical simulations in general relativity, but also open the possibility to perform accurate magnetohydrodynamical simulations in curved dynamic spacetimes.
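    Central schemes of the kind evaluated in the paper replace Riemann solvers with simple flux formulas built from local wave-speed bounds. As a much-reduced illustration, the sketch below applies a first-order local Lax-Friedrichs flux to the 1D Burgers equation; it is not the relativistic hydrodynamics scheme itself:

```python
import numpy as np

def lax_friedrichs_step(u, dx, dt):
    """One first-order local Lax-Friedrichs (central-scheme style) update for
    the 1D Burgers equation u_t + (u^2/2)_x = 0 with periodic boundaries.
    Central schemes like this avoid solving Riemann problems at interfaces."""
    f = 0.5 * u * u
    up, fp = np.roll(u, -1), np.roll(f, -1)      # right-hand neighbours
    a = np.maximum(np.abs(u), np.abs(up))        # local wave-speed bound
    flux_right = 0.5 * (f + fp) - 0.5 * a * (up - u)
    flux_left = np.roll(flux_right, 1)
    return u - dt / dx * (flux_right - flux_left)

# Illustrative run on a smooth initial profile that steepens into a shock.
x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.sin(2 * np.pi * x) + 0.5
for _ in range(100):
    u = lax_friedrichs_step(u, dx=1.0 / 200, dt=0.002)
print(u.min(), u.max())
```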

  6. The Early Jurassic climate: General circulation model simulations and the paleoclimate record

    SciTech Connect

    Chandler, M.A.

    1992-01-01

    This thesis presents the results of several general circulation model simulations of the Early Jurassic climate. The general circulation model employed was developed at the Goddard Institute for Space Studies while most paleoclimate data were provided by the Paleogeographic Atlas Project of the University of Chicago. The first chapter presents an Early Jurassic base simulation, which uses detailed reconstructions of paleogeography, vegetation, and sea surface temperature as boundary condition data sets. The resulting climatology reveals an Earth 5.2°C warmer, globally, than at present and a latitudinal temperature gradient dominated by high-latitude warming (+20°C) and little tropical change (+1°C). Comparisons show a good correlation between simulated results and paleoclimate data. Sensitivity experiments are used to investigate any model-data mismatches. Chapters two and three discuss two important aspects of Early Jurassic climate, continental aridity and global warming. Chapter two focuses on the hydrological capabilities of the general circulation model. The general circulation model's hydrologic diagnostics are evaluated, using the distribution of modern deserts and Early Jurassic paleoclimate data as validating constraints. A new method, based on general circulation model diagnostics and empirical formulae, is proposed for evaluating moisture balance. Chapter three investigates the cause of past global warming, concentrating on the role of increased ocean heat transport. Early Jurassic simulations show that increased ocean heat transports may have been a major factor in past climates. Increased ocean heat transports create latitudinal temperature gradients that closely approximate paleoclimate data and solve the problem of tropical overheating that results from elevated atmospheric carbon dioxide. Increased carbon dioxide cannot duplicate the Jurassic climate without also including increased ocean heat transports.

  7. Generalized math model for simulation of high-altitude balloon systems

    NASA Technical Reports Server (NTRS)

    Nigro, N. J.; Elkouh, A. F.; Hinton, D. E.; Yang, J. K.

    1985-01-01

    Balloon systems have proved to be a cost-effective means for conducting research experiments (e.g., infrared astronomy) in the earth's atmosphere. The purpose of this paper is to present a generalized mathematical model that can be used to simulate the motion of these systems once they have attained float altitude. The resulting form of the model is such that the pendulation and spin motions of the system are uncoupled and can be analyzed independently. The model is evaluated by comparing the simulation results with data obtained from an actual balloon system flown by NASA.

  8. General-relativistic simulations of binary black hole-neutron stars: Precursor electromagnetic signals

    NASA Astrophysics Data System (ADS)

    Paschalidis, Vasileios; Etienne, Zachariah B.; Shapiro, Stuart L.

    2013-07-01

    We perform the first general relativistic force-free simulations of neutron star magnetospheres in orbit about spinning and nonspinning black holes. We find promising precursor electromagnetic emission: typical Poynting luminosities at, e.g., an orbital separation of r = 6.6 R_NS are L_EM ~ 6 × 10^42 (B_NS,p/10^13 G)^2 (M_NS/1.4 M_⊙)^2 erg/s. The Poynting flux peaks within a broad beam of ~40° in the azimuthal direction and within ~60° from the orbital plane, establishing a possible lighthouse effect. Our calculations, though preliminary, preview more detailed simulations of these systems that we plan to perform in the future.
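    The quoted luminosity is a scaling relation in the neutron star's polar magnetic field strength and mass, so it can be evaluated directly at other parameter values. A short sketch with illustrative inputs:

```python
def precursor_luminosity(B_pole_G, M_ns_Msun):
    """Evaluate the scaling L_EM ~ 6e42 (B/1e13 G)^2 (M/1.4 Msun)^2 erg/s
    quoted above, valid at the stated r = 6.6 R_NS orbital separation."""
    return 6.0e42 * (B_pole_G / 1.0e13) ** 2 * (M_ns_Msun / 1.4) ** 2

# Illustrative: a 1e12 G, 1.4 Msun neutron star gives roughly 6e40 erg/s.
print(f"{precursor_luminosity(1.0e12, 1.4):.2e} erg/s")
```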

  9. Cloud-radiative effects on implied oceanic energy transports as simulated by atmospheric general circulation models

    SciTech Connect

    Gleckler, P.J.; Randall, D.A.; Boer, G.

    1994-03-01

    This paper reports on energy fluxes across the surface of the ocean as simulated by fifteen atmospheric general circulation models in which ocean surface temperatures and sea-ice boundaries are prescribed. The oceanic meridional energy transport that would be required to balance these surface fluxes is computed, and is shown to be critically sensitive to the radiative effects of clouds, to the extent that even the sign of the Southern Hemisphere ocean energy transport can be affected by the errors in simulated cloud-radiation interactions.

  10. Evaluation of the Event Driven Phenology Model Coupled to the VegET Evapotranspiration Model Using Spatially Explicit Comparisons with Independent Reference Data

    NASA Astrophysics Data System (ADS)

    Kovalskyy, V.; Henebry, G. M.; Roy, D. P.; Senay, G. B.

    2011-12-01

    Vegetation growing cycles have a profound influence on regional evapotranspiration regimes. The recently developed Event Driven Phenology Model (EDPM) is an empirical crop-specific phenology model with data assimilation capabilities. Deployed in prognostic mode, the EDPM uses weather forcing data to produce daily estimates of phenology coefficients; and in diagnostic mode a one-dimensional Kalman filter is used to adjust EDPM estimates with satellite normalized difference vegetation index (NDVI) retrievals. In this study the EDPM is coupled to the VegET model that uses the Penman-Monteith equation to calculate reference ET and a water balance model for water stress coefficients to derive daily actual evapotranspiration. The coupled models were run for the croplands of the U.S. Northern Great Plains for three annual growing seasons to derive 8-day total actual evapotranspiration (ETa) estimates at 0.05° spatial resolution. The models were driven by North American Land Data Assimilation System (NLDAS) weather forcing and parameterized using annual MODIS cropland cover maps. Regional validation of the modeled NDVI and ETa were undertaken by comparison with MODIS NDVI and MODIS ETa products respectively. The modeled NDVI had a median coefficient of determination (r2) of 0.83 and a root mean square error (RMSE) of 0.15 within study area. With the EDPM deployed in both prognostic and diagnostic modes, the modeled ETa had r2 of 0.75 and RMSE of about 25% of season average ETa per observation period. With small computational effort these results yield comparable accuracy to those from computationally complex models of ETa which require more parameterization. The performance of the coupling scheme demonstrates that the modeling approach is a promising avenue for regional application studies.
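    The diagnostic-mode coupling described above reduces, at each update, to a scalar Kalman filter blending the EDPM forecast with a MODIS NDVI retrieval. The sketch below shows that one-dimensional update step; the variances and NDVI values are illustrative assumptions, not values from the study:

```python
def kalman_update_1d(x_pred, var_pred, z_obs, var_obs):
    """Scalar Kalman filter update: blend a model prediction with an
    observation according to their variances (illustrative values only)."""
    gain = var_pred / (var_pred + var_obs)
    x_new = x_pred + gain * (z_obs - x_pred)
    var_new = (1.0 - gain) * var_pred
    return x_new, var_new

# EDPM-style example: the model predicts NDVI 0.55, the MODIS retrieval says 0.62.
print(kalman_update_1d(x_pred=0.55, var_pred=0.004, z_obs=0.62, var_obs=0.002))
```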

  11. High frequency scattering by a smooth coated cylinder simulated with generalized impedance boundary conditions

    NASA Technical Reports Server (NTRS)

    Syed, Hasnain H.; Volakis, John L.

    1991-01-01

    Rigorous uniform geometrical theory of diffraction (UGTD) diffraction coefficients are presented for a coated convex cylinder simulated with generalized impedance boundary conditions. In particular, ray solutions are obtained which remain valid in the transition region and reduce uniformly to those in the deep lit and shadow regions. These involve new transition functions in place of the usual Fock-type integrals, characteristic to the impedance cylinder. A uniform asymptotic solution is also presented for observations in the close vicinity of the cylinder. As usual, the diffraction coefficients for the convex cylinder are obtained via a generalization of the corresponding ones for the circular cylinder.

  12. A General Relativistic Magnetohydrodynamics Simulation of Jet Formation with a State Transition

    NASA Technical Reports Server (NTRS)

    Nishikawa, K. I.; Richardson, G.; Koide, S.; Shibata, K.; Kudoh, T.; Hardee, P.; Fushman, G. J.

    2004-01-01

    We have performed the first fully three-dimensional general relativistic magnetohydrodynamic (GRMHD) simulation of jet formation from a thin accretion disk around a Schwarzschild black hole with a free-falling corona. The initial simulation results show that a bipolar jet (velocity ~ 0.3c) is created as shown by previous two-dimensional axisymmetric simulations with mirror symmetry at the equator. The 3-D simulation ran over one hundred light-crossing time units, which is considerably longer than the previous simulations. We show that the jet is initially formed as predicted, due in part to magnetic pressure from the twisting of the initially uniform magnetic field and to gas pressure associated with shock formation. At later times, the accretion disk becomes thick and the jet fades, resulting in a wind that is ejected from the surface of the thickened (torus-like) disk. It should be noted that no streaming matter from a donor is included at the outer boundary in the simulation (an isolated black hole, not a binary black hole). The wind flows outwards with a wider angle than the initial jet. The widening of the jet is consistent with the outward moving shock wave. This evolution of jet-disk coupling suggests that the low/hard state of the jet system may switch to the high/soft state with a wind, as the accretion rate diminishes.

  13. Efficient classical simulation of matchgate circuits with generalized inputs and measurements

    NASA Astrophysics Data System (ADS)

    Brod, Daniel J.

    2016-06-01

    Matchgates are a restricted set of two-qubit gates known to be classically simulable under particular conditions. Specifically, if a circuit consists only of nearest-neighbor matchgates, an efficient classical simulation is possible if either (i) the input is a computational-basis state and the simulation requires computing probabilities of multiqubit outcomes (including also adaptive measurements) or (ii) if the input is an arbitrary product state, but the output of the circuit consists of a single qubit. In this paper we extend these results to show that matchgates are classically simulable even in the most general combination of these settings, namely, if the inputs are arbitrary product states, if the measurements are over arbitrarily many output qubits, and if adaptive measurements are allowed. This remains true even for arbitrary single-qubit measurements, albeit only in a weaker notion of classical simulation. These results make for an interesting contrast with other restricted models of computation, such as Clifford circuits or (bosonic) linear optics, where the complexity of simulation varies greatly under similar modifications.

  14. Simulator Evaluation of Runway Incursion Prevention Technology for General Aviation Operations

    NASA Technical Reports Server (NTRS)

    Jones, Denise R.; Prinzel, Lawrence J., III

    2011-01-01

    A Runway Incursion Prevention System (RIPS) has been designed under previous research to enhance airport surface operations situation awareness and provide cockpit alerts of potential runway conflict, during transport aircraft category operations, in order to prevent runway incidents while also improving operations capability. This study investigated an adaptation of RIPS for low-end general aviation operations using a fixed-based simulator at the National Aeronautics and Space Administration (NASA) Langley Research Center (LaRC). The purpose of the study was to evaluate modified RIPS aircraft-based incursion detection algorithms and associated alerting and airport surface display concepts for low-end general aviation operations. This paper gives an overview of the system, simulation study, and test results.

  15. Well-posedness and generalized plane waves simulations of a 2D mode conversion model

    NASA Astrophysics Data System (ADS)

    Imbert-Gérard, Lise-Marie

    2015-12-01

    Certain types of electro-magnetic waves propagating in a plasma can undergo a mode conversion process. In magnetic confinement fusion, this phenomenon is very useful to heat the plasma, since it permits to transfer the heat at or near the plasma center. This work focuses on a mathematical model of wave propagation around the mode conversion region, from both theoretical and numerical points of view. It aims at developing, for a well-posed equation, specific basis functions to study a wave mode conversion process. These basis functions, called generalized plane waves, are intrinsically based on variable coefficients. As such, they are particularly adapted to the mode conversion problem. The design of generalized plane waves for the proposed model is described in detail. Their implementation within a discontinuous Galerkin method then provides numerical simulations of the process. These first 2D simulations for this model agree with qualitative aspects studied in previous works.

  16. Using MASS for AO simulations: a note on the comparison between MASS and Generalized SCIDAR techniques

    NASA Astrophysics Data System (ADS)

    Lombardi, G.; Sarazin, M.

    2016-01-01

    Recent studies comparing the Multi Aperture Scintillation Sensor (MASS) and Generalized Scintillation Detection and Ranging (G-SCIDAR) profiler techniques have suggested significant discrepancies between the results delivered by the two instruments. MASS has been largely used in the recent site testing campaigns for the future next-generation giant telescopes [i.e. the European Extremely Large Telescope, the Thirty Meter Telescope (TMT) and the Giant Magellan Telescope (GMT)] and is still used to monitor the conditions of world-class astronomical sites, as well as to deliver free-atmosphere turbulence profiles to feed Adaptive Optics performance simulations. In this paper, we explore a different approach to the comparison between MASS and Generalized SCIDAR techniques with respect to previous studies, in order to provide a method for the use of the MASS databases accumulated at the European Southern Observatory Paranal Observatory in Adaptive Optics simulations.

  17. SIMULATION OF GENERAL ANESTHESIA ON THE "SIMMAN 3G" AND ITS EFFICIENCY.

    PubMed

    Potapov, A F; Matveev, A S; Ignatiev, V G; Ivanova, A A; Aprosimov, L A

    2015-01-01

    In recent years, new innovative technologies involving computer simulation have been widely adopted in medical education, providing realistic reproduction of medical interventions and procedures. Practice-oriented training using simulation improves the assimilation of learning material by recreating professional activity and bringing theoretical material closer to practice. The aim of the investigation was to evaluate the efficiency of student training at the Medical Institute on the topic "General Anesthesia" using the modern simulator "SimMan 3G". The material of the investigation comprised results obtained at the Centre of Practical Skills and Medical Virtual Educational Technologies (Simulation Centre) of the Medical Institute of NEFU named after M.K. Ammosov. The subjects of the investigation were 55 third-year students of the Faculty of General Medicine of the Medical Institute of NEFU. The investigation was held during practical training sessions (April-May 2014) of the General Surgery Department on the topic "General Anesthesia". The simulation practical course "General Anesthesia" consisted of 12 academic hours. Practical training was carried out using the instruments, equipment, and facilities needed to administer anesthesia on the SimMan 3G, with video recording of the process and subsequent discussion of the results. The methods of investigation were assessment of students' background knowledge before and after the practical training (on a 5-point scale) and analysis of the results. The results of the investigation showed that before the practical course only 23 students (41.8%) had obtained positive marks: "Good" for 7 students (12.7%) and "Satisfactory" for 16 students (29.1%). The remaining 22 students (58.2%) had unsatisfactory results. The practical training using real instruments, equipment, and facilities, with simulated administration of preparations for induction anesthesia, main analgesics, and muscle relaxants, showed a patient's reaction to the

  18. Improved Carbohydrate Structure Generalization Scheme for (1)H and (13)C NMR Simulations.

    PubMed

    Kapaev, Roman R; Toukach, Philip V

    2015-07-21

    The improved Carbohydrate Structure Generalization Scheme has been developed for the simulation of (13)C and (1)H NMR spectra of oligo- and polysaccharides and their derivatives, including those containing noncarbohydrate constituents found in natural glycans. Besides adding the (1)H NMR calculations, we improved the accuracy and performance of prediction and optimized the mathematical model of the precision estimation. This new approach outperformed other methods of chemical shift simulation, including database-driven, neural net-based, and purely empirical methods and quantum-mechanical calculations at high theory levels. It can process structures with rarely occurring and noncarbohydrate constituents unsupported by the other methods. The algorithm is transparent to users and allows tracking used reference NMR data to original publications. It was implemented in the Glycan-Optimized Dual Empirical Spectrum Simulation (GODESS) web service, which is freely available at the platform of the Carbohydrate Structure Database (CSDB) project ( http://csdb.glycoscience.ru). PMID:26087011

  19. GOOSE 1. 4 -- Generalized Object-Oriented Simulation Environment user's manual

    SciTech Connect

    Nypaver, D.J.; Abdalla, M.A.; Guimaraes, L. (Inst. de Estudos Avancados, Sao Jose dos Campos, SP)

    1992-11-01

    The Generalized Object-Oriented Simulation Environment (GOOSE) is a new and innovative simulation tool that is being developed by the Simulation Group of the Advanced Controls Program at Oak Ridge National Laboratory. GOOSE is a fully interactive prototype software package that provides users with the capability of creating sophisticated mathematical models of physical systems. GOOSE uses an object-oriented approach to modeling and combines the concept of modularity (building a complex model easily from a collection of previously written components) with the additional features of allowing precompilation, optimization, and testing and validation of individual modules. Once a library of components has been defined and compiled, models can be built and modified without recompilation. This user's manual provides detailed descriptions of the structure and component features of GOOSE, along with a comprehensive example using a simplified model of a pressurized water reactor.

  20. GOOSE 1.4 -- Generalized Object-Oriented Simulation Environment user's manual

    SciTech Connect

    Nypaver, D.J.; Abdalla, M.A.; Guimaraes, L.

    1992-11-01

    The Generalized Object-Oriented Simulation Environment (GOOSE) is a new and innovative simulation tool that is being developed by the Simulation Group of the Advanced Controls Program at Oak Ridge National Laboratory. GOOSE is a fully interactive prototype software package that provides users with the capability of creating sophisticated mathematical models of physical systems. GOOSE uses an object-oriented approach to modeling and combines the concept of modularity (building a complex model easily from a collection of previously written components) with the additional features of allowing precompilation, optimization, and testing and validation of individual modules. Once a library of components has been defined and compiled, models can be built and modified without recompilation. This user's manual provides detailed descriptions of the structure and component features of GOOSE, along with a comprehensive example using a simplified model of a pressurized water reactor.

  1. Simulation and flight evaluation of a heads-up display for general aviation

    NASA Technical Reports Server (NTRS)

    Harris, R. L., Sr.

    1974-01-01

    A landing-site indicator (LASI) has been devised as a relatively simple heads-up display to show the pilot the magnitude and direction of the aircraft's velocity vector superimposed on the pilot's view of the landing area. A total of 160 landings were performed in a fixed-base simulation program by four pilots with and without the LASI display. These tests showed the display to be beneficial in making the approaches more consistent. There were also indications that the physical workload would be lower with its use. The pilots generally agreed that the LASI, as represented in the simulation, was a useful landing aid. Additional pilot comments from preliminary flight tests of a breadboard LASI display unit tend to confirm the simulator results.

  2. Simulation of charge breeding of rubidium using Monte Carlo charge breeding code and generalized ECRIS model

    SciTech Connect

    Zhao, L.; Cluggish, B.; Kim, J. S.; Pardo, R.; Vondrasek, R.

    2010-02-15

    A Monte Carlo charge breeding code (MCBC) is being developed by FAR-TECH, Inc. to model the capture and charge breeding of a 1+ ion beam in an electron cyclotron resonance ion source (ECRIS) device. The ECRIS plasma is simulated using the generalized ECRIS model, which offers two choices of boundary settings: a free boundary condition and the Bohm condition. The charge state distribution of the extracted beam ions is calculated by solving the steady-state ion continuity equations, where the profiles of the captured ions are used as source terms. MCBC simulations of the charge breeding of Rb+ showed good agreement with recent charge breeding experiments at Argonne National Laboratory (ANL). MCBC correctly predicted the peak of the highly charged ion state outputs under the free boundary condition, and a similar charge state distribution width but a lower peak charge state under the Bohm condition. The comparisons between the simulation results and ANL experimental measurements are presented and discussed.

  3. Simulation and flight evaluation of a head-up landing aid for general aviation

    NASA Technical Reports Server (NTRS)

    Harris, R. L., Sr.; Goode, M. W.; Yenni, K. R.

    1978-01-01

    A head-up general aviation landing aid called a landing site indicator (LASI) was tested in a fixed-base, visual simulator and in an airplane to determine the effectiveness of the LASI. The display, which had a simplified format and method of implementation, presented to the pilot in his line of sight through the windshield a graphic representation of the airplane's velocity vector. In each testing mode (simulation or flight), each of 4 pilots made 20 landing approaches with the LASI and 20 approaches without it. The standard deviations of approach and touchdown parameters were considered an indication of pilot consistency. Use of the LASI improved consistency and also reduced elevator, aileron, and rudder control activity. Pilots' comments indicated that the LASI reduced workload. An appendix is included with a discussion of the simulator effectiveness for visual flight tasks.

  4. Towards Observational Astronomy of Jets in Active Galaxies from General Relativistic Magnetohydrodynamic Simulations

    NASA Astrophysics Data System (ADS)

    Anantua, Richard; Blandford, Roger; McKinney, Jonathan; Tchekhovskoy, Alexander

    2016-01-01

    We carry out the process of "observing" simulations of active galactic nuclei (AGN) with relativistic jets (hereafter called jet/accretion disk/black hole (JAB) systems) from ray tracing between image plane and source to convolving the resulting images with a point spread function. Images are generated at arbitrary observer angle relative to the black hole spin axis by implementing spatial and temporal interpolation of conserved magnetohydrodynamic flow quantities from a time series of output datablocks from fully general relativistic 3D simulations. We also describe the evolution of simulations of JAB systems' dynamical and kinematic variables, e.g., velocity shear and momentum density, respectively, and the variation of these variables with respect to observer polar and azimuthal angles. We produce, at frequencies from radio to optical, fixed observer time intensity and polarization maps using various plasma physics motivated prescriptions for the emissivity function of physical quantities from the simulation output, and analyze the corresponding light curves. Our hypothesis is that this approach reproduces observed features of JAB systems such as superluminal bulk flow projections and quasi-periodic oscillations in the light curves more closely than extant stylized analytical models, e.g., cannonball bulk flows. Moreover, our development of user-friendly, versatile C++ routines for processing images of state-of-the-art simulations of JAB systems may afford greater flexibility for observing a wide range of sources from high power BL-Lacs to low power quasars (possibly with the same simulation) without requiring years of observation using multiple telescopes. Advantages of observing simulations instead of observing astrophysical sources directly include: the absence of a diffraction limit, panoramic views of the same object and the ability to freely track features. Light travel time effects become significant for high Lorentz factor and small angles between

  5. Physical formulation and numerical algorithm for simulating N immiscible incompressible fluids involving general order parameters

    SciTech Connect

    Dong, S.

    2015-02-15

    We present a family of physical formulations, and a numerical algorithm, based on a class of general order parameters for simulating the motion of a mixture of N (N⩾2) immiscible incompressible fluids with given densities, dynamic viscosities, and pairwise surface tensions. The N-phase formulations stem from a phase field model we developed in a recent work based on the conservations of mass/momentum, and the second law of thermodynamics. The introduction of general order parameters leads to an extremely strongly-coupled system of (N−1) phase field equations. On the other hand, the general form enables one to compute the N-phase mixing energy density coefficients in an explicit fashion in terms of the pairwise surface tensions. We show that the increased complexity in the form of the phase field equations associated with general order parameters in actuality does not cause essential computational difficulties. Our numerical algorithm reformulates the (N−1) strongly-coupled phase field equations for general order parameters into 2(N−1) Helmholtz-type equations that are completely de-coupled from one another. This leads to a computational complexity comparable to that for the simplified phase field equations associated with certain special choice of the order parameters. We demonstrate the capabilities of the method developed herein using several test problems involving multiple fluid phases and large contrasts in densities and viscosities among the multitude of fluids. In particular, by comparing simulation results with the Langmuir–de Gennes theory of floating liquid lenses we show that the method using general order parameters produces physically accurate results for multiple fluid phases.

  6. Evaluation of the Event Driven Phenology Model Coupled with the VegET Evapotranspiration Model Through Comparisons with Reference Datasets in a Spatially Explicit Manner

    NASA Technical Reports Server (NTRS)

    Kovalskyy, V.; Henebry, G. M.; Adusei, B.; Hansen, M.; Roy, D. P.; Senay, G.; Mocko, D. M.

    2011-01-01

    A new model coupling scheme with remote sensing data assimilation was developed for estimation of daily actual evapotranspiration (ET). The scheme represents a mix of the VegET, a physically based model to estimate ET from a water balance, and an event driven phenology model (EDPM), where the EDPM is an empirically derived crop-specific model capable of producing seasonal trajectories of canopy attributes. In this experiment, the scheme was deployed in a spatially explicit manner within the croplands of the Northern Great Plains. The evaluation was carried out using 2007-2009 land surface forcing data from the North American Land Data Assimilation System (NLDAS) and crop maps derived from remotely sensed data of NASA's Moderate Resolution Imaging Spectroradiometer (MODIS). We compared the canopy parameters produced by the phenology model with normalized difference vegetation index (NDVI) data derived from the MODIS nadir bi-directional reflectance distribution function (BRDF) adjusted reflectance (NBAR) product. The expectations of the EDPM performance in prognostic mode were met, producing a coefficient of determination (r²) of 0.8 ± 0.15. Model estimates of NDVI yielded a root mean square error (RMSE) of 0.1 ± 0.035 for the entire study area. Retrospective correction of canopy dynamics with MODIS NDVI brought the errors down to just below 10% of the observed data range. The ET estimates produced by the coupled scheme were compared with ones from the MODIS land product suite. The expected r² = 0.7 ± 0.15 and RMSE = 11.2 ± 4 mm per 8 days were met and even exceeded by the coupling scheme functioning in both prognostic and retrospective modes. Minor setbacks of the EDPM and VegET performance (r² of about 0.5 and an additional 30% RMSE) were found on the peripheries of the study area and attributed to insufficient EDPM training and to the spatially varying accuracy of crop maps. Overall the experiment provided sufficient evidence of soundness and robustness of the EDPM and

  7. Continuity-based model interfacing for plant-wide simulation: a general approach.

    PubMed

    Volcke, Eveline I P; van Loosdrecht, Mark C M; Vanrolleghem, Peter A

    2006-08-01

    In plant-wide simulation studies of wastewater treatment facilities, existing models of different origins often need to be coupled. However, as these submodels are likely to contain different state variables, their coupling is not straightforward. The continuity-based interfacing method (CBIM) provides a general framework to construct model interfaces for models of wastewater systems, taking into account conservation principles. In this contribution, the CBIM approach is applied to study the effect of sludge digestion reject water treatment with a SHARON-Anammox process on a plant-wide scale. Separate models were available for the SHARON process and for the Anammox process. The Benchmark simulation model no. 2 (BSM2) is used to simulate the behaviour of the complete WWTP including sludge digestion. The CBIM approach is followed to develop three different model interfaces. At the same time, the generally applicable CBIM approach was further refined, and particular issues that arise when coupling models in which pH is considered as a state variable are pointed out. PMID:16846629

  8. General relativistic magnetohydrodynamic simulations of accretion on to Sgr A*: how important are radiative losses?

    NASA Astrophysics Data System (ADS)

    Dibi, S.; Drappeau, S.; Fragile, P. C.; Markoff, S.; Dexter, J.

    2012-11-01

    We present general relativistic magnetohydrodynamic numerical simulations of the accretion flow around the supermassive black hole in the Galactic Centre, Sagittarius A* (Sgr A*). The simulations include for the first time radiative cooling processes (synchrotron, bremsstrahlung and inverse Compton) self-consistently in the dynamics, allowing us to test the common simplification of ignoring all cooling losses in the modelling of Sgr A*. We confirm that for Sgr A*, neglecting the cooling losses is a reasonable approximation if the Galactic Centre is accreting below ∼10⁻⁸ M⊙ yr⁻¹, i.e., Ṁ < 10⁻⁷ Ṁ_Edd. However, above this limit, we show that radiative losses should be taken into account as significant differences appear in the dynamics and the resulting spectra when comparing simulations with and without cooling. This limit implies that most nearby low-luminosity active galactic nuclei are in the regime where cooling should be taken into account. We further make a parameter study of axisymmetric gas accretion around the supermassive black hole at the Galactic Centre. This approach allows us to investigate the physics of gas accretion in general, while confronting our results with the well-studied and observed source, Sgr A*, as a test case. We confirm that the nature of the accretion flow and outflow is strongly dependent on the initial geometry of the magnetic field. For example, we find it difficult, even with very high spins, to generate powerful outflows from discs threaded with multiple, separate poloidal field loops.

  9. Generalized nonequilibrium vertex correction method in coherent medium theory for quantum transport simulation of disordered nanoelectronics

    NASA Astrophysics Data System (ADS)

    Yan, Jiawei; Ke, Youqi

    2016-07-01

    Electron transport properties of nanoelectronics can be significantly influenced by the inevitable and randomly distributed impurities/defects. For theoretical simulation of disordered nanoscale electronics, one is interested in both the configurationally averaged transport property and its statistical fluctuation that tells device-to-device variability induced by disorder. However, due to the lack of an effective method to do disorder averaging under the nonequilibrium condition, the important effects of disorders on electron transport remain largely unexplored or poorly understood. In this work, we report a general formalism of Green's function based nonequilibrium effective medium theory to calculate the disordered nanoelectronics. In this method, based on a generalized coherent potential approximation for the Keldysh nonequilibrium Green's function, we developed a generalized nonequilibrium vertex correction method to calculate the average of a two-Keldysh-Green's-function correlator. We obtain nine nonequilibrium vertex correction terms, as a complete family, to express the average of any two-Green's-function correlator and find they can be solved by a set of linear equations. As an important result, the averaged nonequilibrium density matrix, averaged current, disorder-induced current fluctuation, and averaged shot noise, which involve different two-Green's-function correlators, can all be derived and computed in an effective and unified way. To test the general applicability of this method, we applied it to compute the transmission coefficient and its fluctuation with a square-lattice tight-binding model and compared with the exact results and other previously proposed approximations. Our results show very good agreement with the exact results for a wide range of disorder concentrations and energies. In addition, to incorporate with density functional theory to realize first-principles quantum transport simulation, we have also derived a general form of

  10. Multiple processor accelerator for logic simulation

    SciTech Connect

    Catlin, G.M.

    1989-10-17

    This patent describes a computer system coupled to a plurality of users for implementing an event driven algorithm of each of the users. It comprises: a master processor coupled to the users for providing overall control of the computer system and executing the event driven algorithm of each of the users, the master processor further including a master memory; a unidirectional ring bus coupled to the master processor; a plurality of processor modules; an interprocessor bus coupled to the plurality of processors within the module for transferring the simulation data among the processors; and an interface means.
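
    The event-driven scheduling loop underlying an accelerator like this one is the same pattern used throughout discrete-event simulation: keep a time-ordered queue of pending events, repeatedly pop the earliest, advance the clock, and let the handler schedule new events. The following sketch is only an illustration of that generic loop in Python; it is not the patented multiprocessor architecture, and the Event class and gate-toggling example are hypothetical.

      import heapq
      from dataclasses import dataclass, field
      from typing import Any, Callable

      @dataclass(order=True)
      class Event:
          time: float
          action: Callable[[], Any] = field(compare=False)

      class EventDrivenSimulator:
          """Minimal event loop: pop the earliest event, advance the clock,
          execute the handler, and let handlers schedule further events."""
          def __init__(self):
              self._queue = []
              self.now = 0.0

          def schedule(self, delay, action):
              heapq.heappush(self._queue, Event(self.now + delay, action))

          def run(self, until=float("inf")):
              while self._queue and self._queue[0].time <= until:
                  event = heapq.heappop(self._queue)
                  self.now = event.time
                  event.action()

      # Hypothetical usage: a logic gate that toggles every 5 time units until t = 20.
      sim = EventDrivenSimulator()
      def toggle():
          print(f"t={sim.now:4.1f}: gate toggled")
          if sim.now < 20:
              sim.schedule(5, toggle)
      sim.schedule(0, toggle)
      sim.run()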

  11. Nonparametric simulation-based statistics for detecting linkage in general pedigrees

    SciTech Connect

    Davis, S.; Schroeder, M.; Weeks, D.E.; Goldin, L.R.

    1996-04-01

    We present here four nonparametric statistics for linkage analysis that test whether pairs of affected relatives share marker alleles more often than expected. These statistics are based on simulating the null distribution of a given statistic conditional on the unaffecteds' marker genotypes. Each statistic uses a different measure of marker sharing: the SimAPM statistic uses the simulation-based affected-pedigree-member measure based on identity-by-state (IBS) sharing. The SimKIN (kinship) measure is 1.0 for identity-by-descent (IBD) sharing, 0.0 for no IBD sharing, and the kinship coefficient when the IBD status is ambiguous. The simulation-based IBD (SimIBD) statistic uses a recursive algorithm to determine the probability of two affecteds sharing a specific allele IBD. The SimISO statistic is identical to SimIBD, except that it also measures marker similarity between unaffected pairs. We evaluated our statistics on data simulated under different two-locus disease models, comparing our results to those obtained with several other nonparametric statistics. Use of IBD information produces dramatic increases in power over the SimAPM method, which uses only IBS information. The power of our best statistic in most cases meets or exceeds the power of the other nonparametric statistics. Furthermore, our statistics perform comparisons between all affected relative pairs within general pedigrees and are not restricted to sib pairs or nuclear families. 32 refs., 5 figs., 6 tabs.
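
    All four statistics share the same machinery: compute a sharing score for the observed data, simulate its null distribution conditional on the unaffecteds' genotypes, and report how extreme the observed value is. A generic sketch of that Monte Carlo step is shown below; it is not the authors' SimAPM/SimKIN/SimIBD code, and the null-model scoring function is a hypothetical stand-in.

      import numpy as np

      def empirical_p_value(observed_score, simulate_null_score, n_sim=10_000, rng=None):
          """Monte Carlo p-value: fraction of simulated null scores at least as
          extreme as the observed allele-sharing statistic."""
          rng = rng or np.random.default_rng()
          null_scores = np.array([simulate_null_score(rng) for _ in range(n_sim)])
          # Adding 1 to numerator and denominator avoids reporting p = 0.
          return (np.sum(null_scores >= observed_score) + 1) / (n_sim + 1)

      # Hypothetical null model: number of allele-sharing affected pairs out of 10
      # when marker alleles are assigned at random (no linkage).
      def null_score(rng):
          return rng.binomial(n=10, p=0.5)

      print(f"empirical p = {empirical_p_value(9, null_score):.4f}")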

  12. Routine Microsecond Molecular Dynamics Simulations with AMBER on GPUs. 1. Generalized Born

    PubMed Central

    2012-01-01

    We present an implementation of generalized Born implicit solvent all-atom classical molecular dynamics (MD) within the AMBER program package that runs entirely on CUDA enabled NVIDIA graphics processing units (GPUs). We discuss the algorithms that are used to exploit the processing power of the GPUs and show the performance that can be achieved in comparison to simulations on conventional CPU clusters. The implementation supports three different precision models in which the contributions to the forces are calculated in single precision floating point arithmetic but accumulated in double precision (SPDP), or everything is computed in single precision (SPSP) or double precision (DPDP). In addition to performance, we have focused on understanding the implications of the different precision models on the outcome of implicit solvent MD simulations. We show results for a range of tests including the accuracy of single point force evaluations and energy conservation as well as structural properties pertaining to protein dynamics. The numerical noise due to rounding errors within the SPSP precision model is sufficiently large to lead to an accumulation of errors which can result in unphysical trajectories for long time scale simulations. We recommend the use of the mixed-precision SPDP model since the numerical results obtained are comparable with those of the full double precision DPDP model and the reference double precision CPU implementation but at significantly reduced computational cost. Our implementation provides performance for GB simulations on a single desktop that is on par with, and in some cases exceeds, that of traditional supercomputers. PMID:22582031
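
    The distinction between the precision models above comes down to where rounding happens: SPDP evaluates each pairwise contribution in single precision but accumulates the totals in double precision. The toy NumPy sketch below illustrates why the accumulator precision matters; it is not the AMBER CUDA kernel, just an illustration of the idea with synthetic data.

      import numpy as np

      rng = np.random.default_rng(0)
      # 100,000 small per-interaction force contributions, stored in single precision.
      contributions = (rng.random(100_000) * 1e-3).astype(np.float32)

      acc_single = np.float32(0.0)   # SPSP-like: accumulate in single precision
      acc_double = np.float64(0.0)   # SPDP-like: accumulate in double precision
      for f in contributions:
          acc_single = np.float32(acc_single + f)
          acc_double += np.float64(f)

      reference = contributions.astype(np.float64).sum()
      print("relative error, single accumulator:", abs(acc_single - reference) / reference)
      print("relative error, double accumulator:", abs(acc_double - reference) / reference)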

  13. General relativistic N-body simulations in the weak field limit

    NASA Astrophysics Data System (ADS)

    Adamek, Julian; Daverio, David; Durrer, Ruth; Kunz, Martin

    2013-11-01

    We develop a formalism for general relativistic N-body simulations in the weak field regime, suitable for cosmological applications. The problem is kept tractable by retaining the metric perturbations to first order, the first derivatives to second order, and second derivatives to all orders, thus taking into account the most important nonlinear effects of Einstein gravity. It is also expected that any significant “backreaction” should appear at this order. We show that the simulation scheme is feasible in practice by implementing it for a plane-symmetric situation and running two test cases, one with only cold dark matter, and one which also includes a cosmological constant. For these plane-symmetric situations, the deviations from the usual Newtonian N-body simulations remain small and, apart from a nontrivial correction to the background, can be accurately estimated within the Newtonian framework. The correction to the background scale factor, which is a genuine backreaction effect, can be robustly obtained with our algorithm. Our numerical approach is also naturally suited for the inclusion of extra relativistic fields and thus for dark energy or modified gravity simulations.

  14. Automated procedure for developing hybrid computer simulations of turbofan engines. Part 1: General description

    NASA Technical Reports Server (NTRS)

    Szuch, J. R.; Krosel, S. M.; Bruton, W. M.

    1982-01-01

    A systematic, computer-aided, self-documenting methodology for developing hybrid computer simulations of turbofan engines is presented. The methodology makes use of a host program that can run on a large digital computer and a machine-dependent target (hybrid) program. The host program performs all the calculations and data manipulations that are needed to transform user-supplied engine design information to a form suitable for the hybrid computer. The host program also trims the self-contained engine model to match specified design-point information. Part I contains a general discussion of the methodology, describes a test case, and presents comparisons between hybrid simulation and specified engine performance data. Part II, a companion document, contains documentation, in the form of computer printouts, for the test case.

  15. User's guide for a general purpose dam-break flood simulation model (K-634)

    USGS Publications Warehouse

    Land, Larry F.

    1981-01-01

    An existing computer program for simulating dam-break floods for forecast purposes has been modified with an emphasis on general purpose applications. The original model was formulated, developed and documented by the National Weather Service. This model is based on the complete flow equations and uses a nonlinear implicit finite-difference numerical method. The first phase of the simulation routes a flood wave through the reservoir and computes an outflow hydrograph which is the sum of the flow through the dam's structures and the gradually developing breach. The second phase routes this outflow hydrograph through the stream which may be nonprismatic and have segments with subcritical or supercritical flow. The results are discharge and stage hydrographs at the dam as well as all of the computational nodes in the channel. From these hydrographs, peak discharge and stage profiles are tabulated. (USGS)

  16. 2D simulations based on general time-dependent reciprocal relation for LFEIT.

    PubMed

    Karadas, Mursel; Gencer, Nevzat Guneri

    2015-08-01

    Lorentz field electrical impedance tomography (LFEIT) is a newly proposed technique for imaging the conductivity of tissues by measuring the electromagnetic induction under an ultrasound pressure field. In this paper, the theory and numerical simulations of LFEIT are reported based on the general time-dependent formulation. In LFEIT, a phased array ultrasound probe is used to introduce a current distribution inside a conductive body. The velocity current arises from the movement of the conductive particles under a static magnetic field. In order to sense this current, a receiver coil configuration that surrounds the volume conductor is utilized. The finite element method (FEM) is used to carry out the simulations of LFEIT. It is shown that LFEIT can be used to reconstruct the conductivity even up to 50% perturbation in the initial conductivity distribution. PMID:26736569

  17. GENASIS: General Astrophysical Simulation System. I. Refinable Mesh and Nonrelativistic Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Cardall, Christian Y.; Budiardja, Reuben D.; Endeve, Eirik; Mezzacappa, Anthony

    2014-02-01

    GenASiS (General Astrophysical Simulation System) is a new code being developed initially and primarily, though by no means exclusively, for the simulation of core-collapse supernovae on the world's leading capability supercomputers. This paper—the first in a series—demonstrates a centrally refined coordinate patch suitable for gravitational collapse and documents methods for compressible nonrelativistic hydrodynamics. We benchmark the hydrodynamics capabilities of GenASiS against many standard test problems; the results illustrate the basic competence of our implementation, demonstrate the strengths and limitations of the HLLC relative to the HLL Riemann solver in a number of interesting cases, and provide preliminary indications of the code's ability to scale and to function with cell-by-cell fixed-mesh refinement.

  18. General Relativistic Magnetohydrodynamics Simulations of Tilted Black Hole Accretion Flows and Their Radiative Properties

    NASA Astrophysics Data System (ADS)

    Shiokawa, Hotaka; Gammie, C. F.; Dolence, J.; Noble, S. C.

    2013-01-01

    We perform global General Relativistic Magnetohydrodynamics (GRMHD) simulations of non-radiative, magnetized disks that are initially tilted with respect to the black hole's spin axis. We run the simulations with different torus sizes and tilt angles at two different resolutions. We also perform radiative transfer using a Monte Carlo based code that includes synchrotron emission, absorption, and Compton scattering to obtain spectral energy distributions and light curves. Similar work was done by Fragile et al. (2007) and Dexter & Fragile (2012) to model the supermassive black hole Sgr A* with tilted accretion disks. We compare the results of our fully conservative hydrodynamics code, and spectra that extend into the X-ray, with their results.

  19. A Generalized Fast Frequency Sweep Algorithm for Coupled Circuit-EM Simulations

    SciTech Connect

    Rockway, J D; Champagne, N J; Sharpe, R M; Fasenfest, B

    2004-01-14

    Frequency domain techniques are popular for analyzing electromagnetics (EM) and coupled circuit-EM problems. These techniques, such as the method of moments (MoM) and the finite element method (FEM), are used to determine the response of the EM portion of the problem at a single frequency. Since only one frequency is solved at a time, it may take a long time to calculate the parameters for wideband devices. In this paper, a fast frequency sweep based on the Asymptotic Wave Expansion (AWE) method is developed and applied to generalized mixed circuit-EM problems. The AWE method, which was originally developed for lumped-load circuit simulations, has recently been shown to be effective at quasi-static and low frequency full-wave simulations. Here it is applied to a full-wave MoM solver, capable of solving for metals, dielectrics, and coupled circuit-EM problems.

  20. Dust Emissions, Transport, and Deposition Simulated with the NASA Finite-Volume General Circulation Model

    NASA Technical Reports Server (NTRS)

    Colarco, Peter; daSilva, Arlindo; Ginoux, Paul; Chin, Mian; Lin, S.-J.

    2003-01-01

    Mineral dust aerosols have radiative impacts on Earth's atmosphere, have been implicated in local and regional air quality issues, and have been identified as vectors for transporting disease pathogens and bringing mineral nutrients to terrestrial and oceanic ecosystems. We present for the first time dust simulations using online transport and meteorological analysis in the NASA Finite-Volume General Circulation Model (FVGCM). Our dust formulation follows the formulation in the offline Georgia Institute of Technology-Goddard Global Ozone Chemistry Aerosol Radiation and Transport Model (GOCART) using a topographical source for dust emissions. We compare results of the FVGCM simulations with GOCART, as well as with in situ and remotely sensed observations. Additionally, we estimate budgets of dust emission and transport into various regions.

  1. General-relativistic Simulations of Three-dimensional Core-collapse Supernovae

    NASA Astrophysics Data System (ADS)

    Ott, Christian D.; Abdikamalov, Ernazar; Mösta, Philipp; Haas, Roland; Drasco, Steve; O'Connor, Evan P.; Reisswig, Christian; Meakin, Casey A.; Schnetter, Erik

    2013-05-01

    We study the three-dimensional (3D) hydrodynamics of the post-core-bounce phase of the collapse of a 27 M⊙ star and pay special attention to the development of the standing accretion shock instability (SASI) and neutrino-driven convection. To this end, we perform 3D general-relativistic simulations with a three-species neutrino leakage scheme. The leakage scheme captures the essential aspects of neutrino cooling, heating, and lepton number exchange as predicted by radiation-hydrodynamics simulations. The 27 M⊙ progenitor was studied in 2D by Müller et al., who observed strong growth of the SASI while neutrino-driven convection was suppressed. In our 3D simulations, neutrino-driven convection grows from numerical perturbations imposed by our Cartesian grid. It becomes the dominant instability and leads to large-scale non-oscillatory deformations of the shock front. These will result in strongly aspherical explosions without the need for large-scale SASI shock oscillations. Low-l-mode SASI oscillations are present in our models, but saturate at small amplitudes that decrease with increasing neutrino heating and vigor of convection. Our results, in agreement with simpler 3D Newtonian simulations, suggest that once neutrino-driven convection is started, it is likely to become the dominant instability in 3D. Whether it is the primary instability after bounce will ultimately depend on the physical seed perturbations present in the cores of massive stars. The gravitational wave signal, which we extract and analyze for the first time from 3D general-relativistic models, will serve as an observational probe of the postbounce dynamics and, in combination with neutrinos, may allow us to determine the primary hydrodynamic instability.

  2. GENERAL-RELATIVISTIC SIMULATIONS OF THREE-DIMENSIONAL CORE-COLLAPSE SUPERNOVAE

    SciTech Connect

    Ott, Christian D.; Abdikamalov, Ernazar; Moesta, Philipp; Haas, Roland; Drasco, Steve; O'Connor, Evan P.; Reisswig, Christian; Meakin, Casey A.; Schnetter, Erik

    2013-05-10

    We study the three-dimensional (3D) hydrodynamics of the post-core-bounce phase of the collapse of a 27 M⊙ star and pay special attention to the development of the standing accretion shock instability (SASI) and neutrino-driven convection. To this end, we perform 3D general-relativistic simulations with a three-species neutrino leakage scheme. The leakage scheme captures the essential aspects of neutrino cooling, heating, and lepton number exchange as predicted by radiation-hydrodynamics simulations. The 27 M⊙ progenitor was studied in 2D by Mueller et al., who observed strong growth of the SASI while neutrino-driven convection was suppressed. In our 3D simulations, neutrino-driven convection grows from numerical perturbations imposed by our Cartesian grid. It becomes the dominant instability and leads to large-scale non-oscillatory deformations of the shock front. These will result in strongly aspherical explosions without the need for large-scale SASI shock oscillations. Low-l-mode SASI oscillations are present in our models, but saturate at small amplitudes that decrease with increasing neutrino heating and vigor of convection. Our results, in agreement with simpler 3D Newtonian simulations, suggest that once neutrino-driven convection is started, it is likely to become the dominant instability in 3D. Whether it is the primary instability after bounce will ultimately depend on the physical seed perturbations present in the cores of massive stars. The gravitational wave signal, which we extract and analyze for the first time from 3D general-relativistic models, will serve as an observational probe of the postbounce dynamics and, in combination with neutrinos, may allow us to determine the primary hydrodynamic instability.

  3. SIMPSON: A General Simulation Program for Solid-State NMR Spectroscopy

    NASA Astrophysics Data System (ADS)

    Bak, Mads; Rasmussen, Jimmy T.; Nielsen, Niels Chr.

    2000-12-01

    A computer program for fast and accurate numerical simulation of solid-state NMR experiments is described. The program is designed to emulate a NMR spectrometer by letting the user specify high-level NMR concepts such as spin systems, nuclear spin interactions, RF irradiation, free precession, phase cycling, coherence-order filtering, and implicit/explicit acquisition. These elements are implemented using the Tcl scripting language to ensure a minimum of programming overhead and direct interpretation without the need for compilation, while maintaining the flexibility of a full-featured programming language. Basically, there are no intrinsic limitations to the number of spins, types of interactions, sample conditions (static or spinning, powders, uniaxially oriented molecules, single crystals, or solutions), and the complexity or number of spectral dimensions for the pulse sequence. The applicability ranges from simple 1D experiments to advanced multiple-pulse and multiple-dimensional experiments, series of simulations, parameter scans, complex data manipulation/visualization, and iterative fitting of simulated to experimental spectra. A major effort has been devoted to optimizing the computation speed using state-of-the-art algorithms for the time-consuming parts of the calculations implemented in the core of the program using the C programming language. Modification and maintenance of the program are facilitated by releasing the program as open source software (General Public License) currently at http://nmr.imsb.au.dk. The general features of the program are demonstrated by numerical simulations of various aspects for REDOR, rotational resonance, DRAMA, DRAWS, HORROR, C7, TEDOR, POST-C7, CW decoupling, TPPM, F-SLG, SLF, SEMA-CP, PISEMA, RFDR, QCPMG-MAS, and MQ-MAS experiments.

  4. SIMPSON: A general simulation program for solid-state NMR spectroscopy

    NASA Astrophysics Data System (ADS)

    Bak, Mads; Rasmussen, Jimmy T.; Nielsen, Niels Chr.

    2011-12-01

    A computer program for fast and accurate numerical simulation of solid-state NMR experiments is described. The program is designed to emulate a NMR spectrometer by letting the user specify high-level NMR concepts such as spin systems, nuclear spin interactions, RF irradiation, free precession, phase cycling, coherence-order filtering, and implicit/explicit acquisition. These elements are implemented using the Tcl scripting language to ensure a minimum of programming overhead and direct interpretation without the need for compilation, while maintaining the flexibility of a full-featured programming language. Basically, there are no intrinsic limitations to the number of spins, types of interactions, sample conditions (static or spinning, powders, uniaxially oriented molecules, single crystals, or solutions), and the complexity or number of spectral dimensions for the pulse sequence. The applicability ranges from simple 1D experiments to advanced multiple-pulse and multiple-dimensional experiments, series of simulations, parameter scans, complex data manipulation/visualization, and iterative fitting of simulated to experimental spectra. A major effort has been devoted to optimizing the computation speed using state-of-the-art algorithms for the time-consuming parts of the calculations implemented in the core of the program using the C programming language. Modification and maintenance of the program are facilitated by releasing the program as open source software (General Public License) currently at http://nmr.imsb.au.dk. The general features of the program are demonstrated by numerical simulations of various aspects for REDOR, rotational resonance, DRAMA, DRAWS, HORROR, C7, TEDOR, POST-C7, CW decoupling, TPPM, F-SLG, SLF, SEMA-CP, PISEMA, RFDR, QCPMG-MAS, and MQ-MAS experiments.

  5. Mars atmospheric dynamics as simulated by the NASA AMES General Circulation Model. II - Transient baroclinic eddies

    NASA Astrophysics Data System (ADS)

    Barnes, J. R.; Pollack, J. B.; Haberle, R. M.; Leovy, C. B.; Zurek, R. W.; Lee, H.; Schaeffer, J.

    1993-02-01

    A large set of experiments performed with the NASA Ames Mars General Circulation Model is analyzed to determine the properties, structure, and dynamics of the simulated transient baroclinic eddies. There is strong transient baroclinic eddy activity in the extratropics of the Northern Hemisphere during the northern autumn, winter, and spring seasons. The eddy activity remains strong for very large dust loadings, though it shifts northward. The eastward propagating eddies are characterized by zonal wavenumbers of 1-4 and periods of about 2-10 days. The properties of the GCM baroclinic eddies in the northern extratropics are compared in detail with analogous properties inferred from Viking Lander meteorology observations.

  6. An in-flight simulation of lateral control nonlinearities. [for general aviation aircraft

    NASA Technical Reports Server (NTRS)

    Ellis, D. R.; Tilak, N. W.

    1975-01-01

    An in-flight simulation program was conducted to explore, in a generalized way, the influence of spoiler-type roll-control nonlinearities on handling qualities. The roll responses studied typically featured a dead zone or very small effectiveness for small control inputs, a very high effectiveness for mid-range deflections, and low effectiveness again for large inputs. A linear force gradient with no detectable breakout force was provided. Given otherwise good handling characteristics, it was found that moderate nonlinearities of the types tested might yield acceptable roll control, but the best level of handling qualities is obtained with linear, aileron-like control.

  7. Terahertz spectroscopic polarimetry of generalized anisotropic media composed of Archimedean spiral arrays: Experiments and simulations

    NASA Astrophysics Data System (ADS)

    Aschaffenburg, Daniel J.; Williams, Michael R. C.; Schmuttenmaer, Charles A.

    2016-05-01

    Terahertz time-domain spectroscopic polarimetry has been used to measure the polarization state of all spectral components in a broadband THz pulse upon transmission through generalized anisotropic media consisting of two-dimensional arrays of lithographically defined Archimedean spirals. The technique allows a full determination of the frequency-dependent, complex-valued transmission matrix and eigenpolarizations of the spiral arrays. Measurements were made on a series of spiral array orientations. The frequency-dependent transmission matrix elements as well as the eigenpolarizations were determined, and the eigenpolarizations were found to be elliptically corotating, as expected from their symmetry. Numerical simulations are in quantitative agreement with measured spectra.

  8. Reconstruction of bremsstrahlung spectra from attenuation data using generalized simulated annealing.

    PubMed

    Menin, O H; Martinez, A S; Costa, A M

    2016-05-01

    A generalized simulated annealing algorithm, combined with a suitable smoothing regularization function, is used to solve the inverse problem of X-ray spectrum reconstruction from attenuation data. The approach is to set the initial acceptance and visitation temperatures and to standardize the terms of the objective function so as to automate the algorithm to accommodate different spectral ranges. Experiments with both numerical and measured attenuation data are presented. Results show that the algorithm reconstructs spectral shapes accurately. It should be noted that in this algorithm the regularization function was formulated to guarantee a smooth spectrum; thus, the presented technique does not apply to X-ray spectra where characteristic radiation is present. PMID:26943902
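
    The essential ingredients named in the abstract are an objective function combining a data-misfit term with a smoothing penalty, and an annealing loop that minimizes it. The sketch below uses classical (Metropolis) simulated annealing rather than the generalized Tsallis scheme of the paper, and the linear forward attenuation model in the usage example is a hypothetical placeholder.

      import numpy as np

      def objective(spectrum, attenuation_data, forward_model, reg_weight):
          """Data misfit plus a smoothing penalty on the reconstructed spectrum."""
          misfit = np.sum((forward_model(spectrum) - attenuation_data) ** 2)
          smoothness = np.sum(np.diff(spectrum, 2) ** 2)   # discrete second derivative
          return misfit + reg_weight * smoothness

      def anneal(attenuation_data, forward_model, n_bins, reg_weight=1e-2,
                 n_iter=50_000, t0=1.0, seed=0):
          rng = np.random.default_rng(seed)
          spectrum = np.full(n_bins, 1.0 / n_bins)          # flat initial guess
          energy = objective(spectrum, attenuation_data, forward_model, reg_weight)
          for k in range(n_iter):
              temperature = t0 / (1.0 + k)                  # simple cooling schedule
              candidate = np.clip(spectrum + rng.normal(scale=0.01, size=n_bins), 0, None)
              cand_energy = objective(candidate, attenuation_data, forward_model, reg_weight)
              # Metropolis rule: always accept improvements, sometimes accept worse moves.
              if cand_energy < energy or rng.random() < np.exp((energy - cand_energy) / temperature):
                  spectrum, energy = candidate, cand_energy
          return spectrum

      # Hypothetical usage with a linear forward model A @ spectrum.
      A = np.random.default_rng(1).random((20, 32))
      true_spectrum = np.exp(-0.5 * ((np.arange(32) - 12) / 4.0) ** 2)
      recovered = anneal(A @ true_spectrum, lambda s: A @ s, n_bins=32)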

  9. General purpose computational tools for simulation and analysis of medium-energy backscattering spectra

    NASA Astrophysics Data System (ADS)

    Weller, Robert A.

    1999-06-01

    This paper describes a suite of computational tools for general-purpose ion-solid calculations, which has been implemented in the platform-independent computational environment Mathematica®. Although originally developed for medium energy work (beam energies < 300 keV), they are suitable for general, classical, non-relativistic calculations. Routines are available for stopping power, Rutherford and Lenz-Jensen (screened) cross sections, sputtering yields, small-angle multiple scattering, and back-scattering-spectrum simulation and analysis. Also included are a full range of supporting functions, as well as easily accessible atomic mass and other data on all the stable isotopes in the periodic table. The functions use common calling protocols, recognize elements and isotopes by symbolic names and, wherever possible, return symbolic results for symbolic inputs, thereby facilitating further computation. A new paradigm for the representation of backscattering spectra is introduced.
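
    For orientation, the unscreened Rutherford cross section mentioned above has a simple closed form that can be evaluated directly. The sketch below is an illustration of that textbook formula (laboratory energy, no center-of-mass or screening corrections), not the Mathematica package's routine.

      import numpy as np
      from scipy import constants as const

      def rutherford_cross_section(z1, z2, energy_keV, theta_deg):
          """Unscreened Rutherford differential cross section dsigma/dOmega in m^2/sr
          for a projectile of atomic number z1 on a target of atomic number z2."""
          energy_J = energy_keV * 1e3 * const.e
          theta = np.radians(theta_deg)
          # (Z1*Z2*e^2 / (4*pi*eps0) / (4*E))^2 / sin^4(theta/2)
          prefactor = z1 * z2 * const.e**2 / (4 * np.pi * const.epsilon_0) / (4 * energy_J)
          return prefactor**2 / np.sin(theta / 2) ** 4

      # Example: 200 keV He (Z=2) backscattered from Si (Z=14) at 150 degrees.
      print(rutherford_cross_section(2, 14, 200.0, 150.0), "m^2/sr")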

  10. Development of generalized mapping tools to improve implementation of data driven computer simulations (04-ERD-083)

    SciTech Connect

    Ramirez, A; Pasyanos, M; Franz, G A

    2004-09-17

    The Stochastic Engine (SE) is a data driven computer simulation tool for predicting the characteristics of complex systems. The SE integrates accurate simulators with the Monte Carlo Markov Chain (MCMC) approach (a stochastic inverse technique) to identify alternative models that are consistent with available data and ranks these alternatives according to their probabilities. Implementation of the SE is currently cumbersome owing to the need to customize the pre-processing and processing steps that are required for a specific application. This project widens the applicability of the Stochastic Engine by generalizing some aspects of the method (i.e. model-to-data transformation types, configuration, model representation). We have generalized several of the transformations that are necessary to match the observations to proposed models. These transformations are sufficiently general not to pertain to any single application. This approach provides a framework that increases the efficiency of the SE implementation. The overall goal is to reduce response time and make the approach as "plug-and-play" as possible, and will result in the rapid accumulation of new data types for a host of both earth science and non-earth science problems. When adapting the SE approach to a specific application, there are various pre-processing and processing steps that are typically needed to run a specific problem. Many of these steps are common to a wide variety of specific applications. Here we list and describe several data transformations that are common to a variety of subsurface inverse problems. A subset of these steps has been developed in a generalized form such that they could be used with little or no modification in a wide variety of specific applications. This work was funded by the LDRD Program (tracking number 04-ERD-083).

  11. The impact of a realistic vertical dust distribution on the simulation of the Martian General Circulation

    NASA Astrophysics Data System (ADS)

    Guzewich, Scott D.; Toigo, Anthony D.; Richardson, Mark I.; Newman, Claire E.; Talaat, Elsayed R.; Waugh, Darryn W.; McConnochie, Timothy H.

    2013-05-01

    Limb-scanning observations with the Mars Climate Sounder and Thermal Emission Spectrometer (TES) have identified discrete layers of enhanced dust opacity well above the boundary layer and a mean vertical structure of dust opacity very different from the expectation of well-mixed dust in the lowest 1-2 scale heights. To assess the impact of this vertical dust opacity profile on atmospheric properties, we developed a TES limb-scan observation-based three-dimensional and time-evolving dust climatology for use in forcing general circulation models (GCMs). We use this to force the MarsWRF GCM and compare with simulations that use a well-mixed (Conrath-ν) vertical dust profile and Mars Climate Database version 4 (MCD) horizontal distribution dust opacity forcing function. We find that simulated temperatures using the TES-derived forcing yield a 1.18 standard deviation closer match to TES temperature retrievals than a MarsWRF simulation using MCD forcing. The climatological forcing yields significant changes to many large-scale features of the simulated atmosphere. Notably the high-latitude westerly jet speeds are 10-20 m/s higher, polar warming collar temperatures are 20-30 K warmer near northern winter solstice and tilted more strongly poleward, the middle and lower atmospheric meridional circulations are partially decoupled, the migrating diurnal tide exhibits destructive interference and is weakened by 50% outside of equinox, and the southern hemisphere wave number 1 stationary wave is strengthened by up to 4 K (45%). We find the vertical dust distribution is an important factor for Martian lower and middle atmospheric thermal structure and circulation that cannot be neglected in analysis and simulation of the Martian atmosphere.

  12. A Novel Approach for Modeling Chemical Reaction in Generalized Fluid System Simulation Program

    NASA Technical Reports Server (NTRS)

    Sozen, Mehmet; Majumdar, Alok

    2002-01-01

    The Generalized Fluid System Simulation Program (GFSSP) is a computer code developed at NASA Marshall Space Flight Center for analyzing steady state and transient flow rates, pressures, temperatures, and concentrations in a complex flow network. The code, which performs system level simulation, can handle compressible and incompressible flows as well as phase change and mixture thermodynamics. Thermodynamic and thermophysical property programs GASP, WASP, and GASPAK provide the necessary data for fluids such as helium, methane, neon, nitrogen, carbon monoxide, oxygen, argon, carbon dioxide, fluorine, hydrogen, water, a hydrogen, isobutane, butane, deuterium, ethane, ethylene, hydrogen sulfide, krypton, propane, xenon, several refrigerants, nitrogen trifluoride and ammonia. The program, which was developed out of a need for an easy-to-use system-level simulation tool for complex flow networks, has been used for the following purposes, to name a few: Space Shuttle Main Engine (SSME) High Pressure Oxidizer Turbopump Secondary Flow Circuits, Axial Thrust Balance of the Fastrac Engine Turbopump, Pressurized Propellant Feed System for the Propulsion Test Article at Stennis Space Center, X-34 Main Propulsion System, X-33 Reaction Control System and Thermal Protection System, and International Space Station Environmental Control and Life Support System design. There has been an increasing demand for implementing a combustion simulation capability into GFSSP in order to increase its system level simulation capability of a liquid rocket propulsion system starting from the propellant tanks up to the thruster nozzle for spacecraft as well as launch vehicles. The present work was undertaken to address this need. The chemical equilibrium equations derived from the second law of thermodynamics and the energy conservation equation derived from the first law of thermodynamics are solved simultaneously by a Newton-Raphson method. The numerical scheme was implemented as a User
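
    The coupled chemical-equilibrium and energy-conservation equations mentioned above are solved simultaneously by Newton-Raphson iteration. A generic sketch of that solution strategy is given below; it is not the GFSSP implementation, the two-equation system at the end is a hypothetical stand-in for the coupled balances, and the Jacobian is built by finite differences.

      import numpy as np

      def newton_raphson(residual, x0, tol=1e-10, max_iter=50, eps=1e-7):
          """Solve residual(x) = 0 for a vector x by Newton-Raphson iteration,
          building the Jacobian by forward finite differences."""
          x = np.asarray(x0, dtype=float).copy()
          for _ in range(max_iter):
              r = residual(x)
              if np.linalg.norm(r) < tol:
                  return x
              jac = np.empty((len(r), len(x)))
              for j in range(len(x)):
                  xp = x.copy()
                  xp[j] += eps
                  jac[:, j] = (residual(xp) - r) / eps
              x -= np.linalg.solve(jac, r)
          raise RuntimeError("Newton-Raphson did not converge")

      # Hypothetical 2-equation system standing in for coupled equilibrium/energy balances.
      def system(x):
          t, c = x
          return np.array([t - 2.0 + 0.5 * c**2, np.exp(-c) - 0.3 * t])

      print(newton_raphson(system, x0=[1.0, 1.0]))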

  13. Simulation of reactive nanolaminates using reduced models: III. Ingredients for a general multidimensional formulation

    SciTech Connect

    Salloum, Maher; Knio, Omar M.

    2010-06-15

    A transient multidimensional reduced model is constructed for the simulation of reaction fronts in Ni/Al multilayers. The formulation is based on the generalization of earlier methodologies developed for quasi-1D axial and normal propagation, specifically by adapting the reduced formalism for atomic mixing and heat release. This approach enables us to focus on resolving the thermal front structure, whose evolution is governed by thermal diffusion and heat release. A mixed integration scheme is used for this purpose, combining an extended-stability, Runge-Kutta-Chebychev (RKC) integration of the diffusion term with exact treatment of the chemical source term. Thus, a detailed description of atomic mixing within individual layers is avoided, which enables transient modeling of the reduced equations of motion in multiple dimensions. Two-dimensional simulations are first conducted of front propagation in composites combining two bilayer periods. Results are compared with the experimental measurements of Knepper et al., which reveal that the reaction velocity can depend significantly on layering frequency. The comparison indicates that, using a concentration-dependent conductivity model, the transient 2D computations can reasonably reproduce the experimental behavior. Additional tests are performed based on 3D computations of surface initiated reactions. Comparison of computed predictions with laser ignition measurements indicates that the computations provide reasonable estimates of ignition thresholds. A detailed discussion is finally provided of potential generalizations and associated hurdles. (author)
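
    The mixed integration scheme described above is an operator-splitting idea: advance the diffusion term with a stable explicit integrator, then treat the source term exactly over the same step. The sketch below illustrates the splitting with a plain forward-Euler diffusion update standing in for the RKC integrator and a linear decay source integrated exactly; the 1D test problem is hypothetical.

      import numpy as np

      def split_step(u, dt, dx, diffusivity, rate):
          """One operator-split step: explicit diffusion update (standing in for the
          RKC integrator) followed by exact integration of a linear decay source."""
          lap = np.empty_like(u)
          lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
          lap[0], lap[-1] = lap[1], lap[-2]          # crude zero-flux boundaries
          u = u + dt * diffusivity * lap             # diffusion sub-step
          return u * np.exp(-rate * dt)              # exact source sub-step

      # Hypothetical 1D test: a Gaussian temperature bump relaxing by diffusion + decay.
      x = np.linspace(0.0, 1.0, 101)
      u = np.exp(-200 * (x - 0.5) ** 2)
      dx, dt = x[1] - x[0], 2e-5                     # dt below the explicit stability limit
      for _ in range(1000):
          u = split_step(u, dt, dx, diffusivity=1.0, rate=5.0)
      print("peak after relaxation:", u.max())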

  14. Variable-resolution frameworks for the simulation of tropical cyclones in global atmospheric general circulation models

    NASA Astrophysics Data System (ADS)

    Zarzycki, Colin

    The ability of atmospheric General Circulation Models (GCMs) to resolve tropical cyclones in the climate system has traditionally been difficult. The challenges include adequately capturing storms which are small in size relative to model grids and the fact that key thermodynamic processes require a significant level of parameterization. At traditional GCM grid spacings of 50-300 km tropical cyclones are severely under-resolved, if not completely unresolved. This thesis explores a variable-resolution global model approach that allows for high spatial resolutions in areas of interest, such as low-latitude ocean basins where tropical cyclogenesis occurs. Such GCM designs with multi-resolution meshes serve to bridge the gap between globally-uniform grids and limited area models and have the potential to become a future tool for regional climate assessments. A statically-nested, variable-resolution option has recently been introduced into the Department of Energy/National Center for Atmospheric Research (DoE/NCAR) Community Atmosphere Model's (CAM) Spectral Element (SE) dynamical core. Using an idealized tropical cyclone test, variable-resolution meshes are shown to significantly lessen computational requirements in regional GCM studies. Furthermore, the tropical cyclone simulations are free of spurious numerical errors at the resolution interfaces. Utilizing aquaplanet simulations as an intermediate test between idealized simulations and fully-coupled climate model runs, climate statistics within refined patches are shown to be well-matched to globally-uniform simulations of the same grid spacing. Facets of the CAM version 4 (CAM4) subgrid physical parameterizations are likely too scale sensitive for variable-resolution applications, but the newer CAM5 package is vastly improved in performance at multiple grid spacings. Multi-decadal simulations following 'Atmospheric Model Intercomparison Project' protocols have been conducted with variable-resolution grids. Climate

  15. Numerical simulation of the general circulation of the atmosphere of Titan.

    PubMed

    Hourdin, F; Talagrand, O; Sadourny, R; Courtin, R; Gautier, D; McKay, C P

    1995-10-01

    The atmospheric circulation of Titan is investigated with a general circulation model. The representation of the large-scale dynamics is based on a grid point model developed and used at Laboratoire de Météorologie Dynamique for climate studies. The code also includes an accurate representation of radiative heating and cooling by molecular gases and haze as well as a parametrization of the vertical turbulent mixing of momentum and potential temperature. Long-term simulations of the atmospheric circulation are presented. Starting from a state of rest, the model spontaneously produces a strong superrotation with prograde equatorial winds (i.e., in the same sense as the assumed rotation of the solid body) increasing from the surface to reach 100 m s-1 near the 1-mbar pressure level. Those equatorial winds are in very good agreement with some indirect observations, especially those of the 1989 occultation of Star 28-Sgr by Titan. On the other hand, the model simulates latitudinal temperature contrasts in the stratosphere that are significantly weaker than those observed by Voyager 1, which, we suggest, may be partly due to the nonrepresentation of the spatial and temporal variations of the abundances of molecular species and haze. We present diagnostics of the simulated atmospheric circulation underlining the importance of the seasonal cycle and a tentative explanation for the creation and maintenance of the atmospheric superrotation based on a careful angular momentum budget. PMID:11538593

  16. General relativistic simulations of slowly rotating, magnetized stars: A perturbative metric approach

    NASA Astrophysics Data System (ADS)

    Etienne, Zachariah; Liu, Y. T.; Shapiro, S.

    2007-04-01

    Understanding the role general relativistic magnetohydrodynamic (GRMHD) effects play in the evolution of nascent neutron stars is a problem at the forefront of theoretical astrophysics. To this end, we performed long-term (˜10^4 M) axisymmetric simulations of differentially rotating magnetized neutron stars in the slow-rotation, weak magnetic field limit using a dynamically updated perturbative metric evolution technique. Although the perturbative metric approach yields results comparable to those obtained via a nonperturbative (BSSN) metric evolution technique, simulations performed with the perturbative metric solver require about 1/4 the computational resources at a given resolution. This computational efficiency enabled us to observe and analyze the effects of magnetic braking and the magnetorotational instability (MRI) at very high resolution. Our GRMHD simulations demonstrate that (1) MRI is not observed unless the estimated fastest-growing mode wavelength is resolved by >˜ 10 gridpoints; (2) as resolution is improved, the MRI growth rate converges, but due to the small-scale nature of MRI-induced turbulence, the maximum growth amplitude increases, but does not exhibit convergence, even at the highest resolution; and (3) independent of resolution, magnetic braking drives the star toward uniform rotation as energy is sapped from differential rotation by winding magnetic fields.

  17. Radioscience simulations in general relativity and in alternative theories of gravity

    NASA Astrophysics Data System (ADS)

    Hees, A.; Lamine, B.; Reynaud, S.; Jaekel, M.-T.; Le Poncin-Lafitte, C.; Lainey, V.; Füzfa, A.; Courty, J.-M.; Dehant, V.; Wolf, P.

    2012-12-01

    This paper deals with tests of general relativity (GR) in the Solar System using tracking observables from planetary spacecraft. We present a new software tool that simulates the Range and Doppler signals resulting from a given spacetime metric. This flexible approach allows one to perform simulations in GR as well as in alternative metric theories of gravity. The outputs of this software provide templates of anomalous residuals that should show up in real data if the underlying theory of gravity is not GR. Those templates can be used to give a rough estimate of constraints on additional parameters entering alternative theories of gravity, as well as signatures that can be searched for in data from past or future space missions aiming at testing gravitational laws in the Solar System. As an application of the potentiality of this software, we present some simulations performed for a Cassini-like mission in post-Einsteinian gravity and in the context of the MOND external field effect. We derive signatures arising from these alternative theories of gravity and estimate expected amplitudes of the anomalous residuals.
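    The core of any such Range simulation is a light-time computation. The fragment below sketches only the Newtonian part of that calculation, a fixed-point iteration for the one-way light time between a ground station and a spacecraft; the ephemeris function and all names are hypothetical placeholders, and the relativistic corrections implied by the chosen metric (the actual subject of the paper) are not included.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def one_way_light_time(r_station, spacecraft_pos, t_receive, tol=1e-12, max_iter=20):
    """Fixed-point iteration for the geometric one-way light time (s).
    `spacecraft_pos(t)` is a placeholder ephemeris returning position (m) at time t (s);
    `r_station` is the station position (m) at the reception time."""
    lt = 0.0
    for _ in range(max_iter):
        r_sc = spacecraft_pos(t_receive - lt)
        lt_new = np.linalg.norm(r_sc - r_station) / C
        if abs(lt_new - lt) < tol:
            return lt_new
        lt = lt_new
    return lt
```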

  18. Towards a General Turbulence Model for Planetary Boundary Layers Based on Direct Statistical Simulation

    NASA Astrophysics Data System (ADS)

    Marston, Brad; Fox-Kemper, Baylor; Skitka, Joe

    Sub-grid turbulence models for planetary boundary layers are typically constructed additively, starting with local flow properties and including non-local (KPP) or higher order (Mellor-Yamada) parameters until a desired level of predictive capacity is achieved or a manageable threshold of complexity is surpassed. Such approaches are necessarily limited in general circumstances, like global circulation models, by their being optimized for particular flow phenomena. By using direct statistical simulation (DSS) that is based upon expansion in equal-time cumulants we offer the prospect of a turbulence model and an investigative tool that is equally applicable to all flow types and able to take advantage of the wealth of nonlocal information in any flow. We investigate the feasibility of a second-order closure (CE2) by performing simulations of the ocean boundary layer in a quasi-linear approximation for which CE2 is exact. As oceanographic examples, wind-driven Langmuir turbulence and thermal convection are studied by comparison of the statistics of quasi-linear and fully nonlinear simulations. We also characterize the computational advantages and physical uncertainties of CE2 defined on a reduced basis determined via proper orthogonal decomposition (POD) of the flow fields. Supported in part by NSF DMR-1306806.

  19. Martian atmospheric gravity waves simulated by a high-resolution general circulation model

    NASA Astrophysics Data System (ADS)

    Kuroda, Takeshi; Yiǧit, Erdal; Medvedev, Alexander S.; Hartogh, Paul

    2016-07-01

    Gravity waves (GWs) significantly affect temperature and wind fields in the Martian middle and upper atmosphere. They are also one of the observational targets of the MAVEN mission. We report on the first simulations with a high-resolution general circulation model (GCM) and present global distributions of small-scale GWs in the Martian atmosphere. The simulated GW-induced temperature variances are in good agreement with available radio occultation data in the lower atmosphere between 10 and 30 km. For the northern winter solstice, the model reveals a latitudinal asymmetry with stronger wave generation in the winter hemisphere and two distinctive sources of GWs: mountainous regions and the meandering winter polar jet. Orographic GWs are filtered upon propagating upward, and the mesosphere is primarily dominated by harmonics with faster horizontal phase velocities. Wave fluxes are directed mainly against the local wind. GW dissipation in the upper mesosphere generates a body force per unit mass of tens of m s^{-1} per Martian solar day (sol^{-1}), which tends to close the simulated jets. The results represent a realistic surrogate for missing observations, which can be used for constraining GW parameterizations and validating GCMs.

  20. A comparison between general circulation model simulations using two sea surface temperature datasets for January 1979

    NASA Technical Reports Server (NTRS)

    Ose, Tomoaki; Mechoso, Carlos; Halpern, David

    1994-01-01

    Simulations with the UCLA atmospheric general circulation model (AGCM) using two different global sea surface temperature (SST) datasets for January 1979 are compared. One of these datasets is based on Comprehensive Ocean-Atmosphere Data Set (COADS) SSTs at locations where there are ship reports, and on climatology elsewhere; the other is derived from measurements by instruments onboard NOAA satellites. In the former dataset (COADS SST), data are concentrated along shipping routes in the Northern Hemisphere; in the latter dataset (HIRS SST, derived from the High Resolution Infrared Sounder), data cover the global domain. Ensembles of five 30-day mean fields are obtained from integrations performed in the perpetual-January mode. The results are presented as anomalies, that is, departures of each ensemble mean from that produced in a control simulation with climatological SSTs. Large differences are found between the anomalies obtained using COADS and HIRS SSTs, even in the Northern Hemisphere where the datasets are most similar to each other. The internal variability of the circulation in the control simulation and the simulated atmospheric response to anomalous forcings appear to be linked in that the pattern of geopotential height anomalies obtained using COADS SSTs resembles the first empirical orthogonal function (EOF 1) in the control simulation. The corresponding pattern obtained using HIRS SSTs is substantially different and somewhat resembles EOF 2 in the sector from central North America to central Asia. To gain insight into the reasons for these results, three additional simulations are carried out with SST anomalies confined to regions where COADS SSTs are substantially warmer than HIRS SSTs. The regions correspond to warm pools in the northwest and northeast Pacific, and the northwest Atlantic. These warm pools tend to produce positive geopotential height anomalies in the northeastern part of the corresponding oceans. Both warm pools in the Pacific produce large

  1. A simulation study of control and display requirements for zero-experience general aviation pilots

    NASA Technical Reports Server (NTRS)

    Stewart, Eric C.

    1993-01-01

    The purpose of this simulation study was to define the basic human factor requirements for operating an airplane in all weather conditions. The basic human factors requirements are defined as those for an operator who is a complete novice at airplane operations but who is assumed to have automobile driving experience. These operators thus have had no piloting experience or training of any kind. The human factor requirements are developed for a practical task which includes all of the basic maneuvers required to go from one airport to another airport in limited visibility conditions. The task was quite demanding, including following a precise path with climbing and descending turns while simultaneously changing airspeed. The ultimate goal of this research is to increase the utility of general aviation airplanes - that is, to make them a practical mode of transportation for a much larger segment of the general population. This can be accomplished by reducing the training and proficiency requirements of pilots while improving the level of safety. It is believed that advanced technologies such as fly-by-wire (or -light) and head-up pictorial displays can be of much greater benefit to the general aviation pilot than to the full-time, professional pilot.

  2. Balancing an accurate representation of the molecular surface in generalized born formalisms with integrator stability in molecular dynamics simulations.

    PubMed

    Chocholousová, Jana; Feig, Michael

    2006-04-30

    Different integrator time steps in NVT and NVE simulations of protein and nucleic acid systems are tested with the GBMV (Generalized Born using Molecular Volume) and GBSW (Generalized Born with simple SWitching) methods. Simulation stability and energy conservation are investigated in relation to the agreement with the Poisson theory. It is found that very close agreement between generalized Born methods and the Poisson theory based on the commonly used sharp molecular surface definition results in energy drift and simulation artifacts in molecular dynamics simulation protocols with standard 2-fs time steps. New parameters are proposed for the GBMV method, which maintains very good agreement with the Poisson theory while providing energy conservation and stable simulations at time steps of 1 to 1.5 fs. PMID:16518883

  3. Relations between winter precipitation and atmospheric circulation simulated by the Geophysical Fluid Dynamics Laboratory general circulation model

    USGS Publications Warehouse

    McCabe, G.J., Jr.; Dettinger, M.D.

    1995-01-01

    General circulation model (GCM) simulations of atmospheric circulation are more reliable than GCM simulations of temperature and precipitation. In this study, temporal correlations between 700 hPa height anomalies and simulated winter precipitation at eight locations in the conterminous United States are compared with corresponding correlations in observations. The objectives are to (i) characterize the relations between atmospheric circulation and winter precipitation simulated by the GFDL GCM for selected locations in the conterminous USA, (ii) determine whether these relations are similar to those found in observations of the actual climate system, and (iii) determine whether GFDL-simulated precipitation is forced by the same circulation patterns as in the real atmosphere. -from Authors

  4. Simulation of Venus polar vortices with the non-hydrostatic general circulation model

    NASA Astrophysics Data System (ADS)

    Rodin, Alexander V.; Mingalev, Oleg; Orlov, Konstantin

    2012-07-01

    The dynamics of the Venus atmosphere in the polar regions presents a challenge for general circulation models. Numerous images and hyperspectral data from the Venus Express mission show that above 60 degrees latitude atmospheric motion is substantially different from that of the tropical and extratropical atmosphere. In particular, extended polar hoods composed presumably of fine haze particles, as well as polar vortices revealing mesoscale wave perturbations with variable zonal wavenumbers, imply the significance of vertical motion in these circulation elements. On these scales, however, hydrostatic balance commonly used in general circulation models is no longer valid, and vertical forces have to be taken into account to obtain the correct wind field. We present the first non-hydrostatic general circulation model of the Venus atmosphere based on the full set of gas dynamics equations. The model uses a uniform grid with a resolution of 1.2 degrees in the horizontal and 200 m in the vertical direction. Thermal forcing is simulated by means of a relaxation approximation with a specified thermal profile and time scale. The model takes advantage of hybrid calculations on graphical processors using CUDA technology in order to increase performance. Simulations show that vorticity is concentrated at high latitudes within planetary-scale, off-axis vortices, precessing with a period of 30 to 40 days. The scale and position of these vortices coincide with the polar hoods observed in the UV images. The regions characterized by high vorticity are surrounded by series of small vortices which may be caused by shear instability of the zonal flow. The vertical velocity component implies that in the central part of high-vorticity areas the atmospheric flow is downwelling and perturbed by mesoscale waves with zonal wavenumbers 1-4, resembling observed wave structures in the polar vortices. Simulations also show the existence of areas with strong vertical flow, concentrated in spiral branches extending

  5. Venus atmosphere simulated by a high-resolution general circulation model

    NASA Astrophysics Data System (ADS)

    Sugimoto, Norihiko

    2016-07-01

    An atmospheric general circulation model (AGCM) for Venus on the basis of AFES (AGCM For the Earth Simulator) has been developed (e.g., Sugimoto et al., 2014a) and a very high-resolution simulation is performed. The highest resolution of the model is T319L120; 960 x 480 horizontal grid points (grid intervals are about 40 km) with 120 vertical layers (layer intervals are about 1 km). In the model, the atmosphere is dry and forced by the solar heating with the diurnal and semi-diurnal components. The infrared radiative process is simplified by adopting a Newtonian cooling approximation. The temperature is relaxed to a prescribed horizontally uniform temperature distribution, in which the layer of almost neutral static stability observed in the Venus atmosphere is present. A fast zonal wind in solid-body rotation is given as the initial state. Starting from this idealized superrotation, the model atmosphere reaches a quasi-equilibrium state within 1 Earth year and this state is stably maintained for more than 10 Earth years. The zonal-mean zonal flow with weak midlatitude jets has an almost constant velocity of 120 m/s in latitudes between 45°S and 45°N at the cloud top levels, which agrees very well with observations. In the cloud layer, baroclinic waves develop continuously at midlatitudes and generate Rossby-type waves at the cloud top (Sugimoto et al., 2014b). In the polar region, a warm polar vortex zonally surrounded by a cold latitude band (cold collar) is well reproduced (Ando et al., 2016). As for horizontal kinetic energy spectra, the divergent component is larger than the rotational component over a broad range (k>10) compared with that on Earth (Kashimura et al., in preparation). Finally, recent results for thermal tides and small-scale waves will be shown in the presentation. Sugimoto, N. et al. (2014a), Baroclinic modes in the Venus atmosphere simulated by GCM, Journal of Geophysical Research: Planets, Vol. 119, p1950-1968. Sugimoto, N. et al. (2014b), Waves in a Venus general

  6. Development and Implementation of Non-Newtonian Rheology Into the Generalized Fluid System Simulation Program (GFSSP)

    NASA Technical Reports Server (NTRS)

    DiSalvo, Roberto; Deaconu, Stelu; Majumdar, Alok

    2006-01-01

    One of the goals of this program was to develop the experimental and analytical/computational tools required to predict the flow of non-Newtonian fluids through the various system components of a propulsion system: pipes, valves, pumps, etc. To achieve this goal we chose to augment the capabilities of NASA's Generalized Fluid System Simulation Program (GFSSP) software. GFSSP is a general-purpose computer program designed to calculate steady state and transient pressure and flow distributions in a complex fluid network. While the current version of the GFSSP code is able to handle various system components, the implicit assumption in the code is that the fluids in the system are Newtonian. To extend the capability of the code to non-Newtonian fluids, such as silica gelled fuels and oxidizers, modifications to the momentum equations of the code have been performed. We have successfully implemented in GFSSP flow equations for fluids with power law behavior. The implementation of the power law fluid behavior into the GFSSP code depends on knowledge of the two fluid coefficients, n and K. The determination of these parameters for the silica gels used in this program was performed experimentally. The n and K parameters for silica water gels were determined experimentally at CFDRC's Special Projects Laboratory, with a constant shear rate capillary viscometer. Batches of 8:1 (by weight) water-silica gel were mixed using CFDRC's 10-gallon gelled propellant mixer. Prior to testing, the gel was allowed to rest in the rheometer tank for at least twelve hours to ensure that the delicate structure of the gel had sufficient time to reform. During the tests silica gel was pressure fed and discharged through stainless steel pipes ranging from 1" to 36" in length and of three diameters: 0.0237", 0.032", and 0.047". The data collected in these tests included pressure at the tube entrance and volumetric flowrate. From these data the uncorrected shear rate, shear stress, residence time
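    For fully developed laminar pipe flow of a power-law fluid, the flow rate, pressure drop, n, and K are linked by the classical Rabinowitsch-Mooney relation, which is the kind of expression used when reducing capillary-viscometer data. The snippet below evaluates it; the numerical values are purely illustrative and are not the gel properties measured in the program.

```python
import math

def power_law_pipe_flow(delta_p, R, L, n, K):
    """Volumetric flow rate (m^3/s) of a power-law fluid (tau = K * gamma_dot**n)
    in fully developed laminar pipe flow of radius R (m) and length L (m),
    driven by a pressure drop delta_p (Pa):
        Q = (pi * R**3 * n / (3*n + 1)) * (delta_p * R / (2 * K * L))**(1/n)"""
    tau_wall = delta_p * R / (2.0 * L)          # wall shear stress, Pa
    gamma_wall = (tau_wall / K) ** (1.0 / n)    # wall shear rate, 1/s
    return math.pi * R**3 * n / (3.0 * n + 1.0) * gamma_wall

# Illustrative (not measured) values for a shear-thinning gel, n < 1
print(power_law_pipe_flow(delta_p=2.0e5, R=0.6e-3, L=0.5, n=0.4, K=20.0))
```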

  7. Field Evaluation of the Generalized Maintenance Trainer-Simulator: I. Fleet Communications System. Technical Report No. 89.

    ERIC Educational Resources Information Center

    Rigney, J. W.; And Others

    This report describes the Generalized Maintenance Trainer-Simulator (GMTS), an instructional system designed to give electronics students intensive troubleshooting practice in a simulated hands-on training environment, and reports on a field evaluation of the GMTS applied to systems level troubleshooting in radio communications. The GMTS can be…

  8. Real time simulation of nonlinear generalized predictive control for wind energy conversion system with nonlinear observer.

    PubMed

    Ouari, Kamel; Rekioua, Toufik; Ouhrouche, Mohand

    2014-01-01

    In order to make wind power generation truly cost-effective and reliable, advanced control techniques must be used. In this paper, we develop a new control strategy, using a nonlinear generalized predictive control (NGPC) approach, for a DFIG-based wind turbine. The proposed control law is based on two points: an NGPC-based torque-current control loop generating the rotor reference voltage and an NGPC-based speed control loop that provides the torque reference. In order to enhance the robustness of the controller, a disturbance observer is designed to estimate the aerodynamic torque, which is treated as an unknown perturbation. Finally, a real-time simulation is carried out to illustrate the performance of the proposed controller. PMID:24021543

  9. A fully general relativistic numerical simulation code for spherically symmetric matter

    NASA Astrophysics Data System (ADS)

    Park, Dong-Ho; Cho, Inyong; Kang, Gungwon; Lee, Hyung Mok

    2013-02-01

    We present a fully general relativistic open-source code that can be used for simulating a system of spherically symmetric perfect fluid matter. It is based on the Arnowitt-Deser-Misner 3+1 formalism with maximal slicing and isotropic spatial coordinates. For hydrodynamic matter, High Resolution Shock Capturing (HRSC) schemes with a monotonized central-difference limiter and approximate Riemann solvers are used in the Eulerian viewpoint. The accuracy and the convergence of our numerical code are verified by performing several test problems. These include a relativistic blast wave, relativistic spherical accretion of matter into a black hole, Tolman-Oppenheimer-Volkoff (TOV) stars and Oppenheimer-Snyder (OS) dust collapses. In particular, a dynamical code test is done for the OS collapse by explicitly performing numerical coordinate transformations between our coordinate system and the one used for the analytic solution. Finally, some TOV star solutions are presented for the Eddington-inspired Born-Infeld gravity theory.
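    The monotonized central-difference limiter referred to above replaces the raw cell-to-cell slope with a limited slope that avoids spurious oscillations near discontinuities. A minimal per-cell sketch of that limiter (a generic textbook form, not code from the released package):

```python
def mc_limited_slope(u_left, u_center, u_right):
    """Monotonized central-difference (MC) limited slope for the middle cell,
    used to reconstruct interface states in an HRSC scheme.
    Returns zero slope at local extrema to preserve monotonicity."""
    d_left = u_center - u_left
    d_right = u_right - u_center
    if d_left * d_right <= 0.0:          # local extremum: flatten
        return 0.0
    d_center = 0.5 * (u_right - u_left)
    sign = 1.0 if d_center > 0.0 else -1.0
    return sign * min(abs(d_center), 2.0 * abs(d_left), 2.0 * abs(d_right))
```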

  10. A generalized model for simulating adsorption on porous media and checking for reversibility by desorption

    NASA Astrophysics Data System (ADS)

    Batzias, Fragiskos; Bountri, Athanasia; Sidiras, Dimitris

    2012-12-01

    Most adsorption kinetic models are of integer order (mainly of first and to a lesser extent of second order) with two parameters (rate constant and equilibrium parameter) and without an intercept, when used in their analytic form. In this work, we derive a four-parameter nth-order model (where n is not, in general, an integer) simulating adsorption on porous media. We show that this model provides the best fit to experimental data of dye adsorption on fir sawdust. Subsequently, a criterion of competitiveness is presented to find out which simplified form of a pre-set order is the second best, in order to obtain parameter values comparable to results already stored in corresponding databases. Partial reversibility was also confirmed by desorption, from dye-saturated biomass into aqueous solution, using a Freundlich-type desorption isotherm.
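    Stripped of the intercept and considered at fixed conditions, the core of such an nth-order model is the kinetic law dq/dt = k (qe - q)^n, which integrates in closed form for n != 1. The sketch below implements only that simplified form, so it is a reduced illustration rather than the four-parameter model of the paper.

```python
import numpy as np

def nth_order_uptake(t, qe, k, n):
    """Adsorbed amount q(t) for dq/dt = k*(qe - q)**n with q(0) = 0.
    Closed-form solution for n != 1; reduces to the pseudo-first-order
    exponential form as n -> 1."""
    t = np.asarray(t, dtype=float)
    if abs(n - 1.0) < 1e-12:
        return qe * (1.0 - np.exp(-k * t))
    # Clip at zero so that, for n < 1, complete uptake (q = qe) is returned
    # once the finite saturation time is exceeded.
    base = np.maximum(qe**(1.0 - n) + (n - 1.0) * k * t, 0.0)
    return qe - base ** (1.0 / (1.0 - n))
```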

  11. Verification of a generalized Aboav-Weaire law via experiment and large-scale simulation

    NASA Astrophysics Data System (ADS)

    Wang, H.; Liu, G. Q.; Chen, Y.; Li, W. W.

    2014-03-01

    Topological correlations in grain boundary networks are investigated on the basis of more than 14000 experimental grains and 9000 Monte Carlo-Potts model simulation grains. A generalized Aboav-Weaire law, which describes the short- and long-range nearest-neighbor topological correlations, is shown to hold both in 2D grain structures and in 2D cross-section structures. However, the nearest-neighbor topological correlations have no obvious influence on the rate of 2D grain growth, which is explicitly different from the case in three dimensions that was previously reported in Wang H., Liu G. Q., Song X. Y. and Luan J. H., EPL, 96 (2011) 38003.

  12. Strong scaling of general-purpose molecular dynamics simulations on GPUs

    NASA Astrophysics Data System (ADS)

    Glaser, Jens; Nguyen, Trung Dac; Anderson, Joshua A.; Lui, Pak; Spiga, Filippo; Millan, Jaime A.; Morse, David C.; Glotzer, Sharon C.

    2015-07-01

    We describe a highly optimized implementation of MPI domain decomposition in a GPU-enabled, general-purpose molecular dynamics code, HOOMD-blue (Anderson and Glotzer, 2013). Our approach is inspired by a traditional CPU-based code, LAMMPS (Plimpton, 1995), but is implemented within a code that was designed for execution on GPUs from the start (Anderson et al., 2008). The software supports short-ranged pair force and bond force fields and achieves optimal GPU performance using an autotuning algorithm. We are able to demonstrate equivalent or superior scaling on up to 3375 GPUs in Lennard-Jones and dissipative particle dynamics (DPD) simulations of up to 108 million particles. GPUDirect RDMA capabilities in recent GPU generations provide better performance in full double precision calculations. For a representative polymer physics application, HOOMD-blue 1.0 provides an effective GPU vs. CPU node speed-up of 12.5 ×.

  13. Finite-difference simulation and visualization of elastodynamics in time-evolving generalized curvilinear coordinates

    NASA Technical Reports Server (NTRS)

    Kaul, Upender K. (Inventor)

    2009-01-01

    Modeling and simulation of free and forced structural vibrations is essential to an overall structural health monitoring capability. In the various embodiments, a first principles finite-difference approach is adopted in modeling a structural subsystem such as a mechanical gear by solving elastodynamic equations in generalized curvilinear coordinates. Such a capability to generate a dynamic structural response is widely applicable in a variety of structural health monitoring systems. This capability (1) will lead to an understanding of the dynamic behavior of a structural system and hence its improved design, (2) will generate a sufficiently large space of normal and damage solutions that can be used by machine learning algorithms to detect anomalous system behavior and achieve a system design optimization and (3) will lead to an optimal sensor placement strategy, based on the identification of local stress maxima all over the domain.

  14. Simulation of Indian Monsoon Variability in the Medieval Warm Period using ECHAM5 General Circulation Model

    NASA Astrophysics Data System (ADS)

    Polanski, Stefan; Fallah, Bijan; Prasad, Sushma; Cubasch, Ulrich

    2013-04-01

    Within the framework of the DFG research group HIMPAC, the general circulation model ECHAM5 has been used to simulate the Indian monsoon and its variability during the Medieval Warm Period (MWP; 900-1100 AD) and for recent climate (REC; 1800-2000 AD). The focus is on the analysis of internal and external drivers leading to extreme rainfall events over India from interannual to multidecadal time scales. An evaluation of spatio-temporal monsoon patterns with present-day observation data is in agreement with other state-of-the-art monsoon modeling studies. The simulated monsoon intensity on the multidecadal time scale is weakened (enhanced) in summer (winter) due to colder (warmer) SSTs in the Indian Ocean. Variations in solar insolation are the main drivers for these SST anomalies, verified by very high temporal correlations between Total Solar Irradiance and All-India-Monsoon-Rainfall in summer monsoon months (-0.95). The external solar forcing is coupled and overlain by internal climate modes of the ocean (ENSO and IOD) with asynchronous intensities and lengths of periods. In addition, the model simulations have been compared with a relative moisture index derived from paleoclimatic reconstructions based on various proxies and archives in India (Anoop et al., 2012 (under revision); Bhattacharya et al., 2007; Chauhan et al., 2000; Denniston et al., 2000; Ely et al., 1999; Kar et al., 2002; Ponton et al., 2012; Prasad et al., 2012 (under revision)). In this context, the reconstructed climate of the well-dated Lonar record in Central India has been highlighted and evaluated for the first time (Anoop et al., 2012 (under revision); Prasad et al., 2012 (under revision)). Particularly with regard to the long continuous chronology of the last 11000 years, the Lonar site gives a unique possibility for a comparison of long-term climate time series. The simulated relative annual rainfall anomalies ("MWP" minus "REC") are in agreement with the reconstructed moisture index. The dry

  15. Monte Carlo Simulations of PAC-Spectra as a General Approach to Dynamic Interactions

    NASA Astrophysics Data System (ADS)

    Danielsen, Eva; Jørgensen, Lars Elkjær; Sestoft, Peter

    Time Dependent Perturbed Angular Correlations of γ-rays (PAC) can be used to study hyperfine interactions of a dynamic nature. However, the exact effect of the dynamic interaction on the PAC-spectrum is sometimes difficult to derive analytically. A new approach based on Monte Carlo simulations is therefore suggested, here implemented as a Fortran 90 program for simulating PAC spectra of dynamic electric field gradients of any origin. The program is designed for the most common experimental condition where the intermediate level has spin 5/2, but the approach can equally well be used for other spin states. Codes for 4 different situations have been developed: (1) Rotational diffusion by jumps; used as a test case. (2) Jumps between two states with different electric field gradients, different lifetimes and different orientations of the electric field gradient principal axes. (3) Relaxation of one state to another. (4) Molecules adhering to a surface with random rotational jumps around the axis perpendicular to the surface. To illustrate how this approach can be used to improve data-interpretation, previously published data on 111mCd-plastocyanin and 111Ag-plastocyanin are re-considered. The strength of this novel approach is its simplicity and generality so that other dynamic processes can easily be included by only adding new program units describing the random process behind the dynamics. The program is hereby made publicly available.
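    As an illustration of the stochastic trajectories on which such a simulation is built, the sketch below samples a single two-state jump trajectory (situation 2 above) with exponential dwell times; averaging the corresponding perturbation factors over many such trajectories, which the Fortran program does, is omitted here, and the state labels and dwell times are arbitrary assumptions.

```python
import random

def two_state_trajectory(t_max, tau_a, tau_b, start="A"):
    """Sample one two-state jump trajectory over [0, t_max].
    tau_a, tau_b are the mean dwell times of states A and B.
    Returns a list of (jump_time, new_state) events, starting at t = 0."""
    t, state = 0.0, start
    events = [(0.0, state)]
    while True:
        tau = tau_a if state == "A" else tau_b
        t += random.expovariate(1.0 / tau)        # exponentially distributed dwell time
        if t >= t_max:
            return events
        state = "B" if state == "A" else "A"
        events.append((t, state))
```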

  16. A general method for spatially coarse-graining Metropolis Monte Carlo simulations onto a lattice

    NASA Astrophysics Data System (ADS)

    Liu, Xiao; Seider, Warren D.; Sinno, Talid

    2013-03-01

    A recently introduced method for coarse-graining standard continuous Metropolis Monte Carlo simulations of atomic or molecular fluids onto a rigid lattice of variable scale [X. Liu, W. D. Seider, and T. Sinno, Phys. Rev. E 86, 026708 (2012); doi:10.1103/PhysRevE.86.026708] is further analyzed and extended. The coarse-grained Metropolis Monte Carlo technique is demonstrated to be highly consistent with the underlying full-resolution problem using a series of detailed comparisons, including vapor-liquid equilibrium phase envelopes and spatial density distributions for the Lennard-Jones argon and simple point charge water models. In addition, the principal computational bottleneck associated with computing a coarse-grained interaction function for evolving particle positions on the discretized domain is addressed by the introduction of new closure approximations. In particular, it is shown that the coarse-grained potential, which is generally a function of temperature and coarse-graining level, can be computed at multiple temperatures and scales using a single set of free energy calculations. The computational performance of the method relative to standard Monte Carlo simulation is also discussed.
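    The sampling itself still follows the standard Metropolis criterion, only with particle positions restricted to lattice sites and energies evaluated through the temperature- and scale-dependent coarse-grained interaction function. A generic sketch of such a lattice move (not the authors' implementation) is:

```python
import math
import random

def metropolis_lattice_move(positions, sites, energy_fn, beta):
    """One Metropolis move: a randomly chosen particle is proposed to hop to a
    randomly chosen lattice site.  `energy_fn(positions)` is a placeholder for
    the coarse-grained interaction function U(T, scale) of the method."""
    i = random.randrange(len(positions))
    old_site = positions[i]
    old_energy = energy_fn(positions)
    positions[i] = random.choice(sites)
    delta_e = energy_fn(positions) - old_energy
    if delta_e > 0.0 and random.random() >= math.exp(-beta * delta_e):
        positions[i] = old_site      # reject and restore the old configuration
        return False
    return True                       # accept
```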

  17. General continuum boundary conditions for miscible binary fluids from molecular dynamics simulations.

    PubMed

    Denniston, Colin; Robbins, Mark O

    2006-12-01

    Molecular dynamics simulations are used to explore the flow behavior and diffusion of miscible fluids near solid surfaces. The solid produces deviations from bulk fluid behavior that decay over a distance of the order of the fluid correlation length. Atomistic results are mapped onto two types of continuum model: Mesoscopic models that follow this decay and conventional sharp interface boundary conditions for the stress and velocity. The atomistic results, and mesoscopic models derived from them, are consistent with the conventional Marangoni stress boundary condition. However, there are deviations from the conventional Navier boundary condition that states that the slip velocity between wall and fluid is proportional to the strain rate. A general slip boundary condition is derived from the mesoscopic model that contains additional terms associated with the Marangoni stress and diffusion, and is shown to describe the atomistic simulations. The additional terms lead to strong flows when there is a concentration gradient. The potential for using this effect to make a nanomotor or pump is evaluated. PMID:17166010

  18. Large-eddy simulation of airflow and heat transfer in a general ward of hospital

    NASA Astrophysics Data System (ADS)

    Hasan, Md. Farhad; Himika, Taasnim Ahmed; Molla, Md. Mamun

    2016-07-01

    In this paper, a very popular alternative computational technique, the Lattice Boltzmann Method (LBM), has been used for Large-Eddy Simulation (LES) of airflow and heat transfer in a general hospital ward. Different Reynolds numbers have been used to study the airflow pattern. In the LES, the Smagorinsky turbulence model has been adopted and is discussed briefly. A code validation has been performed comparing the present results with benchmark results for the lid-driven cavity problem, and the results are found to agree very well. LBM is demonstrated through simulation of forced convection inside a hospital ward with six beds and a partition in the middle, which acted like a wall. Changes in the average rate of heat transfer, in terms of average Nusselt numbers, have also been recorded in tabular format and the necessary comparisons are shown. It was found that the partition narrowed the path for airflow, and once the air overcame this barrier it had free space and turbulence appeared. For higher turbulence, the average rate of heat transfer increased, and patients near the turbulence zone released the most heat and felt more comfortable.
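    In an LBM-LES of this type, the Smagorinsky model typically enters through an eddy viscosity added to the molecular viscosity before conversion to a local relaxation time. The sketch below shows that conversion in lattice units; it estimates the strain rate by finite differences for simplicity, whereas a full LBM implementation would usually obtain it locally from non-equilibrium moments, and the constant and field names are illustrative assumptions.

```python
import numpy as np

def smagorinsky_tau(u, v, nu0, c_smag=0.1, dx=1.0):
    """Local BGK relaxation time with a Smagorinsky eddy viscosity, in lattice
    units (dt = 1, c_s^2 = 1/3, so nu = (tau - 0.5)/3 and tau = 3*nu + 0.5).
    u, v: 2D velocity-component arrays (axis 0 taken as x, axis 1 as y);
    nu0: molecular kinematic viscosity in lattice units."""
    dudx, dudy = np.gradient(u, dx)
    dvdx, dvdy = np.gradient(v, dx)
    s_mag = np.sqrt(2.0 * (dudx**2 + dvdy**2) + (dudy + dvdx)**2)  # |S| = sqrt(2 S_ij S_ij)
    nu_t = (c_smag * dx) ** 2 * s_mag                              # eddy viscosity
    return 3.0 * (nu0 + nu_t) + 0.5
```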

  19. General Force-Field Parametrization Scheme for Molecular Dynamics Simulations of Conjugated Materials in Solution.

    PubMed

    Wildman, Jack; Repiščák, Peter; Paterson, Martin J; Galbraith, Ian

    2016-08-01

    We describe a general scheme to obtain force-field parameters for classical molecular dynamics simulations of conjugated polymers. We identify a computationally inexpensive methodology for calculation of accurate intermonomer dihedral potentials and partial charges. Our findings indicate that the use of a two-step methodology of geometry optimization and single-point energy calculations using DFT methods produces potentials which compare favorably to high-level theory calculations. We also report the effects of varying the conjugated backbone length and alkyl side-chain lengths on the dihedral profiles and partial charge distributions and determine the existence of converged lengths above which convergence is achieved in the force-field parameter sets. We thus determine which calculations are required for accurate parametrization and the scope of a given parameter set for variations to a given molecule. We perform simulations of long oligomers of dioctylfluorene and hexylthiophene in explicit solvent and find persistence lengths and end-length distributions consistent with experimental values. PMID:27397762

  20. Application of the general thermal field model to simulate the behaviour of nanoscale Cu field emitters

    SciTech Connect

    Eimre, Kristjan; Aabloo, Alvo; Parviainen, Stefan; Djurabekova, Flyura; Zadin, Vahur

    2015-07-21

    Strong field electron emission from a nanoscale tip can cause a temperature rise at the tip apex due to Joule heating. This becomes particularly important when the current value grows rapidly, as in the pre-breakdown (electrostatic discharge) condition, which may occur near metal surfaces operating under high electric fields. The high temperatures introduce uncertainties in calculations of the current values when using the Fowler–Nordheim equation, since the thermionic component in such conditions cannot be neglected. In this paper, we analyze the field electron emission currents as a function of the applied electric field, as given by both the conventional Fowler–Nordheim field emission and the recently developed generalized thermal field emission formalisms. We also compare the results in two limits: discrete (atomistic simulations) and continuum (finite element calculations). The discrepancies of both implementations and their effect on final results are discussed. In both approaches, the electric field, electron emission currents, and Joule heating processes are simulated concurrently and self-consistently. We show that the conventional Fowler–Nordheim equation results in significant underestimation of electron emission currents. We also show that Fowler–Nordheim plots used to estimate the field enhancement factor may lead to significant overestimation of this parameter, especially in the range of relatively low electric fields.
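    For reference, the elementary (cold-emission) Fowler–Nordheim current density that the paper argues is insufficient at elevated temperatures can be written as below, using the standard first and second Fowler–Nordheim constants; the generalized thermal field model adds the thermionic contribution that this expression neglects, and the work-function and field values shown are only illustrative.

```python
import math

A_FN = 1.541434e-6   # first Fowler-Nordheim constant, A eV V^-2
B_FN = 6.830890e9    # second Fowler-Nordheim constant, eV^-3/2 V m^-1

def fn_current_density(field_local, work_function):
    """Elementary (cold, no image-charge correction) Fowler-Nordheim current
    density in A/m^2, for a local surface field in V/m and a work function in eV."""
    return (A_FN * field_local**2 / work_function
            * math.exp(-B_FN * work_function**1.5 / field_local))

# Illustrative only: Cu-like work function 4.5 eV, local field 5 GV/m
print(fn_current_density(5e9, 4.5))
```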

  1. Evaluating Parameterizations in General Circulation Models: Climate Simulation Meets Weather Prediction

    SciTech Connect

    Phillips, T J; Potter, G L; Williamson, D L; Cederwall, R T; Boyle, J S; Fiorino, M; Hnilo, J J; Olson, J G; Xie, S; Yio, J J

    2004-05-06

    To significantly improve the simulation of climate by general circulation models (GCMs), systematic errors in representations of relevant processes must first be identified, and then reduced. This endeavor demands that the GCM parameterizations of unresolved processes, in particular, should be tested over a wide range of time scales, not just in climate simulations. Thus, a numerical weather prediction (NWP) methodology for evaluating model parameterizations and gaining insights into their behavior may prove useful, provided that suitable adaptations are made for implementation in climate GCMs. This method entails the generation of short-range weather forecasts by a realistically initialized climate GCM, and the application of six-hourly NWP analyses and observations of parameterized variables to evaluate these forecasts. The behavior of the parameterizations in such a weather-forecasting framework can provide insights on how these schemes might be improved, and modified parameterizations then can be tested in the same framework. In order to further this method for evaluating and analyzing parameterizations in climate GCMs, the U.S. Department of Energy is funding a joint venture of its Climate Change Prediction Program (CCPP) and Atmospheric Radiation Measurement (ARM) Program: the CCPP-ARM Parameterization Testbed (CAPT). This article elaborates the scientific rationale for CAPT, discusses technical aspects of its methodology, and presents examples of its implementation in a representative climate GCM.

  2. General Force-Field Parametrization Scheme for Molecular Dynamics Simulations of Conjugated Materials in Solution

    PubMed Central

    2016-01-01

    We describe a general scheme to obtain force-field parameters for classical molecular dynamics simulations of conjugated polymers. We identify a computationally inexpensive methodology for calculation of accurate intermonomer dihedral potentials and partial charges. Our findings indicate that the use of a two-step methodology of geometry optimization and single-point energy calculations using DFT methods produces potentials which compare favorably to high-level theory calculations. We also report the effects of varying the conjugated backbone length and alkyl side-chain lengths on the dihedral profiles and partial charge distributions and determine the existence of converged lengths above which convergence is achieved in the force-field parameter sets. We thus determine which calculations are required for accurate parametrization and the scope of a given parameter set for variations to a given molecule. We perform simulations of long oligomers of dioctylfluorene and hexylthiophene in explicit solvent and find persistence lengths and end-length distributions consistent with experimental values. PMID:27397762

  3. Longitudinal biases in the Seychelles Dome simulated by 35 ocean-atmosphere coupled general circulation models

    NASA Astrophysics Data System (ADS)

    Nagura, Motoki; Sasaki, Wataru; Tozuka, Tomoki; Luo, Jing-Jia; Behera, Swadhin K.; Yamagata, Toshio

    2013-02-01

    Seychelles Dome refers to the shallow climatological thermocline in the southwestern Indian Ocean, where ocean wave dynamics efficiently affect sea surface temperature, allowing sea surface temperature anomalies to be predicted up to 1-2 years in advance. Accurate reproduction of the dome by ocean-atmosphere coupled general circulation models (CGCMs) is essential for successful seasonal predictions in the Indian Ocean. This study examines the Seychelles Dome as simulated by 35 CGCMs, including models used in phase five of the Coupled Model Intercomparison Project (CMIP5). Among the 35 CGCMs, 14 models erroneously produce an upwelling dome in the eastern half of the basin whereas the observed Seychelles Dome is located in the southwestern tropical Indian Ocean. The annual mean Ekman pumping velocity in these models is found to be almost zero in the southern off-equatorial region. This result is inconsistent with observations, in which Ekman upwelling acts as the main cause of the Seychelles Dome. In the models reproducing an eastward-displaced dome, easterly biases are prominent along the equator in boreal summer and fall, which result in shallow thermocline biases along the Java and Sumatra coasts via Kelvin wave dynamics and a spurious upwelling dome in the region. Compared to the CMIP3 models, the CMIP5 models are even worse in simulating the dome longitudes.

  4. General relativistic magnetohydrodynamic simulations of binary neutron star mergers with the APR4 equation of state

    NASA Astrophysics Data System (ADS)

    Endrizzi, A.; Ciolfi, R.; Giacomazzo, B.; Kastaun, W.; Kawamura, T.

    2016-08-01

    We present new results of fully general relativistic magnetohydrodynamic simulations of binary neutron star (BNS) mergers performed with the Whisky code. All the models use a piecewise polytropic approximation of the APR4 equation of state for cold matter, together with a ‘hybrid’ part to incorporate thermal effects during the evolution. We consider both equal and unequal-mass models, with total masses such that either a supramassive NS or a black hole is formed after merger. Each model is evolved with and without a magnetic field initially confined to the stellar interior. We present the different gravitational wave (GW) signals as well as a detailed description of the matter dynamics (magnetic field evolution, ejected mass, post-merger remnant/disk properties). Our simulations provide new insights into BNS mergers, the associated GW emission and the possible connection with the engine of short gamma-ray bursts (both in the ‘standard’ and in the ‘time-reversal’ scenarios) and other electromagnetic counterparts.

  5. TOUGH2: A general-purpose numerical simulator for multiphase nonisothermal flows

    SciTech Connect

    Pruess, K.

    1991-06-01

    Numerical simulators for multiphase fluid and heat flows in permeable media have been under development at Lawrence Berkeley Laboratory for more than 10 yr. Real geofluids contain noncondensible gases and dissolved solids in addition to water, and the desire to model such "compositional" systems led to the development of a flexible multicomponent, multiphase simulation architecture known as MULKOM. The design of MULKOM was based on the recognition that the mass- and energy-balance equations for multiphase fluid and heat flows in multicomponent systems have the same mathematical form, regardless of the number and nature of fluid components and phases present. Application of MULKOM to different fluid mixtures, such as water and air, or water, oil, and gas, is possible by means of appropriate "equation-of-state" (EOS) modules, which provide all thermophysical and transport parameters of the fluid mixture and the permeable medium as a function of a suitable set of primary thermodynamic variables. Investigations of thermal and hydrologic effects from emplacement of heat-generating nuclear wastes into partially water-saturated formations prompted the development and release of a specialized version of MULKOM for nonisothermal flow of water and air, named TOUGH. TOUGH is an acronym for "transport of unsaturated groundwater and heat" and is also an allusion to the tuff formations at Yucca Mountain, Nevada. The TOUGH2 code is intended to supersede TOUGH. It offers all the capabilities of TOUGH and includes a considerably more general subset of MULKOM modules with added capabilities. The paper briefly describes the simulation methodology and user features.

  6. Internal versus SST-forced atmospheric variability as simulated by an atmospheric general circulation model

    SciTech Connect

    Harzallah, A.; Sadourny, R.

    1995-03-01

    The variability of atmospheric flow is analyzed by separating it into an internal part due to atmospheric dynamics only and an external (or forced) part due to the variability of sea surface temperature forcing. The two modes of variability are identified by performing an ensemble of seven independent long-term simulations of the atmospheric response to observed SST (1970-1988) with the LMD atmospheric general circulation model. The forced variability is defined from the analysis of the ensemble mean and the internal variability from the analysis of deviations from the ensemble mean. Emphasis is put on interannual variability of sea level pressure and 500-hPa geopotential height for the Northern Hemisphere winter. In view of the large systematic errors related to the relatively small number of realizations, unbiased variance estimators have been developed. Although statistical significance is not reached in some extratropical regions, large significant extratropical responses are found in the North Pacific-Alaska sector for SLP and over western Canada and the Aleutians for 500-hPa geopotential height. The influence of SST variations on internal variability is also examined using a 7-year simulation with the climatological SST seasonal cycle. It is found that interannual SST changes strongly influence the geographical distribution of internal variability; in particular, they tend to increase it over the oceans. EOF decompositions show that the model realistically simulates the leading observed variability modes. The geographical structure of internal variability patterns is found to be similar to that of total variability, although similar modes tend to evolve rather differently in time. The zonally symmetric seesaw dominates the internal variability for both observed and climatologically prescribed SST. 46 refs., 15 figs., 3 tabs.

  7. An Experimental Analysis of General Case Simulation Instruction and the Establishment and Maintenance of Work Performance in Severely Handicapped Students.

    ERIC Educational Resources Information Center

    Woolcock, William Woodrow

    This doctoral dissertation examines the extent to which general case simulation instruction on a janitorial task sequence and a housekeeping task sequence conducted with four secondary and postsecondary age persons with moderate mental retardation resulted in generalized performance. A multiple baseline design across subjects and behaviors was…

  8. A Multi-mission Event-Driven Component-Based System for Support of Flight Software Development, ATLO, and Operations first used by the Mars Science Laboratory (MSL) Project

    NASA Technical Reports Server (NTRS)

    Dehghani, Navid; Tankenson, Michael

    2006-01-01

    This viewgraph presentation reviews the architectural description of the Mission Data Processing and Control System (MPCS). MPCS is an event-driven, multi-mission set of ground data processing components providing uplink, downlink, and data management capabilities; it will support the Mars Science Laboratory (MSL) project as its first target mission. MPCS is designed around these factors: (1) an architecture that enables plug and play; (2) strong inheritance from GDS components that have been developed for other flight projects (MER, MRO, DAWN, MSAP) and are currently being used in operations and ATLO; and (3) components that are Java-based, platform independent, and designed to consume and produce XML-formatted data.

  9. General Fluid System Simulation Program to Model Secondary Flows in Turbomachinery

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok K.; Van Hoosier, Katherine P.

    1995-01-01

    The complexity and variety of turbomachinery flow circuits created a need for a general fluid system simulation program for test data anomaly resolution as well as design review. The objective of the paper is to present a computer program that has been developed to support Marshall Space Flight Center's turbomachinery internal flow analysis efforts. The computer program solves the mass, energy, and species conservation equations at each node and the flow rate equation at each branch of the network by a novel numerical procedure which is a combination of the Newton-Raphson and successive substitution methods, and uses a thermodynamic property program for computing real gas properties. A generalized, robust, modular, and 'user-friendly' computer program has been developed to model internal flow rates, pressures, temperatures, concentrations of gas mixtures and axial thrusts. The program can be used for any network for compressible and incompressible flows, choked flow, change of phase and gaseous mixtures. The code has been validated by comparing the predictions with Space Shuttle Main Engine test data.
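    Applied to a single branch, the solution strategy described above reduces to Newton-Raphson iteration on a flow-rate residual. The toy scalar example below assumes a simple quadratic (Darcy-Weisbach-like) resistance law purely for illustration; it is not GFSSP's actual branch equation.

```python
def solve_branch_flow(dp, k_resist, m0=1.0, tol=1e-10, max_iter=50):
    """Newton-Raphson solution of the scalar branch equation
        f(m) = dp - k_resist * m * |m| = 0
    for the mass flow rate m, given a pressure difference dp and an assumed
    quadratic resistance coefficient k_resist (illustrative only)."""
    m = m0
    for _ in range(max_iter):
        residual = dp - k_resist * m * abs(m)
        derivative = -2.0 * k_resist * abs(m)
        if derivative == 0.0:
            break
        m_new = m - residual / derivative
        if abs(m_new - m) < tol:
            return m_new
        m = m_new
    return m
```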

  10. Generalized SIMD algorithm for efficient EM-PIC simulations on modern CPUs

    NASA Astrophysics Data System (ADS)

    Fonseca, Ricardo; Decyk, Viktor; Mori, Warren; Silva, Luis

    2012-10-01

    There are several relevant plasma physics scenarios where highly nonlinear and kinetic processes dominate. Further understanding of these scenarios is generally explored through relativistic particle-in-cell codes such as OSIRIS [1], but this algorithm is computationally intensive, and efficient use of high-end parallel HPC systems, exploiting all available levels of parallelism, is required. In particular, most modern CPUs include a single-instruction-multiple-data (SIMD) vector unit that can significantly speed up the calculations. In this work we present a generalized PIC-SIMD algorithm that is shown to work efficiently with different CPU (AMD, Intel, IBM) and vector unit types (2-8 way, single/double). Details on the algorithm will be given, including the vectorization strategy and memory access. We will also present performance results for the various hardware variants analyzed, focusing on floating point efficiency. Finally, we will discuss the applicability of this type of algorithm for EM-PIC simulations on GPGPU architectures [2]. [1] R. A. Fonseca et al., LNCS 2331, 342 (2002). [2] V. K. Decyk, T. V. Singh, Comput. Phys. Commun. 182, 641-648 (2011).
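    The vectorization strategy rests on a structure-of-arrays particle layout so that the inner push loops map directly onto SIMD lanes. The schematic 1D electrostatic push below (NumPy-vectorized, nearest-grid-point gather) only illustrates that data layout; it is not the OSIRIS implementation, and all names are placeholders.

```python
import numpy as np

def push_particles(x, v, E_grid, dx, dt, qm):
    """Leapfrog push over arrays of particle positions x and velocities v
    (structure-of-arrays layout, so each statement vectorizes across particles).
    E_grid: electric field on a uniform grid; qm: charge-to-mass ratio."""
    idx = np.clip((x / dx).astype(int), 0, E_grid.size - 1)  # nearest-grid-point gather
    v += qm * E_grid[idx] * dt                               # accelerate
    x += v * dt                                              # drift
    np.mod(x, E_grid.size * dx, out=x)                       # periodic wrap
    return x, v
```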

  11. Theory and Simulation of Magnetohydrodynamic Dynamos and Faraday Rotation for Plasmas of General Composition

    NASA Astrophysics Data System (ADS)

    Park, Kiwan

    2013-03-01

    seed magnetic fields by a mechanism known as the "alpha effect." The generalized theory systematically explains the simulation results, showing how magnetic energy is inversely cascaded from small to large scales, and how the large scale field growth saturates. In addition to work on the nonlinear saturation of large scale magnetic fields, the thesis also includes a study of the influence of the magnitude and distribution of the magnetic energy on the large scale field growth rate in the last chapter. Since the large scale dynamos of most astrophysical objects are likely not yet in a resistively saturated state (due to the high conductivity of astrophysical plasmas), the evolution of the magnetic field in the pre-saturation regime is most important. The results show that, within the limitations of the present study, the effect of the initial field distribution on the large scale field growth is limited only to the early growth regime, not the saturated time regime.

  12. General-relativistic magnetohydrodynamics simulations of black hole accretion disks: Dynamics and radiative properties

    NASA Astrophysics Data System (ADS)

    Shiokawa, Hotaka

    The goal of the series of studies in this thesis is to understand the black hole accretion process and predict its observational properties. The highly non-linear process involves a turbulent magnetized plasma in a general relativistic regime, thus making it hard to study analytically. We use numerical simulations, specifically general relativistic magnetohydrodynamics (GRMHD), to construct a realistic dynamical and radiation model of accretion disks. Our simulations are for black holes in low luminous regimes that probably possesses a hot and thick accretion disk. Flows in this regime are called radiatively inefficient accretion flows (RIAF). The most plausible mechanism for transporting angular momentum is turbulence induced by magnetorotational instability (MRI). The RIAF model has been used to model the supermassive black hole at the center of our Milky Way galaxy, Sagittarius A* (Sgr A*). Owing to its proximity, rich observational data of Sgr A* is available to compare with the simulation results. We focus mainly on four topics. First, we analyse numerical convergence of 3D GRMHD global disk simulations. Convergence is one of the essential factors in deciding quantitative outcomes of the simulations. We analyzed dimensionless shell-averaged quantities such as plasma beta, the azimuthal correlation length (angle) of fluid variables, and spectra of the source for four different resolutions. We found that all the variables converged with the highest resolution (384x384x256 in radial, poloidal, and azimuthal directions) except the magnetic field correlation length. It probably requires another factor of 2 in resolution to achieve convergence. Second, we studied the effect of equation of state on dynamics of GRMHD simulation and radiative transfer. Temperature of RIAF gas is high, and all the electrons are relativistic, but not the ions. In addition, the dynamical time scale of the accretion disk is shorter than the collisional time scale of electrons and ions

  13. KMCLib: A general framework for lattice kinetic Monte Carlo (KMC) simulations

    NASA Astrophysics Data System (ADS)

    Leetmaa, Mikael; Skorodumova, Natalia V.

    2014-09-01

    KMCLib is a general framework for lattice kinetic Monte Carlo (KMC) simulations. The program can handle simulations of the diffusion and reaction of millions of particles in one, two, or three dimensions, and is designed to be easily extended and customized by the user to allow for the development of complex custom KMC models for specific systems without having to modify the core functionality of the program. Analysis modules and on-the-fly elementary step diffusion rate calculations can be implemented as plugins following a well-defined API. The plugin modules are loosely coupled to the core KMCLib program via the Python scripting language. KMCLib is written as a Python module with a backend C++ library. After initial compilation of the backend library KMCLib is used as a Python module; input to the program is given as a Python script executed using a standard Python interpreter. We give a detailed description of the features and implementation of the code and demonstrate its scaling behavior and parallel performance with a simple one-dimensional A-B-C lattice KMC model and a more complex three-dimensional lattice KMC model of oxygen-vacancy diffusion in a fluorite structured metal oxide. KMCLib can keep track of individual particle movements and includes tools for mean square displacement analysis, and is therefore particularly well suited for studying diffusion processes at surfaces and in solids. Catalogue identifier: AESZ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AESZ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 49 064 No. of bytes in distributed program, including test data, etc.: 1 575 172 Distribution format: tar.gz Programming language: Python and C++. Computer: Any computer that can run a C++ compiler and a Python interpreter. Operating system: Tested on Ubuntu 12
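    Independent of KMCLib's Python/C++ interface, the rejection-free selection loop that underlies any lattice KMC code of this kind looks schematically as follows; the rate list is assumed to be rebuilt after each executed process.

```python
import math
import random

def kmc_step(rates):
    """One rejection-free (BKL/Gillespie-style) KMC step.
    `rates` is a list of rate constants for the currently available elementary
    processes.  Returns (index of the chosen process, time increment)."""
    total = sum(rates)
    target = random.random() * total
    acc = 0.0
    for i, r in enumerate(rates):
        acc += r
        if acc >= target:
            break
    dt = -math.log(1.0 - random.random()) / total   # exponentially distributed waiting time
    return i, dt
```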

  14. Magnetorotational collapse of massive stellar cores to neutron stars: Simulations in full general relativity

    NASA Astrophysics Data System (ADS)

    Shibata, Masaru; Liu, Yuk Tung; Shapiro, Stuart L.; Stephens, Branson C.

    2006-11-01

    We study magnetohydrodynamic (MHD) effects arising in the collapse of magnetized, rotating, massive stellar cores to proto-neutron stars (PNSs). We perform axisymmetric numerical simulations in full general relativity with a hybrid equation of state. The formation and early evolution of a PNS are followed with a grid of 2500×2500 zones, which provides better resolution than in previous (Newtonian) studies. We confirm that significant differential rotation results even when the rotation of the progenitor is initially uniform. Consequently, the magnetic field is amplified both by magnetic winding and the magnetorotational instability (MRI). Even if the magnetic energy E_EM is much smaller than the rotational kinetic energy T_rot at the time of PNS formation, the ratio E_EM/T_rot increases to 0.1-0.2 by magnetic winding. Following PNS formation, MHD outflows lead to losses of rest mass, energy, and angular momentum from the system. The earliest outflow is produced primarily by the increasing magnetic stress caused by magnetic winding. The MRI amplifies the poloidal field and increases the magnetic stress, causing further angular momentum transport and helping to drive the outflow. After the magnetic field saturates, a nearly stationary, collimated magnetic field forms near the rotation axis and a Blandford-Payne type outflow develops along the field lines. These outflows remove angular momentum from the PNS at a rate given by J̇ ~ η E_EM C_B, where η is a constant of order ~0.1 and C_B is a typical ratio of poloidal to toroidal field strength. As a result, the rotation period quickly increases for a strongly magnetized PNS until the degree of differential rotation decreases. Our simulations suggest that rapidly rotating, magnetized PNSs may not give rise to rapidly rotating neutron stars.

  15. Generalized Scalable Multiple Copy Algorithms for Molecular Dynamics Simulations in NAMD.

    PubMed

    Jiang, Wei; Phillips, James C; Huang, Lei; Fajer, Mikolai; Meng, Yilin; Gumbart, James C; Luo, Yun; Schulten, Klaus; Roux, Benoît

    2014-03-01

    Computational methodologies that couple the dynamical evolution of a set of replicated copies of a system of interest offer powerful and flexible approaches to characterize complex molecular processes. Such multiple copy algorithms (MCAs) can be used to enhance sampling, compute reversible work and free energies, as well as refine transition pathways. Widely used examples of MCAs include temperature and Hamiltonian-tempering replica-exchange molecular dynamics (T-REMD and H-REMD), alchemical free energy perturbation with lambda replica-exchange (FEP/λ-REMD), umbrella sampling with Hamiltonian replica exchange (US/H-REMD), and string method with swarms-of-trajectories conformational transition pathways. Here, we report a robust and general implementation of MCAs for molecular dynamics (MD) simulations in the highly scalable program NAMD built upon the parallel programming system Charm++. Multiple concurrent NAMD instances are launched with internal partitions of Charm++ and located continuously within a single communication world. Messages between NAMD instances are passed by low-level point-to-point communication functions, which are accessible through NAMD's Tcl scripting interface. The communication-enabled Tcl scripting provides a sustainable application interface for end users to realize generalized MCAs without modifying the source code. Illustrative applications of MCAs with fine-grained inter-copy communication structure, including global lambda exchange in FEP/λ-REMD, window swapping US/H-REMD in multidimensional order parameter space, and string method with swarms-of-trajectories were carried out on IBM Blue Gene/Q to demonstrate the versatility and massive scalability of the present implementation. PMID:24944348
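
    The copy-exchange step at the heart of such algorithms reduces to a Metropolis test between neighbouring replicas. A minimal Python sketch of the acceptance criterion for temperature replica exchange (T-REMD) is given below; in NAMD this logic would be driven from Tcl scripts, and the function shown here is only an illustration of the criterion, not NAMD's interface.

        # Metropolis acceptance test for swapping configurations between two
        # temperature replicas (T-REMD). Energies in kcal/mol.
        import math
        import random

        K_B = 0.0019872041  # Boltzmann constant in kcal/(mol K)

        def accept_swap(energy_i, temp_i, energy_j, temp_j, rng=random):
            """Return True if replicas i and j should exchange configurations."""
            beta_i = 1.0 / (K_B * temp_i)
            beta_j = 1.0 / (K_B * temp_j)
            # Acceptance probability: min(1, exp[(beta_i - beta_j)(E_i - E_j)]).
            delta = (beta_i - beta_j) * (energy_j - energy_i)
            return delta <= 0.0 or rng.random() < math.exp(-delta)

        # Example: neighbouring replicas at 300 K and 310 K.
        print(accept_swap(-1500.2, 300.0, -1498.7, 310.0))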

  16. Sleep promotes consolidation and generalization of extinction learning in simulated exposure therapy for spider fear.

    PubMed

    Pace-Schott, Edward F; Verga, Patrick W; Bennett, Tobias S; Spencer, Rebecca M C

    2012-08-01

    Simulated exposure therapy for spider phobia served as a clinically naturalistic model to study effects of sleep on extinction. Spider-fearing, young adult women (N = 66), instrumented for skin conductance response (SCR), heart rate acceleration (HRA) and corrugator electromyography (EMG), viewed 14 identical 1-min videos of a behaving spider before a 12-hr delay containing a normal night's Sleep (N = 20) or continuous daytime Wake (N = 23), or a 2-hr delay of continuous wake in the Morning (N = 11) or Evening (N = 12). Following the delay, all groups viewed this same video 6 times followed by six 1-min videos of a novel spider. After each video, participants rated disgust, fearfulness and unpleasantness. In all 4 groups, all measures except corrugator EMG diminished across Session 1 (extinction learning) and, excepting SCR to a sudden noise, increased from the old to novel spider in Session 2. In Wake only, summed subjective ratings and SCR to the old spider significantly increased across the delay (extinction loss) and were greater for the novel vs. the old spider when it was equally novel at the beginning of Session 1 (sensitization). In Sleep only, SCR to a sudden noise decreased across the inter-session delay (extinction augmentation) and, along with HRA, was lower to the novel spider than initially to the old spider in Session 1 (extinction generalization). None of the above differentiated Morning and Evening groups suggesting that intervening sleep, rather than time-of-testing, produced differences between Sleep and Wake. Thus, sleep following exposure therapy may promote retention and generalization of extinction learning. PMID:22578824

  17. KMCLib: A general framework for lattice kinetic Monte Carlo (KMC) simulations

    NASA Astrophysics Data System (ADS)

    Leetmaa, Mikael; Skorodumova, Natalia V.

    2014-09-01

    KMCLib is a general framework for lattice kinetic Monte Carlo (KMC) simulations. The program can handle simulations of the diffusion and reaction of millions of particles in one, two, or three dimensions, and is designed to be easily extended and customized by the user to allow for the development of complex custom KMC models for specific systems without having to modify the core functionality of the program. Analysis modules and on-the-fly elementary step diffusion rate calculations can be implemented as plugins following a well-defined API. The plugin modules are loosely coupled to the core KMCLib program via the Python scripting language. KMCLib is written as a Python module with a backend C++ library. After initial compilation of the backend library KMCLib is used as a Python module; input to the program is given as a Python script executed using a standard Python interpreter. We give a detailed description of the features and implementation of the code and demonstrate its scaling behavior and parallel performance with a simple one-dimensional A-B-C lattice KMC model and a more complex three-dimensional lattice KMC model of oxygen-vacancy diffusion in a fluorite structured metal oxide. KMCLib can keep track of individual particle movements and includes tools for mean square displacement analysis, and is therefore particularly well suited for studying diffusion processes at surfaces and in solids. Catalogue identifier: AESZ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AESZ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 49 064 No. of bytes in distributed program, including test data, etc.: 1 575 172 Distribution format: tar.gz Programming language: Python and C++. Computer: Any computer that can run a C++ compiler and a Python interpreter. Operating system: Tested on Ubuntu 12

  18. Hidden Conformation Events in DNA Base Extrusions: A Generalized Ensemble Path Optimization and Equilibrium Simulation Study.

    PubMed

    Cao, Liaoran; Lv, Chao; Yang, Wei

    2013-08-13

    DNA base extrusion is a crucial component of many biomolecular processes. Elucidating how bases are selectively extruded from the interiors of double-stranded DNA is pivotal to accurately understanding and efficiently sampling this general type of conformational transition. In this work, the on-the-path random walk (OTPRW) method, which is the first generalized ensemble sampling scheme designed for finite-temperature-string path optimizations, was improved and applied to obtain the minimum free energy path (MFEP) and the free energy profile of a classical B-DNA major-groove base extrusion pathway. Along the MFEP, an intermediate state and the corresponding transition state were located and characterized. The MFEP result suggests that a base-plane-elongation event rather than the commonly focused base-flipping event is dominant in the transition state formation portion of the pathway; and the energetic penalty at the transition state is mainly introduced by the stretching of the Watson-Crick base pair. Moreover, to facilitate the essential base-plane-elongation dynamics, the surrounding environment of the flipped base needs to be intimately involved. Further, taking advantage of the extended-dynamics nature of the OTPRW Hamiltonian, an equilibrium generalized ensemble simulation was performed along the optimized path; and based on the collected samples, several base-flipping (opening) angle collective variables were evaluated. Consistent with the MFEP result, the collective variable analysis reveals that none of these commonly employed flipping (opening) angles alone can adequately represent the base extrusion pathway, especially in the pre-transition-state portion. As further revealed by the collective variable analysis, the base-pairing partner of the extrusion target undergoes a series of in-plane rotations to facilitate the base-plane-elongation dynamics. A base-plane rotation angle is identified to be a possible reaction coordinate to represent

  19. A General Hybrid Radiation Transport Scheme for Star Formation Simulations on an Adaptive Grid

    NASA Astrophysics Data System (ADS)

    Klassen, Mikhail; Kuiper, Rolf; Pudritz, Ralph E.; Peters, Thomas; Banerjee, Robi; Buntemeyer, Lars

    2014-12-01

    Radiation feedback plays a crucial role in the process of star formation. In order to simulate the thermodynamic evolution of disks, filaments, and the molecular gas surrounding clusters of young stars, we require an efficient and accurate method for solving the radiation transfer problem. We describe the implementation of a hybrid radiation transport scheme in the adaptive grid-based FLASH general magnetohydrodynamics code. The hybrid scheme splits the radiative transport problem into a raytracing step and a diffusion step. The raytracer captures the first absorption event, as stars irradiate their environments, while the evolution of the diffuse component of the radiation field is handled by a flux-limited diffusion solver. We demonstrate the accuracy of our method through a variety of benchmark tests including the irradiation of a static disk, subcritical and supercritical radiative shocks, and thermal energy equilibration. We also demonstrate the capability of our method for casting shadows and calculating gas and dust temperatures in the presence of multiple stellar sources. Our method enables radiation-hydrodynamic studies of young stellar objects, protostellar disks, and clustered star formation in magnetized, filamentary environments.
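
    For reference, the diffusion step of a hybrid scheme of this kind typically evolves the radiation energy density with a flux-limited diffusion closure; a commonly used form (generic notation, with the Levermore-Pomraning limiter shown purely as an example, which need not be the limiter used in this particular code) is

        \mathbf{F} = -\frac{c\,\lambda(R)}{\kappa_R \rho}\,\nabla E,
        \qquad
        R = \frac{|\nabla E|}{\kappa_R \rho\, E},
        \qquad
        \lambda(R) = \frac{2 + R}{6 + 3R + R^2},

    where E is the radiation energy density and κ_R the Rosseland mean opacity; the limiter interpolates between the optically thick diffusion limit (λ → 1/3) and the free-streaming limit (|F| → cE).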

  20. Generalized Langevin models of molecular dynamics simulations with applications to ion channels

    NASA Astrophysics Data System (ADS)

    Gordon, Dan; Krishnamurthy, Vikram; Chung, Shin-Ho

    2009-10-01

    We present a new methodology, which combines molecular dynamics and stochastic dynamics, for modeling the permeation of ions across biological ion channels. Using molecular dynamics, a free energy profile is determined for the ion(s) in the channel, and the distribution of random and frictional forces is measured over discrete segments of the ion channel. The parameters thus determined are used in stochastic dynamics simulations based on the nonlinear generalized Langevin equation. We first provide the theoretical basis of this procedure, which we refer to as "distributional molecular dynamics," and detail the methods for estimating the parameters from molecular dynamics to be used in stochastic dynamics. We test the technique by applying it to study the dynamics of ion permeation across the gramicidin pore. Given the known difficulty in modeling the conduction of ions in gramicidin using classical molecular dynamics, there is a degree of uncertainty regarding the validity of the MD-derived potential of mean force (PMF) for gramicidin. Using our techniques and systematically changing the PMF, we are able to reverse engineer a modified PMF which gives a current-voltage curve closely matching experimental results.
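
    For reference, the nonlinear generalized Langevin equation underlying the stochastic-dynamics stage can be written, in generic one-dimensional notation (not necessarily the authors' symbols), as

        m\,\frac{dv(t)}{dt} = -\frac{\partial W(x)}{\partial x}
            - \int_0^{t} \gamma(t - t')\, v(t')\, dt' + R(t),
        \qquad
        \langle R(t)\,R(t') \rangle = k_B T\, \gamma(|t - t'|),

    where W(x) is the potential of mean force obtained from the molecular dynamics stage, γ the memory (friction) kernel, and R(t) the random force tied to γ by the fluctuation-dissipation relation.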

  1. Stationary eddies in the Mars general circulation as simulated by the NASA-Ames GCM

    NASA Technical Reports Server (NTRS)

    Barnes, J. R.; Pollack, J. B.; Haberle, Robert M.

    1993-01-01

    Quasistationary eddies are prominent in a large set of simulations of the Mars general circulation performed with the NASA-Ames GCM. Various spacecraft observations have at least hinted at the existence of such eddies in the Mars atmosphere. The GCM stationary eddies appear to be forced primarily by the large Mars topography, and (to a much lesser degree) by spatial variations in the surface albedo and thermal inertia. The stationary eddy circulations exhibit the largest amplitudes at high altitudes (above 30-40 km) in the winter extratropical regions. In these regions they are of planetary scale, characterized largely by zonal wavenumbers 1 and 2. Southern Hemisphere winter appears to be dominated by a very strong wave 1 pattern, with both waves 1 and 2 being prominent in the Northern Hemisphere winter regime. This difference seems to be basically understandable in terms of differences in the topography in the two hemispheres. The stationary eddies in the northern winter extratropics are found to increase in amplitude with dust loading. This behavior appears to be at least partly associated with changes in the structure of the zonal-mean flow that favor a greater response to wave 1 topographic forcing. There are also strong stationary eddy circulations in the tropics and in the summer hemisphere. The eddies in the summer subtropics and extratropics are substantially stronger in southern summer than in northern summer. The summer hemisphere stationary circulations are relatively shallow and are characterized by smaller zonal scales than those in the winter extratropics.

  2. Simulating the Generalized Gibbs Ensemble (GGE): A Hilbert space Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Alba, Vincenzo

    By combining classical Monte Carlo and Bethe ansatz techniques we devise a numerical method to construct the Truncated Generalized Gibbs Ensemble (TGGE) for the spin-1/2 isotropic Heisenberg (XXX) chain. The key idea is to sample the Hilbert space of the model with the appropriate GGE probability measure. The method can be extended to other integrable systems, such as the Lieb-Liniger model. We benchmark the approach focusing on GGE expectation values of several local observables. As finite-size effects decay exponentially with system size, moderately large chains are sufficient to extract thermodynamic quantities. The Monte Carlo results are in agreement with both the Thermodynamic Bethe Ansatz (TBA) and the Quantum Transfer Matrix approach (QTM). Remarkably, it is possible to extract in a simple way the steady-state Bethe-Gaudin-Takahashi (BGT) roots distributions, which encode complete information about the GGE expectation values in the thermodynamic limit. Finally, it is straightforward to simulate extensions of the GGE, in which, besides the local integral of motion (local charges), one includes arbitrary functions of the BGT roots. As an example, we include in the GGE the first non-trivial quasi-local integral of motion.
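
    For reference, the (truncated) GGE probability measure that such a Hilbert-space sampling targets has the standard form (generic notation)

        \hat{\rho}_{\mathrm{GGE}} = \frac{1}{Z}\,
            \exp\!\Big(-\sum_{n} \lambda_n \hat{Q}_n\Big),
        \qquad
        Z = \mathrm{Tr}\,\exp\!\Big(-\sum_{n} \lambda_n \hat{Q}_n\Big),

    where the Q̂_n are the (quasi-)local conserved charges retained in the truncation and the Lagrange multipliers λ_n are fixed by the initial-state expectation values of the Q̂_n.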

  3. General circulation model simulations of recent cooling in the east-central United States

    NASA Astrophysics Data System (ADS)

    Robinson, Walter A.; Reudy, Reto; Hansen, James E.

    2002-12-01

    In ensembles of retrospective general circulation model (GCM) simulations, surface temperatures in the east-central United States cool between 1951 and 1997. This cooling, which is broadly consistent with observed surface temperatures, is present in GCM experiments driven by observed time varying sea-surface temperatures (SSTs) in the tropical Pacific, whether or not increasing greenhouse gases and other time varying climate forcings are included. Here we focus on ensembles with fixed radiative forcing and with observed varying SST in different regions. In these experiments the trend and variability in east-central U.S. surface temperatures are tied to tropical Pacific SSTs. Warm tropical Pacific SSTs cool U.S. temperatures by diminishing solar heating through an increase in cloud cover. These associations are embedded within a year-round response to warm tropical Pacific SST that features tropospheric warming throughout the tropics and regions of tropospheric cooling in midlatitudes. Precipitable water vapor over the Gulf of Mexico and the Caribbean and the tropospheric thermal gradient across the Gulf Coast of the United States increase when the tropical Pacific is warm. In observations, recent warming in the tropical Pacific is also associated with increased precipitable water over the southeast United States. The observed cooling in the east-central United States, relative to the rest of the globe, is accompanied by increased cloud cover, though year-to-year variations in cloud cover, U.S. surface temperatures, and tropical Pacific SST are less tightly coupled in observations than in the GCM.

  4. A general hybrid radiation transport scheme for star formation simulations on an adaptive grid

    SciTech Connect

    Klassen, Mikhail; Pudritz, Ralph E.; Kuiper, Rolf; Peters, Thomas; Banerjee, Robi; Buntemeyer, Lars

    2014-12-10

    Radiation feedback plays a crucial role in the process of star formation. In order to simulate the thermodynamic evolution of disks, filaments, and the molecular gas surrounding clusters of young stars, we require an efficient and accurate method for solving the radiation transfer problem. We describe the implementation of a hybrid radiation transport scheme in the adaptive grid-based FLASH general magnetohydrodynamics code. The hybrid scheme splits the radiative transport problem into a raytracing step and a diffusion step. The raytracer captures the first absorption event, as stars irradiate their environments, while the evolution of the diffuse component of the radiation field is handled by a flux-limited diffusion solver. We demonstrate the accuracy of our method through a variety of benchmark tests including the irradiation of a static disk, subcritical and supercritical radiative shocks, and thermal energy equilibration. We also demonstrate the capability of our method for casting shadows and calculating gas and dust temperatures in the presence of multiple stellar sources. Our method enables radiation-hydrodynamic studies of young stellar objects, protostellar disks, and clustered star formation in magnetized, filamentary environments.

  5. Theoretical analysis and simulations of the generalized Lotka-Volterra model

    NASA Astrophysics Data System (ADS)

    Malcai, Ofer; Biham, Ofer; Richmond, Peter; Solomon, Sorin

    2002-09-01

    The dynamics of generalized Lotka-Volterra systems is studied by theoretical techniques and computer simulations. These systems describe the time evolution of the wealth distribution of individuals in a society, as well as of the market values of firms in the stock market. The individual wealths or market values are given by a set of time dependent variables w_i, i=1,...,N. The equations include a stochastic autocatalytic term (representing investments), a drift term (representing social security payments), and a time dependent saturation term (due to the finite size of the economy). The w_i's turn out to exhibit a power-law distribution of the form P(w) ~ w^(-1-α). It is shown analytically that the exponent α can be expressed as a function of one parameter, which is the ratio between the constant drift component (social security) and the fluctuating component (investments). This result provides a link between the lower and upper cutoffs of this distribution, namely, between the resources available to the poorest and those available to the richest in a given society. The value of α is found to be insensitive to variations in the saturation term, which represent the expansion or contraction of the economy. The results are of much relevance to empirical studies that show that the distribution of the individual wealth in different countries during different periods in the 20th century has followed a power-law distribution with 1 < α < 2.
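
    A minimal discrete-time Python realization of the dynamics described above (stochastic multiplicative growth, a drift toward the mean wealth, and a saturation term) is sketched below; the parameter values are illustrative and are not taken from the paper.

        # Sketch of generalized Lotka-Volterra wealth dynamics:
        # w_i <- (1 + eta_i) w_i + a <w> - c <w> w_i, with eta_i a random factor.
        import random

        def glv_step(w, a=0.05, sigma=0.1, c=0.05):
            """One synchronous update of all wealths w_i."""
            mean_w = sum(w) / len(w)
            new_w = []
            for wi in w:
                eta = random.gauss(0.0, sigma)        # stochastic autocatalytic term
                wi_new = (1.0 + eta) * wi + a * mean_w - c * mean_w * wi
                new_w.append(max(wi_new, 1e-12))      # keep wealths positive
            return new_w

        w = [1.0] * 1000
        for _ in range(5000):
            w = glv_step(w)
        # The stationary distribution is expected to develop a power-law tail
        # P(w) ~ w^(-1-alpha), with alpha set by the drift-to-fluctuation ratio.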

  6. Comparison of disjunctive kriging to generalized probability kriging in application to the estimation of simulated and real data

    SciTech Connect

    Carr, J.R. (Dept. of Geological Sciences); Mao, Nai-hsien

    1992-01-01

    Disjunctive kriging has been compared previously to multigaussian kriging and indicator cokriging for estimation of cumulative distribution functions; it has yet to be compared extensively to probability kriging. Herein, disjunctive kriging and generalized probability kriging are applied to one real and one simulated data set and compared for estimation of the cumulative distribution functions. Generalized probability kriging is an extension, based on generalized cokriging theory, of simple probability kriging for the estimation of the indicator and uniform transforms at each cutoff, Z_k. The disjunctive kriging and the generalized probability kriging give similar results for the simulated data with a normal distribution, but differ considerably for the real data set with a non-normal distribution.

  7. General Relativistic Magnetohydrodynamic Simulations of Magnetically Choked Accretion Flows around Black Holes

    SciTech Connect

    McKinney, Jonathan C.; Tchekhovskoy, Alexander; Blandford, Roger D.

    2012-04-26

    Black hole (BH) accretion flows and jets are qualitatively affected by the presence of ordered magnetic fields. We study fully three-dimensional global general relativistic magnetohydrodynamic (MHD) simulations of radially extended and thick (height H to cylindrical radius R ratio of |H/R| ~ 0.2-1) accretion flows around BHs with various dimensionless spins (a/M, with BH mass M) and with initially toroidally-dominated (φ-directed) and poloidally-dominated (R-z directed) magnetic fields. Firstly, for toroidal field models and BHs with high enough |a/M|, coherent large-scale (i.e. >> H) dipolar poloidal magnetic flux patches emerge, thread the BH, and generate transient relativistic jets. Secondly, for poloidal field models, poloidal magnetic flux readily accretes through the disk from large radii and builds up to a natural saturation point near the BH. While models with |H/R| ~ 1 and |a/M| ≤ 0.5 do not launch jets due to quenching by mass infall, for sufficiently high |a/M| or low |H/R| the polar magnetic field compresses the inflow into a geometrically thin, highly non-axisymmetric 'magnetically choked accretion flow' (MCAF) within which the standard linear magneto-rotational instability is suppressed. The condition of a highly magnetized state over most of the horizon is optimal for the Blandford-Znajek mechanism, which generates persistent relativistic jets with ≳100% efficiency for |a/M| ≳ 0.9. A magnetic Rayleigh-Taylor and Kelvin-Helmholtz unstable magnetospheric interface forms between the compressed inflow and the bulging jet magnetosphere, which drives a new jet-disk oscillation (JDO) type of quasi-periodic oscillation (QPO) mechanism. The high-frequency QPO has a spherical harmonic |m| = 1 mode period of τ ~ 70 GM/c³ for a/M ≈ 0.9 with coherence quality factors Q ≳ 10. Overall, our models are qualitatively distinct from most prior MHD simulations (typically, |H/R| << 1 and poloidal flux is limited by

  8. Simulating Titan's methane cycle with the TitanWRF General Circulation Model

    NASA Astrophysics Data System (ADS)

    Newman, Claire E.; Richardson, Mark I.; Lian, Yuan; Lee, Christopher

    2016-03-01

    Observations provide increasing evidence of a methane hydrological cycle on Titan. Earth-based and Cassini-based monitoring has produced data on the seasonal variation in cloud activity and location, with clouds being observed at increasingly low latitudes as Titan moved out of southern summer. Lakes are observed at high latitudes, with far larger lakes and greater areal coverage in the northern hemisphere, where some shorelines extend down as far as 50°N. Rainfall at some point in the past is suggested by the pattern of flow features on the surface at the Huygens landing site, while recent rainfall is suggested by surface change. As with the water cycle on Earth, the methane cycle on Titan is both impacted by tropospheric dynamics and likely able to impact this circulation via feedbacks. Here we use the 3D TitanWRF General Circulation Model (GCM) to simulate Titan's methane cycle. In this initial work we use a simple large-scale condensation scheme with latent heat feedbacks and a finite surface reservoir of methane, and focus on large-scale dynamical interactions between the atmospheric circulation and methane, and how these impact seasonal changes and the long term (steady state) behavior of the methane cycle. We note five major conclusions: (1) Condensation and precipitation in the model is sporadic in nature, with interannual variability in its timing and location, but tends to occur in association with both (a) frequent strong polar upwelling during spring and summer in each hemisphere, and (b) the Inter-Tropical Convergence Zone (ITCZ), a region of increased convergence and upwelling due to the seasonally shifting Hadley cells. (2) An active tropospheric methane cycle affects the stratospheric circulation, slightly weakening the stratospheric superrotation produced. (3) Latent heating feedback strongly influences surface and near-surface temperatures, narrowing the latitudinal range of the ITCZ, and changing the distribution - and generally weakening the

  9. Limits to high-speed simulations of spiking neural networks using general-purpose computers.

    PubMed

    Zenke, Friedemann; Gerstner, Wulfram

    2014-01-01

    To understand how the central nervous system performs computations using recurrent neuronal circuitry, simulations have become an indispensable tool for theoretical neuroscience. To study neuronal circuits and their ability to self-organize, increasing attention has been directed toward synaptic plasticity. In particular spike-timing-dependent plasticity (STDP) creates specific demands for simulations of spiking neural networks. On the one hand a high temporal resolution is required to capture the millisecond timescale of typical STDP windows. On the other hand network simulations have to evolve over hours up to days, to capture the timescale of long-term plasticity. To do this efficiently, fast simulation speed is the crucial ingredient rather than large neuron numbers. Using different medium-sized network models consisting of several thousands of neurons and off-the-shelf hardware, we compare the simulation speed of the simulators: Brian, NEST and Neuron as well as our own simulator Auryn. Our results show that real-time simulations of different plastic network models are possible in parallel simulations in which numerical precision is not a primary concern. Even so, the speed-up margin of parallelism is limited and boosting simulation speeds beyond one tenth of real-time is difficult. By profiling simulation code we show that the run times of typical plastic network simulations encounter a hard boundary. This limit is partly due to latencies in the inter-process communications and thus cannot be overcome by increased parallelism. Overall, these results show that to study plasticity in medium-sized spiking neural networks, adequate simulation tools are readily available which run efficiently on small clusters. However, to run simulations substantially faster than real-time, special hardware is a prerequisite. PMID:25309418

  10. A Generalized Fluid System Simulation Program to Model Flow Distribution in Fluid Networks

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok; Bailey, John W.; Schallhorn, Paul; Steadman, Todd

    1998-01-01

    This paper describes a general purpose computer program for analyzing steady state and transient flow in a complex network. The program is capable of modeling phase changes, compressibility, mixture thermodynamics and external body forces such as gravity and centrifugal. The program's preprocessor allows the user to interactively develop a fluid network simulation consisting of nodes and branches. Mass, energy and specie conservation equations are solved at the nodes; the momentum conservation equations are solved in the branches. The program contains subroutines for computing "real fluid" thermodynamic and thermophysical properties for 33 fluids. The fluids are: helium, methane, neon, nitrogen, carbon monoxide, oxygen, argon, carbon dioxide, fluorine, hydrogen, parahydrogen, water, kerosene (RP-1), isobutane, butane, deuterium, ethane, ethylene, hydrogen sulfide, krypton, propane, xenon, R-11, R-12, R-22, R-32, R-123, R-124, R-125, R-134A, R-152A, nitrogen trifluoride and ammonia. The program also provides the options of using any incompressible fluid with constant density and viscosity or ideal gas. Seventeen different resistance/source options are provided for modeling momentum sources or sinks in the branches. These options include: pipe flow, flow through a restriction, non-circular duct, pipe flow with entrance and/or exit losses, thin sharp orifice, thick orifice, square edge reduction, square edge expansion, rotating annular duct, rotating radial duct, labyrinth seal, parallel plates, common fittings and valves, pump characteristics, pump power, valve with a given loss coefficient, and a Joule-Thompson device. The system of equations describing the fluid network is solved by a hybrid numerical method that is a combination of the Newton-Raphson and successive substitution methods. This paper also illustrates the application and verification of the code by comparison with Hardy Cross method for steady state flow and analytical solution for unsteady flow.
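
    To illustrate the numerical idea behind the hybrid Newton-Raphson solution of the network equations, the toy Python sketch below balances mass conservation and equal branch pressure drop for two parallel pipes; the resistance law and parameter values are hypothetical and are not the program's actual routines or property models.

        # Toy network: two parallel pipes share a total flow Q_tot and must have
        # equal pressure drop R_i * Q_i * |Q_i|; solved by Newton-Raphson.
        def residuals(q, q_tot=10.0, r1=1.0, r2=4.0):
            q1, q2 = q
            return [q1 + q2 - q_tot,                          # node mass balance
                    r1 * q1 * abs(q1) - r2 * q2 * abs(q2)]    # equal branch pressure drop

        def jacobian(q, r1=1.0, r2=4.0):
            q1, q2 = q
            return [[1.0, 1.0],
                    [2.0 * r1 * abs(q1), -2.0 * r2 * abs(q2)]]

        def newton_2x2(q, tol=1e-10, max_iter=50):
            for _ in range(max_iter):
                f = residuals(q)
                if abs(f[0]) + abs(f[1]) < tol:
                    break
                (a, b), (c, d) = jacobian(q)
                det = a * d - b * c
                dq1 = (-f[0] * d + f[1] * b) / det    # solve J dq = -f by Cramer's rule
                dq2 = (-f[1] * a + f[0] * c) / det
                q = [q[0] + dq1, q[1] + dq2]
            return q

        print(newton_2x2([5.0, 5.0]))   # expected split is roughly [6.67, 3.33]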

  11. Use of a PhET Interactive Simulation in General Chemistry Laboratory: Models of the Hydrogen Atom

    ERIC Educational Resources Information Center

    Clark, Ted M.; Chamberlain, Julia M.

    2014-01-01

    An activity supporting the PhET interactive simulation, Models of the Hydrogen Atom, has been designed and used in the laboratory portion of a general chemistry course. This article describes the framework used to successfully accomplish implementation on a large scale. The activity guides students through a comparison and analysis of the six…

  12. Accuracy of highly sexually active gay and bisexual men's predictions of their daily likelihood of anal sex and its relevance for intermittent event-driven HIV Pre-Exposure Prophylaxis

    PubMed Central

    Parsons, Jeffrey T.; Rendina, H. Jonathon; Grov, Christian; Ventuneac, Ana; Mustanski, Brian

    2014-01-01

    Objective: We sought to examine highly sexually active gay and bisexual men's accuracy in predicting their sexual behavior for the purposes of informing future research on intermittent, event-driven HIV Pre-Exposure Prophylaxis (PrEP). Design: For 30 days, 92 HIV-negative men completed a daily survey about their sexual behavior (n = 1,688 days of data) and indicated their likelihood of having anal sex with a casual male partner the following day. Method: We utilized multilevel modeling to analyze the association between self-reported likelihood of and subsequent engagement in anal sex. Results: We found a linear association between men's reported likelihood of anal sex with casual partners and the actual probability of engaging in sex, though men overestimated the likelihood of sex. Overall, we found that men were better at predicting when they would not have sex than when they would, particularly if any likelihood value greater than 0% was treated as indicative that sex might occur. We found no evidence that men's accuracy of prediction was affected by whether it was a weekend or whether they were using substances, though both did increase the probability of sex. Discussion: These results suggested that, were men taking event-driven intermittent PrEP, 14% of doses could have been safely skipped with a minimal rate of false negatives using guidelines of taking a dose unless there was no chance (i.e., 0% likelihood) of sex on the following day. This would result in a savings of over $1,300 per year in medication costs per participant. PMID:25559594

  13. General Agreement on Tariff and Trade Negotiations: A Computer-Based Simulation.

    ERIC Educational Resources Information Center

    Manrique, Gabriel G.

    This paper recommends the use of a computer simulation about trade and tariff negotiations to reinforce and apply principles learned in undergraduate international trade courses and to provide students with an opportunity to use the advanced features of Symphony, a computer spreadsheet. This simulation is a game in which both the class and…

  14. Axisymmetric general relativistic simulations of the accretion-induced collapse of white dwarfs

    SciTech Connect

    Abdikamalov, E. B.; Ott, C. D.; Rezzolla, L.; Dessart, L.; Dimmelmeier, H.; Marek, A.; Janka, H.-T.

    2010-02-15

    The accretion-induced collapse (AIC) of a white dwarf may lead to the formation of a protoneutron star and a collapse-driven supernova explosion. This process represents a path alternative to thermonuclear disruption of accreting white dwarfs in type Ia supernovae. In the AIC scenario, the supernova explosion energy is expected to be small and the resulting transient short-lived, making it hard to detect by electromagnetic means alone. Neutrino and gravitational-wave (GW) observations may provide crucial information necessary to reveal a potential AIC. Motivated by the need for systematic predictions of the GW signature of AIC, we present results from an extensive set of general-relativistic AIC simulations using a microphysical finite-temperature equation of state and an approximate treatment of deleptonization during collapse. Investigating a set of 114 progenitor models in axisymmetric rotational equilibrium, with a wide range of rotational configurations, temperatures and central densities, and resulting white dwarf masses, we extend previous Newtonian studies and find that the GW signal has a generic shape akin to what is known as a 'type III' signal in the literature. Despite this reduction to a single type of waveform, we show that the emitted GWs carry information that can be used to constrain the progenitor and the postbounce rotation. We discuss the detectability of the emitted GWs, showing that the signal-to-noise ratio for current or next-generation interferometer detectors could be high enough to detect such events in our Galaxy. Furthermore, we contrast the GW signals of AIC and rotating massive star iron core collapse and find that they can be distinguished, but only if the distance to the source is known and a detailed reconstruction of the GW time series from detector data is possible. Some of our AIC models form massive quasi-Keplerian accretion disks after bounce. The disk mass is very sensitive to progenitor mass and angular momentum

  15. Simulated surgery in the summative assessment of general practice training: results of a trial in the Trent and Yorkshire regions.

    PubMed Central

    Allen, J; Evans, A; Foulkes, J; French, A

    1998-01-01

    BACKGROUND: General practice registrars are now required to undertake a summative assessment of their consulting skills. Simulated surgeries have been developed as an alternative to the existing method of assessing video-recorded consultations. AIM: To evaluate the simulated surgery assessment method, developed in the General Practice Postgraduate Education Department in Leicester, for use in assessing general practice consultation skills. METHOD: General practice registrars in Leicester performed two eight-patient simulated surgeries separated by four weeks. Assessment outcomes were compared to demonstrate the consistency of the method. Pilot surgeries in Yorkshire were videotaped, and then rated by video-raters trained for summative assessment. RESULTS: The method consistently identified those registrars who were competent and those who were not yet competent in consulting skills. It proved acceptable to candidate doctors and has fewer resource requirements for both examiners and candidates than other consulting skills assessment methods. CONCLUSION: The method developed in Leicester and successfully transferred to Yorkshire is feasible on a large scale, and offers an acceptable alternative to other consulting skills assessment methods. In this study it consistently identified competent from incompetent candidate doctors. PMID:9692278

  16. General order parameter based correlation analysis of protein backbone motions between experimental NMR relaxation measurements and molecular dynamics simulations

    SciTech Connect

    Liu, Qing; Shi, Chaowei; Yu, Lu; Zhang, Longhua; Xiong, Ying; Tian, Changlin

    2015-02-13

    Internal backbone dynamic motions are essential for different protein functions and occur on a wide range of time scales, from femtoseconds to seconds. Molecular dynamics (MD) simulations and nuclear magnetic resonance (NMR) spin relaxation measurements are valuable tools to gain access to fast (nanosecond) internal motions. However, there exist few reports on correlation analysis between MD and NMR relaxation data. Here, backbone relaxation measurements of ¹⁵N-labeled SH3 (Src homology 3) domain proteins in aqueous buffer were used to generate general order parameters (S²) using a model-free approach. Simultaneously, 80 ns MD simulations of SH3 domain proteins in a defined hydrated box at neutral pH were conducted and the general order parameters (S²) were derived from the MD trajectory. Correlation analysis using the Gromos force field indicated that S² values from NMR relaxation measurements and MD simulations were significantly different. MD simulations were performed on models with different charge states for three histidine residues, and with different water models, which were the SPC (simple point charge) water model and the SPC/E (extended simple point charge) water model. S² parameters from MD simulations with charges for all three histidines and with the SPC/E water model correlated well with S² calculated from the experimental NMR relaxation measurements, in a site-specific manner. - Highlights: • Correlation analysis between NMR relaxation measurements and MD simulations. • General order parameter (S²) as common reference between the two methods. • Different protein dynamics with different histidine charge states at neutral pH. • Different protein dynamics with different water models.
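
    For reference, the general order parameter extracted from an MD trajectory for comparison with the model-free analysis is commonly computed from time averages of the normalized bond-vector components (generic notation; the exact post-processing used in the paper may differ):

        S^2 = \frac{3}{2}\left( \langle x^2 \rangle^2 + \langle y^2 \rangle^2
              + \langle z^2 \rangle^2 + 2\langle xy \rangle^2
              + 2\langle xz \rangle^2 + 2\langle yz \rangle^2 \right) - \frac{1}{2},

    where x, y and z are the Cartesian components of the normalized N-H bond vector and ⟨·⟩ denotes a time average over the trajectory after removal of overall rotation.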

  17. Nonequilibrium and generalized-ensemble molecular dynamics simulations for amyloid fibril

    SciTech Connect

    Okumura, Hisashi

    2015-12-31

    Amyloids are insoluble and misfolded fibrous protein aggregates and associated with more than 20 serious human diseases. We perform all-atom molecular dynamics simulations of amyloid fibril assembly and disassembly.

  18. Nonequilibrium and generalized-ensemble molecular dynamics simulations for amyloid fibril

    NASA Astrophysics Data System (ADS)

    Okumura, Hisashi

    2015-12-01

    Amyloids are insoluble and misfolded fibrous protein aggregates and associated with more than 20 serious human diseases. We perform all-atom molecular dynamics simulations of amyloid fibril assembly and disassembly.

  19. A general kinetic-flow coupling model for FCC riser flow simulation.

    SciTech Connect

    Chang, S. L.

    1998-05-18

    A computational fluid dynamic (CFD) code has been developed for fluid catalytic cracking (FCC) riser flow simulation. Depending on the application of interest, a specific kinetic model is needed for the FCC flow simulation. This paper describes a method to determine a kinetic model based on limited pilot-scale test data. The kinetic model can then be used with the CFD code as a tool to investigate optimum operating condition ranges for a specific FCC unit.

  20. A Variable Resolution Stretched Grid General Circulation Model: Regional Climate Simulation

    NASA Technical Reports Server (NTRS)

    Fox-Rabinovitz, Michael S.; Takacs, Lawrence L.; Govindaraju, Ravi C.; Suarez, Max J.

    2000-01-01

    The development of and results obtained with a variable-resolution stretched-grid GCM for the regional climate simulation mode are presented. The global variable-resolution stretched grid used in the study has enhanced horizontal resolution over the U.S. as the area of interest. The stretched-grid approach is an ideal tool for representing regional-to-global scale interactions. It is an alternative to the widely used nested-grid approach introduced over a decade ago as a pioneering step in regional climate modeling. The major results of the study are presented for the successful stretched-grid GCM simulation of the anomalous climate event of the 1988 U.S. summer drought. The straightforward (with no updates) two-month simulation is performed with 60 km regional resolution. The major drought fields, patterns and characteristics, such as the time-averaged 500 hPa heights, precipitation, and the low-level jet over the drought area, appear to be close to the verifying analyses for the stretched-grid simulation. In other words, the stretched-grid GCM provides efficient downscaling over the area of interest with enhanced horizontal resolution. It is also shown that the GCM skill is sustained throughout the simulation when extended to one year. The stretched-grid GCM, developed and tested in a simulation mode, is a viable tool for regional and subregional climate studies and applications.

  1. A general spectral method for the numerical simulation of one-dimensional interacting fermions

    NASA Astrophysics Data System (ADS)

    Clason, Christian; von Winckel, Gregory

    2012-08-01

    This software implements a general framework for the direct numerical simulation of systems of interacting fermions in one spatial dimension. The approach is based on a specially adapted nodal spectral Galerkin method, where the basis functions are constructed to obey the antisymmetry relations of fermionic wave functions. An efficient Matlab program for the assembly of the stiffness and potential matrices is presented, which exploits the combinatorial structure of the sparsity pattern arising from this discretization to achieve optimal run-time complexity. This program allows the accurate discretization of systems with multiple fermions subject to arbitrary potentials, e.g., for verifying the accuracy of multi-particle approximations such as Hartree-Fock in the few-particle limit. It can be used for eigenvalue computations or numerical solutions of the time-dependent Schrödinger equation. The new version includes a Python implementation of the presented approach. New version program summaryProgram title: assembleFermiMatrix Catalogue identifier: AEKO_v1_1 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEKO_v1_1.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 332 No. of bytes in distributed program, including test data, etc.: 5418 Distribution format: tar.gz Programming language: MATLAB/GNU Octave, Python Computer: Any architecture supported by MATLAB, GNU Octave or Python Operating system: Any supported by MATLAB, GNU Octave or Python RAM: Depends on the data Classification: 4.3, 2.2. External routines: Python 2.7+, NumPy 1.3+, SciPy 0.10+ Catalogue identifier of previous version: AEKO_v1_0 Journal reference of previous version: Comput. Phys. Commun. 183 (2012) 405 Does the new version supersede the previous version?: Yes Nature of problem: The direct numerical
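
    To make the antisymmetry constraint concrete, the short NumPy sketch below builds a two-fermion wave function as a Slater determinant of two illustrative one-dimensional orbitals and checks that Psi(x2, x1) = -Psi(x1, x2); it only illustrates the constraint, not the actual spectral Galerkin assembly performed by assembleFermiMatrix.

        # Two-fermion antisymmetric wave function from two 1D orbitals.
        import numpy as np

        x = np.linspace(-1.0, 1.0, 64)
        phi0 = np.exp(-4.0 * x**2)             # illustrative (unnormalized) orbitals
        phi1 = x * np.exp(-4.0 * x**2)

        # Psi(x1, x2) = [phi0(x1) phi1(x2) - phi1(x1) phi0(x2)] / sqrt(2)
        psi = (np.outer(phi0, phi1) - np.outer(phi1, phi0)) / np.sqrt(2.0)

        # Antisymmetry: Psi(x2, x1) = -Psi(x1, x2), so the diagonal vanishes.
        assert np.allclose(psi, -psi.T)
        assert np.allclose(np.diag(psi), 0.0)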

  2. NeuroFlow: A General Purpose Spiking Neural Network Simulation Platform using Customizable Processors

    PubMed Central

    Cheung, Kit; Schultz, Simon R.; Luk, Wayne

    2016-01-01

    NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation to deliver optimized performance, such as the degree of parallelism to employ. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current or conductance based neuronal models such as integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to ~600,000 neurons and can achieve a real-time performance of 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times the speed of an 8-core processor, or 2.83 times the speed of GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation. PMID:26834542
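
    As a software reference for one of the neuron models mentioned above, the Python sketch below implements the standard Izhikevich update with regular-spiking parameters (Izhikevich, 2003); it illustrates the model only and is not NeuroFlow's FPGA implementation.

        # Izhikevich neuron update: v' = 0.04 v^2 + 5 v + 140 - u + I, u' = a(b v - u);
        # on a spike (v >= 30 mV) reset v -> c and u -> u + d.
        def izhikevich_step(v, u, i_syn, a=0.02, b=0.2, c=-65.0, d=8.0, dt=1.0):
            """Advance membrane potential v (mV) and recovery variable u by dt (ms)."""
            fired = False
            for _ in range(2):                   # two half-steps for numerical stability
                v += 0.5 * dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i_syn)
            u += dt * a * (b * v - u)
            if v >= 30.0:
                v, u, fired = c, u + d, True
            return v, u, fired

        v, u, spikes = -65.0, -13.0, []
        for t in range(1000):                    # 1 s of simulated time, constant drive
            v, u, fired = izhikevich_step(v, u, i_syn=10.0)
            if fired:
                spikes.append(t)
        print(len(spikes), "spikes")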

  3. Ensemble climate simulations using a fully coupled ocean-troposphere-stratosphere general circulation model.

    PubMed

    Huebener, H; Cubasch, U; Langematz, U; Spangehl, T; Niehörster, F; Fast, I; Kunze, M

    2007-08-15

    Long-term transient simulations are carried out in an initial condition ensemble mode using a global coupled climate model which includes comprehensive ocean and stratosphere components. This model, which is run for the years 1860-2100, allows the investigation of the troposphere-stratosphere interactions and the importance of representing the middle atmosphere in climate-change simulations. The model simulates the present-day climate (1961-2000) realistically in the troposphere, stratosphere and ocean. The enhanced stratospheric resolution leads to the simulation of sudden stratospheric warmings; however, their frequency is underestimated by a factor of 2 with respect to observations. In projections of the future climate using the Intergovernmental Panel on Climate Change special report on emissions scenarios A2, an increased tropospheric wave forcing counteracts the radiative cooling in the middle atmosphere caused by the enhanced greenhouse gas concentration. This leads to a more dynamically active, warmer stratosphere compared with present-day simulations, and to the doubling of the number of stratospheric warmings. The associated changes in the mean zonal wind patterns lead to a southward displacement of the Northern Hemisphere storm track in the climate-change signal. PMID:17569652

  4. NeuroFlow: A General Purpose Spiking Neural Network Simulation Platform using Customizable Processors.

    PubMed

    Cheung, Kit; Schultz, Simon R; Luk, Wayne

    2015-01-01

    NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation to deliver optimized performance, such as the degree of parallelism to employ. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current or conductance based neuronal models such as integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to ~600,000 neurons and can achieve a real-time performance of 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times the speed of an 8-core processor, or 2.83 times the speed of GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation. PMID:26834542

  5. The simulated features of heliospheric cosmic-ray modulation with a time-dependent drift model. III - General energy dependence

    NASA Technical Reports Server (NTRS)

    Potgieter, M. S.; Le Roux, J. A.

    1992-01-01

    The time-dependent cosmic-ray transport equation is solved numerically in an axially symmetric heliosphere. Gradient and curvature drifts are incorporated, together with an emulated wavy neutral sheet. This model is used to simulate heliospheric cosmic-ray modulation for the period 1985-1989 during which drifts are considered to be important. The general energy dependence of the modulation of Galactic protons is studied as predicted by the model for the energy range 1 MeV to 10 GeV. The corresponding instantaneous radial and latitudinal gradients are calculated, and it is found that, whereas the latitudinal gradients follow the trends in the waviness of the neutral sheet to a large extent for all energies, the radial gradients below about 200 MeV deviate from this general pattern. In particular, these gradients increase when the waviness decreases for the simulated period 1985-1987.3, after which they again follow the neutral sheet by increasing rapidly.

  6. A study of nucleation and growth of thin films by means of computer simulation: General features

    NASA Technical Reports Server (NTRS)

    Salik, J.

    1984-01-01

    Some of the processes involved in the nucleation and growth of thin films were simulated by means of a digital computer. The simulation results were used to study the nucleation and growth kinetics resulting from the various processes. Kinetic results obtained for impingement, surface migration, impingement combined with surface migration, and with reevaporation are presented. A substantial fraction of the clusters may form directly upon impingement. Surface migration results in a decrease in cluster density, and reevaporation of atoms from the surface causes a further reduction in cluster density.

  7. Zonal wavenumber three traveling waves in the northern hemisphere of Mars simulated with a general circulation model

    NASA Astrophysics Data System (ADS)

    Wang, Huiqun; Richardson, Mark I.; Toigo, Anthony D.; Newman, Claire E.

    2013-04-01

    Observations suggest a strong correlation between curvilinear shaped traveling dust storms (observed in wide angle camera images) and eastward traveling zonal wave number m = 3 waves (observed in thermal data) in the northern mid and high latitudes during the fall and winter. Using the MarsWRF General Circulation Model, we have investigated the seasonality, structure and dynamics of the simulated m = 3 traveling waves and tested the hypothesis that traveling dust storms may enhance m = 3 traveling waves under certain conditions. Our standard simulation using a prescribed "MGS dust scenario" can capture the observed major wave modes and strong near surface temperature variations before and after the northern winter solstice. The same seasonal pattern is also shown by the simulated near surface meridional wind, but not by the normalized surface pressure. The simulated eastward traveling 1.4 < T < 10 sol m = 3 waves are confined near the surface in terms of the temperature perturbation, EP flux and eddy available potential energy, and they extend higher in terms of the eddy winds and eddy kinetic energy. The signature of the simulated m = 3 traveling waves is stronger in the near surface meridional wind than in the near surface temperature field. Compared with the standard simulation, our test simulations show that the prescribed m = 3 traveling dust blobs can enhance the simulated m = 3 traveling waves during the pre- and post-solstice periods when traveling dust storms are frequently observed in images, and that they have negligible effect during the northern winter solstice period when traveling dust storms are absent. The enhancement is even greater in our simulation when dust is concentrated closer to the surface. Our simulations also suggest that dust within the 45-75°N band is most effective at enhancing the simulated m = 3 traveling waves. There are multiple factors influencing the strength of the simulated m = 3 traveling waves. Among those, our study

  8. Generalized non-equilibrium vertex correction method in coherent medium theory for quantum transport simulation of disordered nanoelectronics

    NASA Astrophysics Data System (ADS)

    Yan, Jiawei; Ke, Youqi

    In realistic nanoelectronics, disordered impurities/defects are inevitable and play important roles in electron transport. However, due to the lack of an effective quantum transport method, the important effects of disorder remain poorly understood. Here, we report a generalized non-equilibrium vertex correction (NVC) method with the coherent potential approximation to treat disorder effects in quantum transport simulation. With this generalized NVC method, any averaged product of two single-particle Green's functions can be obtained by solving a set of simple linear equations. As a result, the averaged non-equilibrium density matrix and various important transport properties, including the averaged current, disorder-induced current fluctuations and the averaged shot noise, can all be efficiently computed in a unified scheme. Moreover, a generalized form of the conditionally averaged non-equilibrium Green's function is derived to work with density functional theory, enabling first-principles simulation. We prove that the non-equilibrium coherent potential equals the non-equilibrium vertex correction. Our approach provides a unified, efficient and self-consistent method for simulating non-equilibrium quantum transport through disordered nanoelectronics. Shanghaitech start-up fund.
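
    In generic NEGF notation, the role of the vertex correction described above can be summarized as follows (an illustrative sketch of the standard relations, not necessarily the authors' exact formulation):

        \bar{T}(E) = \mathrm{Tr}\!\left[\,\Gamma_L\,\overline{G^r\,\Gamma_R\,G^a}\,\right],
        \qquad
        \overline{G^r\,\Gamma_R\,G^a} = \bar{G}^r\,\Gamma_R\,\bar{G}^a + \Omega_{\mathrm{NVC}},

    where the overbar denotes the configurational average over disorder and Ω_NVC is the vertex correction that restores the correlated part of the two-Green-function average, which a naive product of separately averaged Green's functions would miss.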

  9. The global distribution of natural tritium in precipitation simulated with an Atmospheric General Circulation Model and comparison with observations

    NASA Astrophysics Data System (ADS)

    Cauquoin, A.; Jean-Baptiste, P.; Risi, C.; Fourré, É.; Stenni, B.; Landais, A.

    2015-10-01

    The description of the hydrological cycle in Atmospheric General Circulation Models (GCMs) can be validated using water isotopes as tracers. Many GCMs now simulate the movement of the stable isotopes of water, but here we present the first GCM simulations modelling the content of natural tritium in water. These simulations were obtained using a version of the LMDZ General Circulation Model enhanced by water isotopes diagnostics, LMDZ-iso. To avoid tritium generated by nuclear bomb testing, the simulations have been evaluated against a compilation of published tritium datasets dating from before 1950, or measured recently. LMDZ-iso correctly captures the observed tritium enrichment in precipitation as oceanic air moves inland (the so-called continental effect) and the observed north-south variations due to the latitudinal dependency of the cosmogenic tritium production rate. The seasonal variability, linked to the stratospheric intrusions of air masses with higher tritium content into the troposphere, is correctly reproduced for Antarctica with a maximum in winter. LMDZ-iso reproduces the spring maximum of tritium over Europe, but underestimates it and produces a peak in winter that is not apparent in the data. This implementation of tritium in a GCM promises to provide a better constraint on: (1) the intrusions and transport of air masses from the stratosphere, and (2) the dynamics of the modelled water cycle. The method complements the existing approach of using stable water isotopes.

  10. Accurate and general treatment of electrostatic interaction in Hamiltonian adaptive resolution simulations

    NASA Astrophysics Data System (ADS)

    Heidari, M.; Cortes-Huerto, R.; Donadio, D.; Potestio, R.

    2016-07-01

    In adaptive resolution simulations the same system is concurrently modeled with different resolution in different subdomains of the simulation box, thereby enabling an accurate description in a small but relevant region, while the rest is treated with a computationally parsimonious model. In this framework, electrostatic interaction, whose accurate treatment is a crucial aspect in the realistic modeling of soft matter and biological systems, represents a particularly acute problem due to the intrinsic long-range nature of Coulomb potential. In the present work we propose and validate the usage of a short-range modification of Coulomb potential, the Damped shifted force (DSF) model, in the context of the Hamiltonian adaptive resolution simulation (H-AdResS) scheme. This approach, which is here validated on bulk water, ensures a reliable reproduction of the structural and dynamical properties of the liquid, and enables a seamless embedding in the H-AdResS framework. The resulting dual-resolution setup is implemented in the LAMMPS simulation package, and its customized version employed in the present work is made publicly available.

  11. Generalized methodology for modeling and simulating optical interconnection networks using diffraction analysis

    NASA Astrophysics Data System (ADS)

    Louri, Ahmed; Major, Michael C.

    1995-07-01

    Research in the field of free-space optical interconnection networks has reached a point where simulators and other design tools are desirable for reducing development costs and for improving design time. Previously proposed methodologies have only been applicable to simple systems. Our goal was to develop a simulation methodology capable of evaluating the performance characteristics for a variety of free-space networks under a range of configurations and operating states. The proposed methodology operates by first establishing the optical signal powers at various locations in the network. These powers are developed through the simulation by diffraction analysis of the light propagation through the network. After this evaluation, characteristics such as bit-error rate, signal-to-noise ratio, and system bandwidth are calculated. Further, the simultaneous evaluation of this process for a set of component misalignments provides a measure of the alignment tolerance of a design. We discuss this simulation process in detail as well as provide models for different optical interconnection network components.

  12. The Simulation of Stationary and Transient Geopotential-Height Eddies in January and July with a Spectral General Circulation Model.

    NASA Astrophysics Data System (ADS)

    Malone, Robert C.; Pitcher, Eric J.; Blackmon, Maurice L.; Puri, Kamal; Bourke, William

    1984-04-01

    We examine the characteristics of stationary and transient eddies in the geopotential-height field as simulated by a spectral general circulation model. The model possesses a realistic distribution of continents and oceans and realistic, but smoothed, topography. Two simulations with perpetual January and July forcing by climatological sea surface temperatures, sea ice, and insolation were extended to 1200 days, of which the final 600 days were used for the results in this study. We find that the stationary waves are well simulated in both seasons in the Northern Hemisphere, where strong forcing by orography and land-sea thermal contrasts exists. However, in the Southern Hemisphere, where no continents are present in midlatitudes, the stationary waves have smaller amplitudes than observed in both seasons. In both hemispheres, the transient eddies are well simulated in the winter season but are too weak in the summer season. The model fails to generate a sufficiently intense summertime midlatitude jet in either hemisphere, and this results in a low level of transient activity. The variance in the tropical troposphere is very well simulated. We examine the geographical distribution and vertical structure of the transient eddies. Fourier analysis in zonal wavenumber and temporal filtering are used to display the wavelength and frequency characteristics of the eddies.

  13. Generalized Wind Turbine Actuator Disk Parameterization in the Weather Research and Forecasting (WRF) Model for Real-World Simulations

    NASA Astrophysics Data System (ADS)

    Marjanovic, N.; Mirocha, J. D.; Chow, F. K.

    2013-12-01

    In this work, we examine the performance of a generalized actuator disk (GAD) model embedded within the Weather Research and Forecasting (WRF) atmospheric model to study wake effects on successive rows of turbines at a North American wind farm. These wake effects are of interest as they can drastically reduce down-wind energy extraction and increase turbulence intensity. The GAD, which is designed for turbulence-resolving simulations, is used within downscaled large-eddy simulations (LES) forced with mesoscale simulations and WRF's grid nesting capability. The GAD represents the effects of thrust and torque created by a wind turbine on the atmosphere within a disk representing the rotor swept area. The lift and drag forces acting on the turbine blades are parameterized using blade-element theory and the aerodynamic properties of the blades. Our implementation permits simulation of turbine wake effects and turbine/airflow interactions within a realistic atmospheric boundary layer flow field, including resolved turbulence, time-evolving mesoscale forcing, and real topography. The GAD includes real-time yaw and pitch control to respond realistically to changing flow conditions. Simulation results are compared to SODAR data from operating wind turbines and an already existing WRF mesoscale turbine drag parameterization to validate the GAD parameterization.

  14. Earth radiation budget and cloudiness simulations with a general circulation model

    NASA Technical Reports Server (NTRS)

    HARSHVARDHAN; Randall, David A.; Corsetti, Thomas G.; Dazlich, Donald A.

    1989-01-01

    A GCM with new parameterizations of solar and terrestrial radiation, parameterized cloud optical properties, and a simple representation of the cloud liquid water feedback is used with several observational data sets to analyze the effects of cloudiness on the earth's radiation budget. The January and July results from the model are in reasonable agreement with data from Nimbus-7. It is found that the model overpredicts subtropical and midlatitude cloudiness. The simulated atmospheric cloud radiative forcing is examined. The clear-sky radiation fields obtained by two methods of Cess and Potter (1987) are compared. Also, a numerical experiment was performed to determine the effects of the water vapor continuum on the model results.

  15. Developing extensible lattice-Boltzmann simulators for general-purpose graphics-processing units

    SciTech Connect

    Walsh, S C; Saar, M O

    2011-12-21

    Lattice-Boltzmann methods are versatile numerical modeling techniques capable of reproducing a wide variety of fluid-mechanical behavior. These methods are well suited to parallel implementation, particularly on the single-instruction multiple data (SIMD) parallel processing environments found in computer graphics processing units (GPUs). Although more recent programming tools dramatically improve the ease with which GPU programs can be written, the programming environment still lacks the flexibility available to more traditional CPU programs. In particular, it may be difficult to develop modular and extensible programs that require variable on-device functionality with current GPU architectures. This paper describes a process of automatic code generation that overcomes these difficulties for lattice-Boltzmann simulations. It details the development of GPU-based modules for an extensible lattice-Boltzmann simulation package - LBHydra. The performance of the automatically generated code is compared to that of equivalent purpose-written codes for single-phase, multiple-phase, and multiple-component flows. The flexibility of the new method is demonstrated by simulating a rising, dissolving droplet in a porous medium with user-generated lattice-Boltzmann models and subroutines.

  16. General Relativistic Magnetohydrodynamic Simulations of Jet Formation with a Thin Keplerian Disk

    NASA Technical Reports Server (NTRS)

    Mizuno, Yosuke; Nishikawa, Ken-Ichi; Koide, Shinji; Hardee, Philip; Gerald, J. Fishman

    2006-01-01

    We have performed several simulations of black hole systems (non-rotating, black hole spin parameter a = 0.0 and rapidly rotating, a = 0.95) with a geometrically thin Keplerian disk using the newly developed RAISHIN code. The simulation results show the formation of jets driven by the Lorentz force and the gas pressure gradient. The jets have mildly relativistic speeds (≥ 0.4 c). Matter is continuously supplied from the accretion disk and the jet propagates outward until the applicable terminal simulation time (non-rotating: t/τ_S = 275; rotating: t/τ_S = 200, where τ_S ≡ r_S/c). It appears that a rotating black hole creates an additional, faster, and more collimated inner outflow (≥ 0.5 c) formed and accelerated by the twisted magnetic field resulting from frame-dragging in the black hole ergosphere. This new result indicates that the jet kinematic structure depends on black hole rotation.

  17. Generalized Simulation Model for a Switched-Mode Power Supply Design Course Using MATLAB/SIMULINK

    ERIC Educational Resources Information Center

    Liao, Wei-Hsin; Wang, Shun-Chung; Liu, Yi-Hua

    2012-01-01

    Switched-mode power supplies (SMPS) are becoming an essential part of many electronic systems as the industry drives toward miniaturization and energy efficiency. However, practical SMPS design courses are seldom offered. In this paper, a generalized MATLAB/SIMULINK modeling technique is first presented. A proposed practical SMPS design course at…

  18. Simulation of the Low-Level-Jet by general circulation models

    SciTech Connect

    Ghan, S.J.

    1996-04-01

    To what degree are the low-level jet climatology and its impact on clouds and precipitation being captured by current general circulation models? It is hypothesized that a need for a parameterization exists. This paper describes this parameterization need.

  19. The Simulation of Coriolis Meter Response to Pulsating Flow Using a General Purpose F.E. Code

    NASA Astrophysics Data System (ADS)

    Belhadj, A.; Cheesewright, R.; Clark, C.

    2000-07-01

    The publication of a theoretical analysis of the response of a simple straight-tube Coriolis meter to flow pulsations raised the question of the extent to which the results of that analysis are generic over the wide range of geometric configurations used in commercially available meters. A procedure for using a general purpose finite element (FE) code to investigate this question is presented. The dual time scales, which are an essential feature of pulsating flow through a Coriolis meter, are used to minimize the amount of computation required to simulate the meter response. The FE model is developed in a full 3-D form with shear deflection and axial forces, and the computation of the simulated response for the geometrically most complex meter currently available shows that this level of representation is necessary to reveal the full details of the response. The response derived from the FE simulation for straight-tube meters is compared with the published theoretical response and with experimental data. Over a range of different meters, the characteristics of the sensor signals in the presence of flow pulsations are shown to be generally similar. In all cases, the simulated sensor signals contain components corresponding to beating between the pulsation frequency and the meter drive frequency, in addition to the main component at the drive frequency. Spectra are computed from the simulated meter responses and these are used to show that the relationship between the mass flow rate and the phase difference between the components of the sensor signals at the drive frequency is not significantly affected by the pulsations. Thus, the work suggests that the reports of changes in meter calibration due to certain frequencies of flow pulsation represent errors in signal processing rather than fundamental changes in the meter characteristics.
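
    As a rough illustration of the kind of spectral post-processing described above, the sketch below (Python/NumPy) estimates the phase difference between two simulated sensor signals at the meter drive frequency from the nearest FFT bin; the function name, the single-bin estimate and the signal layout are illustrative assumptions rather than the signal processing actually used in the paper.

      import numpy as np

      def phase_difference_at_drive(sig_a, sig_b, dt, f_drive):
          """Phase difference (radians) between two sensor signals at the
          meter drive frequency, estimated from the nearest FFT bin."""
          freqs = np.fft.rfftfreq(len(sig_a), d=dt)
          k = np.argmin(np.abs(freqs - f_drive))   # bin closest to the drive frequency
          a_k = np.fft.rfft(sig_a)[k]
          b_k = np.fft.rfft(sig_b)[k]
          return np.angle(a_k * np.conj(b_k))      # phase of sensor A relative to sensor B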

  20. The atmospheric chemistry general circulation model ECHAM5/MESSy1: consistent simulation of ozone from the surface to the mesosphere

    NASA Astrophysics Data System (ADS)

    Jöckel, P.; Tost, H.; Pozzer, A.; Brühl, C.; Buchholz, J.; Ganzeveld, L.; Hoor, P.; Kerkweg, A.; Lawrence, M. G.; Sander, R.; Steil, B.; Stiller, G.; Tanarhte, M.; Taraborrelli, D.; van Aardenne, J.; Lelieveld, J.

    2006-11-01

    The new Modular Earth Submodel System (MESSy) describes atmospheric chemistry and meteorological processes in a modular framework, following strict coding standards. It has been coupled to the ECHAM5 general circulation model, which has been slightly modified for this purpose. A 90-layer model setup up to 0.01 hPa was used at spectral T42 resolution to simulate the lower and middle atmosphere. With the high vertical resolution the model simulates the Quasi-Biennial Oscillation. The model meteorology has been tested to check the influence of the changes to ECHAM5 and the radiation interactions with the new representation of atmospheric composition. In the simulations presented here a Newtonian relaxation technique was applied in the tropospheric part of the domain to weakly nudge the model towards the analysed meteorology during the period 1998-2005. This allows an efficient and direct evaluation with satellite and in-situ data. It is shown that the tropospheric wave forcing of the stratosphere in the model suffices to reproduce major stratospheric warming events leading e.g. to the vortex split over Antarctica in 2002. Characteristic features such as dehydration and denitrification caused by the sedimentation of polar stratospheric cloud particles and ozone depletion during winter and spring are simulated well, although ozone loss in the lower polar stratosphere is slightly underestimated. The model realistically simulates stratosphere-troposphere exchange processes as indicated by comparisons with satellite and in situ measurements. The evaluation of tropospheric chemistry presented here focuses on the distributions of ozone, hydroxyl radicals, carbon monoxide and reactive nitrogen compounds. In spite of minor shortcomings, mostly related to the relatively coarse T42 resolution and the neglect of inter-annual changes in biomass burning emissions, the main characteristics of the trace gas distributions are generally reproduced well. The MESSy submodels and the

  1. Generalized Yukawa PPPM for molecular dynamics simulation of strongly coupled plasmas

    NASA Astrophysics Data System (ADS)

    Dharuman, Gautham; Stanton, Liam; Glosli, James; Verboncoeur, John; Christlieb, Andrew; Murillo, Michael

    2015-11-01

    The Particle-Particle-Particle-Mesh (PPPM) method is an efficient way of treating the Ewald sum for long range interactions in a periodic system. It makes use of the Fast-Fourier-Transform algorithm that scales as O(N log N). In this work we have applied the PPPM method to long range interactions in the weak-screening limit of the generalized Yukawa interaction to identify the range of screening over which PPPM is computationally more efficient than the minimum image method which is usually used for the well-known Yukawa interactions. The generalized Yukawa interaction is obtained by including arbitrary linear dielectric screening in the Yukawa model. In the PPPM method, the Fourier-space part of the Ewald sum is treated by assigning charges to a mesh and computing the potential using an optimized Green's function that minimizes the discretization errors introduced in the forces.
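
    A minimal sketch of the mesh (Fourier-space) step described above is given below in Python/NumPy. It uses an unoptimized nearest-grid-point charge assignment and the continuum screened-Coulomb Green's function 4π/(k² + κ²) instead of the optimized Green's function mentioned in the abstract; the function name and parameters are assumptions for illustration only.

      import numpy as np

      def yukawa_mesh_potential(positions, charges, box, n_mesh, kappa):
          """Fourier-space (mesh) part of a P3M-style solver for a screened
          (Yukawa) interaction in a cubic periodic box of side `box`."""
          h = box / n_mesh
          rho = np.zeros((n_mesh, n_mesh, n_mesh))
          for r, q in zip(positions, charges):
              i, j, k = np.floor(r / h).astype(int) % n_mesh
              rho[i, j, k] += q / h**3                     # nearest-grid-point assignment
          rho_k = np.fft.fftn(rho)
          kvec = 2.0 * np.pi * np.fft.fftfreq(n_mesh, d=h)
          kx, ky, kz = np.meshgrid(kvec, kvec, kvec, indexing="ij")
          green = 4.0 * np.pi / (kx**2 + ky**2 + kz**2 + kappa**2)
          return np.real(np.fft.ifftn(green * rho_k))      # long-range potential on the mesh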

  2. Numerical Implementation of a General Spinwave Model to Simulate Spinwave Excitations Found in Inelastic Neutron Scattering Data

    NASA Astrophysics Data System (ADS)

    Casavant, D.; Brodsky, I.; MacDougall, G. J.

    Many important details regarding magnetism in a material can be inferred from the magnetic excitation spectrum, and in this context, general calculations of the classical spinwave spectrum are often necessary. Beyond the simplest of lattices, however, it is difficult to numerically determine the full spinwave spectrum, due primarily to the non-linearity of the problem. In this talk, I will present MATLAB code, developed over the last few years at the University of Illinois, that calculates the dispersions of spinwave excitations out of an arbitrarily defined ordered spin system. The calculation assumes a standard Heisenberg exchange Hamiltonian with the incorporation of a single-ion anisotropy term which can be varied site-by-site, and it can also include an applied field. An overview of the calculation method and the structure of the code will be given, with emphasis on its general applicability. Extensions to the code enable the simulation of both single-crystal and powder-averaged neutron scattering intensity patterns. As a specific example, I will present the calculated neutron scattering spectrum for powders of CoV2O4, where good agreement between the simulated and experimental data suggests a self-consistent picture of the low-temperature magnetism.
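
    For reference, a Hamiltonian of the kind described above is commonly written (the conventions, sign choices and anisotropy-axis definition below are generic textbook assumptions, not necessarily those of the code presented in the talk) as

      H = \sum_{\langle i,j \rangle} J_{ij} \, \mathbf{S}_i \cdot \mathbf{S}_j
          + \sum_i D_i \, (\mathbf{S}_i \cdot \hat{\mathbf{n}}_i)^2
          - g \mu_B \, \mathbf{B} \cdot \sum_i \mathbf{S}_i ,

    where J_ij are the Heisenberg exchange constants, D_i is a site-dependent single-ion anisotropy along a local axis n_i, and the last term is the Zeeman coupling to an applied field B.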

  3. Thermal conductance of carbon nanotube contacts: Molecular dynamics simulations and general description of the contact conductance

    NASA Astrophysics Data System (ADS)

    Salaway, Richard N.; Zhigilei, Leonid V.

    2016-07-01

    The contact conductance of carbon nanotube (CNT) junctions is the key factor that controls the collective heat transfer through CNT networks or CNT-based materials. An improved understanding of the dependence of the intertube conductance on the contact structure and local environment is needed for predictive computational modeling or theoretical description of the effective thermal conductivity of CNT materials. To investigate the effect of local structure on the thermal conductance across CNT-CNT contact regions, nonequilibrium molecular dynamics (MD) simulations are performed for different intertube contact configurations (parallel fully or partially overlapping CNTs and CNTs crossing each other at different angles) and local structural environments characteristic of CNT network materials. The results of MD simulations predict a stronger CNT length dependence present over a broader range of lengths than has been previously reported and suggest that the effect of neighboring junctions on the conductance of CNT-CNT junctions is weak and only present when the CNTs that make up the junctions are within the range of direct van der Waals interaction with each other. A detailed analysis of the results obtained for a diverse range of intertube contact configurations reveals a nonlinear dependence of the conductance on the contact area (or number of interatomic intertube interactions) and suggests larger contributions to the conductance from areas of the contact where the density of interatomic intertube interactions is smaller. An empirical relation accounting for these observations and expressing the conductance of an arbitrary contact configuration through the total number of interatomic intertube interactions and the average number of interatomic intertube interactions per atom in the contact region is proposed. The empirical relation is found to provide a good quantitative description of the contact conductance for various CNT configurations investigated in the MD

  4. Nonhydrostatic Simulation of Frontogenesis in a Moist Atmosphere. Part I: General Description and Narrow Rainbands.

    NASA Astrophysics Data System (ADS)

    Bénard, P.; Redelsperger, J.-L.; Lafore, J.-P.

    1992-12-01

    A series of experiments using a two-dimensional, nonhydrostatic, numerical cloud model with fine horizontal and vertical resolution is performed with the Hoskins-Bretherton solution to the Eady problem as initial condition. Dry and wet simulations are presented with 5-, 10-, and 40-km horizontal resolutions and vertical resolution from 160 m at the ground to 330 m at the domain top. Sensitivity experiments on the initial Brunt-Väisälä frequency and vertical shear are also discussed. Two classes of narrow bands are identified: 1) A narrow cold-frontal rainband at the surface cold front, consisting of a line of shallow convection triggered by the frictionally induced instability in the boundary layer at the surface front. The associated precipitation is organized in a narrow line with a large rainfall rate. Latent heating due to condensation contributes in large part to the tilting of isentropes and to the increasing of the vertical jet strength. Sensitivity experiments show that both friction and condensation processes are important to simulate this jet. 2) Narrow free-atmosphere rainbands above the narrow cold-frontal band. A succession of updrafts and downdrafts are generated in the stable free atmosphere above the narrow cold-frontal rainband along the frontal surface. Weak precipitation is associated with these bands. The conditions for conditional convective instability and conditional symmetric instability are not met. Detailed analyses show that the linear theory of stationary and hydrostatic gravity waves gives a reasonable explanation of these bands. Simulations with different horizontal resolutions indicate that the horizontal wavelength is related to the width of the vertical jet. Two classes of wide rainbands are also obtained in particular regions of the frontal system. 1) Wide cold-frontal rainbands consisting of bands periodic in the frontal zone, with a 75-100 km scale and a lifetime of 6-9 hours. 2) A single warm-sector wide rainband, located in

  5. A general aviation simulator evaluation of a rate-enhanced instrument landing system display

    NASA Technical Reports Server (NTRS)

    Hinton, D. A.

    1981-01-01

    A piloted-simulation study was conducted to evaluate the effect on instrument landing system tracking performance of integrating localizer-error rate with raw localizer and glide-slope error. The display was named the pseudocommand tracking indicator (PCTI) because it provides an indication of the change of heading required to track the localizer center line. Eight instrument-rated pilots each flew five instrument approaches with the PCTI and five instrument approaches with a conventional course deviation indicator. The results show good overall pilot acceptance of the display, a significant improvement in localizer tracking error, and no significant changes in glide-slope tracking error or pilot workload.

  6. Robust streamline tracing for the simulation of porous media flow on general triangular and quadrilateral grids

    NASA Astrophysics Data System (ADS)

    Matringe, Sébastien F.; Juanes, Ruben; Tchelepi, Hamdi A.

    2006-12-01

    Streamline methods for subsurface-flow simulation have received renewed attention as fast alternatives to traditional finite volume or finite element methods. Key aspects of streamline simulation are the accurate tracing of streamlines and the computation of travel time along individual streamlines. In this paper, we propose a new streamline tracing framework that enables the extension of streamline methods to unstructured grids composed of triangular or quadrilateral elements and populated with heterogeneous full-tensor coefficients. The proposed method is based on the mathematical framework of mixed finite element methods, which provides approximations of the velocity field that are especially suitable for streamline tracing. We identify and implement two classes of velocity spaces: the lowest-order Raviart-Thomas space (low-order tracing) and the Brezzi-Douglas-Marini space of order one (high-order tracing), both on triangles and quadrilaterals. We discuss the implementation of the streamline tracing method in detail, and we investigate the performance of the proposed tracing strategy by means of carefully designed test cases. We conclude that, for the same computational cost, high-order tracing is more accurate (smaller error in the time-of-flight) and robust (less sensitive to grid distortion) than low-order tracing.

  7. A general approach to develop reduced order models for simulation of solid oxide fuel cell stacks

    SciTech Connect

    Pan, Wenxiao; Bao, Jie; Lo, Chaomei; Lai, Canhai; Agarwal, Khushbu; Koeppel, Brian J.; Khaleel, Mohammad A.

    2013-06-15

    A reduced order modeling approach based on response surface techniques was developed for solid oxide fuel cell stacks. This approach creates a numerical model that can quickly compute desired performance variables of interest for a stack based on its input parameter set. The approach carefully samples the multidimensional design space based on the input parameter ranges, evaluates a detailed stack model at each of the sampled points, and performs regression for selected performance variables of interest to determine the response surfaces. After error analysis to ensure that sufficient accuracy is established for the response surfaces, they are then implemented in a calculator module for system-level studies. The benefit of this modeling approach is that it is sufficiently fast for integration with system modeling software and simulation of fuel cell-based power systems while still providing high fidelity information about the internal distributions of key variables. This paper describes the sampling, regression, sensitivity, error, and principal component analyses to identify the applicable methods for simulating a planar fuel cell stack.
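
    A bare-bones sketch of the sample-evaluate-regress workflow described above is given below in Python/NumPy; the random sampling plan, the pure-quadratic feature set and the placeholder stack_model function are assumptions for illustration, not the toolchain used in the paper.

      import numpy as np

      def fit_response_surface(stack_model, lower, upper, n_samples=200, seed=0):
          """Sample the input space, run the detailed model at each point, and
          fit a quadratic response surface by linear least squares."""
          rng = np.random.default_rng(seed)
          lower, upper = np.asarray(lower, float), np.asarray(upper, float)
          X = lower + rng.random((n_samples, lower.size)) * (upper - lower)
          y = np.array([stack_model(x) for x in X])      # expensive detailed-model runs

          def features(Z):
              Z = np.atleast_2d(Z)
              return np.hstack([np.ones((len(Z), 1)), Z, Z**2])

          coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)
          return lambda x: features(x) @ coef            # fast surrogate evaluation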

  8. SciDAC - Center for Simulation of Wave Interactions with MHD -- General Atomics Support of ORNL Collaboration

    SciTech Connect

    Abla, G

    2012-11-09

    The Center for Simulation of Wave Interactions with Magnetohydrodynamics (SWIM) project is dedicated to conducting research on integrated multi-physics simulations. The Integrated Plasma Simulator (IPS) is a framework that was created by the SWIM team. It provides an integration infrastructure for loosely coupled component-based simulations by facilitating services for code execution coordination, computational resource management, data management, and inter-component communication. The IPS framework features improved resource utilization, application-level fault tolerance, and support for a concurrent multi-tasking execution model. The General Atomics (GA) team worked closely with other team members on this contract, and conducted research in the areas of computational code monitoring, meta-data management, interactive visualization, and user interfaces. The original website to monitor SWIM activity was developed at the beginning of the project. Due to the amended requirements, the software was redesigned and a revision of the website was deployed into production in April of 2010. Throughout the duration of this project, the SWIM Monitoring Portal (http://swim.gat.com:8080/) has been a critical production tool for supporting the project's physics goals.

  9. Structuring energy supply and demand networks in a general equilibrium model to simulate global warming control strategies

    SciTech Connect

    Hamilton, S.; Veselka, T.D.; Cirillo, R.R.

    1991-01-01

    Global warming control strategies which mandate stringent caps on emissions of greenhouse forcing gases can substantially alter a country's demand, production, and imports of energy products. Although there is a large degree of uncertainty when attempting to estimate the potential impact of these strategies, insights into the problem can be acquired through computer model simulations. This paper presents one method of structuring a general equilibrium model, the ENergy and Power Evaluation Program/Global Climate Change (ENPEP/GCC), to simulate changes in a country's energy supply and demand balance in response to global warming control strategies. The equilibrium model presented in this study is based on the principle of decomposition, whereby a large complex problem is divided into a number of smaller submodules. Submodules simulate energy activities and conversion processes such as electricity production. These submodules are linked together to form an energy supply and demand network. Linkages identify energy and fuel flows among various activities. Since global warming control strategies can have wide reaching effects, a complex network was constructed. The network represents all energy production, conversion, transportation, distribution, and utilization activities. The structure of the network depicts interdependencies within and across economic sectors and was constructed such that energy prices and demand responses can be simulated. Global warming control alternatives represented in the network include: (1) conservation measures through increased efficiency; and (2) substitution of fuels that have high greenhouse gas emission rates with fuels that have lower emission rates. 6 refs., 4 figs., 4 tabs.

  10. A Generalized Fast Frequency Sweep Algorithm for Coupled Circuit-EM Simulations

    SciTech Connect

    Ouyang, G; Jandhyala, V; Champagne, N; Sharpe, R; Fasenfest, B J; Rockway, J D

    2004-12-14

    An Asymptotic Wave Expansion (AWE) technique is implemented into the EIGER computational electromagnetics code. The AWE fast frequency sweep is formed by separating the components of the integral equations by frequency dependence, then using this information to find a rational function approximation of the results. The standard AWE method is generalized to work for several integral equations, including the EFIE for conductors and the PMCHWT for dielectrics. The method is also expanded to work for two types of coupled circuit-EM problems as well as lumped load circuit elements. After a simple bisecting adaptive sweep algorithm is developed, dramatic speed improvements are seen for several example problems.
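
    The abstract does not give the algorithmic details, but the idea of approximating a sampled frequency response by a rational function can be illustrated with the simple linearized least-squares fit below (Python/NumPy). This is only a related illustration, not the moment-matching AWE procedure itself, and all names and degrees are assumptions.

      import numpy as np

      def rational_fit(s, H, num_deg=3, den_deg=3):
          """Fit H(s) ~ p(s)/q(s) with q(0) = 1 by linearized least squares,
          given complex frequency points s and sampled responses H."""
          s, H = np.asarray(s, complex), np.asarray(H, complex)
          P = np.vander(s, num_deg + 1, increasing=True)          # 1, s, s^2, ...
          Q = np.vander(s, den_deg + 1, increasing=True)[:, 1:]   # s, s^2, ... (q0 = 1)
          coef, *_ = np.linalg.lstsq(np.hstack([P, -H[:, None] * Q]), H, rcond=None)
          a = coef[:num_deg + 1]
          b = np.concatenate(([1.0], coef[num_deg + 1:]))
          return lambda z: np.polyval(a[::-1], z) / np.polyval(b[::-1], z)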

  11. TOUGH2: A general-purpose numerical simulator for multiphase fluid and heat flow

    SciTech Connect

    Pruess, K.

    1991-05-01

    TOUGH2 is a numerical simulation program for nonisothermal flows of multicomponent, multiphase fluids in porous and fractured media. The chief applications for which TOUGH2 is designed are in geothermal reservoir engineering, nuclear waste disposal, and unsaturated zone hydrology. A successor to the TOUGH program, TOUGH2 offers added capabilities and user features, including the flexibility to handle different fluid mixtures, facilities for processing of geometric data (computational grids), and an internal version control system to ensure referenceability of code applications. This report includes a detailed description of governing equations, program architecture, and user features. Enhancements in data inputs relative to TOUGH are described, and a number of sample problems are given to illustrate code applications. 46 refs., 29 figs., 12 tabs.

  12. Generalized aerodynamic coefficient table storage, checkout and interpolation for aircraft simulation

    NASA Technical Reports Server (NTRS)

    Neuman, F.; Warner, N.

    1973-01-01

    The set of programs described has been used for rapidly introducing, checking out and very efficiently using aerodynamic tables in complex aircraft simulations on the IBM 360. The preprocessor program reads in tables with different names and dimensions and stores them on disc storage according to the specified dimensions. The tables are read in from IBM cards in a format that is convenient for reducing the data from the original graphs. During table processing, new auxiliary tables are generated which are required for table cataloging and for efficient interpolation. In addition, DIMENSION statements for the tables as well as READ statements are punched so that they may be used in other programs for readout of the data from disc without the chance of programming errors. A quick data checking graphical output for all tables is provided in a separate program.
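
    The record predates modern scripting languages, but the table lookup and interpolation it describes can be illustrated with a short bilinear-interpolation sketch in Python/NumPy (the two-variable table, its breakpoint grids and the function name are assumptions for illustration):

      import numpy as np

      def bilinear_lookup(alpha_grid, mach_grid, table, alpha, mach):
          """Bilinear interpolation in an aerodynamic coefficient table
          indexed by angle of attack and Mach number."""
          i = np.clip(np.searchsorted(alpha_grid, alpha) - 1, 0, len(alpha_grid) - 2)
          j = np.clip(np.searchsorted(mach_grid, mach) - 1, 0, len(mach_grid) - 2)
          ta = (alpha - alpha_grid[i]) / (alpha_grid[i + 1] - alpha_grid[i])
          tm = (mach - mach_grid[j]) / (mach_grid[j + 1] - mach_grid[j])
          return ((1 - ta) * (1 - tm) * table[i, j] + ta * (1 - tm) * table[i + 1, j]
                  + (1 - ta) * tm * table[i, j + 1] + ta * tm * table[i + 1, j + 1])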

  13. Generalized finite element dynamic modelling and simulation for flexible robot manipulators

    NASA Astrophysics Data System (ADS)

    Zhou, Feng

    1993-01-01

    The finite element approach is used to model multi-link flexible robotic manipulators. The kinematic character of flexible manipulators is analyzed using body-fixed and link element attached coordinates. Dynamic equations for flexible robot manipulators are then derived. The position of each point on the link is expressed using a transformation matrix, and the kinetic and potential energy for each element is computed and summed over all the elements. The Lagrangian formulation is applied to set up the dynamic equations of the system. Computational simulations are performed on single- and two-link manipulators with and without torque to check the validity and correctness of the derived dynamic equations. The Runge-Kutta method is used to solve the dynamic equations for flexible manipulators on which all the joints are revolute.
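
    For reference, a classical fourth-order Runge-Kutta step of the kind used to integrate the manipulator equations of motion looks like the sketch below (Python); the state layout and the placeholder dynamics function are assumptions, not the original implementation.

      def rk4_step(dynamics, t, state, dt):
          """One fourth-order Runge-Kutta step for d(state)/dt = dynamics(t, state),
          e.g. state = [joint angles, joint rates, elastic coordinates, ...]."""
          k1 = dynamics(t, state)
          k2 = dynamics(t + 0.5 * dt, state + 0.5 * dt * k1)
          k3 = dynamics(t + 0.5 * dt, state + 0.5 * dt * k2)
          k4 = dynamics(t + dt, state + dt * k3)
          return state + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)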

  14. General Relativistic Hydrodynamic Simulation of Accretion Flow from a Stellar Tidal Disruption

    NASA Astrophysics Data System (ADS)

    Shiokawa, Hotaka; Krolik, Julian H.; Cheng, Roseanne M.; Piran, Tsvi; Noble, Scott C.

    2015-05-01

    We study how the matter dispersed when a supermassive black hole tidally disrupts a star joins an accretion flow. Combining a relativistic hydrodynamic simulation of the stellar disruption with a relativistic hydrodynamics simulation of the subsequent debris motion, we track the evolution of such a system until ≃ 80% of the stellar mass bound to the black hole has settled into an accretion flow. Shocks near the stellar pericenter and also near the apocenter of the most tightly bound debris dissipate orbital energy, but only enough to make its characteristic radius comparable to the semimajor axis of the most bound material, not the tidal radius as previously envisioned. The outer shocks are caused by post-Newtonian relativistic effects, both on the stellar orbit during its disruption and on the tidal forces. Accumulation of mass into the accretion flow is both non-monotonic and slow, requiring several to 10 times the orbital period of the most tightly bound tidal streams, while the inflow time for most of the mass may be comparable to or longer than the mass accumulation time. Deflection by shocks does, however, cause some mass to lose both angular momentum and energy, permitting it to move inward even before most of the mass is accumulated into the accretion flow. Although the accretion rate still rises sharply and then decays roughly as a power law, its maximum is ≃ 0.1× the previous expectation, and the timescale of the peak is ≃ 5× longer than previously predicted. The geometric mean of the black hole mass and stellar mass inferred from a measured event timescale is therefore ≃ 0.2× the value given by classical theory.

  15. CO adsorption over Pd nanoparticles: A general framework for IR simulations on nanoparticles

    NASA Astrophysics Data System (ADS)

    Zeinalipour-Yazdi, Constantinos D.; Willock, David J.; Thomas, Liam; Wilson, Karen; Lee, Adam F.

    2016-04-01

    CO vibrational spectra over catalytic nanoparticles under high coverages/pressures are discussed from a DFT perspective. Hybrid B3LYP and PBE DFT calculations of CO chemisorbed over Pd4 and Pd13 nanoclusters, and a 1.1 nm Pd38 nanoparticle, have been performed in order to simulate the corresponding coverage-dependent infrared (IR) absorption spectra, and hence provide a quantitative foundation for the interpretation of experimental IR spectra of CO over Pd nanocatalysts. B3LYP simulated IR intensities are used to quantify site occupation numbers through comparison with experimental DRIFTS spectra, allowing an atomistic model of CO surface coverage to be created. DFT adsorption energetics for low CO coverage (θ → 0) suggest the CO binding strength follows the order hollow > bridge > linear, even for dispersion-corrected functionals for sub-nanometre Pd nanoclusters. For a Pd38 nanoparticle, hollow- and bridge-bound CO are energetically similar (hollow ≈ bridge > atop). It is well known that this ordering has not been found at the high coverages used experimentally, wherein atop CO has a much higher population than observed over Pd(111), confirmed by our DRIFTS spectra for Pd nanoparticles supported on a KIT-6 silica, and hence site populations were calculated through a comparison of DFT and spectroscopic data. At high CO coverage (θ = 1), all three adsorbed CO species co-exist on Pd38, and their interdiffusion is thermally feasible at STP. Under such high surface coverages, DFT predicts that bridge-bound CO chains are thermodynamically stable and isoenergetic to an entirely hollow-bound Pd/CO system. The Pd38 nanoparticle undergoes a linear (3.5%), isotropic expansion with increasing CO coverage, accompanied by 63 and 30 cm⁻¹ blue-shifts of hollow- and linear-bound CO, respectively.

  16. Interannual tropical rainfall variability in general circulation model simulations associated with the atmospheric model intercomparison project

    SciTech Connect

    Sperber, K.R.; Palmer, T.N.

    1996-11-01

    The interannual variability of rainfall over the Indian subcontinent, the African Sahel, and the Nordeste region of Brazil have been evaluated in 32 models for the period 1979 - 88 as part of the Atmospheric Model Intercomparison Project (AMIP). The interannual variations of Nordeste rainfall are the most readily captured, owing to the intimate link with Pacific and Atlantic sea surface temperatures. The precipitation variations over India and the Sahel are less well simulated. Additionally, an Indian monsoon wind shear index was calculated for each model. This subset of models also had a rainfall climatology that was in better agreement with observations, indicating a link between systematic model error and the ability to simulate interannual variations. A suite of six European Centre for Medium-Range Weather Forecasts (ECMWF) AMIP runs (differing only in their initial conditions) have also been examined. As observed, all-India rainfall was enhanced in 1988 relative to 1987 in each of these realizations. All-India rainfall variability during other years showed little or no predictability, possibly due to internal chaotic dynamics associated with intraseasonal monsoon fluctuations and/or unpredictable land surface process interactions. The interannual variations of Nordeste rainfall were best represented. The State University of New York at Albany /National Center for Atmospheric Research Genesis model was run in five initial condition realizations. In this model, the Nordeste rainfall variability was also best reproduced. However, for all regions the skill was less than that of the ECMWF model. The relationships of the all-India and Sahel rainfall/SST teleconnections with horizontal resolution, convection scheme closure, and numerics have been evaluated. 64 refs., 13 figs., 3 tabs.

  17. Generalized fictitious methods for fluid-structure interactions: Analysis and simulations

    NASA Astrophysics Data System (ADS)

    Yu, Yue; Baek, Hyoungsu; Karniadakis, George Em

    2013-07-01

    We present a new fictitious pressure method for fluid-structure interaction (FSI) problems in incompressible flow by generalizing the fictitious mass and damping methods we published previously in [1]. The fictitious pressure method involves modification of the fluid solver whereas the fictitious mass and damping methods modify the structure solver. We analyze all fictitious methods for simplified problems and obtain explicit expressions for the optimal reduction factor (convergence rate index) at the FSI interface [2]. This analysis also demonstrates an apparent similarity of fictitious methods to the FSI approach based on Robin boundary conditions, which have been found to be very effective in FSI problems. We implement all methods, including the semi-implicit Robin based coupling method, in the context of spectral element discretization, which is more sensitive to temporal instabilities than low-order methods. However, the methods we present here are simple and general, and hence applicable to FSI based on any other spatial discretization. In numerical tests, we verify the selection of optimal values for the fictitious parameters for simplified problems and for vortex-induced vibrations (VIV) even at zero mass ratio ("for-ever-resonance"). We also develop an empirical a posteriori analysis for complex geometries and apply it to 3D patient-specific flexible brain arteries with aneurysms for very large deformations. We demonstrate that the fictitious pressure method enhances stability and convergence, and is comparable or better in most cases to the Robin approach or the other fictitious methods.

  18. Generalized Cahn-Hilliard Navier-Stokes equations for numerical simulations of multicomponent immiscible flows

    NASA Astrophysics Data System (ADS)

    Li, Zhaorui; Livescu, Daniel

    2014-11-01

    By using the second law of thermodynamics and the Onsager reciprocal method for irreversible processes, we have developed a set of physically consistent multicomponent compressible generalized Cahn-Hilliard Navier-Stokes (CGCHNS) equations from basic thermodynamics. The new equations can describe not only flows with pure miscible and pure immiscible materials but also complex flows in which mass diffusion and surface tension or Korteweg stress effects may coexist. Furthermore, for the first time, the incompressible generalized Cahn-Hilliard Navier-Stokes (IGCHNS) equations are rigorously derived from the incompressible limit of the CGCHNS equations (as the infinite sound speed limit) and applied to the immiscible Rayleigh-Taylor instability problem. Good agreement between numerical results and linear stability theory (LST) predictions for the Rayleigh-Taylor instability is achieved over a wide range of wavenumber, surface tension, and viscosity values. The late-time results indicate that the IGCHNS equations can naturally capture complex interface topological changes including merging and breakup and are free of singularity problems.
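
    For orientation, a commonly written two-component incompressible Cahn-Hilliard Navier-Stokes system (a simplified textbook form; the generalized multicomponent equations derived in the work above contain additional cross-diffusion and Korteweg terms) reads

      \partial_t \phi + \mathbf{u} \cdot \nabla \phi = \nabla \cdot (M \nabla \mu), \qquad
      \mu = f'(\phi) - \epsilon^2 \nabla^2 \phi,

      \rho (\partial_t \mathbf{u} + \mathbf{u} \cdot \nabla \mathbf{u}) = -\nabla p
      + \nabla \cdot \left( \eta \left[ \nabla \mathbf{u} + (\nabla \mathbf{u})^{\mathsf{T}} \right] \right)
      + \mu \nabla \phi, \qquad \nabla \cdot \mathbf{u} = 0,

    where \phi is the order parameter, M a mobility, \mu the chemical potential derived from a double-well free energy f, and \mu \nabla \phi the capillary (surface-tension) forcing.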

  19. The r-process in black hole-neutron star mergers based on a fully general-relativistic simulation

    NASA Astrophysics Data System (ADS)

    Nishimura, N.; Wanajo, S.; Sekiguchi, Y.; Kiuchi, K.; Kyutoku, K.; Shibata, M.

    2016-01-01

    We investigate black hole-neutron star binary mergers in the context of r-process nucleosynthesis. Employing a hydrodynamical model simulated in the framework of full general relativity, we perform nuclear reaction network calculations. Extremely neutron-rich matter with a total mass of 0.01 M⊙ is ejected, in which a strong r-process with fission cycling proceeds due to the high neutron number density. We discuss relevant astrophysical issues such as the origin of the r-process elements as well as r-process-powered electromagnetic transients.

  20. Finite element for rotor/stator interactive forces in general engine dynamic simulation. Part 1: Development of bearing damper element

    NASA Technical Reports Server (NTRS)

    Adams, M. L.; Padovan, J.; Fertis, D. G.

    1980-01-01

    A general-purpose squeeze-film damper interactive force element was developed, coded into a software package (module) and debugged. This software package was applied to nonlinear dynamic analyses of some simple rotor systems. Results for pressure distributions show that, as expected, the long bearing (end sealed) is a stronger bearing than the short bearing. Results of the nonlinear dynamic analysis, using a four-degree-of-freedom simulation model, showed that the orbit of the rotating shaft grows nonlinearly to fill the bearing clearance as the unbalanced weight increases.

  1. Mars atmospheric dynamics as simulated by the NASA AMES General Circulation Model. I - The zonal-mean circulation

    NASA Astrophysics Data System (ADS)

    Haberle, R. M.; Pollack, J. B.; Barnes, J. R.; Zurek, R. W.; Leovy, C. B.; Murphy, J. R.; Lee, H.; Schaeffer, J.

    1993-02-01

    The characteristics of the zonal-mean circulation and how it responds to seasonal variations and dust loading are described. This circulation is the main momentum-containing component of the general circulation, and it plays a dominant role in the budgets of heat and momentum. It is shown that in many ways the zonal-mean circulation on Mars, at least as simulated by the model, is similar to that on earth, having Hadley and Ferrel cells and high-altitude jet streams. However, the Martian systems tend to be deeper, more intense, and much more variable with season. Furthermore, the radiative effects of suspended dust particles, even in small amounts, have a major influence on the general circulation.

  2. Generalized three-dimensional simulation of ferruled coupled-cavity traveling-wave-tube dispersion and impedance characteristics

    NASA Technical Reports Server (NTRS)

    Maruschek, Joseph W.; Kory, Carol L.; Wilson, Jeffrey D.

    1993-01-01

    The frequency-phase dispersion and Pierce on-axis interaction impedance of a ferruled, coupled-cavity, traveling-wave tube (TWT), slow-wave circuit were calculated using the three-dimensional simulation code Micro-SOS. The utilization of the code to reduce costly and time-consuming experimental cold tests is demonstrated by the accuracy achieved in calculating these parameters. A generalized input file was developed so that ferruled coupled-cavity TWT slow-wave circuits of arbitrary dimensions could be easily modeled. The practicality of the generalized input file was tested by applying it to the ferruled coupled-cavity slow-wave circuit of the Hughes Aircraft Company model 961HA TWT and by comparing the results with experimental results.

  3. 3D Simulations of the Early Mars Climate with a General Circulation Model

    NASA Technical Reports Server (NTRS)

    Forget, F.; Haberle, R. M.; Montmessin, F.; Cha, S.; Marcq, E.; Schaeffer, J.; Wanherdrick, Y.

    2003-01-01

    The environmental conditions that existed on Mars during the Noachian period are subject to debate in the community. In any case, there is compelling evidence that these conditions were different from what they became later in the Amazonian and possibly the Hesperian periods. Indeed, most of the old cratered terrains are dissected by valley networks (thought to have been carved by flowing liquid water), whereas younger surfaces are almost devoid of such valleys. In addition, there is evidence that the erosion rate was much higher during the early Noachian than later. Flowing water is surprising on early Mars because the solar luminosity was significantly lower than today, even with a thick atmosphere (up to several bars). To improve our understanding of the early Mars climate, we have developed a 3D general circulation model similar to those used to study the details of the present-day climate on Earth or Mars. Our first objective is to answer the following questions: how is the Martian climate modified if 1) the surface pressure is increased up to several bars (our baseline: 2 bars) and 2) the solar luminosity is decreased by 25%? We do not take into account the heat possibly released by impacts during short periods, although it may have played a role. For this purpose, we have coupled the Martian General Circulation Model developed at LMD with a sophisticated correlated-k distribution model developed at NASA Ames Research Center. It is a narrow-band model which computes the radiative transfer at both solar and thermal wavelengths (from 0.3 to 250 microns).

  4. Simulator study of a pictorial display for general aviation instrument flight

    NASA Technical Reports Server (NTRS)

    Adams, J. J.

    1982-01-01

    A simulation study of a computer-drawn pictorial display involved a flight task that included an en route segment, terminal area maneuvering, a final approach, a missed approach, and a hold. The pictorial display consists of the drawing of boxes which either move along the desired path or are fixed at designated way points. Two boxes may be shown at all times, one related to the active way point and the other related to the standby way point. Ground tracks and vertical profiles of the flights, time histories of the final approach, and comments were obtained from the pilots. The results demonstrate the accuracy and consistency with which the segments of the flight are executed. The pilots found that the display is easy to learn and to use, that it provides good situation awareness, and that it could improve the safety of flight. The small size of the display, the lack of numerical information on pitch, roll, and heading angles, and the lack of definition of the boundaries of the conventional glide slope and localizer areas were criticized.

  5. Generalized Metropolis acceptance criterion for hybrid non-equilibrium molecular dynamics—Monte Carlo simulations

    SciTech Connect

    Chen, Yunjie; Roux, Benoît

    2015-01-14

    A family of hybrid simulation methods that combines the advantages of Monte Carlo (MC) with the strengths of classical molecular dynamics (MD) consists in carrying out short non-equilibrium MD (neMD) trajectories to generate new configurations that are subsequently accepted or rejected via an MC process. In the simplest case where a deterministic dynamic propagator is used to generate the neMD trajectories, the familiar Metropolis acceptance criterion based on the change in the total energy ΔE, min[1, exp(−βΔE)], guarantees that the hybrid algorithm will yield the equilibrium Boltzmann distribution. However, the functional form of the acceptance probability is more complex when the non-equilibrium switching process is generated via a non-deterministic stochastic dissipative propagator coupled to a heat bath. Here, we clarify the conditions under which the Metropolis criterion remains valid to rigorously yield a proper equilibrium Boltzmann distribution within the hybrid neMD-MC algorithm.
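
    In the deterministic-propagator case mentioned above, the accept/reject step reduces to the familiar Metropolis test on the total-energy change of the neMD switching trajectory. A minimal sketch (Python/NumPy; the propagate_neMD and total_energy callables are placeholder assumptions):

      import numpy as np

      def hybrid_neMD_MC_step(state, propagate_neMD, total_energy, beta, rng):
          """Generate a trial configuration with a short non-equilibrium MD
          trajectory and accept it with probability min(1, exp(-beta*dE))."""
          trial = propagate_neMD(state)                    # short neMD switching trajectory
          dE = total_energy(trial) - total_energy(state)
          if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
              return trial, True                           # accepted
          return state, False                              # rejected: keep the old state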

  6. Nonorthogonal generalized hybrid Wannier functions for large-scale DFT simulations

    NASA Astrophysics Data System (ADS)

    Greco, Andrea; Freeland, John W.; Mostofi, Arash A.

    Semiconductor-based thin films have applications in microelectronics, from transistors to nanocapacitors. Many properties of such devices strongly depend on the details of the interface between a metallic electrode and the thin-film semiconductor/insulator. Hybrid Wannier Functions (WFs), extended in the surface plane, but localized along the direction normal to the surface/interface, have been successfully used to explore the properties of such heterostructures layered along a given direction, and are a natural way to study systems that are at the same time a 2D conductor (in plane) and a 1D insulator (out of plane). Current state-of-the-art implementations of Hybrid WFs rely on first performing a traditional cubic-scaling density-functional theory (DFT) calculation. This unfavourable scaling precludes the applicability of this method to the large length scales typically associated with processes in realistic structures. To overcome this limitation we extend the concept of Hybrid WFs to nonorthogonal orbitals that are directly optimized in situ in the electronic structure calculation. We implement this method in the ONETEP large-scale DFT code and we apply it to realistic heterostructure systems, showing it is able to provide plane-wave accuracy but at reduced computational cost. The authors would like to acknowledge support from the EPSRC, the Centre for Doctoral Training in Theory and Simulation of Materials, and Argonne National Laboratory.

  7. Strengthening of the Walker circulation under global warming in an aqua-planet general circulation model simulation

    NASA Astrophysics Data System (ADS)

    Li, Tim; Zhang, Lei; Murakami, Hiroyuki

    2015-11-01

    Most climate models project a weakening of the Walker circulation under global warming scenarios. It is argued, based on a globally averaged moisture budget, that this weakening can be attributed to a slower rate of rainfall increase compared to that of moisture increase, which leads to a decrease in ascending motion. Through an idealized aqua-planet simulation in which a zonal wavenumber-1 SST distribution is prescribed along the equator, we find that the Walker circulation is strengthened under a uniform 2-K SST warming, even though the global mean rainfall-moisture relationship remains the same. Further diagnosis shows that the ascending branch of the Walker cell is enhanced in the upper troposphere but weakened in the lower troposphere. As a result, a "double-cell" circulation change pattern with a clockwise (anti-clockwise) circulation anomaly in the upper (lower) troposphere forms, and the upper tropospheric circulation change dominates. The mechanism for the formation of the "double cell" circulation pattern is attributed to a larger (smaller) rate of increase of diabatic heating than static stability in the upper (lower) troposphere. The result indicates that the future change of the Walker circulation cannot simply be interpreted based on a global mean moisture budget argument.

  8. A Second Law Based Unstructured Finite Volume Procedure for Generalized Flow Simulation

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok

    1998-01-01

    An unstructured finite volume procedure has been developed for steady and transient thermo-fluid dynamic analysis of fluid systems and components. The procedure is applicable to a flow network consisting of pipes and various fittings where flow is assumed to be one dimensional. It can also be used to simulate flow in a component by modeling a multi-dimensional flow using the same numerical scheme. The flow domain is discretized into a number of interconnected control volumes located arbitrarily in space. The conservation equations for each control volume account for the transport of mass, momentum and entropy from the neighboring control volumes. In addition, they also include the sources of each conserved variable and time-dependent terms. The source term of the entropy equation contains entropy generation due to heat transfer and fluid friction. Thermodynamic properties are computed from the equation of state of a real fluid. The system of equations is solved by a hybrid numerical method which is a combination of simultaneous Newton-Raphson and successive substitution schemes. The paper also describes the application and verification of the procedure by comparing its predictions with analytical and numerical solutions of several benchmark problems.
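
    A bare-bones illustration of the Newton-Raphson part of the hybrid solution scheme mentioned above, using a finite-difference Jacobian, is sketched below in Python/NumPy; the residual function standing in for the discretized conservation equations is a placeholder assumption.

      import numpy as np

      def newton_raphson(residual, x0, tol=1e-8, max_iter=50, eps=1e-6):
          """Solve residual(x) = 0 for the vector of unknowns x (e.g. pressures,
          flow rates, temperatures) by Newton-Raphson iteration."""
          x = np.asarray(x0, float).copy()
          for _ in range(max_iter):
              r = residual(x)
              if np.linalg.norm(r) < tol:
                  break
              J = np.empty((len(r), len(x)))
              for j in range(len(x)):                      # finite-difference Jacobian
                  xp = x.copy()
                  xp[j] += eps
                  J[:, j] = (residual(xp) - r) / eps
              x = x - np.linalg.solve(J, r)
          return x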

  9. Multiyear Simulations of the Martian Water Cycle with the Ames General Circulation Model

    NASA Technical Reports Server (NTRS)

    Haberle, R. M.; Schaeffer, J. R.; Nelli, S. M.; Murphy, J. R.

    2003-01-01

    The Martian atmosphere is carbon-dioxide dominated with non-negligible amounts of water vapor and suspended dust particles. The atmospheric dust plays an important role in the heating and cooling of the planet through absorption and emission of radiation. Small dust particles can potentially be carried to great altitudes and affect the temperatures there. Water vapor condensing onto the dust grains can affect the radiative properties of both, as well as their vertical extent. The condensation of water onto a dust grain will change the grain's fall speed and diminish the possibility of dust reaching high altitudes. In this capacity, water becomes a controlling agent with regard to the vertical distribution of dust. Similarly, the atmosphere's water vapor holding capacity is affected by the amount of dust in the atmosphere. Dust is an excellent greenhouse catalyst; it raises the temperature of the atmosphere, and thus, its water vapor holding capacity. There is, therefore, a potentially significant interplay between the Martian dust and water cycles. Previous research using global, 3-D computer modeling to better understand the Martian atmosphere treats the dust and water cycles as two separate and independent processes. The existing Ames numerical model will be employed to simulate the relationship between the Martian dust and water cycles by actually coupling the two cycles. Water will condense onto the dust, allowing the particles' radiative characteristics, fall speeds, and as a result, their vertical distribution to change. Data obtained from the Viking, Mars Pathfinder, and especially the Mars Global Surveyor missions will be used to determine the accuracy of the model results.

  10. Interfacing the Generalized Fluid System Simulation Program with the SINDA/G Thermal Program

    NASA Technical Reports Server (NTRS)

    Schallhorn, Paul; Palmiter, Christopher; Farmer, Jeffery; Lycans, Randall; Tiller, Bruce

    2000-01-01

    A general purpose, one dimensional fluid flow code has been interfaced with the thermal analysis program SINDA/G. The flow code, GFSSP, is capable of analyzing steady state and transient flow in a complex network. The flow code is capable of modeling several physical phenomena including compressibility effects, phase changes, body forces (such as gravity and centrifugal) and mixture thermodynamics for multiple species. The addition of GFSSP to SINDA/G provides a significant improvement in convective heat transfer modeling for SINDA/G. The interface development was conducted in two phases. This paper describes the first phase, which allows for steady and quasi-steady (unsteady solid, steady fluid) conjugate heat transfer modeling. The second phase (full transient conjugate heat transfer modeling) of the interface development will be addressed in a later paper. Phase 1 development has been benchmarked to an analytical solution with excellent agreement. Additional test cases for each development phase demonstrate desired features of the interface. The results of the benchmark case, three additional test cases and a practical application are presented herein.

  11. Multidecadal Variability Simulated With an Atmospheric General Circulation Model Forced With Observed Sea Surface Temperature

    NASA Astrophysics Data System (ADS)

    Grosfeld, K.; Rimbu, N.; Lohmann, G.; Lunkeit, F.

    2002-12-01

    We investigate the response of an atmospheric general circulation model to observed sea surface temperature for the instrumental period 1856-2000. The model used is the Portable University Model of the Atmosphere (PUMA), developed at the University of Hamburg for long-term climate studies. When the model is forced with global sea surface temperatures (SSTs), the model interdecadal variability is dominated by the Atlantic Interdecadal Mode (AIM) and its associated teleconnection patterns. The modeled sea surface patterns of interdecadal variability are in good agreement with analyses of observational time series in an ensemble-mode integration. Positive SST anomalies and a sea level pressure (SLP) dipole pattern dominate the North Atlantic, while a strong positive anomaly in SLP is characteristic of the North Pacific Ocean. Although the observational database is short, investigations of the typical AIM patterns before and after the climate shift in the 1970s suggest an oscillatory multidecadal mode rather than a singular event for that period. Additional experiments with "Atlantic only" forcing depict strong sensitivities of the relative roles of Atlantic and Pacific SST data in initiating variability at multidecadal time scales. Our results have implications for climate predictability on long time scales from observed SST data.

  12. A NURBS-based generalized finite element scheme for 3D simulation of heterogeneous materials

    NASA Astrophysics Data System (ADS)

    Safdari, Masoud; Najafi, Ahmad R.; Sottos, Nancy R.; Geubelle, Philippe H.

    2016-08-01

    A 3D NURBS-based interface-enriched generalized finite element method (NIGFEM) is introduced to solve problems with complex discontinuous gradient fields observed in the analysis of heterogeneous materials. The method utilizes simple structured meshes of hexahedral elements that do not necessarily conform to the material interfaces in heterogeneous materials. By avoiding the creation of conforming meshes used in conventional FEM, the NIGFEM leads to significant simplification of the mesh generation process. To achieve an accurate solution in elements that are crossed by material interfaces, the NIGFEM utilizes Non-Uniform Rational B-Splines (NURBS) to enrich the solution field locally. The accuracy and convergence of the NIGFEM are tested by solving a benchmark problem. We observe that the NIGFEM preserves an optimal rate of convergence, and provides additional advantages including the accurate capture of the solution fields in the vicinity of material interfaces and the built-in capability for hierarchical mesh refinement. Finally, the use of the NIGFEM in the computational analysis of heterogeneous materials is discussed.

  13. Aref's chaotic orbits tracked by a general ellipsoid using 3D numerical simulations

    NASA Astrophysics Data System (ADS)

    Shui, Pei; Popinet, Stéphane; Govindarajan, Rama; Valluri, Prashant

    2015-11-01

    The motion of an ellipsoidal solid in an ideal fluid has been shown to be chaotic (Aref, 1993) under the limit of non-integrability of Kirchhoff's equations (Kozlov & Oniscenko, 1982). On the other hand, the particle could stop moving when the damping viscous force is strong enough. We present numerical evidence using our in-house immersed solid solver for 3D chaotic motion of a general ellipsoidal solid and suggest criteria for triggering such motion. Our immersed solid solver functions under the framework of the Gerris flow package of Popinet et al. (2003). This solver, the Gerris Immersed Solid Solver (GISS), resolves six-degree-of-freedom motion of immersed solids with arbitrary geometry and number. We validate our results against the solution of Kirchhoff's equations. The study also shows that the translational/rotational energy ratio plays the key role in the motion pattern, while the particle geometry and the density ratio between the solid and fluid also have some influence on the chaotic behaviour. Along with several other benchmark cases for viscous flows, we propose prediction of chaotic Aref's orbits as a key benchmark test case for immersed boundary/solid solvers.

  14. Comparison of spectral surface albedos and their impact on the general circulation model simulated surface climate

    NASA Astrophysics Data System (ADS)

    Roesch, A.; Wild, M.; Pinker, R.; Ohmura, A.

    2002-07-01

    This study investigates the impact of spectrally resolved surface albedo on the total surface albedo. The neglect of albedo variation within the shortwave spectrum may lead to substantial errors, as atmospheric water greatly influences the spectral distribution of the incoming radiation. It is shown that ignoring the spectral dependence of the surface albedo will affect the predicted climate. The study reveals substantial changes in the climate over northern Africa when modifying the surface albedo of the Sahara desert. Detailed information is given on how the European Center/Hamburg General Circulation Model (ECHAM4) can be extended to include surface boundary conditions for both the visible and near-infrared incoming radiation. This comprises global climatologies for both the visible and near-infrared albedo for snow-free conditions, as well as the corresponding albedo values over snow, land/sea ice and snow-covered forests. Comparisons between several available surface albedo climatologies and a newly compiled albedo data set show substantial scatter in estimated albedos. The largest albedo differences are found in snow-covered forest regions as well as in arid and semi-arid terrains.
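
    As a minimal numerical illustration of why the spectral split matters (the values below are assumed, not taken from the paper): the broadband surface albedo is the flux-weighted mean of the visible and near-infrared albedos, so a change in atmospheric water vapor shifts the visible/near-infrared weighting and hence the total albedo even when the spectral albedos themselves are fixed.

```python
def broadband_albedo(alpha_vis, alpha_nir, f_vis):
    """Flux-weighted total albedo; f_vis is the visible fraction of incoming shortwave."""
    return f_vis * alpha_vis + (1.0 - f_vis) * alpha_nir

# Snow-like surface (illustrative values only): high visible, lower near-infrared albedo.
alpha_vis, alpha_nir = 0.90, 0.60

# A moister atmosphere absorbs more near-infrared radiation, raising the visible fraction.
for f_vis in (0.45, 0.55, 0.65):
    total = broadband_albedo(alpha_vis, alpha_nir, f_vis)
    print(f"visible fraction {f_vis:.2f} -> broadband albedo {total:.3f}")
```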

  15. El Nino-southern oscillation simulated in an MRI atmosphere-ocean coupled general circulation model

    SciTech Connect

    Nagai, T.; Tokioka, T.; Endoh, M.; Kitamura, Y. )

    1992-11-01

    A coupled atmosphere-ocean general circulation model (GCM) was time integrated for 30 years to study interannual variability in the tropics. The atmospheric component is a global GCM with 5 levels in the vertical and 4° latitude × 5° longitude grids in the horizontal, including standard physical processes (e.g., interactive clouds). The oceanic component is a GCM for the Pacific with 19 levels in the vertical and 1° × 2.5° grids in the horizontal, including seasonally varying solar radiation as forcing. The model succeeded in reproducing interannual variations that resemble the El Nino-Southern Oscillation (ENSO) with realistic seasonal variations in the atmospheric and oceanic fields. The model ENSO cycle has a time scale of approximately 5 years and the model El Nino (warm) events are locked roughly in phase to the seasonal cycle. The cold events, however, are less evident in comparison with the El Nino events. The time scale of the model ENSO cycle is determined by the propagation time of signals from the central-eastern Pacific to the western Pacific and back to the eastern Pacific. Seasonal timing is also important in the ENSO time scale: wind anomalies in the central-eastern Pacific occur in summer and the atmosphere-ocean coupling in the western Pacific operates efficiently in the first half of the year.

  16. Simulating the universe(s) II: phenomenology of cosmic bubble collisions in full general relativity

    NASA Astrophysics Data System (ADS)

    Wainwright, Carroll L.; Johnson, Matthew C.; Aguirre, Anthony; Peiris, Hiranya V.

    2014-10-01

    Observing the relics of collisions between bubble universes would provide direct evidence for the existence of an eternally inflating Multiverse; the non-observation of such events can also provide important constraints on inflationary physics. Realizing these prospects requires quantitative predictions for observables from the properties of the possible scalar field Lagrangians underlying eternal inflation. Building on previous work, we establish this connection in detail. We perform a fully relativistic numerical study of the phenomenology of bubble collisions in models with a single scalar field, computing the comoving curvature perturbation produced in a wide variety of models. We also construct a set of analytic predictions, allowing us to identify the phenomenologically relevant properties of the scalar field Lagrangian. The agreement between the analytic predictions and numerics in the relevant regions is excellent, and allows us to generalize our results beyond the models we adopt for the numerical studies. Specifically, the signature is completely determined by the spatial profile of the colliding bubble just before the collision, and the de Sitter invariant distance between the bubble centers. The analytic and numerical results support a power-law fit with an index 1 < κ ≲ 2. For collisions between identical bubbles, we establish a lower-bound on the observed amplitude of collisions that is set by the present energy density in curvature.

  17. Simulating the universe(s) II: phenomenology of cosmic bubble collisions in full general relativity

    SciTech Connect

    Wainwright, Carroll L.; Aguirre, Anthony; Johnson, Matthew C.; Peiris, Hiranya V.

    2014-10-01

    Observing the relics of collisions between bubble universes would provide direct evidence for the existence of an eternally inflating Multiverse; the non-observation of such events can also provide important constraints on inflationary physics. Realizing these prospects requires quantitative predictions for observables from the properties of the possible scalar field Lagrangians underlying eternal inflation. Building on previous work, we establish this connection in detail. We perform a fully relativistic numerical study of the phenomenology of bubble collisions in models with a single scalar field, computing the comoving curvature perturbation produced in a wide variety of models. We also construct a set of analytic predictions, allowing us to identify the phenomenologically relevant properties of the scalar field Lagrangian. The agreement between the analytic predictions and numerics in the relevant regions is excellent, and allows us to generalize our results beyond the models we adopt for the numerical studies. Specifically, the signature is completely determined by the spatial profile of the colliding bubble just before the collision, and the de Sitter invariant distance between the bubble centers. The analytic and numerical results support a power-law fit with an index 1 < κ ≲ 2. For collisions between identical bubbles, we establish a lower-bound on the observed amplitude of collisions that is set by the present energy density in curvature.

  18. The atmospheric chemistry general circulation model ECHAM5/MESSy1: consistent simulation of ozone from the surface to the mesosphere

    NASA Astrophysics Data System (ADS)

    Jöckel, P.; Tost, H.; Pozzer, A.; Brühl, C.; Buchholz, J.; Ganzeveld, L.; Hoor, P.; Kerkweg, A.; Lawrence, M. G.; Sander, R.; Steil, B.; Stiller, G.; Tanarhte, M.; Taraborrelli, D.; van Aardenne, J.; Lelieveld, J.

    2006-07-01

    The new Modular Earth Submodel System (MESSy) describes atmospheric chemistry and meteorological processes in a modular framework, following strict coding standards. It has been coupled to the ECHAM5 general circulation model, which has been slightly modified for this purpose. A 90-layer model version up to 0.01 hPa was used at T42 resolution (~2.8° latitude and longitude) to simulate the lower and middle atmosphere. The model meteorology has been tested to check the influence of the changes to ECHAM5 and the radiation interactions with the new representation of atmospheric composition. A Newtonian relaxation technique was applied in the tropospheric part of the domain to weakly nudge the model towards the analysed meteorology during the period 1998-2005. It is shown that the tropospheric wave forcing of the stratosphere in the model suffices to reproduce the Quasi-Biennial Oscillation and major stratospheric warming events leading, e.g., to the vortex split over Antarctica in 2002. Characteristic features such as dehydration and denitrification caused by the sedimentation of polar stratospheric cloud particles and ozone depletion during winter and spring are simulated accurately, although ozone loss in the lower polar stratosphere is slightly underestimated. The model realistically simulates stratosphere-troposphere exchange processes as indicated by comparisons with satellite and in situ measurements. The evaluation of tropospheric chemistry presented here focuses on the distributions of ozone, hydroxyl radicals, carbon monoxide and reactive nitrogen compounds. In spite of minor shortcomings, mostly related to the relatively coarse T42 resolution and the neglect of interannual changes in biomass burning emissions, the main characteristics of the trace gas distributions are generally reproduced well. The MESSy submodels and the ECHAM5/MESSy1 model output are available through the internet on request.

  19. Interannual Variability of Martian Global Dust Storms: Simulations with a Low-Order Model of the General Circulation

    NASA Technical Reports Server (NTRS)

    Pankine, A. A.; Ingersoll, Andrew P.

    2002-01-01

    We present simulations of the interannual variability of martian global dust storms (GDSs) with a simplified low-order model (LOM) of the general circulation. The simplified model allows one to conduct computationally fast long-term simulations of the martian climate system. The LOM is constructed by Galerkin projection of a 2D (zonally averaged) general circulation model (GCM) onto a truncated set of basis functions. The resulting LOM consists of 12 coupled nonlinear ordinary differential equations describing atmospheric dynamics and dust transport within the Hadley cell. The forcing of the model is described by simplified physics based on Newtonian cooling and Rayleigh friction. The atmosphere and surface are coupled: atmospheric heating depends on the dustiness of the atmosphere, and the surface dust source depends on the strength of the atmospheric winds. Parameters of the model are tuned to fit the output of the NASA Ames GCM and the fit is generally very good. Interannual variability of GDSs is possible in the LOM, but only when stochastic forcing is added to the model. The stochastic forcing could be provided by transient weather systems or some surface process such as redistribution of the sand particles in storm-generating zones on the surface. The results are sensitive to the value of the saltation threshold, which hints at a possible feedback between saltation threshold and dust storm activity. According to this hypothesis, erodible material builds up as a result of a local process, whose effect is to lower the saltation threshold until a GDS occurs. The saltation threshold adjusts its value so that dust storms are barely able to occur.
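
    The role of stochastic forcing mentioned above can be sketched with a one-variable stand-in for the low-order model (the actual LOM has 12 coupled equations; the relaxation time and noise amplitude below are assumed): a deterministic tendency is integrated with an added white-noise term using an Euler-Maruyama step, which is one common way to inject the kind of random forcing that transient weather systems might provide.

```python
import numpy as np

rng = np.random.default_rng(0)

def tendency(x, tau=10.0):
    """Deterministic relaxation toward zero (a stand-in for the low-order dynamics)."""
    return -x / tau

dt, n_steps, sigma = 0.1, 5000, 0.3        # step, length, noise amplitude (assumed values)
x = np.zeros(n_steps)
for k in range(1, n_steps):
    noise = sigma * np.sqrt(dt) * rng.standard_normal()   # Euler-Maruyama noise increment
    x[k] = x[k - 1] + tendency(x[k - 1]) * dt + noise

print("standard deviation of the stochastically forced state:", round(x.std(), 3))
```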

  20. Numerical simulation of 137Cs and (239,240)Pu concentrations by an ocean general circulation model.

    PubMed

    Tsumune, Daisuke; Aoyama, Michio; Hirose, Katsumi

    2003-01-01

    We simulated the spatial distributions and the temporal variations of 137Cs and (239,240)Pu concentrations in the ocean by using the ocean general circulation model developed by the National Center for Atmospheric Research. These nuclides are introduced into seawater from global fallout due to atmospheric nuclear weapons tests. The distribution of radioactive deposition on the world ocean is estimated from global precipitation data and observed values of annual deposition of radionuclides at the Meteorological Research Institute in Japan and several observation points in New Zealand. Radionuclides from global fallout are transported by advection, diffusion and scavenging, and their concentrations are reduced by radioactive decay in the ocean. We verified the results of the model calculations by comparing simulated values of 137Cs and (239,240)Pu in seawater with the observed values in the HAM (Historical Artificial Radionuclides) database, which has been constructed by the Meteorological Research Institute. The vertical distributions of the calculated 137Cs concentrations are in good agreement with the observed profiles in the 1960s down to 250 m, in the 1970s down to 500 m, and in the 1980s and 1990s down to 750 m. However, the calculated 137Cs concentrations were underestimated compared with the observed 137Cs at deeper layers. This may suggest other transport processes of 137Cs to deep waters. The horizontal distributions of 137Cs concentrations in surface water could be simulated. A numerical tracer release experiment was performed to explain the horizontal distribution pattern. A maximum (239,240)Pu concentration layer occurs at an intermediate depth for both observed and calculated values, which is formed by particle scavenging. The horizontal distributions of the calculated (239,240)Pu concentrations in surface water could be simulated by considering the scavenging effect. PMID:12860090
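
    The transport processes listed above (advection, diffusion, scavenging, and radioactive decay) can be sketched in one dimension as follows; the sinking speed, diffusivity, and scavenging rate are assumed illustrative values, whereas the study itself solves these processes in a full 3-D ocean general circulation model.

```python
import numpy as np

# 1-D water column: upwind advection + explicit diffusion + first-order sinks.
nz, dz, dt = 50, 20.0, 3600.0                  # layers, layer thickness (m), time step (s)
w, kappa = 1.0e-4, 1.0e-3                      # sinking speed (m/s) and diffusivity (m^2/s), assumed
lam = np.log(2.0) / (30.17 * 365.25 * 86400)   # 137Cs decay constant (half-life ~30.17 yr)
scav = 1.0e-9                                  # first-order scavenging rate (1/s), assumed

c = np.zeros(nz)
c[0] = 1.0                                     # surface concentration from fallout (arbitrary units)

for _ in range(24 * 365):                      # one year of hourly steps
    adv = -w * np.diff(c, prepend=c[0]) / dz                            # upwind advection
    dif = kappa * np.diff(c, n=2, prepend=c[0], append=c[-1]) / dz**2   # vertical diffusion
    c = c + dt * (adv + dif - (lam + scav) * c)                         # decay + scavenging sinks
    c[0] = 1.0                                 # hold the surface value fixed

print("concentration at 500 m depth:", c[int(500 / dz)])
```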

  1. Nested generalized linear mixed model with ordinal response: Simulation and application on poverty data in Java Island

    NASA Astrophysics Data System (ADS)

    Widyaningsih, Yekti; Saefuddin, Asep; Notodiputro, Khairil A.; Wigena, Aji H.

    2012-05-01

    The objective of this research is to build a nested generalized linear mixed model using an ordinal response variable with some covariates. There are three main jobs in this paper, i.e., the parameter estimation procedure, simulation, and implementation of the model for real data. In the parameter estimation procedure, concepts of the threshold, nested random effect, and computational algorithm are described. The simulation data are built for 3 conditions to examine the effect of different parameter values of the random effect distributions. The last job is the implementation of the model for data about poverty in 9 districts of Java Island. The districts are Kuningan, Karawang, and Majalengka, chosen randomly in West Java; Temanggung, Boyolali, and Cilacap from Central Java; and Blitar, Ngawi, and Jember from East Java. The covariates in this model are province, number of bad nutrition cases, number of farmer families, and number of health personnel. In this modeling, all covariates are grouped on an ordinal scale. The unit of observation in this research is the sub-district (kecamatan) nested in district, and districts (kabupaten) are nested in province. For the simulation results, the ARB (Absolute Relative Bias) and RRMSE (Relative Root Mean Square Error) measures are used. They show that the province parameters have the highest bias, but more stable RRMSE in all conditions. The simulation design needs to be improved by adding other conditions, such as higher correlation between covariates. Furthermore, for the model implementation on the data, only the number of farmer families and the number of health personnel have significant contributions to the level of poverty in the Central Java and East Java provinces, and only district 2 (Karawang) of province 1 (West Java) has a random effect different from the others. The source of the data is PODES (Potensi Desa) 2008 from BPS (Badan Pusat Statistik).
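
    For reference, the two evaluation measures named above can be computed as in the short sketch below. These are the commonly used definitions (mean-estimate bias and root-mean-square error, each normalized by the true parameter value); the paper's exact definitions may differ slightly.

```python
import numpy as np

def arb(estimates, true_value):
    """Absolute Relative Bias: |mean(estimate) - true| / |true| (assumed definition)."""
    return abs(np.mean(estimates) - true_value) / abs(true_value)

def rrmse(estimates, true_value):
    """Relative Root Mean Square Error: RMSE normalized by |true| (assumed definition)."""
    err = np.asarray(estimates) - true_value
    return np.sqrt(np.mean(err ** 2)) / abs(true_value)

# Toy check: five simulated estimates of a random-effect variance whose true value is 2.0.
est = np.array([1.8, 2.3, 2.1, 1.7, 2.4])
print(f"ARB = {arb(est, 2.0):.3f}, RRMSE = {rrmse(est, 2.0):.3f}")
```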

  2. A nesting model for bias correction of variability at multiple time scales in general circulation model precipitation simulations

    NASA Astrophysics Data System (ADS)

    Johnson, Fiona; Sharma, Ashish

    2012-01-01

    Climate change impact assessments of water resources systems require simulations of precipitation and evaporation that exhibit distributional and persistence attributes similar to the historical record. Specifically, there is a need to ensure that general circulation model (GCM) simulations of rainfall for the current climate exhibit low-frequency variability that is consistent with observed data. Inability to represent low-frequency variability in precipitation and flow leads to biased estimates of the security offered by water resources systems in a warmer climate. This paper presents a method to postprocess GCM precipitation simulations by imparting correct distributional and persistence attributes, resulting in sequences that are representative of observed records across a range of time scales. The proposed approach is named nesting bias correction (NBC), the rationale being to correct distributional and persistence bias from fine to progressively longer time scales. In the results presented here, distributional attributes are represented by order 1 and 2 moments, with persistence represented by lag 1 autocorrelation coefficients at monthly and annual time scales. The NBC method was applied to the Commonwealth Scientific and Industrial Research Organisation (CSIRO) Mk3.5 and MIROC 3.2 hires rainfall simulations for Australia. It was found that the nesting method worked well to correct means, standard deviations, and lag 1 autocorrelations when the biases in the raw GCM outputs were not too large. While the bias correction improves the representation of distributional and persistence attributes at the time scales considered, there is room to improve the representation of longer-term persistence by extending the approach to time scales longer than a year.
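
    A heavily simplified, single-scale sketch of the kind of correction that NBC nests across time scales is shown below: the raw GCM values for one calendar month are standardized and rescaled so their mean and standard deviation match the observations. The full method applies such corrections recursively at monthly and annual scales and also adjusts the lag 1 autocorrelation; the synthetic data here are assumed purely for illustration (and the simple rescaling can produce negative rainfall, which the published method avoids).

```python
import numpy as np

def scale_to_observations(gcm, obs):
    """Match the mean and standard deviation of a GCM series to observations (one scale only)."""
    z = (gcm - gcm.mean()) / gcm.std(ddof=1)      # standardize the raw GCM values
    return obs.mean() + obs.std(ddof=1) * z       # rescale to the observed moments

rng = np.random.default_rng(1)
obs_jan = rng.gamma(shape=2.0, scale=40.0, size=40)        # synthetic observed January rainfall (mm)
gcm_jan = rng.gamma(shape=2.0, scale=55.0, size=40) + 20   # synthetic biased GCM January rainfall (mm)

corrected = scale_to_observations(gcm_jan, obs_jan)
print("observed mean/std:  ", round(obs_jan.mean(), 1), round(obs_jan.std(ddof=1), 1))
print("corrected mean/std: ", round(corrected.mean(), 1), round(corrected.std(ddof=1), 1))
```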

  3. Simulations of Cyclic Voltammetry for Electric Double Layers in Asymmetric Electrolytes: A Generalized Modified Poisson-Nernst-Planck Model

    SciTech Connect

    Wang, Hainan; Thiele, Alexander; Pilon, Laurent

    2013-11-15

    This paper presents a generalized modified Poisson–Nernst–Planck (MPNP) model derived from first principles based on excess chemical potential and Langmuir activity coefficient to simulate electric double-layer dynamics in asymmetric electrolytes. The model accounts simultaneously for (1) asymmetric electrolytes with (2) multiple ion species, (3) finite ion sizes, and (4) Stern and diffuse layers along with Ohmic potential drop in the electrode. It was used to simulate cyclic voltammetry (CV) measurements for binary asymmetric electrolytes. The results demonstrated that the current density increased significantly with decreasing ion diameter and/or increasing valency |zi| of either ion species. By contrast, the ion diffusion coefficients affected the CV curves and capacitance only at large scan rates. Dimensional analysis was also performed, and 11 dimensionless numbers were identified to govern the CV measurements of the electric double layer in binary asymmetric electrolytes between two identical planar electrodes of finite thickness. A self-similar behavior was identified for the electric double-layer integral capacitance estimated from CV measurement simulations. Two regimes were identified by comparing the half cycle period τCV and the "RC time scale" τRC corresponding to the characteristic time of ions' electrodiffusion. For τRC ≪ τCV, quasi-equilibrium conditions prevailed and the capacitance was diffusion-independent, while for τRC ≫ τCV, the capacitance was diffusion-limited. The effect of the electrode was captured by the dimensionless electrode electrical conductivity representing the ratio of characteristic times associated with charge transport in the electrolyte and that in the electrode. The model developed here will be useful for simulating and designing various practical electrochemical, colloidal, and biological systems for a wide range of applications.

  4. Work Practice Simulation of Complex Human-Automation Systems in Safety Critical Situations: The Brahms Generalized Ueberlingen Model

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Linde, Charlotte; Seah, Chin; Shafto, Michael

    2013-01-01

    The transition from the current air traffic system to the next generation air traffic system will require the introduction of new automated systems, including transferring some functions from air traffic controllers to on-board automation. This report describes a new design verification and validation (V&V) methodology for assessing aviation safety. The approach involves a detailed computer simulation of work practices that includes people interacting with flight-critical systems. The research is part of an effort to develop new modeling and verification methodologies that can assess the safety of flight-critical systems, system configurations, and operational concepts. The 2002 Ueberlingen mid-air collision was chosen for analysis and modeling because one of the main causes of the accident was one crew's response to a conflict between the instructions of the air traffic controller and the instructions of TCAS, the on-board Traffic Alert and Collision Avoidance System. It thus furnishes an example of the problem of authority versus autonomy. It provides a starting point for exploring authority/autonomy conflict in the larger system of organization, tools, and practices in which the participants' moment-by-moment actions take place. We have developed a general air traffic system model (not a specific simulation of Überlingen events), called the Brahms Generalized Ueberlingen Model (Brahms-GUeM). Brahms is a multi-agent simulation system that models people, tools, facilities/vehicles, and geography to simulate the current air transportation system as a collection of distributed, interactive subsystems (e.g., airports, air-traffic control towers and personnel, aircraft, automated flight systems and air-traffic tools, instruments, crew). Brahms-GUeM can be configured in different ways, called scenarios, such that anomalous events that contributed to the Überlingen accident can be modeled as functioning according to requirements or in an

  5. How to Hold a Model Legislature: A Simulation of the Georgia General Assembly, Teacher's Manual [And] The Model Legislature, Student's Kit.

    ERIC Educational Resources Information Center

    Jackson, Edwin L.

    The student's kit and teacher's manual provide a framework for secondary students to simulate the functionings of Georgia's General Assembly. Objectives of the simulation are to help students: (1) experience the forces and conflicts involved in lawmaking, (2) learn about the role of legislators, (3) understand and discuss issues facing citizens,…

  6. Use of Generalized Fluid System Simulation Program (GFSSP) for Teaching and Performing Senior Design Projects at the Educational Institutions

    NASA Technical Reports Server (NTRS)

    Majumdar, A. K.; Hedayat, A.

    2015-01-01

    This paper describes the experience of the authors in using the Generalized Fluid System Simulation Program (GFSSP) in teaching the Design of Thermal Systems class at the University of Alabama in Huntsville. GFSSP is a finite volume based thermo-fluid system network analysis code, developed at NASA/Marshall Space Flight Center, and is extensively used in NASA, the Department of Defense, and the aerospace industry for propulsion system design, analysis, and performance evaluation. The educational version of GFSSP is freely available to all US higher education institutions. The main purpose of the paper is to illustrate the utilization of this user-friendly code for thermal systems design and fluid engineering courses and to encourage instructors to utilize the code for class assignments as well as senior design projects.

  7. Experimental simulation of the accommodation in general-type triple junctions during the deformation of tricrystals and nanocrystalline structures

    NASA Astrophysics Data System (ADS)

    Sisanbaev, A. V.; Demchenko, A. A.; Demchenko, M. V.

    2013-10-01

    The development of the accommodation processes in general-type triple junctions is studied during the deformation of tricrystals and a system of nanocrystals with various grain sizes. Molecular dynamics simulation of the deformation of a system of nanocrystals with a grain size of ˜100 nm results in a set of accommodation processes that is identical to that in the tricrystals, namely, the nucleation of a dislocation "fold" at grain-boundary kinks and in triple junctions, the formation of subgrains near grain boundaries, grain fragmentation, and propeller-like grain-boundary migration near triple junctions. The appearance of nanograin rotation in a system of nanograins with a grain size of ˜10 nm is shown.

  8. Modeling of Compressible Flow with Friction and Heat Transfer Using the Generalized Fluid System Simulation Program (GFSSP)

    NASA Technical Reports Server (NTRS)

    Bandyopadhyay, Alak; Majumdar, Alok

    2007-01-01

    The present paper describes the verification and validation of a quasi one-dimensional pressure based finite volume algorithm, implemented in the Generalized Fluid System Simulation Program (GFSSP), for predicting compressible flow with friction, heat transfer and area change. The numerical predictions were compared with two classical solutions of compressible flow, i.e., Fanno and Rayleigh flow. Fanno flow provides an analytical solution for compressible flow in a long slender pipe where incoming subsonic flow can be choked due to friction. On the other hand, Rayleigh flow provides an analytical solution for frictionless compressible flow with heat transfer where incoming subsonic flow can be choked at the outlet boundary with heat addition to the control volume. Nonuniform grid distribution improves the accuracy of the numerical prediction. A benchmark numerical solution of compressible flow in a converging-diverging nozzle with friction and heat transfer has been developed to verify GFSSP's numerical predictions. The numerical predictions compare favorably in all cases.
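
    The Fanno-flow benchmark mentioned above has a closed-form solution against which a numerical code can be checked. The sketch below evaluates the standard textbook relation between the Mach number M and the dimensionless friction length 4fL*/D needed to choke an adiabatic perfect-gas flow; it is a reference formula for comparison, not code from GFSSP.

```python
import numpy as np

def fanno_friction_length(M, gamma=1.4):
    """4 f L*/D as a function of Mach number for adiabatic flow with friction (Fanno flow)."""
    term1 = (1.0 - M**2) / (gamma * M**2)
    term2 = (gamma + 1.0) / (2.0 * gamma) * np.log(
        (gamma + 1.0) * M**2 / (2.0 + (gamma - 1.0) * M**2))
    return term1 + term2

# Dimensionless duct length from each inlet Mach number to the choked condition M = 1.
for M in (0.2, 0.5, 0.8, 1.0):
    print(f"M = {M:.1f}: 4fL*/D = {fanno_friction_length(M):.4f}")
```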

  9. Binding constants of membrane-anchored receptors and ligands: A general theory corroborated by Monte Carlo simulations.

    PubMed

    Xu, Guang-Kui; Hu, Jinglei; Lipowsky, Reinhard; Weikl, Thomas R

    2015-12-28

    Adhesion processes of biological membranes that enclose cells and cellular organelles are essential for immune responses, tissue formation, and signaling. These processes depend sensitively on the binding constant K2D of the membrane-anchored receptor and ligand proteins that mediate adhesion, which is difficult to measure in the "two-dimensional" (2D) membrane environment of the proteins. An important problem therefore is to relate K2D to the binding constant K3D of soluble variants of the receptors and ligands that lack the membrane anchors and are free to diffuse in three dimensions (3D). In this article, we present a general theory for the binding constants K2D and K3D of rather stiff proteins whose main degrees of freedom are translation and rotation, along membranes and around anchor points "in 2D," or unconstrained "in 3D." The theory generalizes previous results by describing how K2D depends both on the average separation and thermal nanoscale roughness of the apposing membranes, and on the length and anchoring flexibility of the receptors and ligands. Our theoretical results for the ratio K2D/K3D of the binding constants agree with detailed results from Monte Carlo simulations without any data fitting, which indicates that the theory captures the essential features of the "dimensionality reduction" due to membrane anchoring. In our Monte Carlo simulations, we consider a novel coarse-grained model of biomembrane adhesion in which the membranes are represented as discretized elastic surfaces, and the receptors and ligands as anchored molecules that diffuse continuously along the membranes and rotate at their anchor points. PMID:26723621

  10. A two-scale generalized finite element method for fatigue crack propagation simulations utilizing a fixed, coarse hexahedral mesh

    NASA Astrophysics Data System (ADS)

    O'Hara, P.; Hollkamp, J.; Duarte, C. A.; Eason, T.

    2016-01-01

    This paper presents a two-scale extension of the generalized finite element method (GFEM) which allows for static fracture analyses as well as fatigue crack propagation simulations on fixed, coarse hexahedral meshes. The approach is based on the use of specifically-tailored enrichment functions computed on-the-fly through the use of a fine-scale boundary value problem (BVP) defined in the neighborhood of existing mechanically-short cracks. The fine-scale BVP utilizes tetrahedral elements, and thus offers the potential for the use of a highly adapted fine-scale mesh in the regions of crack fronts capable of generating accurate enrichment functions for use in the coarse-scale hexahedral model. In this manner, automated hp-adaptivity which can be used for accurate fracture analyses, is now available for use on coarse, uniform hexahedral meshes without the requirements of irregular meshes and constrained approximations. The two-scale GFEM approach is verified and compared against alternative approaches for static fracture analyses, as well as mixed-mode fatigue crack propagation simulations. The numerical examples demonstrate the ability of the proposed approach to deliver accurate results even in scenarios involving multiple discontinuities or sharp kinks within a single computational element. The proposed approach is also applied to a representative panel model similar in design and complexity to that which may be used in the aerospace community.

  11. The time-course of alcohol impairment of general aviation pilot performance in a Frasca 141 simulator.

    PubMed

    Morrow, D; Yesavage, J; Leirer, V; Dolhert, N; Taylor, J; Tinklenberg, J

    1993-08-01

    This study examined the time-course of alcohol impairment of general aviation pilot simulator performance. We tested 14 young (mean age 25.8 years) and 14 older (mean age 37.9 years) pilots in a Frasca 141 simulator during alcohol and placebo conditions. In the alcohol condition, pilots drank alcohol and were tested after reaching 0.10% BAL, and then 2, 4, 8, 24, and 48 h after they had stopped drinking. They were tested at the same times in the placebo condition. Alcohol impaired overall performance. Alcohol impairment also depended on the order in which subjects participated in the alcohol and placebo sessions, with larger decrements for the alcohol-placebo order than for the opposite order. To examine the influence of alcohol independent of session order effects, we compared performance in the first alcohol session with performance in the first placebo session. This analysis showed that alcohol significantly reduced mean performance in the alcohol condition at 0.10% BAL and at 2 h. In addition, alcohol increased variability in performance in the alcohol session from 0.10% BAL to 8 h, suggesting that some subjects were more susceptible to alcohol than others. Older pilots tended to perform some radio communication tasks less accurately than younger pilots. PMID:8368982

  12. Debris Flow Simulation using FLO-2D on the 2004 Landslide Area of Real, General Nakar, and Infanta, Philippines

    NASA Astrophysics Data System (ADS)

    Llanes, F.; dela Resma, M.; Ferrer, P.; Realino, V.; Aquino, D. T.; Eco, R. C.; Lagmay, A.

    2013-12-01

    From November 14 to December 3, 2004, Luzon Island was ravaged by 4 successive typhoons: Typhoon Muifa, Tropical Storm Merbok, Tropical Depression Winnie, and Super Typhoon Nanmadol. Tropical Depression Winnie was the most destructive of the four when it triggered landslides on November 29 that devastated the municipalities of Infanta, General Nakar, and Real in Quezon Province, southeast Luzon. Winnie formed east of Central Luzon on November 27 before it moved west-northwestward over southeastern Luzon on November 29. A total of 1,068 lives were lost and more than USD 170 million worth of damages to crops and infrastructure were incurred from the landslides triggered by Winnie on November 29 and the flooding caused by the 4 typhoons. FLO-2D, a flood routing software for generating flood and debris flow hazard maps, was utilized to simulate the debris flows that could potentially affect the study area. Based on the rainfall intensity-duration-frequency analysis, the cumulative rainfall from Winnie on November 29, approximately 342 mm over a 9-hour period, was classified within a 100-year return period. The Infanta station of the Philippine Atmospheric Geophysical and Astronomical Services Administration (PAGASA) was no longer able to measure the amount of rainfall after this period because the rain gauge at that station was washed away by floods. Rainfall with a 100-year return period was simulated over the watersheds delineated from a SAR-derived digital elevation model. The resulting debris flow hazard map was compared with results from field investigation and previous studies of the landslide event. The simulation identified 22 barangays (villages) with a total of 45,155 people at risk of turbulent flow and flooding.

  13. Generalized Fluid System Simulation Program, Version 5.0-Educational. Supplemental Information for NASA/TM-2011-216470. Supplement

    NASA Technical Reports Server (NTRS)

    Majumdar, A. K.

    2011-01-01

    The Generalized Fluid System Simulation Program (GFSSP) is a finite-volume based general-purpose computer program for analyzing steady state and time-dependent flow rates, pressures, temperatures, and concentrations in a complex flow network. The program is capable of modeling real fluids with phase changes, compressibility, mixture thermodynamics, conjugate heat transfer between solid and fluid, fluid transients, pumps, compressors and external body forces such as gravity and centrifugal. The thermofluid system to be analyzed is discretized into nodes, branches, and conductors. The scalar properties such as pressure, temperature, and concentrations are calculated at nodes. Mass flow rates and heat transfer rates are computed in branches and conductors. The graphical user interface allows users to build their models using the point, drag and click method; the users can also run their models and post-process the results in the same environment. The integrated fluid library supplies thermodynamic and thermo-physical properties of 36 fluids and 21 different resistance/source options are provided for modeling momentum sources or sinks in the branches. This Technical Memorandum illustrates the application and verification of the code through 12 demonstrated example problems. This supplement gives the input and output data files for the examples.
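
    The node/branch discretization described above can be pictured with the small sketch below: scalar states (pressure, temperature) live on nodes, mass flow rates live on branches, and each branch carries a resistance option relating flow rate to the pressure difference between its end nodes. The data structures, the quadratic-loss law, and the numbers are all assumed for illustration; this is not GFSSP's actual solver or API.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    pressure: float           # Pa
    temperature: float        # K

@dataclass
class Branch:
    upstream: str
    downstream: str
    resistance: float         # Pa per (kg/s)^2, an assumed quadratic-loss coefficient

    def mass_flow(self, nodes):
        """Flow rate from a sign-aware quadratic pressure-loss law: dp = R * mdot**2."""
        dp = nodes[self.upstream].pressure - nodes[self.downstream].pressure
        return (abs(dp) / self.resistance) ** 0.5 * (1.0 if dp >= 0 else -1.0)

nodes = {
    "tank":  Node("tank",  300.0e3, 290.0),
    "valve": Node("valve", 220.0e3, 290.0),
    "vent":  Node("vent",  101.3e3, 290.0),
}
branches = [Branch("tank", "valve", 2.0e4), Branch("valve", "vent", 5.0e4)]

for b in branches:
    print(f"{b.upstream} -> {b.downstream}: {b.mass_flow(nodes):.3f} kg/s")
```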

  14. Fused hard-sphere chain molecules: Comparison between Monte Carlo simulation for the bulk pressure and generalized Flory theories

    SciTech Connect

    Costa, L.A.; Zhou, Y.; Hall, C.K.; Carra, S.

    1995-04-15

    We report Monte Carlo simulation results for the bulk pressure of fused-hard-sphere (FHS) chain fluids with bond-length-to-bead-diameter ratios ≈ 0.4 at chain lengths n = 4, 8 and 16. We also report density profiles for FHS chain fluids at a hard wall. The results for the compressibility factor are compared to results from extensions of the Generalized Flory (GF) and Generalized Flory Dimer (GFD) theories proposed by Yethiraj et al. and by us. Our new GF theory, GF-AB, significantly improves the prediction of the bulk pressure of fused-hard-sphere chains over the GFD theories proposed by Yethiraj et al. and by us, although the GFD theories give slightly better low-density results. The GFD-A theory, the GFD-B theory and the new theories (GF-AB, GFD-AB, and GFD-AC) satisfy the exact zero-bonding-length limit. All theories considered recover the GF or GFD theories at the tangent hard-sphere chain limit.

  15. The variability, structure and energy conversion of the northern hemisphere traveling waves simulated in a Mars general circulation model

    NASA Astrophysics Data System (ADS)

    Wang, Huiqun; Toigo, Anthony D.

    2016-06-01

    Investigations of the variability, structure and energetics of the m = 1-3 traveling waves in the northern hemisphere of Mars are conducted with the MarsWRF general circulation model. Using a simple, annually repeatable dust scenario, the model reproduces many general characteristics of the observed traveling waves. The simulated m = 1 and m = 3 traveling waves show large differences in terms of their structures and energetics. For each representative wave mode, the geopotential signature maximizes at a higher altitude than the temperature signature, and the wave energetics suggests a mixed baroclinic-barotropic nature. There is a large contrast in wave energetics between the near-surface and higher altitudes, as well as between the lower latitudes and higher latitudes at high altitudes. Both barotropic and baroclinic conversions can act as either sources or sinks of eddy kinetic energy. Band-pass filtered transient eddies exhibit strong zonal variations in eddy kinetic energy and various energy transfer terms. Transient eddies are mainly interacting with the time mean flow. However, there appear to be non-negligible wave-wave interactions associated with wave mode transitions. These interactions include those between traveling waves and thermal tides and those among traveling waves.

  16. Seasonal Simulations of the Planetary Boundary Layer and Boundary-Layer Stratocumulus Clouds with a General Circulation Model.

    NASA Astrophysics Data System (ADS)

    Randall, David A.; Abeles, James A.; Corsetti, Thomas G.

    1985-04-01

    The UCLA general circulation model (GCM) has been used to simulate the seasonally varying planetary boundary layer (PBL), as well as boundary-layer stratus and stratocumulus clouds. The PBL depth is a prognostic variable of the GCM, incorporated through the use of a vertical coordinate system in which the PBL is identified with the lowest model layer. Stratocumulus clouds are assumed to occur whenever the upper portion of the PBL becomes saturated, provided that the cloud-top entrainment instability does not occur. As indicated by Arakawa and Schubert, cumulus clouds are assumed to originate at the PBL top, and tend to make the PBL shallow by drawing on its mass. Results are presented from a three-year simulation, starting from a 31 December initial condition obtained from an earlier run with a different version of the model. The simulated seasonally varying climates of the boundary layer and free troposphere are realistic. The observed geographical and seasonal variations of stratocumulus cloudiness are fairly well simulated. The simulation of the stratocumulus clouds associated with wintertime cold-air outbreaks is particularly realistic. Examples are given of individual events. The positions of the subtropical marine stratocumulus regimes are realistically simulated, although their observed frequency of occurrence is seriously underpredicted. The observed summertime abundance of Arctic stratus clouds is also underpredicted. In the GCM results, the layer cloud instability appears to limit the extent of the marine subtropical stratocumulus regimes. The instability also frequently occurs in association with cumulus convection over land. Cumulus convection acts as a very significant sink of PBL mass throughout the tropics, and over the midlatitude continents in summer. Three experiments have been performed to investigate the sensitivity of the GCM results to aspects of the PBL and stratocumulus parameterizations. For all three experiments, the model was started from 1

  17. Streamflow changes in the Sierra Nevada, California, simulated using a statistically downscaled general circulation model scenario of climate change

    USGS Publications Warehouse

    Wilby, Robert L.; Dettinger, Michael D.

    2000-01-01

    Simulations of future climate using general circulation models (GCMs) suggest that rising concentrations of greenhouse gases may have significant consequences for the global climate. Of less certainty is the extent to which regional scale (i.e., sub-GCM grid) environmental processes will be affected. In this chapter, a range of downscaling techniques is critiqued. Then a relatively simple (yet robust) statistical downscaling technique and its use in the modelling of future runoff scenarios for three river basins in the Sierra Nevada, California, is described. This region was selected because GCM experiments driven by combined greenhouse-gas and sulphate-aerosol forcings consistently show major changes in the hydro-climate of the southwest United States by the end of the 21st century. The regression-based downscaling method was used to simulate daily rainfall and temperature series for streamflow modelling in three Californian river basins under current- and future-climate conditions. The downscaling involved just three predictor variables (specific humidity, zonal velocity component of airflow, and 500 hPa geopotential heights) supplied by the U.K. Meteorological Office coupled ocean-atmosphere model (HadCM2) for the grid point nearest the target basins. When evaluated using independent data, the model showed reasonable skill at reproducing observed area-average precipitation, temperature, and concomitant streamflow variations. Overall, the downscaled data resulted in slight underestimates of mean annual streamflow due to underestimates of precipitation in spring and positive temperature biases in winter. Differences in the skill of simulated streamflows amongst the three basins were attributed to the smoothing effects of snowpack on streamflow responses to climate forcing. The Merced and American River basins drain the western, windward slope of the Sierra Nevada and are snowmelt dominated, whereas the Carson River drains the eastern, leeward slope and is a mix of
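
    The regression step of a statistical downscaling of this kind can be sketched as below: local daily temperature is regressed on a few large-scale predictors (stand-ins for specific humidity, zonal wind, and 500 hPa geopotential height), calibrated on part of the record and validated on the rest. The data are synthetic, and the preprocessing, predictor selection, and any stochastic components of the published method are omitted.

```python
import numpy as np

rng = np.random.default_rng(42)
n_days = 3650                                  # ten synthetic years of daily data

# Standardized large-scale predictors: specific humidity, zonal wind, 500 hPa height (synthetic).
X = np.column_stack([np.ones(n_days), rng.standard_normal((n_days, 3))])

# Synthetic "observed" local temperature built from known coefficients plus noise.
true_beta = np.array([10.0, 2.5, -1.0, 1.8])
y = X @ true_beta + rng.standard_normal(n_days)

# Calibrate on the first eight years, validate on the remaining two.
split = 8 * 365
beta, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)
pred = X[split:] @ beta

print("fitted coefficients:", np.round(beta, 2))
print("validation RMSE:", round(float(np.sqrt(np.mean((pred - y[split:]) ** 2))), 2))
```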

  18. Pore Topology Method: A General and Fast Pore-Scale Modeling Approach to Simulate Fluid Flow in Porous Media

    NASA Astrophysics Data System (ADS)

    Riasi, M. S.; Huang, G.; Montemagno, C.; Yeghiazarian, L.

    2014-12-01

    Micro-scale modeling of multiphase flow in porous media is critical to characterize porous materials. Several modeling techniques have been implemented to date, but none can be used as a general strategy for all porous media applications due to challenges presented by non-smooth, high-curvature and deformable solid surfaces, and by a wide range of pore sizes and porosities. Finite approaches like the finite volume method require a high-quality, problem-dependent mesh, while particle-based approaches like the lattice Boltzmann method require too many particles to achieve a stable, meaningful solution. Both come at a large computational cost. Other methods such as pore network modeling (PNM) have been developed to accelerate the solution process by simplifying the solution domain, but so far a unique and straightforward methodology to implement PNM is lacking. The pore topology method (PTM) is a new topologically consistent approach developed to simulate multiphase flow in porous media. The core of PTM is to reduce the complexity of the 3-D void space geometry by working with its medial surface as the solution domain. The medial surface is capable of capturing all the corners and surface curvatures in a porous structure, and therefore provides a topologically consistent representative geometry for the porous structure. Despite its simplicity and low computational cost, PTM provides a fast and straightforward approach for micro-scale modeling of fluid flow in all types of porous media irrespective of their porosity and pore size distribution. In our previous work, we developed a non-iterative fast medial surface finder algorithm to determine a voxel-wide medial surface of the void space of porous media, as well as a set of simple rules to determine the capillary pressure-saturation curves for a porous system assuming quasi-static two-phase flow with a planar wetting/non-wetting interface. Our simulation results for a highly porous fibrous material and polygonal capillary tubes were in excellent agreement
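
    As a worked illustration of a quasi-static capillary pressure-saturation curve of the kind referred to above, the sketch below uses an idealized bundle-of-cylindrical-tubes assumption (not the medial-surface computation itself): each pore drains once the applied capillary pressure exceeds its Young-Laplace entry pressure 2σcosθ/r, and the wetting saturation is the volume fraction of pores still filled. The pore-size distribution and fluid properties are assumed values.

```python
import numpy as np

surf_tension, theta = 0.072, 0.0           # air-water surface tension (N/m), contact angle (rad)
rng = np.random.default_rng(3)
radii = rng.lognormal(mean=np.log(20e-6), sigma=0.5, size=2000)   # pore radii (m), assumed
volumes = radii ** 2                        # tube volume ~ r^2 for equal tube lengths

entry_pc = 2.0 * surf_tension * np.cos(theta) / radii   # Young-Laplace entry pressure per pore

for pc in (2e3, 5e3, 1e4, 2e4):             # applied capillary pressures (Pa)
    filled = entry_pc > pc                  # pores whose entry pressure has not been exceeded
    saturation = volumes[filled].sum() / volumes.sum()
    print(f"Pc = {pc:8.0f} Pa -> wetting saturation = {saturation:.3f}")
```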

  19. Use of Generalized Fluid System Simulation Program (GFSSP) for Teaching and Performing Senior Design Projects at the Educational Institutions

    NASA Technical Reports Server (NTRS)

    Majumdar, A. K.; Hedayat, A.

    2015-01-01

    This paper describes the experience of the authors in using the Generalized Fluid System Simulation Program (GFSSP) in teaching the Design of Thermal Systems class at the University of Alabama in Huntsville. GFSSP is a finite volume based thermo-fluid system network analysis code, developed at NASA/Marshall Space Flight Center, and is extensively used in NASA, the Department of Defense, and the aerospace industry for propulsion system design, analysis, and performance evaluation. The educational version of GFSSP is freely available to all US higher education institutions. The main purpose of the paper is to illustrate the utilization of this user-friendly code for thermal systems design and fluid engineering courses and to encourage instructors to utilize the code for class assignments as well as senior design projects. The need for a generalized computer program for thermofluid analysis in a flow network has been felt for a long time in the aerospace industry. Designers of thermofluid systems often need to know pressures, temperatures, flow rates, concentrations, and heat transfer rates at different parts of a flow circuit for steady state or transient conditions. Such applications occur in propulsion systems for tank pressurization, internal flow analysis of rocket engine turbopumps, chilldown of cryogenic tanks and transfer lines, and many other applications of gas-liquid systems involving fluid transients and conjugate heat and mass transfer. Computer resource requirements to perform time-dependent, three-dimensional Navier-Stokes computational fluid dynamic (CFD) analysis of such systems are prohibitive and therefore not practical. Available commercial codes are generally suitable for steady state, single-phase incompressible flow. Because of the proprietary nature of such codes, it is not possible to extend their capability to satisfy the above-mentioned needs. Therefore, the Generalized Fluid System Simulation Program (GFSSP) has been developed at NASA

  20. Axisymmetric collapse simulations of rotating massive stellar cores in full general relativity: Numerical study for prompt black hole formation

    SciTech Connect

    Sekiguchi, Yu-ichirou; Shibata, Masaru

    2005-04-15

    We perform axisymmetric simulations for gravitational collapse of a massive iron core to a black hole in full general relativity. The iron cores are modeled by a γ = 4/3 equilibrium polytrope for simplicity. The hydrodynamic equations are solved using a high-resolution shock-capturing scheme with a parametric equation of state. The Cartoon method is adopted for solving the Einstein equations. Simulations are performed for a wide variety of initial conditions, changing the mass (≈2.0-3.0 M☉), the angular momentum, the rotational velocity profile of the core, and the parameters of the equations of state, which are chosen so that the maximum mass of the cold spherical polytrope is ≈1.6 M☉. Then, the criterion for prompt black hole formation is clarified in terms of the mass and the angular momentum for several rotational velocity profiles of the core and equations of state. It is found that (i) with the increase of the thermal energy generated by shocks, the threshold mass for prompt black hole formation is increased by 20-40%, (ii) the rotational centrifugal force increases the threshold mass by ≲25%, (iii) with the increase of the degree of differential rotation, the threshold mass is also increased, and (iv) the amplification factors shown in results (i)-(iii) depend sensitively on the equation of state. We also find that the collapse dynamics and the structure of the shock formed at the bounce depend strongly on the stiffness of the adopted equation of state. In particular, as a new feature, a strong bipolar explosion is observed for the collapse of rapidly rotating iron cores with an equation of state which is stiff at subnuclear density and soft at supranuclear density. Gravitational waves are computed in terms of a quadrupole formula. It is also found that the waveform depends sensitively on the equations of state.

  1. Formation of Overheated Regions and Truncated Disks around Black Holes: Three-dimensional General Relativistic Radiation-magnetohydrodynamics Simulations

    NASA Astrophysics Data System (ADS)

    Takahashi, Hiroyuki R.; Ohsuga, Ken; Kawashima, Tomohisa; Sekiguchi, Yuichiro

    2016-07-01

    Using three-dimensional general relativistic radiation-magnetohydrodynamics simulations of accretion flows around stellar mass black holes, we report that the relatively cold disk (≳10^7 K) is truncated near the black hole. Hot and less dense regions, of which the gas temperature is ≳10^9 K and more than 10 times higher than the radiation temperature (overheated regions), appear within the truncation radius. The overheated regions also appear above as well as below the disk, sandwiching the cold disk, leading to effective Compton upscattering. The truncation radius is ~30 r_g for Ṁ ~ L_Edd/c², where r_g, Ṁ, L_Edd, and c are the gravitational radius, mass accretion rate, Eddington luminosity, and light speed, respectively. Our results are consistent with observations of a very high state, whereby the truncated disk is thought to be embedded in the hot rarefied regions. The truncation radius shifts inward to ~10 r_g with increasing mass accretion rate Ṁ ~ 100 L_Edd/c², which is very close to the innermost stable circular orbit. This model corresponds to the slim disk state observed in ultraluminous X-ray sources. Although the overheated regions shrink if the Compton cooling effectively reduces the gas temperature, the sandwich structure does not disappear in the range Ṁ ≲ 100 L_Edd/c². Our simulations also reveal that the gas temperature in the overheated regions depends on black hole spin, which would be due to efficient energy transport from the black hole to the disk through the Poynting flux, resulting in gas heating.

  2. Formation of Overheated Regions and Truncated Disks around Black Holes: Three-dimensional General Relativistic Radiation-magnetohydrodynamics Simulations

    NASA Astrophysics Data System (ADS)

    Takahashi, Hiroyuki R.; Ohsuga, Ken; Kawashima, Tomohisa; Sekiguchi, Yuichiro

    2016-07-01

    Using three-dimensional general relativistic radiation-magnetohydrodynamics simulations of accretion flows around stellar mass black holes, we report that the relatively cold disk (≳10^7 K) is truncated near the black hole. Hot and less dense regions, of which the gas temperature is ≳10^9 K and more than 10 times higher than the radiation temperature (overheated regions), appear within the truncation radius. The overheated regions also appear above as well as below the disk, sandwiching the cold disk, leading to effective Compton upscattering. The truncation radius is ~30 r_g for Ṁ ~ L_Edd/c², where r_g, Ṁ, L_Edd, and c are the gravitational radius, mass accretion rate, Eddington luminosity, and light speed, respectively. Our results are consistent with observations of a very high state, whereby the truncated disk is thought to be embedded in the hot rarefied regions. The truncation radius shifts inward to ~10 r_g with increasing mass accretion rate Ṁ ~ 100 L_Edd/c², which is very close to the innermost stable circular orbit. This model corresponds to the slim disk state observed in ultraluminous X-ray sources. Although the overheated regions shrink if the Compton cooling effectively reduces the gas temperature, the sandwich structure does not disappear in the range Ṁ ≲ 100 L_Edd/c². Our simulations also reveal that the gas temperature in the overheated regions depends on black hole spin, which would be due to efficient energy transport from the black hole to the disk through the Poynting flux, resulting in gas heating.

  3. Simulation of West African monsoon circulation in four atmospheric general circulation models forced by prescribed sea surface temperature

    NASA Astrophysics Data System (ADS)

    Moron, Vincent; Philippon, Nathalie; Fontaine, Bernard

    2004-12-01

    The mean evolution of the West African monsoon (WAM) circulation and its interannual variability have been studied using an ensemble of 21 simulations (common period 1961-1994) performed with four different atmospheric general circulation models (AGCMs) (European Center/Hamburg (ECHAM) 3, ECHAM 4, Action de Recherche Petite Echelle Grande Echelle (ARPEGE), and Goddard Institute for Space Studies (GISS)) and forced by the same observed sea surface temperature (SST) data set. The results have been compared with European Centre for Medium-Range Weather Forecasts reanalyses (ERA-40). The climatological means of WAM winds for the AGCMs are similar to the ERA-40 ones. However, the AGCMs tend to underestimate the southern wind component at low levels around 10°N compared to the ERA-40. The simulated Tropical Easterly Jet (TEJ) is usually shifted northward and also too weak for ECHAM 3 and ECHAM 4 compared to ERA-40. The interannual variability of an atmospheric WAM index (WAMI) is quite successfully reproduced (the correlations between the mean ensemble of each AGCM and ERA-40 time series over 1961-1994 range between 0.51 and 0.64). In particular, the four AGCMs reproduce quite well the mean teleconnection structure with El Niño-Southern Oscillation, i.e., a strong (weak) monsoon during La Niña (El Niño) events, even if the largest absolute correlations between WAMI and SST in the eastern and central equatorial Pacific are weaker than in ERA-40. On a yearly basis, WAMI is more predictable and skillful during the cold ENSO years than during the warm ENSO ones. The unskillful warm ENSO events are associated with a significant cooling over the equatorial Atlantic and Western Pacific Ocean and a significant warming in the tropical Indian Ocean.

  4. A revised linear ozone photochemistry parameterization for use in transport and general circulation models: multi-annual simulations

    NASA Astrophysics Data System (ADS)

    Cariolle, D.; Teyssèdre, H.

    2007-01-01

    This article describes the validation of a linear parameterization of the ozone photochemistry for use in upper tropospheric and stratospheric studies. The present work extends a previously developed scheme by improving the 2D model used to derive the coefficients of the parameterization. The chemical reaction rates are updated from a compilation that includes recent laboratory work. Furthermore, the polar ozone destruction due to heterogeneous reactions at the surface of the polar stratospheric clouds is taken into account as a function of the stratospheric temperature and the total chlorine content. Two versions of the parameterization are tested. The first one only requires the solution of a continuity equation for the time evolution of the ozone mixing ratio; the second one uses one additional equation for a cold tracer. The parameterization has been introduced into the chemical transport model MOCAGE. The model is integrated with wind and temperature fields from the ECMWF operational analyses over the period 2000-2004. Overall, the results show a very good agreement between the modelled ozone distribution and the Total Ozone Mapping Spectrometer (TOMS) satellite data and the "in-situ" vertical soundings. During the course of the integration the model does not show any drift and the biases are generally small. The model also reproduces fairly well the polar ozone variability, notably the formation of "ozone holes" in the southern hemisphere with amplitudes and seasonal evolutions that follow the dynamics and time evolution of the polar vortex. The introduction of the cold tracer further improves the model simulation by allowing additional ozone destruction inside air masses exported from the high to the mid-latitudes, and by maintaining low ozone contents inside the polar vortex of the southern hemisphere over longer periods in spring time. It is concluded that for the study of climatic scenarios or the assimilation of ozone data, the present
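
    The scheme described above amounts to a first-order (linear) relaxation of the ozone tendency around a 2-D model climatology. As a rough, hypothetical illustration only (the coefficient names and values below are invented stand-ins, not the published coefficients of the parameterization), the update of a single grid cell might be sketched in Python as:

        import numpy as np

        def linear_ozone_tendency(r, T, col, c):
            """Schematic linear ozone photochemistry tendency: net production at the
            climatological state plus first-order sensitivities to the local ozone
            mixing ratio r, temperature T and overhead ozone column col."""
            return (c["P_L"]
                    + c["d_dr"] * (r - c["r0"])
                    + c["d_dT"] * (T - c["T0"])
                    + c["d_dcol"] * (col - c["col0"]))

        # Hypothetical coefficients for one stratospheric grid cell
        coeffs = {"P_L": 0.0, "d_dr": -1.0 / (86400 * 30), "d_dT": -1.0e-9,
                  "d_dcol": -1.0e-10, "r0": 5.0e-6, "T0": 220.0, "col0": 300.0}

        r, T, col, dt = 5.2e-6, 215.0, 280.0, 1800.0   # state and a 30-minute step
        for _ in range(48):                            # one day of explicit integration
            r += dt * linear_ozone_tendency(r, T, col, coeffs)
        print(f"ozone mixing ratio after one day: {r:.3e}")

    The cold-tracer version mentioned in the abstract would add one more prognostic variable and an extra destruction term; that part is omitted here.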

  5. One-step leapfrog ADI-FDTD method for simulating electromagnetic wave propagation in general dispersive media.

    PubMed

    Wang, Xiang-Hua; Yin, Wen-Yan; Chen, Zhi Zhang David

    2013-09-01

    The one-step leapfrog alternating-direction-implicit finite-difference time-domain (ADI-FDTD) method is reformulated for simulating general electrically dispersive media. It models material dispersive properties with equivalent polarization currents. These currents are then solved with the auxiliary differential equation (ADE) and then incorporated into the one-step leapfrog ADI-FDTD method. The final equations are presented in a form similar to that of the conventional FDTD method but with second-order perturbation. The adapted method is then applied to characterize (a) electromagnetic wave propagation in a rectangular waveguide loaded with a magnetized plasma slab, (b) transmission coefficient of a plane wave normally incident on a monolayer graphene sheet biased by a magnetostatic field, and (c) surface plasmon polaritons (SPPs) propagation along a monolayer graphene sheet biased by an electrostatic field. The numerical results verify the stability, accuracy and computational efficiency of the proposed one-step leapfrog ADI-FDTD algorithm in comparison with analytical results and the results obtained with the other methods. PMID:24103929
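
    The key idea referenced above is to represent dispersion by an equivalent polarization current advanced with an auxiliary differential equation (ADE). The one-step leapfrog ADI update itself is not reproduced here; the sketch below is a deliberately simple, conventional explicit 1-D FDTD loop with an ADE for a Drude-type slab, with all grid and material values invented for illustration.

        import numpy as np

        # Explicit 1-D FDTD with an auxiliary-differential-equation (ADE) polarization
        # current for a Drude-type slab.  Illustrative values only; this is NOT the
        # one-step leapfrog ADI scheme of the paper.
        c0, eps0 = 2.998e8, 8.854e-12
        mu0 = 4e-7 * np.pi
        nz, nt, dz = 400, 1500, 1e-3
        dt = 0.5 * dz / c0                            # well inside the CFL limit

        E, H, J = np.zeros(nz), np.zeros(nz), np.zeros(nz)

        slab = slice(250, 320)                        # dispersive cells (hypothetical)
        wp, gamma = 2 * np.pi * 40e9, 2e10            # Drude plasma / collision frequencies
        ka = (1 - gamma * dt / 2) / (1 + gamma * dt / 2)
        kb = eps0 * wp ** 2 * dt / (1 + gamma * dt / 2)

        for n in range(nt):
            H[:-1] += dt / (mu0 * dz) * (E[1:] - E[:-1])                # update H from curl E
            J[slab] = ka * J[slab] + kb * E[slab]                       # ADE: polarization current
            E[1:-1] += dt / eps0 * ((H[1:-1] - H[:-2]) / dz - J[1:-1])  # update E from curl H minus J
            E[50] += np.exp(-((n - 150) / 40.0) ** 2)                   # soft Gaussian source

        print("peak |E| inside the dispersive slab:", np.abs(E[slab]).max())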

  6. Influence of Mental Workload on the Performance of Anesthesiologists during Induction of General Anesthesia: A Patient Simulator Study

    PubMed Central

    Sato, Hitoshi; Miyashita, Tetsuya; Kawakami, Hiromasa; Nagamine, Yusuke; Takaki, Shunsuke; Goto, Takahisa

    2016-01-01

    The aim of this study was to reveal the effect of anesthesiologists' mental workload on their performance during induction of general anesthesia. Twenty-two participants were categorized into anesthesiology residents (RA group, n = 13) and board certified anesthesiologists (CA group, n = 9). Subjects participated in three simulated scenarios (scenario A: baseline, scenario B: simple addition tasks, and scenario C: combination of simple addition tasks and treatment of unexpected arrhythmia). We used simple two-digit integer additions every 5 seconds as a secondary task. Four kinds of key actions were also evaluated in each scenario. In scenario C, the correct answer rate was significantly higher in the CA versus the RA group (RA: 0.370 ± 0.050 versus CA: 0.736 ± 0.051, p < 0.01, 95% CI −0.518 to −0.215) as was the score of key actions (RA: 2.7 ± 1.3 versus CA: 4.0 ± 0.00, p = 0.005). In a serious clinical situation, anesthesiologists might not be able to adequately perform both the primary and secondary tasks. This tendency is more apparent in young anesthesiologists. PMID:27148548

  7. New insights into the generalized Rutherford equation for nonlinear neoclassical tearing mode growth from 2D reduced MHD simulations

    NASA Astrophysics Data System (ADS)

    Westerhof, E.; de Blank, H. J.; Pratt, J.

    2016-03-01

    Two dimensional reduced MHD simulations of neoclassical tearing mode growth and suppression by ECCD are performed. The perturbation of the bootstrap current density and the EC drive current density perturbation are assumed to be functions of the perturbed flux surfaces. In the case of ECCD, this implies that the applied power is flux surface averaged to obtain the EC driven current density distribution. The results are consistent with predictions from the generalized Rutherford equation using common expressions for Δ′_bs and Δ′_ECCD. These expressions are commonly perceived to describe only the effect on the tearing mode growth of the helical component of the respective current perturbation acting through the modification of Ohm’s law. Our results show that they describe in addition the effect of the poloidally averaged current density perturbation which acts through modification of the tearing mode stability index. Except for modulated ECCD, the largest contribution to the mode growth comes from this poloidally averaged current density perturbation.
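
    For orientation, the generalized Rutherford equation evolves the magnetic island width w from the sum of Δ′ contributions. The sketch below integrates such an equation with simplified, hypothetical stand-in forms and coefficients for the classical, bootstrap and ECCD terms; they are not the expressions used in the paper.

        import numpy as np

        def dw_dt(w, p):
            """Schematic generalized Rutherford equation: island width w evolves with
            the sum of classical, bootstrap and ECCD Delta-prime terms.  The
            functional forms and every number here are illustrative stand-ins only."""
            d_classical = p["delta0"]                               # classical tearing index
            d_bootstrap = p["a_bs"] * w / (w ** 2 + p["w_d"] ** 2)  # bootstrap drive
            d_eccd = -p["a_cd"] * p["eta_cd"] / w ** 2              # stabilizing ECCD term
            return p["k_r"] * (d_classical + d_bootstrap + d_eccd)

        params = {"delta0": -2.0, "a_bs": 0.5, "w_d": 0.02,
                  "a_cd": 1.0e-4, "eta_cd": 0.5, "k_r": 0.01}       # hypothetical values

        w, dt = 0.03, 1.0e-3                                        # seed island width (m), step (s)
        for _ in range(50000):
            w = max(w + dt * dw_dt(w, params), 1.0e-4)              # keep w away from zero
        print(f"island width after 50 s: {w:.3f} m")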

  8. Projected rates of psychological disorders and suicidality among soldiers based on simulations of matched general population data

    PubMed Central

    Gadermann, Anne M.; Gilman, Stephen E.; McLaughlin, Katie A.; Nock, Matthew K.; Petukhova, Maria; Sampson, Nancy A.; Kessler, Ronald C.

    2014-01-01

    Limited data are available on lifetime prevalence and age-of-onset distributions of psychological disorders and suicidal behaviors among Army personnel. We used simulation methods to approximate such estimates based on analysis of data from a U.S. national general population survey with the socio-demographic profile of U.S. Army personnel. Estimated lifetime prevalence of any DSM-IV anxiety, mood, behavior, or substance disorder in this sample was 53.1 percent (17.7 percent for mood disorders, 27.2 percent for anxiety disorders, 22.7 percent for behavior disorders, and 14.4 percent for substance disorders). The vast majority of cases had onsets prior to the expected age-of-enlistment if they were in the Army (91.6 percent). Lifetime prevalence was 14.2 percent for suicidal ideation, 5.4 percent for suicide plans, and 4.5 percent for suicide attempts. The proportion of estimated pre-enlistment onsets was between 68.4 percent (suicide plans) and 82.4 percent (suicidal ideation). Externalizing disorders with onsets prior to expected age-of-enlistment and internalizing disorders with onsets after expected age-of-enlistment significantly predicted post-enlistment suicide attempts, with population attributable risk proportions of 41.8 percent and 38.8 percent, respectively. Implications of these findings are discussed for interventions designed to screen, detect, and treat psychological disorders and suicidality in the Army. PMID:23025127

  9. Extension of the CHARMM General Force Field to sulfonyl-containing compounds and its utility in biomolecular simulations.

    PubMed

    Yu, Wenbo; He, Xibing; Vanommeslaeghe, Kenno; MacKerell, Alexander D

    2012-12-01

    Presented is an extension of the CHARMM General Force Field (CGenFF) to enable the modeling of sulfonyl-containing compounds. Model compounds containing chemical moieties such as sulfone, sulfonamide, sulfonate, and sulfamate were used as the basis for the parameter optimization. Targeting high-level quantum mechanical and experimental crystal data, the new parameters were optimized in a hierarchical fashion designed to maintain compatibility with the remainder of the CHARMM additive force field. The optimized parameters satisfactorily reproduced equilibrium geometries, vibrational frequencies, interactions with water, gas phase dipole moments, and dihedral potential energy scans. Validation involved both crystalline and liquid phase calculations showing the newly developed parameters to satisfactorily reproduce experimental unit cell geometries, crystal intramolecular geometries, and pure solvent densities. The force field was subsequently applied to study conformational preference of a sulfonamide based peptide system. Good agreement with experimental IR/NMR data further validated the newly developed CGenFF parameters as a tool to investigate the dynamic behavior of sulfonyl groups in a biological environment. CGenFF now covers sulfonyl group containing moieties allowing for modeling and simulation of sulfonyl-containing compounds in the context of biomolecular systems including compounds of medicinal interest. PMID:22821581

  10. Efficacy of human papillomavirus 16 and 18 (HPV-16/18) AS04-adjuvanted vaccine against cervical infection and precancer in young women: final event-driven analysis of the randomized, double-blind PATRICIA trial.

    PubMed

    Apter, Dan; Wheeler, Cosette M; Paavonen, Jorma; Castellsagué, Xavier; Garland, Suzanne M; Skinner, S Rachel; Naud, Paulo; Salmerón, Jorge; Chow, Song-Nan; Kitchener, Henry C; Teixeira, Julio C; Jaisamrarn, Unnop; Limson, Genara; Szarewski, Anne; Romanowski, Barbara; Aoki, Fred Y; Schwarz, Tino F; Poppe, Willy A J; Bosch, F Xavier; Mindel, Adrian; de Sutter, Philippe; Hardt, Karin; Zahaf, Toufik; Descamps, Dominique; Struyf, Frank; Lehtinen, Matti; Dubin, Gary

    2015-04-01

    We report final event-driven analysis data on the immunogenicity and efficacy of the human papillomavirus 16 and 18 (HPV-16/18) AS04-adjuvanted vaccine in young women aged 15 to 25 years from the PApilloma TRIal against Cancer In young Adults (PATRICIA). The total vaccinated cohort (TVC) included all randomized participants who received at least one vaccine dose (vaccine, n = 9,319; control, n = 9,325) at months 0, 1, and/or 6. The TVC-naive (vaccine, n = 5,822; control, n = 5,819) had no evidence of high-risk HPV infection at baseline, approximating adolescent girls targeted by most HPV vaccination programs. Mean follow-up was approximately 39 months after the first vaccine dose in each cohort. At baseline, 26% of women in the TVC had evidence of past and/or current HPV-16/18 infection. HPV-16 and HPV-18 antibody titers postvaccination tended to be higher among 15- to 17-year-olds than among 18- to 25-year-olds. In the TVC, vaccine efficacy (VE) against cervical intraepithelial neoplasia grade 1 or greater (CIN1+), CIN2+, and CIN3+ associated with HPV-16/18 was 55.5% (96.1% confidence interval [CI], 43.2, 65.3), 52.8% (37.5, 64.7), and 33.6% (-1.1, 56.9). VE against CIN1+, CIN2+, and CIN3+ irrespective of HPV DNA was 21.7% (10.7, 31.4), 30.4% (16.4, 42.1), and 33.4% (9.1, 51.5) and was consistently significant only in 15- to 17-year-old women (27.4% [10.8, 40.9], 41.8% [22.3, 56.7], and 55.8% [19.2, 76.9]). In the TVC-naive, VE against CIN1+, CIN2+, and CIN3+ associated with HPV-16/18 was 96.5% (89.0, 99.4), 98.4% (90.4, 100), and 100% (64.7, 100), and irrespective of HPV DNA it was 50.1% (35.9, 61.4), 70.2% (54.7, 80.9), and 87.0% (54.9, 97.7). VE against 12-month persistent infection with HPV-16/18 was 89.9% (84.0, 94.0), and that against HPV-31/33/45/51 was 49.0% (34.7, 60.3). In conclusion, vaccinating adolescents before sexual debut has a substantial impact on the overall incidence of high-grade cervical abnormalities, and catch-up vaccination up to 18 years

  11. Efficacy of Human Papillomavirus 16 and 18 (HPV-16/18) AS04-Adjuvanted Vaccine against Cervical Infection and Precancer in Young Women: Final Event-Driven Analysis of the Randomized, Double-Blind PATRICIA Trial

    PubMed Central

    Wheeler, Cosette M.; Paavonen, Jorma; Castellsagué, Xavier; Garland, Suzanne M.; Skinner, S. Rachel; Naud, Paulo; Salmerón, Jorge; Chow, Song-Nan; Kitchener, Henry C.; Teixeira, Julio C.; Jaisamrarn, Unnop; Limson, Genara; Szarewski, Anne; Romanowski, Barbara; Aoki, Fred Y.; Schwarz, Tino F.; Poppe, Willy A. J.; Bosch, F. Xavier; Mindel, Adrian; de Sutter, Philippe; Hardt, Karin; Zahaf, Toufik; Descamps, Dominique; Struyf, Frank; Lehtinen, Matti; Dubin, Gary

    2015-01-01

    We report final event-driven analysis data on the immunogenicity and efficacy of the human papillomavirus 16 and 18 (HPV-16/18) AS04-adjuvanted vaccine in young women aged 15 to 25 years from the PApilloma TRIal against Cancer In young Adults (PATRICIA). The total vaccinated cohort (TVC) included all randomized participants who received at least one vaccine dose (vaccine, n = 9,319; control, n = 9,325) at months 0, 1, and/or 6. The TVC-naive (vaccine, n = 5,822; control, n = 5,819) had no evidence of high-risk HPV infection at baseline, approximating adolescent girls targeted by most HPV vaccination programs. Mean follow-up was approximately 39 months after the first vaccine dose in each cohort. At baseline, 26% of women in the TVC had evidence of past and/or current HPV-16/18 infection. HPV-16 and HPV-18 antibody titers postvaccination tended to be higher among 15- to 17-year-olds than among 18- to 25-year-olds. In the TVC, vaccine efficacy (VE) against cervical intraepithelial neoplasia grade 1 or greater (CIN1+), CIN2+, and CIN3+ associated with HPV-16/18 was 55.5% (96.1% confidence interval [CI], 43.2, 65.3), 52.8% (37.5, 64.7), and 33.6% (−1.1, 56.9). VE against CIN1+, CIN2+, and CIN3+ irrespective of HPV DNA was 21.7% (10.7, 31.4), 30.4% (16.4, 42.1), and 33.4% (9.1, 51.5) and was consistently significant only in 15- to 17-year-old women (27.4% [10.8, 40.9], 41.8% [22.3, 56.7], and 55.8% [19.2, 76.9]). In the TVC-naive, VE against CIN1+, CIN2+, and CIN3+ associated with HPV-16/18 was 96.5% (89.0, 99.4), 98.4% (90.4, 100), and 100% (64.7, 100), and irrespective of HPV DNA it was 50.1% (35.9, 61.4), 70.2% (54.7, 80.9), and 87.0% (54.9, 97.7). VE against 12-month persistent infection with HPV-16/18 was 89.9% (84.0, 94.0), and that against HPV-31/33/45/51 was 49.0% (34.7, 60.3). In conclusion, vaccinating adolescents before sexual debut has a substantial impact on the overall incidence of high-grade cervical abnormalities, and catch-up vaccination up to 18

  12. Finite-Frequency Simulations of Core-Reflected Seismic Waves to Assess Models of General Lower Mantle Anisotropy

    NASA Astrophysics Data System (ADS)

    Nowacki, A.; Walker, A. M.; Wookey, J.; Kendall, J.

    2012-12-01

    The core-mantle boundary (CMB) region is the site of the largest change in properties in the Earth. Moreover, the lowermost mantle above it (known as D″) shows the largest lateral variations in seismic velocity and strength of seismic anisotropy below the upper mantle. It is therefore vital to be able to accurately forward model candidate structures in the lowermost mantle with realistic sensitivity to structure and at the same frequencies at which observations are made. We use the spectral finite-element method to produce synthetic seismograms of ScS waves traversing a model of D″ anisotropy derived from mineralogical texture calculations and show that the seismic discontinuity atop the lowermost mantle varies in character laterally purely as a function of the strength and orientation of anisotropy. The lowermost mantle is widely anisotropic, shown by numerous shear wave splitting studies using waves of dominant frequency ~0.2-1 Hz. Whilst methods exist to model the finite-frequency seismic response of the lowermost mantle, most make the problem computationally efficient by imposing a certain symmetry on the problem, and of those which do not, almost none allow for completely general elasticity. Where low frequencies are simulated to reduce computational cost, it is uncertain whether waves of that frequency have comparable sensitivity to D″ structure as those observed at shorter periods. Currently, therefore, these computational limitations preclude the ability to interpret our observations fully. We present recent developments in taking a general approach to forward-modelling waves in D″. We use a modified version of SPECFEM3D_GLOBE, which uses the spectral finite-element method to model seismic wave propagation in a fully generally-elastic (i.e., 3D-varying, arbitrarily anisotropic) Earth. The calculations are computationally challenging: to approach the frequency of the observations, up to 10,000 processor cores and up to 2 TB of memory are needed. The

  13. Simulator study of the stall departure characteristics of a light general aviation airplane with and without a wing-leading-edge modification

    NASA Technical Reports Server (NTRS)

    Riley, D. R.

    1985-01-01

    A six-degree-of-freedom nonlinear simulation was developed for a two-place, single-engine, low-wing general aviation airplane for the stall and initial departure regions of flight. Two configurations, one with and one without an outboard wing-leading-edge modification, were modeled. The math models developed are presented; simulation predictions are compared with flight-test data for validation purposes, and simulation results for the two configurations for various maneuvers and power settings are compared to show the beneficial influence of adding the wing-leading-edge modification.

  14. A revised linear ozone photochemistry parameterization for use in transport and general circulation models: multi-annual simulations

    NASA Astrophysics Data System (ADS)

    Cariolle, D.; Teyssèdre, H.

    2007-05-01

    This article describes the validation of a linear parameterization of the ozone photochemistry for use in upper tropospheric and stratospheric studies. The present work extends a previously developed scheme by improving the 2-D model used to derive the coefficients of the parameterization. The chemical reaction rates are updated from a compilation that includes recent laboratory work. Furthermore, the polar ozone destruction due to heterogeneous reactions at the surface of the polar stratospheric clouds is taken into account as a function of the stratospheric temperature and the total chlorine content. Two versions of the parameterization are tested. The first one only requires the solution of a continuity equation for the time evolution of the ozone mixing ratio, the second one uses one additional equation for a cold tracer. The parameterization has been introduced into the chemical transport model MOCAGE. The model is integrated with wind and temperature fields from the ECMWF operational analyses over the period 2000-2004. Overall, the results from the two versions show a very good agreement between the modelled ozone distribution and the Total Ozone Mapping Spectrometer (TOMS) satellite data and the "in-situ" vertical soundings. During the course of the integration the model does not show any drift and the biases are generally small, of the order of 10%. The model also reproduces fairly well the polar ozone variability, notably the formation of "ozone holes" in the Southern Hemisphere with amplitudes and a seasonal evolution that follow the dynamics and time evolution of the polar vortex. The introduction of the cold tracer further improves the model simulation by allowing additional ozone destruction inside air masses exported from the high to the mid-latitudes, and by maintaining low ozone content inside the polar vortex of the Southern Hemisphere over longer periods in spring time. It is concluded that for the study of climate scenarios or the assimilation of

  15. Cross-diffusion-driven hydrodynamic instabilities in a double-layer system: General classification and nonlinear simulations

    NASA Astrophysics Data System (ADS)

    Budroni, M. A.

    2015-12-01

    Cross diffusion, whereby a flux of a given species entrains the diffusive transport of another species, can trigger buoyancy-driven hydrodynamic instabilities at the interface of initially stable stratifications. Starting from a simple three-component case, we introduce a theoretical framework to classify cross-diffusion-induced hydrodynamic phenomena in two-layer stratifications under the action of the gravitational field. A cross-diffusion-convection (CDC) model is derived by coupling the fickian diffusion formalism to Stokes equations. In order to isolate the effect of cross-diffusion in the convective destabilization of a double-layer system, we impose a starting concentration jump of one species in the bottom layer while the other one is homogeneously distributed over the spatial domain. This initial configuration avoids the concurrence of classic Rayleigh-Taylor or differential-diffusion convective instabilities, and it also allows us to activate selectively the cross-diffusion feedback by which the heterogeneously distributed species influences the diffusive transport of the other species. We identify two types of hydrodynamic modes [the negative cross-diffusion-driven convection (NCC) and the positive cross-diffusion-driven convection (PCC)], corresponding to the sign of this operational cross-diffusion term. By studying the space-time density profiles along the gravitational axis we obtain analytical conditions for the onset of convection in terms of two important parameters only: the operational cross-diffusivity and the buoyancy ratio, giving the relative contribution of the two species to the global density. The general classification of the NCC and PCC scenarios in such parameter space is supported by numerical simulations of the fully nonlinear CDC problem. The resulting convective patterns compare favorably with recent experimental results found in microemulsion systems.

  16. Increased neuronal firing in computer simulations of sodium channel mutations that cause generalized epilepsy with febrile seizures plus.

    PubMed

    Spampanato, Jay; Aradi, Ildiko; Soltesz, Ivan; Goldin, Alan L

    2004-05-01

    Generalized epilepsy with febrile seizures plus (GEFS+) is an autosomal dominant familial syndrome with a complex seizure phenotype. It is caused by mutations in one of 3 voltage-gated sodium channel subunit genes (SCN1B, SCN1A, and SCN2A) and the GABA(A) receptor gamma2 subunit gene (GABRG2). The biophysical characterization of 3 mutations (T875M, W1204R, and R1648H) in SCN1A, the gene encoding the CNS voltage-gated sodium channel alpha subunit Na(v)1.1, demonstrated a variety of functional effects. The T875M mutation enhanced slow inactivation, the W1204R mutation shifted the voltage dependency of activation and inactivation in the negative direction, and the R1648H mutation accelerated recovery from inactivation. To determine how these changes affect neuronal firing, we used the NEURON simulation software to design a computational model based on the experimentally determined properties of each GEFS+ mutant sodium channel and a delayed rectifier potassium channel. The model predicted that W1204R decreased the threshold, T875M increased the threshold, and R1648H did not affect the threshold for firing a single action potential. Despite the different effects on the threshold for firing a single action potential, all of the mutations resulted in an increased propensity to fire repetitive action potentials. In addition, each mutation was capable of driving repetitive firing in a mixed population of mutant and wild-type channels, consistent with the dominant nature of these mutations. These results suggest a common physiological mechanism for epileptogenesis resulting from sodium channel mutations that cause GEFS+. PMID:14702334
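
    The published model was built in NEURON with experimentally constrained Na(v)1.1 kinetics, which are not reproduced here. As a generic stand-in only, a single-compartment Hodgkin-Huxley-style model with classic squid-axon parameters illustrates the kind of threshold and repetitive-firing behaviour being probed:

        import numpy as np

        # Classic Hodgkin-Huxley single compartment (squid-axon parameters), used only
        # to illustrate threshold / repetitive-firing behaviour; NOT the Na_v1.1 model.
        C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3        # uF/cm^2 and mS/cm^2
        ENa, EK, EL = 50.0, -77.0, -54.4              # mV

        def a_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
        def b_n(V): return 0.125 * np.exp(-(V + 65) / 80)
        def a_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
        def b_m(V): return 4.0 * np.exp(-(V + 65) / 18)
        def a_h(V): return 0.07 * np.exp(-(V + 65) / 20)
        def b_h(V): return 1.0 / (1 + np.exp(-(V + 35) / 10))

        dt, T, I_inj = 0.01, 200.0, 10.0              # ms, ms, uA/cm^2 of sustained drive
        V, m, h, n = -65.0, 0.05, 0.6, 0.32
        spikes, above = 0, False
        for _ in range(int(T / dt)):
            m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
            h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
            n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
            I_ion = gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK) + gL * (V - EL)
            V += dt * (I_inj - I_ion) / C
            if V > 0 and not above:
                spikes += 1                           # count upward threshold crossings
            above = V > 0
        print(f"action potentials in {T:.0f} ms at {I_inj} uA/cm^2: {spikes}")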

  17. Cross-diffusion-driven hydrodynamic instabilities in a double-layer system: General classification and nonlinear simulations.

    PubMed

    Budroni, M A

    2015-12-01

    Cross diffusion, whereby a flux of a given species entrains the diffusive transport of another species, can trigger buoyancy-driven hydrodynamic instabilities at the interface of initially stable stratifications. Starting from a simple three-component case, we introduce a theoretical framework to classify cross-diffusion-induced hydrodynamic phenomena in two-layer stratifications under the action of the gravitational field. A cross-diffusion-convection (CDC) model is derived by coupling the fickian diffusion formalism to Stokes equations. In order to isolate the effect of cross-diffusion in the convective destabilization of a double-layer system, we impose a starting concentration jump of one species in the bottom layer while the other one is homogeneously distributed over the spatial domain. This initial configuration avoids the concurrence of classic Rayleigh-Taylor or differential-diffusion convective instabilities, and it also allows us to activate selectively the cross-diffusion feedback by which the heterogeneously distributed species influences the diffusive transport of the other species. We identify two types of hydrodynamic modes [the negative cross-diffusion-driven convection (NCC) and the positive cross-diffusion-driven convection (PCC)], corresponding to the sign of this operational cross-diffusion term. By studying the space-time density profiles along the gravitational axis we obtain analytical conditions for the onset of convection in terms of two important parameters only: the operational cross-diffusivity and the buoyancy ratio, giving the relative contribution of the two species to the global density. The general classification of the NCC and PCC scenarios in such parameter space is supported by numerical simulations of the fully nonlinear CDC problem. The resulting convective patterns compare favorably with recent experimental results found in microemulsion systems. PMID:26764804

  18. A general algorithm for magnetic resonance imaging simulation: a versatile tool to collect information about imaging artefacts and new acquisition techniques.

    PubMed

    Placidi, Giuseppe; Alecci, Marcello; Sotgiu, Antonello

    2002-01-01

    An innovative algorithm for Magnetic Resonance Imaging (MRI) capable of demonstrating the source of various artefacts and driving the hardware and software acquisition process is presented. The algorithm is based on the application of the Bloch equations to the magnetization vector of each point of the simulated object, as requested by the instructions of the MRI pulse sequence. The collected raw data are then used to reconstruct the image of the object. The general structure of the algorithm makes it possible to simulate a great range of imaging situations in order to explain the nature of unwanted artefacts and to study new acquisition techniques. The way the algorithm structures the sequence has also allowed the easy implementation of MRI data acquisition on a commercial general-purpose DSP-based data acquisition board, thus facilitating the comparison between simulated and experimental results. PMID:15460653
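
    The core of such a simulator is repeated application of the Bloch equations to each simulated magnetization vector, as dictated by the pulse sequence. A minimal sketch of that step (an ideal 90° pulse followed by free precession with relaxation for a few off-resonance isochromats; all tissue and timing values are hypothetical) could look like:

        import numpy as np

        def free_precession(M, dt, df, T1, T2, M0=1.0):
            """Bloch-equation rotate-and-relax step over dt seconds for a spin that is
            df Hz off resonance; M is the magnetization vector [Mx, My, Mz]."""
            phi = 2 * np.pi * df * dt
            E1, E2 = np.exp(-dt / T1), np.exp(-dt / T2)
            Rz = np.array([[np.cos(phi), np.sin(phi), 0.0],
                           [-np.sin(phi), np.cos(phi), 0.0],
                           [0.0, 0.0, 1.0]])
            A = np.diag([E2, E2, E1]) @ Rz
            b = np.array([0.0, 0.0, M0 * (1.0 - E1)])
            return A @ M + b

        T1, T2, dt = 0.8, 0.1, 0.001                       # s (hypothetical tissue values)
        offsets = [-40.0, -10.0, 0.0, 25.0]                # Hz, a few isochromats
        M = [np.array([1.0, 0.0, 0.0]) for _ in offsets]   # after an ideal 90 degree pulse

        fid = []
        for _ in range(200):                               # 200 ms of free induction decay
            M = [free_precession(m, dt, df, T1, T2) for m, df in zip(M, offsets)]
            fid.append(sum(m[0] + 1j * m[1] for m in M))
        print("FID magnitude at 50 ms:", abs(fid[49]))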

  19. A parallel algorithm for switch-level timing simulation on a hypercube multiprocessor

    NASA Technical Reports Server (NTRS)

    Rao, Hariprasad Nannapaneni

    1989-01-01

    The parallel approach to speeding up simulation is studied, specifically the simulation of digital LSI MOS circuitry on the Intel iPSC/2 hypercube. The simulation algorithm is based on RSIM, an event driven switch-level simulator that incorporates a linear transistor model for simulating digital MOS circuits. Parallel processing techniques based on the concepts of Virtual Time and rollback are utilized so that portions of the circuit may be simulated on separate processors, in parallel for as large an increase in speed as possible. A partitioning algorithm is also developed in order to subdivide the circuit for parallel processing.
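
    RSIM itself is not available here, but the event-driven pattern it builds on is simple to sketch: keep a time-ordered event queue and re-evaluate only the gates whose inputs actually changed. The toy netlist, unit delays and two-valued logic below are hypothetical simplifications (no transistor-level or linear timing model, and no Virtual Time rollback):

        import heapq
        import itertools

        gates = {                         # gate name -> (function, input nets, delay)
            "g1": (lambda a, b: a & b, ("A", "B"), 2),
            "g2": (lambda a, b: a | b, ("g1", "C"), 1),
        }
        fanout = {"A": ["g1"], "B": ["g1"], "C": ["g2"], "g1": ["g2"]}
        value = {"A": 0, "B": 1, "C": 0, "g1": 0, "g2": 0}

        counter = itertools.count()       # tie-breaker so heap entries never compare nets
        events = []                       # (time, seq, net, new_value)

        def schedule(t, net, v):
            heapq.heappush(events, (t, next(counter), net, v))

        schedule(0, "A", 1)               # external stimulus: A rises at t=0
        schedule(5, "B", 0)               # external stimulus: B falls at t=5

        while events:
            t, _, net, v = heapq.heappop(events)
            if value[net] == v:
                continue                  # no change on this net -> nothing to re-evaluate
            value[net] = v
            print(f"t={t}: {net} -> {v}")
            for g in fanout.get(net, []): # only gates driven by the changed net
                fn, inputs, delay = gates[g]
                schedule(t + delay, g, fn(*(value[i] for i in inputs)))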

  20. An adaptive maneuvering logic computer program for the simulation of one-on-one air-to-air combat. Volume 1: General description

    NASA Technical Reports Server (NTRS)

    Burgin, G. H.; Fogel, L. J.; Phelps, J. P.

    1975-01-01

    A technique for computer simulation of air combat is described. Volume 1 describes the computer program and its development in general terms. Two versions of the program exist. Both incorporate a logic for selecting and executing air combat maneuvers with performance models of specific fighter aircraft. In the batch processing version the flight paths of two aircraft engaged in interactive aerial combat and controlled by the same logic are computed. The realtime version permits human pilots to fly air-to-air combat against the adaptive maneuvering logic (AML) in the Langley Differential Maneuvering Simulator (DMS). Volume 2 consists of a detailed description of the computer programs.

  1. Gradient Theory simulations of pure fluid interfaces using a generalized expression for influence parameters and a Helmholtz energy equation of state for fundamentally consistent two-phase calculations

    SciTech Connect

    Dahms, Rainer N.

    2014-12-31

    The fidelity of Gradient Theory simulations depends on the accuracy of saturation properties and influence parameters, and requires equations of state (EoS) which exhibit a fundamentally consistent behavior in the two-phase regime. Widely applied multi-parameter EoS, however, are generally invalid inside this region. Hence, they may not be fully suitable for application in concert with Gradient Theory despite their ability to accurately predict saturation properties. The commonly assumed temperature-dependence of pure component influence parameters usually restricts their validity to subcritical temperature regimes. This may distort predictions for general multi-component interfaces where temperatures often exceed the critical temperature of vapor phase components. Then, the calculation of influence parameters is not well defined. In this paper, one of the first studies is presented in which Gradient Theory is combined with a next-generation Helmholtz energy EoS which facilitates fundamentally consistent calculations over the entire two-phase regime. Illustrated on pentafluoroethane as an example, reference simulations using this method are performed. They demonstrate the significance of such high-accuracy and fundamentally consistent calculations for the computation of interfacial properties. These reference simulations are compared to corresponding results from cubic PR EoS, widely-applied in combination with Gradient Theory, and mBWR EoS. The analysis reveals that neither of those two methods succeeds in consistently capturing the qualitative distribution of obtained key thermodynamic properties in Gradient Theory. Furthermore, a generalized expression of the pure component influence parameter is presented. This development is informed by its fundamental definition based on the direct correlation function of the homogeneous fluid and by presented high-fidelity simulations of interfacial density profiles. As a result, the new model preserves the accuracy of previous
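
    For a planar interface, Gradient Theory reduces the surface tension to an integral of sqrt(2 c Δω(ρ)) between the coexisting bulk densities, where c is the influence parameter and Δω the excess grand-potential density. The sketch below evaluates that integral with a toy double-well Δω and invented constants rather than a real Helmholtz-energy EoS:

        import numpy as np

        # Gradient-theory surface tension of a planar interface:
        #   sigma = integral over rho of sqrt(2 * c * domega(rho)),
        # with c the influence parameter and domega the excess grand-potential density
        # (zero in both bulk phases).  Toy double-well free energy, invented constants.
        rho_v, rho_l = 2.0, 18.0            # coexisting vapor / liquid densities (toy units)
        c_influence = 1.0e-20               # influence parameter (hypothetical value)
        A = 5.0e-3                          # well-depth scale of the toy free energy

        def domega(rho):
            return A * (rho - rho_v) ** 2 * (rho - rho_l) ** 2 / (rho_l - rho_v) ** 2

        rho = np.linspace(rho_v, rho_l, 2001)
        f = np.sqrt(2.0 * c_influence * domega(rho))
        sigma = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(rho))   # trapezoidal integration
        print(f"toy gradient-theory surface tension: {sigma:.3e} (model units)")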

  2. Gradient Theory simulations of pure fluid interfaces using a generalized expression for influence parameters and a Helmholtz energy equation of state for fundamentally consistent two-phase calculations

    DOE PAGESBeta

    Dahms, Rainer N.

    2014-12-31

    The fidelity of Gradient Theory simulations depends on the accuracy of saturation properties and influence parameters, and requires equations of state (EoS) which exhibit a fundamentally consistent behavior in the two-phase regime. Widely applied multi-parameter EoS, however, are generally invalid inside this region. Hence, they may not be fully suitable for application in concert with Gradient Theory despite their ability to accurately predict saturation properties. The commonly assumed temperature-dependence of pure component influence parameters usually restricts their validity to subcritical temperature regimes. This may distort predictions for general multi-component interfaces where temperatures often exceed the critical temperature of vapor phase components. Then, the calculation of influence parameters is not well defined. In this paper, one of the first studies is presented in which Gradient Theory is combined with a next-generation Helmholtz energy EoS which facilitates fundamentally consistent calculations over the entire two-phase regime. Illustrated on pentafluoroethane as an example, reference simulations using this method are performed. They demonstrate the significance of such high-accuracy and fundamentally consistent calculations for the computation of interfacial properties. These reference simulations are compared to corresponding results from cubic PR EoS, widely-applied in combination with Gradient Theory, and mBWR EoS. The analysis reveals that neither of those two methods succeeds in consistently capturing the qualitative distribution of obtained key thermodynamic properties in Gradient Theory. Furthermore, a generalized expression of the pure component influence parameter is presented. This development is informed by its fundamental definition based on the direct correlation function of the homogeneous fluid and by presented high-fidelity simulations of interfacial density profiles. As a result, the new model preserves the accuracy of

  3. A new technique for simulating composite material. Task 2: Analytical solutions with Generalized Impedance Boundary Conditions (GIBCs)

    NASA Technical Reports Server (NTRS)

    Ricoy, M. A.; Volakis, J. L.

    1989-01-01

    The diffraction problem associated with a multilayer material slab recessed in a perfectly conducting ground plane is formulated and solved via the Generalized Scattering Matrix Formulation (GSMF) in conjunction with the dual integral equation approach. The multilayer slab is replaced by a surface obeying a generalized impedance boundary condition (GIBC) to facilitate the computation of the pertinent Wiener Hopf split functions and their zeros. Both E(sub z) and H(sub z) polarizations are considered and a number of scattering patterns are presented, some of which are compared to exact results available for a homogeneous recessed slab.

  4. Responses of the Tropical Pacific to Wind Forcing as Observed by Spaceborne Sensors and Simulated by an Ocean General Circulation Model

    NASA Technical Reports Server (NTRS)

    Liu, W. Timothy; Tang, Wenqing; Atlas, Robert

    1996-01-01

    In this study, satellite observations, in situ measurements, and model simulations are combined to assess the oceanic response to surface wind forcing in the equatorial Pacific. The surface wind fields derived from observations by the spaceborne special sensor microwave imager (SSM/I) and from the operational products of the European Centre for Medium-Range Weather Forecasts (ECMWF) are compared. When SSM/I winds are used to force a primitive-equation ocean general circulation model (OGCM), they produce 3°C more surface cooling than ECMWF winds for the eastern equatorial Pacific during the cool phase of an El Nino-Southern Oscillation event. The stronger cooling by SSM/I winds is in good agreement with measurements at the moored buoys and observations by the advanced very high resolution radiometer, indicating that SSM/I winds are superior to ECMWF winds in forcing the tropical ocean. In comparison with measurements from buoys, tide gauges, and the Geosat altimeter, the OGCM simulates the temporal variations of temperature, steric, and sea level changes with reasonable realism when forced with the satellite winds. There are discrepancies between model simulations and observations that are common to both wind forcing fields, one of which is the simulation of zonal currents; they could be attributed to model deficiencies. By examining model simulations under two winds, vertical heat advection and uplifting of the thermocline are found to be the dominant factors in the anomalous cooling of the ocean mixed layer.

  5. Implementation of a generalized actuator disk wind turbine model into the weather research and forecasting model for large-eddy simulation applications

    SciTech Connect

    Mirocha, J. D.; Kosovic, B.; Aitken, M. L.; Lundquist, J. K.

    2014-01-10

    A generalized actuator disk (GAD) wind turbine parameterization designed for large-eddy simulation (LES) applications was implemented into the Weather Research and Forecasting (WRF) model. WRF-LES with the GAD model enables numerical investigation of the effects of an operating wind turbine on and interactions with a broad range of atmospheric boundary layer phenomena. Numerical simulations using WRF-LES with the GAD model were compared with measurements obtained from the Turbine Wake and Inflow Characterization Study (TWICS-2011), the goal of which was to measure both the inflow to and wake from a 2.3-MW wind turbine. Data from a meteorological tower and two light-detection and ranging (lidar) systems, one vertically profiling and another operated over a variety of scanning modes, were utilized to obtain forcing for the simulations, and to evaluate characteristics of the simulated wakes. Simulations produced wakes with physically consistent rotation and velocity deficits. Two surface heat flux values of 20 W m⁻² and 100 W m⁻² were used to examine the sensitivity of the simulated wakes to convective instability. Simulations using the smaller heat flux values showed good agreement with wake deficits observed during TWICS-2011, whereas those using the larger value showed enhanced spreading and more-rapid attenuation. This study demonstrates the utility of actuator models implemented within atmospheric LES to address a range of atmospheric science and engineering applications. In conclusion, validated implementation of the GAD in a numerical weather prediction code such as WRF will enable a wide range of studies related to the interaction of wind turbines with the atmosphere and surface.
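
    The GAD parameterization distributes rotor forces over the LES grid cells intersected by the disk; that machinery is not reproduced here. As a much simpler, hypothetical point of reference, one-dimensional momentum theory relates a thrust coefficient to the induced velocity deficit behind an ideal disk:

        import numpy as np

        def axial_induction(ct):
            """1-D momentum theory: induction factor a from the thrust coefficient Ct
            (valid roughly for Ct < 0.96, i.e. a < 0.5)."""
            return 0.5 * (1.0 - np.sqrt(1.0 - ct))

        rho_air, diameter, u_inf, ct = 1.225, 100.0, 8.0, 0.75   # hypothetical rotor and inflow
        area = np.pi * (diameter / 2.0) ** 2
        a = axial_induction(ct)

        thrust = 0.5 * rho_air * area * ct * u_inf ** 2          # force exerted on the flow (N)
        u_disk = u_inf * (1.0 - a)                               # velocity at the rotor plane
        u_wake = u_inf * (1.0 - 2.0 * a)                         # idealized far-wake velocity
        print(f"a = {a:.3f}, thrust = {thrust / 1e3:.1f} kN, "
              f"rotor-plane U = {u_disk:.2f} m/s, far-wake U = {u_wake:.2f} m/s")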

  6. Comparative Analysis of Simulated Annealing (SA) and Simplified Generalized SA (SGSA) for Estimation Optimal of Parametric Functional in CATIVIC

    SciTech Connect

    Freitez, Juan A.; Sanchez, Morella; Ruette, Fernando

    2009-08-13

    Application of simulated annealing (SA) and simplified GSA (SGSA) techniques for parameter optimization of the parametric quantum chemistry method CATIVIC was performed. A set of organic molecules was selected to test these techniques. Comparison of the algorithms was carried out for error function minimization with respect to experimental values. Results show that SGSA is more efficient than SA with respect to computer time. Accuracy is similar in both methods; however, there are important differences in the final set of parameters.
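
    The CATIVIC parameters and error function are not available here; as a generic illustration of the SA side of the comparison, a plain simulated-annealing loop over a parameter vector with a hypothetical quadratic error function might read:

        import numpy as np

        rng = np.random.default_rng(0)

        def error_function(p):
            """Hypothetical stand-in for the fit error of a parametric method against
            experimental reference values (its minimum is at p = [1.5, -0.3, 2.0])."""
            return float(np.sum((p - np.array([1.5, -0.3, 2.0])) ** 2))

        p = rng.uniform(-5.0, 5.0, size=3)          # random initial parameter set
        e = error_function(p)
        temperature, cooling = 5.0, 0.995           # initial temperature and cooling rate

        for _ in range(20000):
            trial = p + rng.normal(scale=0.1, size=p.size)
            e_trial = error_function(trial)
            # Metropolis rule: always keep improvements, sometimes accept worse moves
            if e_trial < e or rng.random() < np.exp(-(e_trial - e) / temperature):
                p, e = trial, e_trial
            temperature *= cooling

        print("best parameters:", np.round(p, 3), " error:", round(e, 6))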

  7. A general circulation model simulation of the springtime Antarctic ozone decrease and its impact on mid-latitudes

    SciTech Connect

    Cariolle, D.; Lasserre-Bigorry, A.; Royer, J.F. ); Geleyn, J.F. )

    1990-02-20

    Ozone is treated as an interactive variable calculated by means of a continuity equation which takes account of advection and photochemical production and loss. The ozone concentration is also used to compute the heating and cooling rates due to the absorption of solar ultraviolet radiation, and the infrared emission in the stratosphere. The daytime ozone decrease due to the perturbed chlorine chemistry found at high southern latitudes is introduced as an extra loss in the ozone continuity equation. Results of the perturbed simulation show a very good agreement with the ozone measurements made during spring 1987. The simulation also shows the development of a high-latitude anomalous circulation, with a warming of the upper stratosphere resulting mainly from dynamical heating. In addition, a substantial ozone decrease is found at mid-latitudes in a thin stratospheric layer located between the 390 and the 470 K θ surfaces. A significant residual ozone decrease is found at the end of the model integration, 7 months after the final warming and the vortex breakdown. If there is a significant residual ozone decrease in the atmosphere, the ozone trends predicted by photochemical models which do not take into account the high-latitude perturbed chemistry are clearly inadequate. Finally, it is concluded that further model simulations at higher horizontal resolution, possibly with a better representation of the heterogeneous chemistry, will be needed to evaluate with more confidence the magnitude of the mid-latitudinal ozone depletion induced by the ozone hole formation.

  8. Alternatives for Mixed-Effects Meta-Regression Models in the Reliability Generalization Approach: A Simulation Study

    ERIC Educational Resources Information Center

    López-López, José Antonio; Botella, Juan; Sánchez-Meca, Julio; Marín-Martínez, Fulgencio

    2013-01-01

    Since heterogeneity between reliability coefficients is usually found in reliability generalization studies, moderator analyses constitute a crucial step for that meta-analytic approach. In this study, different procedures for conducting mixed-effects meta-regression analyses were compared. Specifically, four transformation methods for the…

  9. Generalized DSS shell for developing simulation and optimization hydro-economic models of complex water resources systems

    NASA Astrophysics Data System (ADS)

    Pulido-Velazquez, Manuel; Lopez-Nicolas, Antonio; Harou, Julien J.; Andreu, Joaquin

    2013-04-01

    Hydrologic-economic models allow integrated analysis of water supply, demand and infrastructure management at the river basin scale. These models simultaneously analyze engineering, hydrology and economic aspects of water resources management. Two new tools have been designed to develop models within this approach: a simulation tool (SIM_GAMS), for models in which water is allocated each month based on supply priorities to competing uses and system operating rules, and an optimization tool (OPT_GAMS), in which water resources are allocated optimally following economic criteria. The characterization of the water resource network system requires a connectivity matrix representing the topology of the elements, generated using HydroPlatform. HydroPlatform, an open-source software platform for network (node-link) models, allows to store, display and export all information needed to characterize the system. Two generic non-linear models have been programmed in GAMS to use the inputs from HydroPlatform in simulation and optimization models. The simulation model allocates water resources on a monthly basis, according to different targets (demands, storage, environmental flows, hydropower production, etc.), priorities and other system operating rules (such as reservoir operating rules). The optimization model's objective function is designed so that the system meets operational targets (ranked according to priorities) each month while following system operating rules. This function is analogous to the one used in the simulation module of the DSS AQUATOOL. Each element of the system has its own contribution to the objective function through unit cost coefficients that preserve the relative priority rank and the system operating rules. The model incorporates groundwater and stream-aquifer interaction (allowing conjunctive use simulation) with a wide range of modeling options, from lumped and analytical approaches to parameter-distributed models (eigenvalue approach). Such

  10. Assessing the ability of isotope-enabled General Circulation Models to simulate the variability of Iceland water vapor isotopic composition

    NASA Astrophysics Data System (ADS)

    Erla Sveinbjornsdottir, Arny; Steen-Larsen, Hans Christian; Jonsson, Thorsteinn; Ritter, Francois; Riser, Camilla; Masson-Delmotte, Valerie; Bonne, Jean Louis; Dahl-Jensen, Dorthe

    2014-05-01

    During the fall of 2010 we installed an autonomous water vapor spectroscopy laser (Los Gatos Research analyzer) in a lighthouse on the Southwest coast of Iceland (63.83°N, 21.47°W). Despite initial significant problems with volcanic ash, high wind, and attack of sea gulls, the system has been continuously operational since the end of 2011 with limited down time. The system automatically performs calibration every 2 hours, which results in high accuracy and precision allowing for analysis of the second order parameter, d-excess, in the water vapor. We find a strong linear relationship between d-excess and local relative humidity (RH) when normalized to SST. The observed slope of approximately -45 ‰/% is similar to theoretical predictions by Merlivat and Jouzel [1979] for a smooth surface, but the calculated intercept is significantly lower than predicted. Despite this good linear agreement with theoretical calculations, mismatches arise between the simulated seasonal cycle of water vapour isotopic composition using the LMDZiso GCM nudged to large-scale winds from atmospheric analyses, and our data. The GCM is not able to capture seasonal variations in local RH, nor seasonal variations in d-excess. Based on daily data, the performance of LMDZiso to resolve day-to-day variability is measured based on the strength of the correlation coefficient between observations and model outputs. This correlation coefficient reaches ~0.8 for surface absolute humidity, but decreases to ~0.6 for δD and ~0.45 for d-excess. Moreover, the magnitude of day-to-day humidity variations is also underestimated by LMDZiso, which can explain the underestimated magnitude of isotopic depletion. Finally, the simulated and observed d-excess vs. RH have similar slopes. We conclude that the under-estimation of d-excess variability may partly arise from the poor performance of the humidity simulations.
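
    The second-order parameter discussed above is the deuterium excess, d = δD - 8·δ18O, and the reported slope against RH follows from an ordinary least-squares fit of paired observations. The sketch below uses made-up synthetic numbers (not the Iceland data) purely to show the computation:

        import numpy as np

        rng = np.random.default_rng(1)

        # Synthetic stand-in for paired vapour observations (NOT the Iceland data):
        # relative humidity normalized to SST in %, isotope ratios in permil.
        rh = rng.uniform(50.0, 95.0, size=200)
        d18o = -20.0 + 0.05 * rh + rng.normal(scale=0.3, size=rh.size)
        dD = 8.0 * d18o + (40.0 - 0.5 * rh) + rng.normal(scale=2.0, size=rh.size)

        d_excess = dD - 8.0 * d18o                       # definition of deuterium excess
        slope, intercept = np.polyfit(rh, d_excess, 1)   # ordinary least-squares fit
        print(f"synthetic d-excess vs RH slope: {slope:.2f} permil per % RH")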

  11. GARROTXA Cosmological Simulations of Milky Way-sized Galaxies: General Properties, Hot-gas Distribution, and Missing Baryons

    NASA Astrophysics Data System (ADS)

    Roca-Fàbrega, Santi; Valenzuela, Octavio; Colín, Pedro; Figueras, Francesca; Krongold, Yair; Velázquez, Héctor; Avila-Reese, Vladimir; Ibarra-Medel, Hector

    2016-06-01

    We introduce a new set of simulations of Milky Way (MW)-sized galaxies using the AMR code ART + hydrodynamics in a Λ cold dark matter cosmogony. The simulation series is called GARROTXA and it follows the formation of a halo/galaxy from z = 60 to z = 0. The final virial mass of the system is ∼7.4 × 10^11 M_⊙. Our results are as follows. (a) Contrary to many previous studies, the circular velocity curve shows no central peak and overall agrees with recent MW observations. (b) Other quantities, such as M_* (6 × 10^10 M_⊙) and R_d (2.56 kpc), fall well inside the observational MW range. (c) We measure the disk-to-total ratio kinematically and find that D/T = 0.42. (d) The cold-gas fraction and star formation rate at z = 0, on the other hand, fall short of the values estimated for the MW. As a first scientific exploitation of the simulation series, we study the spatial distribution of hot X-ray luminous gas. We have found that most of this X-ray emitting gas is in a halo-like distribution accounting for an important fraction but not all of the missing baryons. An important amount of hot gas is also present in filaments. In all our models there is not a massive disk-like hot-gas distribution dominating the column density. Our analysis of hot-gas mock observations reveals that the homogeneity assumption leads to an overestimation of the total mass by factors of 3–5 or to an underestimation by factors of 0.7–0.1, depending on the used observational method. Finally, we confirm a clear correlation between the total hot-gas mass and the dark matter halo mass of galactic systems.

  12. Response to "Comment on 'Scaling of asymmetric magnetic reconnection: General theory and collisional simulations'" [Phys. Plasmas 16, 034701 (2009)]

    NASA Astrophysics Data System (ADS)

    Cassak, P. A.; Shay, M. A.

    2009-03-01

    The comment by Semenov et al. has called into question our derivation of the outflow velocity in asymmetric magnetic reconnection. We present three reasons that the analysis presented in the comment is incorrect. Most importantly, the authors of the comment have incorrectly applied results from one-dimensional shock theory to the problem of conservation through a two-dimensional dissipation region. For completeness, we compare their predictions to numerical simulation results, finding that their theory does not describe the data. We conclude the analysis in the comment is without merit.

  13. Generalized mean-field approach to simulate the dynamics of large open spin ensembles with long range interactions

    NASA Astrophysics Data System (ADS)

    Krämer, Sebastian; Ritsch, Helmut

    2015-12-01

    We numerically study the collective coherent and dissipative dynamics in spin lattices with long range interactions in one, two and three dimensions. For generic geometric configurations with a small number of spins, which are fully solvable numerically, we show that a dynamical mean-field approach based upon a spatial factorization of the density operator often gives a surprisingly accurate representation of the collective dynamics. Including all pair correlations at any distance in the spirit of a second order cumulant expansion improves the numerical accuracy by at least one order of magnitude. We then apply this truncated expansion method to simulate large numbers of spins: from about ten in the case of the full quantum model, to a few thousand if all pair correlations are included, and up to several tens of thousands in the mean-field approximation. We find collective modifications of the spin dynamics in surprisingly large system sizes. In 3D, the mutual interaction strength does not converge to a desired accuracy within the maximum system sizes we can currently implement. Extensive numerical tests help in identifying interaction strengths and geometric configurations where our approximations perform well and allow us to state fairly simple error estimates. By simulating systems of increasing size we show that in one and two dimensions we can include as many spins as needed to capture the properties of infinite size systems with high accuracy. As a practical application our approach is well suited to provide error estimates for atomic clock setups or super radiant lasers using magic wavelength optical lattices.

  14. The balance of kinetic and total energy simulated by the OSU two-level atmospheric general circulation model for January and July

    NASA Technical Reports Server (NTRS)

    Wang, J.-T.; Gates, W. L.; Kim, J.-W.

    1984-01-01

    A three-year simulation which prescribes seasonally varying solar radiation and sea surface temperature is the basis of the present study of the horizontal structure of the balances of kinetic and total energy simulated by Oregon State University's two-level atmospheric general circulation model. Mechanisms responsible for the local energy changes are identified, and the energy balance requirement's fulfilment is examined. In January, the vertical integral of the total energy shows large amounts of external heating over the North Pacific and Atlantic, together with cooling over most of the land area of the Northern Hemisphere. In July, an overall seasonal reversal is found. Both seasons are also characterized by strong energy flux divergence in the tropics, in association with the poleward transport of heat and momentum.

  15. Flight test of a stall sensor and evaluation of its application to an aircraft stall deterrent system using the NASA LRC general aviation simulator

    NASA Technical Reports Server (NTRS)

    Bennett, G.

    1976-01-01

    A series of flight maneuvers were developed to cover the range of flight conditions and to define the repeatability and hysteresis of the sensors. Initial flights were made with two sensors at the ±68 percent span and 60 percent and 70 percent chord stations. The primary effort in simulation program development was to modify the LRC General Aviation Simulator (GAS) Fortran programs to allow execution on the MSU UNIVAC 1106. A simple model of the sensor-servo stall deterrent system was developed. A one degree of freedom model of pitch dynamics of the airplane and stall deterrent system was developed to make initial estimates of the control system gains. A position error plus rate damping control algorithm was found to have acceptable characteristics.

  16. Brief Report: Simulations Suggest Heterogeneous Category Learning and Generalization in Children with Autism is a Result of Idiosyncratic Perceptual Transformations.

    PubMed

    Mercado, Eduardo; Church, Barbara A

    2016-08-01

    Children with autism spectrum disorder (ASD) sometimes have difficulties learning categories. Past computational work suggests that such deficits may result from atypical representations in cortical maps. Here we use neural networks to show that idiosyncratic transformations of inputs can result in the formation of feature maps that impair category learning for some inputs, but not for other closely related inputs. These simulations suggest that large inter- and intra-individual variations in learning capacities shown by children with ASD across similar categorization tasks may similarly result from idiosyncratic perceptual encoding that is resistant to experience-dependent changes. If so, then both feedback- and exposure-based category learning should lead to heterogeneous, stimulus-dependent deficits in children with ASD. PMID:27193184

  17. GOOSE, a generalized object-oriented simulation environment for developing and testing reactor models and control strategies

    SciTech Connect

    Ford, C.E.; March-Leuba, C. ); Guimaraes, L.; Ugolini, D. . Dept. of Nuclear Engineering)

    1991-01-01

    GOOSE, prototype software for a fully interactive, object-oriented simulation environment, is being developed as part of the Advanced Controls Program at Oak Ridge National Laboratory. Dynamic models may easily be constructed and tested; fully interactive capabilities allow the user to alter model parameters and complexity without recompilation. This environment provides access to powerful tools, such as numerical integration packages, graphical displays, and online help. Portability has been an important design goal; the system was written in Objective-C in order to run on a wide variety of computers and operating systems, including UNIX workstations and personal computers. A detailed library of nuclear reactor components, currently under development, will also be described. 5 refs., 4 figs.

  18. Hybrid MPI/OpenMP Implementation of the ORAC Molecular Dynamics Program for Generalized Ensemble and Fast Switching Alchemical Simulations.

    PubMed

    Procacci, Piero

    2016-06-27

    We present a new release (6.0β) of the ORAC program [Marsili et al. J. Comput. Chem. 2010, 31, 1106-1116] with a hybrid OpenMP/MPI (open multiprocessing / message passing interface) multilevel parallelism tailored for generalized ensemble (GE) and fast switching double annihilation (FS-DAM) nonequilibrium technology aimed at evaluating the binding free energy in drug-receptor systems on high performance computing platforms. The production of the GE or FS-DAM trajectories is handled using a weak scaling parallel approach on the MPI level only, while a strong scaling force decomposition scheme is implemented for intranode computations with shared memory access at the OpenMP level. The efficiency, simplicity, and inherent parallel nature of the ORAC implementation of the FS-DAM algorithm make the code a potentially effective tool for second-generation high throughput virtual screening in drug discovery and design. The code, along with documentation, testing, and ancillary tools, is distributed under the provisions of the General Public License and can be freely downloaded at www.chim.unifi.it/orac . PMID:27231982

  19. Variational Symplectic Integrator for Long-Time Simulations of the Guiding-Center Motion of Charged Particles in General Magnetic Fields

    SciTech Connect

    H. Qin and X. Guan

    2008-02-11

    A variational symplectic integrator for the guiding-center motion of charged particles in general magnetic fields is developed for long-time simulation studies of magnetized plasmas. Instead of discretizing the differential equations of the guiding-center motion, the action of the guiding-center motion is discretized and minimized to obtain the iteration rules for advancing the dynamics. The variational symplectic integrator conserves exactly a discrete Lagrangian symplectic structure, and has better numerical properties over long integration time, compared with standard integrators, such as the standard and variable time-step fourth order Runge-Kutta methods.

  20. Variational symplectic integrator for long-time simulations of the guiding-center motion of charged particles in general magnetic fields.

    PubMed

    Qin, Hong; Guan, Xiaoyin

    2008-01-25

    A variational symplectic integrator for the guiding-center motion of charged particles in general magnetic fields is developed for long-time simulation studies of magnetized plasmas. Instead of discretizing the differential equations of the guiding-center motion, the action of the guiding-center motion is discretized and minimized to obtain the iteration rules for advancing the dynamics. The variational symplectic integrator conserves exactly a discrete Lagrangian symplectic structure, and has better numerical properties over long integration time, compared with standard integrators, such as the standard and variable time-step fourth order Runge-Kutta methods. PMID:18232993
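
    The variational recipe described above, discretizing the action rather than the equations of motion and deriving the update rule from the discrete Euler-Lagrange equations, can be illustrated on a toy system. The Python sketch below uses a midpoint discrete Lagrangian for a one-dimensional harmonic oscillator; it shows the approach only and is not the guiding-center integrator itself, and the mass, spring constant, and step size are arbitrary.

      import numpy as np

      # Minimal sketch of a variational (discrete-Lagrangian) integrator for a toy
      # 1-D harmonic oscillator, L = 0.5*m*v**2 - 0.5*k*q**2: discretize the action
      # with the midpoint rule and advance via the discrete Euler-Lagrange equation.
      m, k, h = 1.0, 1.0, 0.1          # mass, spring constant, time step (illustrative)

      def step(q_prev, q_curr):
          """Solve D2 L_d(q_prev, q_curr) + D1 L_d(q_curr, q_next) = 0 for q_next."""
          num = m * (2.0 * q_curr - q_prev) / h - h * k * (q_prev + 2.0 * q_curr) / 4.0
          den = m / h + h * k / 4.0
          return num / den

      q = [1.0, 1.0]                   # two seed positions (zero initial velocity)
      for _ in range(1000):
          q.append(step(q[-2], q[-1]))

      # The discrete energy stays bounded over long times (no secular drift),
      # which is the hallmark of symplectic integration.
      v = np.diff(q) / h
      E = 0.5 * m * v**2 + 0.5 * k * np.array(q[:-1]) ** 2
      print(f"energy spread over the run: {E.max() - E.min():.3e}")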

  1. Sediment deposition from tropical storms in the upper Chesapeake Bay: Field observations and model simulations

    NASA Astrophysics Data System (ADS)

    Palinkas, Cindy M.; Halka, Jeffrey P.; Li, Ming; Sanford, Lawrence P.; Cheng, Peng

    2014-09-01

    Episodic flood and storm events are important drivers of sediment dynamics in estuarine and marine environments. Event-driven sedimentation has been well-documented by field and modeling studies, though both techniques have inherent limitations. A unique opportunity to integrate field observations and model results was provided in late August/early September 2011 with the passage of Hurricane Irene and Tropical Storm Lee in the Chesapeake Bay region. Because these two storms occurred within a relatively short period of time, both are potentially represented in the sediment record obtained during rapid-response cruises in September and October 2011. Associated sediment deposits were recognized in cores using classic flood-sediment signatures (fine grain size, uniform 7Be activity, physical stratification in x-radiographs) and were found to be <4 cm, thickest in the upper Bay. A coupled hydrodynamic-sediment transport model is used to simulate the sediment plume and sediment deposition onto the seabed. The predicted deposition thickness for TS Lee is in general agreement with the observational estimates. One exception with physical stratification but no 7Be activity appears to be due to extreme wave activity during Hurricane Irene. Integration of observations and modeling in this case greatly improved understanding of the transport and fate of flood sediments in the Chesapeake Bay.

  2. Final report for "Development of generalized mapping tools to improve implementation of data driven computer simulations" (LDRD 04-ERD-083)

    SciTech Connect

    Pasyanos, M; Ramirez, A; Franz, G

    2005-02-04

    Probabilistic inverse techniques, like the Markov Chain Monte Carlo (MCMC) algorithm, have had recent success in combining disparate data types into a consistent model. The Stochastic Engine (SE) initiative was a technique that developed this method and applied it to a number of earth science and national security applications. For instance, while the method was originally developed to solve ground flow problems (Aines et al.), it has also been applied to atmospheric modeling and engineering problems. The investigators of this proposal have applied the SE to regional-scale lithospheric earth models, which have applications to hazard analysis and nuclear explosion monitoring. While this broad applicability is appealing, tailoring the method for each application is inefficient and time-consuming. Stochastic methods invert data by probabilistically sampling the model space and comparing observations predicted by the proposed model to observed data and preferentially accepting models that produce a good fit, generating a posterior distribution. In other words, the method ''inverts'' for a model or, more precisely, a distribution of models, by a series of forward calculations. While powerful, the technique is often challenging to implement, as the mapping from model space to data needs to be ''customized'' for each data type. For example, all proposed models might need to be transformed through sensitivity kernels from 3-D models to 2-D models in one step in order to compute path integrals, and transformed in a completely different manner in the next step. We seek technical enhancements that widen the applicability of the Stochastic Engine by generalizing some aspects of the method (i.e. model-to-data transformation types, configuration, model representation). Initially, we wish to generalize the transformations that are necessary to match the observations to proposed models. These transformations are sufficiently general not to pertain to any single application. This
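
    The sample-compare-accept loop described above is essentially a Metropolis-Hastings sampler. The Python sketch below shows that generic loop; the forward operator, observed data, noise level, and step size are hypothetical placeholders and do not represent the Stochastic Engine's actual implementation.

      import numpy as np

      # Hedged sketch of a generic Metropolis-Hastings inversion: propose a model,
      # compare predicted and observed data, and preferentially accept good fits.
      rng = np.random.default_rng(0)

      def forward(m):
          """Hypothetical model-to-data transformation (stands in for e.g. a path integral)."""
          return np.array([m[0] + m[1], m[0] - m[1]])

      def log_likelihood(m, d_obs, sigma):
          resid = forward(m) - d_obs
          return -0.5 * np.sum((resid / sigma) ** 2)

      d_obs, sigma, step = np.array([1.0, 0.2]), 0.1, 0.05
      m = np.zeros(2)                      # starting model
      samples = []
      for _ in range(20000):
          m_prop = m + step * rng.standard_normal(m.shape)     # propose a new model
          log_alpha = log_likelihood(m_prop, d_obs, sigma) - log_likelihood(m, d_obs, sigma)
          if np.log(rng.uniform()) < log_alpha:                # preferentially accept good fits
              m = m_prop
          samples.append(m.copy())

      posterior = np.array(samples[5000:])                     # discard burn-in
      print("posterior mean model:", posterior.mean(axis=0))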

  3. Comparison of protein solution structures refined by molecular dynamics simulation in vacuum, with a generalized Born model, and with explicit water.

    PubMed

    Xia, Bin; Tsui, Vickie; Case, David A; Dyson, H Jane; Wright, Peter E

    2002-04-01

    The inclusion of explicit solvent water in molecular dynamics refinement of NMR structures ought to provide the most physically meaningful accounting for the effects of solvent on structure, but is computationally expensive. In order to evaluate the validity of commonly used vacuum refinements and of recently developed continuum solvent model methods, we have used three different methods to refine a set of NMR solution structures of a medium sized protein, Escherichia coli glutaredoxin 2, from starting structures calculated using the program DYANA. The three different refinement protocols used molecular dynamics simulated annealing with the program AMBER in vacuum (VAC), including a generalized Born (GB) solvent model, and a full calculation including explicit solvent water (WAT). The structures obtained using the three methods of refinements were very similar, a reflection of their generally well-determined nature. However, the structures refined with the generalized Born model were more similar to those from explicit water refinement than those refined in vacuum. Significant improvement was seen in the percentage of backbone dihedral angles in the most favored regions of phi, psi space and in hydrogen bond pattern for structures refined with the GB and WAT models, compared with the structures refined in vacuum. The explicit water calculation took an average of 200 h of CPU time per structure on an SGI cluster, compared to 15-90 h for the GB calculation (depending on the parameters used) and 2 h for the vacuum calculation. The generalized Born solvent model proved to be an excellent compromise between the vacuum and explicit water refinements, giving results comparable to those of the explicit water calculation. Some improvement for phi and psi angle distribution and hydrogen bond pattern can also be achieved by energy minimizing the vacuum structures with the GB model, which takes a much shorter time than MD simulations with the GB model. PMID:12018480

  4. Estimating changes in temperature extremes from millennial-scale climate simulations using generalized extreme value (GEV) distributions

    NASA Astrophysics Data System (ADS)

    Huang, Whitney K.; Stein, Michael L.; McInerney, David J.; Sun, Shanshan; Moyer, Elisabeth J.

    2016-07-01

    Changes in extreme weather may produce some of the largest societal impacts of anthropogenic climate change. However, it is intrinsically difficult to estimate changes in extreme events from the short observational record. In this work we use millennial runs from the Community Climate System Model version 3 (CCSM3) in equilibrated pre-industrial and possible future (700 and 1400 ppm CO2) conditions to examine both how extremes change in this model and how well these changes can be estimated as a function of run length. We estimate changes to distributions of future temperature extremes (annual minima and annual maxima) in the contiguous United States by fitting generalized extreme value (GEV) distributions. Using 1000-year pre-industrial and future time series, we show that warm extremes largely change in accordance with mean shifts in the distribution of summertime temperatures. Cold extremes warm more than mean shifts in the distribution of wintertime temperatures, but changes in GEV location parameters are generally well explained by the combination of mean shifts and reduced wintertime temperature variability. For cold extremes at inland locations, return levels at long recurrence intervals show additional effects related to changes in the spread and shape of GEV distributions. We then examine uncertainties that result from using shorter model runs. In theory, the GEV distribution can allow prediction of infrequent events using time series shorter than the recurrence interval of those events. To investigate how well this approach works in practice, we estimate 20-, 50-, and 100-year extreme events using segments of varying lengths. We find that even using GEV distributions, time series of comparable or shorter length than the return period of interest can lead to very poor estimates. These results suggest caution when attempting to use short observational time series or model runs to infer infrequent extremes.
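
    A minimal sketch of the block-maxima workflow described above, using scipy.stats.genextreme: fit a GEV distribution to a series of annual maxima and read off N-year return levels. The synthetic Gumbel-distributed "annual maxima" below are a placeholder for actual model output from a long control or future run.

      import numpy as np
      from scipy.stats import genextreme

      # Hedged sketch of GEV fitting and return-level estimation for annual maxima.
      rng = np.random.default_rng(1)
      annual_max = 30.0 + 3.0 * rng.gumbel(size=1000)   # 1000 years of synthetic annual maxima

      shape, loc, scale = genextreme.fit(annual_max)    # maximum-likelihood GEV fit

      # N-year return level: the value exceeded on average once every N years
      for N in (20, 50, 100):
          level = genextreme.isf(1.0 / N, shape, loc, scale)
          print(f"{N:4d}-year return level: {level:.2f}")

      # Refitting on a short segment illustrates how return-level uncertainty grows
      # when the series is comparable to (or shorter than) the return period.
      short_fit = genextreme.fit(annual_max[:50])
      print("100-yr level from a 50-yr segment:",
            round(genextreme.isf(0.01, *short_fit), 2))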

  5. Multiscale simulation of polymer nano-composites (PNC) using molecular dynamics (MD) and generalized interpolation material point method (GIMP)

    NASA Astrophysics Data System (ADS)

    Nair, Abilash R.

    Recent mechanical characterization experiments with pultruded E-Glass / polypropylene (PP) and compression molded E-Glass/Nylon-6 composite samples with 3-4 weight% nanoclay and baseline polymer (polymer without nanoclay) confirmed significant improvements in compressive strength (~122%) and shear strength (~60%) in the nanoclay-modified nanocomposites, in comparison with baseline properties. Uniaxial tensile tests showed a small increase in tensile strength (~3.4%) with 3 wt% nanoclay loading. While the synergistic reinforcing influence of nanoparticle reinforcement is obvious, a simple rule-of-mixtures approach fails to quantify the dramatic increase in mechanical properties. Consequently, there is an immediate need to investigate and understand the mechanisms at the nanoscale that are responsible for such unprecedented strength enhancements. In this work, an innovative and effective method to model nano-structured components in a thermoplastic polymer matrix is proposed. Effort will be directed towards finding fundamental answers to the reasons for significant changes in mechanical properties of nanoparticle-reinforced thermoplastic composites. This research pursues a multiscale modeling approach in which (a) a concurrent simulation scheme is developed to visualize atomistic behavior of polymer molecules as a function of continuum-scale loading conditions and (b) a novel nanoscale damage mechanics model is proposed to capture the constitutive behavior of polymer nano-composites (PNC). The proposed research will contribute towards the understanding of advanced nanostructured composite materials, which should subsequently benefit the composites manufacturing industry.

  6. A General 3-D Methodology for Quasi-Static Simulation of Drainage and Imbibition: Application to Highly Porous Fibrous Materials

    NASA Astrophysics Data System (ADS)

    Riasi, S.; Huang, G.; Montemagno, C.; Yeghiazarian, L.

    2013-12-01

    Micro-scale modeling of multiphase flow in porous media is critical to characterize porous materials. Several modeling techniques have been implemented to date, but none can be used as a general strategy for all porous media applications due to challenges presented by non-smooth high-curvature solid surfaces, and by a wide range of pore sizes and porosities. Finite approaches like the finite volume method require a high quality, problem-dependent mesh, while particle-based approaches like the lattice Boltzmann require too many particles to achieve a stable meaningful solution. Both come at a large computational cost. Other methods such as pore network modeling (PNM) have been developed to accelerate the solution process by simplifying the solution domain, but so far a unique and straightforward methodology to implement PNM is lacking. We have developed a general, stable and fast methodology to model multi-phase fluid flow in porous materials, irrespective of their porosity and solid phase topology. We have applied this methodology to highly porous fibrous materials in which void spaces are not distinctly separated, and where simplifying the geometry into a network of pore bodies and throats, as in PNM, does not result in a topology-consistent network. To this end, we have reduced the complexity of the 3-D void space geometry by working with its medial surface. We have used a non-iterative fast medial surface finder algorithm to determine a voxel-wide medial surface of the void space, and then solved the quasi-static drainage and imbibition on the resulting domain. The medial surface accurately represents the topology of the porous structure including corners, irregular cross sections, etc. This methodology is capable of capturing corner menisci and the snap-off mechanism numerically. It also allows for calculation of pore size distribution, permeability and capillary pressure-saturation-specific interfacial area surface of the porous structure. To show the

  7. The cyclonic circulation in the Australian-Antarctic basin simulated by an eddy-resolving general circulation model

    NASA Astrophysics Data System (ADS)

    Aoki, Shigeru; Sasai, Yoshikazu; Sasaki, Hideharu; Mitsudera, Humio; Williams, Guy D.

    2010-06-01

    Flow structure in the Australian-Antarctic basin is investigated using an eddy-resolving general ocean circulation model and validated with iceberg and middepth float trajectories. A cyclonic circulation system between the Antarctic Circumpolar Current and Antarctic Slope Current consists of a large-scale gyre in the west (80-110° E) and a series of eddies in the east (120-150° E). The western gyre has an annual mean westward transport of 22 Sv in the southern limb. Extending west through the Princess Elizabeth Trough, 5 Sv of the gyre recirculates off Prydz Bay and joins the western boundary current off the Kerguelen Plateau. Iceberg trajectories from QuickScat and ERS-1/2 support this recirculation and the overall structure of the Antarctic Slope Current against isobath in the model. Argo float trajectories also reveal a consistent structure of the deep westward slope current. This study indicates the presence of a large cyclonic circulation in this basin, which is comparable to the Weddell and Ross gyres.

  8. The northern wintertime divergence extrema at 200 hPa and surface cyclones as simulated in the AMIP integration of the ECMWF general circulation model

    SciTech Connect

    Boyle, J.S.

    1994-11-01

    Divergence and convergence centers at 200 hPa and mean sea level pressure (MSLP) cyclones were located every 6 hr for a 10-yr general circulation model (GCM) simulation with the ECMWF (Cycle 36) for the boreal winters from 1980 to 1988. The simulation used the observed monthly mean sea surface temperature (SST) for the decade. Analysis of the frequency, location, and strength of these centers and cyclones gives insight into the dynamical response of the model to the varying SST. The results indicate that (1) the model produces reasonable climatologies of upper-level divergence and MSLP cyclones; (2) the model distribution of anomalies of divergence and convergence centers and MSLP cyclones is consistent with observations for the 1982-83 and 1986-87 El Niño events; (3) the tropical Indian Ocean is the region of greatest divergence activity and interannual variability in the model; (4) the variability of the divergence centers is greater than that of the convergence centers; (5) strong divergence centers occur chiefly over the ocean in the midlatitudes but are more land-based in the tropics, except in the Indian Ocean; and (6) locations of divergence and convergence centers can be a useful tool for the intercomparison of global atmospheric simulations.

  9. Simulations of Hurricane Katrina (2005) with the 0.125 degree finite-volume General Circulation Model on the NASA Columbia Supercomputer

    NASA Technical Reports Server (NTRS)

    Shen, B.-W.; Atlas, R.; Reale, O.; Lin, S.-J.; Chern, J.-D.; Chang, J.; Henze, C.

    2006-01-01

    Hurricane Katrina was the sixth most intense hurricane in the Atlantic. Katrina's forecast poses major challenges, the most important of which is its rapid intensification. Hurricane intensity forecast with General Circulation Models (GCMs) is difficult because of their coarse resolution. In this article, six 5-day simulations with the ultra-high resolution finite-volume GCM are conducted on the NASA Columbia supercomputer to show the effects of increased resolution on the intensity predictions of Katrina. It is found that the 0.125 degree runs give comparable tracks to the 0.25 degree, but provide better intensity forecasts, bringing the center pressure much closer to observations with differences of only plus or minus 12 hPa. In the runs initialized at 1200 UTC 25 AUG, the 0.125 degree simulates a more realistic intensification rate and better near-eye wind distributions. Moreover, the first global 0.125 degree simulation without convection parameterization (CP) produces even better intensity evolution and near-eye winds than the control run with CP.

  10. Patient-Specific Carotid Plaque Progression Simulation Using 3D Meshless Generalized Finite Difference Models with Fluid-Structure Interactions Based on Serial In Vivo MRI Data.

    PubMed

    Yang, Chun; Tang, Dalin; Atluri, Satya

    2011-01-01

    Previously, we introduced a computational procedure based on a three-dimensional meshless generalized finite difference (MGFD) method and serial magnetic resonance imaging (MRI) data to quantify patient-specific carotid atherosclerotic plaque growth functions and simulate plaque progression. Structure-only models were used in our previous report. In this paper, fluid-structure interaction (FSI) was added to improve prediction accuracy. One participating patient was scanned three times (T1, T2, and T3, at intervals of about 18 months) to obtain plaque progression data. Blood flow was assumed to be laminar, Newtonian, viscous, and incompressible. The Navier-Stokes equations with arbitrary Lagrangian-Eulerian (ALE) formulation were used as the governing equations. Plaque material was assumed to be uniform, homogeneous, isotropic, linear, and nearly incompressible. The linear elastic model was used. The 3D FSI plaque model was discretized and solved using a meshless generalized finite difference (GFD) method. Growth functions with a) morphology alone; b) morphology and plaque wall stress (PWS); c) morphology and flow shear stress (FSS); and d) morphology, PWS, and FSS were introduced to predict future plaque growth based on previous time point data. Starting from the T2 plaque geometry, plaque progression was simulated by solving the FSI model and adjusting plaque geometry using the plaque growth functions iteratively until T3 was reached. Numerically simulated plaque progression agreed very well with the target T3 plaque geometry, with errors of 8.62%, 7.22%, 5.77%, and 4.39%, respectively, and the growth function including morphology, plaque wall stress, and flow shear stress terms gave the best predictions. Adding the flow shear stress term to the growth function improved the prediction error from 7.22% to 4.39%, a 40% improvement. We believe this is the first time 3D plaque progression FSI simulation based on multi-year patient-tracking data was reported. Serial MRI-based progression
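
    A heavily simplified sketch of the iterative progression loop described above: at each pseudo-time step the mechanical state is recomputed and the geometry is advanced with a growth function combining morphology, wall stress, and shear stress terms. The placeholder solve_fsi() and the growth coefficients are purely illustrative; the actual study solves a 3-D meshless GFD fluid-structure interaction problem at each iteration.

      import numpy as np

      # Hedged sketch of the geometry-update iteration (not the study's model).
      def solve_fsi(thickness):
          """Placeholder FSI solve returning plaque wall stress and flow shear stress."""
          pws = 100.0 / (1.0 + thickness)       # wall stress falls as plaque thickens
          fss = 2.0 * (1.0 + 0.5 * thickness)   # shear stress rises over the bump
          return pws, fss

      def growth_rate(thickness, pws, fss, a=(0.02, -1e-4, 5e-3)):
          """Growth function combining morphology, PWS and FSS terms (illustrative)."""
          return a[0] * thickness + a[1] * pws + a[2] * fss

      thickness = np.full(200, 1.0)             # nodal wall thickness at time T2 (mm)
      dt, n_steps = 0.1, 180                    # pseudo-time steps from T2 toward T3
      for _ in range(n_steps):
          pws, fss = solve_fsi(thickness)
          thickness += dt * growth_rate(thickness, pws, fss)

      print(f"mean thickness after progression: {thickness.mean():.3f} mm")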

  11. Simulating Mars' Dust Cycle with a Mars General Circulation Model: Effects of Water Ice Cloud Formation on Dust Lifting Strength and Seasonality

    NASA Technical Reports Server (NTRS)

    Kahre, Melinda A.; Haberle, Robert; Hollingsworth, Jeffery L.

    2012-01-01

    The dust cycle is critically important for the current climate of Mars. The radiative effects of dust impact the thermal and dynamical state of the atmosphere [1,2,3]. Although dust is present in the Martian atmosphere throughout the year, the level of dustiness varies with season. The atmosphere is generally the dustiest during northern fall and winter and the least dusty during northern spring and summer [4]. Dust particles are lifted into the atmosphere by dust storms that range in size from meters to thousands of kilometers across [5]. Regional storm activity is enhanced before northern winter solstice (Ls 200-240 degrees) and after northern solstice (Ls 305-340 degrees), which produces elevated atmospheric dust loadings during these periods [5,6,7]. These pre- and post-solstice increases in dust loading are thought to be associated with transient eddy activity in the northern hemisphere, with cross-equatorial transport of dust leading to enhanced dust lifting in the southern hemisphere [6]. Interactive dust cycle studies with Mars General Circulation Models (MGCMs) have included the lifting, transport, and sedimentation of radiatively active dust. Although the predicted global dust loadings from these simulations capture some aspects of the observed dust cycle, there are marked differences between the simulated and observed dust cycles [8,9,10]. Most notably, the maximum dust loading is robustly predicted by models to occur near northern winter solstice and is due to dust lifting associated with downslope flows on the flanks of the Hellas basin. Thus far, models have had difficulty simulating the observed pre- and post-solstice peaks in dust loading.

  12. Kelvin waves and ozone Kelvin waves in the quasi-biennial oscillation and semiannual oscillation: A simulation by a high-resolution chemistry-coupled general circulation model

    NASA Astrophysics Data System (ADS)

    Watanabe, Shingo; Takahashi, Masaaki

    2005-09-01

    Equatorial Kelvin waves and ozone Kelvin waves were simulated by a T63L250 chemistry-coupled general circulation model with a high vertical resolution (300 m). The model produces a realistic quasi-biennial oscillation (QBO) and a semiannual oscillation (SAO) in the equatorial stratosphere. The QBO has a period slightly longer than 2 years, and the SAO shows rapid reversals from westerly to easterly regimes and gradual descents of westerlies. Results for the zonal wave number 1 slow and fast Kelvin waves are discussed. Structure of the waves and phase relationships between temperature and ozone perturbations coincide well with satellite observations made by LIMS, CLAES, and MLS. They are generally in phase (antiphase) in the lower (upper) stratosphere as theoretically expected. The fast Kelvin waves in the temperature and ozone are dominant in the upper stratosphere because the slow Kelvin waves are effectively filtered by the QBO westerly. In this simulation, the fast Kelvin waves encounter their critical levels in the upper stratosphere when zonal asymmetry of the SAO westerly is enhanced by an intrusion of the extratropical planetary waves. In addition to the critical level filtering effect, modulations of wave properties by background winds are evident near easterly and westerly shears associated with the QBO and SAO. Enhancement of wave amplitude in the QBO westerly shear is well coincident with radiosonde observations. Increase/decrease of vertical wavelength in the QBO easterly/westerly is obvious in this simulation, which is consistent with the linear wave theory. Shortening of wave period due to the descending QBO westerly shear zone is demonstrated for the first time. Moreover, dominant periods during the QBO westerly phase are longer than those during the QBO easterly phase for both the slow and fast Kelvin waves.

  13. Generalization of the paraxial trajectory method for the analysis of non-paraxial rays: simulation program G-optk for electron gun characterization.

    PubMed

    Fujita, Shin; Takebe, Masahiro; Ushio, Wataru; Shimoyama, Hiroshi

    2010-01-01

    The paraxial trajectory method has been generalized for application to the cathode rays inside electron guns. The generalized method can handle rays that initially make a large angle with the optical axis with satisfactory accuracy. The key to the success of the generalization is the adoption of the trigonometric function sine for the trajectory slope specification, instead of the conventional use of the tangent. Formulas have been derived to relate the ray conditions (position and slope of the ray at reference planes) on the cathode to those at the crossover plane using third-order polynomial functions. Some of the polynomial coefficients can be used as optical parameters in the characterization of electron sources; the electron gun focal length gives a quantitative estimate of both the crossover size and the angular current intensity. An electron gun simulation program, G-optk, has been developed based on the mathematical formulations presented in the article. The program calculates the principal paraxial trajectories and the relevant optical parameters from axial potentials and fields. It gives electron-optical-column designers a clear physical picture of the electron gun much faster than conventional ray-tracing methods. PMID:19654189

  14. Flight simulation study to determine MLS lateral course width requirements on final approach for general aviation. [runway conditions affecting microwave landing systems

    NASA Technical Reports Server (NTRS)

    Crumrine, R. J.

    1976-01-01

    An investigation of the effects of various lateral course widths and runway lengths for manual CAT I Microwave Landing System instrument approaches was carried out with instrument rated pilots in a General Aviation simulator. Data are presented on the lateral dispersion at the touchdown zone, and the middle and outer markers, for approaches to 3,000, 8,000 (and trial 12,000 foot) runway lengths with full scale angular lateral course widths of + or - 1.19 deg, + or - 2.35 deg, and + or - 3.63 deg. The distance from touchdown where the localizer deviation went to full scale was also recorded. Pilot acceptance was measured according to the Cooper-Harper rating system.

  15. Parental reflective functioning is associated with tolerance of infant distress but not general distress: Evidence for a specific relationship using a simulated baby paradigm

    PubMed Central

    Rutherford, Helena J.V.; Goldberg, Benjamin; Luyten, Patrick; Bridgett, David J.; Mayes, Linda C.

    2013-01-01

    Parental reflective functioning represents the capacity of a parent to think about their own and their child’s mental states and how these mental states may influence behavior. Here we examined whether this capacity as measured by the Parental Reflective Functioning Questionnaire relates to tolerance of infant distress by asking mothers (N=21) to soothe a life-like baby simulator (BSIM) that was inconsolable, crying for a fixed time period unless the mother chose to stop the interaction. Increasing maternal interest and curiosity in their child’s mental states, a key feature of parental reflective functioning, was associated with longer persistence times with the BSIM. Importantly, on a non-parent distress tolerance task, parental reflective functioning was not related to persistence times. These findings suggest that parental reflective functioning may be related to tolerance of infant distress, but not distress tolerance more generally, and thus may reflect specificity to parenting-specific persistence behavior. PMID:23906942

  16. A thermosphere-ionosphere-mesosphere-electrodynamics general circulation model (time-GCM): Equinox solar cycle minimum simulations (30-500 km)

    SciTech Connect

    Roble, R.G.; Ridley, E.C.

    1994-03-15

    A new simulation model of the mesosphere, thermosphere, and ionosphere with coupled electrodynamics has been developed and used to calculate the global circulation, temperature and compositional structure between 30-500 km for equinox, solar cycle minimum, geomagnetic quiet conditions. The model incorporates all of the features of the NCAR thermosphere-ionosphere-electrodynamics general circulation model (TIE-GCM) but the lower boundary has been extended downward from 97 to 30 km (10 mb) and it includes the physical and chemical processes appropriate for the mesosphere and upper stratosphere. The first simulation used Rayleigh friction to represent gravity wave drag in the middle atmosphere and although it was able to close the mesospheric jets it severely damped the diurnal tide. Reduced Rayleigh friction allowed the tide to penetrate to thermospheric heights but did not close the jets. A gravity wave parameterization developed by Fritts and Lu allows both features to exist simultaneously with the structure of tides and mean flow dependent upon the strength of the gravity wave source. The model calculates a changing dynamic structure with the mean flow and diurnal tide dominant in the mesosphere, the in-situ generated semi-diurnal tide dominating the lower thermosphere and an in-situ generated diurnal tide in the upper thermosphere. The results also show considerable interaction between dynamics and composition, especially atomic oxygen between 85 and 120 km. 31 refs., 3 figs.

  17. General Relativistic Magnetohydrodynamic Simulations of Jets from Black Hole Accretions Disks: Two-Component Jets Driven by Nonsteady Accretion of Magnetized Disks

    NASA Astrophysics Data System (ADS)

    Koide, Shinji; Shibata, Kazunari; Kudoh, Takahiro

    1998-03-01

    Radio observations have revealed compelling evidence for the existence of relativistic jets not only from active galactic nuclei but also from ``microquasars'' in our Galaxy. In the cores of these objects, it is believed that a black hole exists and that violent phenomena occur in the black hole magnetosphere, forming the relativistic jets. To simulate jet formation in the magnetosphere, we have developed a new general relativistic magnetohydrodynamic code. Using the code, we present a model of these relativistic jets in which magnetic fields penetrating the accretion disk around a black hole play a fundamental role in inducing nonsteady accretion and ejection of plasmas. According to our simulations, a jet is ejected from the close vicinity of a black hole (inside 3rS, where rS is the Schwarzschild radius) at a maximum speed of ~90% of the light velocity (i.e., a Lorentz factor of ~2). The jet has a two-layered shell structure consisting of a fast gas-pressure-driven jet in the inner part and a slow magnetically driven jet in the outer part, both of which are collimated by the global poloidal magnetic field penetrating the disk. The former jet is a result of a strong pressure increase due to shock formation in the disk through fast accretion flow (an ``advection-dominated disk'') inside 3rS, which has never been seen in nonrelativistic calculations.

  18. A general approach to the electronic spin relaxation of Gd(III) complexes in solutions. Monte Carlo simulations beyond the Redfield limit

    NASA Astrophysics Data System (ADS)

    Rast, S.; Fries, P. H.; Belorizky, E.; Borel, A.; Helm, L.; Merbach, A. E.

    2001-10-01

    The time correlation functions of the electronic spin components of a metal ion without orbital degeneracy in solution are computed. The approach is based on the numerical solution of the time-dependent Schrödinger equation for a stochastic perturbing Hamiltonian which is simulated by a Monte Carlo algorithm using discrete time steps. The perturbing Hamiltonian is quite general, including the superposition of both the static mean crystal field contribution in the molecular frame and the usual transient ligand field term. The Hamiltonian of the static crystal field can involve the terms of all orders, which are invariant under the local group of the average geometry of the complex. In the laboratory frame, the random rotation of the complex is the only source of modulation of this Hamiltonian, whereas an additional Ornstein-Uhlenbeck process is needed to describe the time fluctuations of the Hamiltonian of the transient crystal field. A numerical procedure for computing the electronic paramagnetic resonance (EPR) spectra is proposed and discussed. For the [Gd(H2O)8]3+ octa-aqua ion and the [Gd(DOTA)(H2O)]- complex [DOTA=1,4,7,10-tetrakis(carboxymethyl)-1,4,7,10-tetraazacyclo dodecane] in water, the predictions of the Redfield relaxation theory are compared with those of the Monte Carlo approach. The Redfield approximation is shown to be accurate for all temperatures and for electronic resonance frequencies at and above X-band, justifying the previous interpretations of EPR spectra. At lower frequencies the transverse and longitudinal relaxation functions derived from the Redfield approximation display significantly faster decays than the corresponding simulated functions. The practical interest of this simulation approach is underlined.
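
    The transient crystal-field fluctuations described above are driven by an Ornstein-Uhlenbeck process simulated on discrete time steps. The Python sketch below shows one standard exact-update scheme for generating such a process and recovering its correlation time; the correlation time and amplitude used are illustrative, not the fitted Gd(III) values, and the code does not attempt the spin-Hamiltonian propagation itself.

      import numpy as np

      # Hedged sketch: discrete-time Monte Carlo generation of an Ornstein-Uhlenbeck
      # process, as would modulate the transient crystal-field amplitude.
      rng = np.random.default_rng(2)
      tau_v = 1.0e-12        # correlation time of the transient crystal field (s), illustrative
      sigma = 1.0            # r.m.s. amplitude of the fluctuation (arbitrary units)
      dt, n_steps = 5.0e-14, 10000

      decay = np.exp(-dt / tau_v)
      kick = sigma * np.sqrt(1.0 - decay**2)
      x = np.empty(n_steps)
      x[0] = sigma * rng.standard_normal()
      for n in range(1, n_steps):
          # exact one-step update of the OU process on a discrete time grid
          x[n] = decay * x[n - 1] + kick * rng.standard_normal()

      # the empirical autocorrelation time should recover tau_v
      acf = np.correlate(x - x.mean(), x - x.mean(), mode="full")[n_steps - 1:]
      acf /= acf[0]
      tau_est = dt * np.argmax(acf < np.exp(-1.0))
      print(f"estimated correlation time: {tau_est:.2e} s (target {tau_v:.2e} s)")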

  19. Mechanisms of Diurnal Precipitation over the United States Great Plains: A Cloud-Resolving Model Simulation

    NASA Technical Reports Server (NTRS)

    Lee, M.-I.; Choi, I.; Tao, W.-K.; Schubert, S. D.; Kang, I.-K.

    2010-01-01

    The mechanisms of summertime diurnal precipitation in the US Great Plains were examined with the two-dimensional (2D) Goddard Cumulus Ensemble (GCE) cloud-resolving model (CRM). The model was constrained by the observed large-scale background state and surface flux derived from the Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Program's Intensive Observing Period (IOP) data at the Southern Great Plains (SGP). The model, when continuously forced by realistic surface flux and large-scale advection, simulates reasonably well the temporal evolution of the observed rainfall episodes, particularly for the strongly forced precipitation events. However, the model exhibits a deficiency for the weakly forced events driven by diurnal convection. Additional tests were run with the GCE model in order to discriminate between the mechanisms that determine daytime and nighttime convection. In these tests, the model was constrained with the same repeating diurnal variation in the large-scale advection and/or surface flux. The results indicate that it is primarily the surface heat and moisture flux that is responsible for the development of deep convection in the afternoon, whereas the large-scale upward motion and associated moisture advection play an important role in preconditioning nocturnal convection. In the nighttime, high clouds are continuously built up through their interaction and feedback with long-wave radiation, eventually initiating deep convection from the boundary layer. Without these upper-level destabilization processes, the model tends to produce only daytime convection in response to boundary layer heating. This study suggests that the correct simulation of the diurnal variation in precipitation requires that the free-atmospheric destabilization mechanisms resolved in the CRM simulation be adequately parameterized in current general circulation models (GCMs), many of which are overly sensitive to the parameterized boundary layer heating.

  20. SU-E-T-254: Optimization of GATE and PHITS Monte Carlo Code Parameters for Uniform Scanning Proton Beam Based On Simulation with FLUKA General-Purpose Code

    SciTech Connect

    Kurosu, K; Takashina, M; Koizumi, M; Das, I; Moskvin, V

    2014-06-01

    Purpose: Monte Carlo codes are becoming important tools for proton beam dosimetry. However, the relationships between the customizing parameters of the GATE and PHITS codes and the resulting percentage depth dose (PDD) have not been reported; here they are studied for PDD and proton range in comparison with the FLUKA code and experimental data. Methods: The beam delivery system of the Indiana University Health Proton Therapy Center was modeled for the uniform scanning beam in FLUKA and transferred identically into GATE and PHITS. This computational model was built from the blueprint and validated with the commissioning data. The three parameters evaluated are the maximum step size, the cut-off energy, and the physical and transport models. The dependence of the PDDs on the customizing parameters was compared with the published results of previous studies. Results: The optimal parameters for the simulation of the whole beam delivery system were defined by referring to the calculation results obtained with each parameter. Although the PDDs from FLUKA and the experimental data show good agreement, those of GATE and PHITS obtained with our optimal parameters show a minor discrepancy. The measured proton range R90 was 269.37 mm, compared to calculated ranges of 269.63 mm, 268.96 mm, and 270.85 mm with FLUKA, GATE, and PHITS, respectively. Conclusion: We evaluated the dependence of the PDDs obtained with the GATE and PHITS general-purpose Monte Carlo codes on the customizing parameters using the whole computational model of the treatment nozzle. The optimal parameters for the simulation were then defined by referring to the calculation results. The physical model, particle transport mechanics, and the different geometry-based descriptions need accurate customization in the three simulation codes to agree with experimental data for artifact-free Monte Carlo simulation. This study was supported by Grants-in-Aid for Cancer Research (H22-3rd Term Cancer Control-General-043) from the Ministry of Health

  1. Extreme events-driven glassy behaviour in granular media

    NASA Astrophysics Data System (ADS)

    D'Anna, G.; Mayor, P.; Gremaud, G.; Barrat, A.; Loreto, V.

    2003-01-01

    Motivated by recent experiments on the approach to jamming of a weakly forced granular medium using an immersed torsion oscillator (Nature, 413 (2001) 407), we propose a simple model which relates the microscopic dynamics to macroscopic rearrangements and accounts for the following experimental facts: 1) the control parameter is the spatial amplitude of the perturbation and not its reduced peak acceleration; 2) a Vogel-Fulcher-Tammann-like form for the relaxation time. The model draws a parallel between macroscopic rearrangements in the system and extreme events whose probability of occurrence (and thus the typical relaxation time) is estimated using extreme-value statistics. The range of validity of this description in terms of the control parameter is discussed, as well as the existence of other regimes.

  2. Event-driven neural integration and synchronicity in analog VLSI.

    PubMed

    Yu, Theodore; Park, Jongkil; Joshi, Siddharth; Maier, Christoph; Cauwenberghs, Gert

    2012-01-01

    Synchrony and temporal coding in the central nervous system, as the source of local field potentials and complex neural dynamics, arises from precise timing relationships between spike action population events across neuronal assemblies. Recently it has been shown that coincidence detection based on spike event timing also presents a robust neural code invariant to additive incoherent noise from desynchronized and unrelated inputs. We present spike-based coincidence detection using integrate-and-fire neural membrane dynamics along with pooled conductance-based synaptic dynamics in a hierarchical address-event architecture. Within this architecture, we encode each synaptic event with parameters that govern synaptic connectivity, synaptic strength, and axonal delay with additional global configurable parameters that govern neural and synaptic temporal dynamics. Spike-based coincidence detection is observed and analyzed in measurements on a log-domain analog VLSI implementation of the integrate-and-fire neuron and conductance-based synapse dynamics. PMID:23366007

  3. Toward a theory of the general-anesthetic-induced phase transition of the cerebral cortex. II. Numerical simulations, spectral entropy, and correlation times.

    PubMed

    Steyn-Ross, D A; Steyn-Ross, M L; Wilcocks, L C; Sleigh, J W

    2001-07-01

    In our two recent papers [M.L. Steyn-Ross et al., Phys. Rev. E 60, 7299 (1999); 64, 011917 (2001)] we presented clinical evidence for a general anesthetic-induced phase change in the cerebral cortex, and showed how the significant features of the cortical phase change (biphasic power surge, spectral energy redistribution, "heat capacity" divergence), could be explained using a stochastic single-macrocolumn model of the cortex. The model predictions were based on rather strong "adiabatic" assumptions which assert that the mean-field excitatory and inhibitory macrocolumn voltages are "slow" variables whose equilibration times are much longer than those of the input "currents" that drive the macrocolumn. In the present paper we test the adiabatic assumption by running numerical simulations of the stochastic differential equations. These simulations confirm the number and nature of the steady-state solutions, the growth of fluctuation power at transition, and the redistribution of spectral energy towards lower frequencies. We use spectral entropy to quantify these changes in the power spectral density, and to show that the spectral entropy should decrease markedly at the point of transition. This prediction agrees with recent clinical findings by Viertiö-Oja and colleagues [J. Clinical Monitoring Computing 16, 60 (2000)]. Our modeling work shows that there is an inverse relationship between spectral entropy H and correlation time T of the soma-voltage fluctuations: H inversely proportional to (ln T). In a theoretical analysis we prove that this proportionality becomes exact for an ideal Lorentzian process. These findings suggest that by monitoring the changes in EEG correlation time, it should be possible to track changes in the state of patient consciousness. PMID:11461299
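
    Spectral entropy, as used above, is the Shannon entropy of the normalized power spectral density. The short Python sketch below computes it for ideal Lorentzian spectra with different correlation times, illustrating the inverse relationship between entropy and correlation time noted in the abstract; the frequency band and the correlation-time values are arbitrary illustrative choices.

      import numpy as np

      # Hedged sketch: spectral entropy of Lorentzian spectra S(f) = 1 / (1 + (2*pi*f*T)^2)
      # for several correlation times T (illustrative values only).
      freqs = np.linspace(0.1, 100.0, 2000)          # Hz, analysis band (illustrative)

      def spectral_entropy(psd):
          p = psd / psd.sum()                        # normalise the PSD to a distribution
          return -np.sum(p * np.log(p))

      for T in (0.005, 0.02, 0.1, 0.5):              # correlation times in seconds
          S = 1.0 / (1.0 + (2.0 * np.pi * freqs * T) ** 2)
          print(f"T = {T:5.3f} s  ->  spectral entropy H = {spectral_entropy(S):.3f}")
      # Longer correlation times concentrate power at low frequencies, so H falls,
      # consistent with the inverse relationship between H and ln(T) quoted above.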

  4. Role of a cumulus parameterization scheme in simulating atmospheric circulation and rainfall in the nine-layer Goddard Laboratory for Atmospheres General Circulation Model

    NASA Technical Reports Server (NTRS)

    Sud, Y. C.; Chao, Winston C.; Walker, G. K.

    1992-01-01

    The influence of a cumulus convection scheme on the simulated atmospheric circulation and hydrologic cycle is investigated by means of a coarse version of the GCM. Two sets of integrations, each containing an ensemble of three summer simulations, were produced. The ensemble sets of control and experiment simulations are compared and differentially analyzed to determine the influence of a cumulus convection scheme on the simulated circulation and hydrologic cycle. The results show that cumulus parameterization has a very significant influence on the simulated circulation and precipitation. The upper-level condensation heating over the ITCZ is much smaller for the experiment simulations as compared to the control simulations; correspondingly, the Hadley and Walker cells for the control simulations are also weaker and are accompanied by a weaker Ferrel cell in the Southern Hemisphere. Overall, the difference fields show that experiment simulations (without cumulus convection) produce a cooler and less energetic atmosphere.

  5. Implementation of Sub-Cooling of Cryogenic Propellants by Injection of Non-condensing Gas to the Generalized Fluid Systems Simulation Program (GFSSP)

    NASA Technical Reports Server (NTRS)

    Huggett, Daniel J.; Majumdar, Alok

    2013-01-01

    Cryogenic propellants are readily heated when used. This poses a problem for rocket engine efficiency and effective boot-strapping of the engine, as seen in the "hot" LOX (Liquid Oxygen) problem on the S-1 stage of the Saturn vehicle. In order to remedy this issue, cryogenic fluids were found to be sub-cooled by injection of a warm non-condensing gas. Experimental results show that the mechanism behind the sub-cooling is evaporative cooling; a sub-cooled temperature difference of approximately 13 deg F below the saturation temperature has been demonstrated [1]. The phenomenon of sub-cooling of cryogenic propellants by a non-condensing gas cannot readily be modeled with the Generalized Fluid Systems Simulation Program (GFSSP) [2]. GFSSP is a thermal-fluid program used to analyze a wide variety of systems that are directly impacted by thermodynamics and fluid mechanics. In order to model this phenomenon, additional capabilities had to be added to GFSSP in the form of a FORTRAN-coded sub-routine that calculates the temperature of the sub-cooled fluid. Once this was accomplished, the sub-routine was used in a GFSSP model created to replicate an experiment that was conducted to validate the GFSSP results.

  6. Chondrocyte Deformations as a Function of Tibiofemoral Joint Loading Predicted by a Generalized High-Throughput Pipeline of Multi-Scale Simulations

    PubMed Central

    Sibole, Scott C.; Erdemir, Ahmet

    2012-01-01

    Cells of the musculoskeletal system are known to respond to mechanical loading, and chondrocytes within the cartilage are no exception. However, understanding how joint-level loads relate to cell-level deformations, e.g. in the cartilage, is not a straightforward task. In this study, a multi-scale analysis pipeline was implemented to post-process the results of a macro-scale finite element (FE) tibiofemoral joint model to provide joint-mechanics-based displacement boundary conditions to micro-scale cellular FE models of the cartilage, for the purpose of characterizing chondrocyte deformations in relation to tibiofemoral joint loading. It was possible to identify the load distribution within the knee among its tissue structures and ultimately within the cartilage among its extracellular matrix, pericellular environment, and resident chondrocytes. Various cellular deformation metrics (aspect ratio change, volumetric strain, cellular effective strain, and maximum shear strain) were calculated. To illustrate further utility of this multi-scale modeling pipeline, two micro-scale cartilage constructs were considered: an idealized single cell at the centroid of a 100×100×100 μm block commonly used in past research studies, and an anatomically based representation of the middle zone of tibiofemoral cartilage (an 11-cell model of the same volume). In both cases, chondrocytes experienced amplified deformations compared to those at the macro-scale, predicted by simulating one body weight compressive loading on the tibiofemoral joint. In the 11-cell case, all cells experienced less deformation than in the single-cell case, and deformation also varied considerably among the cells residing in the same block. The coupling method proved to be highly scalable due to micro-scale model independence that allowed for exploitation of distributed memory computing architecture. The method's generalized nature also allows for substitution of any macro-scale and/or micro

  7. Using the Flow-3D General Moving Object Model to Simulate Coupled Liquid Slosh - Container Dynamics on the SPHERES Slosh Experiment: Aboard the International Space Station

    NASA Technical Reports Server (NTRS)

    Schulman, Richard; Kirk, Daniel; Marsell, Brandon; Roth, Jacob; Schallhorn, Paul

    2013-01-01

    The SPHERES Slosh Experiment (SSE) is a free-floating experimental platform developed for the acquisition of long-duration liquid slosh data aboard the International Space Station (ISS). The data sets collected will be used to benchmark numerical models to aid in the design of rocket and spacecraft propulsion systems. Utilizing two SPHERES Satellites, the experiment will be moved through different maneuvers designed to induce liquid slosh in the experiment's internal tank. The SSE has a total of twenty-four thrusters to move the experiment. In order to design slosh-generating maneuvers, a parametric study with three maneuver types was conducted using the General Moving Object (GMO) model in Flow-3D. The three types of maneuvers are a translation maneuver, a rotation maneuver, and a combined rotation-translation maneuver. The effectiveness of each maneuver in generating slosh is determined by the deviation of the experiment's trajectory as compared to a dry mass trajectory. To fully capture the effect of liquid re-distribution on experiment trajectory, each thruster is modeled as an independent force point in the Flow-3D simulation. This is accomplished by modifying the total number of independent forces in the GMO model from the standard five to twenty-four. Results demonstrate that the most effective slosh-generating maneuvers for all motions occur when SSE thrusters are producing the highest changes in SSE acceleration. The results also demonstrate that several centimeters of trajectory deviation between the dry and slosh cases occur during the maneuvers; while these deviations seem small, they are measurable by SSE instrumentation.

  8. Subsurface Transport Over Reactive Multiphases (STORM): A general, coupled, nonisothermal multiphase flow, reactive transport, and porous medium alteration simulator, Version 2 user's guide

    SciTech Connect

    DH Bacon; MD White; BP McGrail

    2000-03-07

    The Hanford Site, in southeastern Washington State, has been used extensively to produce nuclear materials for the US strategic defense arsenal by the Department of Energy (DOE) and its predecessors, the US Atomic Energy Commission and the US Energy Research and Development Administration. A large inventory of radioactive and mixed waste has accumulated in 177 buried single- and double-shell tanks. Liquid waste recovered from the tanks will be pretreated to separate the low-activity fraction from the high-level and transuranic wastes. Vitrification is the leading option for immobilization of these wastes, expected to produce approximately 550,000 metric tons of Low Activity Waste (LAW) glass. This total tonnage, based on a nominal Na2O loading of 20% by weight, is destined for disposal in a near-surface facility. Before disposal of the immobilized waste can proceed, the DOE must approve a performance assessment, a document that describes the impacts, if any, of the disposal facility on public health and environmental resources. Studies have shown that release rates of radionuclides from the glass waste form by reaction with water determine the impacts of the disposal action more than any other independent parameter. This report describes the latest accomplishments in the development of a computational tool, Subsurface Transport Over Reactive Multiphases (STORM), Version 2, a general, coupled non-isothermal multiphase flow and reactive transport simulator. The underlying mathematics in STORM describe the rate of change of the solute concentrations of pore water in a variably saturated, non-isothermal porous medium, and the alteration of waste forms, packaging materials, backfill, and host rocks.

  9. Simulating influence of QBO phase on planetary waves during a stratospheric warming in a general circulation model of the middle atmosphere

    NASA Astrophysics Data System (ADS)

    Koval, Andrey; Gavrilov, Nikolai; Pogoreltsev, Alexander; Savenkova, Elena

    2016-04-01

    One of the important factors in the dynamical interaction between the lower and upper atmosphere is the transfer of energy and momentum by atmospheric internal gravity waves. For numerical modeling of the general circulation and thermal regime of the middle and upper atmosphere, it is important to take into account the mean-flow accelerations and heating rates produced by dissipating internal waves. The quasi-biennial oscillation (QBO) of the zonal mean flow at low latitudes and stratospheric heights can affect the propagation conditions of planetary waves. We perform numerical simulations of the global atmospheric circulation for initial conditions corresponding to years with westerly and easterly QBO phases, focusing on the changes in the amplitudes of stationary planetary waves (SPWs) and traveling normal atmospheric modes (NAMs) during SSW events under the different QBO phases. For these experiments, we use the middle and upper atmosphere model (MUAM) of the global circulation. Planetary-wave waveguide theory describes the atmospheric regions in which the background wind and temperature allow wave propagation; a refractive index for PWs has been introduced, and the strongest planetary wave propagation is found in areas of large positive values of this index. Another important PW characteristic is the Eliassen-Palm flux (EP flux). These characteristics are useful tools for visualizing PW propagation conditions. Sudden stratospheric warming (SSW) events have a significant influence on the formation of weather anomalies and climate changes in the troposphere, and may also affect dynamical and energy processes in the upper atmosphere. Major SSW events involve significant temperature rises (up to 30-40 K) at altitudes of 30-50 km, accompanied by corresponding decreases, or reversals, of the climatological eastward zonal winds in the stratosphere.

  10. General regression neural network and Monte Carlo simulation model for survival and growth of Salmonella on raw chicken skin as a function of serotype, temperature and time for use in risk assessment

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A general regression neural network and Monte Carlo simulation model for predicting survival and growth of Salmonella on raw chicken skin as a function of serotype (Typhimurium, Kentucky, Hadar), temperature (5 to 50C) and time (0 to 8 h) was developed. Poultry isolates of Salmonella with natural r...
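
    A general regression neural network is, in essence, Gaussian-kernel-weighted regression over the training set. The Python sketch below shows that generic prediction rule; the synthetic temperature/time inputs and log-CFU outputs are placeholders and have no connection to the study's chicken-skin data or its Monte Carlo component.

      import numpy as np

      # Hedged sketch of a general regression neural network (GRNN), i.e. Specht-style
      # Nadaraya-Watson kernel regression, on synthetic placeholder data.
      rng = np.random.default_rng(3)

      def grnn_predict(X_train, y_train, X_query, sigma=0.5):
          """Predict y at X_query as a Gaussian-kernel-weighted mean of y_train."""
          d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
          w = np.exp(-d2 / (2.0 * sigma ** 2))
          return (w @ y_train) / w.sum(axis=1)

      # inputs: [temperature (scaled), time (scaled)]; output: log CFU change (synthetic)
      X_train = rng.uniform(0.0, 1.0, size=(200, 2))
      y_train = 2.0 * X_train[:, 0] * X_train[:, 1] + 0.1 * rng.standard_normal(200)

      X_query = np.array([[0.2, 0.5], [0.9, 0.9]])
      print("predicted log CFU change:", grnn_predict(X_train, y_train, X_query))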

  11. Green's Function Reaction Dynamics—An Exact and Efficient Way to Simulate Intracellular Pattern Formation

    NASA Astrophysics Data System (ADS)

    Sokolowski, Thomas; Bossen, Laurens; Miedema, Thomas; Becker, Nils

    2010-09-01

    Active transport of intracellular cargo on cytoskeletal polymers via ATP-driven motor proteins plays a key role in establishing well-defined spatial patterns of functional intracellular components, which can range from proteins to big organelles like mitochondria. It is the interplay between active transport, diffusive movement in the cytosol and the geometry of the cell and its cytoskeleton that finally determines the distribution of the transported objects. To analyze such phenomena we extend our Green's Function Reaction Dynamics (GFRD) framework to allow for an exact event-driven simulation of active transport on microtubules and interactions with the cell membrane.

  12. The impact of surface dust source exhaustion on the martian dust cycle, dust storms and interannual variability, as simulated by the MarsWRF General Circulation Model

    NASA Astrophysics Data System (ADS)

    Newman, Claire E.; Richardson, Mark I.

    2015-09-01

    Observations of albedo on Mars suggest a largely invariant long-term mean surface dust distribution, but also reveal variations on shorter (seasonal to annual) timescales, particularly associated with major dust storms. We study the impact of finite surface dust availability on the dust cycle in the MarsWRF General Circulation Model (GCM), which uses radiatively active dust with parameterized 'dust devil' and wind stress dust lifting to enable the spontaneous production of dust storms, and tracks budgets of dust lifting, deposition, and total surface dust inventory. We seek a self-consistent, long-term 'steady state' dust cycle for present day Mars, consisting of (a) a surface dust distribution that varies from year to year but is constant longer-term and in balance with current dust redistribution processes, and (b) a fixed set of dust lifting parameters that continue to produce major storms for this distribution of surface dust. We relax the GCM's surface dust inventory toward this steady state using an iterative process, in which dust lifting rate parameters are increased as progressively more surface sites are exhausted of dust. Late in the equilibration process, the GCM exhibits quasi-steady state behavior in which few new surface grid points are exhausted during a 60 year period with constant dust lifting parameters. Complex regional-scale dust redistribution occurs on time-scales from less than seasonal to decadal, and the GCM generates regional to global dust storms with many realistic features. These include merging regional storms, cross-equatorial storms, and the timing and location of several storm types, though very early major storms and large amounts of late storm activity are not reproduced. Surface dust availability in key onset and growth source regions appears vital for 'early' major storms, with replenishment of these regions required before another large storm can occur, whereas 'late' major storms appear primarily dependent on atmospheric

  13. VERSE - Virtual Equivalent Real-time Simulation

    NASA Technical Reports Server (NTRS)

    Zheng, Yang; Martin, Bryan J.; Villaume, Nathaniel

    2005-01-01

    Distributed real-time simulations provide important timing validation and hardware-in-the-loop results for the spacecraft flight software development cycle. Occasionally, the need for higher fidelity modeling and more comprehensive debugging capabilities - combined with a limited amount of computational resources - calls for a non real-time simulation environment that mimics the real-time environment. By creating a non real-time environment that accommodates simulations and flight software designed for a multi-CPU real-time system, we can save development time, cut mission costs, and reduce the likelihood of errors. This paper presents such a solution: Virtual Equivalent Real-time Simulation Environment (VERSE). VERSE turns the real-time operating system RTAI (Real-time Application Interface) into an event driven simulator that runs in virtual real time. Designed to keep the original RTAI architecture as intact as possible, and therefore inheriting RTAI's many capabilities, VERSE was implemented with remarkably little change to the RTAI source code. This small footprint together with use of the same API allows users to easily run the same application in both real-time and virtual time environments. VERSE has been used to build a workstation testbed for NASA's Space Interferometry Mission (SIM PlanetQuest) instrument flight software. With its flexible simulation controls and inexpensive setup and replication costs, VERSE will become an invaluable tool in future mission development.

  14. PENGEOM-A general-purpose geometry package for Monte Carlo simulation of radiation transport in material systems defined by quadric surfaces

    NASA Astrophysics Data System (ADS)

    Almansa, Julio; Salvat-Pujol, Francesc; Díaz-Londoño, Gloria; Carnicer, Artur; Lallena, Antonio M.; Salvat, Francesc

    2016-02-01

    The Fortran subroutine package PENGEOM provides a complete set of tools to handle quadric geometries in Monte Carlo simulations of radiation transport. The material structure where radiation propagates is assumed to consist of homogeneous bodies limited by quadric surfaces. The PENGEOM subroutines (a subset of the PENELOPE code) track particles through the material structure, independently of the details of the physics models adopted to describe the interactions. Although these subroutines are designed for detailed simulations of photon and electron transport, where all individual interactions are simulated sequentially, they can also be used in mixed (class II) schemes for simulating the transport of high-energy charged particles, where the effect of soft interactions is described by the random-hinge method. The definition of the geometry and the details of the tracking algorithm are tailored to optimize simulation speed. The use of fuzzy quadric surfaces minimizes the impact of round-off errors. The provided software includes a Java graphical user interface for editing and debugging the geometry definition file and for visualizing the material structure. Images of the structure are generated by using the tracking subroutines and, hence, they describe the geometry actually passed to the simulation code.
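
    PENGEOM itself is a Fortran package with body hierarchies and fuzzy surfaces; the geometric core of such tracking, the distance along a straight flight path to a quadric surface x^T A x + b.x + c = 0, reduces to solving a quadratic equation. A standalone sketch of that single step is shown below (not PENGEOM code; the function name and tolerances are invented).

        import numpy as np

        def distance_to_quadric(origin, direction, A, b, c):
            """Smallest positive s with (o+s*d)^T A (o+s*d) + b.(o+s*d) + c = 0,
            or None if the flight path never crosses the surface.  Sketch only."""
            o = np.asarray(origin, float)
            d = np.asarray(direction, float)
            qa = d @ A @ d
            qb = 2.0 * (o @ A @ d) + b @ d
            qc = o @ A @ o + b @ o + c
            if abs(qa) < 1e-14:                  # degenerate case: effectively a plane
                if abs(qb) < 1e-14:
                    return None
                s = -qc / qb
                return s if s > 1e-12 else None
            disc = qb * qb - 4.0 * qa * qc
            if disc < 0.0:
                return None
            roots = sorted([(-qb - np.sqrt(disc)) / (2.0 * qa),
                            (-qb + np.sqrt(disc)) / (2.0 * qa)])
            positive = [s for s in roots if s > 1e-12]
            return positive[0] if positive else None

        # Example: unit sphere x^2 + y^2 + z^2 - 1 = 0, particle at (-3,0,0) moving along +x.
        A = np.eye(3)
        b = np.zeros(3)
        print(distance_to_quadric([-3, 0, 0], [1, 0, 0], A, b, -1.0))   # -> 2.0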

  15. LOADING SIMULATION PROGRAM C

    EPA Science Inventory

    LSPC is the Loading Simulation Program in C++, a watershed modeling system that includes streamlined Hydrologic Simulation Program Fortran (HSPF) algorithms for simulating hydrology, sediment, and general water quality on land as well as a simplified stream transport model. LSPC ...

  16. The TRIDEC Virtual Tsunami Atlas - customized value-added simulation data products for Tsunami Early Warning generated on compute clusters

    NASA Astrophysics Data System (ADS)

    Löwe, P.; Hammitzsch, M.; Babeyko, A.; Wächter, J.

    2012-04-01

    The capability to set up and utilize the CCE has been implemented by the project Collaborative, Complex, and Critical Decision Processes in Evolving Crises (TRIDEC), funded under the European Union's FP7. TRIDEC focuses on real-time intelligent information management in Earth management. The addressed challenges include the design and implementation of a robust and scalable service infrastructure supporting the integration and utilisation of existing resources with accelerated generation of large volumes of data. These include sensor systems, geo-information repositories, simulations and data fusion tools. Additionally, TRIDEC adopts enhancements of Service Oriented Architecture (SOA) principles in terms of Event Driven Architecture (EDA) design. As a next step, the implemented CCE's services to generate derived and customized simulation products are foreseen to be provided via an EDA service for on-demand processing for specific threat parameters and to accommodate model improvements.

  17. General Dentist

    MedlinePlus

    What Is a General Dentist? Related articles: General Dentists; FAGD and MAGD: What Do These Awards Mean? Reviewed: January 2012.

  18. Detection of stress corrosion cracking and general corrosion of mild steel in simulated defense nuclear waste solutions using electrochemical noise analysis

    NASA Astrophysics Data System (ADS)

    Edgemon, G. L.; Danielson, M. J.; Bell, G. E. C.

    1997-06-01

    Underground waste tanks fabricated from mild steel store more than 253 million liters of high level radioactive waste from 50 years of weapons production at the Hanford Site. The probable modes of corrosion failures are reported as nitrate stress corrosion cracking and pitting. In an effort to develop a waste tank corrosion monitoring system, laboratory tests were conducted to characterize electrochemical noise data for both uniform and localized corrosion of mild steel and other materials in simulated waste environments. The simulated waste solutions were primarily composed of ammonium nitrate or sodium nitrate and were held at approximately 97°C. The electrochemical noise of freely corroding specimens was monitored, recorded and analyzed for periods ranging between 10 and 500 h. At the end of each test period, the specimens were examined to correlate electrochemical noise data with corrosion damage. Data characteristic of uniform corrosion and stress corrosion cracking are presented.
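
    The paper's analysis procedures are not reproduced here; a common first-pass statistic in electrochemical noise work is the noise resistance, the ratio of the standard deviations of the potential and current fluctuations computed over detrended windows of the record. A hedged sketch with synthetic data follows; the function name, window length and noise amplitudes are placeholders, not the authors' values.

        import numpy as np

        def noise_resistance(potential, current, window=1024):
            """Noise resistance R_n = std(E)/std(I) per detrended window.
            Illustrative only; full ECN analysis also examines spectra and transients."""
            e = np.asarray(potential, float)
            i = np.asarray(current, float)
            n = min(len(e), len(i)) // window
            idx = np.arange(window)
            rn = []
            for k in range(n):
                sl = slice(k * window, (k + 1) * window)
                ew = e[sl] - np.polyval(np.polyfit(idx, e[sl], 1), idx)   # remove drift
                iw = i[sl] - np.polyval(np.polyfit(idx, i[sl], 1), idx)
                rn.append(np.std(ew) / np.std(iw))
            return np.array(rn)

        # Synthetic demonstration: ~1 mV potential noise, ~10 nA current noise.
        rng = np.random.default_rng(1)
        E = 1e-3 * rng.standard_normal(8192)
        I = 1e-8 * rng.standard_normal(8192)
        print(noise_resistance(E, I))    # values on the order of 1e5 ohm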

  19. Optimization of energy usage in textile finishing operations. Part I. The simulation of batch dyehouse activities with a general purpose computer model

    SciTech Connect

    Beard, J.N. Jr.; Rice, W.T. Jr.

    1980-01-01

    A project to develop a mathematical model capable of simulating the activities in a typical batch dyeing process in the textile industry is described. The model could be used to study the effects of changes in dye-house operations, and to determine effective guidelines for optimal dyehouse performance. The computer model is of a hypothetical dyehouse. The appendices contain a listing of the computer program, sample computer inputs and outputs, and instructions for using the model. (MCW)

  20. The northern wintertime divergence extrema at 200 hPa and MSLP cyclones as simulated in the AMIP integration by the ECMWF general circulation model

    SciTech Connect

    Boyle, J.S. )

    1994-01-01

    Divergence and convergence centers at 200 hPa and mean sea level pressure (MSLP) cyclones are located every 6 hours for a 10-year GCM simulation for the boreal winters from 1980 to 1988. The simulation used the observed monthly mean SST for the decade. Analysis of the frequency, locations, and strengths of these centers and cyclones gives insight into the dynamical response of the model to the varying SST. It is found that (1) the model produces reasonable climatologies of upper-level divergence and MSLP cyclones. (2) The model distribution of anomalies of divergence/convergence centers and MSLP cyclones is consistent with available observations for the 1982-83 and 1986-87 El Nino events. (3) The tropical Indian Ocean is the region of greatest divergence activity and interannual variability in the model. (4) The variability of the divergence centers is greater than that of the convergence centers. (5) Strong divergence centers are chiefly oceanic events in the midlatitudes but are more land based in the tropics, except in the Indian Ocean. (6) Locations of divergence/convergence centers can be a useful tool for the intercomparison of global atmospheric simulations.

  1. The RD53 collaboration's SystemVerilog-UVM simulation framework and its general applicability to design of advanced pixel readout chips

    NASA Astrophysics Data System (ADS)

    Marconi, S.; Conti, E.; Placidi, P.; Christiansen, J.; Hemperek, T.

    2014-10-01

    The foreseen Phase 2 pixel upgrades at the LHC have very challenging requirements for the design of hybrid pixel readout chips. A versatile pixel simulation platform is an essential development tool for the design, verification and optimization of both the system architecture and the pixel chip building blocks (Intellectual Properties, IPs). This work is focused on the simulation and verification environment named VEPIX53, built using the SystemVerilog language and the Universal Verification Methodology (UVM) class library in the framework of the RD53 Collaboration. The environment supports pixel chips at different levels of description: its reusable components feature the generation of different classes of parameterized input hits to the pixel matrix, monitoring of pixel chip inputs and outputs, conformity checks between predicted and actual outputs and collection of statistics on system performance. The environment has been tested performing a study of shared architectures of the trigger latency buffering section of pixel chips. A fully shared architecture and a distributed one have been described at behavioral level and simulated; the resulting memory occupancy statistics and hit loss rates have subsequently been compared.
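
    VEPIX53 itself is written in SystemVerilog/UVM and models the chip at several abstraction levels; purely to illustrate the kind of statistic mentioned above (occupancy and hit loss of a shared trigger-latency buffer), a back-of-the-envelope Monte Carlo in Python is sketched below. The hit rate, latency and buffer depth are invented numbers, not RD53 parameters.

        import numpy as np

        def hit_loss_fraction(hit_rate, latency, depth, steps=200_000, seed=0):
            """Fraction of hits lost because a shared latency buffer of given depth
            is full.  Each stored hit occupies one slot for `latency` time steps.
            Toy model only -- not the RD53 architecture."""
            rng = np.random.default_rng(seed)
            buffer = []                    # release times of occupied slots
            lost = total = 0
            for t in range(steps):
                buffer = [r for r in buffer if r > t]      # free expired slots
                for _ in range(rng.poisson(hit_rate)):     # Poisson hit arrivals
                    total += 1
                    if len(buffer) < depth:
                        buffer.append(t + latency)
                    else:
                        lost += 1
            return lost / max(total, 1)

        # Offered load = hit_rate * latency = 8 occupied slots on average.
        print(hit_loss_fraction(hit_rate=0.5, latency=16, depth=8))    # noticeable loss
        print(hit_loss_fraction(hit_rate=0.5, latency=16, depth=16))   # nearly lossless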

  2. Simulating carbon dioxide exchange rates of deciduous tree species: evidence for a general pattern in biochemical changes and water stress response

    PubMed Central

    Reynolds, Robert F.; Bauerle, William L.; Wang, Ying

    2009-01-01

    Background and Aims Deciduous trees have a seasonal carbon dioxide exchange pattern that is attributed to changes in leaf biochemical properties. However, it is not known if the pattern in leaf biochemical properties – maximum Rubisco carboxylation (Vcmax) and electron transport (Jmax) – differ between species. This study explored whether a general pattern of changes in Vcmax, Jmax, and a standardized soil moisture response accounted for carbon dioxide exchange of deciduous trees throughout the growing season. Methods The model MAESTRA was used to examine Vcmax and Jmax of leaves of five deciduous trees, Acer rubrum ‘Summer Red’, Betula nigra, Quercus nuttallii, Quercus phellos and Paulownia elongata, and their response to soil moisture. MAESTRA was parameterized using data from in situ measurements on organs. Linking the changes in biochemical properties of leaves to the whole tree, MAESTRA integrated the general pattern in Vcmax and Jmax from gas exchange parameters of leaves with a standardized soil moisture response to describe carbon dioxide exchange throughout the growing season. The model estimates were tested against measurements made on the five species under both irrigated and water-stressed conditions. Key Results Measurements and modelling demonstrate that the seasonal pattern of biochemical activity in leaves and soil moisture response can be parameterized with straightforward general relationships. Over the course of the season, differences in carbon exchange between measured and modelled values were within 6–12 % under well-watered conditions and 2–25 % under water stress conditions. Hence, a generalized seasonal pattern in the leaf-level physiological change of Vcmax and Jmax, and a standardized response to soil moisture was sufficient to parameterize carbon dioxide exchange for large-scale evaluations. Conclusions Simplification in parameterization of the seasonal pattern of leaf biochemical activity and soil moisture response of

  3. Optimization of GATE and PHITS Monte Carlo code parameters for spot scanning proton beam based on simulation with FLUKA general-purpose code

    NASA Astrophysics Data System (ADS)

    Kurosu, Keita; Das, Indra J.; Moskvin, Vadim P.

    2016-01-01

    Spot scanning, owing to its superior dose-shaping capability, provides unsurpassed dose conformity, in particular for complex targets. However, the robustness of the delivered dose distribution and prescription has to be verified. Monte Carlo (MC) simulation has the potential to generate significant advantages for high-precision particle therapy, especially for media containing inhomogeneities. However, the choice of computational parameters inherent in the MC simulation codes GATE, PHITS and FLUKA, previously examined for the uniform scanning proton beam, needs to be evaluated; that is, the relationship between the input parameters and the calculation results should be carefully scrutinized. The objective of this study was, therefore, to determine the optimal parameters for the spot scanning proton beam for both the GATE and PHITS codes by using data from FLUKA simulation as a reference. The proton beam scanning system of the Indiana University Health Proton Therapy Center was modeled in FLUKA, and the geometry was subsequently and identically transferred to GATE and PHITS. Although the beam transport is managed by the spot scanning system, the spot location is always set at the center of a water phantom of 600 × 600 × 300 mm3, which is placed after the treatment nozzle. The percentage depth dose (PDD) is computed along the central axis using 0.5 × 0.5 × 0.5 mm3 voxels in the water phantom. The PDDs and the proton ranges obtained with several computational parameters are then compared to those of FLUKA, and optimal parameters are determined from the accuracy of the proton range, suppressed dose deviation, and computational time minimization. Our results indicate that the optimized parameters are different from those for uniform scanning, suggesting that a gold standard for setting computational parameters for any proton therapy application cannot be determined consistently, since the impact of the parameter settings depends on the proton irradiation technique. We

  4. Simulation of cell rolling and adhesion on surfaces in shear flow: general results and analysis of selectin-mediated neutrophil adhesion.

    PubMed Central

    Hammer, D A; Apte, S M

    1992-01-01

    The receptor-mediated adhesion of cells to ligand-coated surfaces in viscous shear flow is an important step in many physiological processes, such as the neutrophil-mediated inflammatory response, lymphocyte homing, and tumor cell metastasis. This paper describes a calculational method which simulates the interaction of a single cell with a ligand-coated surface under flow. The cell is idealized as a microvilli-coated hard sphere covered with adhesive springs. The distribution of microvilli on the cell surface, the distribution of receptors on microvilli tips, and the forward and reverse reaction between receptor and ligand are all simulated using random number sampling of appropriate probability functions. The velocity of the cell at each time step in the simulation results from a balance of hydrodynamic, colloidal and bonding forces; the bonding force is derived by summing the individual contributions of each receptor-ligand tether. The model can simulate the effect of many parameters on adhesion, such as the number of receptors on microvilli tips, the density of ligand, the rates of reaction between receptor and ligand, the stiffness of the resulting receptor-ligand springs, the response of springs to strain, and the magnitude of the bulk hydrodynamic stresses. The model can successfully recreate the entire range of expected and observed adhesive phenomena, from completely unencumbered motion, to rolling, to transient attachment, to firm adhesion. Also, the method can generate meaningful statistical measures of adhesion, including the mean and variance in velocity, rate constants for cell attachment and detachment, and the frequency of adhesion. We find a critical modulating parameter of adhesion is the fractional spring slippage, which relates the strain of a bond to its rate of breakage; the higher the slippage, the faster the breakage for the same strain. Our analysis of neutrophil adhesive behavior on selectin-coated (CD62-coated) surfaces in viscous shear
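
    The full adhesive-dynamics method couples hydrodynamic mobility functions, microvillus geometry and receptor chemistry; its stochastic core, however, is Monte Carlo formation and force-dependent (Bell-type) breakage of spring-like bonds, combined with a force balance that sets the cell velocity. A drastically simplified, dimensionless one-dimensional caricature of that core is sketched below; every parameter value is invented and no hydrodynamic detail is retained.

        import numpy as np

        rng = np.random.default_rng(2)

        # Dimensionless toy parameters (invented for illustration only)
        F_SHEAR = 50.0      # hydrodynamic force on the cell
        DRAG    = 1.0       # translational drag coefficient
        KAPPA   = 10.0      # bond spring constant
        K_ON    = 2.0       # formation rate per free receptor near the wall
        K_OFF0  = 1.0       # unstressed breakage rate
        F_BETA  = 5.0       # Bell-model force scale (the "slippage")
        N_REC   = 30        # receptors in the contact zone
        DT      = 1e-3

        x_cell   = 0.0
        anchors  = []       # wall positions where currently bound receptors attached
        free_rec = N_REC
        trace    = []

        for step in range(20000):
            forces = KAPPA * (x_cell - np.array(anchors)) if anchors else np.array([])
            # Force balance: drag * v = shear force - sum of bond spring forces
            v = max((F_SHEAR - forces.sum()) / DRAG, 0.0)   # no backward motion in this toy
            x_cell += v * DT
            trace.append(v)

            # Bell-model breakage: rate grows exponentially with the bond force
            if anchors:
                p_break = 1.0 - np.exp(-K_OFF0 * np.exp(np.maximum(forces, 0.0) / F_BETA) * DT)
                keep = rng.random(len(anchors)) >= p_break
                free_rec += int((~keep).sum())
                anchors = list(np.array(anchors)[keep])

            # Formation: each free receptor may bind, anchoring at the current position
            n_new = rng.binomial(free_rec, 1.0 - np.exp(-K_ON * DT))
            anchors.extend([x_cell] * n_new)
            free_rec -= n_new

        print("mean rolling velocity:", np.mean(trace))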

  5. Towards a general method for estimating the unbalanced magnetic pull in mixed eccentricities motion including sufficiently large eccentricities in a hydropower generator and their validation against EM simulations

    NASA Astrophysics Data System (ADS)

    Calleecharan, Yogeshwarsing; Jauregui, Ricardo; Aidanpää, Jan-Olov

    2013-08-01

    Electromagnetic (EM) analysis of hydropower generators is common practice, but rotor whirling is little studied. This paper suggests a novel semi-analytical method for estimating the steady-state unbalanced magnetic pull (UMP) when the rotor centre is undergoing mixed eccentricities motion. The ability to estimate the UMP for mixed eccentricities motion in finite element method (FEM)-based modelling software packages is rare. The proposed methodology in its formulation takes advantage of the fact that a purely dynamic eccentricity motion, including non-synchronous whirling, and a purely static eccentricity motion are more amenable to implement in existing FEM-based EM modelling software products for UMP estimation. After these initial separate UMP results are obtained, the proposed method can be applied to virtually any mixed eccentricities motion case, up to sufficiently large eccentricities, for quick analysis instead of running the mixed eccentricities simulations directly in a FEM-based software package. Good agreement is demonstrated between the UMP from actual EM mixed eccentricities motion simulations in a commercial FEM-based software package and the UMP estimates of the novel method, for a wide range of eccentricities that may commonly occur in practice. A modified feature selective validation (FSV) method, the FSV-UPC, is applied to assess the similarities and the differences in the UMP computations.

  6. A hybrid formalism combining fluctuating hydrodynamics and generalized Langevin dynamics for the simulation of nanoparticle thermal motion in an incompressible fluid medium

    PubMed Central

    Uma, B.; Eckmann, D.M.; Ayyaswamy, P.S.; Radhakrishnan, R.

    2012-01-01

    A novel hybrid scheme based on Markovian fluctuating hydrodynamics of the fluid and a non-Markovian Langevin dynamics with the Ornstein-Uhlenbeck noise perturbing the translational and rotational equations of motion of the nanoparticle is employed to study the thermal motion of a nanoparticle in an incompressible Newtonian fluid medium. A direct numerical simulation adopting an arbitrary Lagrangian-Eulerian (ALE) based finite element method (FEM) is employed in simulating the thermal motion of a particle suspended in the fluid confined in a cylindrical vessel. The results for thermal equilibrium between the particle and the fluid are validated by comparing the numerically predicted temperature of the nanoparticle with that obtained from the equipartition theorem. The nature of the hydrodynamic interactions is verified by comparing the velocity autocorrelation function (VACF) and mean squared displacement (MSD) with well-known analytical results. For nanoparticle motion in an incompressible fluid, the fluctuating hydrodynamics approach resolves the hydrodynamics correctly but does not impose the correct equipartition of energy based on the nanoparticle mass because of the added mass of the displaced fluid. In contrast, the Langevin approach with an appropriate memory is able to show the correct equipartition of energy, but not the correct short- and long-time hydrodynamic correlations. Using our hybrid approach presented here, we show for the first time, that we can simultaneously satisfy the equipartition theorem and the (short- and long-time) hydrodynamic correlations. In effect, this results in a thermostat that also simultaneously preserves the true hydrodynamic correlations. The significance of this result is that our new algorithm provides a robust computational approach to explore nanoparticle motion in arbitrary geometries and flow fields, while simultaneously enabling us to study carrier adhesion mediated by biological reactions (receptor
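
    The hybrid scheme above couples a finite-element fluctuating-hydrodynamics solver to a generalized Langevin equation and cannot be condensed to a few lines; its non-Markovian ingredient, a Langevin equation with an exponential memory kernel driven by Ornstein-Uhlenbeck noise, does admit a compact Markovian embedding, sketched below with the fluid coupling omitted and arbitrary parameter values. The long-time average of m v^2 should approach kT, which is the equipartition property emphasized in the abstract.

        import numpy as np

        # Markovian embedding of a generalized Langevin equation with the
        # exponential memory kernel K(t) = (gamma/tau) * exp(-t/tau) and
        # fluctuation-dissipation-consistent Ornstein-Uhlenbeck noise:
        #   m dv/dt = z
        #   dz/dt   = -z/tau - (gamma/tau) v + sqrt(2 kT gamma)/tau * xi(t)
        # Illustration only; the paper couples this to a fluctuating-
        # hydrodynamics finite element solver, which is omitted here.

        rng = np.random.default_rng(3)
        kT, m, gamma, tau = 1.0, 1.0, 1.0, 0.5
        dt, nsteps = 1e-3, 400_000

        v, z, v2 = 0.0, 0.0, 0.0
        for _ in range(nsteps):
            xi = rng.standard_normal()
            v += (z / m) * dt
            z += (-z / tau - (gamma / tau) * v) * dt \
                 + (np.sqrt(2.0 * kT * gamma) / tau) * np.sqrt(dt) * xi
            v2 += v * v

        print("<m v^2> approx", m * v2 / nsteps, "(equipartition target:", kT, ")")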

  7. General-Aviation Control Loader

    NASA Technical Reports Server (NTRS)

    Baltrus, Daniel W.; Albang, Leroy F.; Hallinger, John A.; Burge, W. Wayne

    1988-01-01

    Artificial-feel system designed for general-aviation flight simulators. New system developed to directly replace lateral and longitudinal controls in a general-aviation cockpit for use in flight-simulation research. Using a Peaucellier cell to convert linear motion to rotary motion, the control-loading system provides realistic control-force feedback to cockpit wheel and column controls.

  8. Simulations of the HDO and H2O-18 atmospheric cycles using the NASA GISS general circulation model - Sensitivity experiments for present-day conditions

    NASA Technical Reports Server (NTRS)

    Jouzel, Jean; Koster, R. D.; Suozzo, R. J.; Russell, G. L.; White, J. W. C.

    1991-01-01

    Incorporating the full geochemical cycles of stable water isotopes (HDO and H2O-18) into an atmospheric general circulation model (GCM) allows an improved understanding of global delta-D and delta-O-18 distributions and might even allow an analysis of the GCM's hydrological cycle. A detailed sensitivity analysis using the NASA/Goddard Institute for Space Studies (GISS) model II GCM is presented that examines the nature of isotope modeling. The tests indicate that delta-D and delta-O-18 values in nonpolar regions are not strongly sensitive to details in the model precipitation parameterizations. This result, while implying that isotope modeling has limited potential use in the calibration of GCM convection schemes, also suggests that certain necessarily arbitrary aspects of these schemes are adequate for many isotope studies. Deuterium excess, a second-order variable, does show some sensitivity to precipitation parameterization and thus may be more useful for GCM calibration.

  9. A Steady State and Quasi-Steady Interface Between the Generalized Fluid System Simulation Program and the SINDA/G Thermal Analysis Program

    NASA Technical Reports Server (NTRS)

    Schallhorn, Paul; Majumdar, Alok; Tiller, Bruce

    2001-01-01

    A general purpose, one dimensional fluid flow code is currently being interfaced with the thermal analysis program SINDA/G. The flow code, GFSSP, is capable of analyzing steady state and transient flow in a complex network. The flow code is capable of modeling several physical phenomena including compressibility effects, phase changes, body forces (such as gravity and centrifugal) and mixture thermodynamics for multiple species. The addition of GFSSP to SINDA/G provides a significant improvement in convective heat transfer modeling for SINDA/G. The interface development is conducted in multiple phases. This paper describes the first phase of the interface, which allows for steady and quasi-steady (unsteady solid, steady fluid) conjugate heat transfer modeling.

  10. Self-Consistent Theory of Elastic Properties of Strongly Anharmonic Crystals I:. General Treatment and Comparison with Computer Simulations and Experiment for Fcc Crystals

    NASA Astrophysics Data System (ADS)

    Zubov, V. I.; Sanchez, J. F.; Tretiakov, N. P.; Yusef, A. E.

    Based on the correlative method of an unsymmetrized self-consistent field [16-23], we have derived expressions for elastic constant tensors of strongly anharmonic crystals of cubic symmetry. Each isothermal elastic constant consists of four terms. The first one is the zeroth approximation containing the main anharmonicity (up to the fourth order). The second term is the quantum correction; it is important at temperatures below the Debye characteristic temperature. Finally, the third and fourth terms are the perturbation theory corrections which take into account the influence of the correlations in atomic displacements from the lattice points and that of the high-order anharmonicity, respectively. These corrections appear to be small up to the melting temperatures. All our calculations can be performed on a personal computer with very little computing time. A comparison with certain Monte Carlo simulations and with experimental data for Ar and Kr is made; for the most part, our results lie between the two. The quasi-harmonic approximation fails at high temperatures, confirming once again the crucial role of strong anharmonicity.

  11. Surgical Simulation

    PubMed Central

    Sutherland, Leanne M.; Middleton, Philippa F.; Anthony, Adrian; Hamdorf, Jeffrey; Cregan, Patrick; Scott, David; Maddern, Guy J.

    2006-01-01

    Objective: To evaluate the effectiveness of surgical simulation compared with other methods of surgical training. Summary Background Data: Surgical simulation (with or without computers) is attractive because it avoids the use of patients for skills practice and provides relevant technical training for trainees before they operate on humans. Methods: Studies were identified through searches of MEDLINE, EMBASE, the Cochrane Library, and other databases until April 2005. Included studies must have been randomized controlled trials (RCTs) assessing any training technique using at least some elements of surgical simulation, which reported measures of surgical task performance. Results: Thirty RCTs with 760 participants were able to be included, although the quality of the RCTs was often poor. Computer simulation generally showed better results than no training at all (and than physical trainer/model training in one RCT), but was not convincingly superior to standard training (such as surgical drills) or video simulation (particularly when assessed by operative performance). Video simulation did not show consistently better results than groups with no training at all, and there were not enough data to determine if video simulation was better than standard training or the use of models. Model simulation may have been better than standard training, and cadaver training may have been better than model training. Conclusions: While there may be compelling reasons to reduce reliance on patients, cadavers, and animals for surgical training, none of the methods of simulated training has yet been shown to be better than other forms of surgical training. PMID:16495690

  12. Future changes and uncertainties in Asian precipitation simulated by multiphysics and multi-sea surface temperature ensemble experiments with high-resolution Meteorological Research Institute atmospheric general circulation models (MRI-AGCMs)

    NASA Astrophysics Data System (ADS)

    Endo, Hirokazu; Kitoh, Akio; Ose, Tomoaki; Mizuta, Ryo; Kusunoki, Shoji

    2012-08-01

    This study focuses on projecting future changes in mean and extreme precipitation in Asia, and discusses their uncertainties. Time-slice experiments using a 20-km-mesh atmospheric general circulation model (AGCM) were performed both for the present day (1979-2003) and the future (2075-2099). To assess the uncertainty of the projections, 12 ensemble projections (i.e., combinations of 3 different cumulus schemes and 4 different sea surface temperature (SST) change patterns) were conducted using 60-km-mesh AGCMs. For the present-day simulations, the models successfully reproduced the pattern and amount of mean and extreme precipitation, although the model with the Arakawa-Schubert (AS) cumulus scheme underestimated the amount of extreme precipitation. For the future climate simulations, in South Asia and Southeast Asia, mean and extreme precipitation generally increase, but their changes show marked differences among the projections, suggesting some uncertainty in their changes over these regions. In East Asia, northwestern China and Bangladesh, in contrast, mean and extreme precipitation show consistent increases among the projections, suggesting their increases are reliable within this model framework. Further investigation by analysis of variance (ANOVA) revealed that the uncertainty in the precipitation changes in South Asia and Southeast Asia derives mainly from differences in the cumulus schemes, with an exception in the Maritime Continent where the uncertainty originates mainly from the differences in the SST pattern.

  13. Numerical simulation on slabs dislocation of Zipingpu concrete faced rockfill dam during the Wenchuan earthquake based on a generalized plasticity model.

    PubMed

    Xu, Bin; Zhou, Yang; Zou, Degao

    2014-01-01

    After the Wenchuan earthquake in 2008, dislocation between slabs of different construction stages was found on the Zipingpu concrete faced rockfill dam (CFRD), with a maximum value of 17 cm. This is a new damage pattern that had not occurred in previous seismic damage investigations. Slab dislocation gravely affects the seepage control system of the CFRD and even the safety of the dam. Therefore, investigation of the mechanism and development of slab dislocation is meaningful for the engineering design of CFRDs. In this study, based on the previous studies by the authors, the slab dislocation phenomenon of the Zipingpu CFRD was investigated. The procedure and constitutive models of materials used for the finite element analysis are consistent. The water elevation, the angle, and the strength of the construction joints were among the major variables of the investigation. The results indicate that a finite element procedure based on a modified generalized plasticity model and a perfect elastoplastic interface model can be used to evaluate the dislocation damage of face slabs of a concrete faced rockfill dam during an earthquake. The effects of the water elevation, the angle, and the strength of the construction joints are issues of major design concern under seismic loading. PMID:25013857

  14. Hybrid guiding-centre/full-orbit simulations in non-axisymmetric magnetic geometry exploiting general criterion for guiding-centre accuracy

    NASA Astrophysics Data System (ADS)

    Pfefferlé, D.; Graves, J. P.; Cooper, W. A.

    2015-05-01

    To identify under what conditions guiding-centre or full-orbit tracing should be used, an estimation of the spatial variation of the magnetic field is proposed, not only taking into account gradient and curvature terms but also parallel currents and the local shearing of field-lines. The criterion is derived for general three-dimensional magnetic equilibria including stellarator plasmas. Details are provided on how to implement it in cylindrical coordinates and in flux coordinates that rely on the geometric toroidal angle. A means of switching between guiding-centre and full-orbit equations at first order in Larmor radius with minimal discrepancy is shown. Techniques are applied to a MAST (mega amp spherical tokamak) helical core equilibrium in which the inner kinked flux-surfaces are tightly compressed against the outer axisymmetric mantle and where the parallel current peaks at the nearly rational surface. This is put in relation with the simpler situation B(x, y, z) = B0[sin(kx)ey + cos(kx)ez], for which full orbits and lowest order drifts are obtained analytically. In the kinked equilibrium, the full orbits of NBI fast ions are solved numerically and shown to follow helical drift surfaces. This result partially explains the off-axis redistribution of neutral beam injection fast particles in the presence of MAST long-lived modes (LLM).
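
    For the simple analytic field quoted above, B(x, y, z) = B0[sin(kx) e_y + cos(kx) e_z], full orbits are easy to integrate numerically; a standard Boris-rotation pusher is sketched below (this is not the code used in the paper, and the field strength, wavenumber and initial conditions are arbitrary).

        import numpy as np

        def B_field(x, B0=1.0, k=0.2):
            """Analytic sheared field from the abstract: B = B0 [sin(kx) e_y + cos(kx) e_z]."""
            return B0 * np.array([0.0, np.sin(k * x[0]), np.cos(k * x[0])])

        def boris_push(x, v, q_over_m, dt, nsteps):
            """Full-orbit integration with the standard Boris rotation (no electric field).
            Illustration only."""
            traj = np.empty((nsteps, 3))
            for n in range(nsteps):
                B = B_field(x)
                t = 0.5 * q_over_m * dt * B          # half-step rotation vector
                s = 2.0 * t / (1.0 + t @ t)
                v_prime = v + np.cross(v, t)
                v = v + np.cross(v_prime, s)         # rotated velocity
                x = x + v * dt
                traj[n] = x
            return traj

        # A test particle gyrating and drifting in the helical field.
        traj = boris_push(x=np.array([0.0, 0.0, 0.0]),
                          v=np.array([0.1, 0.0, 1.0]),
                          q_over_m=1.0, dt=0.05, nsteps=2000)
        print("final position:", traj[-1])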

  15. Numerical Simulation on Slabs Dislocation of Zipingpu Concrete Faced Rockfill Dam during the Wenchuan Earthquake Based on a Generalized Plasticity Model

    PubMed Central

    Xu, Bin; Zou, Degao

    2014-01-01

    After the Wenchuan earthquake in 2008, dislocation between slabs of different construction stages was found on the Zipingpu concrete faced rockfill dam (CFRD), with a maximum value of 17 cm. This is a new damage pattern that had not occurred in previous seismic damage investigations. Slab dislocation gravely affects the seepage control system of the CFRD and even the safety of the dam. Therefore, investigation of the mechanism and development of slab dislocation is meaningful for the engineering design of CFRDs. In this study, based on the previous studies by the authors, the slab dislocation phenomenon of the Zipingpu CFRD was investigated. The procedure and constitutive models of materials used for the finite element analysis are consistent. The water elevation, the angle, and the strength of the construction joints were among the major variables of the investigation. The results indicate that a finite element procedure based on a modified generalized plasticity model and a perfect elastoplastic interface model can be used to evaluate the dislocation damage of face slabs of a concrete faced rockfill dam during an earthquake. The effects of the water elevation, the angle, and the strength of the construction joints are issues of major design concern under seismic loading. PMID:25013857

  16. Fast spot-based multiscale simulations of granular drainage

    SciTech Connect

    Rycroft, Chris H.; Wong, Yee Lok; Bazant, Martin Z.

    2009-05-22

    We develop a multiscale simulation method for dense granular drainage, based on the recently proposed spot model, where the particle packing flows by local collective displacements in response to diffusing "spots" of interstitial free volume. By comparing with discrete-element method (DEM) simulations of 55,000 spheres in a rectangular silo, we show that the spot simulation is able to approximately capture many features of drainage, such as packing statistics, particle mixing, and flow profiles. The spot simulation runs two to three orders of magnitude faster than DEM, making it an appropriate method for real-time control or optimization. We demonstrate extensions for modeling particle heaping and avalanching at the free surface, and for simulating the boundary layers of slower flow near walls. We show that the spot simulations are robust and flexible, by demonstrating that they can be used in both event-driven and fixed timestep approaches, and showing that the elastic relaxation step used in the model can be applied much less frequently and still create good results.
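
    The published spot model is calibrated against DEM and includes an elastic relaxation step that restores the packing; its basic kinematics, free-volume "spots" random-walking upward from the orifice while nearby particles move a small fraction of the spot displacement in the opposite direction, can be caricatured as follows. Silo dimensions, spot radius and the influence factor are invented for the illustration.

        import numpy as np

        rng = np.random.default_rng(4)

        # Toy 2D spot-model kinematics (not the calibrated model of the paper);
        # silo boundaries are ignored for simplicity.
        W, H = 20.0, 40.0                                    # silo width and height
        particles = rng.uniform([0.0, 0.0], [W, H], size=(2000, 2))
        y_initial = particles[:, 1].copy()
        SPOT_R, INFLUENCE, STEP = 2.0, 0.05, 0.5             # invented parameters

        for spot in range(500):
            pos = np.array([W / 2.0, 0.0])                   # spot injected at the orifice
            while pos[1] < H:
                # biased upward random walk of the free-volume spot
                step = np.array([rng.normal(0.0, STEP),
                                 abs(rng.normal(0.0, STEP)) + 0.1])
                pos += step
                # particles inside the spot move opposite to the spot displacement
                near = np.linalg.norm(particles - pos, axis=1) < SPOT_R
                particles[near] -= INFLUENCE * step
            # (the real model applies an elastic relaxation step here)

        print("mean downward particle displacement:",
              np.mean(y_initial - particles[:, 1]))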

  17. General regression neural network and Monte Carlo simulation model for survival and growth of Salmonella on raw chicken skin as a function of serotype, temperature, and time for use in risk assessment.

    PubMed

    Oscar, Thomas P

    2009-10-01

    A general regression neural network (GRNN) and Monte Carlo simulation model for predicting survival and growth of Salmonella on raw chicken skin as a function of serotype (Typhimurium, Kentucky, and Hadar), temperature (5 to 50 degrees C), and time (0 to 8 h) was developed. Poultry isolates of Salmonella with natural resistance to antibiotics were used to investigate and model survival and growth from a low initial dose (<1 log) on raw chicken skin. Computer spreadsheet and spreadsheet add-in programs were used to develop and simulate a GRNN model. Model performance was evaluated by determining the percentage of residuals in an acceptable prediction zone from -1 log (fail-safe) to 0.5 log (fail-dangerous). The GRNN model had an acceptable prediction rate of 92% for dependent data (n = 464) and 89% for independent data (n = 116), which exceeded the performance criterion for model validation of 70% acceptable predictions. Relative contributions of independent variables were 16.8% for serotype, 48.3% for temperature, and 34.9% for time. Differences among serotypes were observed, with Kentucky exhibiting less growth than Typhimurium and Hadar, which had similar growth levels. Temperature abuse scenarios were simulated to demonstrate how the model can be integrated with risk assessment, and the most common output distribution obtained was Pearson5. This study demonstrated that it is important to include serotype as an independent variable in predictive models for Salmonella. Had a cocktail of serotypes Typhimurium, Kentucky, and Hadar been used for model development, the GRNN model would have provided overly fail-safe predictions of Salmonella growth on raw chicken skin contaminated with serotype Kentucky. Thus, by developing the GRNN model with individual strains and then modeling growth as a function of serotype prevalence, more accurate predictions were obtained. PMID:19833030
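
    A GRNN is, at its core, Nadaraya-Watson kernel regression over the training set, and coupling it to Monte Carlo sampling of the inputs turns a point prediction into an output distribution, as described above. The sketch below is generic and hedged: the training data are synthetic, the kernel width and the temperature/time scenario are invented, and nothing here reproduces the author's spreadsheet model.

        import numpy as np

        def grnn_predict(x_query, X_train, y_train, sigma=1.0):
            """General regression neural network = Nadaraya-Watson kernel regression:
            yhat(x) = sum_i y_i K(x, x_i) / sum_i K(x, x_i), Gaussian kernel width sigma."""
            d2 = np.sum((X_train - x_query) ** 2, axis=1)
            w = np.exp(-d2 / (2.0 * sigma ** 2))
            return np.dot(w, y_train) / np.sum(w)

        # Made-up training data: [serotype code, temperature (C), time (h)] -> log change
        rng = np.random.default_rng(5)
        X = rng.uniform([0, 5, 0], [2, 50, 8], size=(200, 3))
        y = 0.05 * (X[:, 1] - 25) * X[:, 2] / 8 + rng.normal(0, 0.1, 200)  # synthetic response

        # Monte Carlo scenario: uncertain storage temperature and time, fixed serotype
        samples = np.array([grnn_predict(np.array([1.0,
                                                   rng.normal(30, 5),     # temperature
                                                   rng.uniform(2, 6)]),   # time
                                         X, y, sigma=2.0)
                            for _ in range(1000)])
        print("predicted log change: mean %.2f, 95%% interval (%.2f, %.2f)"
              % (samples.mean(), *np.percentile(samples, [2.5, 97.5])))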

  18. Surgeon General

    MedlinePlus

    Step It Up! Help Make Our Communities Walkable: Surgeon General Vivek H. Murthy calls for making our communities more walkable.

  19. General anesthesia

    MedlinePlus

    General anesthesia is treatment with certain medicines that puts you into a deep sleep so you do not feel ... doctor called an anesthesiologist will give you the anesthesia. Sometimes, a certified and registered nurse anesthetist will ...

  20. A general simulation model for Stirling cycles

    SciTech Connect

    Schulz, S.; Schwendig, F.

    1996-01-01

    A mathematical model for the calculation of the Stirling cycle and of similar processes is presented. The model comprises a method to reproduce schematically any kind of process configuration, including free piston engines. The differential balance equations describing the process are solved by a stable integration algorithm. Heat transfer and pressure loss are calculated using new correlations, which consider the special conditions of the periodic compression/expansion and of the oscillating flow, respectively. A comparison between experimental data obtained by means of a test apparatus and calculated data shows good agreement.

  1. Research through simulation. [simulators and research applications at Langley

    NASA Technical Reports Server (NTRS)

    Copeland, J. L. (Compiler)

    1982-01-01

    The design of the computer operating system at Langley Research Center allows for concurrent support of time-critical simulations and background analytical computing on the same machine. Signal path interconnections between computing hardware and flight simulation hardware are provided to allow up to six simulation programs to be in operation at one time. Capabilities and research applications are discussed for the: (1) differential maneuvering simulator; (2) visual motion simulator; (3) terminal configured vehicle simulator; (4) general aviation aircraft simulator; (5) general purpose fixed based simulator; (6) transport simulator; (7) digital fly by wire simulator; (8) general purpose fighter simulator; and (9) the roll-up cockpit. The visual landing display system and graphics display system are described and their simulator support applications are listed.

  2. The complete general secretory pathway in gram-negative bacteria.

    PubMed Central

    Pugsley, A P

    1993-01-01

    The unifying feature of all proteins that are transported out of the cytoplasm of gram-negative bacteria by the general secretory pathway (GSP) is the presence of a long stretch of predominantly hydrophobic amino acids, the signal sequence. The interaction between signal sequence-bearing proteins and the cytoplasmic membrane may be a spontaneous event driven by the electrochemical energy potential across the cytoplasmic membrane, leading to membrane integration. The translocation of large, hydrophilic polypeptide segments to the periplasmic side of this membrane almost always requires at least six different proteins encoded by the sec genes and is dependent on both ATP hydrolysis and the electrochemical energy potential. Signal peptidases process precursors with a single, amino-terminal signal sequence, allowing them to be released into the periplasm, where they may remain or whence they may be inserted into the outer membrane. Selected proteins may also be transported across this membrane for assembly into cell surface appendages or for release into the extracellular medium. Many bacteria secrete a variety of structurally different proteins by a common pathway, referred to here as the main terminal branch of the GSP. This recently discovered branch pathway comprises at least 14 gene products. Other, simpler terminal branches of the GSP are also used by gram-negative bacteria to secrete a more limited range of extracellular proteins. PMID:8096622

  3. Generalized Parabolas

    ERIC Educational Resources Information Center

    Joseph, Dan; Hartman, Gregory; Gibson, Caleb

    2011-01-01

    In this article we explore the consequences of modifying the common definition of a parabola by considering the locus of all points equidistant from a focus and (not necessarily linear) directrix. The resulting derived curves, which we call "generalized parabolas," are often quite beautiful and possess many interesting properties. We show that…

  4. Simulation of Physical Experiments in Immersive Virtual Environments

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Wasfy, Tamer M.

    2001-01-01

    An object-oriented, event-driven, immersive virtual environment is described for the creation of virtual labs (VLs) for simulating physical experiments. Discussion focuses on a number of aspects of the VLs, including interface devices, software objects, and various applications. The VLs interface with output devices, including immersive stereoscopic screen(s) and stereo speakers, and a variety of input devices, including body tracking (head and hands), haptic gloves, wand, joystick, mouse, microphone, and keyboard. The VL incorporates the following types of primitive software objects: interface objects, support objects, geometric entities, and finite elements. Each object encapsulates a set of properties, methods, and events that define its behavior, appearance, and functions. A container object allows grouping of several objects. Applications of the VLs include viewing the results of the physical experiment, viewing a computer simulation of the physical experiment, simulation of the experiment's procedure, computational steering, and remote control of the physical experiment. In addition, the VL can be used as a risk-free (safe) environment for training. The implementation of virtual structures testing machines, virtual wind tunnels, and a virtual acoustic testing facility is described.

  5. Nonlocal General Relativity

    NASA Astrophysics Data System (ADS)

    Mashhoon, Bahram

    2014-12-01

    A brief account of the present status of the recent nonlocal generalization of Einstein's theory of gravitation is presented. The main physical assumptions that underlie this theory are described. We clarify the physical meaning and significance of Weitzenböck's torsion and emphasize its intimate relationship with the gravitational field, characterized by the Riemannian curvature of spacetime. In this theory, nonlocality can simulate dark matter; in fact, in the Newtonian regime, we recover the phenomenological Tohline-Kuhn approach to modified gravity. To account for the observational data regarding dark matter, nonlocality is associated with a characteristic length scale of order 1 kpc. The confrontation of nonlocal gravity with observation is briefly discussed.

  6. Modifications to Axially Symmetric Simulations Using New DSMC (2007) Algorithms

    NASA Technical Reports Server (NTRS)

    Liechty, Derek S.

    2008-01-01

    Several modifications aimed at improving physical accuracy are proposed for solving axially symmetric problems building on the DSMC (2007) algorithms introduced by Bird. Originally developed to solve nonequilibrium, rarefied flows, the DSMC method is now regularly used to solve complex problems over a wide range of Knudsen numbers. These new algorithms include features such as nearest neighbor collisions excluding the previous collision partners, separate collision and sampling cells, automatically adaptive variable time steps, a modified no-time counter procedure for collisions, and discontinuous and event-driven physical processes. Axially symmetric solutions require radial weighting for the simulated molecules since the molecules near the axis represent fewer real molecules than those farther away from the axis due to the difference in volume of the cells. In the present methodology, these radial weighting factors are continuous, linear functions that vary with the radial position of each simulated molecule. It is shown that how one defines the number of tentative collisions greatly influences the mean collision time near the axis. The method by which the grid is treated for axially symmetric problems also plays an important role near the axis, especially for scalar pressure. A new method to treat how the molecules are traced through the grid is proposed to alleviate the decrease in scalar pressure at the axis near the surface. Also, a modification to the duplication buffer is proposed to vary the duplicated molecular velocities while retaining the molecular kinetic energy and axially symmetric nature of the problem.
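
    The DSMC (2007) algorithms themselves are extensive; the specific point made above, radial weighting factors that vary continuously and linearly with a molecule's radial position, implies that a simulated molecule's weight changes when it moves radially, which is usually compensated by probabilistic duplication or removal so that the expected number of real molecules is conserved. A toy sketch of that bookkeeping is given below; the weight formula and constants are invented, not Bird's expressions.

        import numpy as np

        rng = np.random.default_rng(6)

        W0, R_REF = 1.0, 1.0    # weight at the axis and reference radius (invented)

        def radial_weight(r):
            """Continuous, linear radial weighting: a simulated molecule at radius r
            stands for W0 * (1 + r / R_REF) real molecules (toy form only)."""
            return W0 * (1.0 + r / R_REF)

        def copies_after_move(r_old, r_new):
            """When a molecule moves radially its weight changes; conserve the expected
            number of real molecules by probabilistic duplication or removal.
            Returns how many copies of the molecule survive the move (0, 1, 2, ...)."""
            ratio = radial_weight(r_old) / radial_weight(r_new)
            n = int(ratio)                      # guaranteed copies
            if rng.random() < ratio - n:        # fractional part -> extra copy sometimes
                n += 1
            return n

        # Moving inward (toward smaller weight) tends to duplicate; outward tends to remove.
        print(copies_after_move(r_old=2.0, r_new=0.5))   # often 2
        print(copies_after_move(r_old=0.5, r_new=2.0))   # often 0 or 1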

  7. Multiple processor accelerator for logic simulation

    SciTech Connect

    Catlin, G.M.

    1989-10-03

    This patent describes a computer system coupled to a plurality of users for implementing an event driven algorithm of each of the users. It comprises: a master processor coupled to the users for providing overall control of the computer system and for executing the event driven algorithm of each of the users, wherein the master processor further includes a master memory; a unidirectional ring bus coupled to the master processor; a plurality of processor modules coupled to the unidirectional ring bus, wherein the unidirectional ring bus transfers data among the processor modules and the master processor.

  8. I. Cognitive and instructional factors relating to students' development of personal models of chemical systems in the general chemistry laboratory II. Solvation in supercritical carbon dioxide/ethanol mixtures studied by molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Anthony, Seth

    Part I. Students' participation in inquiry-based chemistry laboratory curricula, and, in particular, engagement with key thinking processes in conjunction with these experiences, is linked with success at the difficult task of "transfer"---applying their knowledge in new contexts to solve unfamiliar types of problems. We investigate factors related to classroom experiences, student metacognition, and instructor feedback that may affect students' engagement in key aspects of the Model-Observe-Reflect-Explain (MORE) laboratory curriculum - production of written molecular-level models of chemical systems, describing changes to those models, and supporting those changes with reference to experimental evidence---and related behaviors. Participation in introductory activities that emphasize reviewing and critiquing of sample models and peers' models are associated with improvement in several of these key aspects. When students' self-assessments of the quality of aspects of their models are solicited, students are generally overconfident in the quality of their models, but these self-ratings are also sensitive to the strictness of grades assigned by their instructor. Furthermore, students who produce higher-quality models are also more accurate in their self-assessments, suggesting the importance of self-evaluation as part of the model-writing process. While the written feedback delivered by instructors did not have significant impacts on student model quality or self-assessments, students' resubmissions of models were significantly improved when students received "reflective" feedback prompting them to self-evaluate the quality of their models. Analysis of several case studies indicates that the content and extent of molecular-level ideas expressed in students' models are linked with the depth of discussion and content of discussion that occurred during the laboratory period, with ideas developed or personally committed to by students during the laboratory period being

  9. A generalized gyrokinetic Poisson solver

    SciTech Connect

    Lin, Z.; Lee, W.W.

    1995-03-01

    A generalized gyrokinetic Poisson solver has been developed, which employs local operations in the configuration space to compute the polarization density response. The new technique is based on the actual physical process of gyrophase-averaging. It is useful for nonlocal simulations using general geometry equilibrium. Since it utilizes local operations rather than the global ones such as FFT, the new method is most amenable to massively parallel algorithms.

  10. Simulation of the 1986-1987 El Niño and 1988 La Niña events with a free surface tropical Pacific Ocean general circulation model

    NASA Astrophysics Data System (ADS)

    Zhang, Rong-Hua; Endoh, Masahiro

    1994-04-01

    Observed atmospheric forcing fields over the period 1984-1989 force a free surface tropical Pacific Ocean general circulation model. Numerical simulation of the 1986-1987 El Niño and 1988 La Niña events is presented in the paper. Some quantitative comparisons between model time series and corresponding observations of sea level, and upper ocean current and temperature are made to verify the model performance. Diagnostic analyses of heat balance and available energy budget are given as well. The space-time evolution of various model variables demonstrates that the model produces interannual variations with reasonable success. Beginning in mid-1986, westerly wind over the western equatorial Pacific drives strong eastward surface currents which accomplish the massive transfer of warm surface water. The strong westerly wind in late 1986 excites the pronounced equatorial Kelvin waves, which propagate eastward toward the eastern and coastal Pacific where they depress the thermocline and raise sea level twice, and increase sea surface temperature. The eastern Pacific warming occurs primarily from the diminished cooling contribution of vertical advection, whereas in the central Pacific, eastward advection by anomalous zonal flows is the principal mechanism. The El Niño conditions in the eastern Pacific disappear in mid-1987, whereas they remain in the central and western Pacific until early 1988. Subsequently, the tropical Pacific Ocean rebounds to significant La Niña conditions. Available energy (AE) has a good phase relationship with respect to other variables characterized by warm and cold conditions. AE is anomalously high prior to a warm event, accompanying conversion from kinetic energy (KE) to available potential energy (APE). During the development of El Niño, although relaxation of trade wind reduces input of wind energy, the appearance of westerly wind in the western Pacific leads to a sharp increase in KE. This excites excessive conversion from APE to KE

  11. Functional Generalized Additive Models.

    PubMed

    McLean, Mathew W; Hooker, Giles; Staicu, Ana-Maria; Scheipl, Fabian; Ruppert, David

    2014-01-01

    We introduce the functional generalized additive model (FGAM), a novel regression model for association studies between a scalar response and a functional predictor. We model the link-transformed mean response as the integral with respect to t of F{X(t), t} where F(·,·) is an unknown regression function and X(t) is a functional covariate. Rather than having an additive model in a finite number of principal components as in Müller and Yao (2008), our model incorporates the functional predictor directly and thus our model can be viewed as the natural functional extension of generalized additive models. We estimate F(·,·) using tensor-product B-splines with roughness penalties. A pointwise quantile transformation of the functional predictor is also considered to ensure each tensor-product B-spline has observed data on its support. The methods are evaluated using simulated data and their predictive performance is compared with other competing scalar-on-function regression alternatives. We illustrate the usefulness of our approach through an application to brain tractography, where X(t) is a signal from diffusion tensor imaging at position, t, along a tract in the brain. In one example, the response is disease-status (case or control) and in a second example, it is the score on a cognitive test. R code for performing the simulations and fitting the FGAM can be found in supplemental materials available online. PMID:24729671

  12. Generalized gamma frailty model.

    PubMed

    Balakrishnan, N; Peng, Yingwei

    2006-08-30

    In this article, we present a frailty model using the generalized gamma distribution as the frailty distribution. It is a power generalization of the popular gamma frailty model. It also includes other frailty models such as the lognormal and Weibull frailty models as special cases. The flexibility of this frailty distribution makes it possible to detect a complex frailty distribution structure which may otherwise be missed. Due to the intractable integrals in the likelihood function and its derivatives, we propose to approximate the integrals either by Monte Carlo simulation or by a quadrature method and then determine the maximum likelihood estimates of the parameters in the model. We explore the properties of the proposed frailty model and the computation method through a simulation study. The study shows that the proposed model can potentially reduce errors in the estimation, and that it provides a viable alternative for correlated data. The merits of the proposed model are demonstrated in analysing the effects of sublingual nitroglycerin and oral isosorbide dinitrate on angina pectoris of coronary heart disease patients based on the data set in Danahy et al. (sustained hemodynamic and antianginal effect of high dose oral isosorbide dinitrate. Circulation 1977; 55:381-387). PMID:16220516
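
    The marginal likelihood of a frailty model involves an integral over the unobserved frailty, which the authors approximate by Monte Carlo simulation or quadrature. A hedged sketch of the Monte Carlo route for a single cluster is given below, using an exponential baseline hazard and an ordinary gamma frailty as a stand-in for the generalized gamma frailty of the paper; all parameter values are illustrative.

        import numpy as np

        def mc_cluster_likelihood(times, events, lam, shape, scale, ndraws=100_000, seed=7):
            """Monte Carlo approximation of one cluster's marginal likelihood
                L = E_Z[ prod_j (Z*lam)^d_j * exp(-Z*lam*t_j) ]
            with an exponential baseline hazard `lam` and a gamma-distributed frailty Z.
            Sketch only; the paper's model uses a generalized gamma frailty."""
            rng = np.random.default_rng(seed)
            z = rng.gamma(shape, scale, size=ndraws)          # frailty draws
            times = np.asarray(times, float)
            events = np.asarray(events, float)
            # log conditional likelihood of the cluster for each frailty draw
            loglik = (events.sum() * (np.log(z) + np.log(lam))
                      - z * lam * times.sum())
            return np.exp(loglik).mean()

        # Two correlated subjects in one cluster: an event at t=1.2, censoring at t=3.0
        print(mc_cluster_likelihood(times=[1.2, 3.0], events=[1, 0],
                                    lam=0.3, shape=2.0, scale=0.5))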

  13. Control Theory and Statistical Generalizations.

    ERIC Educational Resources Information Center

    Powers, William T.

    1990-01-01

    Contrasts modeling methods in control theory to the methods of statistical generalizations in empirical studies of human or animal behavior. Presents a computer simulation that predicts behavior based on variables (effort and rewards) determined by the invariable (desired reward). Argues that control theory methods better reflect relationships to…

  14. Generalized Causal Mediation Analysis

    PubMed Central

    Albert, Jeffrey M.; Nelson, Suchitra

    2010-01-01

    The goal of mediation analysis is to assess direct and indirect effects of a treatment or exposure on an outcome. More generally, we may be interested in the context of a causal model as characterized by a directed acyclic graph (DAG), where mediation via a specific path from exposure to outcome may involve an arbitrary number of links (or ‘stages’). Methods for estimating mediation (or pathway) effects are available for a continuous outcome and a continuous mediator related via a linear model, while for a categorical outcome or categorical mediator, methods are usually limited to two-stage mediation. We present a method applicable to multiple stages of mediation and mixed variable types using generalized linear models. We define pathway effects using a potential outcomes framework and present a general formula that provides the effect of exposure through any specified pathway. Some pathway effects are nonidentifiable and their estimation requires an assumption regarding the correlation between counterfactuals. We provide a sensitivity analysis to assess the impact of this assumption. Confidence intervals for pathway effect estimates are obtained via a bootstrap method. The method is applied to a cohort study of dental caries in very low birth weight adolescents. A simulation study demonstrates low bias of pathway effect estimators and close-to-nominal coverage rates of confidence intervals. We also find low sensitivity to the counterfactual correlation in most scenarios. PMID:21306353
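
    For illustration only, the sketch below shows the bootstrap step for the simplest two-stage, linear special case (a product-of-coefficients indirect effect); the paper's estimator covers multiple stages and mixed variable types via generalized linear models. The data and model here are fabricated for the sketch.

        import numpy as np

        rng = np.random.default_rng(1)

        # Simulated exposure A, mediator M, outcome Y for illustration only.
        n = 500
        A = rng.binomial(1, 0.5, n)
        M = 0.8 * A + rng.normal(size=n)
        Y = 0.5 * M + 0.3 * A + rng.normal(size=n)

        def indirect_effect(A, M, Y):
            # Product-of-coefficients estimate a*b from two least-squares fits.
            a = np.polyfit(A, M, 1)[0]
            XM = np.column_stack([np.ones_like(A), A, M])
            b = np.linalg.lstsq(XM, Y, rcond=None)[0][2]
            return a * b

        boot = []
        for _ in range(2000):
            idx = rng.integers(0, n, n)
            boot.append(indirect_effect(A[idx], M[idx], Y[idx]))

        lo, hi = np.percentile(boot, [2.5, 97.5])
        print("indirect effect:", indirect_effect(A, M, Y), "95% CI:", (lo, hi))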

  15. General Relativity and Gravitation

    NASA Astrophysics Data System (ADS)

    Ashtekar, Abhay; Berger, Beverly; Isenberg, James; MacCallum, Malcolm

    2015-07-01

    Part I. Einstein's Triumph: 1. 100 years of general relativity George F. R. Ellis; 2. Was Einstein right? Clifford M. Will; 3. Cosmology David Wands, Misao Sasaki, Eiichiro Komatsu, Roy Maartens and Malcolm A. H. MacCallum; 4. Relativistic astrophysics Peter Schneider, Ramesh Narayan, Jeffrey E. McClintock, Peter Mészáros and Martin J. Rees; Part II. New Window on the Universe: 5. Receiving gravitational waves Beverly K. Berger, Karsten Danzmann, Gabriela Gonzalez, Andrea Lommen, Guido Mueller, Albrecht Rüdiger and William Joseph Weber; 6. Sources of gravitational waves. Theory and observations Alessandra Buonanno and B. S. Sathyaprakash; Part III. Gravity is Geometry, After All: 7. Probing strong field gravity through numerical simulations Frans Pretorius, Matthew W. Choptuik and Luis Lehner; 8. The initial value problem of general relativity and its implications Gregory J. Galloway, Pengzi Miao and Richard Schoen; 9. Global behavior of solutions to Einstein's equations Stefanos Aretakis, James Isenberg, Vincent Moncrief and Igor Rodnianski; Part IV. Beyond Einstein: 10. Quantum fields in curved space-times Stefan Hollands and Robert M. Wald; 11. From general relativity to quantum gravity Abhay Ashtekar, Martin Reuter and Carlo Rovelli; 12. Quantum gravity via unification Henriette Elvang and Gary T. Horowitz.

  16. What can neuromorphic event-driven precise timing add to spike-based pattern recognition?

    PubMed

    Akolkar, Himanshu; Meyer, Cedric; Clady, Xavier; Marre, Olivier; Bartolozzi, Chiara; Panzeri, Stefano; Benosman, Ryad

    2015-03-01

    This letter introduces a study to precisely measure what an increase in spike timing precision can add to spike-driven pattern recognition algorithms. The concept of generating spikes from images by converting gray levels into spike timings is currently at the basis of almost every spike-based modeling of biological visual systems. The use of images naturally leads to generating incorrect artificial and redundant spike timings and, more important, also contradicts biological findings indicating that visual processing is massively parallel, asynchronous, and of high temporal resolution. A new concept for acquiring visual information through pixel-individual asynchronous level-crossing sampling has been proposed in a recent generation of asynchronous neuromorphic visual sensors. Unlike conventional cameras, these sensors acquire data not at fixed points in time for the entire array but at fixed amplitude changes of their input, resulting in output that is optimally sparse in space and time: pixel-individual and precisely timed only if new (previously unknown) information is available (event based). This letter uses the high temporal resolution spiking output of neuromorphic event-based visual sensors to show that lowering time precision degrades performance on several recognition tasks, specifically when reaching the conventional range of machine vision acquisition frequencies (30-60 Hz). The use of information theory to characterize separability between classes for each temporal resolution shows that high temporal acquisition provides up to 70% more information than conventional spikes generated from frame-based acquisition as used in standard artificial vision, thus drastically increasing the separability between classes of objects. Experiments on real data show that the amount of information loss is correlated with temporal precision. Our information-theoretic study highlights the potential of neuromorphic asynchronous visual sensors for both practical applications and theoretical investigations. Moreover, it suggests that representing visual information as a precise sequence of spike times as reported in the retina offers considerable advantages for neuro-inspired visual computations. PMID:25602775
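
    A minimal sketch of the kind of comparison described: event timestamps are quantized to progressively coarser clocks, down to the 30-60 Hz frame intervals of conventional acquisition, which collapses many precisely timed events onto the same time bin. The event stream and resolutions are synthetic and purely illustrative.

        import numpy as np

        rng = np.random.default_rng(42)

        # Synthetic event stream: timestamps in microseconds over one second.
        timestamps_us = np.sort(rng.uniform(0, 1_000_000, size=10_000))

        def quantize(timestamps_us, resolution_us):
            # Collapse precise timings onto a coarser clock, as a frame-based
            # acquisition at the corresponding rate would.
            return (timestamps_us // resolution_us) * resolution_us

        for resolution_us in [1, 1_000, 33_000]:   # 1 us, 1 ms, ~30 Hz frames
            q = quantize(timestamps_us, resolution_us)
            print(f"{resolution_us:>6} us bins -> {len(np.unique(q))} distinct spike times")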

  17. Piezoelectric MEMS switch to activate event-driven wireless sensor nodes

    NASA Astrophysics Data System (ADS)

    Nogami, H.; Kobayashi, T.; Okada, H.; Makimoto, N.; Maeda, R.; Itoh, T.

    2013-09-01

    We have developed piezoelectric microelectromechanical systems (MEMS) switches and applied them to ultra-low power wireless sensor nodes, to monitor the health condition of chickens. The piezoelectric switches have ‘S’-shaped piezoelectric cantilevers with a proof mass. Since the resonant frequency of the piezoelectric switches is around 24 Hz, we have utilized their superharmonic resonance to detect chicken movements as low as 5-15 Hz. When the vibration frequency is 4, 6 and 12 Hz, the piezoelectric switches vibrate at 0.5 m s-2 and generate 3-5 mV output voltages with superharmonic resonance. In order to detect such small piezoelectric output voltages, we employ comparator circuits that can be driven at low voltages, which can set the threshold voltage (Vth) from 1 to 31 mV with a 1 mV increment. When we set Vth at 4 mV, the output voltages of the piezoelectric MEMS switches vibrate below 15 Hz with amplitudes above 0.3 m s-2 and turn on the comparator circuits. Similarly, by setting Vth at 5 mV, the output voltages turn on the comparator circuits with vibrations above 0.4 m s-2. Furthermore, setting Vth at 10 mV causes vibrations above 0.5 m s-2 that turn on the comparator circuits. These results suggest that we can select small or fast chicken movements to utilize piezoelectric MEMS switches with comparator circuits.
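
    A minimal sketch of the comparator behaviour described above: a synthetic piezoelectric output voltage is compared against a programmable threshold Vth, and only upward threshold crossings generate wake-up events. The waveform, sampling rate, and threshold values are invented for illustration and are not taken from the paper's measurements.

        import numpy as np

        fs = 1000                      # sampling rate of the sketch, Hz
        t = np.arange(0, 2.0, 1 / fs)
        # Synthetic piezo output (volts): a 12 Hz movement burst in the first second.
        v_piezo = 0.006 * np.sin(2 * np.pi * 12 * t) * (t < 1.0)

        def wake_up_events(v, v_th):
            # Indices where the signal crosses the comparator threshold upward.
            above = v > v_th
            return np.flatnonzero(above[1:] & ~above[:-1]) + 1

        for v_th_mV in (4, 5, 10):
            events = wake_up_events(v_piezo, v_th_mV / 1000.0)
            print(f"Vth = {v_th_mV} mV -> {len(events)} wake-up events")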

  18. Ultra-Low Power Event-Driven Wireless Sensor Node Using Piezoelectric Accelerometer for Health Monitoring

    NASA Astrophysics Data System (ADS)

    Okada, Hironao; Kobayashi, Takeshi; Masuda, Takashi; Itoh, Toshihiro

    2009-07-01

    We describe a low power consumption wireless sensor node designed for monitoring the conditions of animals, especially of chickens. The node detects variations in 24-h behavior patterns by counting, per unit time, the movements of an animal whose acceleration exceeds a threshold. Wireless sensor nodes operated intermittently are likely to miss necessary data while in sleep mode and to waste power acquiring useless data. We design the node to operate only when the required acceleration is detected, using a piezoelectric accelerometer and a comparator as the wake-up source for the microcontroller unit.

  19. An Asynchronous Neuromorphic Event-Driven Visual Part-Based Shape Tracking.

    PubMed

    Reverter Valeiras, David; Lagorce, Xavier; Clady, Xavier; Bartolozzi, Chiara; Ieng, Sio-Hoi; Benosman, Ryad

    2015-12-01

    Object tracking is an important step in many artificial vision tasks. The current state-of-the-art implementations remain too computationally demanding for the problem to be solved in real time with high dynamics. This paper presents a novel real-time method for visual part-based tracking of complex objects from the output of an asynchronous event-based camera. This paper extends the pictorial structures model introduced by Fischler and Elschlager 40 years ago and introduces a new formulation of the problem, allowing the dynamic processing of visual input in real time at high temporal resolution using a conventional PC. It relies on the concept of representing an object as a set of basic elements linked by springs. These basic elements consist of simple trackers capable of successfully tracking a target with an ellipse-like shape at several kilohertz on a conventional computer. For each incoming event, the method updates the elastic connections established between the trackers and guarantees a desired geometric structure corresponding to the tracked object in real time. This introduces a high temporal elasticity to adapt to projective deformations of the tracked object in the focal plane. The elastic energy of this virtual mechanical system provides a quality criterion for tracking and can be used to determine whether the measured deformations are caused by the perspective projection of the perceived object or by occlusions. Experiments on real-world data show the robustness of the method in the context of dynamic face tracking. PMID:25794399
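
    A minimal sketch of the part-based idea: point trackers are pulled toward nearby incoming events while a virtual spring restores the pair toward its rest geometry, and the elastic energy serves as a tracking-quality measure. The update rule and constants are illustrative assumptions, not the authors' formulation.

        import numpy as np

        # Two trackers connected by one spring with rest length L0.
        positions = np.array([[0.0, 0.0], [10.0, 0.0]])
        L0, k_spring, k_event = 10.0, 0.05, 0.3

        def process_event(event_xy, positions):
            # Pull the nearest tracker toward the incoming event.
            d = np.linalg.norm(positions - event_xy, axis=1)
            i = int(np.argmin(d))
            positions[i] += k_event * (event_xy - positions[i])
            # Spring relaxation keeps the pair near its rest length.
            delta = positions[1] - positions[0]
            stretch = np.linalg.norm(delta) - L0
            force = k_spring * stretch * delta / (np.linalg.norm(delta) + 1e-9)
            positions[0] += force
            positions[1] -= force
            return positions

        def elastic_energy(positions):
            stretch = np.linalg.norm(positions[1] - positions[0]) - L0
            return 0.5 * k_spring * stretch ** 2

        for event in np.array([[1.0, 0.5], [11.0, -0.5], [1.5, 1.0]]):
            positions = process_event(event, positions)
        print("tracker positions:", positions, "elastic energy:", elastic_energy(positions))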

  20. Event-Driven Messaging for Offline Data Quality Monitoring at ATLAS

    NASA Astrophysics Data System (ADS)

    Onyisi, Peter

    2015-12-01

    During LHC Run 1, the information flow through the offline data quality monitoring in ATLAS relied heavily on chains of processes polling each other's outputs for handshaking purposes. This resulted in a fragile architecture with many possible points of failure and an inability to monitor the overall state of the distributed system. We report on the status of a project undertaken during the LHC shutdown to replace the ad hoc synchronization methods with a uniform message queue system. This enables the use of standard protocols to connect processes on multiple hosts; reliable transmission of messages between possibly unreliable programs; easy monitoring of the information flow; and the removal of inefficient polling-based communication.
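
    A minimal sketch of the pattern described above, with a blocking message queue replacing output polling; Python's standard queue module stands in for the broker, and the message contents and run numbers are invented for the illustration.

        import queue
        import threading

        mq = queue.Queue()   # stands in for a message-queue broker

        def histogram_producer():
            # Instead of writing files that other processes poll, announce completion.
            for run in (279932, 279933):
                mq.put({"type": "histograms_ready", "run": run})
            mq.put({"type": "shutdown"})

        def quality_checker():
            # Blocks until a message arrives; no polling loop is needed.
            while True:
                msg = mq.get()
                if msg["type"] == "shutdown":
                    break
                print(f"checking data quality for run {msg['run']}")

        t = threading.Thread(target=quality_checker)
        t.start()
        histogram_producer()
        t.join()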

  1. Event-Driven On-Board Software Using Priority-Based Communications Protocols

    NASA Astrophysics Data System (ADS)

    Fowell, S.; Ward, R.; Plummer, C.

    This paper describes current projects being performed by SciSys in the area of the use of software agents, built using CORBA middleware and SOIF communications protocols, to improve operations within autonomous satellite/ground systems. These concepts have been developed and demonstrated in a series of experiments variously funded by ESA's Technology Flight Opportunity Initiative (TFO) and Leading Edge Technology for SMEs (LET-SME), and the British National Space Centre's (BNSC) National Technology Programme. In [1] it is proposed that on-board software should evolve to one that uses an architecture of loosely-coupled software agents, integrated using minimum Real-Time CORBA ORBs such as the SciSys microORB. Building on that, this paper considers the requirements such an architecture and implementation place on the underlying communication protocols (software and hardware) and how these may be met by the emerging CCSDS SOIF recommendations.

  2. An Event Driven Hybrid Identity Management Approach to Privacy Enhanced e-Health

    PubMed Central

    Sánchez-Guerrero, Rosa; Almenárez, Florina; Díaz-Sánchez, Daniel; Marín, Andrés; Arias, Patricia; Sanvido, Fabio

    2012-01-01

    Credential-based authorization offers interesting advantages for ubiquitous scenarios involving limited devices such as sensors and personal mobile equipment: the verification can be done locally; it offers a lower computational cost than its competitors for issuing, storing, and verification; and it naturally supports rights delegation. The main drawback is the revocation of rights. Revocation requires handling potentially large revocation lists, or using protocols to check the revocation status, bringing extra communication costs not acceptable for sensors and other limited devices. Moreover, the effective revocation consent—considered as a privacy rule in sensitive scenarios—has not been fully addressed. This paper proposes an event-based mechanism empowering a new concept, the sleepyhead credentials, which makes it possible to substitute time constraints and explicit revocation by activating and deactivating authorization rights according to events. Our approach is to integrate this concept in IdM systems in a hybrid model supporting delegation, which can be an interesting alternative for scenarios where revocation of consent and user privacy are critical. The delegation includes a SAML compliant protocol, which we have validated through a proof-of-concept implementation. This article also explains the mathematical model describing the event-based model and offers estimations of the overhead introduced by the system. The paper focuses on health care scenarios, where we show the flexibility of the proposed event-based user consent revocation mechanism. PMID:22778634
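
    A minimal sketch of the 'sleepyhead credential' concept as described: an authorization right is activated and deactivated by events instead of expiry times or revocation lists. The class, event, and subject names are invented for the illustration and are not the paper's protocol.

        class SleepyheadCredential:
            """Authorization right toggled by events instead of expiry or revocation."""

            def __init__(self, subject, right):
                self.subject = subject
                self.right = right
                self.active = False          # dormant until an activating event

            def on_event(self, event):
                if event == "patient_admitted":
                    self.active = True
                elif event in ("patient_discharged", "consent_withdrawn"):
                    self.active = False

            def authorize(self):
                return self.active

        cred = SleepyheadCredential("dr_smith", "read_ehr")
        for event in ["patient_admitted", "consent_withdrawn"]:
            cred.on_event(event)
            print(event, "->", "granted" if cred.authorize() else "denied")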

  3. On the use of orientation filters for 3D reconstruction in event-driven stereo vision

    PubMed Central

    Camuñas-Mesa, Luis A.; Serrano-Gotarredona, Teresa; Ieng, Sio H.; Benosman, Ryad B.; Linares-Barranco, Bernabe

    2014-01-01

    The recently developed Dynamic Vision Sensors (DVS) sense visual information asynchronously and code it into trains of events with sub-microsecond temporal resolution. This high temporal precision makes the output of these sensors especially suited for dynamic 3D visual reconstruction, by matching corresponding events generated by two different sensors in a stereo setup. This paper explores the use of Gabor filters to extract information about the orientation of the object edges that produce the events, therefore increasing the number of constraints applied to the matching algorithm. This strategy provides more reliably matched pairs of events, improving the final 3D reconstruction. PMID:24744694
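
    A minimal sketch of the matching constraint described above: events from the two sensors are paired only when their timestamps are close and their locally estimated edge orientations agree (here, precomputed labels stand in for Gabor-filter responses). The data, labels, and thresholds are invented for the illustration.

        import numpy as np

        # Each event: (timestamp in us, x, y, orientation label from a Gabor bank).
        left = np.array([(100, 10, 20, 2), (105, 40, 22, 0), (250, 12, 21, 2)])
        right = np.array([(102, 7, 20, 2), (107, 44, 22, 1), (255, 9, 21, 2)])

        def match(left, right, dt_max=10, same_orientation=True):
            # Pair events whose timestamps are close; optionally require that the
            # orientation labels also agree, which prunes false correspondences.
            pairs = []
            for tl, xl, yl, ol in left:
                for tr, xr, yr, o_r in right:
                    if abs(tl - tr) > dt_max:
                        continue
                    if same_orientation and ol != o_r:
                        continue
                    pairs.append(((tl, xl, yl), (tr, xr, yr)))
            return pairs

        print("with orientation constraint:", len(match(left, right)))
        print("time-only matching:         ", len(match(left, right, same_orientation=False)))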

  4. Heinrich events driven by feedback between ocean forcing and glacial isostatic adjustment

    NASA Astrophysics Data System (ADS)

    Bassis, J. N.; Petersen, S. V.; Cathles, L. M. M., IV

    2015-12-01

    One of the most puzzling glaciological features of the past ice age is the episodic discharge of large volumes of icebergs from the Laurentide Ice Sheet, known as Heinrich events. It has been suggested that Heinrich events are caused by internal instabilities in the ice sheet (e.g. the binge-purge oscillation). A purely ice dynamic cycle, however, is at odds with the fact that every Heinrich event occurs during the cold phase of a DO cycle, implying some regional climate connection. Recent work has pointed to subsurface water warming as a trigger for Heinrich events through increased basal melting of an ice shelf extending across the Hudson Strait and connecting with the Greenland Ice Sheet. Such a large ice shelf, spanning the deepest part of the Labrador Sea, has no modern analog and limited proxy evidence. Here we use a width averaged "flowline" model of the Hudson Strait ice stream to show that Heinrich events can be triggered by ocean forcing of a grounded terminus without the need for an ice shelf. At maximum ice extent, bed topography is depressed and the terminus is more sensitive to a subsurface thermal forcing. Once triggered, the retreat is rapid, and continues until isostatic rebound of the bed causes local sea level to drop sufficiently to arrest retreat. Topography slowly rebounds, decreasing the sensitivity to ocean forcing and the ice stream re-advances at a rate that is an order of magnitude slower than collapse. This simple feedback cycle between a short-lived ocean trigger and slower isostatic adjustment can reproduce the periodicity and timing of observed Heinrich events under a range of glaciological and solid earth parameters. Our results suggest that not only does the solid Earth play an important role in regulating ice sheet stability, but that grounded marine terminating portions of ice sheets may be more sensitive to ocean forcing than previously thought.

  5. Rare measurements of a sprite with halo event driven by a negative lightning discharge over Argentina

    USGS Publications Warehouse

    Taylor, M.J.; Bailey, M.A.; Pautet, P.D.; Cummer, S.A.; Jaugey, N.; Thomas, J.N.; Solorzano, N.N.; Sao Sabbas, F.; Holzworth, R.H.; Pinto, O.; Schuch, N.J.

    2008-01-01

    As part of a collaborative campaign to investigate Transient Luminous Events (TLEs) over South America, coordinated optical, ELF/VLF, and lightning measurements were made of a mesoscale thunderstorm observed on February 22-23, 2006 over northern Argentina that produced 445 TLEs within a ~6 hour period. Here, we report comprehensive measurements of one of these events, a sprite with halo that was unambiguously associated with a large negative cloud-to-ground (CG) lightning discharge with an impulsive vertical charge moment change (ΔMQv) of -503 C km. This event was similar in its location, morphology and duration to other positive TLEs observed from this storm. However, the downward extent of the negative streamers was limited to 25 km, and their apparent brightness was lower than that of a comparable positive event. Observations of negative CG events are rare, and these measurements provide further evidence that sprites can be driven by upward as well as downward electric fields, as predicted by the conventional breakdown mechanism. Copyright 2008 by the American Geophysical Union.

  6. Stable algorithm for event detection in event-driven particle dynamics: logical states

    NASA Astrophysics Data System (ADS)

    Strobl, Severin; Bannerman, Marcus N.; Pöschel, Thorsten

    2016-07-01

    Following the recent development of a stable event-detection algorithm for hard-sphere systems, the implications of more complex interaction models are examined. The relative location of particles leads to ambiguity when it is used to determine the interaction state of a particle in stepped potentials, such as the square-well model. To correctly predict the next event in these systems, the concept of an additional state that is tracked separately from the particle position is introduced and integrated into the stable algorithm for event detection.
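
    A minimal sketch of the bookkeeping described above: for a stepped (square-well) potential, the pair's interaction state is stored explicitly, separately from the positions, and is used to decide which event (well capture, hard-core collision, or well escape) to predict next. The structure and names are illustrative, not the published algorithm.

        # Explicit logical state per particle pair for a square-well interaction.
        FREE, CAPTURED = "free", "captured"

        pair_state = {}   # (i, j) -> FREE or CAPTURED, tracked separately from positions

        def next_event_type(i, j, approaching):
            """Decide which event to schedule from the pair's logical state,
            not from the (ambiguous) relative position alone."""
            state = pair_state.get((i, j), FREE)
            if state == FREE:
                # Outside the well: the only possible next event is entering it.
                return "well_capture" if approaching else None
            # Inside the well: either a hard-core collision or an escape attempt.
            return "core_collision" if approaching else "well_escape"

        def apply_event(i, j, event):
            if event == "well_capture":
                pair_state[(i, j)] = CAPTURED
            elif event == "well_escape":
                pair_state[(i, j)] = FREE

        apply_event(0, 1, "well_capture")
        print(next_event_type(0, 1, approaching=False))   # -> "well_escape"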

  7. Sonic hedgehog multimerization: a self-organizing event driven by post-translational modifications?

    PubMed

    Koleva, Mirella V; Rothery, Stephen; Spitaler, Martin; Neil, Mark A A; Magee, Anthony I

    2015-01-01

    Sonic hedgehog (Shh) is a morphogen active during vertebrate development and tissue homeostasis in adulthood. Dysregulation of the Shh signalling pathway is known to incite carcinogenesis. Due to the highly lipophilic nature of this protein imparted by two post-translational modifications, Shh's method of transit through the aqueous extracellular milieu has been a long-standing conundrum, prompting the proposition of numerous hypotheses to explain the manner of its displacement from the surface of the producing cell. Detection of high molecular-weight complexes of Shh in the intercellular environment has indicated that the protein achieves this by accumulating into multimeric structures prior to release from producing cells. The mechanism of assembly of the multimers, however, has hitherto remained mysterious and contentious. Here, with the aid of high-resolution optical imaging and post-translational modification mutants of Shh, we show that the C-terminal cholesterol and the N-terminal palmitate adducts contribute to the assembly of large multimers and regulate their shape. Moreover, we show that small Shh multimers are produced in the absence of any lipid modifications. Based on an assessment of the distribution of various dimensional characteristics of individual Shh clusters, in parallel with deductions about the kinetics of release of the protein from the producing cells, we conclude that multimerization is driven by self-assembly underpinned by the law of mass action. We speculate that the lipid modifications augment the size of the multimolecular complexes through prolonging their association with the exoplasmic membrane. PMID:26312641

  8. Event-driven nutrient dynamics in a southern Everglades mangrove creek

    NASA Astrophysics Data System (ADS)

    Holmes, C. W.; Robbins, J. A.; Reddy, K. R.; Newman, S.; Marot, M. E.; Davis, S. E.; Childers, D. L.; Cable, J.; Day, J. W.; Rudnick, D. T.; Sklar, F. H.

    2002-05-01

    Wind and precipitation events strongly influence the hydrodynamics of micro-tidal estuarine systems. These events can also have profound effects on the pulsing of materials, leading to enhanced primary and secondary production, especially in oligotrophic systems such as the Everglades and Florida Bay. Since 1996, we have been monitoring the nutrient and salinity content of surface water along Taylor River, a mangrove waterway of the southern Everglades. The U.S. Geological Survey has been making concurrent measurements of flow and stage at proximal sites. Over the past 5 years, there have been a number of meteorological events that have significantly affected south Florida. In this presentation, we highlight the effects of three major events as well as typical variability in concentrations and fluxes of materials. In November 1996, eight consecutive days of >40 kt winds pushed freshwater out of the Everglades into Florida Bay. Concentrations of TN increased throughout this event while TP and inorganic N and P remained fairly constant. Immediately following this wind storm, there was a 6-fold increase in salinity as flow reversed. In September 1999, Tropical Storm Harvey dropped nearly 26 cm of precipitation in south Florida with negligible winds. Harvey caused TP concentrations to more than triple (from 1 μM to 3.8 μM) and discharge to increase by more than an order of magnitude. The following month, the eye of Hurricane Irene passed just west of Taylor River producing strong southerly winds in excess of 80 mph and more than 37 cm of precipitation. Like the wind event of 1996, Irene led to increased concentrations of TN and no observable change in TP. Irene also produced the highest discharge measured in this system (730,000 m3 d-1). These three events (wind, rain, and wind+rain) exemplify the kinds of events common to this region. The effects of these events combined with a synthesis of long-term water quality and quarterly flux data indicate that the patterns of nutrient dynamics in this system are dependent upon the nature (i.e. type, intensity, duration) of each event. Such findings will be useful in understanding the effects of freshwater and nutrient pulsing into the Florida Bay estuary.

  9. Sequence-of-events-driven automation of the deep space network

    NASA Technical Reports Server (NTRS)

    Hill, R., Jr.; Fayyad, K.; Smyth, C.; Santos, T.; Chen, R.; Chien, S.; Bevan, R.

    1996-01-01

    In February 1995, sequence-of-events (SOE)-driven automation technology was demonstrated for a Voyager telemetry downlink track at DSS 13. This demonstration entailed automated generation of an operations procedure (in the form of a temporal dependency network) from project SOE information using artificial intelligence planning technology and automated execution of the temporal dependency network using the link monitor and control operator assistant system. This article describes the overall approach to SOE-driven automation that was demonstrated, identifies gaps in SOE definitions and project profiles that hamper automation, and provides detailed measurements of the knowledge engineering effort required for automation.

  10. Effects of climate events driven hydrodynamics on dissolved oxygen in a subtropical deep reservoir in Taiwan.

    PubMed

    Fan, Cheng-Wei; Kao, Shuh-Ji

    2008-04-15

    The seasonal concentrations of dissolved oxygen in a subtropical deep reservoir were studied over a period of one year. The study site was the Feitsui Reservoir in Taiwan. It is a dam-constructed reservoir with a surface area of 10.24 km(2) and a mean depth of 39.6 m, with a maximum depth of 113.5 m near the dam. It was found that certain weather and climate events, such as typhoons in summer and autumn, as well as cold fronts in winter, can deliver oxygen-rich water, and consequently have strong impacts on the dissolved oxygen level. The typhoon turbidity currents and winter density currents played important roles in supplying oxygen to the middle and bottom water, respectively. The whole process can be understood by the hydrodynamics driven by weather and climate events. This work provides the primary results of dissolved oxygen in a subtropical deep reservoir, and the knowledge is useful in understanding water quality in subtropical regions. PMID:18243280

  11. Event-driven sediment flux in Hueneme and Mugu submarine canyons, southern California

    USGS Publications Warehouse

    Xu, J. P.; Swarzenski, P.W.; Noble, M.; Li, A.-C.

    2010-01-01

    Vertical sediment fluxes and their dominant controlling processes in Hueneme and Mugu submarine canyons off south-central California were assessed using data from sediment traps and current meters on two moorings that were deployed for 6 months during the winter of 2007. The maxima of total particulate flux, which reached as high as 300+ g/m2/day in Hueneme Canyon, were recorded during winter storm events when high waves and river floods often coincided. During these winter storms, wave-induced resuspension of shelf sediment was a major source for the elevated sediment fluxes. Canyon rim morphology, rather than physical proximity to an adjacent river mouth, appeared to control the magnitude of sediment fluxes in these two submarine canyon systems. Episodic turbidity currents and internal bores enhanced sediment fluxes, particularly in the lower sediment traps positioned 30 m above the canyon floor. Lower excess 210Pb activities measured in the sediment samples collected during periods of peak total particulate flux further substantiate that reworked shelf-, rather than newly introduced river-borne, sediments supply most of the material entering these canyons during storms.

  12. An Integrated Cyberenvironment for Event-Driven Environmental Observatory Research and Education

    NASA Astrophysics Data System (ADS)

    Myers, J.; Minsker, B.; Butler, R.

    2006-12-01

    National environmental observatories will soon provide large-scale data from diverse sensor networks and community models. While much attention is focused on piping data from sensors to archives and users, truly integrating these resources into the everyday research activities of scientists and engineers across the community, and enabling their results and innovations to be brought back into the observatory (also critical to the long-term success of the observatories), is often neglected. This talk will give an overview of the Environmental Cyberinfrastructure Demonstrator (ECID) Cyberenvironment for observatory-centric environmental research and education, under development at the National Center for Supercomputing Applications (NCSA), which is designed to address these issues. Cyberenvironments incorporate collaboratory and grid technologies, web services, and other cyberinfrastructure into an overall framework that balances needs for efficient coordination and the ability to innovate. They are designed to support the full scientific lifecycle, both in terms of individual experiments moving from data to workflows to publication and at the macro level where new discoveries lead to additional data, models, tools, and conceptual frameworks that augment and evolve community-scale systems such as observatories. The ECID cyberenvironment currently integrates five major components - a collaborative portal, workflow engine, event manager, metadata repository, and social network personalization capabilities - that have novel features inspired by the Cyberenvironment concept and enabling powerful environmental research scenarios. A summary of these components and the overall cyberenvironment will be given in this talk, while other posters will give details on several of the components. The summary will be presented within the context of environmental use case scenarios created in collaboration with researchers from the WATERS (WATer and Environmental Research Systems) Network, a joint National Science Foundation-funded initiative of the hydrology and environmental engineering communities. The use case scenarios include identifying sensor anomalies in point- and streaming sensor data and notifying data managers in near-real time; and referring users of data or data products (e.g., workflows, publications) to related data or data products.

  13. Relativistic electron precipitation events driven by electromagnetic ion-cyclotron waves

    SciTech Connect

    Khazanov, G. Sibeck, D.; Tel'nikhin, A.; Kronberg, T.

    2014-08-15

    We adopt a canonical approach to describe the stochastic motion of relativistic belt electrons and their scattering into the loss cone by nonlinear EMIC waves. The estimated rate of scattering is sufficient to account for the rate and intensity of bursty electron precipitation. This interaction is shown to result in particle scattering into the loss cone, forming ∼10 s microbursts of precipitating electrons. These dynamics can account for the statistical correlations between processes of energization, pitch angle scattering, and relativistic electron precipitation events, that are manifested on large temporal scales of the order of the diffusion time ∼tens of minutes.

  14. An event-driven phytoplankton bloom in southern Lake Michigan observed by satellite.

    SciTech Connect

    Lesht, B. M.; Stroud, J. R.; McCormick, M. J.; Fahnenstiel, G. L.; Stein, M. L.; Welty, L. J.; Leshkevich, G. A.; Environmental Research; Univ. of Chicago; Great Lakes Research Lab.

    2002-04-15

    Sea-viewing Wide Field-of-View Sensor (SeaWiFS) images from June 1998 show a surprising early summer phytoplankton bloom in southern Lake Michigan that accounted for approximately 25% of the lake's annual gross offshore algal primary production. By combining the satellite imagery with in situ measurements of water temperature and wind velocity we show that the bloom was triggered by a brief wind event that was sufficient to cause substantial vertical mixing even though the lake was already stratified. We conclude that episodic events can have significant effects on the biological state of large lakes and should be included in biogeochemical process models.

  15. Data Albums: An Event Driven Search, Aggregation and Curation Tool for Earth Science

    NASA Technical Reports Server (NTRS)

    Ramachandran, Rahul; Kulkarni, Ajinkya; Maskey, Manil; Bakare, Rohan; Basyal, Sabin; Li, Xiang; Flynn, Shannon

    2014-01-01

    Approaches used in Earth science research such as case study analysis and climatology studies involve discovering and gathering diverse data sets and information to support the research goals. To gather relevant data and information for case studies and climatology analysis is both tedious and time consuming. Current Earth science data systems are designed with the assumption that researchers access data primarily by instrument or geophysical parameter. In cases where researchers are interested in studying a significant event, they have to manually assemble a variety of datasets relevant to it by searching the different distributed data systems. This paper presents a specialized search, aggregation and curation tool for Earth science to address these challenges. The search tool automatically creates curated 'Data Albums', aggregated collections of information related to a specific event, containing links to relevant data files [granules] from different instruments, tools and services for visualization and analysis, and information about the event contained in news reports, images or videos to supplement research analysis. Curation in the tool is driven via an ontology-based relevancy ranking algorithm to filter out non-relevant information and data.

  16. Simulation Framework for Teaching in Modeling and Simulation Areas

    ERIC Educational Resources Information Center

    De Giusti, Marisa Raquel; Lira, Ariel Jorge; Villarreal, Gonzalo Lujan

    2008-01-01

    Simulation is the process of executing a model that describes a system with enough detail; this model has its entities, an internal state, some input and output variables and a list of processes bound to these variables. Teaching a simulation language such as general purpose simulation system (GPSS) is always a challenge, because of the way it…

  17. NFCSim: A Dynamic Fuel Burnup and Fuel Cycle Simulation Tool

    SciTech Connect

    Schneider, Erich A.; Bathke, Charles G.; James, Michael R.

    2005-07-15

    NFCSim is an event-driven, time-dependent simulation code modeling the flow of materials through the nuclear fuel cycle. NFCSim tracks mass flow at the level of discrete reactor fuel charges/discharges and logs the history of nuclear material as it progresses through a detailed series of processes and facilities, generating life-cycle material balances for any number of reactors. NFCSim is an ideal tool for analysis - of the economics, sustainability, or proliferation resistance - of nonequilibrium, interacting, or evolving reactor fleets. The software couples with a criticality and burnup engine, LACE (Los Alamos Criticality Engine). LACE implements a piecewise-linear, reactor-specific reactivity model for its criticality calculations. This model constructs fluence-dependent reactivity traces for any facility; it is designed to address nuclear economies in which either a steady state is never obtained or is a poor approximation. LACE operates in transient and equilibrium fuel management regimes at the refueling batch level, derives reactor- and cycle-dependent initial fuel compositions, and invokes ORIGEN2.x to carry out burnup calculations.
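
    A minimal sketch of the event-driven structure described above: discrete charge/discharge events sit in a time-ordered priority queue, and a material ledger is updated as each event fires. The facility names, masses, and timings are invented for illustration and bear no relation to NFCSim's actual models.

        import heapq

        # Event queue ordered by simulation time (years).
        events = []
        heapq.heappush(events, (0.0, "charge", "reactor_A", 20.0))     # load 20 t of fuel
        heapq.heappush(events, (1.5, "discharge", "reactor_A", 20.0))  # discharge the batch
        heapq.heappush(events, (1.6, "charge", "reactor_A", 20.0))     # reload

        ledger = {"fresh_fuel": 100.0, "in_core": 0.0, "spent_fuel": 0.0}  # tonnes

        while events:
            time, kind, facility, mass = heapq.heappop(events)
            if kind == "charge":
                ledger["fresh_fuel"] -= mass
                ledger["in_core"] += mass
            elif kind == "discharge":
                ledger["in_core"] -= mass
                ledger["spent_fuel"] += mass
            print(f"t={time:4.1f} y  {kind:9s} {facility}: {ledger}")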

  18. Circuit simulation: some humbling thoughts

    SciTech Connect

    Wendt, Manfred; /Fermilab

    2006-01-01

    A short, very personal note on circuit simulation is presented. It neither includes theoretical background on circuit simulation nor offers an overview of available software, but just gives some general remarks for a discussion of circuit simulator needs in the context of the design and development of accelerator beam instrumentation circuits and systems.

  19. New Directions in Maintenance Simulation.

    ERIC Educational Resources Information Center

    Miller, Gary G.

    A two-phase effort was conducted to design and evaluate a maintenance simulator which incorporated state-of-the-art information in simulation and instructional technology. The particular equipment selected to be simulated was the 6883 Convert/Flight Controls Test Station. Phase I included a generalized block diagram of the computer-trainer, the…

  20. Simulating granular materials by energy minimization

    NASA Astrophysics Data System (ADS)

    Krijgsman, D.; Luding, S.

    2016-03-01

    Discrete element methods are extremely helpful in understanding the complex behaviors of granular media, as they give valuable insight into all internal variables of the system. In this paper, a novel discrete element method for performing simulations of granular media is presented, based on the minimization of the potential energy in the system. Contrary to most discrete element methods (i.e., soft-particle method, event-driven method, and non-smooth contact dynamics), the system does not evolve by (approximately) integrating Newton's equations of motion in time, but rather by searching for mechanical equilibrium solutions for the positions of all particles in the system, which is mathematically equivalent to locally minimizing the potential energy. The new method allows for the rapid creation of jammed initial conditions (to be used for further studies) and for the simulation of quasi-static deformation problems. The major advantage of the new method is that it allows for truly static deformations. The system does not evolve with time, but rather with the externally applied strain or load, so that there is no kinetic energy in the system, in contrast to other quasi-static methods. The performance of the algorithm for both types of applications of the method is tested. We therefore look at the required number of iterations for the system to converge to a stable solution. For each single iteration, the required computational effort scales linearly with the number of particles. During the process of creating initial conditions, the required number of iterations for two-dimensional systems scales with the square root of the number of particles in the system. The required number of iterations increases for systems closer to the jamming packing fraction. For a quasi-static pure shear deformation simulation, the results of the new method are validated by regular soft-particle dynamics simulations. The energy minimization algorithm is able to capture the evolution of the
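
    A minimal sketch of the energy-minimization idea: particle positions are obtained by minimizing a pairwise harmonic overlap potential with a general-purpose optimizer rather than by integrating equations of motion. The potential, particle count, and absence of boundaries or external loading are simplifying assumptions, not the published algorithm.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(3)
        n, radius, box = 12, 0.5, 4.0
        x0 = rng.uniform(radius, box - radius, size=(n, 2)).ravel()

        def potential(flat):
            # Harmonic overlap energy summed over all particle pairs
            # (no gravity, no walls, no applied strain in this sketch).
            pos = flat.reshape(n, 2)
            energy = 0.0
            for i in range(n):
                for j in range(i + 1, n):
                    d = np.linalg.norm(pos[i] - pos[j])
                    overlap = 2 * radius - d
                    if overlap > 0:
                        energy += 0.5 * overlap ** 2
            return energy

        result = minimize(potential, x0, method="L-BFGS-B")
        print("initial energy:  ", potential(x0))
        print("minimized energy:", result.fun)   # ~0 if a non-overlapping packing exists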