Sample records for entire simulation time

  1. Monte Carlo simulation of efficient data acquisition for an entire-body PET scanner

    NASA Astrophysics Data System (ADS)

    Isnaini, Ismet; Obi, Takashi; Yoshida, Eiji; Yamaya, Taiga

    2014-07-01

    Conventional PET scanners image the whole body using many bed positions. An entire-body PET scanner with an extended axial field-of-view (FOV), which can capture whole-body uptake images simultaneously and improve sensitivity, has therefore long been desired. Such a scanner must process a large amount of data efficiently; as a consequence, it suffers high dead time in the multiplex detector grouping process and produces many oblique lines of response. In this work, we study efficient data acquisition for the entire-body PET scanner using Monte Carlo simulation. The simulated scanner, based on depth-of-interaction detectors, has a 2016-mm axial FOV and an 80-cm ring diameter. Because the entire-body scanner loses more single events at the grouping circuits than a conventional scanner does, its noise-equivalent count rate (NECR) decreases. This single-event loss is mitigated by separating the axially arranged detectors into multiple parts: dividing them into 3 axial groups was shown to increase the peak NECR by 41%. An appropriate choice of maximum ring difference (MRD) also maintains high sensitivity and high peak NECR while reducing the data size. Extremely oblique lines of response in the large axial FOV contribute little to scanner performance: the total sensitivity with full MRD was only 15% higher than with about half the MRD, and the peak NECR saturated at about half the MRD. The entire-body PET scanner thus promises a large axial FOV with sufficient performance without using the full data.
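
    The trade-offs above revolve around the noise-equivalent count rate. Below is a minimal sketch of locating the peak NECR from simulated count rates, using the common definition NECR = T²/(T + S + kR) with trues T, scatters S, randoms R, and k = 1 here; the rate curves are illustrative stand-ins, not the paper's data:

    ```python
    import numpy as np

    def necr(T, S, R, k=1.0):
        """Noise-equivalent count rate: NECR = T^2 / (T + S + k*R)."""
        return T**2 / (T + S + k * R)

    activity = np.linspace(0.1, 50.0, 200)             # kBq/mL, illustrative axis
    trues = 4e3 * activity / (1.0 + 0.02 * activity)   # saturating toy curve
    scatters = 0.4 * trues                             # fixed scatter fraction
    randoms = 2.0 * activity**2                        # randoms grow quadratically

    curve = necr(trues, scatters, randoms)
    print("peak NECR at", activity[np.argmax(curve)], "kBq/mL")
    ```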

  2. Air traffic control in airline pilot simulator training and evaluation

    DOT National Transportation Integrated Search

    2001-01-01

    Much airline pilot training and checking occurs entirely in the simulator, and the first time a pilot flies a particular airplane, it may carry passengers. Simulator qualification standards, however, focus on the simulation of the airplane without re...

  3. Technology Tips: Simulation with the TI-Nspire

    ERIC Educational Resources Information Center

    Rudolph, Heidi J.

    2009-01-01

    Simulation is an important learning tool that allows students to grasp probability concepts, especially when the actual scenario does not need to be replicated entirely. In the cases of tossing coins and rolling dice, gathering the data before analyzing them can be laborious and might be a waste of precious class time--time that might be better…

  4. Simulation and experiment of thermal fatigue in the CPV die attach

    NASA Astrophysics Data System (ADS)

    Bosco, Nick; Silverman, Timothy; Kurtz, Sarah

    2012-10-01

    FEM simulation and accelerated thermal cycling have been performed for the CPV die attach. Trends in fatigue damage accumulation and equivalent test time are explored and found to be most sensitive to temperature ramp rate. Die attach crack growth is measured through cycling and found to be in excellent agreement with simulations of the accumulated inelastic strain energy. Simulations of an entire year of weather data provide a relative ranking of fatigue damage across four cities, as well as their equivalent accelerated test times.

  5. Applying Reduced Generator Models in the Coarse Solver of Parareal in Time Parallel Power System Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duan, Nan; Dimitrovski, Aleksandar D; Simunovic, Srdjan

    2016-01-01

    The development of high-performance computing techniques and platforms has provided many opportunities for real-time or even faster-than-real-time power system simulation. One approach uses the Parareal in time framework. The Parareal algorithm has shown promising theoretical speedups by temporally decomposing a simulation run into a coarse simulation over the entire simulation interval and fine simulations on sequential sub-intervals linked through the coarse simulation. However, it has been found that the time cost of the coarse solver needs to be reduced to fully exploit the potential of the Parareal algorithm. This paper studies a Parareal implementation using reduced generator models for the coarse solver and reports testing results on the IEEE 39-bus system and a 327-generator 2383-bus Polish system model.
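
    For readers unfamiliar with the scheme, here is a minimal, self-contained Parareal iteration on a toy ODE (y' = -y), where the coarse propagator G is a single Euler step and the fine propagator F takes many substeps; this is a generic illustration, not the paper's power-system implementation:

    ```python
    import numpy as np

    def f(y):
        return -y                      # toy test problem y' = -y

    def coarse(y, t0, t1):
        return y + (t1 - t0) * f(y)    # one Euler step: cheap propagator G

    def fine(y, t0, t1, m=100):
        dt = (t1 - t0) / m             # many Euler substeps: accurate propagator F
        for _ in range(m):
            y = y + dt * f(y)
        return y

    T, N, K = 2.0, 10, 5               # horizon, sub-intervals, Parareal iterations
    t = np.linspace(0.0, T, N + 1)
    U = np.empty(N + 1); U[0] = 1.0
    for n in range(N):                 # serial coarse sweep for the initial guess
        U[n + 1] = coarse(U[n], t[n], t[n + 1])

    for _ in range(K):                 # Parareal corrections
        F = [fine(U[n], t[n], t[n + 1]) for n in range(N)]     # parallel in principle
        G_old = [coarse(U[n], t[n], t[n + 1]) for n in range(N)]
        for n in range(N):             # sequential coarse update plus correction
            U[n + 1] = coarse(U[n], t[n], t[n + 1]) + F[n] - G_old[n]

    print(U[-1], "vs exact", np.exp(-T))
    ```

    The structure makes the speedup argument visible: the expensive fine solves are independent across sub-intervals, while only the cheap coarse sweep remains sequential, which is why reducing the coarse-solver cost (here, via reduced generator models) matters.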

  6. Modeling laser-plasma acceleration in the laboratory frame

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2011-01-01

    A simulation of laser-plasma acceleration in the laboratory frame. Both the laser and the wakefield buckets must be resolved over the entire domain of the plasma, requiring many cells and many time steps. While researchers often use a simulation window that moves with the pulse, this reduces only the number of cells, not the number of time steps. For an artistic impression of how the boosted-frame method addresses this, watch the video "Modeling laser-plasma acceleration in the wakefield frame".

  7. Avatars, Virtual Reality Technology, and the U.S. Military: Emerging Policy Issues

    DTIC Science & Technology

    2008-04-09

    called “Sentient Worldwide Simulation,” which will “mirror” real life and automatically follow real-world events in real time. Some virtual world...cities, with the final goal of creating a fully functioning virtual model of the entire world, which will be known as the Sentient Worldwide Simulation

  8. Real-time Simulation of Turboprop Engine Control System

    NASA Astrophysics Data System (ADS)

    Sheng, Hanlin; Zhang, Tianhong; Zhang, Yi

    2017-05-01

    Given the complexity of turboprop engine control systems, real-time simulation is a technology that, while preserving real-time execution, effectively reduces development cost, shortens the development cycle and averts testing risks. This paper takes RT-LAB as a platform and studies real-time digital simulation of a turboprop engine control system. The architecture, working principles and external interfaces of the RT-LAB real-time simulation platform are introduced first. Then, based on a turboprop engine model, the control laws of the propeller control loop and the fuel control loop are studied. On this basis, an integrated controller is designed in Matlab/Simulink that realizes control of the entire engine operating process from start-up through maximum power to shutdown. Finally, real-time digital simulation of the designed control system is carried out on the RT-LAB platform; different regulating plans are tried and satisfactory control performance is obtained.

  9. Estimating rainfall time series and model parameter distributions using model data reduction and inversion techniques

    NASA Astrophysics Data System (ADS)

    Wright, Ashley J.; Walker, Jeffrey P.; Pauwels, Valentijn R. N.

    2017-08-01

    Floods are devastating natural hazards. To provide accurate, precise, and timely flood forecasts, there is a need to understand the uncertainties associated with an entire rainfall time series, even when rainfall was not observed. Estimating an entire rainfall time series and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows the uncertainty of the entire rainfall input time series to be considered when estimating model parameters, and provides the ability to improve rainfall estimates for poorly gauged catchments. Current methods to estimate entire rainfall time series from streamflow records are unable to adequately invert complex nonlinear hydrologic systems. This study explores the use of wavelets in the estimation of rainfall time series from streamflow records. Using the discrete wavelet transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia, it is shown that model parameter distributions and an entire rainfall time series can be estimated. Including rainfall in the estimation process improves streamflow simulations by a factor of up to 1.78. This is achieved while estimating an entire rainfall time series, inclusive of days on which no rainfall was observed. It is shown that the choice of wavelet can have a considerable impact on the robustness of the inversion. Combining a likelihood function that considers rainfall and streamflow errors with the DWT as a model data reduction technique allows the joint inference of hydrologic model parameters along with rainfall.
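
    As an illustration of the model-data-reduction step, the sketch below compresses a synthetic rainfall series with PyWavelets by keeping only the largest DWT coefficients; the paper's inversion machinery, likelihood and catchment model are not reproduced here:

    ```python
    import numpy as np
    import pywt  # PyWavelets

    rng = np.random.default_rng(0)
    rain = rng.gamma(0.3, 8.0, size=512)          # synthetic daily-rainfall stand-in

    coeffs = pywt.wavedec(rain, 'db4', level=5)   # multilevel discrete wavelet transform
    arr, slices = pywt.coeffs_to_array(coeffs)

    k = 64                                        # keep only the 64 largest coefficients
    arr[np.abs(arr) < np.sort(np.abs(arr))[-k]] = 0.0

    rec = pywt.waverec(pywt.array_to_coeffs(arr, slices, output_format='wavedec'), 'db4')
    err = np.linalg.norm(rec[:rain.size] - rain) / np.linalg.norm(rain)
    print(f"kept {k} of {rain.size} coefficients, relative error {err:.3f}")
    ```

    Searching over k wavelet coefficients instead of hundreds of daily values is what makes the joint inference of rainfall and model parameters tractable.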

  10. Implementation of an open-scenario, long-term space debris simulation approach

    NASA Astrophysics Data System (ADS)

    Stupl, J.; Nelson, B.; Faber, N.; Perez, A.; Carlino, R.; Yang, F.; Henze, C.; Karacalioglu, A.; O'Toole, C.; Swenson, J.

    This paper provides a status update on the implementation of a flexible, long-term space debris simulation approach. The motivation is to build a tool that can assess the long-term impact of various options for debris remediation, including the LightForce space debris collision avoidance scheme. State-of-the-art simulation approaches that assess the long-term development of the debris environment either use completely statistical approaches, or they rely on large time steps on the order of several (5-15) days if they simulate the positions of single objects over time. They cannot be easily adapted to investigate the impact of specific collision avoidance or de-orbit schemes, because the efficiency of a collision avoidance maneuver can depend on various input parameters, including ground station positions, space object parameters and the orbital parameters of the conjunctions; moreover, maneuvers take place on timescales much smaller than 5-15 days. For example, LightForce only changes the orbit of a certain object (aiming to reduce the probability of collision); it does not remove entire objects or groups of objects. In the same sense, it is also not straightforward to compare specific de-orbit methods with regard to potential collision risks during a de-orbit maneuver. To gain flexibility in assessing interactions with objects, we implement a simulation that includes every tracked space object in LEO, propagates all objects with high precision, and advances with variable-sized time steps as small as one second. It allows the assessment of the (potential) impact of changes to any object. The final goal is to employ a Monte Carlo approach to assess the debris evolution during the simulation time-frame of 100 years and to compare a baseline scenario to debris remediation scenarios or other scenarios of interest. To populate the initial simulation, we use the entire space-track object catalog in LEO. We then use a high precision propagator to propagate all objects over the entire simulation duration. If collisions are detected, the appropriate number of debris objects are created and inserted into the simulation framework. Depending on the scenario, further objects, e.g. due to new launches, can be added. At the end of the simulation, the total number of objects above a cut-off size and the number of detected collisions provide benchmark parameters for the comparison between scenarios. The simulation approach is computationally intensive as it involves tens of thousands of objects; hence we use a highly parallel approach employing up to a thousand cores on the NASA Pleiades supercomputer for a single run. This paper describes our simulation approach, the status of its implementation, the approach to developing scenarios and examples of first test runs.

  11. Implementation of an Open-Scenario, Long-Term Space Debris Simulation Approach

    NASA Technical Reports Server (NTRS)

    Nelson, Bron; Yang Yang, Fan; Carlino, Roberto; Dono Perez, Andres; Faber, Nicolas; Henze, Chris; Karacalioglu, Arif Goktug; O'Toole, Conor; Swenson, Jason; Stupl, Jan

    2015-01-01

    This paper provides a status update on the implementation of a flexible, long-term space debris simulation approach. The motivation is to build a tool that can assess the long-term impact of various options for debris-remediation, including the LightForce space debris collision avoidance concept that diverts objects using photon pressure [9]. State-of-the-art simulation approaches that assess the long-term development of the debris environment use either completely statistical approaches, or they rely on large time steps on the order of several days if they simulate the positions of single objects over time. They cannot be easily adapted to investigate the impact of specific collision avoidance schemes or de-orbit schemes, because the efficiency of a collision avoidance maneuver can depend on various input parameters, including ground station positions and orbital and physical parameters of the objects involved in close encounters (conjunctions). Furthermore, maneuvers take place on timescales much smaller than days. For example, LightForce only changes the orbit of a certain object (aiming to reduce the probability of collision), but it does not remove entire objects or groups of objects. In the same sense, it is also not straightforward to compare specific de-orbit methods in regard to potential collision risks during a de-orbit maneuver. To gain flexibility in assessing interactions with objects, we implement a simulation that includes every tracked space object in Low Earth Orbit (LEO) and propagates all objects with high precision and variable time-steps as small as one second. It allows the assessment of the (potential) impact of physical or orbital changes to any object. The final goal is to employ a Monte Carlo approach to assess the debris evolution during the simulation time-frame of 100 years and to compare a baseline scenario to debris remediation scenarios or other scenarios of interest. To populate the initial simulation, we use the entire space-track object catalog in LEO. We then use a high precision propagator to propagate all objects over the entire simulation duration. If collisions are detected, the appropriate number of debris objects are created and inserted into the simulation framework. Depending on the scenario, further objects, e.g. due to new launches, can be added. At the end of the simulation, the total number of objects above a cut-off size and the number of detected collisions provide benchmark parameters for the comparison between scenarios. The simulation approach is computationally intensive as it involves tens of thousands of objects; hence we use a highly parallel approach employing up to a thousand cores on the NASA Pleiades supercomputer for a single run. This paper describes our simulation approach, the status of its implementation, the approach to developing scenarios and examples of first test runs.
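
    A schematic of the variable time-step idea (straight-line motion stands in for the authors' high-precision propagator): advance every object with a large shared step, and shrink the step toward the one-second floor whenever any pair falls inside a screening distance:

    ```python
    import numpy as np

    def step(pos, vel, dt_max=60.0, dt_min=1.0, screen_km=10.0):
        """Advance all objects by a shared step, shrinking it toward the
        one-second floor when any pair is inside the screening distance."""
        d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)                 # ignore self-distances
        dt = dt_min if d.min() < screen_km else dt_max
        return pos + vel * dt, dt

    pos = np.array([[7000.0, 0.0, 0.0], [7004.0, 0.0, 0.0]])  # km, two objects
    vel = np.array([[0.0, 7.5, 0.0], [0.0, -7.5, 0.0]])       # km/s
    pos, dt = step(pos, vel)
    print("used dt =", dt, "s")                    # 1.0, since the pair is within 10 km
    ```

    The O(N²) pairwise screen shown here is exactly the kind of cost that motivates the highly parallel implementation on Pleiades described in the paper.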

  12. Toward a Time-Domain Fractal Lightning Simulation

    NASA Astrophysics Data System (ADS)

    Liang, C.; Carlson, B. E.; Lehtinen, N. G.; Cohen, M.; Lauben, D.; Inan, U. S.

    2010-12-01

    Electromagnetic simulations of lightning are useful for prediction of lightning properties and exploration of the underlying physical behavior. Fractal lightning models predict the spatial structure of the discharge, but thus far do not provide much information about discharge behavior in time and therefore cannot predict electromagnetic wave emissions or current characteristics. Here we develop a time-domain fractal lightning simulation from Maxwell's equations, the method of moments with the thin wire approximation, an adaptive time-stepping scheme, and a simplified electrical model of the lightning channel. The model predicts current pulse structure and electromagnetic wave emissions and can be used to simulate the entire duration of a lightning discharge. The model can be used to explore the electrical characteristics of the lightning channel, the temporal development of the discharge, and the effects of these characteristics on observable electromagnetic wave emissions.

  13. Computer Assisted Exercises - Background

    DTIC Science & Technology

    2003-06-01

    standard JSAF interface devices. As a result of this HITL capability, Red and Blue engaged in real-time, dynamic free play. Further, JSAF permitted...Red-vs.-Blue, free-play, entity-level synthetic battlespace. JSAF simulates warfare at the platform level. JSAF simulates the entire range of...works to ensure the free play of events maintains a course that serves the overall objectives.

  14. A 3D simulation look-up library for real-time airborne gamma-ray spectroscopy

    NASA Astrophysics Data System (ADS)

    Kulisek, Jonathan A.; Wittman, Richard S.; Miller, Erin A.; Kernan, Warnick J.; McCall, Jonathon D.; McConn, Ron J.; Schweppe, John E.; Seifert, Carolyn E.; Stave, Sean C.; Stewart, Trevor N.

    2018-01-01

    A three-dimensional look-up library of simulated gamma-ray spectra was developed to leverage, in real time, the abundance of data provided by a helicopter-mounted gamma-ray detection system consisting of 92 CsI-based radiation sensors with a highly angular-dependent response. We have demonstrated how this library can be used to help estimate the terrestrial gamma-ray background, develop simulated flight scenarios, and localize radiological sources. Source localization accuracy was significantly improved, particularly for weak sources, by estimating the entire gamma-ray spectrum while accounting for scattering in the air, and especially off the ground.
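
    A minimal stand-in for the look-up step: given a library of simulated spectra, one per candidate source position, pick the entry that best explains a measured spectrum under a Poisson log-likelihood (all names and data below are hypothetical):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    positions = np.array([[x, y] for x in range(10) for y in range(10)])  # 10x10 grid
    library = rng.uniform(1.0, 50.0, size=(100, 32))   # simulated spectrum per cell

    def localize(measured, library, positions):
        """Maximum Poisson log-likelihood match against the look-up library."""
        ll = (measured[None, :] * np.log(library) - library).sum(axis=1)
        return positions[np.argmax(ll)]

    truth = 37
    measured = rng.poisson(library[truth])             # noisy measured spectrum
    print(localize(measured, library, positions), "true:", positions[truth])
    ```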

  15. Multi-scale simulations of droplets in generic time-dependent flows

    NASA Astrophysics Data System (ADS)

    Milan, Felix; Biferale, Luca; Sbragaglia, Mauro; Toschi, Federico

    2017-11-01

    We study the deformation and dynamics of droplets in time-dependent flows using a diffuse interface model for two immiscible fluids. The numerical simulations are first benchmarked against analytical results for steady droplet deformation, and then extended to the more interesting case of time-dependent flows. The results of these time-dependent numerical simulations are compared against analytical models available in the literature, which assume the droplet shape to be an ellipsoid at all times, with time-dependent major and minor axes. In particular, we investigate the time-dependent deformation of a confined droplet in an oscillating Couette flow over the entire capillary range up to droplet break-up. In this way, these multi-component simulations prove to be a useful tool to establish from "first principles" the dynamics of droplets in complex flows involving multiple scales. Funding: European Union's Horizon 2020 research and innovation programme under Marie Sklodowska-Curie Grant Agreement No. 642069, and European Research Council under the European Community's Seventh Framework Programme, ERC Grant Agreement No. 339032.
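
    One classical benchmark such simulations are checked against is Taylor's small-deformation result for a droplet in shear, D = (L - B)/(L + B) ≈ Ca(19λ + 16)/(16λ + 16), with capillary number Ca and viscosity ratio λ; a quick evaluation (illustrative values, not the paper's data):

    ```python
    import numpy as np

    def taylor_deformation(Ca, lam=1.0):
        """Taylor's first-order deformation parameter for a sheared droplet."""
        return Ca * (19.0 * lam + 16.0) / (16.0 * lam + 16.0)

    Ca = np.linspace(0.0, 0.4, 9)    # capillary numbers below break-up
    print(taylor_deformation(Ca))
    ```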

  16. Fast Whole-Engine Stirling Analysis

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.; Demko, Rikako

    2005-01-01

    An experimentally validated approach is described for fast axisymmetric Stirling engine simulations. These simulations include the entire displacer interior and demonstrate it is possible to model a complete engine cycle in less than an hour. The focus of this effort was to demonstrate it is possible to produce useful Stirling engine performance results in a time-frame short enough to impact design decisions. The combination of utilizing the latest 64-bit Opteron computer processors, fiber-optical Myrinet communications, dynamic meshing, and across zone partitioning has enabled solution times at least 240 times faster than previous attempts at simulating the axisymmetric Stirling engine. A comparison of the multidimensional results, calibrated one-dimensional results, and known experimental results is shown. This preliminary comparison demonstrates that axisymmetric simulations can be very accurate, but more work remains to improve the simulations through such means as modifying the thermal equilibrium regenerator models, adding fluid-structure interactions, including radiation effects, and incorporating mechanodynamics.

  17. Fast Whole-Engine Stirling Analysis

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.; Demko, Rikako

    2007-01-01

    An experimentally validated approach is described for fast axisymmetric Stirling engine simulations. These simulations include the entire displacer interior and demonstrate it is possible to model a complete engine cycle in less than an hour. The focus of this effort was to demonstrate it is possible to produce useful Stirling engine performance results in a time-frame short enough to impact design decisions. The combination of utilizing the latest 64-bit Opteron computer processors, fiber-optical Myrinet communications, dynamic meshing, and across zone partitioning has enabled solution times at least 240 times faster than previous attempts at simulating the axisymmetric Stirling engine. A comparison of the multidimensional results, calibrated one-dimensional results, and known experimental results is shown. This preliminary comparison demonstrates that axisymmetric simulations can be very accurate, but more work remains to improve the simulations through such means as modifying the thermal equilibrium regenerator models, adding fluid-structure interactions, including radiation effects, and incorporating mechanodynamics.

  18. Simulated tsunami run-up amplification factors around Penang Island for preliminary risk assessment

    NASA Astrophysics Data System (ADS)

    Lim, Yong Hui; Kh'ng, Xin Yi; Teh, Su Yean; Koh, Hock Lye; Tan, Wai Kiat

    2017-08-01

    The mega-tsunami that struck Malaysia on 26 December 2004 affected 200 kilometers of northwest Peninsular Malaysia coastline from Perlis to Selangor. The tsunami scientific community anticipates that the next mega-tsunami could occur at any time. This rare catastrophic event has drawn the attention of the Malaysian government to appropriate risk-reduction measures, including timely and orderly evacuation. To effectively evacuate citizens to safe ground or the nearest designated emergency shelter, a well-prepared evacuation route is essential, with the estimated tsunami run-up heights and inundation distances on land clearly indicated on the evacuation map. The run-up heights and inundation distances are simulated by the in-house 2-D model TUNA-RP based upon credible scientific tsunami source scenarios derived from tectonic activity around the region. To provide a useful tool for estimating run-up heights along the entire coast of Penang Island, in this paper we compute tsunami run-up amplification factors based upon 2-D TUNA-RP model simulations. The inundation map and run-up amplification factors in six domains along the entire coastline of Penang Island are provided. The comparison between measured tsunami wave heights for the 2004 Andaman tsunami and TUNA-RP simulated values demonstrates good agreement.

  19. Autoregressive-model-based missing value estimation for DNA microarray time series data.

    PubMed

    Choong, Miew Keen; Charbit, Maurice; Yan, Hong

    2009-01-01

    Missing value estimation is important in DNA microarray data analysis. A number of algorithms have been developed to solve this problem, but they have several limitations. Most existing algorithms are not able to deal with the situation where a particular time point (column) of the data is missing entirely. In this paper, we present an autoregressive-model-based missing value estimation method (ARLSimpute) that takes into account the dynamic properties of microarray temporal data and the local similarity structures in the data. ARLSimpute is especially effective for the situation where a particular time point contains many missing values or where the entire time point is missing. Experimental results suggest that the proposed algorithm is an accurate missing value estimator in comparison with other imputation methods on simulated as well as real microarray time series datasets.
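
    The core of such a method can be sketched in a few lines: fit an AR(p) model by least squares on the observed portion of a series, then predict an entirely missing time point from the p preceding values (a simplified single-series sketch; ARLSimpute additionally exploits local similarity across genes):

    ```python
    import numpy as np

    def fit_ar(x, p):
        """Least-squares AR(p) fit: x[t] ~ a1*x[t-1] + ... + ap*x[t-p]."""
        X = np.column_stack([x[p - 1 - i : len(x) - 1 - i] for i in range(p)])
        a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
        return a

    def impute_missing_timepoint(series, t_missing, p=3):
        """Predict an entirely missing time point from the preceding values."""
        a = fit_ar(series[:t_missing], p)
        past = series[t_missing - p : t_missing][::-1]   # x[t-1], ..., x[t-p]
        return float(past @ a)

    rng = np.random.default_rng(3)
    x = np.sin(np.linspace(0, 20, 200)) + 0.05 * rng.normal(size=200)
    print(impute_missing_timepoint(x, 150), "true:", x[150])
    ```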

  20. Numerical simulation of two consecutive nasal respiratory cycles: toward a better understanding of nasal physiology.

    PubMed

    de Gabory, Ludovic; Reville, Nicolas; Baux, Yannick; Boisson, Nicolas; Bordenave, Laurence

    2018-01-16

    Computational fluid dynamic (CFD) simulations have greatly improved the understanding of nasal physiology. We postulate that simulating entire, repeated respiratory nasal cycles within the whole sinonasal cavities is mandatory to gather more accurate observations and better understand airflow patterns. A 3-dimensional (3D) sinonasal model was constructed from a healthy adult computed tomography (CT) scan and discretized into 6.6 million cells (mean volume 0.008 mm³). CFD simulations were performed with ANSYS Fluent v16.0.0 software with transient and turbulent airflow (k-ω model). Two respiratory cycles (8 seconds) were simulated to assess pressure, velocity, wall shear stress, and particle residence time. The pressure gradients within the sinus cavities varied according to their place of connection to the main passage. Alternating pressure gradients induced a slight pumping phenomenon close to the ostia, but no movement of air was observed within the sinus cavities. Strong air movement was observed within the inferior meatus during expiration but not during inspiration, and likewise in the olfactory cleft. Particle residence time was longer during expiration than inspiration due to nasal valve resistance, as if the expiratory phase were preparing the next inspiratory phase. Throughout expiration, some particles remained in contact with the lower turbinates. The posterior part of the olfactory cleft was gradually filled with particles that did not leave the nose at the next respiratory cycle. This pattern increased as the respiratory cycle was repeated. CFD is more efficient and reliable when the entire respiratory cycle is simulated and repeated, to avoid losing information.

  1. Initial Development of a Quadcopter Simulation Environment for Auralization

    NASA Technical Reports Server (NTRS)

    Christian, Andrew; Lawrence, Joseph

    2016-01-01

    This paper describes a recently created computer simulation of quadcopter flight dynamics for the NASA DELIVER project. The goal of this effort is to produce a simulation that includes a number of physical effects that are not usually found in other dynamics simulations (e.g., those used for flight controller development). These effects will be shown to have a significant impact on the fidelity of auralizations (entirely synthetic time-domain predictions of sound) based on this simulation when compared to a recording. High-fidelity auralizations are an important precursor to human subject tests that seek to understand the impact of vehicle configurations on noise and annoyance.

  2. Voltage controlled current source

    DOEpatents

    Casne, Gregory M.

    1992-01-01

    A seven-decade, voltage-controlled current source is described for use in testing intermediate range nuclear instruments; it covers the entire test current range from 10 picoamperes to 100 microamperes. High accuracy is obtained throughout all seven decades of output current with circuitry that includes a coordinated switching scheme, responsive to the input signal from a hybrid computer, to control the input voltage to an antilog amplifier, and to selectively connect a resistance to the antilog amplifier output to provide a continuous output current source as a function of a preset range of input voltage. An operator-controlled switch provides current adjustment for operation in either a real-time simulation test mode or a time response test mode.
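
    The antilog stage implements an exponential voltage-to-current law, so equal voltage increments step the output by equal ratios; a toy model (the one-decade-per-volt scaling is an assumption for illustration, not taken from the patent):

    ```python
    import numpy as np

    def output_current(v_in, i_min=10e-12, i_max=100e-6, v_span=7.0):
        """Antilog mapping of a 0..v_span volt control input onto the full
        10 pA to 100 uA range: one decade of current per volt (assumed)."""
        decades = np.log10(i_max / i_min)          # seven decades
        return i_min * 10.0 ** (v_in / v_span * decades)

    for v in range(8):
        print(v, "V ->", output_current(float(v)), "A")
    ```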

  3. Numerical simulation and nasal air-conditioning

    PubMed Central

    Keck, Tilman; Lindemann, Jörg

    2011-01-01

    Heating and humidification of the respiratory air are the main functions of the nasal airways, in addition to cleansing and olfaction. Optimal nasal air conditioning is mandatory for ideal pulmonary gas exchange in order to avoid desiccation and adhesion of the alveolar capillary bed. The complex three-dimensional anatomical structure of the nose makes it impossible to perform detailed in vivo studies of intranasal heating and humidification within the entire nasal airways using the various available technical set-ups. The main problem of in vivo temperature and humidity measurements is poor spatial and time resolution. Therefore, in vivo measurements are feasible only to a restricted extent, providing single temperature values, as the complete nose is not entirely accessible. Data on the overall performance of the nose are consequently based on one single measurement within each nasal segment; in vivo measurements within the entire nose are not feasible. These serious technical issues concerning in vivo measurements led to a large number of numerical simulation projects in the last few years, providing novel information about the complex functions of the nasal airways. In general, numerical simulations merely calculate predictions in a computational model, e.g. a realistic nose model, depending on the setting of the boundary conditions. Therefore, numerical simulations achieve only approximations of a possible real situation. The aim of this review is a synopsis of the technical expertise in the field of in vivo nasal air conditioning, the novel information from numerical simulations, and the current state of knowledge on the influence of nasal and sinus surgery on nasal air conditioning. PMID:22073112

  4. [Simulation and air-conditioning in the nose].

    PubMed

    Keck, T; Lindemann, J

    2010-05-01

    Heating and humidification of the respiratory air are the main functions of the nasal airways, in addition to cleansing and olfaction. Optimal nasal air conditioning is mandatory for ideal pulmonary gas exchange in order to avoid desiccation and adhesion of the alveolar capillary bed. The complex three-dimensional anatomical structure of the nose makes it impossible to perform detailed in vivo studies of intranasal heating and humidification within the entire nasal airways using the various available technical set-ups. The main problem of in vivo temperature and humidity measurements is poor spatial and time resolution. Therefore, in vivo measurements are feasible only to a restricted extent, providing single temperature values, as the complete nose is not entirely accessible. Data on the overall performance of the nose are consequently based on one single measurement within each nasal segment; in vivo measurements within the entire nose are not feasible. These serious technical issues concerning in vivo measurements led to a large number of numerical simulation projects in the last few years, providing novel information about the complex functions of the nasal airways. In general, numerical simulations only calculate predictions in a computational model, e.g. a realistic nose model, depending on the setting of the boundary conditions. Therefore, numerical simulations achieve only approximations of a possible real situation. The aim of this report is a synopsis of the technical expertise in the field of in vivo nasal air conditioning, the novel information from numerical simulations, and the current state of knowledge on the influence of nasal and sinus surgery on nasal air conditioning.

  5. Time simulation of flutter with large stiffness changes

    NASA Technical Reports Server (NTRS)

    Karpel, M.; Wieseman, C. D.

    1992-01-01

    Time simulation of flutter, involving large local structural changes, is formulated with a state-space model that is based on a relatively small number of generalized coordinates. Free-free vibration modes are first calculated for a nominal finite-element model with relatively large fictitious masses located at the area of structural changes. A low-frequency subset of these modes is then transformed into a set of structural modal coordinates with which the entire simulation is performed. These generalized coordinates and the associated oscillatory aerodynamic force coefficient matrices are used to construct an efficient time-domain, state-space model for a basic aeroelastic case. The time simulation can then be performed by simply changing the mass, stiffness and damping coupling terms when structural changes occur. It is shown that the size of the aeroelastic model required for time simulation with large structural changes at a few a priori known locations is similar to that required for direct analysis of a single structural case. The method is applied to the simulation of an aeroelastic wind-tunnel model. The diverging oscillations are followed by the activation of a tip-ballast decoupling mechanism that stabilizes the system but may cause significant transient overshoots.

  6. Time simulation of flutter with large stiffness changes

    NASA Technical Reports Server (NTRS)

    Karpel, Mordechay; Wieseman, Carol D.

    1992-01-01

    Time simulation of flutter, involving large local structural changes, is formulated with a state-space model that is based on a relatively small number of generalized coordinates. Free-free vibration modes are first calculated for a nominal finite-element model with relatively large fictitious masses located at the area of structural changes. A low-frequency subset of these modes is then transformed into a set of structural modal coordinates with which the entire simulation is performed. These generalized coordinates and the associated oscillatory aerodynamic force coefficient matrices are used to construct an efficient time-domain, state-space model for a basic aeroelastic case. The time simulation can then be performed by simply changing the mass, stiffness, and damping coupling terms when structural changes occur. It is shown that the size of the aeroelastic model required for time simulation with large structural changes at a few a priori known locations is similar to that required for direct analysis of a single structural case. The method is applied to the simulation of an aeroelastic wind-tunnel model. The diverging oscillations are followed by the activation of a tip-ballast decoupling mechanism that stabilizes the system but may cause significant transient overshoots.
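
    The mechanism change can be mimicked in a few lines: integrate the state-space model and swap the coupling terms of the system matrix at the event time (a generic sketch with a hypothetical two-state system, not the paper's aeroelastic model):

    ```python
    import numpy as np

    def simulate(A_pre, A_post, x0, t_switch, dt, t_end):
        """Integrate x' = A x, swapping A when the structural change
        (e.g. tip-ballast release) occurs at t_switch."""
        x, xs = np.asarray(x0, float), []
        for k in range(int(round(t_end / dt))):
            A = A_pre if k * dt < t_switch else A_post
            x = x + dt * (A @ x)          # forward Euler for brevity
            xs.append(x)
        return np.array(xs)

    A_pre = np.array([[0.0, 1.0], [-10.0, 0.05]])    # diverging oscillation
    A_post = np.array([[0.0, 1.0], [-10.0, -0.50]])  # decoupled, damped
    out = simulate(A_pre, A_post, [1.0, 0.0], t_switch=5.0, dt=1e-3, t_end=10.0)
    print("peak after switch:", float(np.abs(out[5000:, 0]).max()))  # transient overshoot
    ```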

  7. Downlink Probability Density Functions for EOS-McMurdo Sound

    NASA Technical Reports Server (NTRS)

    Christopher, P.; Jackson, A. H.

    1996-01-01

    The visibility times and communication link dynamics for the Earth Observations Satellite (EOS)-McMurdo Sound direct downlinks have been studied. The 16 day EOS periodicity may be shown with the Goddard Trajectory Determination System (GTDS), and the entire 16 day period should be simulated for representative link statistics. However, many attributes of the downlink are desired, so a faster orbit determination method is needed. We use the method of osculating elements for speed and accuracy in simulating the EOS orbit. The accuracy of the method of osculating elements is demonstrated by closely reproducing the observed 16 day Landsat periodicity. An autocorrelation function method is used to show the correlation spike at 16 days. The entire 16 day record of passes over McMurdo Sound is then used to generate statistics for innage time, outage time, elevation angle, antenna angle rates, and propagation loss. The elevation angle probability density function is compared with a 1967 analytic approximation which has been used for medium to high altitude satellites. One practical result of this comparison is the rare occurrence of zenith passes. The new result is functionally different from the earlier result, with a heavy emphasis on low elevation angles. EOS is one of a large class of sun-synchronous satellites which may be downlinked to McMurdo Sound. We examine delay statistics for an entire group of sun-synchronous satellites ranging from 400 km to 1000 km altitude. Outage probability density function results are presented three-dimensionally.

  8. Simulation of profile evolution from ramp-up to ramp-down and optimization of tokamak plasma termination with the RAPTOR code

    NASA Astrophysics Data System (ADS)

    Teplukhina, A. A.; Sauter, O.; Felici, F.; Merle, A.; Kim, D.; the TCV Team; the ASDEX Upgrade Team; the EUROfusion MST1 Team

    2017-12-01

    The present work demonstrates the capabilities of the transport code RAPTOR as a fast and reliable simulator of plasma profiles for the entire plasma discharge, i.e. from ramp-up to ramp-down. At this stage the code focuses on the simulation of electron temperature and poloidal flux profiles using a prescribed equilibrium and some prescribed kinetic profiles. In this work we extend the RAPTOR transport model to include a time-varying plasma equilibrium geometry and verify the changes via comparison with ASTRA code simulations. In addition, a new ad hoc transport model based on constant gradients and suitable for simulations of L-H and H-L mode transitions has been incorporated into the RAPTOR code and validated with rapid simulations of the time evolution of the safety factor and the electron temperature over entire AUG and TCV discharges. An optimization procedure for the plasma termination phase has also been developed during this work. We define the goal of the optimization as ramping down the plasma current as fast as possible while avoiding any disruptions caused by reaching physical or technical limits. Our numerical study of this problem shows that a fast decrease of plasma elongation during the current ramp-down can help reduce the plasma internal inductance. An early transition from H- to L-mode allows us to reduce the drop in poloidal beta, which is also important for plasma MHD stability and control. This work shows how these complex nonlinear interactions can be optimized automatically using relevant cost functions and constraints. Preliminary experimental results for TCV are presented.

  9. Lap time simulation and design optimisation of a brushed DC electric motorcycle for the Isle of Man TT Zero Challenge

    NASA Astrophysics Data System (ADS)

    Dal Bianco, N.; Lot, R.; Matthys, K.

    2018-01-01

    This work concerns the design of an electric motorcycle for the annual Isle of Man TT Zero Challenge. Optimal control theory was used to perform lap time simulation and design optimisation. A bespoke model was developed, featuring 3D road topology, vehicle dynamics and an electric power train composed of a lithium battery pack, brushed DC motors and a motor controller. The model runs simulations over the entire 37.73 mi (60.72 km) Snaefell Mountain Course. The work is validated using experimental data from the BX chassis of the Brunel Racing team, which ran during the 2009 to 2015 TT Zero races. Optimal control is used to improve drive train and power train configurations. Findings demonstrate computational efficiency, good lap time prediction and design optimisation potential, achieving a two-minute reduction of the reference lap time through changes in final drive gear ratio, battery pack size and motor configuration.

  10. The Million-Body Problem: Particle Simulations in Astrophysics

    ScienceCinema

    Rasio, Fred

    2018-05-21

    Computer simulations using particles play a key role in astrophysics. They are widely used to study problems across the entire range of astrophysical scales, from the dynamics of stars, gaseous nebulae, and galaxies, to the formation of the largest-scale structures in the universe. The 'particles' can be anything from elementary particles to macroscopic fluid elements, entire stars, or even entire galaxies. Using particle simulations as a common thread, this talk will present an overview of computational astrophysics research currently done in our theory group at Northwestern. Topics will include stellar collisions and the gravothermal catastrophe in dense star clusters.

  11. Numerical simulation of plasma response to externally applied resonant magnetic perturbation on the J-TEXT tokamak

    NASA Astrophysics Data System (ADS)

    Bicheng, LI; Zhonghe, JIANG; Jian, LV; Xiang, LI; Bo, RAO; Yonghua, DING

    2018-05-01

    Nonlinear magnetohydrodynamic (MHD) simulations of an equilibrium on the J-TEXT tokamak with applied resonant magnetic perturbations (RMPs) are performed with NIMROD (non-ideal MHD with rotation, open discussion). Numerical simulation of the plasma response to RMPs has been developed to investigate the magnetic topology, plasma density and rotation profile. The results indicate that the applied RMPs alone can excite the 2/1 mode as well as the 3/1 mode through toroidal mode coupling, and finally change the density profile via particle transport. At the same time, plasma rotation plays an important role during the entire evolution process.

  12. Expanded Processing Techniques for EMI Systems

    DTIC Science & Technology

    2012-07-01

    possible to perform better target detection using physics-based algorithms and the entire data set, rather than simulating a simpler data set and mapping... [Figure 4.25: Plots of simulated MetalMapper data for two oblate spheroidal targets]

  13. Atomistic insights into the nanosecond-long amorphization and crystallization cycle of nanoscale Ge2Sb2Te5: An ab initio molecular dynamics study

    NASA Astrophysics Data System (ADS)

    Branicio, Paulo S.; Bai, Kewu; Ramanarayan, H.; Wu, David T.; Sullivan, Michael B.; Srolovitz, David J.

    2018-04-01

    The complete process of amorphization and crystallization of the phase-change material Ge2Sb2Te5 is investigated using nanosecond ab initio molecular dynamics simulations. Varying the quench rate during the amorphization phase of the cycle results in a variety of structures, from entirely crystallized (-0.45 K/ps) to entirely amorphized (-16 K/ps). The 1.5-ns annealing simulations indicate that the crystallization process depends strongly on both the annealing temperature and the initial amorphous structure. The presence of crystal precursors (square rings) in the amorphous matrix enhances nucleation/crystallization kinetics. The simulation data are used to construct a combined continuous-cooling-transformation (CCT) and temperature-time-transformation (TTT) diagram. The nose of the CCT-TTT diagram corresponds to the minimum time for the onset of homogeneous crystallization and is located at 600 K and 70 ps. That corresponds to a critical cooling rate for amorphization of -4.5 K/ps. The results, in excellent agreement with experimental observations, suggest that a strategy utilizing multiple quench rates and annealing temperatures may be used to effectively optimize the reversible switching speed and enable fast and energy-efficient phase-change memories.
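
    The quoted critical cooling rate is consistent with the nose coordinates: taking the quench to start near the melt (for Ge2Sb2Te5 roughly 900 K; the exact starting temperature used below is an assumption for illustration),

    $$\dot{T}_{\mathrm{crit}} \approx \frac{T_{\mathrm{start}} - T_{\mathrm{nose}}}{t_{\mathrm{nose}}} \approx \frac{915\,\mathrm{K} - 600\,\mathrm{K}}{70\,\mathrm{ps}} \approx 4.5\ \mathrm{K/ps}.$$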

  14. Extensions to the Dynamic Aerospace Vehicle Exchange Markup Language

    NASA Technical Reports Server (NTRS)

    Brian, Geoffrey J.; Jackson, E. Bruce

    2011-01-01

    The Dynamic Aerospace Vehicle Exchange Markup Language (DAVE-ML) is a syntactical language for exchanging flight vehicle dynamic model data. It provides a framework for encoding entire flight vehicle dynamic model data packages for exchange and/or long-term archiving. Version 2.0.1 of DAVE-ML provides much of the functionality envisioned for exchanging aerospace vehicle data; however, it supports only scalar, time-independent data. Additional functionality is required to support vector and matrix data, abstract sub-system models, detail dynamic system models (both discrete and continuous), and define a dynamic data format (such as time-sequenced data) for validating dynamic system models and vehicle simulation packages. Extensions to DAVE-ML have been proposed to manage data as vectors and n-dimensional matrices, and to record dynamic data in a compatible form. These capabilities will improve the clarity of exchanged data, simplify the naming of parameters, and permit static and dynamic data to be stored using a common syntax within a single file, thereby enhancing the framework provided by DAVE-ML for exchanging entire flight vehicle dynamic simulation models.

  15. Ensembler: Enabling High-Throughput Molecular Simulations at the Superfamily Scale.

    PubMed

    Parton, Daniel L; Grinaway, Patrick B; Hanson, Sonya M; Beauchamp, Kyle A; Chodera, John D

    2016-06-01

    The rapidly expanding body of available genomic and protein structural data provides a rich resource for understanding protein dynamics with biomolecular simulation. While computational infrastructure has grown rapidly, simulations on an omics scale are not yet widespread, primarily because software infrastructure to enable simulations at this scale has not kept pace. It should now be possible to study protein dynamics across entire (super)families, exploiting both available structural biology data and conformational similarities across homologous proteins. Here, we present a new tool for enabling high-throughput simulation in the genomics era. Ensembler takes any set of sequences, from a single sequence to an entire superfamily, and shepherds them through various stages of modeling and refinement to produce simulation-ready structures. This includes comparative modeling to all relevant PDB structures (which may span multiple conformational states of interest), reconstruction of missing loops, addition of missing atoms, culling of nearly identical structures, assignment of appropriate protonation states, solvation in explicit solvent, and refinement and filtering with molecular simulation to ensure stable simulation. The output of this pipeline is an ensemble of structures ready for subsequent molecular simulations using computer clusters, supercomputers, or distributed computing projects like Folding@home. Ensembler thus automates much of the time-consuming process of preparing protein models suitable for simulation, while allowing scalability up to entire superfamilies. A particular advantage of this approach can be found in the construction of kinetic models of conformational dynamics, such as Markov state models (MSMs), which benefit from a diverse array of initial configurations that span the accessible conformational states to aid sampling. We demonstrate the power of this approach by constructing models for all catalytic domains in the human tyrosine kinase family, using all available kinase catalytic domain structures from any organism as structural templates. Ensembler is free and open source software licensed under the GNU General Public License (GPL) v2. It is compatible with Linux and OS X. The latest release can be installed via the conda package manager, and the latest source can be downloaded from https://github.com/choderalab/ensembler.

  16. Parallel network simulations with NEURON.

    PubMed

    Migliore, M; Cannia, C; Lytton, W W; Markram, Henry; Hines, M L

    2006-10-01

    The NEURON simulation environment has been extended to support parallel network simulations. Each processor integrates the equations for its subnet over an interval equal to the minimum (interprocessor) presynaptic spike generation to postsynaptic spike delivery connection delay. The performance of three published network models with very different spike patterns exhibits superlinear speedup on Beowulf clusters and demonstrates that spike communication overhead is often less than the benefit of an increased fraction of the entire problem fitting into high speed cache. On the EPFL IBM Blue Gene, almost linear speedup was obtained up to 100 processors. Increasing one model from 500 to 40,000 realistic cells exhibited almost linear speedup on 2,000 processors, with an integration time of 9.8 seconds and communication time of 1.3 seconds. The potential for speed-ups of several orders of magnitude makes practical the running of large network simulations that could otherwise not be explored.

  17. Parallel Network Simulations with NEURON

    PubMed Central

    Migliore, M.; Cannia, C.; Lytton, W.W; Markram, Henry; Hines, M. L.

    2009-01-01

    The NEURON simulation environment has been extended to support parallel network simulations. Each processor integrates the equations for its subnet over an interval equal to the minimum (interprocessor) presynaptic spike generation to postsynaptic spike delivery connection delay. The performance of three published network models with very different spike patterns exhibits superlinear speedup on Beowulf clusters and demonstrates that spike communication overhead is often less than the benefit of an increased fraction of the entire problem fitting into high speed cache. On the EPFL IBM Blue Gene, almost linear speedup was obtained up to 100 processors. Increasing one model from 500 to 40,000 realistic cells exhibited almost linear speedup on 2000 processors, with an integration time of 9.8 seconds and communication time of 1.3 seconds. The potential for speed-ups of several orders of magnitude makes practical the running of large network simulations that could otherwise not be explored. PMID:16732488
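
    The scheduling idea, each process integrating its subnet independently for the minimum inter-processor spike delay before exchanging events, can be sketched generically (this toy code is not NEURON's API):

    ```python
    import random

    class Subnet:
        """Toy subnet standing in for one processor's share of the network."""
        def __init__(self, name):
            self.name, self.inbox = name, []

        def integrate(self, t0, t1):
            # Advance local equations over [t0, t1) with no communication:
            # no spike emitted at t >= t0 can arrive before t0 + min_delay.
            self.inbox = [t for t in self.inbox if t >= t1]          # consume due events
            return [(self.name, t0 + random.random() * (t1 - t0))]   # emitted spikes

        def deliver(self, events, min_delay):
            self.inbox += [t + min_delay for (_, t) in events]

    def run(subnets, t_end, min_delay):
        t = 0.0
        while t < t_end:                  # on real hardware, each subnet's
            out = []                      # integrate() runs on its own rank
            for net in subnets:
                out += net.integrate(t, t + min_delay)
            for net in subnets:
                net.deliver(out, min_delay)
            t += min_delay

    run([Subnet("a"), Subnet("b")], t_end=50.0, min_delay=5.0)
    ```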

  18. Evaluation of Two Unique Side Stick Controllers in a Fixed-Base Flight Simulator

    NASA Technical Reports Server (NTRS)

    Mayer, Jann; Cox, Timothy H.

    2003-01-01

    A handling qualities analysis has been performed on two unique side stick controllers in a fixed-base F-18 flight simulator. Each stick, which uses a larger range of motion than is common for similar controllers, has a moving elbow cup that accommodates movement of the entire arm for control. The sticks are compared to the standard center stick in several typical fighter aircraft tasks. Several trends are visible in the time histories, pilot ratings, and pilot comments. The aggressive pilots preferred the center stick, because the side sticks are underdamped, causing overshoots and oscillations when large motions are executed. The less aggressive pilots preferred the side sticks, because of the smooth motion and low breakout forces. The aggressive pilots collectively gave the worst ratings, probably because of increased sensitivity of the simulator (compared to the actual F-18 aircraft), which can cause pilot-induced oscillations when aggressive inputs are made. Overall, the elbow cup is not a positive feature, because using the entire arm for control inhibits precision. Pilots had difficulty measuring their performance, particularly during the offset landing task, and tended to overestimate.

  19. Polynomial-time quantum algorithm for the simulation of chemical dynamics

    PubMed Central

    Kassal, Ivan; Jordan, Stephen P.; Love, Peter J.; Mohseni, Masoud; Aspuru-Guzik, Alán

    2008-01-01

    The computational cost of exact methods for quantum simulation using classical computers grows exponentially with system size. As a consequence, these techniques can be applied only to small systems. By contrast, we demonstrate that quantum computers could exactly simulate chemical reactions in polynomial time. Our algorithm uses the split-operator approach and explicitly simulates all electron-nuclear and interelectronic interactions in quadratic time. Surprisingly, this treatment is not only more accurate than the Born–Oppenheimer approximation but faster and more efficient as well, for all reactions with more than about four atoms. This is the case even though the entire electronic wave function is propagated on a grid with appropriately short time steps. Although the preparation and measurement of arbitrary states on a quantum computer is inefficient, here we demonstrate how to prepare states of chemical interest efficiently. We also show how to efficiently obtain chemically relevant observables, such as state-to-state transition probabilities and thermal reaction rates. Quantum computers using these techniques could outperform current classical computers with 100 qubits. PMID:19033207
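
    The split-operator step at the heart of the algorithm is easy to demonstrate classically; below is a minimal 1-D wave-packet propagation with FFTs (harmonic potential, hbar = m = 1), illustrative only, since the paper's point is executing this evolution on quantum hardware:

    ```python
    import numpy as np

    # One Strang step: exp(-iV dt/2) * IFFT * exp(-i k^2 dt/2) * FFT * exp(-iV dt/2)
    n, L, dt = 512, 40.0, 0.01
    x = np.linspace(-L / 2, L / 2, n, endpoint=False)
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    V = 0.5 * x**2                                       # harmonic test potential
    psi = (2 / np.pi) ** 0.25 * np.exp(-(x + 5.0) ** 2)  # displaced Gaussian packet

    half_v = np.exp(-0.5j * V * dt)                      # half kick, position space
    kin = np.exp(-0.5j * k**2 * dt)                      # full drift, momentum space
    for _ in range(1000):
        psi = half_v * np.fft.ifft(kin * np.fft.fft(half_v * psi))

    print("norm preserved:", float(np.sum(np.abs(psi) ** 2) * (L / n)))
    ```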

  20. Molecular dynamics analysis of transitions between rotational isomers in polymethylene

    NASA Astrophysics Data System (ADS)

    Zúñiga, Ignacio; Bahar, Ivet; Dodge, Robert; Mattice, Wayne L.

    1991-10-01

    Molecular dynamics trajectories have been computed and analyzed for linear chains, with sizes ranging from C10H22 to C100H202, and for cyclic C100H200. All hydrogen atoms are included discretely. All bond lengths, bond angles, and torsion angles are variable. Hazard plots show a tendency, at very short times, for correlations between rotational isomeric transitions at bond i and i±2, in much the same manner as in the Brownian dynamics simulations reported by Helfand and co-workers. This correlation of next nearest neighbor bonds in isolated polyethylene chains is much weaker than the correlation found for next nearest neighbor CH-CH2 bonds in poly(1,4-trans-butadiene) confined to the channel formed by crystalline perhydrotriphenylene [Dodge and Mattice, Macromolecules 24, 2709 (1991)]. Less than half of the rotational isomeric transitions observed in the entire trajectory for C50H102 can be described as strongly coupled next nearest neighbor transitions. If correlated motions are identified with successive transitions, which occur within a time interval of Δt≤1 ps, only 18% of the transitions occur through cooperative motion of bonds i and i±2. An analysis of the entire data set of 2482 rotational isomeric state transitions, observed in a 3.7 ns trajectory for C50H102 at 400 K, was performed using a formalism that treats the transitions at different bonds as being independent. On time scales of 0.1 ns or longer, the analysis based on independent bonds accounts reasonably well for the results from the molecular dynamics simulations. At shorter times the molecular dynamics simulation reveals a higher mobility than implied by the analysis assuming independent bonds, presumably due to the influence of correlations that are important at shorter times.

  1. Need for speed: An optimized gridding approach for spatially explicit disease simulations.

    PubMed

    Sellman, Stefan; Tsao, Kimberly; Tildesley, Michael J; Brommesson, Peter; Webb, Colleen T; Wennergren, Uno; Keeling, Matt J; Lindström, Tom

    2018-04-01

    Numerical models for simulating outbreaks of infectious diseases are powerful tools for informing surveillance and control strategy decisions. However, large-scale spatially explicit models can be limited by the amount of computational resources they require, which poses a problem when multiple scenarios need to be explored to provide policy recommendations. We introduce an easily implemented method that can reduce computation time in a standard Susceptible-Exposed-Infectious-Removed (SEIR) model without introducing any further approximations or truncations. It is based on a hierarchical infection process that operates on entire groups of spatially related nodes (cells in a grid) in order to efficiently filter out large volumes of susceptible nodes that would otherwise have required expensive calculations. After the filtering of the cells, only a subset of the nodes that were originally at risk are then evaluated for actual infection. The increase in efficiency is sensitive to the exact configuration of the grid, and we describe a simple method to find an estimate of the optimal configuration of a given landscape as well as a method to partition the landscape into a grid configuration. To investigate its efficiency, we compare the introduced methods to other algorithms and evaluate computation time, focusing on simulated outbreaks of foot-and-mouth disease (FMD) on the farm population of the USA, the UK and Sweden, as well as on three randomly generated populations with varying degree of clustering. The introduced method provided up to 500 times faster calculations than pairwise computation, and consistently performed as well or better than other available methods. This enables large scale, spatially explicit simulations such as for the entire continental USA without sacrificing realism or predictive power.
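
    The cell-level filter amounts to rejection sampling: draw once per cell against an overestimated infection probability, and only on success evaluate individual nodes with a conditional acceptance that restores each node's exact marginal probability. The kernel and landscape below are hypothetical, and the published algorithm is constructed more carefully so that the full joint process is statistically exact:

    ```python
    import numpy as np
    rng = np.random.default_rng(1)

    def beta(d):
        """Hypothetical distance kernel: per-pair transmission probability."""
        return 0.05 / (1.0 + d**2)

    def spread(inf_xy, sus_xy, cell_id, n_cells):
        """One infectious node: cheap cell-level draws, rare node-level work."""
        newly = []
        for c in range(n_cells):
            idx = np.flatnonzero(cell_id == c)
            if idx.size == 0:
                continue
            d = np.linalg.norm(sus_xy[idx] - inf_xy, axis=1)
            p_over = 1.0 - (1.0 - beta(d.min())) ** idx.size  # cell overestimate
            if rng.random() < p_over:                         # usually fails fast
                # thinning: p_over * beta(d)/p_over = beta(d) per node
                keep = rng.random(idx.size) < beta(d) / p_over
                newly.extend(idx[keep].tolist())
        return newly

    sus_xy = rng.uniform(0, 100, size=(5000, 2))
    cell_id = (sus_xy[:, 0] // 10).astype(int) * 10 + (sus_xy[:, 1] // 10).astype(int)
    print(spread(np.array([50.0, 50.0]), sus_xy, cell_id, n_cells=100))
    ```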

  2. Thermophysical properties of energetic ionic liquids/nitric acid mixtures: Insights from molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Hooper, Justin B.; Smith, Grant D.; Bedrov, Dmitry

    2013-09-01

    Molecular dynamics (MD) simulations of mixtures of the room-temperature ionic liquids (ILs) 1-butyl-3-methylimidazolium [BMIM]/dicyanamide [DCA] and [BMIM][NO3-] with HNO3 have been performed utilizing the polarizable, quantum chemistry based APPLE&P® potential. Experimentally it has been observed that [BMIM][DCA] exhibits hypergolic behavior when mixed with HNO3 while [BMIM][NO3-] does not. The structural, thermodynamic, and transport properties of the IL/HNO3 mixtures have been determined from equilibrium MD simulations of the bulk over the entire composition range (pure IL to pure HNO3). Additional (non-equilibrium) simulations of the composition profile of IL/HNO3 interfaces as a function of time have been utilized to estimate the composition-dependent mutual diffusion coefficients of the mixtures. The latter have been employed in continuum-level simulations in order to examine the nature (composition and width) of the IL/HNO3 interfaces on the millisecond time scale.
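
    For the interface simulations, a constant-D baseline can be extracted by fitting the composition profile to the error-function solution of Fick's second law. The sketch below uses illustrative units and synthetic data; recovering the composition-dependent coefficients described in the paper requires more than this (e.g. a Boltzmann-Matano treatment).

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

# Constant-D solution of Fick's second law for a sharp initial interface:
#   c(x, t) = 0.5 * (1 + erf(x / (2 sqrt(D t))))
def profile(x, D, t):
    return 0.5 * (1.0 + erf(x / (2.0 * np.sqrt(D * t))))

def fit_D(x_nm, c, t_ns):
    popt, _ = curve_fit(lambda x, D: profile(x, D, t_ns), x_nm, c,
                        p0=[1.0], bounds=(1e-6, np.inf))
    return popt[0]   # D in nm^2 / ns

# Synthetic demo: recover D = 0.5 nm^2/ns from a noiseless profile
x = np.linspace(-20.0, 20.0, 201)
c = profile(x, 0.5, t=10.0)
print(fit_D(x, c, t_ns=10.0))   # ~0.5
```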

  3. Performance simulation for the design of solar heating and cooling systems

    NASA Technical Reports Server (NTRS)

    Mccormick, P. O.

    1975-01-01

    Suitable approaches for evaluating the performance and cost of a solar heating and cooling system are considered, taking into account the value of simulating the entire system given the large number of parameters involved. Operational relations for collector efficiency, for a new improved collector and for a reference collector, are presented in a graph. Total costs for solar and for conventional heating, ventilation, and air conditioning systems as a function of time are shown in another graph.
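
    The collector-efficiency relation referred to is presumably of the standard Hottel-Whillier-Bliss operating-line form for a flat-plate collector (an assumption; this record presents the relation only graphically), with F_R the heat-removal factor, (τα) the transmittance-absorptance product, U_L the loss coefficient, T_i the collector inlet temperature, T_a the ambient temperature, and G the incident irradiance:

```latex
\eta \;=\; \frac{Q_u}{A_c\,G} \;=\; F_R(\tau\alpha) \;-\; F_R U_L\,\frac{T_i - T_a}{G}
```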

  4. Decrements in knee extensor and flexor strength are associated with performance fatigue during simulated basketball game-play in adolescent, male players.

    PubMed

    Scanlan, Aaron T; Fox, Jordan L; Borges, Nattai R; Delextrat, Anne; Spiteri, Tania; Dalbo, Vincent J; Stanton, Robert; Kean, Crystal O

    2018-04-01

    This study quantified lower-limb strength decrements and assessed the relationships between strength decrements and performance fatigue during simulated basketball. Ten adolescent, male basketball players completed a circuit-based basketball simulation. Sprint and jump performance were assessed during each circuit, with knee flexion and extension peak concentric torques measured at baseline, half-time, and full-time. Decrement scores were calculated for all measures. Mean knee flexor strength decrement was significantly (P < 0.05) related to sprint fatigue in the first half (R = 0.65), with dominant knee flexor strength (R = 0.67) and dominant flexor:extensor strength ratio (R = 0.77) decrement significantly (P < 0.05) associated with sprint decrement across the entire game. Mean knee extensor strength (R = 0.71), dominant knee flexor strength (R = 0.80), non-dominant knee flexor strength (R = 0.75), mean knee flexor strength (R = 0.81), non-dominant flexor:extensor strength ratio (R = 0.71), and mean flexor:extensor strength ratio (R = 0.70) decrement measures significantly (P < 0.05) influenced jump fatigue during the entire game. Lower-limb strength decrements may exert an important influence on performance fatigue during basketball activity in adolescent, male players. Consequently, training plans should aim to mitigate lower-limb fatigue to optimise sprint and jump performance during game-play.
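
    The decrement scores mentioned are presumably of the conventional percentage-decrement form used in repeated-effort fatigue testing (an assumption; the record does not spell out the exact formula):

```python
def percentage_decrement(values, higher_is_better=True):
    """Conventional percentage decrement score across repeated efforts.
    Jump heights (higher is better):  100 * (1 - total / (best * n))
    Sprint times (lower is better):   100 * (total / (best * n) - 1)
    Treat this variant as an assumption, not the paper's exact method."""
    n, total = len(values), sum(values)
    if higher_is_better:
        return 100.0 * (1.0 - total / (max(values) * n))
    return 100.0 * (total / (min(values) * n) - 1.0)

# e.g. jump heights (cm) across circuits of a simulated game
print(percentage_decrement([40.1, 39.0, 37.8, 36.5]))  # ~4.4% decrement
```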

  5. Modeling and analysis of hybrid pixel detector deficiencies for scientific applications

    NASA Astrophysics Data System (ADS)

    Fahim, Farah; Deptuch, Grzegorz W.; Hoff, James R.; Mohseni, Hooman

    2015-08-01

    Semiconductor hybrid pixel detectors often consist of a pixelated sensor layer bump bonded to a matching pixelated readout integrated circuit (ROIC). The sensor can range from high-resistivity Si to III-V materials, whereas a Si CMOS process is typically used to manufacture the ROIC. Independent device-physics and electronic design automation (EDA) tools, with significantly different solvers, are used to determine sensor characteristics and to verify functional performance of ROICs, respectively. Some physics solvers provide the capability of transferring data to the EDA tool. However, single-pixel transient simulations are either not feasible due to convergence difficulties or are prohibitively long. A simplified sensor model, which includes a current pulse in parallel with the detector equivalent capacitor, is often used; even then, spice-type top-level (entire array) simulations range from days to weeks. In order to analyze detector deficiencies for a particular scientific application, accurately defined transient behavioral models of all the functional blocks are required. Furthermore, various simulations, such as transient, noise, Monte Carlo, inter-pixel effects, etc., of the entire array need to be performed within a reasonable time frame without trading off accuracy. The sensor and the analog front-end can be modeled using a real-number modeling language as complex mathematical functions, or detailed data can be saved to text files for further top-level digital simulations. Parasitically aware digital timing is extracted in standard delay format (sdf) from the pixel digital back-end layout as well as from the periphery of the ROIC. For any given input, detector-level worst-case and best-case simulations are performed using a Verilog simulation environment to determine the output. Each top-level transient simulation takes no more than 10-15 minutes. The impact of changing key parameters such as sensor Poissonian shot noise, analog front-end bandwidth, and jitter due to clock distribution can then be accurately analyzed to determine ROIC architectural viability and bottlenecks. Hence the impact of the detector parameters on the scientific application can be studied.

  6. Effect of antacids on predicted steady-state cimetidine concentrations.

    PubMed

    Russell, W L; Lopez, L M; Normann, S A; Doering, P L; Guild, R T

    1984-05-01

    The purpose of this study was to evaluate the effects of antacids on predicted steady-state concentrations of cimetidine. Ten healthy volunteers received, in random order one week apart, cimetidine alone and cimetidine with antacid suspension. Blood was obtained at specified times and analyzed for cimetidine. Bioavailability was assessed by comparison of peak concentration, time to peak concentration, area under the curve, and time spent above 0.5 micrograms/ml. Single-dose data were extrapolated to steady state using computer simulation. Concurrent administration of antacid suspension reduced the bioavailability parameters by approximately 30%. When steady-state conditions were simulated, concentrations of cimetidine greater than or equal to 0.5 micrograms/ml were maintained for the entire dosing interval in seven of 10 subjects. These data suggest that temporal separation of cimetidine and antacid suspension may be unnecessary.
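
    The extrapolation step can be reproduced with nonparametric superposition, which assumes linear, time-invariant pharmacokinetics (the study's simulation software is not named in this record; the model and parameter values below are hypothetical):

```python
import numpy as np

def superpose_to_steady_state(t, c_single, tau, n_doses=20):
    """Predict multiple-dose concentrations by superposing shifted copies
    of a single-dose profile: css(t) = sum_k c1(t - k*tau).
    t: uniformly spaced times after the first dose (h)
    c_single: single-dose concentrations at t
    tau: dosing interval (h)"""
    dt = t[1] - t[0]
    shift = int(round(tau / dt))          # samples per dosing interval
    css = np.zeros_like(c_single)
    for k in range(n_doses):              # add each prior dose's tail
        s = k * shift
        if s >= len(t):
            break
        css[s:] += c_single[: len(t) - s]
    return css

# Demo with a one-compartment oral-absorption curve (hypothetical ka, ke)
t = np.linspace(0.0, 48.0, 481)           # h
ka, ke = 1.2, 0.35                        # 1/h
c1 = np.exp(-ke * t) - np.exp(-ka * t)    # arbitrary concentration units
css = superpose_to_steady_state(t, c1, tau=6.0)
print(f"predicted steady-state trough: {css[-1]:.3f} (arbitrary units)")
```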

  7. An expert system for estimating production rates and costs for hardwood group-selection harvests

    Treesearch

    Chris B. LeDoux; B. Gopalakrishnan; R. S. Pabba

    2003-01-01

    As forest managers shift their focus from stands to entire ecosystems, alternative harvesting methods such as group selection are increasingly being used. Results of several field time-and-motion studies and simulation runs were incorporated into an expert system for estimating production rates and costs associated with harvests of group-selection units of various size...

  8. Analysis of simulated angiographic procedures. Part 2: extracting efficiency data from audio and video recordings.

    PubMed

    Duncan, James R; Kline, Benjamin; Glaiberman, Craig B

    2007-04-01

    To create and test methods of extracting efficiency data from recordings of simulated renal stent procedures. Task analysis was performed and used to design a standardized testing protocol. Five experienced angiographers then performed 16 renal stent simulations using the Simbionix AngioMentor angiographic simulator. Audio and video recordings of these simulations were captured from multiple vantage points. The recordings were synchronized and compiled. A series of efficiency metrics (procedure time, contrast volume, and tool use) were then extracted from the recordings. The intraobserver and interobserver variability of these individual metrics was also assessed. The metrics were converted to costs and aggregated to determine the fixed and variable costs of a procedure segment or the entire procedure. Task analysis and pilot testing led to a standardized testing protocol suitable for performance assessment. Task analysis also identified seven checkpoints that divided the renal stent simulations into six segments. Efficiency metrics for these different segments were extracted from the recordings and showed excellent intra- and interobserver correlations. Analysis of the individual and aggregated efficiency metrics demonstrated large differences between segments as well as between different angiographers. These differences persisted when efficiency was expressed as either total or variable costs. Task analysis facilitated both protocol development and data analysis. Efficiency metrics were readily extracted from recordings of simulated procedures. Aggregating the metrics and dividing the procedure into segments revealed potential insights that could be easily overlooked because the simulator currently does not attempt to aggregate the metrics and only provides data derived from the entire procedure. The data indicate that analysis of simulated angiographic procedures will be a powerful method of assessing performance in interventional radiology.

  9. Time-Dependent Simulations of Turbopump Flows

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin C.; Kwak, Dochan

    2001-01-01

    The objective of the current effort is to provide a computational framework for design and analysis of the entire fuel supply system of a liquid rocket engine, including high-fidelity unsteady turbopump flow analysis. This capability is needed to support the design of pump sub-systems for advanced space transportation vehicles that are likely to involve liquid propulsion systems. To date, computational tools for design/analysis of turbopump flows have been based on relatively lower-fidelity methods. An unsteady, three-dimensional viscous flow analysis tool involving stationary and rotational components for the entire turbopump assembly has not been available for real-world engineering applications. The present effort provides developers with information such as transient flow phenomena at start-up, the impact of non-uniform inflows, system vibration, and the impact on the structure. In this paper, the progress toward the capability of complete simulation of the turbopump for a liquid rocket engine is reported. The Space Shuttle Main Engine (SSME) turbopump is used as a test case for evaluation of the hybrid MPI/OpenMP and MLP versions of the INS3D code. The relative motion of the grid systems for the rotor-stator interaction was obtained using overset grid techniques. The time accuracy of the scheme has been evaluated with simple test cases. Unsteady computations for the SSME turbopump, which contains 114 zones with 34.5 million grid points, are carried out on Origin 2000 systems at NASA Ames Research Center. Results from these time-accurate simulations with moving boundary capability are presented along with the performance of the parallel versions of the code.

  10. Accurate lithography simulation model based on convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Watanabe, Yuki; Kimura, Taiki; Matsunawa, Tetsuaki; Nojima, Shigeki

    2017-07-01

    Lithography simulation is an essential technique in today's semiconductor manufacturing process. In order to calculate an entire chip in realistic time, a compact resist model, formulated for fast calculation, is commonly used. To obtain an accurate compact resist model, it is necessary to fit a complicated non-linear model function, but it is difficult to choose an appropriate function manually because there are many options. This paper proposes a new compact resist model using a convolutional neural network (CNN), a deep learning technique. The CNN model makes it possible to determine an appropriate model function and achieve accurate simulation. Experimental results show that the CNN model can reduce critical dimension (CD) prediction errors by 70% compared with the conventional model.
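
    A minimal sketch of the idea in PyTorch (the paper's actual architecture, inputs, and training setup are not described in this record, so everything below is illustrative): a small CNN regresses a resist CD value from aerial-image clips around a measurement site.

```python
import torch
import torch.nn as nn

class ResistCNN(nn.Module):
    """Toy stand-in for a CNN-based compact resist model."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 8 * 8, 64), nn.ReLU(),
            nn.Linear(64, 1),                 # predicted CD (nm)
        )

    def forward(self, x):                     # x: (batch, 1, 32, 32) clips
        return self.head(self.features(x))

model = ResistCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One hypothetical training step on random stand-in data
clips = torch.randn(64, 1, 32, 32)            # aerial-image intensity clips
cd_nm = torch.randn(64, 1)                    # rigorously simulated CD labels
opt.zero_grad()
loss = loss_fn(model(clips), cd_nm)
loss.backward()
opt.step()
```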

  11. Multibody Modeling and Simulation for the Mars Phoenix Lander Entry, Descent and Landing

    NASA Technical Reports Server (NTRS)

    Queen, Eric M.; Prince, Jill L.; Desai, Prasun N.

    2008-01-01

    A multi-body flight simulation for the Phoenix Mars Lander has been developed that includes high fidelity six degree-of-freedom rigid-body models for the parachute and lander system. The simulation provides attitude and rate history predictions of all bodies throughout the flight, as well as loads on each of the connecting lines. In so doing, a realistic behavior of the descending parachute/lander system dynamics can be simulated that allows assessment of the Phoenix descent performance and identification of potential sensitivities for landing. This simulation provides a complete end-to-end capability of modeling the entire entry, descent, and landing sequence for the mission. Time histories of the parachute and lander aerodynamic angles are presented. The response of the lander system to various wind models and wind shears is shown to be acceptable. Monte Carlo simulation results are also presented.

  12. Simulation of lithium ion battery replacement in a battery pack for application in electric vehicles

    NASA Astrophysics Data System (ADS)

    Mathew, M.; Kong, Q. H.; McGrory, J.; Fowler, M.

    2017-05-01

    The design and optimization of the battery pack in an electric vehicle (EV) is essential for the continued integration of EVs into the global market. Reconfigurable battery packs have recently attracted significant interest, as they allow damaged cells to be removed from the circuit, limiting their impact on the entire pack. This paper provides a simulation framework that models a battery pack and examines the effect of replacing damaged cells with new ones. The cells within the battery pack vary stochastically, and the performance of the entire pack is evaluated under different conditions. The results show that by changing out cells in the battery pack, the state of health of the pack can be consistently maintained above a certain threshold value selected by the user. In situations where the cells are checked for replacement at discrete intervals, referred to as maintenance event intervals, it is found that the appropriate length of the interval depends on the mean time to failure of the individual cells. The simulation framework as well as the results from this paper can be utilized to better optimize lithium ion battery pack design in EVs and make long-term deployment of EVs more economically feasible.
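
    A toy version of such a framework, with illustrative parameter values (the paper's degradation model and thresholds are more detailed): cell states of health fade stochastically per cycle, and at each maintenance event any cell below the replacement threshold is swapped for a new one.

```python
import numpy as np

rng = np.random.default_rng(1)

N_CELLS, CYCLES, MAINT_EVERY = 96, 3000, 250   # illustrative values
FADE_MEAN, FADE_SD = 2e-4, 5e-5                # per-cycle SOH loss
REPLACE_BELOW = 0.80                           # replacement threshold

soh = np.ones(N_CELLS)                         # state of health per cell
replaced = 0
for cycle in range(1, CYCLES + 1):
    soh -= rng.normal(FADE_MEAN, FADE_SD, N_CELLS).clip(min=0.0)
    if cycle % MAINT_EVERY == 0:               # maintenance event interval
        weak = soh < REPLACE_BELOW
        replaced += int(weak.sum())
        soh[weak] = 1.0                        # install fresh cells

# Take pack SOH as the mean cell SOH (a simplification)
print(f"pack SOH {soh.mean():.3f}, cells replaced {replaced}")
```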

  13. Simulation of the M13 life cycle I: Assembly of a genetically-structured deterministic chemical kinetic simulation.

    PubMed

    Smeal, Steven W; Schmitt, Margaret A; Pereira, Ronnie Rodrigues; Prasad, Ashok; Fisk, John D

    2017-01-01

    To expand the quantitative, systems level understanding and foster the expansion of the biotechnological applications of the filamentous bacteriophage M13, we have unified the accumulated quantitative information on M13 biology into a genetically-structured, experimentally-based computational simulation of the entire phage life cycle. The deterministic chemical kinetic simulation explicitly includes the molecular details of DNA replication, mRNA transcription, protein translation and particle assembly, as well as the competing protein-protein and protein-nucleic acid interactions that control the timing and extent of phage production. The simulation reproduces the holistic behavior of M13, closely matching experimentally reported values of the intracellular levels of phage species and the timing of events in the M13 life cycle. The computational model provides a quantitative description of phage biology, highlights gaps in the present understanding of M13, and offers a framework for exploring alternative mechanisms of regulation in the context of the complete M13 life cycle.
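
    The deterministic chemical-kinetic skeleton of such a model is a set of coupled ODEs over DNA, mRNA, protein, and particle pools. The lumped species and rate constants below are placeholders, not the paper's full scheme:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder rate constants (per minute, arbitrary magnitudes)
k_rep, k_tx, k_tl, d_m, k_asm = 0.05, 0.5, 2.0, 0.2, 0.01

def rhs(t, y):
    dna, mrna, protein, phage = y
    return [
        k_rep * dna - k_asm * dna * protein,   # replication minus packaging
        k_tx * dna - d_m * mrna,               # transcription and mRNA decay
        k_tl * mrna - k_asm * dna * protein,   # translation minus consumption
        k_asm * dna * protein,                 # particle assembly
    ]

sol = solve_ivp(rhs, (0.0, 60.0), [1.0, 0.0, 0.0, 0.0],
                t_eval=np.linspace(0.0, 60.0, 301))
print(sol.y[:, -1])   # DNA, mRNA, protein, phage levels at t = 60 min
```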

  14. Time-Variable Transit Time Distributions in the Hyporheic Zone of a Headwater Mountain Stream

    NASA Astrophysics Data System (ADS)

    Ward, Adam S.; Schmadel, Noah M.; Wondzell, Steven M.

    2018-03-01

    Exchange of water between streams and their hyporheic zones is known to be dynamic in response to hydrologic forcing, variable in space, and to exist in a framework with nested flow cells. The expected result of heterogeneous geomorphic setting, hydrologic forcing, and between-feature interaction is hyporheic transit times that are highly variable in both space and time. Transit time distributions (TTDs) are important as they reflect the potential for hyporheic processes to impact biogeochemical transformations and ecosystems. In this study we simulate time-variable transit time distributions based on dynamic vertical exchange in a headwater mountain stream with observed, heterogeneous step-pool morphology. Our simulations include hyporheic exchange over a 600 m river corridor reach driven by continuously observed, time-variable hydrologic conditions for more than 1 year. We found that spatial variability at an instant in time is typically larger than temporal variation for the reach. Furthermore, we found reach-scale TTDs were marginally variable under all but the most extreme hydrologic conditions, indicating that TTDs are highly transferable in time. Finally, we found that aggregation of annual variation in space and time into a "master TTD" reasonably represents most of the hydrologic dynamics simulated, suggesting that this aggregation approach may provide a relevant basis for scaling from features or short reaches to entire networks.

  15. Persistence of initial conditions in continental scale air quality ...

    EPA Pesticide Factsheets

    This study investigates the effect of initial conditions (IC) for pollutant concentrations in the atmosphere and soil on simulated air quality for two continental-scale Community Multiscale Air Quality (CMAQ) model applications. One of these applications was performed for springtime and the second for summertime. Results show that a spin-up period of ten days commonly used in regional-scale applications may not be sufficient to reduce the effects of initial conditions to less than 1% of seasonally-averaged surface ozone concentrations everywhere while 20 days were found to be sufficient for the entire domain for the spring case and almost the entire domain for the summer case. For the summer case, differences were found to persist longer aloft due to circulation of air masses and even a spin-up period of 30 days was not sufficient to reduce the effects of ICs to less than 1% of seasonally-averaged layer 34 ozone concentrations over the southwestern portion of the modeling domain. Analysis of the effect of soil initial conditions for the CMAQ bidirectional NH3 exchange model shows that during springtime they can have an important effect on simulated inorganic aerosols concentrations for time periods of one month or longer. The effects are less pronounced during other seasons. The results, while specific to the modeling domain and time periods simulated here, suggest that modeling protocols need to be scrutinized for a given application and that it cannot be assum

  16. A Pipeline for Large Data Processing Using Regular Sampling for Unstructured Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berres, Anne Sabine; Adhinarayanan, Vignesh; Turton, Terece

    2017-05-12

    Large simulation data sets require a lot of time and computational resources to compute, store, analyze, visualize, and run user studies on. Today, the largest cost of a supercomputer is not hardware but maintenance, in particular energy consumption. Our goal is to balance energy consumption and the cognitive value of visualizations of the resulting data. This requires us to go through the entire processing pipeline, from simulation to user studies. To reduce the amount of resources, data can be sampled or compressed. While this adds computation time, the overhead is negligible compared to the simulation time. We built a processing pipeline using regular sampling as an example. The reasons for this choice are twofold: a simple example reduces unnecessary complexity, as we know what to expect from the results, and it provides a good baseline for future, more elaborate sampling methods. We measured time and energy for each test we ran, and we conducted user studies on Amazon Mechanical Turk (AMT) for a range of different results we produced through sampling.
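
    For unstructured grids, the regular-sampling stage itself can be as simple as interpolating vertex values onto a uniform lattice. A small sketch with synthetic data and an illustrative resolution:

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(2)
pts = rng.random((5000, 2))                      # unstructured vertex (x, y)
vals = np.sin(6 * pts[:, 0]) * np.cos(6 * pts[:, 1])

nx = ny = 128                                    # grid resolution trades
gx, gy = np.meshgrid(np.linspace(0, 1, nx),      # fidelity for storage
                     np.linspace(0, 1, ny))
sampled = griddata(pts, vals, (gx, gy), method="linear")
np.save("sampled_field.npy", sampled)            # compact regular array
```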

  17. Bayesian analyses of time-interval data for environmental radiation monitoring.

    PubMed

    Luo, Peng; Sharp, Julia L; DeVol, Timothy A

    2013-01-01

    Time-interval (time difference between two consecutive pulses) analysis based on the principles of Bayesian inference was investigated for online radiation monitoring. Using experimental and simulated data, Bayesian analysis of time-interval data [Bayesian (ti)] was compared with Bayesian and conventional frequentist analyses of counts in a fixed count time [Bayesian (cnt) and the single interval test (SIT), respectively]. The performances of the three methods were compared in terms of average run length (ARL) and detection probability for several simulated detection scenarios. Experimental data were acquired with a DGF-4C system in list mode. Simulated data were obtained using Monte Carlo techniques to draw random samples from the Poisson distribution. All statistical algorithms were developed using the R Project for statistical computing. Bayesian analysis of time-interval information provided a detection probability similar to that of Bayesian analysis of count information, but allowed a decision to be made with fewer pulses at relatively higher radiation levels. In addition, for cases with a very short presence of the source (shorter than the count time), time-interval information is more sensitive for detecting a change than count information, since the source counts are averaged with the background counts over the entire count time. The relationships of the source time, change points, and modifications to the Bayesian approach for increasing detection probability are presented.
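
    The core of the time-interval approach is conjugate: exponential inter-pulse intervals with rate λ admit a Gamma prior/posterior. A minimal sketch in Python (the study's algorithms were written in R and include ARL tuning and change-point handling; the numbers here are illustrative):

```python
from scipy.stats import gamma

# Gamma(shape=a, rate=b) prior on the Poisson pulse rate lam; observing n
# exponential intervals t_i updates it to Gamma(a + n, b + sum(t_i)).
a0, b0 = 1.0, 1.0                              # weak prior
intervals = [0.08, 0.05, 0.11, 0.04, 0.06]     # seconds between pulses
a = a0 + len(intervals)                        # posterior shape
b = b0 + sum(intervals)                        # posterior rate

lam_bkg = 5.0                                  # known background rate (1/s)
# Posterior probability that the rate exceeds background:
p_above = 1.0 - gamma.cdf(lam_bkg, a, scale=1.0 / b)
print(f"P(rate > background) = {p_above:.3f}")
```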

  18. Accelerated Monte Carlo Simulation on the Chemical Stage in Water Radiolysis using GPU

    PubMed Central

    Tian, Zhen; Jiang, Steve B.; Jia, Xun

    2018-01-01

    The accurate simulation of water radiolysis is an important step to understand the mechanisms of radiobiology and quantitatively test some hypotheses regarding radiobiological effects. However, the simulation of water radiolysis is highly time consuming, taking hours or even days to be completed by a conventional CPU processor. This time limitation hinders cell-level simulations for a number of research studies. We recently initiated efforts to develop gMicroMC, a GPU-based fast microscopic MC simulation package for water radiolysis. The first step of this project focused on accelerating the simulation of the chemical stage, the most time consuming stage in the entire water radiolysis process. A GPU-friendly parallelization strategy was designed to address the highly correlated many-body simulation problem caused by the mutual competitive chemical reactions between the radiolytic molecules. Two cases were tested, using a 750 keV electron and a 5 MeV proton incident in pure water, respectively. The time-dependent yields of all the radiolytic species during the chemical stage were used to evaluate the accuracy of the simulation. The relative differences between our simulation and the Geant4-DNA simulation were on average 5.3% and 4.4% for the two cases. Our package, executed on an Nvidia Titan black GPU card, successfully completed the chemical stage simulation of the two cases within 599.2 s and 489.0 s. As compared with Geant4-DNA that was executed on an Intel i7-5500U CPU processor and needed 28.6 h and 26.8 h for the two cases using a single CPU core, our package achieved a speed-up factor of 171.1-197.2. PMID:28323637

  1. Hybrid neuro-heuristic methodology for simulation and control of dynamic systems over time interval.

    PubMed

    Woźniak, Marcin; Połap, Dawid

    2017-09-01

    Simulation and positioning are very important aspects of computer-aided engineering, and both can be approached with traditional methods or with intelligent techniques. The difference between them lies in the way they process information. In the first case, simulating an object in a particular state of action requires performing the entire process to read the parameter values, which is not very convenient for objects whose simulation takes a long time, i.e., when the mathematical calculations are complicated. In the second case, an intelligent solution can efficiently support a devoted way of simulation, which enables us to simulate the object only in the situations that are necessary for the development process. We present research results on an intelligent simulation and control model of an electric drive engine vehicle. For a dedicated simulation method based on intelligent computation, in which an evolutionary strategy simulates the states of the dynamic model, an intelligent system based on a devoted neural network is introduced to control co-working modules during motion over a time interval. The presented experimental results show the implemented solution in a situation where a vehicle transports goods over an area with many obstacles, which provokes sudden changes in stability that may lead to destruction of the load. The applied neural network controller prevents the load from destruction by adjusting characteristics such as pressure, acceleration, and stiffness voltage to absorb the adverse changes of the ground.

  2. A Parallel Sliding Region Algorithm to Make Agent-Based Modeling Possible for a Large-Scale Simulation: Modeling Hepatitis C Epidemics in Canada.

    PubMed

    Wong, William W L; Feng, Zeny Z; Thein, Hla-Hla

    2016-11-01

    Agent-based models (ABMs) are computer simulation models that define interactions among agents and simulate emergent behaviors that arise from the ensemble of local decisions. ABMs have been increasingly used to examine trends in infectious disease epidemiology. However, the main limitation of ABMs is the high computational cost of a large-scale simulation. To improve the computational efficiency of large-scale ABM simulations, we built a parallelizable sliding region algorithm (SRA) for ABM and compared it to a nonparallelizable ABM. We developed a complex agent network and performed two simulations to model hepatitis C epidemics based on real demographic data from Saskatchewan, Canada. The first simulation used the SRA, which processed each postal-code subregion in turn. The second simulation processed the entire population simultaneously. The parallelizable SRA showed computational time savings with comparable results in a province-wide simulation. Using the same method, the SRA can be generalized to perform a country-wide simulation. Thus, this parallel algorithm makes it possible to use ABMs for large-scale simulation with limited computational resources.
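
    A stripped-down sketch of the parallel pattern (field names and the per-region logic are illustrative, not the paper's code): subregions are simulated independently in worker processes, then cross-region movers are reconciled serially at the synchronization point.

```python
from multiprocessing import Pool

def simulate_region(args):
    """Advance one subregion (e.g. a postal-code area) for one epoch."""
    region_id, agents = args
    stayers, movers = [], []
    for agent in agents:
        agent["years_infected"] = agent.get("years_infected", 0) + 1
        (movers if agent.pop("moving", False) else stayers).append(agent)
    return region_id, stayers, movers

def step(regions):
    """regions: dict region_id -> list of agent dicts; moving agents carry
    a 'dest' key. Run under `if __name__ == "__main__":` on platforms
    that spawn worker processes."""
    with Pool() as pool:
        results = pool.map(simulate_region, list(regions.items()))
    all_movers = []
    for region_id, stayers, movers in results:
        regions[region_id] = stayers
        all_movers.extend(movers)
    for agent in all_movers:            # serial synchronization point
        regions[agent["dest"]].append(agent)
    return regions
```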

  3. A full simulation of the Quetzal echo at the Mayan pyramid of Kukulkan at Chichen Itza in Mexico

    NASA Astrophysics Data System (ADS)

    Declercq, Nico F.; Degrieck, Joris; Briers, Rudy; Leroy, Oswald

    2003-04-01

    It is well known that a handclap in front of the staircase of the pyramid produces an echo that sounds similar to the chirp of the Quetzal bird. This phenomenon occurs due to diffraction. There are some publications concerning this phenomenon, and some first attempts to simulate it have been reported. However, no full simulation (amplitude, frequency, time) had ever been reported before. The present work presents a simulation that is based on the theory of the diffraction of plane waves and takes continuity conditions into account. The latter theory is the building block for an extended theory that tackles the diffraction of a spherical sound pulse. By means of these principles it is possible to entirely simulate the echo following a handclap in front of the staircase. [Work supported by The Flemish Institute for the Encouragement of Scientific and Technological Research in Industry (I.W.T.).]
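
    A crude delay-and-sum caricature conveys why the echo chirps: successive step faces return copies of the clap whose spacing grows with distance up the staircase. The geometry and numbers below are rough assumptions; the full simulation described above relies on diffraction theory, not simple reflections.

```python
import numpy as np

c, fs = 343.0, 44_100                 # speed of sound (m/s), sample rate
N, rise, tread = 91, 0.26, 0.26       # staircase: 91 steps, ~0.26 m each
L = 10.0                              # listener 10 m from the lowest step

rng = np.random.default_rng(3)
t = np.arange(int(0.005 * fs)) / fs   # 5 ms handclap burst
clap = np.exp(-t / 0.002) * rng.standard_normal(t.size)

out = np.zeros(int(0.5 * fs))
for n in range(N):
    d = np.hypot(L + n * tread, n * rise)   # listener -> face of step n
    k = int(round(2 * d / c * fs))          # round-trip delay in samples
    out[k:k + clap.size] += clap / d**2     # spherical-spreading loss
# `out` now holds the chirp-like echo train; to listen, e.g.:
# import scipy.io.wavfile as wav; wav.write("echo.wav", fs, out / abs(out).max())
```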

  4. Runtime Speculative Software-Only Fault Tolerance

    DTIC Science & Technology

    2012-06-01

    reliability of RSFT, an in-depth analysis of its window of vulnerability is also discussed and measured via simulated fault injection. The performance...propagation of faults through the entire program. For optimal performance, these techniques have to use heroic alias analysis to find the minimum set of...affect program output. No program source code or alias analysis is needed to analyze the fault propagation ahead of time. 2.3 Limitations of Existing

  5. Market-Based and System-Wide Fuel Cycle Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, Paul Philip Hood; Scopatz, Anthony; Gidden, Matthew

    This work introduces automated optimization into fuel cycle simulations in the Cyclus platform. This includes system-level optimizations, seeking a deployment plan that optimizes the performance over the entire transition, and market-level optimization, seeking an optimal set of material trades at each time step. These concepts were introduced in a way that preserves the flexibility of the Cyclus fuel cycle framework, one of its most important design principles.

  6. SIMNET: an insider's perspective

    NASA Astrophysics Data System (ADS)

    Cosby, L. Neale

    1995-04-01

    Simulator Networking (SIMNET) began with a young scientist's idea but ended up changing an entire industry and the way the military does business. And the story isn't over yet. SIMNET began as an advanced research project aimed at developing a core technology for networking hundreds of affordable simulators worldwide in real time, to practice joint collective warfighting skills and to develop better acquisition practices. It was a daring project that proved the Advanced Research Projects Agency (ARPA) mission of doing "what cannot be done." It was a serious threat to the existing simulation industry. As it turned out, the government got what it wanted: a low-cost, high-performance virtual simulation capability that could be proliferated like consumer electronics. This paper provides an insider's view of the program history, identifies some possible lessons for future developers, and speculates on future growth for SIMNET technology.

  7. Study of Ion Beam Forming Process in Electric Thruster Using 3D FEM Simulation

    NASA Astrophysics Data System (ADS)

    Huang, Tao; Jin, Xiaolin; Hu, Quan; Li, Bin; Yang, Zhonghai

    2015-11-01

    There are two algorithms for simulating the ion-beam-forming process in an electric thruster. The first is an electrostatic steady-state algorithm. First, an assumed surface, far enough from the accelerator grids, launches the ion beam, with the current density calculated from a theoretical formula. Second, these particles are advanced one by one according to the ion equations of motion until they leave the computational region. Third, the electrostatic potential is recalculated and updated by solving the Poisson equation. Finally, convergence is tested to determine whether the calculation should continue; the entire process is repeated until convergence is reached. The second is a time-dependent PIC algorithm. In each global time step, we assume that some new particles are produced in the simulation domain with a prescribed distribution of positions and velocities. All of the particles that are still in the system are advanced every local time step. Typically, we set the local time step small enough that a particle needs to be advanced about five times to cross an edge of the element in which it is located.
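
    A 1D caricature of the steady-state loop, with arbitrary units and illustrative numbers (the actual work is a 3D FEM simulation): trace ions ballistically through the current potential, deposit charge via continuity (ρ = j/v), re-solve the Poisson equation, and test convergence.

```python
import numpy as np

nx, gap, V0 = 101, 1.0, -5.0         # grid points, gap length, grid bias
x = np.linspace(0.0, gap, nx)
dx = x[1] - x[0]
phi = V0 * x / gap                    # initial guess: vacuum potential
q_m, v0, j_in = 1.0, 1.0, 0.5         # charge/mass, injection speed, current

for it in range(500):
    # Ballistic ions: energy conservation gives v(x); continuity gives rho.
    v = np.sqrt(v0**2 + 2.0 * q_m * np.maximum(phi[0] - phi, 0.0))
    rho = j_in / v
    # Solve phi'' = -rho with fixed-potential ends (tridiagonal system).
    A = (np.diag(-2.0 * np.ones(nx - 2)) +
         np.diag(np.ones(nx - 3), 1) + np.diag(np.ones(nx - 3), -1))
    rhs = -rho[1:-1] * dx * dx
    rhs[0] -= phi[0]
    rhs[-1] -= phi[-1]
    phi_new = phi.copy()
    phi_new[1:-1] = np.linalg.solve(A, rhs)
    if np.max(np.abs(phi_new - phi)) < 1e-8:   # convergence test
        break
    phi = 0.5 * (phi + phi_new)       # under-relax for stability
print(f"converged after {it} iterations")
```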

  8. Interactive, graphics processing unit-based evaluation of evacuation scenarios at the state scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S; Aaby, Brandon G; Yoginath, Srikanth B

    2011-01-01

    In large-scale scenarios, transportation modeling and simulation is severely constrained by simulation time. For example, few real-time simulators scale to evacuation traffic scenarios at the level of an entire state, such as Louisiana (approximately 1 million links) or Florida (2.5 million links). New simulation approaches are needed to overcome the severe computational demands of conventional (microscopic or mesoscopic) modeling techniques. Here, a new modeling and execution methodology is explored that holds the potential to provide a tradeoff among the level of behavioral detail, the scale of the transportation network, and real-time execution capabilities. A novel, field-based modeling technique and its implementation on graphics processing units are presented. Although additional research with input from domain experts is needed for refining and validating the models, the techniques reported here afford interactive experience at very large scales of multi-million road segments. Illustrative experiments on a few state-scale networks are described based on an implementation of this approach in a software system called GARFIELD. Current modeling capabilities and implementation limitations are described, along with possible use cases and future research.

  9. Evacuation simulation with consideration of obstacle removal and using game theory

    NASA Astrophysics Data System (ADS)

    Lin, Guan-Wen; Wong, Sai-Keung

    2018-06-01

    In this paper, we integrate a cellular automaton model with game theory to simulate crowd evacuation from a room with consideration of obstacle removal. The room has one or more exits, one of which is blocked by obstacles. The obstacles at the exit can be removed by volunteers. We investigate the cooperating and defecting behaviors of pedestrians during evacuation. The yielder game and volunteer's dilemma game are employed to resolve inter-pedestrian conflicts. An anticipation floor field is proposed to guide the pedestrians to avoid obstacles that are being removed. We conducted experiments to determine how a variety of conditions affect overall crowd evacuation and volunteer evacuation times. The conditions were the start time of obstacle removal, number of obstacles, placement of obstacles, time spent in obstacle removal, strength of the anticipation floor field, and obstacle visibility distance. We demonstrate how reciprocity can be achieved among pedestrians and how it increases the efficiency of the entire evacuation process.

  10. Consistent View of Protein Fluctuations from All-Atom Molecular Dynamics and Coarse-Grained Dynamics with Knowledge-Based Force-Field.

    PubMed

    Jamroz, Michal; Orozco, Modesto; Kolinski, Andrzej; Kmiecik, Sebastian

    2013-01-08

    It is widely recognized that atomistic Molecular Dynamics (MD), a classical simulation method, captures the essential physics of protein dynamics. That idea is supported by a theoretical study showing that various MD force-fields provide a consensus picture of protein fluctuations in aqueous solution [Rueda, M. et al. Proc. Natl. Acad. Sci. U.S.A. 2007, 104, 796-801]. However, atomistic MD cannot be applied to most biologically relevant processes due to its limitation to relatively short time scales. Much longer time scales can be accessed by properly designed coarse-grained models. We demonstrate that the aforementioned consensus view of protein dynamics from short (nanosecond) time scale MD simulations is fairly consistent with the dynamics of a coarse-grained protein model, the CABS model. The CABS model employs stochastic dynamics (a Monte Carlo method) and a knowledge-based force-field that is not biased toward the native structure of a simulated protein. Since CABS-based dynamics allows for the simulation of entire folding events (or multiple folding events) in a single run, integration of the CABS approach with all-atom MD promises a convenient (and computationally feasible) means for long-time multiscale molecular modeling of protein systems with atomistic resolution.

  11. Rheological behavior of the crust and mantle in subduction zones on time scales ranging from earthquakes (minutes) to millions of years, inferred from thermomechanical models and geodetic observations

    NASA Astrophysics Data System (ADS)

    Sobolev, Stephan; Muldashev, Iskander

    2016-04-01

    The key achievement of the geodynamic modelling community, to which the work of Evgenii Burov and his students greatly contributed, is the application of "realistic" mineral-physics-based non-linear rheological models to simulate deformation processes in the crust and mantle. Subduction, a type example of such a process, is an essentially multi-scale phenomenon with time scales spanning from the geological scale to the earthquake scale, with the seismic cycle in between. In this study we test the possibility of simulating the entire subduction process, from rupture (about a minute) to geological time (millions of years), with a single cross-scale thermomechanical model that employs elasticity, mineral-physics-constrained non-linear transient viscous rheology, and rate-and-state friction plasticity. First we generate a thermomechanical model of a subduction zone at the geological time scale, including a narrow subduction channel with "wet-quartz" visco-elasto-plastic rheology and low static friction. We next introduce into the same model the classic rate-and-state friction law in the subduction channel, leading to stick-slip instability. This model generates spontaneous earthquake sequences. In order to follow the deformation process in detail during the entire seismic cycle, and over multiple seismic cycles, we use an adaptive time-step algorithm that changes the step from 40 s during an earthquake to between one minute and five years during postseismic and interseismic processes. We observe many interesting deformation patterns and demonstrate that, contrary to conventional ideas, this model predicts that postseismic deformation is controlled by visco-elastic relaxation in the mantle wedge as early as hours to a day after great (M>9) earthquakes. We demonstrate that our results are consistent with the postseismic surface displacement after the Great Tohoku Earthquake over the day-to-four-year time range.
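
    The friction law referred to is presumably the standard Dieterich (aging-law) form of rate-and-state friction (the record does not write it out), with V the slip velocity, θ the state variable, D_c the characteristic slip distance, and a, b the rate- and state-sensitivity parameters; stick-slip requires velocity weakening, a - b < 0:

```latex
\mu = \mu_0 + a\,\ln\frac{V}{V_0} + b\,\ln\frac{V_0\,\theta}{D_c},
\qquad
\frac{\mathrm{d}\theta}{\mathrm{d}t} = 1 - \frac{V\theta}{D_c}
```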

  12. Femtosecond soliton source with fast and broad spectral tunability.

    PubMed

    Masip, Martin E; Rieznik, A A; König, Pablo G; Grosz, Diego F; Bragas, Andrea V; Martinez, Oscar E

    2009-03-15

    We present a complete set of measurements and numerical simulations of a femtosecond soliton source with fast and broad spectral tunability and nearly constant pulse width and average power. Solitons generated in a photonic crystal fiber, in the low-power coupling regime, can be tuned over a broad range of wavelengths, from 850 to 1200 nm, using the input power as the control parameter. These solitons keep an almost constant time duration (approximately 40 fs) and spectral width (approximately 20 nm) over the entire measured spectral range, regardless of input power. Our numerical simulations agree well with the measurements and predict a wide working wavelength range and robustness to input parameters.

  13. Insights into the Tunnel Mechanism of Cholesteryl Ester Transfer Protein through All-atom Molecular Dynamics Simulations

    DOE PAGES

    Lei, Dongsheng; Rames, Matthew; Zhang, Xing; ...

    2016-05-03

    Cholesteryl ester transfer protein (CETP) mediates cholesteryl ester (CE) transfer from the atheroprotective high density lipoprotein (HDL) cholesterol to the atherogenic low density lipoprotein cholesterol. In the past decade, this property has driven the development of CETP inhibitors, which have been evaluated in large scale clinical trials for treating cardiovascular diseases. Despite the pharmacological interest, little is known about the fundamental mechanism of CETP in CE transfer. Recent electron microscopy (EM) experiments have suggested a tunnel mechanism, and molecular dynamics simulations have shown that the flexible N-terminal distal end of CETP penetrates into the HDL surface and takes up a CE molecule through an open pore. However, it is not known whether a CE molecule can completely transfer through an entire CETP molecule. Here, we used all-atom molecular dynamics simulations to evaluate this possibility. The results showed that a hydrophobic tunnel inside CETP is sufficient to allow a CE molecule to completely transfer through the entire CETP within a predicted transfer time and at a rate comparable with those obtained through physiological measurements. Analyses of the detailed interactions revealed several residues that might be critical for CETP function, which may provide important clues for the effective development of CETP inhibitors and treatment of cardiovascular diseases.

  14. Collaborative simulation method with spatiotemporal synchronization process control

    NASA Astrophysics Data System (ADS)

    Zou, Yisheng; Ding, Guofu; Zhang, Weihua; Zhang, Jian; Qin, Shengfeng; Tan, John Kian

    2016-10-01

    When designing a complex mechatronic system, such as a high-speed train, it is relatively difficult to simulate the entire system's dynamic behaviors effectively, because it involves multi-disciplinary subsystems. Currently, the most practical approach to multi-disciplinary simulation is the interface-based coupling simulation method, but it faces a twofold challenge: spatial and temporal unsynchronization among the multi-directional coupled simulations of subsystems. A new collaborative simulation method with spatiotemporal synchronization process control is proposed for coupled simulation of a given complex mechatronic system across multiple subsystems on different platforms. The method consists of 1) a coupler-based coupling mechanism to define the interfacing and interaction mechanisms among subsystems, and 2) a simulation process control algorithm to realize the coupled simulation in a spatiotemporally synchronized manner. The test results from a case study show that the proposed method 1) can certainly be used to simulate subsystem interactions under different simulation conditions in an engineering system, and 2) effectively supports multi-directional coupled simulation among multi-disciplinary subsystems. This method has been successfully applied in China's high-speed train design and development processes, demonstrating that it can be applied to a wide range of engineering systems design and simulation with improved efficiency and effectiveness.

  15. Non-monotonic dynamics of water in its binary mixture with 1,2-dimethoxy ethane: A combined THz spectroscopic and MD simulation study.

    PubMed

    Das Mahanta, Debasish; Patra, Animesh; Samanta, Nirnay; Luong, Trung Quan; Mukherjee, Biswaroop; Mitra, Rajib Kumar

    2016-10-28

    Combined experimental (mid- and far-infrared FTIR spectroscopy and THz time-domain spectroscopy (TTDS, 0.3-1.6 THz)) and molecular dynamics (MD) simulation techniques are used to understand the evolution of the structure and dynamics of water in its binary mixture with 1,2-dimethoxy ethane (DME) over the entire concentration range. The cooperative hydrogen bond dynamics of water obtained from Debye relaxation of the TTDS data reveals a non-monotonic behaviour in which the collective dynamics is much faster in the low Xw region (where Xw is the mole fraction of water in the mixture), whereas in the Xw ∼ 0.8 region the dynamics gets slower than that of pure water. The concentration dependence of the reorientation times of water, calculated from the MD simulations, also captures this non-monotonic character. The MD simulation trajectories reveal the presence of large-amplitude angular jumps, which dominate the orientational relaxation. We rationalize the non-monotonic, concentration-dependent orientational dynamics by identifying two different physical mechanisms that operate in the high and low water concentration regimes.
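
    The relaxation analysis referred to presumably fits the complex permittivity in the TTDS window to a Debye form (shown here for a single relaxation mode; multi-Debye sums are used when fast and slow modes are both resolved), with τ_D the cooperative relaxation time extracted as a function of Xw:

```latex
\hat{\varepsilon}(\nu) \;=\; \varepsilon_\infty \;+\; \frac{\varepsilon_s - \varepsilon_\infty}{1 + i\,2\pi\nu\,\tau_D}
```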

  16. Problem-Solving in the Pre-Clinical Curriculum: The Uses of Computer Simulations.

    ERIC Educational Resources Information Center

    Michael, Joel A.; Rovick, Allen A.

    1986-01-01

    Promotes the use of computer-based simulations in the pre-clinical medical curriculum as a means of providing students with opportunities for problem solving. Describes simple simulations of skeletal muscle loads, complex simulations of major organ systems and comprehensive simulation models of the entire human body. (TW)

  17. Mapping, Awareness, and Virtualization Network Administrator Training Tool (MAVNATT) Architecture and Framework

    DTIC Science & Technology

    2015-06-01

    unit may set up and tear down the entire tactical infrastructure multiple times per day. This tactical network administrator training is a critical...language and runs on Linux and Unix based systems. All provisioning is based around the Nagios Core application, a powerful backend solution for network...start up a large number of virtual machines quickly. CORE supports the simulation of fixed and mobile networks. CORE is open-source, written in Python

  18. Accelerating Molecular Dynamic Simulation on Graphics Processing Units

    PubMed Central

    Friedrichs, Mark S.; Eastman, Peter; Vaidyanathan, Vishal; Houston, Mike; Legrand, Scott; Beberg, Adam L.; Ensign, Daniel L.; Bruns, Christopher M.; Pande, Vijay S.

    2009-01-01

    We describe a complete implementation of all-atom protein molecular dynamics running entirely on a graphics processing unit (GPU), including all standard force field terms, integration, constraints, and implicit solvent. We discuss the design of our algorithms and important optimizations needed to fully take advantage of a GPU. We evaluate its performance, and show that it can be more than 700 times faster than a conventional implementation running on a single CPU core. PMID:19191337

  1. A Three-Dimensional Parallel Time-Accurate Turbopump Simulation Procedure Using Overset Grid System

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Chan, William; Kwak, Dochan

    2002-01-01

    The objective of the current effort is to provide a computational framework for design and analysis of the entire fuel supply system of a liquid rocket engine, including high-fidelity unsteady turbopump flow analysis. This capability is needed to support the design of pump sub-systems for advanced space transportation vehicles that are likely to involve liquid propulsion systems. To date, computational tools for design/analysis of turbopump flows have been based on relatively lower-fidelity methods. An unsteady, three-dimensional viscous flow analysis tool involving stationary and rotational components for the entire turbopump assembly has not been available for real-world engineering applications. The present effort provides developers with information such as transient flow phenomena at start-up and the impact of non-uniform inflows, and will eventually address system vibration and the impact on the structure. In this paper, the progress toward the capability of complete simulation of the turbopump for a liquid rocket engine is reported. The Space Shuttle Main Engine (SSME) turbopump is used as a test case for evaluation of the hybrid MPI/OpenMP and MLP versions of the INS3D code. CAD-to-solution auto-scripting capability is being developed for turbopump applications. The relative motion of the grid systems for the rotor-stator interaction was obtained using overset grid techniques. Unsteady computations for the SSME turbopump, which contains 114 zones with 34.5 million grid points, are carried out on Origin 3000 systems at NASA Ames Research Center. Results from these time-accurate simulations with moving boundary capability are presented along with the performance of the parallel versions of the code.

  2. Long Dynamics Simulations of Proteins Using Atomistic Force Fields and a Continuum Representation of Solvent Effects: Calculation of Structural and Dynamic Properties

    PubMed Central

    Li, Xianfeng; Hassan, Sergio A.; Mehler, Ernest L.

    2006-01-01

    Long dynamics simulations were carried out on the B1 immunoglobulin-binding domain of streptococcal protein G (ProtG) and bovine pancreatic trypsin inhibitor (BPTI) using atomistic descriptions of the proteins and a continuum representation of solvent effects. To mimic frictional and random collision effects, Langevin dynamics (LD) were used. The main goal of the calculations was to explore the stability of tens-of-nanosecond trajectories as generated by this molecular mechanics approximation and to analyze in detail structural and dynamical properties. Conformational fluctuations, order parameters, cross correlation matrices, residue solvent accessibilities, pKa values of titratable groups, and hydrogen-bonding (HB) patterns were calculated from all of the trajectories and compared with available experimental data. The simulations comprised over 40 ns per trajectory for ProtG and over 30 ns per trajectory for BPTI. For comparison, explicit water molecular dynamics simulations (EW/MD) of 3 ns and 4 ns, respectively, were also carried out. Two continuum simulations were performed on each protein using the CHARMM program, one with the all-atom PAR22 representation of the protein force field (here referred to as PAR22/LD simulations) and the other with the modifications introduced by the recently developed CMAP potential (CMAP/LD simulations). The explicit solvent simulations were performed with PAR22 only. Solvent effects are described by a continuum model based on screened Coulomb potentials (SCP) reported earlier, i.e., the SCP-based implicit solvent model (SCP–ISM). For ProtG, both the PAR22/LD and the CMAP/LD 40-ns trajectories were stable, yielding Cα root mean square deviations (RMSD) of about 1.0 and 0.8 Å respectively along the entire simulation time, compared to 0.8 Å for the EW/MD simulation. For BPTI, only the CMAP/LD trajectory was stable for the entire 30-ns simulation, with a Cα RMSD of ≈ 1.4 Å, while the PAR22/LD trajectory became unstable early in the simulation, reaching a Cα RMSD of about 2.7 Å and remaining at this value until the end of the simulation; the Cα RMSD of the EW/MD simulation was about 1.5 Å. The source of the instabilities of the BPTI trajectories in the PAR22/LD simulations was explored by an analysis of the backbone torsion angles. To further validate the findings from this analysis of BPTI, a 35-ns SCP–ISM simulation of Ubiquitin (Ubq) was carried out. For this protein, the CMAP/LD simulation was stable for the entire simulation time (Cα RMSD of ≈1.0 Å), while the PAR22/LD trajectory showed a trend similar to that in BPTI, reaching a Cα RMSD of ≈1.5 Å at 7 ns. All the calculated properties were found to be in agreement with the corresponding experimental values, although local deviations were also observed. HB patterns were also well reproduced by all the continuum solvent simulations with the exception of solvent-exposed side chain–side chain (sc–sc) HB in ProtG, where several of the HB interactions observed in the crystal structure and in the EW/MD simulation were lost. The overall analysis reported in this work suggests that the combination of an atomistic representation of a protein with a CMAP/CHARMM force field and a continuum representation of solvent effects such as the SCP–ISM provides a good description of structural and dynamic properties obtained from long computer simulations. 
Although the SCP–ISM simulations (CMAP/LD) reported here were shown to be stable and the properties well reproduced, further refinement is needed to attain a level of accuracy suitable for more challenging biological applications, particularly the study of protein–protein interactions. PMID:15959866

  3. Effects of simulator motion and visual characteristics on rotorcraft handling qualities evaluations

    NASA Technical Reports Server (NTRS)

    Mitchell, David G.; Hart, Daniel C.

    1993-01-01

    The pilot's perceptions of aircraft handling qualities are influenced by a combination of the aircraft dynamics, the task, and the environment under which the evaluation is performed. When the evaluation is performed in a ground-based simulator, the characteristics of the simulation facility also come into play. Two studies were conducted on NASA Ames Research Center's Vertical Motion Simulator to determine the effects of simulator characteristics on perceived handling qualities. Most evaluations were conducted with a baseline set of rotorcraft dynamics, using a simple transfer-function model of an uncoupled helicopter, under different conditions of visual time delays and motion command washout filters. Differences in pilot opinion were found as the visual and motion parameters were changed, reflecting a change in the pilots' perceptions of handling qualities rather than changes in the aircraft model itself. The results indicate a need for tailoring the motion washout dynamics to suit the task. Visual-delay data are inconclusive but suggest that it may be better to allow some time delay in the visual path to minimize the mismatch between visual and motion cues, rather than eliminate the visual delay entirely through lead compensation.

  4. Taking on the doctor role in whole-task simulation.

    PubMed

    Bartlett, Maggie; Gay, Simon P; Kinston, Ruth; McKinley, Robert

    2018-06-01

    Untimed simulated primary care consultations focusing on safe and effective clinical outcomes were first introduced into undergraduate medical education in Otago, New Zealand, in 2004. We extended this concept and included a secondary care version for final-year students. We offer students opportunities to manage entire consultations, which include making and implementing clinical decisions with simulated patients (SPs). Formative feedback is given by SPs on the achievement of pre-determined outcomes and by faculty members on clinical decision making, medical record keeping and case presentation. We explored students' perceptions of the educational value of the sessions using post-session questionnaires (n = 194) and focus groups (n = 36 participants overall). Students perceived that the sessions were useful, enjoyable and relevant to early postgraduate practice. They identified useful learning in time management, communication, decision making, prescribing and managing uncertainty. Students identified gaps in their knowledge and recognised that they had been offered opportunities to develop decision-making skills by having to take responsibility for whole consultations and all the decisions included within them. Most students reported positive impacts on learning, although a small minority reported negative impacts on their perceptions of their ability to cope as a junior doctor. These simulated consultation sessions appear to lead to the effective learning of a range of skills that students need in order to work as junior doctors. Facilitators leading such sessions must be alert to the possibility of educational harm arising from such simulations, and the need to address this during the debriefing. © 2017 John Wiley & Sons Ltd and The Association for the Study of Medical Education.

  5. Detailed Comparison of DNS to PSE for Oblique Breakdown at Mach 3

    NASA Technical Reports Server (NTRS)

    Mayer, Christian S. J.; Fasel, Hermann F.; Choudhari, Meelan; Chang, Chau-Lyan

    2010-01-01

    A pair of oblique waves at low amplitudes is introduced in a supersonic flat-plate boundary layer. Their downstream development and the concomitant process of laminar to turbulent transition is then investigated numerically using Direct Numerical Simulations (DNS) and Parabolized Stability Equations (PSE). This abstract is the last part of an extensive study of the complete transition process initiated by oblique breakdown at Mach 3. In contrast to the previous simulations, the symmetry condition in the spanwise direction is removed for the simulation presented in this abstract. By removing the symmetry condition, we are able to confirm that the flow is indeed symmetric over the entire computational domain. Asymmetric modes grow in the streamwise direction but reach only small amplitude values at the outflow. Furthermore, this abstract discusses new time-averaged data from our previous simulation CASE 3 and compares PSE data obtained from NASA's LASTRAC code to DNS results.

  6. Cross-Scale Modelling of Subduction from Minute to Million of Years Time Scale

    NASA Astrophysics Data System (ADS)

    Sobolev, S. V.; Muldashev, I. A.

    2015-12-01

    Subduction is an essentially multi-scale process with time-scales spanning from geological to earthquake scale with the seismic cycle in-between. Modelling of such a process constitutes one of the largest challenges in geodynamic modelling today. Here we present a cross-scale thermomechanical model capable of simulating the entire subduction process from rupture (1 min) to geological time (millions of years) that employs elasticity, mineral-physics-constrained non-linear transient viscous rheology and rate-and-state friction plasticity. The model generates spontaneous earthquake sequences. The adaptive time-step algorithm recognizes the moment of instability and drops the integration time step to its minimum value of 40 sec during the earthquake. The time step is then gradually increased to its maximal value of 5 yr, following decreasing displacement rates during the postseismic relaxation. Efficient implementation of numerical techniques allows long-term simulations with total times of millions of years. This technique allows the deformation process to be followed in detail during the entire seismic cycle and across multiple seismic cycles. We observe various deformation patterns during the modelled seismic cycle that are consistent with surface GPS observations and demonstrate that, contrary to conventional ideas, the postseismic deformation may be controlled by viscoelastic relaxation in the mantle wedge, starting within only a few hours after great (M>9) earthquakes. Interestingly, in our model the average slip velocity at the fault closely follows a hyperbolic decay law. In natural observations, such deformation is interpreted as afterslip, while in our model it is caused by the viscoelastic relaxation of the mantle wedge with viscosity strongly varying with time. We demonstrate that our results are consistent with the postseismic surface displacement after the Great Tohoku Earthquake for the day-to-year time range. We will also present results of the modeling of deformation of the upper plate during multiple earthquake cycles at time scales of hundreds of thousands to millions of years and discuss the effect of great earthquakes on the long-term stress field in the upper plate.
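
    The adaptive time-step idea is simple to state: cut the step to the seismic minimum when an instability (rapid slip) is detected, then relax it back toward the geological maximum. A schematic Python sketch follows; the thresholds and the model interface are invented purely for illustration and are not the authors' code.

        # Schematic adaptive time-stepper (illustrative thresholds and interface).
        DT_MIN = 40.0                 # seconds: resolves the earthquake itself
        DT_MAX = 5.0 * 3.15e7         # about 5 years, in seconds
        GROWTH = 1.5                  # relaxation factor after the event

        def run(model, t_end):
            t, dt = 0.0, DT_MAX
            while t < t_end:
                slip_rate = model.step(dt)            # advance state, return peak slip rate
                if slip_rate > model.seismic_threshold:
                    dt = DT_MIN                       # instability detected: drop the step
                else:
                    dt = min(dt * GROWTH, DT_MAX)     # postseismic: grow the step gradually
                t += dt
            return model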

  7. Rapid Optimization of External Quantum Efficiency of Thin Film Solar Cells Using Surrogate Modeling of Absorptivity.

    PubMed

    Kaya, Mine; Hajimirza, Shima

    2018-05-25

    This paper uses surrogate modeling for very fast design of thin-film solar cells with improved solar-to-electricity conversion efficiency. We demonstrate that the wavelength-specific optical absorptivity of a thin-film multi-layered amorphous-silicon-based solar cell can be modeled accurately with neural networks and can be efficiently approximated as a function of cell geometry and wavelength. Consequently, the external quantum efficiency can be computed efficiently by averaging surrogate absorption and carrier recombination contributions over the entire irradiance spectrum. Using this framework, we optimize a multi-layer structure consisting of ITO front coating, metallic back-reflector and oxide layers to achieve maximum efficiency. The computation time required for an entire model fitting and optimization is 5 to 20 times shorter than that of the best previous optimization based on direct Finite Difference Time Domain (FDTD) simulations, demonstrating the value of surrogate modeling. The resulting optimization solution suggests at least 50% improvement in the external quantum efficiency compared to bare silicon, and 25% improvement compared to a random design.
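
    The surrogate idea is to replace each expensive FDTD evaluation with a cheap learned map from (geometry, wavelength) to absorptivity, then average over the spectrum. A minimal sketch, assuming a scikit-learn regressor and placeholder training data standing in for actual FDTD samples:

        # Surrogate-model sketch: learn absorptivity(geometry, wavelength), then
        # average predictions over the irradiance spectrum. Random placeholders
        # mark where FDTD training samples would go.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        X = rng.uniform(size=(2000, 5))   # e.g. 4 layer thicknesses + wavelength (normalized)
        y = rng.uniform(size=2000)        # placeholder FDTD absorptivity values

        surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(X, y)

        def spectrum_averaged_absorption(geometry, wavelengths, weights):
            """Average the surrogate absorptivity over the irradiance spectrum."""
            pts = np.column_stack([np.tile(geometry, (len(wavelengths), 1)), wavelengths])
            return np.average(surrogate.predict(pts), weights=weights)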

  8. Implicit Plasma Kinetic Simulation Using The Jacobian-Free Newton-Krylov Method

    NASA Astrophysics Data System (ADS)

    Taitano, William; Knoll, Dana; Chacon, Luis

    2009-11-01

    The use of fully implicit time integration methods in kinetic simulation is still an area of algorithmic research. A brute-force approach to simultaneously including the field equations and the particle distribution function would result in an intractable linear algebra problem. A number of algorithms have been put forward which rely on an extrapolation in time. They can be thought of as linearly implicit methods or one-step Newton methods. However, issues related to time accuracy of these methods still remain. We are pursuing a route to implicit plasma kinetic simulation which eliminates extrapolation, eliminates phase space from the linear algebra problem, and converges the entire nonlinear system within a time step. We accomplish all this using the Jacobian-Free Newton-Krylov algorithm. The original research along these lines considered particle methods to advance the distribution function [1]. In the current research we are advancing the Vlasov equations on a grid. Results will be presented which highlight algorithmic details for single-species electrostatic problems and coupled ion-electron electrostatic problems. [1] H. J. Kim, L. Chacón, G. Lapenta, "Fully implicit particle in cell algorithm," 47th Annual Meeting of the Division of Plasma Physics, Oct. 24-28, 2005, Denver, CO.
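
    The core JFNK trick is that Krylov solvers such as GMRES need only Jacobian-vector products, which can be approximated by a finite difference of the residual, so the Jacobian is never formed and phase space never enters the linear algebra explicitly. A generic sketch with SciPy (not the authors' implementation):

        # Jacobian-free Newton-Krylov sketch: J v ~ (F(u + eps*v) - F(u)) / eps.
        import numpy as np
        from scipy.sparse.linalg import LinearOperator, gmres

        def jfnk(F, u, newton_tol=1e-8, max_newton=25, eps=1e-7):
            for _ in range(max_newton):
                r = F(u)
                if np.linalg.norm(r) < newton_tol:
                    break
                def jv(v):
                    return (F(u + eps * v) - r) / eps   # directional derivative
                J = LinearOperator((u.size, u.size), matvec=jv)
                du, _ = gmres(J, -r)                    # Krylov solve of J du = -F(u)
                u = u + du
            return u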

  9. Phase transitions between lower and higher level management learning in times of crisis: an experimental study based on synergetics.

    PubMed

    Liening, Andreas; Strunk, Guido; Mittelstadt, Ewald

    2013-10-01

    Much has been written about the differences between single- and double-loop learning, or more generally between lower level and higher level learning. Especially in times of a fundamental crisis, a transition between lower and higher level learning would be an appropriate reaction to a challenge coming entirely out of the dark. However, so far there is no quantitative method to monitor such a transition. Therefore we introduce theory and methods of synergetics and present results from an experimental study based on the simulation of a crisis within a business simulation game. Hypothesized critical fluctuations, a marker of so-called phase transitions, were assessed with permutation entropy. Results show evidence for a phase transition during the crisis, which can be interpreted as a transition between lower and higher level learning.
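
    Permutation entropy (Bandt-Pompe) quantifies the complexity of a time series from the distribution of ordinal patterns in short windows; a rise toward maximal entropy flags the critical fluctuations hypothesized above. A minimal sketch:

        # Permutation entropy: Shannon entropy of ordinal patterns of length `order`.
        import numpy as np
        from collections import Counter
        from math import factorial, log

        def permutation_entropy(x, order=3, delay=1, normalize=True):
            x = np.asarray(x)
            patterns = Counter()
            for i in range(len(x) - (order - 1) * delay):
                window = x[i:i + order * delay:delay]
                patterns[tuple(np.argsort(window))] += 1   # ordinal pattern of window
            total = sum(patterns.values())
            h = -sum((c / total) * log(c / total) for c in patterns.values())
            return h / log(factorial(order)) if normalize else h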

  10. Equilibration of experimentally determined protein structures for molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Walton, Emily B.; Van Vliet, Krystyn J.

    2006-12-01

    Preceding molecular dynamics simulations of biomolecular interactions, the molecule of interest is often equilibrated with respect to an initial configuration. This so-called equilibration stage is required because the input structure is typically not within the equilibrium phase space of the simulation conditions, particularly in systems as complex as proteins, which can lead to artifactual trajectories of protein dynamics. The time at which nonequilibrium effects from the initial configuration are minimized—what we will call the equilibration time—marks the beginning of equilibrium phase-space exploration. Note that the identification of this time does not imply exploration of the entire equilibrium phase space. We have found that current equilibration methodologies contain ambiguities that lead to uncertainty in determining the end of the equilibration stage of the trajectory. This results in equilibration times that are either too long, resulting in wasted computational resources, or too short, resulting in the simulation of molecular trajectories that do not accurately represent the physical system. We outline and demonstrate a protocol for identifying the equilibration time that is based on the physical model of Normal Mode Analysis. We attain the computational efficiency required of large-protein simulations via a stretched exponential approximation that enables an analytically tractable and physically meaningful form of the root-mean-square deviation of atoms comprising the protein. We find that the fitting parameters (which correspond to physical properties of the protein) fluctuate initially but then stabilize for increased simulation time, independently of the simulation duration or sampling frequency. We define the end of the equilibration stage—and thus the equilibration time—as the point in the simulation when these parameters attain constant values. Compared to existing methods, our approach provides the objective identification of the time at which the simulated biomolecule has entered an energetic basin. For the representative protein considered, bovine pancreatic trypsin inhibitor, existing methods indicate a range of 0.2-10 ns of simulation until a local minimum is attained. Our approach identifies a substantially narrower range of 4.5-5.5 ns, which will lead to a much more objective choice of equilibration time.
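
    The protocol amounts to fitting an analytically tractable relaxation form to the running RMSD over growing time windows and declaring equilibration once the fitted parameters stop drifting. A sketch, assuming a common stretched-exponential form as a stand-in for the paper's exact expression:

        # Fit RMSD(t) ~ a * (1 - exp(-(t/tau)**beta)) over growing windows; the
        # equilibration time is where the fitted (a, tau, beta) become constant.
        import numpy as np
        from scipy.optimize import curve_fit

        def stretched_exp(t, a, tau, beta):
            return a * (1.0 - np.exp(-(t / tau) ** beta))

        def fit_parameters(t, rmsd, windows):
            """t, rmsd: NumPy arrays; windows: growing end times, e.g. 1..N ns."""
            params = []
            for w in windows:
                mask = t <= w
                p, _ = curve_fit(stretched_exp, t[mask], rmsd[mask],
                                 p0=(1.0, 1.0, 0.5), maxfev=10000)
                params.append(p)
            return np.array(params)      # rows stop drifting at equilibration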

  11. Using the entire history in the analysis of nested case cohort samples.

    PubMed

    Rivera, C L; Lumley, T

    2016-08-15

    Countermatching designs can provide more efficient estimates than simple matching or case-cohort designs in certain situations, such as when good surrogate variables for an exposure of interest are available. We extend pseudolikelihood estimation for the Cox model under countermatching designs to models where time-varying covariates are considered. We also implement pseudolikelihood with calibrated weights to improve efficiency in nested case-control designs in the presence of time-varying variables. A simulation study is carried out, which considers four different scenarios: a binary time-dependent variable, a continuous time-dependent variable, and each case with interactions included. Simulation results show that pseudolikelihood with calibrated weights under countermatching offers large gains in efficiency compared to case-cohort designs. Pseudolikelihood with calibrated weights yielded more efficient estimators than pseudolikelihood estimators. Additionally, estimators were more efficient under countermatching than under case-cohort for the situations considered. The methods are illustrated using the Colorado Plateau uranium miners cohort. Furthermore, we present a general method to generate survival times with time-varying covariates. Copyright © 2016 John Wiley & Sons, Ltd.
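
    A standard way to generate survival times under a Cox model with a piecewise-constant time-varying covariate is to invert the cumulative hazard interval by interval; the sketch below is a generic illustration of that idea, not the authors' specific method.

        # Draw an event time when hazard(t) = h0 * exp(beta * z(t)) and z(t) is
        # piecewise constant: invert the cumulative hazard over each interval.
        import numpy as np

        def simulate_event_time(h0, beta, cov_times, cov_values, rng):
            """cov_values[k] applies on [cov_times[k], cov_times[k+1])."""
            target = rng.exponential()      # cumulative hazard to accumulate, -log(U)
            acc = 0.0
            for k in range(len(cov_values)):
                width = cov_times[k + 1] - cov_times[k]
                rate = h0 * np.exp(beta * cov_values[k])
                if acc + rate * width >= target:
                    return cov_times[k] + (target - acc) / rate  # event in interval
                acc += rate * width
            return np.inf                   # no event before the last interval ends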

  12. Monte Carlo Simulation of Sudden Death Bearing Testing

    NASA Technical Reports Server (NTRS)

    Vlcek, Brian L.; Hendricks, Robert C.; Zaretsky, Erwin V.

    2003-01-01

    Monte Carlo simulations combined with sudden death testing were used to compare resultant bearing lives to the calculated bearing life and the cumulative test time and calendar time relative to sequential and censored sequential testing. A total of 30 960 virtual 50-mm bore deep-groove ball bearings were evaluated in 33 different sudden death test configurations comprising 36, 72, and 144 bearings each. Variations in both life and Weibull slope were a function of the number of bearings failed, independent of the test method used, and not the total number of bearings tested. Variation in L10 life as a function of the number of bearings failed was similar to variations in life obtained from sequentially failed real bearings and from Monte Carlo (virtual) testing of entire populations. Reductions of up to 40 percent in bearing test time and calendar time can be achieved by testing to failure or the L50 life and terminating all testing when the last of the predetermined bearing failures has occurred. Sudden death testing is not a more efficient method to reduce bearing test time or calendar time when compared to censored sequential testing.
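
    The virtual experiment is easy to reproduce in outline: draw Weibull-distributed lives, split them into sudden death groups, and record only the first failure in each group. A sketch with illustrative parameters:

        # Monte Carlo sketch of sudden death testing with virtual bearings.
        import numpy as np

        def sudden_death_first_failures(n_groups, group_size, shape=1.5, scale=1.0, seed=0):
            rng = np.random.default_rng(seed)
            lives = scale * rng.weibull(shape, size=(n_groups, group_size))
            first = lives.min(axis=1)             # each group stops at its first failure
            test_time = group_size * first.sum()  # all bearings in a group run until then
            return first, test_time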

  13. Numerical integration of detector response functions via Monte Carlo simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelly, Keegan John; O'Donnell, John M.; Gomez, Jaime A.

    Calculations of detector response functions are complicated because they include the intricacies of signal creation from the detector itself as well as a complex interplay between the detector, the particle-emitting target, and the entire experimental environment. As such, these functions are typically only accessible through time-consuming Monte Carlo simulations. Furthermore, the output of thousands of Monte Carlo simulations can be necessary in order to extract a physics result from a single experiment. Here we describe a method to obtain a full description of the detector response function using Monte Carlo simulations. We also show that a response function calculated in this way can be used to create Monte Carlo simulation output spectra a factor of ~1000× faster than running a new Monte Carlo simulation. A detailed discussion of the proper treatment of uncertainties when using this and other similar methods is provided as well. Here, this method is demonstrated and tested using simulated data from the Chi-Nu experiment, which measures prompt fission neutron spectra at the Los Alamos Neutron Science Center.

  14. Numerical integration of detector response functions via Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Kelly, K. J.; O'Donnell, J. M.; Gomez, J. A.; Taddeucci, T. N.; Devlin, M.; Haight, R. C.; White, M. C.; Mosby, S. M.; Neudecker, D.; Buckner, M. Q.; Wu, C. Y.; Lee, H. Y.

    2017-09-01

    Calculations of detector response functions are complicated because they include the intricacies of signal creation from the detector itself as well as a complex interplay between the detector, the particle-emitting target, and the entire experimental environment. As such, these functions are typically only accessible through time-consuming Monte Carlo simulations. Furthermore, the output of thousands of Monte Carlo simulations can be necessary in order to extract a physics result from a single experiment. Here we describe a method to obtain a full description of the detector response function using Monte Carlo simulations. We also show that a response function calculated in this way can be used to create Monte Carlo simulation output spectra a factor of ∼ 1000 × faster than running a new Monte Carlo simulation. A detailed discussion of the proper treatment of uncertainties when using this and other similar methods is provided as well. This method is demonstrated and tested using simulated data from the Chi-Nu experiment, which measures prompt fission neutron spectra at the Los Alamos Neutron Science Center.
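
    The speed-up described in this record comes from precomputing a response matrix from Monte Carlo runs and then folding any source spectrum through it with a matrix-vector product. A minimal sketch of that folding step:

        # Precompute a response matrix from monoenergetic Monte Carlo runs, then
        # predict detector spectra by matrix-vector products instead of new runs.
        import numpy as np

        def build_response_matrix(monoenergetic_runs):
            """Each entry is the detector spectrum from a single-energy MC run."""
            return np.column_stack(monoenergetic_runs)

        def fold(response_matrix, source_spectrum):
            return response_matrix @ source_spectrum   # ~1000x faster than re-simulating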

  15. Numerical integration of detector response functions via Monte Carlo simulations

    DOE PAGES

    Kelly, Keegan John; O'Donnell, John M.; Gomez, Jaime A.; ...

    2017-06-13

    Calculations of detector response functions are complicated because they include the intricacies of signal creation from the detector itself as well as a complex interplay between the detector, the particle-emitting target, and the entire experimental environment. As such, these functions are typically only accessible through time-consuming Monte Carlo simulations. Furthermore, the output of thousands of Monte Carlo simulations can be necessary in order to extract a physics result from a single experiment. Here we describe a method to obtain a full description of the detector response function using Monte Carlo simulations. We also show that a response function calculated in this way can be used to create Monte Carlo simulation output spectra a factor of ~1000× faster than running a new Monte Carlo simulation. A detailed discussion of the proper treatment of uncertainties when using this and other similar methods is provided as well. Here, this method is demonstrated and tested using simulated data from the Chi-Nu experiment, which measures prompt fission neutron spectra at the Los Alamos Neutron Science Center.

  16. Methodology Development of Computationally-Efficient Full Vehicle Simulations for the Entire Blast Event

    DTIC Science & Technology

    2015-08-06

    Philip Kosarek (1), Julien Santini (1), Ravi Thyagarajan (2); (1) Altair Product Design, Inc., Troy, MI; (2) US Army TARDEC, Warren, MI. This is a reprint...

  17. Challenge toward the prediction of typhoon behaviour and downpour

    NASA Astrophysics Data System (ADS)

    Takahashi, K.; Onishi, R.; Baba, Y.; Kida, S.; Matsuda, K.; Goto, K.; Fuchigami, H.

    2013-08-01

    Mechanisms of interaction among phenomena at different scales play important roles in weather and climate forecasting. The Multi-scale Simulator for the Geoenvironment (MSSG), which deals with multi-scale multi-physics phenomena, is a coupled non-hydrostatic atmosphere-ocean model designed to be run efficiently on the Earth Simulator. We present simulation results with a world-leading 1.9 km horizontal resolution for the entire globe, regional heavy-rain simulations with 1 km horizontal resolution, and urban-area simulations with 5 m horizontal/vertical resolution. To gain high performance by exploiting the system capabilities, we build on performance evaluation metrics introduced in previous studies that incorporate the effects of the data-caching mechanism between CPU and memory. With a code optimization guideline based on such metrics, we demonstrate that MSSG can achieve an excellent peak performance ratio of 32.2% on the Earth Simulator, with single-core performance found to be key to reducing time-to-solution.

  18. Validation of a DICE Simulation Against a Discrete Event Simulation Implemented Entirely in Code.

    PubMed

    Möller, Jörgen; Davis, Sarah; Stevenson, Matt; Caro, J Jaime

    2017-10-01

    Modeling is an essential tool for health technology assessment, and various techniques for conceptualizing and implementing such models have been described. Recently, a new method has been proposed, the discretely integrated condition event (DICE) simulation, which enables frequently employed approaches to be specified using a common, simple structure that can be entirely contained and executed within widely available spreadsheet software. To assess if a DICE simulation provides equivalent results to an existing discrete event simulation, a comparison was undertaken. A model of osteoporosis and its management programmed entirely in Visual Basic for Applications and made public by the National Institute for Health and Care Excellence (NICE) Decision Support Unit was downloaded and used to guide construction of its DICE version in Microsoft Excel®. The DICE model was then run using the same inputs and settings, and the results were compared. The DICE version produced results that are nearly identical to the original ones, with differences that would not affect the decision direction of the incremental cost-effectiveness ratios (<1% discrepancy), despite the stochastic nature of the models. The main limitation of the simple DICE version is its slow execution speed. DICE simulation did not alter the results and, thus, should provide a valid way to design and implement decision-analytic models without requiring specialized software or custom programming. Additional efforts need to be made to speed up execution.
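
    For contrast with the spreadsheet DICE structure, the core of a conventional discrete event simulation is just a time-ordered event list; handlers pop the earliest event and may schedule new ones (e.g. a fracture event scheduling a follow-up). A generic sketch, unrelated to the NICE model's actual code:

        # Minimal discrete event simulation core with a heap-ordered event list.
        import heapq, itertools

        def run(initial_events, horizon):
            """initial_events: iterable of (time, handler); handlers return new events."""
            counter = itertools.count()           # tie-breaker for simultaneous events
            heap = [(t, next(counter), h) for t, h in initial_events]
            heapq.heapify(heap)
            while heap:
                t, _, handler = heapq.heappop(heap)
                if t > horizon:
                    break
                for new_t, new_h in handler(t):   # handler may schedule further events
                    heapq.heappush(heap, (new_t, next(counter), new_h))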

  19. Entropic multiple-relaxation-time multirange pseudopotential lattice Boltzmann model for two-phase flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qin, Feifei; Mazloomi Moqaddam, Ali; Kang, Qinjun

    Here, an entropic multiple-relaxation-time lattice Boltzmann approach is coupled to a multirange Shan-Chen pseudopotential model to study the two-phase flow. Compared with previous multiple-relaxation-time multiphase models, this model is stable and accurate for the simulation of a two-phase flow in a much wider range of viscosity and surface tension at a high liquid-vapor density ratio. A stationary droplet surrounded by equilibrium vapor is first simulated to validate this model using the coexistence curve and Laplace's law. Then, two series of droplet impact behavior, on a liquid film and a flat surface, are simulated in comparison with theoretical or experimental results. Droplet impact on a liquid film is simulated for different Reynolds numbers at high Weber numbers. With the increase of the Sommerfeld parameter, onset of splashing is observed and multiple secondary droplets occur. The droplet spreading ratio agrees well with the square root of time law and is found to be independent of Reynolds number. Moreover, shapes of simulated droplets impacting hydrophilic and superhydrophobic flat surfaces show good agreement with experimental observations through the entire dynamic process. The maximum spreading ratio of a droplet impacting the superhydrophobic flat surface is studied for a large range of Weber numbers. Results show that the rescaled maximum spreading ratios are in good agreement with a universal scaling law. This series of simulations demonstrates that the proposed model accurately captures the complex fluid-fluid and fluid-solid interfacial physical processes for a wide range of Reynolds and Weber numbers at high density ratios.

  20. Entropic multiple-relaxation-time multirange pseudopotential lattice Boltzmann model for two-phase flow

    NASA Astrophysics Data System (ADS)

    Qin, Feifei; Mazloomi Moqaddam, Ali; Kang, Qinjun; Derome, Dominique; Carmeliet, Jan

    2018-03-01

    An entropic multiple-relaxation-time lattice Boltzmann approach is coupled to a multirange Shan-Chen pseudopotential model to study the two-phase flow. Compared with previous multiple-relaxation-time multiphase models, this model is stable and accurate for the simulation of a two-phase flow in a much wider range of viscosity and surface tension at a high liquid-vapor density ratio. A stationary droplet surrounded by equilibrium vapor is first simulated to validate this model using the coexistence curve and Laplace's law. Then, two series of droplet impact behavior, on a liquid film and a flat surface, are simulated in comparison with theoretical or experimental results. Droplet impact on a liquid film is simulated for different Reynolds numbers at high Weber numbers. With the increase of the Sommerfeld parameter, onset of splashing is observed and multiple secondary droplets occur. The droplet spreading ratio agrees well with the square root of time law and is found to be independent of Reynolds number. Moreover, shapes of simulated droplets impacting hydrophilic and superhydrophobic flat surfaces show good agreement with experimental observations through the entire dynamic process. The maximum spreading ratio of a droplet impacting the superhydrophobic flat surface is studied for a large range of Weber numbers. Results show that the rescaled maximum spreading ratios are in good agreement with a universal scaling law. This series of simulations demonstrates that the proposed model accurately captures the complex fluid-fluid and fluid-solid interfacial physical processes for a wide range of Reynolds and Weber numbers at high density ratios.
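
    The pseudopotential ingredient of this model can be sketched compactly: each node carries a pseudopotential psi(rho), and the interaction force sums weighted psi values over neighbour shells (the multirange variant adds further shells). A nearest-shell D2Q9 sketch with illustrative parameters; weight conventions vary between papers and any overall scale can be absorbed into G:

        # Shan-Chen pseudopotential force on a D2Q9 lattice (nearest shell only).
        import numpy as np

        E = np.array([(1,0),(0,1),(-1,0),(0,-1),(1,1),(-1,1),(-1,-1),(1,-1)])
        W = np.array([1/9]*4 + [1/36]*4)   # lattice weights (convention-dependent)
        G, RHO0 = -5.0, 1.0                # interaction strength, reference density

        def shan_chen_force(rho):
            psi = RHO0 * (1.0 - np.exp(-rho / RHO0))   # pseudopotential
            fx = np.zeros_like(rho)
            fy = np.zeros_like(rho)
            for (ex, ey), w in zip(E, W):
                shifted = np.roll(np.roll(psi, -ex, axis=0), -ey, axis=1)  # psi at x+e
                fx += w * ex * shifted
                fy += w * ey * shifted
            return -G * psi * fx, -G * psi * fy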

  1. Entropic multiple-relaxation-time multirange pseudopotential lattice Boltzmann model for two-phase flow

    DOE PAGES

    Qin, Feifei; Mazloomi Moqaddam, Ali; Kang, Qinjun; ...

    2018-03-22

    Here, an entropic multiple-relaxation-time lattice Boltzmann approach is coupled to a multirange Shan-Chen pseudopotential model to study the two-phase flow. Compared with previous multiple-relaxation-time multiphase models, this model is stable and accurate for the simulation of a two-phase flow in a much wider range of viscosity and surface tension at a high liquid-vapor density ratio. A stationary droplet surrounded by equilibrium vapor is first simulated to validate this model using the coexistence curve and Laplace's law. Then, two series of droplet impact behavior, on a liquid film and a flat surface, are simulated in comparison with theoretical or experimental results. Droplet impact on a liquid film is simulated for different Reynolds numbers at high Weber numbers. With the increase of the Sommerfeld parameter, onset of splashing is observed and multiple secondary droplets occur. The droplet spreading ratio agrees well with the square root of time law and is found to be independent of Reynolds number. Moreover, shapes of simulated droplets impacting hydrophilic and superhydrophobic flat surfaces show good agreement with experimental observations through the entire dynamic process. The maximum spreading ratio of a droplet impacting the superhydrophobic flat surface is studied for a large range of Weber numbers. Results show that the rescaled maximum spreading ratios are in good agreement with a universal scaling law. This series of simulations demonstrates that the proposed model accurately captures the complex fluid-fluid and fluid-solid interfacial physical processes for a wide range of Reynolds and Weber numbers at high density ratios.

  2. Testing new approaches to carbonate system simulation at the reef scale: the ReefSam model first results, application to a question in reef morphology and future challenges.

    NASA Astrophysics Data System (ADS)

    Barrett, Samuel; Webster, Jody

    2016-04-01

    Numerical simulation of the stratigraphy and sedimentology of carbonate systems (carbonate forward stratigraphic modelling, CFSM) provides significant insight into the understanding of both the physical nature of these systems and the processes which control their development. It also provides the opportunity to quantitatively test conceptual models concerning stratigraphy, sedimentology or geomorphology, and allows us to extend our knowledge either spatially (e.g. between boreholes) or temporally (forwards or backwards in time). The latter is especially important in determining the likely future development of carbonate systems, particularly regarding the effects of climate change. This application, by its nature, requires successful simulation of carbonate systems on short time scales and at high spatial resolutions. Previous modelling attempts have typically focused on the scales of kilometers and kilo-years or greater (the scale of entire carbonate platforms), rather than at the scale of centuries or decades, and tens to hundreds of meters (the scale of individual reefs). Previous work has identified limitations in common approaches to simulating important reef processes. We present a new CFSM, the Reef Sedimentary Accretion Model (ReefSAM), which is designed to test new approaches to simulating reef-scale processes, with the aim of being able to better simulate the past and future development of coral reefs. Four major features have been tested: 1. A simulation of wave-based hydrodynamic energy with multiple simultaneous directions and intensities including wave refraction, interaction, and lateral sheltering. 2. Sediment transport simulated as sediment being moved from cell to cell in an iterative fashion until complete deposition. 3. A coral growth model including consideration of local wave energy and composition of the basement substrate (as well as depth). 4. A highly quantitative model testing approach where dozens of output parameters describing the reef morphology and development are compared with observational data. Despite being a test-bed and work in progress, ReefSAM was able to simulate the Holocene development of One Tree Reef in the Southern Great Barrier Reef (Australia) and was able to improve upon previous modelling attempts in terms of both quantitative measures and qualitative outputs, such as the presence of previously un-simulated reef features. Given the success of the model in simulating the Holocene development of OTR, we used it to quantitatively explore the effect of basement substrate depth and morphology on reef maturity/lagoonal filling (as discussed by Purdy and Gischler 2005). Initial results show a number of non-linear relationships between basement substrate depth, lagoonal filling and volume of sand produced on the reef rims and deposited in the lagoon. Lastly, further testing of the model has revealed new challenges which are likely to manifest in any attempt at reef-scale simulation. Subtly different sets of energy direction and magnitude input parameters (different in each time step but with identical probability distributions across the entire model run) resulted in a wide range of quantitative model outputs. Time step length is a likely contributing factor and the results of further testing to address this challenge will be presented.

  3. Consistency Between Convection Allowing Model Output and Passive Microwave Satellite Observations

    NASA Astrophysics Data System (ADS)

    Bytheway, J. L.; Kummerow, C. D.

    2018-01-01

    Observations from the Global Precipitation Measurement (GPM) core satellite were used along with precipitation forecasts from the High Resolution Rapid Refresh (HRRR) model to assess and interpret differences between observed and modeled storms. Using a feature-based approach, precipitating objects were identified in both the National Centers for Environmental Prediction Stage IV multisensor precipitation product and HRRR forecast at lead times of 1, 2, and 3 h at valid times corresponding to GPM overpasses. Precipitating objects were selected for further study if (a) the observed feature occurred entirely within the swath of the GPM Microwave Imager (GMI) and (b) the HRRR model predicted it at all three forecast lead times. Output from the HRRR model was used to simulate microwave brightness temperatures (Tbs), which were compared to those observed by the GMI. Simulated Tbs were found to have biases at both the warm and cold ends of the distribution, corresponding to the stratiform/anvil and convective areas of the storms, respectively. Several experiments altered both the simulation microphysics and hydrometeor classification in order to evaluate potential shortcomings in the model's representation of precipitating clouds. In general, inconsistencies between observed and simulated brightness temperatures were most improved when transferring snow water content to supercooled liquid hydrometeor classes.

  4. BEM-based simulation of lung respiratory deformation for CT-guided biopsy.

    PubMed

    Chen, Dong; Chen, Weisheng; Huang, Lipeng; Feng, Xuegang; Peters, Terry; Gu, Lixu

    2017-09-01

    Accurate and real-time prediction of lung and lung tumor deformation during respiration is an important consideration when performing a peripheral biopsy procedure. However, most existing work has focused on offline whole-lung simulation using 4D image data, which is not applicable in real-time image-guided biopsy with limited image resources. In this paper, we propose a patient-specific biomechanical model based on the boundary element method (BEM), computed from CT images, to estimate the respiratory motion of the local target lesion region, vessel tree and lung surface for real-time biopsy guidance. This approach pre-computes various BEM parameters to meet the requirement for real-time lung motion simulation. The boundary condition at the end-inspiratory phase is obtained using a nonparametric discrete registration with convex optimization, and the simulation of the internal tissue is achieved by applying a tetrahedron-based interpolation method that depends on expert-determined feature points on the vessel tree model. A reference needle is tracked to update the simulated lung motion during biopsy guidance. We evaluate the model by applying it to respiratory motion estimation for ten patients. The average symmetric surface distance (ASSD) and the mean target registration error (TRE) are employed to evaluate the proposed model. Results reveal that it is possible to predict the lung motion with an ASSD of [Formula: see text] mm and a mean TRE of [Formula: see text] mm at most over the entire respiratory cycle. In the CT-/electromagnetic-guided biopsy experiment, the whole process was assisted by our BEM model and the final puncture errors in two studies were 3.1 and 2.0 mm, respectively. The experimental results reveal that both the simulation accuracy and the real-time performance meet the demands of clinical biopsy guidance.
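
    The tetrahedron-based interpolation step mentioned above reduces to barycentric weighting: the displacement at a point inside a tetrahedron is the blend of the four vertex displacements. A minimal sketch of that geometric kernel (not the authors' code):

        # Barycentric interpolation of a displacement field inside a tetrahedron.
        import numpy as np

        def barycentric_weights(p, tet):
            """tet: (4,3) vertex coordinates; returns 4 weights summing to 1."""
            T = np.column_stack([tet[1] - tet[0], tet[2] - tet[0], tet[3] - tet[0]])
            local = np.linalg.solve(T, p - tet[0])
            return np.array([1.0 - local.sum(), *local])

        def interpolate_displacement(p, tet, vertex_disp):
            w = barycentric_weights(p, tet)   # vertex_disp: (4,3) displacements
            return w @ vertex_disp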

  5. Determination of Vertical Borehole and Geological Formation Properties using the Crossed Contour Method.

    PubMed

    Leyde, Brian P; Klein, Sanford A; Nellis, Gregory F; Skye, Harrison

    2017-03-01

    This paper presents a new method called the Crossed Contour Method for determining the effective properties (borehole radius and ground thermal conductivity) of a vertical ground-coupled heat exchanger. The borehole radius is used as a proxy for the overall borehole thermal resistance. The method has been applied to both simulated and experimental borehole Thermal Response Test (TRT) data using the Duct Storage vertical ground heat exchanger model implemented in the TRansient SYstems Simulation software (TRNSYS). The Crossed Contour Method generates a parametric grid of simulated TRT data for different combinations of borehole radius and ground thermal conductivity in a series of time windows. The error between the average of the simulated and experimental bore field inlet and outlet temperatures is calculated for each set of borehole properties within each time window. Using these data, contours of the minimum error are constructed in the parameter space of borehole radius and ground thermal conductivity. When all of the minimum error contours for each time window are superimposed, the point where the contours cross (intersect) identifies the effective borehole properties for the model that most closely represents the experimental data in every time window and thus over the entire length of the experimental data set. The computed borehole properties are compared with results from existing model inversion methods including the Ground Property Measurement (GPM) software developed by Oak Ridge National Laboratory, and the Line Source Model.
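
    In code, the idea reduces to a parametric error grid per time window; the "crossing" of the minimum-error contours can be approximated as the grid point whose worst-window error is smallest, since the true parameters should lie near the minimum locus of every window. A crude numerical stand-in for the graphical method:

        # Crossed-contour proxy: given error[w, i, j] over a (radius, conductivity)
        # grid for each time window w, pick the point every window agrees on.
        import numpy as np

        def crossing_point(error):
            worst = error.max(axis=0)          # worst error across all time windows
            i, j = np.unravel_index(np.argmin(worst), worst.shape)
            return i, j                        # indices into the parameter grid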

  6. The EMIR experience in the use of software control simulators to speed up the time to telescope

    NASA Astrophysics Data System (ADS)

    Lopez Ramos, Pablo; López-Ruiz, J. C.; Moreno Arce, Heidy; Rosich, Josefina; Perez Menor, José Maria

    2012-09-01

    One of the main problems facing development teams working on instrument control systems is the need to access mechanisms which are not available until well into the integration phase. The need to work with real hardware creates additional problems, among others: certain faults cannot be tested due to the possibility of hardware damage, taking the system to the limit may shorten its operational lifespan, and the full system may not be available during some periods due to maintenance and/or testing of individual components. These problems can be treated with the use of simulators and by applying software/hardware standards. Since information on the construction and performance of electro-mechanical systems is available at relatively early stages of the project, simulators are developed in advance (before the existence of the mechanism) or, if conventions and standards have been correctly followed, a previously developed simulator might be used. This article describes our experience in building software simulators and the main advantages we have identified: the control software can be developed even in the absence of real hardware, critical tests can be prepared using the simulated systems, system behavior can be tested in hardware-failure situations that would put the real system at risk, and in-house integration of the entire instrument is sped up. The use of simulators allows us to reduce development, testing and integration time.

  7. Integration of Tuyere, Raceway and Shaft Models for Predicting Blast Furnace Process

    NASA Astrophysics Data System (ADS)

    Fu, Dong; Tang, Guangwu; Zhao, Yongfu; D'Alessio, John; Zhou, Chenn Q.

    2018-06-01

    A novel modeling strategy is presented for simulating the blast furnace ironmaking process. The physical and chemical phenomena involved take place across a wide range of length and time scales, and three models are developed to simulate different regions of the blast furnace, i.e., the tuyere model, the raceway model and the shaft model. This paper focuses on the integration of the three models to predict the entire blast furnace process. Output-to-input mapping between models and an iterative scheme are developed to establish communication between the models. The effects of tuyere operation and burden distribution on blast furnace fuel efficiency are investigated numerically. The integration of different models provides a way to realistically simulate the blast furnace by improving the modeling resolution of local phenomena and minimizing the model assumptions.
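
    The integration can be pictured as a fixed-point iteration in which the region models exchange mapped boundary data until the coupled state stops changing. A schematic sketch; the model interfaces are hypothetical placeholders, not the paper's software:

        # Fixed-point coupling of three region models (hypothetical interfaces).
        def couple(tuyere, raceway, shaft, tol=1e-3, max_iter=50):
            state = shaft.initial_guess()
            for _ in range(max_iter):
                gas = tuyere.solve()                      # hot-blast conditions
                raceway_out = raceway.solve(gas, state)   # combustion zone outputs
                new_state = shaft.solve(raceway_out)      # burden descent / reduction
                if shaft.difference(new_state, state) < tol:
                    break                                 # models are self-consistent
                state = new_state
            return state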

  8. Parsec-Scale Obscuring Accretion Disk with Large-Scale Magnetic Field in AGNs

    NASA Technical Reports Server (NTRS)

    Dorodnitsyn, A.; Kallman, T.

    2017-01-01

    A magnetic field dragged from the galactic disk, along with inflowing gas, can provide vertical support to the geometrically and optically thick parsec (pc)-scale torus in active galactic nuclei (AGNs). Using the Soloviev solution initially developed for Tokamaks, we derive an analytical model for a rotating torus that is supported and confined by a magnetic field. We further perform three-dimensional magneto-hydrodynamic simulations of X-ray irradiated, pc-scale, magnetized tori. We follow the time evolution and compare models that adopt initial conditions derived from our analytic model with simulations in which the initial magnetic flux is entirely contained within the gas torus. Numerical simulations demonstrate that the initial conditions based on the analytic solution produce a longer-lived torus that produces obscuration that is generally consistent with observed constraints.

  9. Parsec-scale Obscuring Accretion Disk with Large-scale Magnetic Field in AGNs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dorodnitsyn, A.; Kallman, T.

    A magnetic field dragged from the galactic disk, along with inflowing gas, can provide vertical support to the geometrically and optically thick pc-scale torus in AGNs. Using the Soloviev solution initially developed for Tokamaks, we derive an analytical model for a rotating torus that is supported and confined by a magnetic field. We further perform three-dimensional magneto-hydrodynamic simulations of X-ray irradiated, pc-scale, magnetized tori. We follow the time evolution and compare models that adopt initial conditions derived from our analytic model with simulations in which the initial magnetic flux is entirely contained within the gas torus. Numerical simulations demonstrate that the initial conditions based on the analytic solution produce a longer-lived torus that produces obscuration that is generally consistent with observed constraints.

  10. System Engineering Infrastructure Evolution Galileo IOV and the Steps Beyond

    NASA Astrophysics Data System (ADS)

    Eickhoff, J.; Herpel, H.-J.; Steinle, T.; Birn, R.; Steiner, W.-D.; Eisenmann, H.; Ludwig, T.

    2009-05-01

    The trend toward increasingly constrained financial budgets in satellite engineering requires continuous optimization of the S/C system engineering processes and infrastructure. In recent years Astrium has built up a system simulation infrastructure - the "Model-based Development & Verification Environment" (MDVE) - which is now well known across Europe and is established as Astrium's standard approach for ESA and DLR projects, and now even for the EU/ESA project Galileo IOV. The key feature of the MDVE / FVE approach is to provide full S/C simulation (with full-featured OBC simulation) already in early phases, so that OBSW code tests can start on a simulated S/C, with hardware later added in the loop step by step up to an entire "Engineering Functional Model" (EFM) or "FlatSat". The subsequent enhancements to this simulator infrastructure w.r.t. spacecraft design data handling are reported in the following sections.

  11. Catalytic Ignition and Upstream Reaction Propagation in Monolith Reactors

    NASA Technical Reports Server (NTRS)

    Struk, Peter M.; Dietrich, Daniel L.; Miller, Fletcher J.; T'ien, James S.

    2007-01-01

    Using numerical simulations, this work demonstrates a concept called back-end ignition for lighting-off and pre-heating a catalytic monolith in a power generation system. In this concept, a downstream heat source (e.g. a flame) or resistive heating in the downstream portion of the monolith initiates a localized catalytic reaction which subsequently propagates upstream and heats the entire monolith. The simulations used a transient numerical model of a single catalytic channel which characterizes the behavior of the entire monolith. The model treats both the gas and solid phases and includes detailed homogeneous and heterogeneous reactions. An important parameter in the model for back-end ignition is upstream heat conduction along the solid. The simulations used both dry and wet CO chemistry as a model fuel for the proof-of-concept calculations; the presence of water vapor can trigger homogenous reactions, provided that gas-phase temperatures are adequately high and there is sufficient fuel remaining after surface reactions. With sufficiently high inlet equivalence ratio, back-end ignition occurs using the thermophysical properties of both a ceramic and metal monolith (coated with platinum in both cases), with the heat-up times significantly faster for the metal monolith. For lower equivalence ratios, back-end ignition occurs without upstream propagation. Once light-off and propagation occur, the inlet equivalence ratio could be reduced significantly while still maintaining an ignited monolith as demonstrated by calculations using complete monolith heating.

  12. Understanding the ignition mechanism of high-pressure spray flames

    DOE PAGES

    Dahms, Rainer N.; Paczko, Günter A.; Skeen, Scott A.; ...

    2016-10-25

    A conceptual model for turbulent ignition in high-pressure spray flames is presented. The model is motivated by first-principles simulations and optical diagnostics applied to the Sandia n-dodecane experiment. The Lagrangian flamelet equations are combined with full LLNL kinetics (2755 species; 11,173 reactions) to resolve all time and length scales and chemical pathways of the ignition process at engine-relevant pressures and turbulence intensities unattainable using classic DNS. The first-principles value of the flamelet equations is established by a novel chemical explosive mode-diffusion time scale analysis of the fully-coupled chemical and turbulent time scales. Contrary to conventional wisdom, this analysis reveals that the high Damköhler number limit, a key requirement for the validity of the flamelet derivation from the reactive Navier–Stokes equations, applies during the entire ignition process. Corroborating Rayleigh-scattering and formaldehyde PLIF measurements, with simultaneous schlieren imaging of mixing and combustion, are presented. Our combined analysis establishes a characteristic temporal evolution of the ignition process. First, a localized first-stage ignition event consistently occurs in the highest-temperature mixture regions. This initiates, owing to the intense scalar dissipation, a turbulent cool flame wave propagating from this ignition spot through the entire flow field. This wave significantly decreases the ignition delay of lower temperature mixture regions in comparison to their homogeneous reference. This explains the experimentally observed formaldehyde formation across the entire spray head prior to high-temperature ignition, which consistently occurs first in a broad range of rich mixture regions. There, the combination of first-stage ignition delay, shortened by the cool flame wave, and the subsequent delay until second-stage ignition becomes minimal. A turbulent flame subsequently propagates rapidly through the entire mixture over time scales consistent with experimental observations. As a result, we demonstrate that the neglect of turbulence-chemistry interactions fundamentally fails to capture the key features of this ignition process.
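
    For reference, unsteady Lagrangian flamelet equations of the kind invoked here take the following standard form in mixture-fraction space (Peters-type notation, with scalar dissipation rate chi; the authors' exact closure terms may differ):

        \rho \frac{\partial Y_k}{\partial t}
          = \rho \frac{\chi(Z,t)}{2} \frac{\partial^2 Y_k}{\partial Z^2} + \dot{\omega}_k ,
        \qquad
        \rho \frac{\partial T}{\partial t}
          = \rho \frac{\chi(Z,t)}{2} \frac{\partial^2 T}{\partial Z^2}
          - \frac{1}{c_p} \sum_k h_k \, \dot{\omega}_k .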

  13. Tropospheric ozone in the western Pacific Rim: Analysis of satellite and surface-based observations along with comprehensive 3-D model simulations

    NASA Technical Reports Server (NTRS)

    Young, Sun-Woo; Carmichael, Gregory R.

    1994-01-01

    Tropospheric ozone production and transport in mid-latitude eastern Asia is studied. Analysis of surface-based ozone measurements in Japan and satellite-based tropospheric column measurements over the entire western Pacific Rim is combined with results from three-dimensional model simulations to investigate the diurnal, seasonal and long-term variations of ozone in this region. Surface ozone measurements from Japan show a distinct seasonal variation with a spring peak and summer minimum. Satellite studies of the entire tropospheric column of ozone show high concentrations in both the spring and summer seasons. Preliminary model simulation studies show good agreement with observed values.

  14. Introduction of hypermatrix and operator notation into a discrete mathematics simulation model of malignant tumour response to therapeutic schemes in vivo. Some operator properties.

    PubMed

    Stamatakos, Georgios S; Dionysiou, Dimitra D

    2009-10-21

    The tremendous rate of accumulation of experimental and clinical knowledge pertaining to cancer dictates the development of a theoretical framework for the meaningful integration of such knowledge at all levels of biocomplexity. In this context our research group has developed and partly validated a number of spatiotemporal simulation models of in vivo tumour growth and in particular tumour response to several therapeutic schemes. Most of the modeling modules have been based on discrete mathematics and therefore have been formulated in terms of rather complex algorithms (e.g. in pseudocode and actual computer code). However, such lengthy algorithmic descriptions, although sufficient from the mathematical point of view, may render it difficult for an interested reader to readily identify the sequence of the very basic simulation operations that lie at the heart of the entire model. In order both to alleviate this problem and to provide a bridge to symbolic mathematics, we propose the introduction of the notion of a hypermatrix in conjunction with that of a discrete operator into the already developed models. Using a radiotherapy response simulation example we demonstrate how the entire model can be considered as the sequential application of a number of discrete operators to a hypermatrix corresponding to the dynamics of the anatomic area of interest. Subsequently, we investigate the operators' commutativity and outline the "summarize and jump" strategy, which aims to address multilevel biological problems such as cancer efficiently and realistically. In order to clarify the actual effect of the composite discrete operator we present further simulation results which are in agreement with the outcome of the clinical study RTOG 83-02, thus strengthening the reliability of the model developed.
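
    The formalism is easy to mirror computationally: the hypermatrix is a multidimensional state array and each discrete operator is a function acting on it, so a simulation step is an operator composition whose commutativity can be probed numerically. A generic sketch (the operators here are placeholders, not the published model's):

        # Hypermatrix-and-operator sketch: compose operators and test commutativity.
        import numpy as np

        def apply_sequence(state, operators):
            for op in operators:              # e.g. [cell_cycle_step, irradiate, ...]
                state = op(state)
            return state

        def commute(op_a, op_b, state, tol=1e-12):
            ab = op_b(op_a(state.copy()))
            ba = op_a(op_b(state.copy()))
            return np.max(np.abs(ab - ba)) < tol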

  15. Developing Subdomain Allocation Algorithms Based on Spatial and Communicational Constraints to Accelerate Dust Storm Simulation

    PubMed Central

    Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan

    2016-01-01

    Dust storms have serious, disastrous impacts on the environment, human health, and assets. The development and application of dust storm models have contributed significantly to better understanding and predicting the distribution, intensity and structure of dust storms. However, dust storm simulation is a data- and computing-intensive process. To improve computing performance, high performance computing has been widely adopted by dividing the entire study area into multiple subdomains and allocating each subdomain to different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that may impact the efficiency of the parallel process. An allocation algorithm is expected to consider the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms to optimize the allocation by considering the spatial and communicational constraints: 1) an Integer Linear Programming (ILP) based algorithm from a combinatorial optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm as the demonstration algorithm for the experiment of dust storm simulation with the non-hydrostatic mesoscale model (NMM-dust) and compare the performance with the default sequential MPI allocation. The results demonstrate that the K&K method significantly improves the simulation performance with better subdomain allocation. This method can also be adopted for other relevant atmospheric and numerical modeling. PMID:27044039
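
    A toy version of the spatially constrained allocation step can be written with k-means alone: cluster the grid cells into compact subdomains, one per node, and measure the resulting load imbalance (the paper's K&K method then refines such a partition with Kernighan-Lin swaps). Sketch:

        # K-means subdomain allocation sketch with a load-imbalance metric.
        import numpy as np
        from sklearn.cluster import KMeans

        def allocate(cell_xy, cell_cost, n_nodes, seed=0):
            labels = KMeans(n_clusters=n_nodes, random_state=seed,
                            n_init=10).fit_predict(cell_xy)
            loads = np.bincount(labels, weights=cell_cost, minlength=n_nodes)
            imbalance = loads.max() / loads.mean()   # 1.0 means perfect balance
            return labels, imbalance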

  16. A Three Dimensional Parallel Time Accurate Turbopump Simulation Procedure Using Overset Grid Systems

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Chan, William; Kwak, Dochan

    2001-01-01

    The objective of the current effort is to provide a computational framework for design and analysis of the entire fuel supply system of a liquid rocket engine, including high-fidelity unsteady turbopump flow analysis. This capability is needed to support the design of pump sub-systems for advanced space transportation vehicles that are likely to involve liquid propulsion systems. To date, computational tools for design/analysis of turbopump flows have been based on relatively lower-fidelity methods. An unsteady, three-dimensional viscous flow analysis tool involving stationary and rotational components for the entire turbopump assembly has not been available for real-world engineering applications. The present effort provides developers with information such as transient flow phenomena at start-up and non-uniform inflows, and will eventually inform analysis of system vibration and structures. In this paper, the progress toward the capability of complete simulation of the turbo-pump for a liquid rocket engine is reported. The Space Shuttle Main Engine (SSME) turbo-pump is used as a test case for evaluation of the hybrid MPI/OpenMP and MLP versions of the INS3D code. CAD-to-solution auto-scripting capability is being developed for turbopump applications. The relative motion of the grid systems for the rotor-stator interaction was obtained using overset grid techniques. Unsteady computations for the SSME turbo-pump, which contains 114 zones with 34.5 million grid points, are carried out on Origin 3000 systems at NASA Ames Research Center. Results from these time-accurate simulations with moving boundary capability are presented along with the performance of the parallel versions of the code.

  17. Persistence of initial conditions in continental scale air quality simulations

    NASA Astrophysics Data System (ADS)

    Hogrefe, Christian; Roselle, Shawn J.; Bash, Jesse O.

    2017-07-01

    This study investigates the effect of initial conditions (IC) for pollutant concentrations in the atmosphere and soil on simulated air quality for two continental-scale Community Multiscale Air Quality (CMAQ) model applications, one performed for springtime and the second for summertime. Results show that a spin-up period of ten days, commonly used in regional-scale applications, may not be sufficient to reduce the effects of initial conditions to less than 1% of seasonally-averaged surface ozone concentrations everywhere, while 20 days were found to be sufficient for the entire domain for the spring case and almost the entire domain for the summer case. For the summer case, differences were found to persist longer aloft due to circulation of air masses, and even a spin-up period of 30 days was not sufficient to reduce the effects of ICs to less than 1% of seasonally-averaged layer-34 ozone concentrations over the southwestern portion of the modeling domain. Analysis of the effect of soil initial conditions for the CMAQ bidirectional NH3 exchange model shows that during springtime they can have an important effect on simulated inorganic aerosol concentrations for time periods of one month or longer. The effects are less pronounced during other seasons. The results, while specific to the modeling domain and time periods simulated here, suggest that modeling protocols need to be scrutinized for a given application and that it cannot be assumed that commonly-used spin-up periods are sufficient to reduce the effects of initial conditions on model results to an acceptable level. What constitutes an acceptable level of difference cannot be generalized and will depend on the particular application, time period and species of interest. Moreover, as the application of air quality models is expanded to cover larger geographical domains, and as these models are increasingly coupled with other modeling systems to better represent air-surface-water exchanges, the effects of model initialization in such applications need to be studied in future work.
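
    A minimal sketch of the spin-up diagnostic implied above (synthetic arrays stand in for two CMAQ runs started from different initial conditions): track the worst-case relative difference between the two runs day by day and report when it first drops below 1%.

```python
import numpy as np

def ic_influence(run_a, run_b):
    """Worst-case relative difference per day; runs: (days, ny, nx)."""
    denom = 0.5 * (np.abs(run_a) + np.abs(run_b)) + 1e-12
    rel = np.abs(run_a - run_b) / denom
    return rel.reshape(rel.shape[0], -1).max(axis=1)

days = np.arange(40)
base = 40.0 + np.random.rand(40, 50, 60)                 # "run A" ozone, ppb
pert = base + 5.0 * np.exp(-days / 8.0)[:, None, None]   # "run B": decaying IC signal
influence = ic_influence(base, pert)
print("first day below 1%:", int(np.argmax(influence < 0.01)))
```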

  18. Extended-Range High-Resolution Dynamical Downscaling over a Continental-Scale Domain

    NASA Astrophysics Data System (ADS)

    Husain, S. Z.; Separovic, L.; Yu, W.; Fernig, D.

    2014-12-01

    High-resolution mesoscale simulations, when applied for downscaling meteorological fields over large spatial domains and for extended time periods, can provide valuable information for many practical applications, including the weather-dependent renewable energy industry. In the present study, a strategy has been proposed to dynamically downscale coarse-resolution meteorological fields from Environment Canada's regional analyses for a period of multiple years over the entire Canadian territory. The study demonstrates that a continuous mesoscale simulation over the entire domain is the most suitable approach in this regard. Large-scale deviations in the different meteorological fields pose the biggest challenge for extended-range simulations over continental-scale domains, and enforcing the lateral boundary conditions is not sufficient to restrict such deviations. A scheme has therefore been developed to spectrally nudge the simulated high-resolution meteorological fields at the different model vertical levels towards those embedded in the coarse-resolution driving fields derived from the regional analyses. A series of experiments was carried out to determine the optimal nudging strategy, including the appropriate nudging length scales, nudging vertical profile and temporal relaxation. A forcing strategy based on grid nudging of the different surface fields, including surface temperature, soil moisture, and snow conditions, towards their expected values obtained from a high-resolution offline surface scheme was also devised to limit any considerable deviation in the evolving surface fields due to extended-range temporal integration. The study shows that ensuring large-scale atmospheric similarity helps to deliver near-surface statistical scores for temperature, dew point temperature and horizontal wind speed that are better than or comparable to those of the operational regional forecasts issued by Environment Canada. Furthermore, the meteorological fields resulting from the proposed downscaling strategy have significantly improved spatiotemporal variance compared to those from the operational forecasts, and time series generated from the downscaled fields do not suffer from discontinuities due to switching between consecutive forecasts.
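
    The following one-dimensional sketch illustrates the general form of spectral nudging (an assumed textbook formulation, not Environment Canada's code): only the large-scale Fourier components of the model field are relaxed toward the driving field, leaving the mesoscale detail free to evolve.

```python
import numpy as np

def spectral_nudge(model, driving, dx, cutoff_km, dt, tau):
    """Nudge `model` toward `driving` only at wavelengths > cutoff_km."""
    k = np.fft.rfftfreq(model.size, d=dx)          # cycles per km
    large_scale = k < 1.0 / cutoff_km              # keep long wavelengths
    diff = np.fft.rfft(driving - model)
    diff[~large_scale] = 0.0                       # nudge large scales only
    return model + (dt / tau) * np.fft.irfft(diff, n=model.size)

x = np.linspace(0, 5000, 512)                      # 5000 km domain
driving = np.sin(2 * np.pi * x / 2500)             # large-scale analysis
model = driving + 0.3 * np.sin(2 * np.pi * x / 50) # plus mesoscale detail
nudged = spectral_nudge(model, driving, dx=x[1] - x[0],
                        cutoff_km=1000, dt=60.0, tau=3600.0)
# The 50 km wave is untouched; only the 2500 km component is relaxed.
```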

  19. DEVELOPMENT AND ANALYSIS OF AIR QUALITY MODELING SIMULATIONS FOR HAZARDOUS AIR POLLUTANTS

    EPA Science Inventory

    The concentrations of five hazardous air pollutants were simulated using the Community Multi Scale Air Quality (CMAQ) modeling system. Annual simulations were performed over the continental United States for the entire year of 2001 to support human exposure estimates. Results a...

  20. Modeling Reactive Transport of Strontium-90 in Heterogeneous, Variably Saturated Subsurface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li Wang; Joan Q. Wu; Laurence C. Hull

    2010-08-01

    Sodium-bearing waste (SBW) containing a high concentration of 90Sr was accidentally released to the vadose zone at the Idaho Nuclear Technology and Engineering Center, Idaho National Laboratory, Idaho Falls, ID, in 1972. To investigate the transport and fate of the 90Sr through this 137-m-thick, heterogeneous, variably saturated subsurface, we conducted two-dimensional numerical modeling using TOUGHREACT under different assumed scenarios (low permeability of an entire interbed or just its surface) for the formation of perched water, whose presence reflects the unique characteristics of the geologic materials and stratification at the study site. The results showed that different mechanisms could lead to different flow geometries. The assumption of low permeability for the entire interbed led to the largest saturated zone area and the longest water travel time (55 vs. 43 or 44 yr in other scenarios) from the SBW leakage to the groundwater table. Simulated water travel time from different locations on the land surface to the groundwater aquifer varied from <30 to >80 yr. The results also indicated that different mechanisms may lead to differences in the peak and travel time of a small mobile fraction of Sr. The effective distribution coefficient and retardation factor for Sr2+ would change by more than an order of magnitude for the same material during the 200-yr simulation period because of large changes in the concentrations of Sr2+ and competing ions. Understanding the migration rate of the mobile Sr2+ is necessary for designing long-term monitoring programs to detect it.
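
    The order-of-magnitude sensitivity of the retardation factor mentioned above follows from the standard textbook relation R = 1 + (ρb/θ)Kd; the numbers below are illustrative, not site-specific values.

```python
# Back-of-envelope check of the retardation concept (textbook relation).
rho_b = 1.6    # bulk density, g/cm^3 (illustrative)
theta = 0.25   # volumetric water content (illustrative)
for kd in (5.0, 50.0):          # distribution coefficient, mL/g
    R = 1.0 + (rho_b / theta) * kd
    print(f"Kd = {kd:5.1f} mL/g  ->  R = {R:7.1f}")
# An order-of-magnitude change in Kd changes R (and hence the Sr2+
# travel time) by roughly the same factor, consistent with the text.
```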

  1. Damage-Based Time-Dependent Modeling of Paraglacial to Postglacial Progressive Failure of Large Rock Slopes

    NASA Astrophysics Data System (ADS)

    Riva, Federico; Agliardi, Federico; Amitrano, David; Crosta, Giovanni B.

    2018-01-01

    Large alpine rock slopes undergo long-term evolution in paraglacial to postglacial environments. Rock mass weakening and increased permeability associated with the progressive failure of deglaciated slopes promote the development of potentially catastrophic rockslides. We captured the entire life cycle of alpine slopes in one damage-based, time-dependent 2-D model of brittle creep, including deglaciation, damage-dependent fluid occurrence, and rock mass property upscaling. We applied the model to the Spriana rock slope (Central Alps), affected by long-term instability after the Last Glacial Maximum and representing an active threat. We simulated the evolution of the slope from glaciated conditions to the present day and calibrated the model using site investigation data and available temporal constraints. The model tracks the entire progressive failure path of the slope from deglaciation to rockslide development, without a priori assumptions on shear zone geometry and hydraulic conditions. Complete rockslide differentiation occurs through the transition from dilatant damage to a compacting basal shear zone, accounting for observed hydraulic barrier effects and perched aquifer formation. Our model investigates the mechanical role of deglaciation and damage-controlled fluid distribution in the development of alpine rockslides. The absolute simulated timing of rock slope instability development supports a very long "paraglacial" period of subcritical rock mass damage. After initial damage localization during the Lateglacial, rockslide nucleation initiates soon after the onset of the Holocene, whereas full mechanical and hydraulic rockslide differentiation occurs during the Mid-Holocene, supporting a key role of long-term damage in the reported occurrence of widespread rockslide clusters of these ages.

  2. General purpose molecular dynamics simulations fully implemented on graphics processing units

    NASA Astrophysics Data System (ADS)

    Anderson, Joshua A.; Lorenz, Chris D.; Travesset, A.

    2008-05-01

    Graphics processing units (GPUs), originally developed for rendering real-time effects in computer games, now provide unprecedented computational power for scientific applications. In this paper, we develop a general purpose molecular dynamics code that runs entirely on a single GPU. It is shown that our GPU implementation provides performance equivalent to that of a fast 30-processor-core distributed memory cluster. Our results show that GPUs already provide an inexpensive alternative to such clusters, and we discuss the implications for the future.

  3. Study of the homogeneity of the current distribution in a dielectric barrier discharge in air by means of a segmented electrode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malashin, M. V., E-mail: m-malashin@mail.ru; Moshkunov, S. I.; Khomich, V. Yu.

    2016-02-15

    The current distribution in a dielectric barrier discharge in atmospheric-pressure air at a natural humidity of 40–60% was studied experimentally with a time resolution of 200 ps. The experimental results are interpreted by means of numerically simulating the discharge electric circuit. The obtained results indicate that the discharge operating in the volumetric mode develops simultaneously over the entire transverse cross section of the discharge gap.

  4. Process Control Migration of 50 LPH Helium Liquefier

    NASA Astrophysics Data System (ADS)

    Panda, U.; Mandal, A.; Das, A.; Behera, M.; Pal, Sandip

    2017-02-01

    Two helium liquefier/refrigerators are operational at VECC, one of which is dedicated to the Superconducting Cyclotron. The first helium liquefier, of 50 LPH capacity from Air Liquide, has already completed fifteen years of operation without any major trouble. This liquefier is controlled by a Eurotherm PC3000 PLC. This PLC became obsolete about seven years ago. Though we can still manage to run the PLC system with existing spares, there is a constant risk of discontinued operation due to the unavailability of spares. In order to eliminate this risk, an equivalent PLC control system based on the Siemens S7-300 was designed. For smooth migration, all programming was done keeping the same field input and output interface, nomenclature and graphset. The new program is a mix of S7-300 Graph, STL and LAD languages. One-to-one verification of the entire process graph was done manually, and the total program was run in simulation mode. A Matlab mathematical model was also used for plant control simulations. An EPICS-based SCADA was used for process monitoring. As of now, the entire hardware and software are ready for direct replacement with minimal setup time.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lei, Dongsheng; Rames, Matthew; Zhang, Xing

    Cholesteryl ester transfer protein (CETP) mediates cholesteryl ester (CE) transfer from the atheroprotective high density lipoprotein (HDL) cholesterol to the atherogenic low density lipoprotein cholesterol. In the past decade, this property has driven the development of CETP inhibitors, which have been evaluated in large scale clinical trials for treating cardiovascular diseases. Despite the pharmacological interest, little is known about the fundamental mechanism of CETP in CE transfer. Recent electron microscopy (EM) experiments have suggested a tunnel mechanism, and molecular dynamics simulations have shown that the flexible N-terminal distal end of CETP penetrates into the HDL surface and takes up a CE molecule through an open pore. However, it is not known whether a CE molecule can completely transfer through an entire CETP molecule. Here, we used all-atom molecular dynamics simulations to evaluate this possibility. The results showed that a hydrophobic tunnel inside CETP is sufficient to allow a CE molecule to completely transfer through the entire CETP within a predicted transfer time and at a rate comparable with those obtained through physiological measurements. Analyses of the detailed interactions revealed several residues that might be critical for CETP function, which may provide important clues for the effective development of CETP inhibitors and treatment of cardiovascular diseases.

  6. The Role of Nonlocal Heat Flow in Hohlraums

    NASA Astrophysics Data System (ADS)

    Town, R. P. J.; Short, R. W.; Verdon, C. P.; Afeyan, B. B.; Glenzer, S. H.; Suter, L. J.

    1997-11-01

    Glenzer (submitted to Physical Review Letters), using the Thomson scattering technique, has measured the time evolution of the electron temperature in scale-1 hohlraums. The measured peak electron temperature was 5 keV. Lasnex simulations, using a flux-limited Spitzer heat diffusion model with the standard sharp-cutoff flux limiter of 0.05, gave a peak electron temperature of only 3 keV. Good agreement between simulation and experiment was found when Lasnex simulations employed a time-varying flux limiter, which had a value of 0.01 when the main drive came on. The need to severely inhibit heat transport over the entire volume of hot plasma at late time suggests that nonlocal heat flow could be important in explaining these experimental observations. In this presentation we will report on Fokker-Planck calculations of idealized hohlraums and compare them to standard hydrodynamic calculations using flux-limited Spitzer heat flow. This work was supported by the U.S. Department of Energy Office of Inertial Confinement Fusion under Cooperative Agreement No. DE-FC03-92SF19460. Also, work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract W-7405-ENG-48.
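
    A sketch of the sharp-cutoff flux limiting described above (standard formulae with illustrative constants in cgs-style units; not the Lasnex implementation): the conducted flux is capped at a fraction f of the free-streaming flux, so lowering f from 0.05 to 0.01 traps more heat in the laser-heated region and raises the simulated electron temperature.

```python
import numpy as np

kB_keV = 1.602e-9                  # erg per keV
m_e = 9.109e-28                    # electron mass, g

def heat_flux(Te_keV, grad_Te, ne, kappa0, f):
    """Return min(Spitzer flux, f * free-streaming flux)."""
    q_spitzer = -kappa0 * Te_keV**2.5 * grad_Te      # ~ T^{5/2} dT/dx
    v_th = np.sqrt(kB_keV * Te_keV / m_e)            # thermal speed, cm/s
    q_fs = ne * kB_keV * Te_keV * v_th               # free-streaming flux
    return np.sign(q_spitzer) * min(abs(q_spitzer), f * q_fs)

# kappa0 and the gradient are invented numbers chosen so the cap engages.
for f in (0.05, 0.01):
    q = heat_flux(Te_keV=3.0, grad_Te=-3.0 / 0.05, ne=1e21,
                  kappa0=1e19, f=f)
    print(f"flux limiter {f}: |q| = {abs(q):.2e} erg/cm^2/s")
```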

  7. 2D particle-in-cell simulation of the entire process of surface flashover on insulator in vacuum

    NASA Astrophysics Data System (ADS)

    Wang, Hongguang; Zhang, Jianwei; Li, Yongdong; Lin, Shu; Zhong, Pengfeng; Liu, Chunliang

    2018-04-01

    With the introduction of an external circuit model and a gas desorption model, the surface flashover on the plane insulator-vacuum interface perpendicular to parallel electrodes is simulated by a Particle-In-Cell method. It can be seen from simulations that when the secondary electron emission avalanche (SEEA) occurs, the current sharply increases because of the influence of the insulator surface charge on the cathode field emission. With the introduction of the gas desorption model, the current keeps on increasing after SEEA, and then the feedback of the external circuit causes the voltage between the two electrodes to decrease. The cathode emission current decreases, while the anode current keeps growing. With the definition that flashover occurs when the diode voltage drops by more than 20%, we obtained a simulated flashover voltage which agrees with the experimental value with the use of the field enhancement factor β = 145 and the gas molecule desorption coefficient γ = 0.25. From the simulation results, we can also see that the time delay of flashover decreases exponentially with voltage. In addition, from the gas desorption model, the gas density on the insulator surface is found to be proportional to the square of the gas desorption rate and linear in time.

  8. Traffic and Driving Simulator Based on Architecture of Interactive Motion.

    PubMed

    Paz, Alexander; Veeramisti, Naveen; Khaddar, Romesh; de la Fuente-Mella, Hanns; Modorcea, Luiza

    2015-01-01

    This study proposes an architecture for an interactive motion-based traffic simulation environment. In order to enhance modeling realism involving actual human beings, the proposed architecture integrates multiple types of simulation, including: (i) motion-based driving simulation, (ii) pedestrian simulation, (iii) motorcycling and bicycling simulation, and (iv) traffic flow simulation. The architecture has been designed to enable the simulation of the entire network; as a result, the actual driver, pedestrian, and bike rider can navigate anywhere in the system. In addition, the background traffic interacts with the actual human beings. This is accomplished by using a hybrid meso-microscopic traffic flow simulation modeling approach. The mesoscopic traffic flow simulation model loads the results of a user equilibrium traffic assignment solution and propagates the corresponding traffic through the entire system. The microscopic traffic flow simulation model provides background traffic around the vicinities where actual human beings are navigating the system. The two traffic flow simulation models interact continuously to update system conditions based on the interactions between actual humans and the fully simulated entities. Implementation efforts are currently in progress and some preliminary tests of individual components have been conducted. The implementation of the proposed architecture faces significant challenges ranging from multiplatform and multilanguage integration to multievent communication and coordination.

  10. Mechanism of the αβ Conformational Change in F1-ATPase after ATP Hydrolysis: Free-Energy Simulations

    PubMed Central

    Ito, Yuko; Ikeguchi, Mitsunori

    2015-01-01

    One of the motive forces for F1-ATPase rotation is the conformational change of the catalytically active β subunit due to closing and opening motions caused by ATP binding and hydrolysis, respectively. The closing motion is accomplished in two steps: the hydrogen-bond network around ATP changes and then the entire structure changes via B-helix sliding, as shown in our previous study. Here, we investigated the opening motion induced by ATP hydrolysis using all-atom free-energy simulations, combining the nudged elastic band method and umbrella sampling molecular-dynamics simulations. Because hydrolysis requires residues in the α subunit, the simulations were performed with the αβ dimer. The results indicate that the large-scale opening motion is also achieved by the B-helix sliding (in the reverse direction). However, the sliding mechanism is different from that of ATP binding because sliding is triggered by separation of the hydrolysis products ADP and Pi. We also addressed several important issues: (1) the timing of the product Pi release; (2) the unresolved half-closed β structure; and (3) the ADP release mechanism. These issues are fundamental for motor function; thus, the rotational mechanism of the entire F1-ATPase is also elucidated through this αβ study. During the conformational change, conserved residues among the ATPase proteins play important roles, suggesting that the obtained mechanism may be shared with other ATPase proteins. When combined with our previous studies, these results provide a comprehensive view of the β-subunit conformational change that drives the ATPase. PMID:25564855

  11. The virtual morphology and the main movements of the human neck simulations used for car crash studies

    NASA Astrophysics Data System (ADS)

    Ciunel, St.; Tica, B.

    2016-08-01

    The paper presents studies made on a biomechanical system composed of the neck, head and thorax bones. The models were defined in a CAD environment that includes the Adams algorithm for dynamic simulations. The virtual models and the entire morphology were obtained from CT images of a living human subject. The main movements analyzed were axial rotation (left-right), lateral bending (left-right) and the flexion-extension movement. From the simulations, the entire biomechanical behavior was obtained in the form of data tables and diagrams. The virtual model composed of the neck and head can be included in a more complex system (such as a car system) and subjected to several impact simulations (virtual crash tests). Our research team also built the main components of a testing device for a dummy car-crash neck-head system using anatomical data.

  12. First Lunar Wake Passage of ARTEMIS: Discrimination of Wake Effects and Solar Wind Fluctuations by 3D Hybrid Simulations

    NASA Technical Reports Server (NTRS)

    Wiehle, S.; Plaschke, F.; Motschmann, U.; Glassmeier, K. H.; Auster, H. U.; Angelopoulos, V.; Mueller, J.; Kriegel, H.; Georgescu, E.; Halekas, J.

    2011-01-01

    The spacecraft P1 of the new ARTEMIS (Acceleration, Reconnection, Turbulence, and Electrodynamics of the Moon's Interaction with the Sun) mission passed through the lunar wake for the first time on February 13, 2010. We present magnetic field and plasma data of this event and results of 3D hybrid simulations. As the solar wind magnetic field was highly dynamic during the passage, a simulation with stationary solar wind input cannot distinguish whether distortions were caused by these solar wind variations or by the lunar wake; therefore, a dynamic real-time simulation of the flyby has been performed. The input values of this simulation are taken from NASA OMNI data and adapted to the P1 data, resulting in good agreement between simulation and measurements. Combined with the stationary simulation showing non-transient lunar wake structures, a separation of solar wind and wake effects is achieved. An anisotropy in the magnitude of the plasma bulk flow velocity, caused by a non-vanishing magnetic field component parallel to the solar wind flow, and perturbations created by counterstreaming ions in the lunar wake are observed in both data and simulations. The simulations help to interpret the data, granting us the opportunity to examine the entire lunar plasma environment and thus extending the possibilities of the measurements alone: a comparison of a simulation cross section to theoretical predictions of MHD wave propagation shows that all three basic MHD modes are present in the lunar wake and that their expansion governs the lunar wake refilling process.

  13. First-principles calculation of the optical properties of an amphiphilic cyanine dye aggregate.

    PubMed

    Haverkort, Frank; Stradomska, Anna; de Vries, Alex H; Knoester, Jasper

    2014-02-13

    Using a first-principles approach, we calculate electronic and optical properties of molecular aggregates of the dye amphi-pseudoisocyanine, whose structures we obtained from molecular dynamics (MD) simulations of the self-aggregation process. Using quantum chemistry methods, we translate the structural information into an effective time-dependent Frenkel exciton Hamiltonian for the dominant optical transitions in the aggregate. This Hamiltonian is used to calculate the absorption spectrum. Detailed analysis of the dynamic fluctuations in the molecular transition energies and intermolecular excitation transfer interactions in this Hamiltonian allows us to elucidate the origin of the relevant time scales; short time scales, on the order of up to a few hundreds of femtoseconds, result from internal motions of the dye molecules, while the longer (a few picosecond) time scales we ascribe to environmental motions. The absorption spectra of the aggregate structures obtained from MD feature a blue-shifted peak compared to that of the monomer; thus, our aggregates can be classified as H-aggregates, although considerable oscillator strength is carried by states along the entire exciton band. Comparison to the experimental absorption spectrum of amphi-PIC aggregates shows that the simulated line shape is too wide, pointing to too much disorder in the internal structure of the simulated aggregates.
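
    A generic Frenkel-exciton sketch (a textbook construction with made-up parameters, not the paper's MD-derived Hamiltonian) shows how such a Hamiltonian yields an absorption spectrum; with positive nearest-neighbour coupling, the bright state sits at the top of the band, giving the blue-shifted H-aggregate peak noted above.

```python
import numpy as np

np.random.seed(0)
n = 40
energies = np.random.normal(2.2, 0.05, n)       # site energies, eV (invented)
J = 0.08                                        # nearest-neighbour coupling, eV
H = np.diag(energies) + J * (np.eye(n, k=1) + np.eye(n, k=-1))
E, C = np.linalg.eigh(H)                        # exciton energies and states

# For parallel transition dipoles, oscillator strength ~ |sum_i c_i|^2,
# which concentrates at the top of the band when J > 0 (H-aggregate).
f_osc = np.abs(C.sum(axis=0))**2

w = np.linspace(1.9, 2.6, 700)                  # probe energies, eV
sigma = 0.02                                    # Gaussian broadening, eV
spectrum = (f_osc[None, :] *
            np.exp(-((w[:, None] - E[None, :])**2) /
                   (2 * sigma**2))).sum(axis=1)
print(f"peak at {w[spectrum.argmax()]:.3f} eV vs monomer 2.200 eV")
```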

  14. GPU accelerated Monte-Carlo simulation of SEM images for metrology

    NASA Astrophysics Data System (ADS)

    Verduin, T.; Lokhorst, S. R.; Hagen, C. W.

    2016-03-01

    In this work we address the computation times of numerical studies in dimensional metrology. In particular, full Monte-Carlo simulation programs for scanning electron microscopy (SEM) image acquisition are known to be notoriously slow. Our quest to reduce the computation time of SEM image simulation has led us to investigate the use of graphics processing units (GPUs) for metrology. We have succeeded in creating a full Monte-Carlo simulation program for SEM images which runs entirely on a GPU. The physical scattering models of this GPU simulator are identical to those of a previous CPU-based simulator, which includes the dielectric function model for inelastic scattering and also refinements for low-voltage SEM applications. As a case study for the performance, we considered the simulated exposure of a complex feature: an isolated silicon line with rough sidewalls located on a flat silicon substrate. The surface of the rough feature is decomposed into 408 012 triangles. We have used an exposure dose of 6 mC/cm2, which corresponds to 6 553 600 primary electrons on average (Poisson distributed). We repeat the simulation for various primary electron energies: 300 eV, 500 eV, 800 eV, 1 keV, 3 keV and 5 keV. At first we run the simulation on a GeForce GTX480 from NVIDIA. The very same simulation is duplicated on our CPU-based program, for which we have used an Intel Xeon X5650. Apart from statistical fluctuations, no difference is found between the CPU- and GPU-simulated results. The GTX480 generates the images (depending on the primary electron energy) 350 to 425 times faster than a single-threaded Intel X5650 CPU. Although this is a tremendous speedup, we actually have not reached the maximum throughput because of the limited amount of available memory on the GTX480. Nevertheless, the speedup enables the fast acquisition of simulated SEM images for metrology. We now have the potential to investigate case studies in CD-SEM metrology which otherwise would take unreasonable amounts of computation time.

  15. High-Fidelity Simulation: Preparing Dental Hygiene Students for Managing Medical Emergencies.

    PubMed

    Bilich, Lisa A; Jackson, Sarah C; Bray, Brenda S; Willson, Megan N

    2015-09-01

    Medical emergencies can occur at any time in the dental office, so being prepared to properly manage the situation can be the difference between life and death. The entire dental team must be properly trained regarding all aspects of emergency management in the dental clinic. The aim of this study was to evaluate a new educational approach using a high-fidelity simulator to prepare dental hygiene students for medical emergencies. This study utilized high-fidelity simulation (HFS) to evaluate the abilities of junior dental hygiene students at Eastern Washington University to handle a medical emergency in the dental hygiene clinic. Students were given a medical emergency scenario requiring them to assess the emergency and implement life-saving protocols in a simulated "real-life" situation using a high-fidelity manikin. Retrospective data were collected for four years from the classes of 2010 through 2013 (N=114). The results indicated that learning with simulation was effective in helping the students identify the medical emergency in a timely manner, implement emergency procedures correctly, locate and correctly utilize contents of the emergency kit, administer appropriate intervention/treatment for a specific patient, and provide the patient with appropriate follow-up instructions. For dental hygiene programs seeking to enhance their curricula in the area of medical emergencies, this study suggests that HFS is an effective tool to prepare students to appropriately handle medical emergencies. Faculty calibration is essential to standardize simulation.

  16. Comprehensive evaluation of attitude and orbit estimation using real earth magnetic field data

    NASA Technical Reports Server (NTRS)

    Deutschmann, Julie; Bar-Itzhack, Itzhack

    1997-01-01

    A single, augmented extended Kalman filter (EKF) which simultaneously and autonomously estimates spacecraft attitude and orbit was developed and tested with simulated and real magnetometer and rate data. Since the earth's magnetic field is a function of time and position, and since time is accurately known, the differences between the computed and measured magnetic field components throughout the spacecraft's entire orbit are a function of orbit and attitude errors. These differences can be used to estimate the orbit and attitude. The test results of the EKF with magnetometer and gyro data from three NASA satellites are presented and evaluated.
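
    The measurement update at the core of such a filter has the generic EKF form below (placeholder state dimension and Jacobian, not the flight software): the innovation is the difference between the measured field and the field computed from the geomagnetic model at the estimated position and attitude.

```python
import numpy as np

def ekf_update(x, P, b_meas, b_pred, H, R):
    """x: state estimate; P: covariance; H: d(b_pred)/d(x); R: sensor cov."""
    y = b_meas - b_pred                      # innovation (nT)
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

n = 12                                       # e.g. 6 orbit + 6 attitude states
x, P = np.zeros(n), np.eye(n) * 1e-2
H = np.random.randn(3, n) * 0.1              # placeholder Jacobian
x, P = ekf_update(x, P,
                  b_meas=np.array([21050., -3020., 44010.]),
                  b_pred=np.array([21000., -3000., 44000.]),
                  H=H, R=np.eye(3) * 25.0)
```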

  17. Implementation of EAM and FS potentials in HOOMD-blue

    NASA Astrophysics Data System (ADS)

    Yang, Lin; Zhang, Feng; Travesset, Alex; Wang, Caizhuang; Ho, Kaiming

    HOOMD-blue is a general-purpose software package for performing classical molecular dynamics simulations entirely on GPUs. We provide full support for EAM and FS type potentials in HOOMD-blue and report accuracy and efficiency benchmarks, including comparisons with the LAMMPS GPU package. Two problems were selected to test the accuracy: the determination of the glass transition temperature of the Cu64.5Zr35.5 alloy using an FS potential, and the calculation of pair distribution functions of Ni3Al using an EAM potential. In both cases, the results using HOOMD-blue are indistinguishable, within statistical uncertainties, from those obtained by the GPU package in LAMMPS. As a test of time efficiency, we benchmark time-steps per second using LAMMPS GPU and HOOMD-blue on one NVIDIA Tesla GPU. Compared to our typical LAMMPS simulations on one CPU cluster node with 16 CPUs, LAMMPS GPU can be 3-3.5 times faster, and HOOMD-blue can be 4-5.5 times faster. We acknowledge the support from Laboratory Directed Research and Development (LDRD) of Ames Laboratory.

  18. Observation and numerical simulation of a convective initiation during COHMEX

    NASA Technical Reports Server (NTRS)

    Song, J. Aaron; Kaplan, Michael L.

    1991-01-01

    Under a synoptically undisturbed condition, a dual-peak convective lifecycle was observed with the COoperative Huntsville Meteorological EXperiment (COHMEX) observational network over a 24-hour period. The lifecycle included a multicell storm, which lasted about 6 hours, produced a peak rainrate exceeding 100 mm/hr, and initiated a downstream mesoscale convective system. The 24-hour accumulated rainfall of this event was the largest during the entire COHMEX. The downstream mesoscale convective system, unfortunately, was difficult to investigate quantitatively due to the lack of mesoscale observations. The dataset collected near the time of the multicell storm evolution, including its initiation, was one of the best datasets of COHMEX. In this study, the initiation of this multicell storm is chosen as the target of the numerical simulations.

  19. A scalable neural chip with synaptic electronics using CMOS integrated memristors.

    PubMed

    Cruz-Albrecht, Jose M; Derosier, Timothy; Srinivasa, Narayan

    2013-09-27

    The design and simulation of a scalable neural chip with synaptic electronics using nanoscale memristors fully integrated with complementary metal-oxide-semiconductor (CMOS) is presented. The circuit consists of integrate-and-fire neurons and synapses with spike-timing dependent plasticity (STDP). The synaptic conductance values can be stored in memristors with eight levels, and the topology of connections between neurons is reconfigurable. The circuit has been designed using a 90 nm CMOS process with via connections to on-chip post-processed memristor arrays. The design has about 16 million CMOS transistors and 73 728 integrated memristors. We provide circuit level simulations of the entire chip performing neuronal and synaptic computations that result in biologically realistic functional behavior.
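
    A toy version of the synapse rule described above (illustrative constants): exponential STDP with the updated conductance snapped to one of the eight levels a memristor cell can store.

```python
import numpy as np

LEVELS = np.linspace(0.0, 1.0, 8)            # eight storable conductances

def stdp_dw(dt_ms, a_plus=0.05, a_minus=0.055, tau=20.0):
    """Weight change for spike-time difference dt = t_post - t_pre (ms)."""
    return a_plus * np.exp(-dt_ms / tau) if dt_ms >= 0 else \
          -a_minus * np.exp(dt_ms / tau)

def quantize(w):
    """Snap the analog update to the nearest memristor level."""
    return LEVELS[np.argmin(np.abs(LEVELS - w))]

w = 0.5
for dt in (+4.0, +12.0, -6.0):               # three pre/post spike pairings
    w = quantize(np.clip(w + stdp_dw(dt), 0.0, 1.0))
    print(f"dt = {dt:+5.1f} ms -> w = {w:.3f}")
```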

  20. Immersive Simulation in Constructivist-Based Classroom E-Learning

    ERIC Educational Resources Information Center

    McHaney, Roger; Reiter, Lauren; Reychav, Iris

    2018-01-01

    This article describes the development of a simulation-based online course combining sound pedagogy, educational technology, and real world expertise to provide university students with an immersive experience in storage management systems. The course developed in this example does more than use a simulation, the entire course is delivered using a…

  1. Using "Game of Thrones" to Teach International Relations

    ERIC Educational Resources Information Center

    Young, Laura D.; Carranza Ko, Ñusta; Perrin, Michael

    2018-01-01

    Despite the known benefits of long-term, game-based simulations they remain underutilized in Political Science classrooms. Simulations used are typically designed to reinforce a concept and are short-lived, lasting one or two class sessions; rarely are entire courses designed around a single simulation. Creating real-world conditions in which…

  2. Real-time million-synapse simulation of rat barrel cortex.

    PubMed

    Sharp, Thomas; Petersen, Rasmus; Furber, Steve

    2014-01-01

    Simulations of neural circuits are bounded in scale and speed by available computing resources, and particularly by the differences in parallelism and communication patterns between the brain and high-performance computers. SpiNNaker is a computer architecture designed to address this problem by emulating the structure and function of neural tissue, using very many low-power processors and an interprocessor communication mechanism inspired by axonal arbors. Here we demonstrate that thousand-processor SpiNNaker prototypes can simulate models of the rodent barrel system comprising 50,000 neurons and 50 million synapses. We use the PyNN library to specify models, and the intrinsic features of Python to control experimental procedures and analysis. The models reproduce known thalamocortical response transformations, exhibit known, balanced dynamics of excitation and inhibition, and show a spatiotemporal spread of activity through the superficial cortical layers. These demonstrations are a significant step toward tractable simulations of entire cortical areas on the million-processor SpiNNaker machines in development.
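
    A sketch of how such a model is specified through PyNN (a generic PyNN 0.8-style script, far smaller than the 50,000-neuron barrel model; pyNN.nest is used here as a stand-in backend, since the SpiNNaker backend requires the hardware):

```python
import pyNN.nest as sim

sim.setup(timestep=1.0)                        # 1 ms resolution

cortex = sim.Population(400, sim.IF_curr_exp(), label="layer")
thalamus = sim.Population(100, sim.SpikeSourcePoisson(rate=15.0))

sim.Projection(thalamus, cortex,               # feed-forward drive
               sim.FixedProbabilityConnector(0.1),
               sim.StaticSynapse(weight=0.5, delay=1.0))
sim.Projection(cortex, cortex,                 # recurrent excitation
               sim.FixedProbabilityConnector(0.02),
               sim.StaticSynapse(weight=0.1, delay=1.0))

cortex.record("spikes")
sim.run(1000.0)                                # 1 s of biological time
data = cortex.get_data("spikes")               # Neo Block with spike trains
sim.end()
```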

  4. Surgical simulation: a urological perspective.

    PubMed

    Wignall, Geoffrey R; Denstedt, John D; Preminger, Glenn M; Cadeddu, Jeffrey A; Pearle, Margaret S; Sweet, Robert M; McDougall, Elspeth M

    2008-05-01

    Surgical education is changing rapidly as several factors including budget constraints and medicolegal concerns limit opportunities for urological trainees. New methods of skills training such as low fidelity bench trainers and virtual reality simulators offer new avenues for surgical education. In addition, surgical simulation has the potential to allow practicing surgeons to develop new skills and maintain those they already possess. We provide a review of the background, current status and future directions of surgical simulators as they pertain to urology. We performed a literature review and an overview of surgical simulation in urology. Surgical simulators are in various stages of development and validation. Several simulators have undergone extensive validation studies and are in use in surgical curricula. While virtual reality simulators offer the potential to more closely mimic reality and present entire operations, low fidelity simulators remain useful in skills training, particularly for novices and junior trainees. Surgical simulation remains in its infancy. However, the potential to shorten learning curves for difficult techniques and practice surgery without risk to patients continues to drive the development of increasingly more advanced and realistic models. Surgical simulation is an exciting area of surgical education. The future is bright as advancements in computing and graphical capabilities offer new innovations in simulator technology. Simulators must continue to undergo rigorous validation studies to ensure that time spent by trainees on bench trainers and virtual reality simulators will translate into improved surgical skills in the operating room.

  5. Stability of Granular Packings Jammed under Gravity: Avalanches and Unjamming

    NASA Astrophysics Data System (ADS)

    Merrigan, Carl; Birwa, Sumit; Tewari, Shubha; Chakraborty, Bulbul

    Granular avalanches indicate the sudden destabilization of a jammed state due to a perturbation. We propose that the perturbation needed depends on the entire force network of the jammed configuration. Some networks are stable, while others are fragile, leading to the unpredictability of avalanches. To test this claim, we simulated an ensemble of jammed states in a hopper using LAMMPS. These simulations were motivated by experiments with vibrated hoppers where the unjamming times followed power-law distributions. We compare the force networks for these simulated states with respect to their overall stability. The states are classified by how long they remain stable when subject to continuous vibrations. We characterize the force networks through both their real space geometry and representations in the associated force-tile space, extending this tool to jammed states with body forces. Supported by NSF Grant DMR1409093 and DGE1068620.

  6. Optimal control solutions to sodic soil reclamation

    NASA Astrophysics Data System (ADS)

    Mau, Yair; Porporato, Amilcare

    2016-05-01

    We study the reclamation process of a sodic soil by irrigation with water amended with calcium cations. In order to explore the entire range of time-dependent strategies, this task is framed as an optimal control problem, where the amendment rate is the control and the total rehabilitation time is the quantity to be minimized. We use a minimalist model of vertically averaged soil salinity and sodicity, in which the main feedback controlling the dynamics is the nonlinear coupling of soil water and exchange complex, given by the Gapon equation. We show that the optimal solution is a bang-bang control strategy, where the amendment rate is discontinuously switched along the process from a maximum value to zero. The solution enables a reduction in remediation time of about 50%, compared with the continuous use of good-quality irrigation water. Because of its general structure, the bang-bang solution is also shown to work for the reclamation of other soil conditions, such as saline-sodic soils. The novelty in our modeling approach is the capability of searching the entire "strategy space" for optimal time-dependent protocols. The optimal solutions found for the minimalist model can be then fine-tuned by experiments and numerical simulations, applicable to realistic conditions that include spatial variability and heterogeneities.
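
    A lumped caricature of why the bang-bang profile wins (toy exchange kinetics, not the paper's Gapon-coupled model): if the exchangeable-sodium fraction decays as dE/dt = -r c(t) E with the amendment rate bounded by c_max, holding c = c_max until the target is met and then switching to zero minimizes time, and halving the rate roughly doubles the reclamation time, echoing the ~50% saving quoted above.

```python
def reclamation_time(c, r=0.8, E0=0.40, target=0.15, dt=0.01):
    """Time for the exchangeable-sodium fraction E to fall to target."""
    E, t = E0, 0.0
    while E > target:
        E -= dt * r * c * E        # toy calcium-for-sodium exchange
        t += dt
    return t

t_bang = reclamation_time(c=1.0)   # amended water, maximum rate
t_slow = reclamation_time(c=0.5)   # weaker constant-quality water
print(f"bang-bang: {t_bang:.2f}, constant low rate: {t_slow:.2f}")
print(f"time saved: {100 * (1 - t_bang / t_slow):.0f}%")
```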

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoogcarspel, S; Kerkmeijer, L; Lagendijk, J

    The Alderson phantom is a human-shaped quality assurance tool that has been used for over 30 years in radiotherapy. The phantom can provide integrated tests of the entire chain of treatment planning and delivery. The purpose of this research was to investigate whether this phantom can be used to chain-test a treatment on the MRI linear accelerator (MRL) currently being developed at the UMC Utrecht in collaboration with Elekta and Philips. The latter was demonstrated by chain-testing the future First-in-Man treatments with this system. An Alderson phantom was used to chain-test an entire treatment with the MRL. First, a CT was acquired of the phantom with additional markers that are visible on both MR and CT. A treatment plan for treating bone metastases in the sacrum was made. The phantom was then placed in the MRL. For MR imaging, a 3D volume was acquired. The initially developed treatment plan was then simulated on the new MRI dataset. For simulation, both the MR and CT data were used by registering them together. Before treatment delivery, an MV image was acquired and compared with a DRR calculated from the MR/CT registration data. Finally, the treatment was delivered. Figure 1 shows both the T1-weighted MR image of the phantom and the CT that was registered to the MR image. Figure 2 shows both the calculated and measured MV images acquired by the MV panel. Figure 3 shows the dose distribution that was simulated. The total elapsed time for the entire procedure, excluding irradiation, was 13:35 minutes. The Alderson phantom yields sufficient MR contrast and can be used for full MR-guided radiotherapy treatment chain testing. As a result, we are able to perform an end-to-end chain test of the future First-in-Man treatments.

  8. Determination of Vertical Borehole and Geological Formation Properties using the Crossed Contour Method

    PubMed Central

    Leyde, Brian P.; Klein, Sanford A; Nellis, Gregory F.; Skye, Harrison

    2017-01-01

    This paper presents a new method called the Crossed Contour Method for determining the effective properties (borehole radius and ground thermal conductivity) of a vertical ground-coupled heat exchanger. The borehole radius is used as a proxy for the overall borehole thermal resistance. The method has been applied to both simulated and experimental borehole Thermal Response Test (TRT) data using the Duct Storage vertical ground heat exchanger model implemented in the TRansient SYstems Simulation software (TRNSYS). The Crossed Contour Method generates a parametric grid of simulated TRT data for different combinations of borehole radius and ground thermal conductivity in a series of time windows. The error between the average of the simulated and experimental bore field inlet and outlet temperatures is calculated for each set of borehole properties within each time window. Using these data, contours of the minimum error are constructed in the parameter space of borehole radius and ground thermal conductivity. When all of the minimum error contours for each time window are superimposed, the point where the contours cross (intersect) identifies the effective borehole properties for the model that most closely represents the experimental data in every time window and thus over the entire length of the experimental data set. The computed borehole properties are compared with results from existing model inversion methods including the Ground Property Measurement (GPM) software developed by Oak Ridge National Laboratory, and the Line Source Model. PMID:28785125
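
    The following schematic reproduces the crossed-contour logic with a synthetic forward model standing in for the TRNSYS borehole model (all functions and numbers are invented for illustration): within a single time window many (radius, conductivity) pairs fit comparably well, but only the true pair stays near-optimal in every window, so accumulating the per-window error surfaces makes the crossing point stand out.

```python
import numpy as np

def forward(radius, k, t):
    """Stand-in for simulated mean fluid temperature at times t (hours)."""
    return 20.0 + 5.0 * np.log1p(k * t) - 30.0 * radius

radii = np.linspace(0.05, 0.15, 41)
conds = np.linspace(1.0, 4.0, 41)
true_r, true_k = 0.10, 2.5
windows = [np.linspace(t0, t0 + 10, 50) for t0 in (1.0, 20.0, 60.0)]

score = np.zeros((radii.size, conds.size))
for t in windows:
    data = forward(true_r, true_k, t)              # "experimental" window
    for i, r in enumerate(radii):
        for j, k in enumerate(conds):
            score[i, j] += np.abs(forward(r, k, t) - data).mean()

i, j = np.unravel_index(score.argmin(), score.shape)
print(f"recovered: radius = {radii[i]:.3f} m, k = {conds[j]:.2f} W/m/K")
```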

  9. Skin hydration analysis by experiment and computer simulations and its implications for diapered skin.

    PubMed

    Saadatmand, M; Stone, K J; Vega, V N; Felter, S; Ventura, S; Kasting, G; Jaworska, J

    2017-11-01

    Experimental work on skin hydration is technologically challenging and mostly limited to observations where environmental conditions are constant. In some cases, such as diapered baby skin, such work is practically unfeasible, yet it is important to understand the potential effects of diapering on skin condition. To partly overcome this challenge, we developed a computer simulation model of reversible transient skin hydration effects. The skin hydration model of Li et al. (Chem Eng Sci, 138, 2015, 164) was further developed to simulate transient exposure conditions where relative humidity (RH), wind velocity, and air and skin temperatures can be any function of time. Computer simulations of evaporative water loss (EWL) decay after different occlusion times were compared with experimental data to calibrate the model. Next, we used the model to investigate EWL and SC thickness in different diapering scenarios. Key results from the experimental work were: (1) for occlusions by RH=100% and free water longer than 30 minutes, the absorbed amount of water is almost the same; (2) longer occlusion times result in higher water absorption by the SC. The EWL decay and skin water content predictions were in agreement with experimental data. Simulations also revealed that skin under occlusion hydrates mainly because the outflux is blocked, not because it absorbs water from the environment. Further, simulations demonstrated that the hydration level is sensitive to time, RH and/or free water on skin. In simulated diapering scenarios, skin maintained hydration content very close to baseline (no-diaper) conditions for the entire duration of a 24-hour period. Different diapers/diaper technologies are known to have different profiles in terms of their ability to provide wetness protection, which can result in consumer-noticeable differences in wetness. Simulation results based on published literature using data from a number of different diapers suggest that diapered skin hydrates within ranges considered reversible. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  10. Simulations of phase space distributions of storm time proton ring current

    NASA Technical Reports Server (NTRS)

    Chen, Margaret W.; Lyons, Larry R.; Schulz, Michael

    1994-01-01

    We use results of guiding-center simulations of ion transport to map phase space densities of the stormtime proton ring current. We model a storm as a sequence of substorm-associated enhancements in the convection electric field. Our pre-storm phase space distribution is an analytical solution to a steady-state transport model in which quiet-time radial diffusion balances charge exchange. These pre-storm phase space spectra at L approximately 2 to 4 reproduce many of the features found in observed quiet-time spectra. Using results from simulations of ion transport during model storms having main phases of 3, 6, and 12 hr, we map phase space distributions from the pre-storm distribution in accordance with Liouville's theorem. We find stormtime enhancements in the phase space densities at energies E approximately 30-160 keV for L approximately 2.5 to 4. These enhancements agree well with the observed stormtime ring current. For storms with shorter main phases (approximately 3 hr), the enhancements are caused mainly by the trapping of ions injected from open night side trajectories, and diffusive transport of higher-energy (greater than or approximately 160 keV) ions contributes little to the stormtime ring current. However, the stormtime ring current is augmented also by the diffusive transport of higher-energy ions (E greater than or approximately 160 keV) during storms having longer main phases (greater than or approximately 6 hr). In order to account for the increase in Dst associated with the formation of the stormtime ring current, we estimate the enhancement in particle-energy content that results from stormtime ion transport in the equatorial magnetosphere. We find that transport alone cannot account for the entire increase in the absolute value of Dst typical of a major storm. However, we can account for the entire increase in the absolute value of Dst by realistically increasing the stormtime outer boundary value of the phase space density relative to the quiet-time value. We compute the magnetic field produced by the ring current itself and find that radial profiles of the magnetic field depression resemble those obtained from observational data.

  11. Project 0-1800 : NAFTA impacts on operations : executive summary

    DOT National Transportation Integrated Search

    2001-07-01

    Project 0-1800 pioneered the use of modern micro-simulation models to analyze the complex procedures involved in international border crossing in Texas. Animated models simulate the entire southbound commercial traffic flow in two important internati...

  12. System Modeling of a MEMS Vibratory Gyroscope and Integration to Circuit Simulation.

    PubMed

    Kwon, Hyukjin J; Seok, Seyeong; Lim, Geunbae

    2017-11-18

    Recently, consumer applications have dramatically increased the demand for low-cost and compact gyroscopes. Therefore, on the basis of microelectromechanical systems (MEMS) technology, many gyroscopes have been developed and successfully commercialized. A MEMS gyroscope consists of a MEMS device and an electrical circuit for self-oscillation and angular-rate detection. Since the MEMS device and circuit are interactively related, the entire system should be analyzed together to design or test the gyroscope. In this study, a MEMS vibratory gyroscope is analyzed based on system dynamic modeling; it can thus be expressed mathematically and integrated into a circuit simulator. A behavioral simulation of the entire system was conducted to demonstrate the self-oscillation and angular-rate detection and to determine the circuit parameters to be optimized. From the simulation, the operating characteristics as a function of vacuum pressure and the scale factor were obtained, showing trends similar to those of the experimental results. The simulation method presented in this paper can be generalized to a wide range of MEMS devices.
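
    The dynamics behind such a system-level model are commonly written as two coupled resonators (a textbook form with illustrative parameters, not the paper's device): the Coriolis term proportional to the angular rate couples the driven mode into the sense mode, so the sense amplitude encodes the rate.

```python
import numpy as np
from scipy.integrate import solve_ivp

wx, wy = 2 * np.pi * 10e3, 2 * np.pi * 10.2e3   # mode frequencies, rad/s
Q, f0 = 5000.0, 1e3                             # quality factor, drive accel
Omega = 1.0                                     # input angular rate, rad/s

def rhs(t, s):
    x, vx, y, vy = s
    ax = f0 * np.cos(wx * t) - (wx / Q) * vx - wx**2 * x   # driven mode
    ay = -2.0 * Omega * vx - (wy / Q) * vy - wy**2 * y     # Coriolis forcing
    return [vx, ax, vy, ay]

sol = solve_ivp(rhs, (0.0, 0.02), [0, 0, 0, 0], max_step=2e-6, rtol=1e-6)
sense = np.abs(sol.y[2, sol.t > 0.015]).max()   # settled sense amplitude
print(f"sense-mode amplitude ~ {sense:.2e} m (proportional to Omega)")
```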

  13. Accelerating Science with Generative Adversarial Networks: An Application to 3D Particle Showers in Multilayer Calorimeters

    NASA Astrophysics Data System (ADS)

    Paganini, Michela; de Oliveira, Luke; Nachman, Benjamin

    2018-01-01

    Physicists at the Large Hadron Collider (LHC) rely on detailed simulations of particle collisions to build expectations of what experimental data may look like under different theoretical modeling assumptions. Petabytes of simulated data are needed to develop analysis techniques, though they are expensive to generate using existing algorithms and computing resources. The modeling of detectors and the precise description of particle cascades as they interact with the material in the calorimeter are the most computationally demanding steps in the simulation pipeline. We therefore introduce a deep neural network-based generative model to enable high-fidelity, fast, electromagnetic calorimeter simulation. There are still challenges for achieving precision across the entire phase space, but our current solution can reproduce a variety of particle shower properties while achieving speedup factors of up to 100 000 × . This opens the door to a new era of fast simulation that could save significant computing time and disk space, while extending the reach of physics searches and precision measurements at the LHC and beyond.
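
    A minimal GAN scaffold in the spirit described above (a 1-D toy with invented "showers", nowhere near the paper's 3-D calorimeter model; PyTorch is assumed): the generator learns to map noise to energy deposits so that sampling it replaces the slow simulator.

```python
import torch
from torch import nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))
D = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_showers(n):
    """Stand-in for full-simulation samples: Gaussian bumps at random depth."""
    x = torch.linspace(0, 1, 32)
    depth = torch.rand(n, 1)
    return torch.exp(-((x - depth)**2) / 0.02)

for step in range(200):
    real = real_showers(128)
    fake = G(torch.randn(128, 16))
    # Discriminator step: real -> 1, fake -> 0.
    loss_d = bce(D(real), torch.ones(128, 1)) + \
             bce(D(fake.detach()), torch.zeros(128, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator step: fool the discriminator.
    loss_g = bce(D(fake), torch.ones(128, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# Once trained, G(noise) yields a shower in microseconds -- the source of
# the orders-of-magnitude speedup over full simulation quoted above.
```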

  14. Delivering better power: the role of simulation in reducing the environmental impact of aircraft engines.

    PubMed

    Menzies, Kevin

    2014-08-13

    The growth in simulation capability over the past 20 years has led to remarkable changes in the design process for gas turbines. The availability of relatively cheap computational power coupled to improvements in numerical methods and physical modelling in simulation codes have enabled the development of aircraft propulsion systems that are more powerful and yet more efficient than ever before. However, the design challenges are correspondingly greater, especially to reduce environmental impact. The simulation requirements to achieve a reduced environmental impact are described along with the implications of continued growth in available computational power. It is concluded that achieving the environmental goals will demand large-scale multi-disciplinary simulations requiring significantly increased computational power, to enable optimization of the airframe and propulsion system over the entire operational envelope. However even with massive parallelization, the limits imposed by communications latency will constrain the time required to achieve a solution, and therefore the position of such large-scale calculations in the industrial design process. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  15. NIMROD resistive magnetohydrodynamic simulations of spheromak physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hooper, E. B.; Cohen, B. I.; McLean, H. S.

    The physics of spheromak plasmas is addressed by time-dependent, three-dimensional, resistive magnetohydrodynamic simulations with the NIMROD code [C. R. Sovinec et al., J. Comput. Phys. 195, 355 (2004)]. Included in some detail are the formation of a spheromak driven electrostatically by a coaxial plasma gun with a flux-conserver geometry and power systems that accurately model the sustained spheromak physics experiment [R. D. Wood et al., Nucl. Fusion 45, 1582 (2005)]. The controlled decay of the spheromak plasma over several milliseconds is also modeled as the programmable current and voltage relax, resulting in simulations of entire experimental pulses. Reconnection phenomena and the effects of current profile evolution on the growth of symmetry-breaking toroidal modes are diagnosed; these in turn affect the quality of magnetic surfaces and the energy confinement. The sensitivity of the simulation results to variations in both physical and numerical parameters, including spatial resolution, is addressed. There are significant points of agreement between the simulations and the observed experimental behavior, e.g., in the evolution of the magnetics and the sensitivity of the energy confinement to the presence of symmetry-breaking magnetic fluctuations.

  16. NIMROD Resistive Magnetohydrodynamic Simulations of Spheromak Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hooper, E B; Cohen, B I; McLean, H S

    The physics of spheromak plasmas is addressed by time-dependent, three-dimensional, resistive magneto-hydrodynamic simulations with the NIMROD code. Included in some detail are the formation of a spheromak driven electrostatically by a coaxial plasma gun with a flux-conserver geometry and power systems that accurately model the Sustained Spheromak Physics Experiment (SSPX) (R. D. Wood, et al., Nucl. Fusion 45, 1582 (2005)). The controlled decay of the spheromak plasma over several milliseconds is also modeled as the programmable current and voltage relax, resulting in simulations of entire experimental pulses. Reconnection phenomena and the effects of current profile evolution on the growth of symmetry-breaking toroidal modes are diagnosed; these in turn affect the quality of magnetic surfaces and the energy confinement. The sensitivity of the simulation results to variations in both physical and numerical parameters, including spatial resolution, is addressed. There are significant points of agreement between the simulations and the observed experimental behavior, e.g., in the evolution of the magnetics and the sensitivity of the energy confinement to the presence of symmetry-breaking magnetic fluctuations.

  17. A new technique for observationally derived boundary conditions for space weather

    NASA Astrophysics Data System (ADS)

    Pagano, Paolo; Mackay, Duncan Hendry; Yeates, Anthony Robinson

    2018-04-01

    Context: In recent years, space weather research has focused on developing modelling techniques to predict the arrival time and properties of coronal mass ejections (CMEs) at the Earth. This paper proposes a new modelling technique, suitable for the next generation of space weather predictive tools, that is both efficient and accurate. The new approach aims to provide interplanetary space weather forecasting models with accurate time-dependent boundary conditions of erupting magnetic flux ropes in the upper solar corona. Methods: To produce the boundary conditions, we couple two different modelling techniques: MHD simulations and a quasi-static non-potential evolution model. Both are applied on a spatial domain that covers the entire solar surface, although they extend over different radial distances. The non-potential model uses a time series of observed synoptic magnetograms to drive the non-potential quasi-static evolution of the coronal magnetic field. This allows us to follow the formation and loss of equilibrium of magnetic flux ropes. Following this, an MHD simulation captures the dynamic evolution of the erupting flux rope as it is ejected into interplanetary space. Results: The present paper focuses on the MHD simulations that follow the ejection of magnetic flux ropes to 4 R⊙. We first propose a technique for specifying the pre-eruptive plasma properties in the corona. Next, time-dependent MHD simulations describe the ejection of two magnetic flux ropes, producing time-dependent boundary conditions for the magnetic field and plasma at 4 R⊙ that in future may be applied to interplanetary space weather prediction models. Conclusions: We show that the dual use of quasi-static non-potential magnetic field simulations and full time-dependent MHD simulations can produce realistic inhomogeneous boundary conditions for space weather forecasting tools. A number of technical and scientific challenges still need to be addressed before a fully operational model can be produced. Nevertheless, we illustrate that coupling quasi-static and MHD simulations in this way can significantly reduce the computational time required to produce realistic space weather boundary conditions.

  18. Computer simulation of on-orbit manned maneuvering unit operations

    NASA Technical Reports Server (NTRS)

    Stuart, G. M.; Garcia, K. D.

    1986-01-01

    Simulation of spacecraft on-orbit operations is discussed in reference to Martin Marietta's Space Operations Simulation laboratory's use of computer software models to drive a six-degree-of-freedom moving-base carriage and two target gimbal systems. In particular, key simulation issues and related computer software models associated with providing real-time, man-in-the-loop simulations of the Manned Maneuvering Unit (MMU) are addressed, with special attention given to how effectively these models and motion systems simulate the MMU's actual on-orbit operations. The weightless conditions of the space environment require the development of entirely new devices for locomotion. Since access to space is very limited, it is necessary to design, build, and test these new devices within the physical constraints of Earth using simulators. The simulation method discussed here is the technique of using computer software models to drive a Moving Base Carriage (MBC) capable of providing simultaneous six-degree-of-freedom motions. This method, utilized at Martin Marietta's Space Operations Simulation (SOS) laboratory, provides the ability to simulate the operation of manned spacecraft, provides the pilot with proper three-dimensional visual cues, and allows training of on-orbit operations. The purpose here is to discuss significant MMU simulation issues, the related models that were developed in response to these issues, and how effectively these models simulate the MMU's actual on-orbit operations.

  19. Workplace Simulation: An Integrated Approach to Training University Students in Professional Communication

    ERIC Educational Resources Information Center

    Ismail, Norhayati; Sabapathy, Chitra

    2016-01-01

    In the redesign of a professional communication course for real estate students, a workplace simulation was implemented, spanning the entire 12-week duration of the course. The simulation was achieved through the creation of an online company presence, the infusion of communication typically encountered in the workplace, and an intensive and…

  20. Transfer of training and simulator qualification or myth and folklore in helicopter simulation

    NASA Technical Reports Server (NTRS)

    Dohme, Jack

    1992-01-01

    Transfer-of-training studies at Fort Rucker using the backward-transfer paradigm have shown that existing flight simulators are not entirely adequate for meeting training requirements. Using an ab initio training research simulator that models the UH-1, training effectiveness ratios were developed; the data demonstrate it to be a cost-effective primary trainer. A simulator qualification method is suggested in which a combination of these transfer-of-training paradigms is used to determine overall simulator fidelity and training effectiveness.

  1. Lower bound on the time complexity of local adiabatic evolution

    NASA Astrophysics Data System (ADS)

    Chen, Zhenghao; Koh, Pang Wei; Zhao, Yan

    2006-11-01

    The adiabatic theorem of quantum physics has been, in recent times, utilized in the design of local search quantum algorithms, and has been proven to be equivalent to standard quantum computation, that is, the use of unitary operators [D. Aharonov in Proceedings of the 45th Annual Symposium on the Foundations of Computer Science, 2004, Rome, Italy (IEEE Computer Society Press, New York, 2004), pp. 42-51]. Hence, the study of the time complexity of adiabatic evolution algorithms gives insight into the computational power of quantum algorithms. In this paper, we present two different approaches of evaluating the time complexity for local adiabatic evolution using time-independent parameters, thus providing effective tests (not requiring the evaluation of the entire time-dependent gap function) for the time complexity of newly developed algorithms. We further illustrate our tests by displaying results from the numerical simulation of some problems, viz. specially modified instances of the Hamming weight problem.
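    For orientation, a standard statement of the local adiabatic condition (in the Roland-Cerf style; the paper's time-independent tests bound related quantities without evaluating the gap along the whole path) reads:

    ```latex
    % Local adiabatic evolution: the interpolation rate is bounded by the
    % instantaneous gap, which lower-bounds the total evolution time T.
    \[
    \left|\frac{ds}{dt}\right| \;\le\; \varepsilon\,
    \frac{g^{2}(s)}{\left\lVert \tfrac{dH}{ds}\right\rVert},
    \qquad\Longrightarrow\qquad
    T \;\ge\; \frac{1}{\varepsilon}\int_{0}^{1}
    \frac{\left\lVert \tfrac{dH}{ds}\right\rVert}{g^{2}(s)}\,ds,
    \]
    % where H(s) = (1-s) H_0 + s H_1 and g(s) is the gap between the two
    % lowest eigenvalues of H(s).
    ```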

  2. Seamless atmospheric modeling across the hydrostatic-nonhydrostatic scales - preliminary results using an unstructured-Voronoi mesh for weather prediction.

    NASA Astrophysics Data System (ADS)

    Skamarock, W. C.

    2015-12-01

    One of the major problems in atmospheric model applications is the representation of deep convection within the models; explicit simulation of deep convection on fine meshes performs much better than sub-grid parameterized deep convection on coarse meshes. Unfortunately, the high cost of explicit convective simulation has meant it has only been used to down-scale global simulations in weather prediction and regional climate applications, typically using traditional one-way interactive nesting technology. We have been performing real-time weather forecast tests using a global non-hydrostatic atmospheric model (the Model for Prediction Across Scales, MPAS) that employs a variable-resolution unstructured Voronoi horizontal mesh (nominally hexagons) to span hydrostatic to nonhydrostatic scales. The smoothly varying Voronoi mesh eliminates many downscaling problems encountered using traditional one- or two-way grid nesting. Our test weather forecasts cover two periods - the 2015 Spring Forecast Experiment conducted at the NOAA Storm Prediction Center during the month of May in which we used a 50-3 km mesh, and the PECAN field program examining nocturnal convection over the US during the months of June and July in which we used a 15-3 km mesh. An important aspect of this modeling system is that the model physics be scale-aware, particularly the deep convection parameterization. These MPAS simulations employ the Grell-Freitas scale-aware convection scheme. Our test forecasts show that the scheme produces a gradual transition in the deep convection, from the deep unstable convection being handled entirely by the convection scheme on the coarse mesh regions (dx > 15 km), to the deep convection being almost entirely explicit on the 3 km NA region of the meshes. We will present results illustrating the performance of critical aspects of the MPAS model in these tests.

  3. Following the Ions through a Mass Spectrometer with Atmospheric Pressure Interface: Simulation of Complete Ion Trajectories from Ion Source to Mass Analyzer.

    PubMed

    Zhou, Xiaoyu; Ouyang, Zheng

    2016-07-19

    Ion trajectory simulation is an important and useful tool in instrumentation development for mass spectrometry. Accurate simulation of ion motion through a mass spectrometer with an atmospheric pressure ionization source has been extremely challenging, due to the complexity of the gas hydrodynamic flow field across a wide pressure range as well as the computational burden. In this study, we developed a method of generating the gas flow field for an entire mass spectrometer with an atmospheric pressure interface. In combination with the electric force, this enabled, for the first time, simulation of ion trajectories from an atmospheric pressure ion source to a mass analyzer in vacuum. A stage-by-stage ion repopulation method has also been implemented for the simulation, which avoids an intolerable computational burden in the high-pressure regions while allowing statistically meaningful results to be obtained for the mass analyzer. The method also proved suitable for identifying a joint point at which to combine the high- and low-pressure flow fields solved individually. Experimental characterization has been carried out to validate the new simulation method, with good agreement obtained between simulated and experimental results for ion transfer through an atmospheric pressure interface with a curtain gas.
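    A sketch of the repopulation idea under stated assumptions (the resampling rule and the jitter are hypothetical, not the authors' exact scheme): ions surviving one pressure stage are resampled with statistical weights so that the next stage starts with a sufficiently large population.

    ```python
    import numpy as np

    # Stage-boundary repopulation: clone the surviving ions (with weights) so the
    # next, lower-pressure stage still has enough trajectories for statistics.
    rng = np.random.default_rng(0)
    survivors = rng.normal(size=(37, 6))      # [x, y, z, vx, vy, vz] of surviving ions
    target = 1000                             # population wanted in the next stage
    idx = rng.integers(0, len(survivors), target)
    repop = survivors[idx] + rng.normal(scale=1e-3, size=(target, 6))  # small jitter
    weights = np.full(target, len(survivors) / target)  # keeps totals statistically consistent
    print(repop.shape, weights.sum())         # (1000, 6), total weight = 37 survivors
    ```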

  4. A Dynamic Bayesian Network model for long-term simulation of clinical complications in type 1 diabetes.

    PubMed

    Marini, Simone; Trifoglio, Emanuele; Barbarini, Nicola; Sambo, Francesco; Di Camillo, Barbara; Malovini, Alberto; Manfrini, Marco; Cobelli, Claudio; Bellazzi, Riccardo

    2015-10-01

    The increasing prevalence of diabetes and its related complications is raising the need for effective methods to predict patient evolution and to stratify cohorts in terms of the risk of developing diabetes-related complications. In this paper, we present a novel approach to the simulation of a type 1 diabetes population, based on Dynamic Bayesian Networks, which combines literature knowledge with data mining of a rich longitudinal cohort of type 1 diabetes patients, the DCCT/EDIC study. In particular, our approach simulates the patient health state and complications through discretized variables. Two types of models are presented, one entirely learned from the data and the other partially driven by literature-derived knowledge. The whole cohort is simulated for fifteen years, and the simulation error (i.e., for each variable, the percentage of patients predicted in the wrong state) is calculated every year on independent test data. For each variable, the population predicted in the wrong state is below 10% for both models over time. Furthermore, the distributions of real vs. simulated patients greatly overlap. Thus, the proposed models are viable tools to support decision making in type 1 diabetes. Copyright © 2015 Elsevier Inc. All rights reserved.
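    As a rough illustration of simulating discretized health states year by year, a toy Markov-style sketch follows (hypothetical states and transition probabilities; the paper's DBN structure and conditional probabilities are learned from the DCCT/EDIC data):

    ```python
    import numpy as np

    # Yearly state-transition simulation over discretized complication states.
    states = ["no_complication", "retinopathy", "nephropathy"]
    P = np.array([[0.90, 0.07, 0.03],      # P(next state | current state), rows sum to 1
                  [0.00, 0.85, 0.15],
                  [0.00, 0.00, 1.00]])
    rng = np.random.default_rng(42)
    cohort = np.zeros(10_000, dtype=int)   # everyone starts complication-free
    for year in range(15):                 # simulate the whole cohort for 15 years
        cohort = np.array([rng.choice(3, p=P[s]) for s in cohort])
    print({s: round(float(np.mean(cohort == i)), 3) for i, s in enumerate(states)})
    ```

    Comparing such simulated state distributions against held-out patients, year by year, is how the paper's simulation error is defined.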

  5. Numerical simulation of pseudoelastic shape memory alloys using the large time increment method

    NASA Astrophysics Data System (ADS)

    Gu, Xiaojun; Zhang, Weihong; Zaki, Wael; Moumni, Ziad

    2017-04-01

    The paper presents a numerical implementation of the large time increment (LATIN) method for the simulation of shape memory alloys (SMAs) in the pseudoelastic range. The method was initially proposed as an alternative to the conventional incremental approach for the integration of nonlinear constitutive models. It is adapted here for the simulation of pseudoelastic SMA behavior using the Zaki-Moumni model and is shown to be especially useful in situations where the phase transformation process presents little or no hardening. In these situations, a slight stress variation in a load increment can result in large variations of strain and local state variables, which may lead to difficulties in numerical convergence. In contrast to the conventional incremental method, the LATIN method solves the global equilibrium and local consistency conditions sequentially for the entire loading path. The achieved solution must simultaneously satisfy the conditions of static and kinematic admissibility and consistency after several iterations. The 3D numerical implementation is accomplished using an implicit algorithm and is then used for finite element simulation with the software Abaqus. Computational tests demonstrate the ability of this approach to simulate SMAs presenting flat phase transformation plateaus and subjected to complex loading cases, such as the quasi-static behavior of a stent structure. Some numerical results are contrasted with those obtained using step-by-step incremental integration.
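    As a conceptual illustration of solving the whole loading path at once, the toy below alternates a local (constitutive) stage and a global (equilibrium) stage for two parallel springs, one with a flat force plateau reminiscent of pseudoelasticity. It is only a LATIN-flavoured analogue with a fixed elastic search direction, not the Zaki-Moumni model or the paper's implementation; all parameters are hypothetical.

    ```python
    import numpy as np

    k1, k2, f_plat = 1.0, 5.0, 2.0
    f2 = lambda u: np.clip(k2 * u, -f_plat, f_plat)   # local nonlinear law (plateau)
    F = np.linspace(0.0, 6.0, 200)                    # prescribed load history
    u = np.zeros_like(F)                              # initial guess for the whole path
    for it in range(200):
        f2_hat = f2(u)                                # local stage: all time steps at once
        u_new = (F - f2_hat + k2 * u) / (k1 + k2)     # global stage: linear solve with a
        if np.max(np.abs(u_new - u)) < 1e-10:         # fixed (elastic) search direction
            break
        u = u_new
    print(f"converged in {it} iterations, final u = {u[-1]:.3f}")
    ```

    Note that the iteration converges even on the flat plateau, where an incremental Newton scheme would struggle for lack of hardening.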

  6. iTesla Power Systems Library (iPSL): A Modelica library for phasor time-domain simulations

    NASA Astrophysics Data System (ADS)

    Vanfretti, L.; Rabuzin, T.; Baudette, M.; Murad, M.

    The iTesla Power Systems Library (iPSL) is a Modelica package providing a set of power system components for phasor time-domain modeling and simulation. The Modelica language provides a systematic approach to developing models using a formal mathematical description that uniquely specifies the physical behavior of a component or the entire system. Furthermore, the standardized specification of the Modelica language (Modelica Association [1]) enables unambiguous model exchange by allowing any Modelica-compliant tool to utilize the models for simulation and analysis without the need for a specific model transformation tool. As the Modelica language is developed with open specifications, any tool that implements these requirements can be utilized, giving users the freedom to choose the Integrated Development Environment (IDE) they prefer. Furthermore, any integration solver can be implemented within a Modelica tool to simulate Modelica models. Additionally, Modelica is an object-oriented language, enabling code factorization and model re-use, which improves the readability of a library structured with an object-oriented hierarchy. The developed library is released under an open source license to enable wider distribution and to let users customize it to their specific needs. This paper describes the iPSL and provides illustrative application examples.

  7. Ground Contact Model for Mars Science Laboratory Mission Simulations

    NASA Technical Reports Server (NTRS)

    Raiszadeh, Behzad; Way, David

    2012-01-01

    The Program to Optimize Simulated Trajectories II (POST 2) has been successful in simulating the flight of launch vehicles and entry bodies on Earth and other planets. POST 2 has been the primary simulation tool for the Entry, Descent, and Landing (EDL) phase of numerous Mars lander missions, such as Mars Pathfinder in 1997, the twin Mars Exploration Rovers (MER-A and MER-B) in 2004, and the Mars Phoenix lander in 2007, and it is now the main trajectory simulation tool for the Mars Science Laboratory (MSL) in 2012. In all previous missions, the POST 2 simulation ended before ground impact, and a tool other than POST 2 simulated landing dynamics. It would be ideal for one tool to simulate the entire EDL sequence, thus avoiding errors that could be introduced by handing off position, velocity, or other flight parameters from one simulation to the other. The desire to have one continuous end-to-end simulation was the motivation for developing the ground interaction model in POST 2. Rover landing, including the detection of the post-landing state, is a very critical part of the MSL mission, as the EDL landing sequence continues for a few seconds after landing. The method explained in this paper illustrates how a simple ground force interaction model has been added to POST 2, which allows simulation of the entire EDL from atmospheric entry through touchdown.
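    A minimal sketch of a penalty-type ground reaction of the kind such models use (hypothetical stiffness, damping, and lander mass; POST 2's actual model and touchdown-detection logic are more elaborate):

    ```python
    # While the body is below the ground plane, a spring-damper force opposes
    # penetration; the contact force is clipped at zero so it never "pulls".
    k, c, m, g = 2.0e6, 6.0e4, 900.0, 3.71     # stiffness, damping, mass, Mars gravity
    z, vz, dt = 2.0, -10.0, 1e-4               # initial altitude [m] and speed [m/s]
    for _ in range(100_000):                   # 10 s of explicit Euler integration
        pen = max(0.0, -z)                     # penetration depth below the surface
        f_ground = max(0.0, k * pen - c * vz) if pen > 0.0 else 0.0
        vz += (f_ground / m - g) * dt
        z += vz * dt
    print(f"settled at z = {z:.4f} m, vz = {vz:.2e} m/s")   # rests at pen = m*g/k
    ```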

  8. Mono and multi-objective optimization techniques applied to a large range of industrial test cases using Metamodel assisted Evolutionary Algorithms

    NASA Astrophysics Data System (ADS)

    Fourment, Lionel; Ducloux, Richard; Marie, Stéphane; Ejday, Mohsen; Monnereau, Dominique; Massé, Thomas; Montmitonnet, Pierre

    2010-06-01

    The use of numerical simulation in material processing allows a trial-and-error strategy to improve virtual processes without incurring material costs or interrupting production, and therefore saves a great deal of money; however, it requires user time to analyze the results, adjust the operating conditions and restart the simulation. Automatic optimization is the perfect complement to simulation. Evolutionary Algorithms coupled with metamodelling make it possible to obtain industrially relevant results on a very large range of applications within a few tens of simulations and without any specific knowledge of automatic optimization techniques. Ten industrial partners were selected to cover the different areas of the mechanical forging industry and to provide different examples of forming simulation tools. The large computational time is handled by a metamodel approach, which interpolates the objective function over the entire parameter space while knowing the exact function values only at a reduced number of "master points". Two algorithms are used: an evolution strategy combined with a Kriging metamodel, and a genetic algorithm combined with a Meshless Finite Difference Method. The latter approach is extended to multi-objective optimization; the set of solutions corresponding to the best possible compromises between the different objectives is then computed in the same way. The population-based approach exploits the parallel capabilities of the available computer with high efficiency. An optimization module, fully embedded within the Forge2009 IHM, makes it possible to cover all the defined examples, and the use of new multi-core hardware to compute several simulations at the same time reduces the required time dramatically. The presented examples demonstrate the method's versatility. They include billet shape optimization of a common rail, the cogging of a bar and a wire drawing problem.
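    A sketch of the metamodel-assisted loop under stated assumptions (a Gaussian process stands in for the Kriging metamodel, random candidates stand in for the EA proposals, and the objective is a toy function, not a forging simulation):

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    def expensive_simulation(x):                 # stand-in for a forming simulation
        return (x - 0.3) ** 2 + 0.1 * np.sin(20 * x)

    rng = np.random.default_rng(1)
    X = rng.uniform(0, 1, 6).reshape(-1, 1)      # initial "master points"
    y = expensive_simulation(X).ravel()
    for _ in range(20):                          # a few tens of simulations in total
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
        cand = rng.uniform(0, 1, 256).reshape(-1, 1)        # an EA would propose these
        mu, sd = gp.predict(cand, return_std=True)
        x_new = cand[np.argmin(mu - sd)]         # optimistic (exploring) criterion
        X = np.vstack([X, [x_new]])
        y = np.append(y, expensive_simulation(x_new))
    print(f"best x = {X[np.argmin(y)].item():.3f}, f = {y.min():.4f}")
    ```

    The surrogate is evaluated thousands of times per iteration, but the expensive simulation only once, which is what keeps the total budget to tens of runs.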

  9. High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

    DOE PAGES

    Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

    2014-07-28

    The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. The authors present Synergia's design principles and its performance on HPC platforms.

  10. LightForce Photon-Pressure Collision Avoidance: Updated Efficiency Analysis Utilizing a Highly Parallel Simulation Approach

    NASA Technical Reports Server (NTRS)

    Stupl, Jan; Faber, Nicolas; Foster, Cyrus; Yang, Fan Yang; Nelson, Bron; Aziz, Jonathan; Nuttall, Andrew; Henze, Chris; Levit, Creon

    2014-01-01

    This paper provides an updated efficiency analysis of the LightForce space debris collision avoidance scheme. LightForce aims to prevent collisions on warning by utilizing photon pressure from ground based, commercial off the shelf lasers. Past research has shown that a few ground-based systems consisting of 10 kilowatt class lasers directed by 1.5 meter telescopes with adaptive optics could lower the expected number of collisions in Low Earth Orbit (LEO) by an order of magnitude. Our simulation approach utilizes the entire Two Line Element (TLE) catalogue in LEO for a given day as initial input. Least-squares fitting of a TLE time series is used for an improved orbit estimate. We then calculate the probability of collision for all LEO objects in the catalogue for a time step of the simulation. The conjunctions that exceed a threshold probability of collision are then engaged by a simulated network of laser ground stations. After those engagements, the perturbed orbits are used to re-assess the probability of collision and evaluate the efficiency of the system. This paper describes new simulations with three updated aspects: 1) By utilizing a highly parallel simulation approach employing hundreds of processors, we have extended our analysis to a much broader dataset. The simulation time is extended to one year. 2) We analyze not only the efficiency of LightForce on conjunctions that naturally occur, but also take into account conjunctions caused by orbit perturbations due to LightForce engagements. 3) We use a new simulation approach that is regularly updating the LightForce engagement strategy, as it would be during actual operations. In this paper we present our simulation approach to parallelize the efficiency analysis, its computational performance and the resulting expected efficiency of the LightForce collision avoidance system. Results indicate that utilizing a network of four LightForce stations with 20 kilowatt lasers, 85% of all conjunctions with a probability of collision Pc > 10⁻⁶ can be mitigated.

  11. Supercomputer Exposes Enzyme's Secrets | News | NREL

    Science.gov Websites

    simulation of an enzyme from the fungus Trichoderma reesei (Cel7A). The simulation showed that a part of the enzyme, the linker, may play a necessary role in breaking down biomass into sugars, and was notable for "being a microsecond-simulation of the entire enzyme on the surface of cellulose," Beckham said.

  12. Modelling debris and shrapnel generation in inertial confinement fusion experiments

    DOE PAGES

    Eder, D. C.; Fisher, A. C.; Koniges, A. E.; ...

    2013-10-24

    Modelling and mitigation of damage are crucial for safe and economical operation of high-power laser facilities. Experiments at the National Ignition Facility use a variety of targets with a range of laser energies spanning more than two orders of magnitude (~14 kJ to ~1.9 MJ). Low-energy inertial confinement fusion experiments are used to study early-time x-ray load symmetry on the capsule, shock timing, and other physics issues. For these experiments, a significant portion of the target is not completely vaporized, and late-time (hundreds of ns) simulations are required to study the generation of debris and shrapnel from these targets. Damage to optics and diagnostics from shrapnel is a major concern for low-energy experiments. Here, we provide the first full-target simulations of entire cryogenic targets, including the Al thermal mechanical package and Si cooling rings. We use a 3D multi-physics multi-material hydrodynamics code, ALE-AMR, for these late-time simulations. The mass, velocity, and spatial distribution of shrapnel are calculated for three experiments with laser energies ranging from 14 to 250 kJ, and we calculate the damage risk to optics and diagnostics for these three experiments. For the lowest energy re-emit experiment, we provide a detailed analysis of the effects of shrapnel impacts on optics and diagnostics and compare with observations of damage sites.

  13. OpenRBC: Redefining the Frontier of Red Blood Cell Simulations at Protein Resolution

    NASA Astrophysics Data System (ADS)

    Tang, Yu-Hang; Lu, Lu; Li, He; Grinberg, Leopold; Sachdeva, Vipin; Evangelinos, Constantinos; Karniadakis, George

    We present a from-scratch development of OpenRBC, a coarse-grained molecular dynamics code capable of performing an unprecedented in silico experiment - simulating an entire mammalian red blood cell lipid bilayer and cytoskeleton, modeled by 4 million mesoscopic particles, on a single shared-memory node. To achieve this, we invented an adaptive spatial searching algorithm to accelerate the computation of short-range pairwise interactions in an extremely sparse 3D space. The algorithm is based on a Voronoi partitioning of the point cloud of coarse-grained particles, and is continuously updated over the course of the simulation. The algorithm enables the construction of a lattice-free cell list, i.e. the key spatial searching data structure in our code, in O(N) time and space, with cells whose position and shape adapt automatically to the local density and curvature. The code implements NUMA/NUCA-aware OpenMP parallelization and achieves perfect scaling with up to hundreds of hardware threads. The code outperforms a legacy solver by more than 8 times in time-to-solution and more than 20 times in problem size, thus providing a new venue for probing the cytomechanics of red blood cells. This work was supported by the Department of Energy (DOE) Collaboratory on Mathematics for Mesoscopic Modeling of Materials (CM4). YHT acknowledges partial financial support from an IBM Ph.D. Scholarship Award.
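    The lattice-free, Voronoi-based cell list is specific to OpenRBC; as a rough stand-in for the same task (short-range pair search over a sparse shell of particles), a tree-based query also avoids wasting cells on the empty space around a membrane. A sketch with hypothetical particle counts and cutoff:

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(0)
    n = 50_000
    theta, phi = rng.uniform(0, np.pi, n), rng.uniform(0, 2 * np.pi, n)
    pts = np.c_[np.sin(theta) * np.cos(phi),          # particles on a spherical shell,
                np.sin(theta) * np.sin(phi),          # mimicking a cell membrane in a
                np.cos(theta)] * 100.0                # mostly empty 3D box
    tree = cKDTree(pts)
    pairs = tree.query_pairs(r=2.5, output_type="ndarray")  # all pairs within cutoff
    print(f"{len(pairs)} interacting pairs among {n} particles")
    ```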

  14. Explicit reference governor for linear systems

    NASA Astrophysics Data System (ADS)

    Garone, Emanuele; Nicotra, Marco; Ntogramatzidis, Lorenzo

    2018-06-01

    The explicit reference governor is a constrained control scheme that was originally introduced for generic nonlinear systems. This paper presents two explicit reference governor strategies that are specifically tailored to the constrained control of linear time-invariant systems subject to linear constraints. Both strategies are based on the idea of maintaining the system states within an invariant set that is entirely contained in the constraints. This invariant set can be constructed by exploiting either the Lyapunov inequality or modal decomposition. To improve performance, we show that the two strategies can be combined by choosing, at each time instant, the least restrictive set. Numerical simulations illustrate that the proposed scheme achieves performance comparable to that of optimisation-based reference governors.
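    A minimal sketch of the Lyapunov-level-set variant, under assumed dynamics (a prestabilized double integrator with a single linear constraint; gains and values are hypothetical): the applied reference v advances toward the desired reference r at a rate proportional to the margin between the current Lyapunov level and the largest invariant level set contained in the constraints.

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    A = np.array([[0.0, 1.0], [-1.0, -1.2]])       # closed loop: x1 -> v at equilibrium
    B = np.array([0.0, 1.0])
    C = np.array([1.0, 0.0])                       # constraint: x1 <= d
    d = 0.8
    P = solve_continuous_lyapunov(A.T, -np.eye(2)) # A'P + PA = -I, so V = e'Pe decays

    def gamma(v):                                  # largest safe level set around x_v
        margin = d - v                             # equilibrium is x_v = [v, 0]
        return margin ** 2 / float(C @ np.linalg.inv(P) @ C)

    x, v, r, dt, kappa = np.zeros(2), 0.0, 1.0, 1e-3, 5.0
    for _ in range(20_000):
        e = x - np.array([v, 0.0])
        V = float(e @ P @ e)
        v += dt * kappa * max(gamma(v) - V, 0.0) * np.sign(r - v)  # governor update
        x += dt * (A @ x + B * v)                                  # plant update
    print(f"v -> {v:.3f}, x1 -> {x[0]:.3f} (constraint x1 <= {d})")
    ```

    With r = 1 beyond the constraint, v stalls at the constraint-compatible value 0.8, so the state never violates x1 <= d.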

  15. Time of Flight Electrochemistry: Diffusion Coefficient Measurements Using Interdigitated Array (IDA) Electrodes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Fei; Kolesov, Grigory; Parkinson, Bruce A.

    2014-09-26

    A simple and straightforward method for measuring diffusion coefficients using interdigitated array (IDA) electrodes is reported. The method does not require that the exact electrode area be known; it depends only on the size of the gap between the IDA electrode pairs. Electroactive molecules produced at the generator electrode of the IDA by a voltage step or scan can diffuse to the collector electrode, and the time delay before the current for the reverse electrochemical reaction is detected at the collector is used to calculate the diffusion coefficient. The measurement of the diffusion rate of Ru(NH3)6(2+) in aqueous solution is used as an example of measuring diffusion coefficients with this method. Additionally, a digital simulation of the electrochemical response of the IDA electrodes was used to simulate the entire current/voltage/time behavior of the system and verify the experimentally measured diffusion coefficients. This work was supported as part of the Center for Molecular Electrocatalysis, an Energy Frontier Research Center funded by the Department of Energy, Office of Science, Office of Basic Energy Sciences.
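    For orientation, a back-of-the-envelope version of the time-of-flight relation (the paper extracts D by matching the full current transient with digital simulation; the numbers below are hypothetical):

    ```python
    # For 1D diffusion, the characteristic transit time across a gap w scales as
    #   t ~ w**2 / (2 * D)   =>   D ~ w**2 / (2 * t).
    w = 5e-6        # generator-collector gap of the IDA [m]
    t = 2.0e-2      # measured delay before collector current onset [s]
    D = w ** 2 / (2.0 * t)
    print(f"D ~ {D:.2e} m^2/s")   # ~6e-10 m^2/s, typical for small ions in water
    ```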

  16. Simulations of Solar Wind Turbulence

    NASA Technical Reports Server (NTRS)

    Goldstein, Melvyn L.; Usmanov, A. V.; Roberts, D. A.

    2008-01-01

    Recently we have restructured our approach to simulating magnetohydrodynamic (MHD) turbulence in the solar wind. Previously, we had defined a 'virtual' heliosphere that contained, for example, a tilted rotating current sheet, microstreams, and quasi-two-dimensional fluctuations as well as Alfven waves. In this new version of the code, we use the global, time-stationary, WKB Alfven-wave-driven solar wind model developed by Usmanov and described in Usmanov and Goldstein [2003] to define the initial state of the system. Consequently, current sheets and fast and slow streams are computed self-consistently from an inner, photospheric, boundary. To this steady-state configuration, we add fluctuations close to, but above, the surface where the flow becomes super-Alfvenic. The time-dependent MHD equations are then solved using a semi-discrete third-order Central Weighted Essentially Non-Oscillatory (CWENO) numerical scheme. The computational domain now includes the entire sphere; the geometrical singularity at the poles is removed using the multiple-grid approach described in Usmanov [1996]. Wave packets are introduced at the inner boundary so as to satisfy Faraday's Law [Yeh and Dryer, 1985], and their nonlinear evolution is followed in time.

  17. SiMon: Simulation Monitor for Computational Astrophysics

    NASA Astrophysics Data System (ADS)

    Xuran Qian, Penny; Cai, Maxwell Xu; Portegies Zwart, Simon; Zhu, Ming

    2017-09-01

    Scientific discovery via numerical simulations is important in modern astrophysics. This relatively new branch of astrophysics has become possible due to the development of reliable numerical algorithms and the high performance of modern computing technologies. These enable the analysis of large collections of observational data and the acquisition of new data via simulations at unprecedented accuracy and resolution. Ideally, simulations run until they reach some pre-determined termination condition, but often other factors cause extensive numerical approaches to break down at an earlier stage, with processes interrupted by unexpected events in the software or the hardware. In those cases, the scientist handles the interrupt manually, which is time-consuming and prone to errors. We present the Simulation Monitor (SiMon) to automate the farming of large and extensive simulation processes. Our method is light-weight, fully automates the entire workflow management, operates concurrently across multiple platforms and can be installed in user space. Inspired by the process of crop farming, we perceive each simulation as a crop in the field, and running a simulation becomes analogous to growing crops. With the development of SiMon we relax the technical aspects of simulation management. The initial package was developed for extensive parameter searches in numerical simulations, but it turns out to work equally well for automating the computational processing and reduction of observational data.
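    A minimal sketch of the crop-farming idea, assuming hypothetical job commands (SiMon itself adds scheduling policies, concurrency control, and richer restart logic):

    ```python
    import subprocess, time

    # Keep a set of simulation jobs alive: restart crashed runs, retire clean ones.
    cmd = {"run_A": ["python", "sim.py", "--config", "A"],   # assumed job commands
           "run_B": ["python", "sim.py", "--config", "B"]}
    jobs = {name: None for name in cmd}
    while jobs:
        for name in list(jobs):
            proc = jobs[name]
            if proc is not None and proc.poll() == 0:        # finished cleanly
                print(f"[watchdog] {name} completed")
                del jobs[name]
            elif proc is None or proc.poll() is not None:    # not started, or crashed
                print(f"[watchdog] (re)starting {name}")
                jobs[name] = subprocess.Popen(cmd[name])
        time.sleep(30)                                       # poll every 30 s
    ```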

  18. Hydrologic modeling of two glaciated watersheds in Northeast Pennsylvania

    USGS Publications Warehouse

    Srinivasan, M.S.; Hamlett, J.M.; Day, R.L.; Sams, J.I.; Petersen, G.W.

    1998-01-01

    A hydrologic modeling study, using the Hydrologic Simulation Program - FORTRAN (HSPF), was conducted in two glaciated watersheds, Purdy Creek and Ariel Creek, in northeastern Pennsylvania. Both watersheds have wetlands and poorly drained soils due to low hydraulic conductivity and the presence of fragipans. The HSPF model was calibrated in the Purdy Creek watershed and verified in the Ariel Creek watershed for the June 1992 to December 1993 period. In Purdy Creek, the total volume of observed streamflow during the entire simulation period was 13.36 × 10⁶ m³ and the simulated streamflow volume was 13.82 × 10⁶ m³ (5 percent difference). For the verification simulation in Ariel Creek, the difference between the total observed and simulated flow volumes was 17 percent. Simulated peak flow discharges were within two hours of the observed for 30 of 46 peak flow events (discharge greater than 0.1 m³/s) in Purdy Creek and 27 of 53 events in Ariel Creek. For 22 of the 46 events in Purdy Creek and 24 of 53 in Ariel Creek, the differences between the observed and simulated peak discharge rates were less than 30 percent. These 22 events accounted for 63 percent of the total volume of streamflow observed during the selected 46 peak flow events in Purdy Creek. In Ariel Creek, these 24 peak flow events accounted for 62 percent of the total flow observed during all peak flow events. Differences in observed and simulated peak flow rates and volumes (on a percent basis) were greater during snowmelt runoff events and summer periods than at other times.

  19. A study with ESI PAM-STAMP® on the influence of tool deformation on final part quality during a forming process

    NASA Astrophysics Data System (ADS)

    Vrolijk, Mark; Ogawa, Takayuki; Camanho, Arthur; Biasutti, Manfredi; Lorenz, David

    2018-05-01

    As a result of the ever-increasing demand to produce lighter vehicles, more and more advanced high-strength materials are used in the automotive industry. Focusing on sheet metal cold forming processes, these materials require high pressing forces and exhibit large springback after forming. Due to the high pressing forces, deformations occur in the tooling geometry, introducing dimensional inaccuracies in the blank and potentially affecting the final springback behavior. As a result, the tool deformations can have an impact on the final assembly or introduce cosmetic defects. Often several iterations are required in try-out to obtain the required tolerances, with costs going up to as much as 30% of the entire product development cost. To investigate sheet metal part feasibility and quality, CAE tools are widely used in the automotive industry. However, in current practice the influence of the tool deformations on the final part quality is generally neglected, and simulations are carried out with rigid tools to avoid drastically increased calculation times. If the tool deformation is analyzed through simulation, it is normally done at the end of the drawing process, when contact conditions are mapped onto the die structure and a static analysis is performed to check the deflections of the tool. But this method does not predict the influence of these deflections on the final quality of the part. In order to take tool deformations into account during drawing simulations, ESI has developed the ability to couple solvers efficiently, in such a way that tool deformations can be included in the drawing simulation in real time without a large increase in simulation time compared to simulations with rigid tools. In this paper a study is presented which demonstrates the effect of tool deformations on the final part quality.

  20. Opticks : GPU Optical Photon Simulation for Particle Physics using NVIDIA® OptiX™

    NASA Astrophysics Data System (ADS)

    Blyth, Simon C.

    2017-10-01

    Opticks is an open source project that integrates the NVIDIA OptiX GPU ray tracing engine with Geant4 toolkit based simulations. Massive parallelism brings drastic performance improvements, with optical photon simulation speedup expected to exceed 1000 times Geant4 when using workstation GPUs. Optical photon simulation time becomes effectively zero compared to the rest of the simulation. Optical photons from scintillation and Cherenkov processes are allocated, generated and propagated entirely on the GPU, minimizing transfer overheads and allowing CPU memory usage to be restricted to optical photons that hit photomultiplier tubes or other photon detectors. Collecting hits into standard Geant4 hit collections then allows the rest of the simulation chain to proceed unmodified. Optical physics processes of scattering, absorption, scintillator reemission and boundary processes are implemented in CUDA OptiX programs based on the Geant4 implementations. Wavelength dependent material and surface properties as well as inverse cumulative distribution functions for reemission are interleaved into GPU textures providing fast interpolated property lookup or wavelength generation. Geometry is provided to OptiX in the form of CUDA programs that return bounding boxes for each primitive and ray geometry intersection positions. Some critical parts of the geometry such as photomultiplier tubes have been implemented analytically, with the remainder being tessellated. OptiX handles the creation and application of a choice of acceleration structures, such as bounding volume hierarchies, and the transparent use of multiple GPUs. OptiX supports interoperation with OpenGL and CUDA Thrust, which has enabled unprecedented visualisations of photon propagations to be developed, using OpenGL geometry shaders to provide interactive time scrubbing and CUDA Thrust photon indexing to enable interactive history selection.
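    The inverse-CDF wavelength generation can be sketched on the CPU as follows (a NumPy stand-in for the GPU texture lookup; the Gaussian reemission spectrum below is made up, not a real scintillator's):

    ```python
    import numpy as np

    wl = np.linspace(300.0, 600.0, 1024)                    # wavelength grid [nm]
    spectrum = np.exp(-0.5 * ((wl - 430.0) / 25.0) ** 2)    # reemission intensity
    cdf = np.cumsum(spectrum); cdf /= cdf[-1]               # normalized CDF
    u = np.random.default_rng(7).uniform(size=1_000_000)
    samples = np.interp(u, cdf, wl)   # inverse-CDF: uniform deviates -> wavelengths
    print(f"mean sampled wavelength: {samples.mean():.1f} nm")
    ```

    On the GPU the interpolated lookup is done by the texture hardware, which is why wavelength generation is essentially free per photon.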

  1. Analysis of capacity and traffic operations impacts of the World Trade Bridge in Laredo

    DOT National Transportation Integrated Search

    2001-07-01

    Project 0-1800 pioneered the use of modern micro-simulation software to analyze the complex procedures involved in international border crossings. The animated models simulate the entire southbound commercial traffic flow, starting with U.S. Customs ...

  2. Simulation of Deep Water Renewal in Crater Lake, Oregon, USA under Current and Future Climate Conditions

    NASA Astrophysics Data System (ADS)

    Piccolroaz, S.; Wood, T. M.; Wherry, S.; Girdner, S.

    2015-12-01

    We applied a 1-dimensional lake model, developed to simulate deep mixing related to thermobaric instabilities in temperate lakes, to Crater Lake, a 590-m-deep caldera lake in Oregon's Cascade Range known for its stunning deep blue color and extremely clear water, in order to determine the frequency of deep water renewal under future climate conditions. The lake model was calibrated with 6 years of water temperature profiles, and then simulated 10 years of validation data with an RMSE ranging from 0.81°C at 50 m depth to 0.04°C at 350-460 m depth. The simulated time series of heat content in the deep lake accurately captured extreme years characterized by weak and strong deep water renewal. The lake model uses wind speed and lake surface temperature (LST) as boundary conditions. LST projections under six climate scenarios from the CMIP5 intermodel comparison project (2 representative concentration pathways × 3 general circulation models) were evaluated with air2water, a simple lumped model that only requires daily values of downscaled air temperature. air2water was calibrated with data from 1993-2011, resulting in an RMSE between simulated and observed daily LST values of 0.68°C. All future climate scenarios project increased water temperature throughout the water column and a substantive reduction in the frequency of deep water renewal events. The least extreme scenario (CNRM-CM5, RCP4.5) projects the frequency of deep water renewal events to decrease from about 1 in 2 years at present to about 1 in 3 years by 2100. The most extreme scenario (HadGEM2-ES, RCP8.5) projects the frequency of deep water renewal events to be less than 1 in 7 years by 2100, with lake surface temperatures never cooling to less than 4°C after 2050. In all RCP4.5 simulations the temperature of the entire water column is greater than 4°C for increasing periods of time. In the RCP8.5 simulations, the temperature of the entire water column is greater than 4°C year round by the year 2060 (HadGEM2) or 2080 (CNRM-CM5); thus, the conditions required for thermobaric-instability-induced mixing become rare or non-existent in these projections. The results indicate that the frequency of deep water renewal events could change substantially in a warmer future climate, potentially altering the lake ecosystem and water clarity.
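    As a rough illustration of the lumped-model idea, a minimal relaxation-type sketch follows (the real air2water model has several calibrated parameters and explicit seasonal terms; the numbers here are hypothetical):

    ```python
    import numpy as np

    # Lake surface temperature relaxes toward air temperature with a finite time
    # constant, which damps and delays the seasonal cycle.
    days = np.arange(365)
    Ta = 5.0 + 12.0 * np.sin(2 * np.pi * (days - 110) / 365)   # synthetic air temp [degC]
    tau = 25.0                                                 # relaxation time [days]
    Tw = np.empty_like(Ta); Tw[0] = 4.0
    for i in range(1, len(days)):                              # daily explicit step
        Tw[i] = Tw[i - 1] + (Ta[i] - Tw[i - 1]) / tau
    print(f"max LST = {Tw.max():.1f} degC, min LST = {Tw.min():.1f} degC")
    ```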

  3. Developing a workstation-based, real-time simulation for rapid handling qualities evaluations during design

    NASA Technical Reports Server (NTRS)

    Anderson, Frederick; Biezad, Daniel J.

    1994-01-01

    This paper describes the Rapid Aircraft DynamIcs AssessmeNt (RADIAN) project - an integration of the Aircraft SYNThesis (ACSYNT) design code with the USAF DATCOM code that estimates stability derivatives. Both of these codes are available to universities. These programs are linked to flight simulation and flight controller synthesis tools, and the resulting design is evaluated on a graphics workstation. The entire process reduces the preliminary design time by an order of magnitude and provides an initial handling qualities evaluation of the design coupled to a control law. The integrated design process is applicable both to conventional aircraft taken from current textbooks and to unconventional designs emphasizing agility and propulsive control of attitude. The interactive and concurrent nature of the design process has been well received by industry and by design engineers at NASA. The process is being implemented into the design curriculum and is being used by students, who view it as a significant advance over prior methods.

  4. Guidance, Navigation and Control (GN and C) Design Overview and Flight Test Results from NASA's Max Launch Abort System (MLAS)

    NASA Technical Reports Server (NTRS)

    Dennehy, Cornelius J.; Lanzi, Raymond J.; Ward, Philip R.

    2010-01-01

    The National Aeronautics and Space Administration Engineering and Safety Center designed, developed and flew the alternative Max Launch Abort System (MLAS) as risk mitigation for the baseline Orion spacecraft launch abort system already in development. The NESC was tasked both with formulating a conceptual objective system design of this alternative MLAS and with demonstrating the concept in a simulated pad abort flight test. Less than 2 years after project start, the MLAS simulated pad abort flight test was successfully conducted from Wallops Island on July 8, 2009. The entire flight test lasted 88 seconds, during which multiple staging events were performed and nine separate, critically timed parachute deployments occurred as scheduled. This paper provides an overview of the guidance, navigation and control technical approaches employed on this rapid prototyping activity; describes the methodology used to design the MLAS flight test vehicle; and summarizes lessons learned during this rapid prototyping project.

  5. Data Intensive Analysis of Biomolecular Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Straatsma, TP; Soares, Thereza A.

    2007-12-01

    The advances in biomolecular modeling and simulation made possible by the availability of increasingly powerful high performance computing resources are extending molecular simulations to biologically more relevant system sizes and time scales. At the same time, advances in simulation methodologies are allowing more complex processes to be described more accurately. These developments make a systems approach to computational structural biology feasible, but this will require a focused emphasis on the comparative analysis of the increasing number of molecular simulations that are being carried out for biomolecular systems with more realistic models, multi-component environments, and longer simulation times. Just as in the case of the analysis of the large data sources created by the new high-throughput experimental technologies, biomolecular computer simulations contribute to progress in biology through comparative analysis. The continuing increase in available protein structures allows the comparative analysis of the role of structure and conformational flexibility in protein function, and is the foundation of the discipline of structural bioinformatics. This creates the opportunity to derive general findings from the comparative analysis of molecular dynamics simulations of a wide range of proteins, protein-protein complexes and other complex biological systems. Because of the importance of protein conformational dynamics for protein function, it is essential that the analysis of molecular trajectories be carried out using a novel, more integrative and systematic approach. We are developing a much-needed rigorous computer-science-based framework for the efficient analysis of the increasingly large data sets resulting from molecular simulations. Such a suite of capabilities will also provide the required tools for access to, and analysis of, a distributed library of generated trajectories. Our research focuses on the following areas: (1) the development of an efficient analysis framework for very large scale trajectories on massively parallel architectures, (2) the development of novel methodologies that allow automated detection of events in these very large data sets, and (3) the efficient comparative analysis of multiple trajectories. The goal of the presented work is the development of new algorithms that will allow biomolecular simulation studies to become an integral tool for addressing the challenges of post-genomic biological research. The strategy to deliver the required data intensive computing applications that can effectively deal with the volume of simulation data that will become available is based on taking advantage of the capabilities offered by large globally addressable memory architectures. The first requirement is the design of a flexible underlying data structure for single large trajectories that will form an adaptable framework for a wide range of analysis capabilities. The typical approach to trajectory analysis is to process trajectories sequentially, time frame by time frame. This is the implementation found in molecular simulation codes such as NWChem, and it has been designed in this way to be able to run on workstation computers and other architectures whose aggregate memory would not allow entire trajectories to be held in core. The consequence of this approach is an I/O-dominated solution that scales very poorly on parallel machines.
    We are currently developing tools specifically intended for use on large-scale machines with sufficient main memory that entire trajectories can be held in core. This greatly reduces the cost of I/O, as trajectories are read only once during the analysis. In our current Data Intensive Analysis (DIANA) implementation, each processor determines and skips to its entry within the trajectory, which typically will be available in multiple files, and independently from all other processors reads the appropriate frames.
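    A sketch of the each-worker-reads-its-own-frames idea (illustrative only: DIANA distributes frames across processors of a large shared-memory machine, whereas this toy uses a process pool and a synthetic single-file trajectory):

    ```python
    from multiprocessing import Pool
    import numpy as np

    N_FRAMES, N_ATOMS, N_WORKERS = 1000, 64, 4

    def analyze_chunk(rank):
        # Each worker computes its own frame range and reads only those frames,
        # so the trajectory is traversed exactly once across all workers.
        frames = np.load("traj.npy", mmap_mode="r")        # (N_FRAMES, N_ATOMS, 3)
        lo = rank * N_FRAMES // N_WORKERS
        hi = (rank + 1) * N_FRAMES // N_WORKERS
        return frames[lo:hi].mean(axis=(1, 2))             # toy per-frame observable

    if __name__ == "__main__":
        np.save("traj.npy", np.random.rand(N_FRAMES, N_ATOMS, 3))  # synthetic data
        with Pool(N_WORKERS) as pool:
            per_frame = np.concatenate(pool.map(analyze_chunk, range(N_WORKERS)))
        print(per_frame.shape)   # (1000,) -- one value per frame, computed in parallel
    ```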

  6. Data Intensive Analysis of Biomolecular Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Straatsma, TP

    2008-03-01

    The advances in biomolecular modeling and simulation made possible by the availability of increasingly powerful high performance computing resources are extending molecular simulations to biologically more relevant system sizes and time scales. At the same time, advances in simulation methodologies are allowing more complex processes to be described more accurately. These developments make a systems approach to computational structural biology feasible, but this will require a focused emphasis on the comparative analysis of the increasing number of molecular simulations that are being carried out for biomolecular systems with more realistic models, multi-component environments, and longer simulation times. Just as in the case of the analysis of the large data sources created by the new high-throughput experimental technologies, biomolecular computer simulations contribute to progress in biology through comparative analysis. The continuing increase in available protein structures allows the comparative analysis of the role of structure and conformational flexibility in protein function, and is the foundation of the discipline of structural bioinformatics. This creates the opportunity to derive general findings from the comparative analysis of molecular dynamics simulations of a wide range of proteins, protein-protein complexes and other complex biological systems. Because of the importance of protein conformational dynamics for protein function, it is essential that the analysis of molecular trajectories be carried out using a novel, more integrative and systematic approach. We are developing a much-needed rigorous computer-science-based framework for the efficient analysis of the increasingly large data sets resulting from molecular simulations. Such a suite of capabilities will also provide the required tools for access to, and analysis of, a distributed library of generated trajectories. Our research focuses on the following areas: (1) the development of an efficient analysis framework for very large scale trajectories on massively parallel architectures, (2) the development of novel methodologies that allow automated detection of events in these very large data sets, and (3) the efficient comparative analysis of multiple trajectories. The goal of the presented work is the development of new algorithms that will allow biomolecular simulation studies to become an integral tool for addressing the challenges of post-genomic biological research. The strategy to deliver the required data intensive computing applications that can effectively deal with the volume of simulation data that will become available is based on taking advantage of the capabilities offered by large globally addressable memory architectures. The first requirement is the design of a flexible underlying data structure for single large trajectories that will form an adaptable framework for a wide range of analysis capabilities. The typical approach to trajectory analysis is to process trajectories sequentially, time frame by time frame. This is the implementation found in molecular simulation codes such as NWChem, and it has been designed in this way to be able to run on workstation computers and other architectures whose aggregate memory would not allow entire trajectories to be held in core. The consequence of this approach is an I/O-dominated solution that scales very poorly on parallel machines.
    We are currently developing tools specifically intended for use on large-scale machines with sufficient main memory that entire trajectories can be held in core. This greatly reduces the cost of I/O, as trajectories are read only once during the analysis. In our current Data Intensive Analysis (DIANA) implementation, each processor determines and skips to its entry within the trajectory, which typically will be available in multiple files, and independently from all other processors reads the appropriate frames.

  7. Simulation-trained junior residents perform better than general surgeons on advanced laparoscopic cases.

    PubMed

    Boza, Camilo; León, Felipe; Buckel, Erwin; Riquelme, Arnoldo; Crovari, Fernando; Martínez, Jorge; Aggarwal, Rajesh; Grantcharov, Teodor; Jarufe, Nicolás; Varas, Julián

    2017-01-01

    Multiple simulation training programs have demonstrated that effective transfer of skills can be attained and applied in more complex scenarios, but evidence regarding transfer to the operating room is limited. The aim was to assess junior residents trained with simulation performing an advanced laparoscopic procedure in the OR and to compare their results to those of general surgeons without simulation training and expert laparoscopic surgeons. Experimental study: after a validated 16-session advanced laparoscopy simulation training program, junior trainees were compared to general surgeons (GS) with no simulation training and expert bariatric surgeons (BS) in performing a stapled jejuno-jejunostomy (JJO) in the OR. Global rating scale (GRS) and specific rating scale scores, operative time, and the distance traveled by both hands (measured with a tracking device) were assessed. In addition, all perioperative and immediate postoperative morbidities were registered. Ten junior trainees, 12 GS and 5 expert BS were assessed performing a JJO in the OR. All trainees completed the entire JJO in the OR without any takeovers by the BS, whereas six takeovers (50%) by the BS took place in the GS group. Trainees had significantly better results in all measured outcomes when compared to GS, with a considerably higher median GRS [19.5 (18.8-23.5) vs. 12 (9-13.8), p < 0.001] and lower operative time. One morbidity was registered: a patient in the trainee group was readmitted on postoperative day 10 for mechanical ileus that resolved with medical treatment. This study demonstrated transfer of advanced laparoscopic skills, acquired through a simulation training program by novice surgical residents, to the OR.

  8. The Optimization Based Dynamic and Cyclic Working Strategies for Rechargeable Wireless Sensor Networks with Multiple Base Stations and Wireless Energy Transfer Devices

    PubMed Central

    Ding, Xu; Han, Jianghong; Shi, Lei

    2015-01-01

    In this paper, optimal working schemes for wireless sensor networks with multiple base stations and wireless energy transfer devices are proposed. The wireless energy transfer devices also work as data gatherers while charging sensor nodes. The wireless sensor network is first divided into sub-networks according to the concept of the Voronoi diagram. Then, the entire energy replenishing procedure is split into the pre-normal and normal energy replenishing stages. With the objective of maximizing the sojourn time ratio of the wireless energy transfer device, a continuous-time optimization problem for the normal energy replenishing cycle is formed according to the constraints with which sensor nodes and wireless energy transfer devices should comply. The continuous-time optimization problem is then reshaped into a discrete multi-phase optimization problem that yields the same optimality. After linearizing it, we obtain a linear programming problem that can be solved efficiently. The working strategies of both sensor nodes and wireless energy transfer devices in the pre-normal replenishing stage are also discussed in this paper. Extensive simulations exhibit the dynamic and cyclic working schemes for the entire energy replenishing procedure. Additionally, a way of eliminating “bottleneck” sensor nodes is also developed in this paper. PMID:25785305

  9. The optimization based dynamic and cyclic working strategies for rechargeable wireless sensor networks with multiple base stations and wireless energy transfer devices.

    PubMed

    Ding, Xu; Han, Jianghong; Shi, Lei

    2015-03-16

    In this paper, optimal working schemes for wireless sensor networks with multiple base stations and wireless energy transfer devices are proposed. The wireless energy transfer devices also work as data gatherers while charging sensor nodes. The wireless sensor network is first divided into sub-networks according to the concept of the Voronoi diagram. Then, the entire energy replenishing procedure is split into the pre-normal and normal energy replenishing stages. With the objective of maximizing the sojourn time ratio of the wireless energy transfer device, a continuous-time optimization problem for the normal energy replenishing cycle is formed according to the constraints with which sensor nodes and wireless energy transfer devices should comply. The continuous-time optimization problem is then reshaped into a discrete multi-phase optimization problem that yields the same optimality. After linearizing it, we obtain a linear programming problem that can be solved efficiently. The working strategies of both sensor nodes and wireless energy transfer devices in the pre-normal replenishing stage are also discussed in this paper. Extensive simulations exhibit the dynamic and cyclic working schemes for the entire energy replenishing procedure. Additionally, a way of eliminating "bottleneck" sensor nodes is also developed in this paper.
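
    As a hedged illustration of the linearization step described above, the toy linear program below maximizes a charger's sojourn time over a fixed cycle, subject to a time budget and per-node energy-neutrality bounds. The cycle length, travel time, charging rate, and consumption values are all invented for illustration; the paper's actual multi-phase formulation co-optimizes considerably more structure.

      import numpy as np
      from scipy.optimize import linprog

      # Variables x = [tau_1 .. tau_n, t_sojourn]; maximize t_sojourn,
      # which for a fixed cycle length T also maximizes the sojourn ratio.
      n, T, T_travel, U = 5, 3600.0, 600.0, 5.0        # illustrative values
      rho = np.array([0.02, 0.05, 0.03, 0.04, 0.01])   # node consumption (W)

      c = np.zeros(n + 1); c[-1] = -1.0                  # linprog minimizes
      A_eq = np.ones((1, n + 1)); b_eq = [T - T_travel]  # time budget per cycle
      # Energy neutrality per cycle: rho_i * T <= U * tau_i.
      bounds = [(r * T / U, None) for r in rho] + [(0.0, None)]
      res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
      print("sojourn time ratio:", res.x[-1] / T)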

  10. Image-guided adaptive gating of lung cancer radiotherapy: a computer simulation study

    NASA Astrophysics Data System (ADS)

    Aristophanous, Michalis; Rottmann, Joerg; Park, Sang-June; Nishioka, Seiko; Shirato, Hiroki; Berbeco, Ross I.

    2010-08-01

    The purpose of this study is to investigate the effect that image-guided adaptation of the gating window during treatment could have on the residual tumor motion, by simulating different gated radiotherapy techniques. There are three separate components of this simulation: (1) the 'Hokkaido Data', which are previously measured 3D data of lung tumor motion tracks and the corresponding 1D respiratory signals obtained during the entire ungated radiotherapy treatments of eight patients, (2) the respiratory gating protocol at our institution and the imaging performed under that protocol, and (3) the actual simulation, in which the Hokkaido Data are used to select the tumor position information that could have been collected based on the imaging performed under our gating protocol. We simulated treatments with a fixed gating window and with a gating window that is updated during treatment. The patient data were divided into different fractions, each with continuous acquisitions longer than 2 min. In accordance with the imaging performed under our gating protocol, we assume that tumor position information is available for the first 15 s of treatment, obtained from kV fluoroscopy, and that for the remainder of each fraction the tumor position is available only during beam-on time from MV imaging. The gating window was set according to the information obtained from the first 15 s such that the residual motion was less than 3 mm. For the fixed gating window technique the gate remained the same for the entire treatment, while for the adaptive technique the range of the tumor motion during beam-on time was measured and used to adapt the gating window to keep the residual motion below 3 mm. The algorithm used to adapt the gating window is described. The residual tumor motion inside the gating window was reduced on average by 24% for the patients with regular breathing patterns, and the difference was statistically significant (p-value = 0.01). The magnitude of the residual tumor motion depended on the regularity of the breathing pattern, suggesting that image-guided adaptive gating should be combined with breath coaching. The adaptive gating window technique was able to track the exhale position of the breathing cycle quite successfully. Out of a total of 53 fractions, the duty cycle was greater than 20% for 42 fractions for the fixed gating window technique and for 39 fractions for the adaptive gating window technique. The results of this study suggest that real-time updating of the gating window can result in reliably low residual tumor motion and can therefore facilitate safe margin reduction.

  11. SWMF Global Magnetosphere Simulations of January 2005: Geomagnetic Indices and Cross-Polar Cap Potential

    NASA Astrophysics Data System (ADS)

    Haiducek, John D.; Welling, Daniel T.; Ganushkina, Natalia Y.; Morley, Steven K.; Ozturk, Dogacan Su

    2017-12-01

    We simulated the entire month of January 2005 using the Space Weather Modeling Framework (SWMF) with observed solar wind data as input. We conducted this simulation with and without an inner magnetosphere model and tested two different grid resolutions. We evaluated the model's accuracy in predicting Kp, SYM-H, AL, and cross-polar cap potential (CPCP). We find that the model does an excellent job of predicting the SYM-H index, with a root-mean-square error (RMSE) of 17-18 nT. Kp is predicted well during storm time conditions but overpredicted during quiet times by a margin of 1 to 1.7 Kp units. AL is predicted reasonably well on average, with an RMSE of 230-270 nT. However, the model reaches the largest negative AL values significantly less often than the observations. The model tended to overpredict CPCP, with RMSE values on the order of 46-48 kV. We found the results to be insensitive to grid resolution, with the exception of the rate of occurrence for strongly negative AL values. The use of the inner magnetosphere component, however, affected results significantly, with all quantities except CPCP improved notably when the inner magnetosphere model was on.

  12. Progressive Sampling Technique for Efficient and Robust Uncertainty and Sensitivity Analysis of Environmental Systems Models: Stability and Convergence

    NASA Astrophysics Data System (ADS)

    Sheikholeslami, R.; Hosseini, N.; Razavi, S.

    2016-12-01

    Modern earth and environmental models are usually characterized by a large parameter space and high computational cost. These two features prevent effective implementation of sampling-based analyses such as sensitivity and uncertainty analysis, which require running these computationally expensive models several times to adequately explore the parameter/problem space. Therefore, developing efficient sampling techniques that scale with the size of the problem, computational budget, and users' needs is essential. In this presentation, we propose an efficient sequential sampling strategy, called Progressive Latin Hypercube Sampling (PLHS), which provides increasingly improved coverage of the parameter space while satisfying pre-defined requirements. The original Latin hypercube sampling (LHS) approach generates the entire sample set in one stage; in contrast, PLHS generates a series of smaller sub-sets (also called 'slices') such that: (1) each sub-set is a Latin hypercube sample and achieves maximum stratification in any one-dimensional projection; (2) the progressive union of sub-sets remains a Latin hypercube sample; and thus (3) the entire sample set is a Latin hypercube sample. Therefore, it preserves the intended sampling properties throughout the sampling procedure. PLHS is deemed advantageous over existing methods, particularly because it nearly avoids over- or under-sampling. Through different case studies, we show that PLHS has multiple advantages over one-stage sampling approaches, including improved convergence and stability of the analysis results with fewer model runs. In addition, PLHS can help minimize the total simulation time by running only the simulations necessary to achieve the desired level of quality (e.g., accuracy and convergence rate).
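
    A one-stage Latin hypercube generator is sketched below to make the stratification property concrete; it is the condition that every PLHS slice, and every progressive union of slices, must satisfy. The sliced PLHS construction itself is more involved and is not reproduced here; the sizes and seed are arbitrary.

      import numpy as np

      def latin_hypercube(n_samples, n_dims, seed=None):
          """One-stage LHS: each 1-D projection places exactly one point
          in each of n_samples equal-probability strata."""
          rng = np.random.default_rng(seed)
          strata = np.array([rng.permutation(n_samples)
                             for _ in range(n_dims)]).T
          return (strata + rng.random((n_samples, n_dims))) / n_samples

      X = latin_hypercube(32, 4, seed=0)
      # The property PLHS must preserve for each slice and each union:
      assert all(len(np.unique((X[:, j] * 32).astype(int))) == 32
                 for j in range(4))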

  13. Time-shifted synchronization of chaotic oscillator chains without explicit coupling delays.

    PubMed

    Blakely, Jonathan N; Stahl, Mark T; Corron, Ned J

    2009-12-01

    We examine chains of unidirectionally coupled oscillators in which time-shifted synchronization occurs without explicit delays in the coupling. In numerical simulations and in an experimental system of electronic oscillators, we examine the time shift and the degree of distortion (primarily in the form of attenuation) of the waveforms of the oscillators located far from the drive oscillator. Surprisingly, under weak coupling we observe minimal attenuation in spite of a significant total time shift. In contrast, at higher coupling strengths the observed attenuation increases dramatically and approaches the value predicted by an analytically derived estimate. In this regime, we verify directly that generalized synchronization is maintained over the entire chain length despite severe attenuation. These results suggest that weak coupling generally may produce higher quality synchronization in systems for which truly identical synchronization is not possible.

  14. A general CFD framework for fault-resilient simulations based on multi-resolution information fusion

    NASA Astrophysics Data System (ADS)

    Lee, Seungjoon; Kevrekidis, Ioannis G.; Karniadakis, George Em

    2017-10-01

    We develop a general CFD framework for multi-resolution simulations that targets multiscale problems as well as resilience in exascale simulations, where faulty processors may lead to simulated fields that are gappy in space-time. We combine approximation theory and domain decomposition together with statistical learning techniques, e.g. coKriging, to estimate boundary conditions and minimize communications by performing independent parallel runs. To demonstrate this new simulation approach, we consider two benchmark problems. First, we solve the heat equation (a) on a small number of spatial "patches" distributed across the domain, simulated by finite differences at fine resolution, and (b) on the entire domain simulated at very low resolution, thus fusing multi-resolution models to obtain the final answer. Second, we simulate the flow in a lid-driven cavity in an analogous fashion, by fusing finite difference solutions obtained with fine and low resolution assuming gappy data sets. We investigate the influence of various parameters of this framework, including the correlation kernel, the size of the buffer employed in estimating boundary conditions, the coarseness of the resolution of auxiliary data, and the communication frequency across different patches in fusing the information at different resolution levels. In addition to its robustness and resilience, the new framework can be employed to generalize previous multiscale approaches involving heterogeneous discretizations or even fundamentally different flow descriptions, e.g. in continuum-atomistic simulations.

  15. Meteorologically Driven Simulations of Dengue Epidemics in San Juan, PR

    PubMed Central

    Morin, Cory W.; Monaghan, Andrew J.; Hayden, Mary H.; Barrera, Roberto; Ernst, Kacey

    2015-01-01

    Meteorological factors influence dengue virus ecology by modulating vector mosquito population dynamics, viral replication, and transmission. Dynamic modeling techniques can be used to examine how interactions among meteorological variables, vectors and the dengue virus influence transmission. We developed a dengue fever simulation model by coupling a dynamic simulation model for Aedes aegypti, the primary mosquito vector for dengue, with a basic epidemiological Susceptible-Exposed-Infectious-Recovered (SEIR) model. Employing a Monte Carlo approach, we simulated dengue transmission during the period 2010-2013 in San Juan, PR, where dengue fever is endemic. The results of 9600 simulations using varied model parameters were evaluated by statistical comparison (r2) with surveillance data of dengue cases reported to the Centers for Disease Control and Prevention. To identify the most influential parameters associated with dengue virus transmission for each period, the top 1% of best-fit model simulations were retained and compared. Using the top simulations, dengue cases were simulated well for 2010 (r2 = 0.90, p = 0.03), 2011 (r2 = 0.83, p = 0.05), and 2012 (r2 = 0.94, p = 0.01); however, simulations were weaker for 2013 (r2 = 0.25, p = 0.25) and the entire four-year period (r2 = 0.44, p = 0.002). Analysis of parameter values from the retained simulations revealed that rain-dependent container habitats were more prevalent in best-fitting simulations during the wetter years 2010 and 2011, while human-managed (i.e. manually filled) container habitats were more prevalent in best-fitting simulations during the drier years 2012 and 2013. The simulations further indicate that rainfall strongly modulates the timing of dengue (e.g., epidemics occurred earlier during rainy years) while temperature modulates the annual number of dengue fever cases. Our results suggest that meteorological factors have a time-variable influence on dengue transmission relative to other important environmental and human factors. PMID:26275146
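
    A minimal sketch of the simulation-and-scoring loop described above follows: an SEIR model with a seasonally forced transmission rate stands in for the full meteorology-driven vector model, random parameter draws play the role of the Monte Carlo ensemble, and candidate runs are ranked by r2 against a reference series. All parameter values, the forcing form, and the population size are illustrative assumptions, not the authors' calibrated model.

      import numpy as np
      from scipy.integrate import solve_ivp

      def seir_rhs(t, y, beta_t, sigma, gamma):
          S, E, I, R = y
          N = y.sum()
          b = beta_t(t)                      # time-varying transmission rate
          return [-b*S*I/N, b*S*I/N - sigma*E, sigma*E - gamma*I, gamma*I]

      def run_trial(beta0, amp):
          # Seasonal forcing standing in for the coupled mosquito model.
          beta_t = lambda t: beta0 * (1.0 + amp * np.sin(2*np.pi*t/365.0))
          sol = solve_ivp(seir_rhs, (0, 4*365), [3.9e5, 10.0, 10.0, 0.0],
                          t_eval=np.arange(0.0, 4*365, 7.0),
                          args=(beta_t, 1/5.9, 1/5.0), rtol=1e-6)
          return sol.y[2]                    # weekly infectious counts

      rng = np.random.default_rng(1)
      obs = run_trial(0.35, 0.3)             # stand-in "surveillance" series
      draws = [(rng.uniform(0.2, 0.5), rng.uniform(0.0, 0.5))
               for _ in range(200)]
      best = max(draws, key=lambda p: np.corrcoef(run_trial(*p), obs)[0, 1]**2)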

  16. The Virtual Habitat - A tool for dynamic life support system simulations

    NASA Astrophysics Data System (ADS)

    Czupalla, M.; Zhukov, A.; Schnaitmann, J.; Olthoff, C.; Deiml, M.; Plötner, P.; Walter, U.

    2015-06-01

    In this paper we present the Virtual Habitat (V-HAB) model, which simulates at the system level the dynamics of entire mission scenarios for any given life support system (LSS), including a dynamic representation of the crew. We first present the V-HAB architecture. Thereafter, we validate the V-HAB submodules in selected case studies. Finally, we demonstrate the overall capabilities of V-HAB by first simulating the LSS of the International Space Station (ISS) and showing how closely the results match real data, and then, in a second case study, simulating the LSS dynamics of a Mars mission scenario. We thus show that V-HAB is able to support LSS design processes, giving LSS designers a set of dynamic decision parameters (e.g. stability, robustness, effective crew time) that supplement or even substitute for the common Equivalent System Mass (ESM) quantities as a proxy for LSS hardware costs. The work presented here builds on an LSS heritage of the exploration group at the Technical University of Munich (TUM) dating from before 2006.

  17. Supercomputing Aspects for Simulating Incompressible Flow

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan; Kiris, Cetin C.

    2000-01-01

    The primary objective of this research is to support the design of liquid rocket systems for the Advanced Space Transportation System. Since space launch systems in the near future are likely to rely on liquid rocket engines, increasing the efficiency and reliability of the engine components is an important task. One of the major problems in a liquid rocket engine is understanding the fluid dynamics of fuel and oxidizer flows from the fuel tank to the plume. Understanding the flow through the entire turbo-pump geometry through numerical simulation will be of significant value for design. One of the milestones of this effort is to develop, apply, and demonstrate the capability and accuracy of 3D CFD methods as efficient design analysis tools on high-performance computer platforms. The development of the Message Passing Interface (MPI) and Multi-Level Parallel (MLP) versions of the INS3D code is currently underway. The serial version of the INS3D code is a multidimensional incompressible Navier-Stokes solver based on overset grid technology. INS3D-MPI is based on explicit message passing across processors and is primarily suited for distributed-memory systems, while INS3D-MLP is based on a multi-level parallel method and is suitable for distributed-shared-memory systems. For entire turbo-pump simulations, moving-boundary capability and efficient time-accurate integration methods are built into the flow solver. To handle the geometric complexity and moving-boundary problems, an overset grid scheme is incorporated into the solver so that new connectivity data are obtained at each time step. The Chimera overlapped grid scheme allows subdomains to move relative to each other and provides great flexibility when the boundary movement creates large displacements. Two numerical procedures, one based on the artificial compressibility method and the other on the pressure projection method, are outlined for obtaining time-accurate solutions of the incompressible Navier-Stokes equations. The performance of the two methods is compared by obtaining unsteady solutions for the evolution of twin vortices behind a flat plate, and calculated results are compared with experimental and other numerical results. For an unsteady flow, which requires a small physical time step, the pressure projection method was found to be computationally efficient since it does not require any subiteration procedure. The artificial compressibility method was observed to require a fast convergence scheme at each physical time step in order to satisfy the incompressibility condition; this was obtained by using a GMRES-ILU(0) solver in the present computations. When a line-relaxation scheme was used, the time accuracy was degraded and time-accurate computations became very expensive.

  18. Unsteady Turbopump Flow Simulations

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin C.; Kwak, Dochan

    2001-01-01

    The objective of the current effort is two-fold: 1) to provide a computational framework for design and analysis of the entire fuel supply system of a liquid rocket engine; and 2) to provide a high-fidelity unsteady turbopump flow analysis capability to support the design of pump sub-systems for an advanced space transportation vehicle. Since space launch systems in the near future are likely to involve liquid propulsion systems, increasing the efficiency and reliability of the turbopump components is an important task. To date, computational tools for design/analysis of turbopump flow have been based on relatively low-fidelity methods. An unsteady, three-dimensional viscous flow analysis tool involving stationary and rotational components for the entire turbopump assembly has not been available, at least for real-world engineering applications. The present effort is an attempt to provide this capability so that vehicle developers will be able to extract such information as transient flow phenomena at start-up, the impact of non-uniform inflow, and system vibration and its impact on the structure. Such quantities are not readily available from simplified design tools. In this presentation, the progress being made toward a complete turbo-pump simulation capability for a liquid rocket engine is reported. The Space Shuttle Main Engine (SSME) turbo-pump is used as a test case for the performance evaluation of the hybrid MPI/OpenMP and MLP versions of the INS3D code. Relative motion of the grid system for rotor-stator interaction was obtained by employing overset grid techniques. The time accuracy of the scheme has been evaluated using simple test cases. Unsteady computations for the SSME turbopump, which contains 106 zones with 34.5 million grid points, are currently underway on Origin 2000 systems at NASA Ames Research Center. Results from these time-accurate simulations with moving-boundary capability, together with the performance of the parallel versions of the code, will be presented.

  19. i-rDNA: alignment-free algorithm for rapid in silico detection of ribosomal gene fragments from metagenomic sequence data sets.

    PubMed

    Mohammed, Monzoorul Haque; Ghosh, Tarini Shankar; Chadaram, Sudha; Mande, Sharmila S

    2011-11-30

    Obtaining accurate estimates of microbial diversity using rDNA profiling is the first step in most metagenomics projects. Consequently, most metagenomic projects spend considerable amounts of time, money and manpower for experimentally cloning, amplifying and sequencing the rDNA content in a metagenomic sample. In the second step, the entire genomic content of the metagenome is extracted, sequenced and analyzed. Since DNA sequences obtained in this second step also contain rDNA fragments, rapid in silico identification of these rDNA fragments would drastically reduce the cost, time and effort of current metagenomic projects by entirely bypassing the experimental steps of primer based rDNA amplification, cloning and sequencing. In this study, we present an algorithm called i-rDNA that can facilitate the rapid detection of 16S rDNA fragments from amongst millions of sequences in metagenomic data sets with high detection sensitivity. Performance evaluation with data sets/database variants simulating typical metagenomic scenarios indicates the significantly high detection sensitivity of i-rDNA. Moreover, i-rDNA can process a million sequences in less than an hour on a simple desktop with modest hardware specifications. In addition to the speed of execution, high sensitivity and low false positive rate, the utility of the algorithmic approach discussed in this paper is immense given that it would help in bypassing the entire experimental step of primer-based rDNA amplification, cloning and sequencing. Application of this algorithmic approach would thus drastically reduce the cost, time and human efforts invested in all metagenomic projects. A web-server for the i-rDNA algorithm is available at http://metagenomics.atc.tcs.com/i-rDNA/

  20. Analyzing survival curves at a fixed point in time for paired and clustered right-censored data

    PubMed Central

    Su, Pei-Fang; Chi, Yunchan; Lee, Chun-Yi; Shyr, Yu; Liao, Yi-De

    2018-01-01

    In clinical trials, information about certain time points may be of interest in making decisions about treatment effectiveness. Rather than comparing entire survival curves, researchers can focus on the comparison at fixed time points that may have clinical utility for patients. For two independent samples of right-censored data, Klein et al. (2007) compared survival probabilities at a fixed time point by studying a number of tests based on some transformations of the Kaplan-Meier estimators of the survival function. However, to compare the survival probabilities at a fixed time point for paired right-censored data or clustered right-censored data, their approach needs to be modified. In this paper, we extend the statistics to accommodate the possible within-pair correlation and within-cluster correlation, respectively. We use simulation studies to present comparative results. Finally, we illustrate the implementation of these methods using two real data sets. PMID:29456280
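
    The two-independent-sample building block that the paper extends is easy to sketch: a Kaplan-Meier estimate with Greenwood variance at a fixed time point, compared through a complementary log-log transformation as in one of the Klein et al. (2007) variants. The paired/clustered corrections of the paper are not reproduced here, and ties are handled one observation at a time for brevity.

      import numpy as np

      def km_at(t0, times, events):
          """Kaplan-Meier S(t0) with Greenwood variance (ties handled
          one observation at a time for brevity)."""
          order = np.argsort(times)
          t, d = np.asarray(times)[order], np.asarray(events)[order]
          S, var_sum, n = 1.0, 0.0, len(t)
          for i in range(n):
              if t[i] > t0:
                  break
              at_risk = n - i
              if d[i] and at_risk > 1:          # event (not censoring)
                  S *= 1.0 - 1.0/at_risk
                  var_sum += 1.0/(at_risk*(at_risk - 1))
          return S, S**2 * var_sum

      def cloglog_z(t0, g1, g2):
          """Two-sample z-statistic on log(-log S(t0)), delta method."""
          out = []
          for times, events in (g1, g2):
              S, v = km_at(t0, times, events)
              out.append((np.log(-np.log(S)), v/(S*np.log(S))**2))
          (p1, v1), (p2, v2) = out
          return (p1 - p2)/np.sqrt(v1 + v2)

      rng = np.random.default_rng(5)
      g1 = (rng.exponential(10.0, 80), rng.random(80) < 0.7)  # (times, events)
      g2 = (rng.exponential(14.0, 80), rng.random(80) < 0.7)
      print("z at t0 = 8:", cloglog_z(8.0, g1, g2))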

  1. Efficient computation of the Grünwald-Letnikov fractional diffusion derivative using adaptive time step memory

    NASA Astrophysics Data System (ADS)

    MacDonald, Christopher L.; Bhattacharya, Nirupama; Sprouse, Brian P.; Silva, Gabriel A.

    2015-09-01

    Computing numerical solutions to fractional differential equations can be computationally intensive due to the effect of non-local derivatives, in which all previous time points contribute to the current iteration. In general, numerical approaches that depend on truncating part of the system history, while efficient, can suffer from high degrees of error and inaccuracy. Here we present an adaptive time step memory method for smooth functions applied to the Grünwald-Letnikov fractional diffusion derivative. This method is computationally efficient and results in smaller errors during numerical simulations. Sampled points along the system's history at progressively longer intervals are assumed to reflect the values of neighboring time points. By including progressively fewer points backward in time, a temporally 'weighted' history is computed that includes contributions from the entire past of the system, maintaining accuracy but with fewer points actually calculated, greatly improving computational efficiency.
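
    The flavor of the method can be sketched as follows: the full Grünwald-Letnikov sum is kept over the most recent samples, while older history is visited at geometrically growing strides, each visited sample carrying the summed weight of the block it represents. The block sizes, the `recent` cutoff, and the midpoint rule below are illustrative assumptions, not the paper's exact scheme.

      import numpy as np
      from scipy.special import binom

      def gl_weights(alpha, n):
          """Grünwald-Letnikov weights (-1)^k * C(alpha, k), k = 0..n."""
          k = np.arange(n + 1)
          return (-1.0)**k * binom(alpha, k)

      def gl_derivative_full(f, alpha, h):
          """Full-memory GL fractional derivative at the last sample of f."""
          w = gl_weights(alpha, len(f) - 1)
          return h**(-alpha) * np.dot(w, f[::-1])

      def gl_derivative_adaptive(f, alpha, h, recent=64, base=2):
          """Adaptive-memory sketch: full weights for the newest samples,
          block-summed weights applied to block-midpoint values beyond."""
          n = len(f) - 1
          w, g = gl_weights(alpha, n), f[::-1]      # g[k] = f(t_{n-k})
          m = min(recent, n + 1)
          total = np.dot(w[:m], g[:m])
          k, stride = m, base
          while k <= n:
              end = min(k + stride, n + 1)
              total += w[k:end].sum() * g[(k + end - 1)//2]
              k, stride = end, stride*base
          return h**(-alpha) * total

    Note that this sketch still evaluates every weight; the savings shown are in history accesses only, whereas the full method also economizes on the weights themselves.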

  2. The Importance of Artificial Intelligence for Naval Intelligence Training Simulations

    DTIC Science & Technology

    2006-09-01

    experimental investigation described later. B. SYSTEM ARCHITECTURE The game-based simulator was created using NetBeans, which is an open source integrated...development environment (IDE) written entirely in Java using the NetBeans Platform. NetBeans is based upon the Java language which contains the...involved within the simulation are conducted in a GUI built within the NetBeans IDE. The opening display allows the user to set up the simulation

  3. Exploring total cardiac variability in healthy and pathophysiological subjects using improved refined multiscale entropy.

    PubMed

    Marwaha, Puneeta; Sunkaria, Ramesh Kumar

    2017-02-01

    Multiscale entropy (MSE) and refined multiscale entropy (RMSE) techniques are widely used to evaluate the complexity of a time series across multiple time scales 't'. Both techniques, at certain time scales (and sometimes over the entire range of time scales, in the case of RMSE), assign higher entropy to the HRV time series of certain pathologies than to those of healthy subjects, and to their corresponding randomized surrogate time series. This incorrect assessment of signal complexity may be due to the fact that these techniques suffer from the following limitations: (1) the threshold value 'r' is updated as a function of the long-term standard deviation and hence cannot capture the short-term variability or the substantial variability inherent in the beat-to-beat fluctuations of long-term HRV time series; (2) in RMSE, the entropy values assigned to different filtered scaled time series are the result of changes in variance and do not completely reflect the real structural organization inherent in the original time series. In the present work, we propose an improved RMSE (I-RMSE) technique that introduces a new procedure to set the threshold value by taking into account the period-to-period variability inherent in a signal, and we evaluated it on simulated and real HRV databases. The proposed I-RMSE assigns higher entropy to age-matched healthy subjects than to patients suffering from atrial fibrillation, congestive heart failure, sudden cardiac death and diabetes mellitus, over the entire range of time scales. The results strongly support a reduction in the complexity of HRV time series in the female group, in the old-aged group, in patients suffering from severe cardiovascular and non-cardiovascular diseases, and in the corresponding surrogate time series.
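
    For reference, the classic multiscale-entropy pipeline that RMSE and the proposed I-RMSE refine is sketched below: coarse-grain the series, then compute sample entropy at each scale. The tolerance handling (here simply a fraction of each scaled series' standard deviation) is precisely the ingredient the paper replaces with a period-to-period variability measure; the paper's exact rule is not reproduced.

      import numpy as np

      def sample_entropy(x, m=2, r=0.2):
          """SampEn(m, r); r is a fraction of the series' std."""
          x = np.asarray(x, float)
          tol = r * x.std()
          def pairs(mm):
              emb = np.lib.stride_tricks.sliding_window_view(x, mm)
              d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=-1)
              return (np.sum(d <= tol) - len(emb)) / 2   # no self-matches
          B, A = pairs(m), pairs(m + 1)
          return -np.log(A / B) if A > 0 and B > 0 else np.inf

      def coarse_grain(x, scale):
          """Non-overlapping averages of `scale` consecutive samples."""
          n = len(x) // scale
          return np.asarray(x[:n*scale], float).reshape(n, scale).mean(axis=1)

      x = np.random.default_rng(0).standard_normal(2000)
      mse = [sample_entropy(coarse_grain(x, s)) for s in range(1, 11)]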

  4. Simulation of Magnetic Cloud Erosion and Deformation During Propagation

    NASA Astrophysics Data System (ADS)

    Manchester, W.; Kozyra, J. U.; Lepri, S. T.; Lavraud, B.; Jackson, B. V.

    2013-12-01

    We examine a three-dimensional (3-D) numerical magnetohydrodynamic (MHD) simulation describing a very fast interplanetary coronal mass ejection (ICME) propagating from the solar corona to 1 AU. In conjunction with its high speed, the ICME evolves in ways that give it a unique appearance at 1 AU that does not resemble a typical ICME. First, as the ICME decelerates in the solar wind, filament material at the back of the flux rope pushes its way forward through the flux rope. Second, diverging nonradial flows in front of the filament transport the azimuthal flux of the rope to the sides of the ICME. Third, the magnetic flux rope reconnects with the interplanetary magnetic field (IMF). As a consequence of these processes, the flux rope partially unravels and appears to evolve to an entirely open configuration near its nose. At the same time, filament material at the base of the flux rope moves forward and comes into direct contact with the shocked plasma in the CME sheath. We find evidence that such remarkable behavior occurred in a very fast CME that erupted from the Sun on 2005 January 20. In situ observations of this event near 1 AU show very dense cold material impacting the Earth immediately behind the CME sheath. Charge state analysis shows this dense plasma is filament material, and analysis of SMEI data provides the trajectory of this dense plasma from the Sun. Consistent with the simulation, we find the azimuthal flux (Bz) to be entirely unbalanced, giving the appearance that the flux rope has completely eroded on the anti-sunward side.

  5. High-Alpha Research Vehicle Lateral-Directional Control Law Description, Analyses, and Simulation Results

    NASA Technical Reports Server (NTRS)

    Davidson, John B.; Murphy, Patrick C.; Lallman, Frederick J.; Hoffler, Keith D.; Bacon, Barton J.

    1998-01-01

    This report contains a description of a lateral-directional control law designed for the NASA High-Alpha Research Vehicle (HARV). The HARV is a F/A-18 aircraft modified to include a research flight computer, spin chute, and thrust-vectoring in the pitch and yaw axes. Two separate design tools, CRAFT and Pseudo Controls, were integrated to synthesize the lateral-directional control law. This report contains a description of the lateral-directional control law, analyses, and nonlinear simulation (batch and piloted) results. Linear analysis results include closed-loop eigenvalues, stability margins, robustness to changes in various plant parameters, and servo-elastic frequency responses. Step time responses from nonlinear batch simulation are presented and compared to design guidelines. Piloted simulation task scenarios, task guidelines, and pilot subjective ratings for the various maneuvers are discussed. Linear analysis shows that the control law meets the stability margin guidelines and is robust to stability and control parameter changes. Nonlinear batch simulation analysis shows the control law exhibits good performance and meets most of the design guidelines over the entire range of angle-of-attack. This control law (designated NASA-1A) was flight tested during the Summer of 1994 at NASA Dryden Flight Research Center.

  6. CaloGAN: Simulating 3D high energy particle showers in multilayer electromagnetic calorimeters with generative adversarial networks

    NASA Astrophysics Data System (ADS)

    Paganini, Michela; de Oliveira, Luke; Nachman, Benjamin

    2018-01-01

    The precise modeling of subatomic particle interactions and propagation through matter is paramount for the advancement of nuclear and particle physics searches and precision measurements. The most computationally expensive step in the simulation pipeline of a typical experiment at the Large Hadron Collider (LHC) is the detailed modeling of the full complexity of physics processes that govern the motion and evolution of particle showers inside calorimeters. We introduce CaloGAN, a new fast simulation technique based on generative adversarial networks (GANs). We apply these neural networks to the modeling of electromagnetic showers in a longitudinally segmented calorimeter and achieve speedup factors comparable to or better than existing full simulation techniques on CPU (100x-1000x) and even faster on GPU (up to ~10^5x). There are still challenges for achieving precision across the entire phase space, but our solution can reproduce a variety of geometric shower shape properties of photons, positrons, and charged pions. This represents a significant stepping stone toward a full neural network-based detector simulation that could save significant computing time and enable many analyses now and in the future.

  7. A New Estimate of North American Mountain Snow Accumulation From Regional Climate Model Simulations

    NASA Astrophysics Data System (ADS)

    Wrzesien, Melissa L.; Durand, Michael T.; Pavelsky, Tamlin M.; Kapnick, Sarah B.; Zhang, Yu; Guo, Junyi; Shum, C. K.

    2018-02-01

    Despite the importance of mountain snowpack to understanding the water and energy cycles in North America's montane regions, no reliable mountain snow climatology exists for the entire continent. We present a new estimate of mountain snow water equivalent (SWE) for North America from regional climate model simulations. Climatological peak SWE in North American mountains is 1,006 km³, 2.94 times larger than previous estimates from reanalyses. By combining this mountain SWE value with the best available global product in nonmountain areas, we estimate peak North America SWE of 1,684 km³, 55% greater than previous estimates. In our simulations, the date of maximum SWE varies widely by mountain range, from early March to mid-April. Though mountains comprise 24% of the continent's land area, we estimate that they contain 60% of North American SWE. This new estimate is a suitable benchmark for continental- and global-scale water and energy budget studies.

  8. Finite Element Simulation of Residual Stress Development in Thermally Sprayed Coatings

    NASA Astrophysics Data System (ADS)

    Elhoriny, Mohamed; Wenzelburger, Martin; Killinger, Andreas; Gadow, Rainer

    2017-04-01

    The coating buildup process of Al2O3/TiO2 ceramic powder deposited on a stainless-steel substrate by atmospheric plasma spraying has been simulated by creating thermomechanical finite element models that utilize element death and birth techniques in the ANSYS commercial software together with self-developed codes. The simulation starts with side-by-side deposition of coarse subparts of the ceramic layer until the entire coating is created; simultaneously, the heat flow into the material, thermal deformation, and initial quenching stress are computed. The aim is to be able to predict, for the considered spray powder and substrate material, the development of residual stresses and to assess the risk of coating failure. The model allows the prediction of the heat flow, temperature profile, and residual stress development over time and position in the coating and substrate. The proposed models were successfully run, and the results were compared with actual residual stresses measured by the hole-drilling method.

  9. A Generic Multibody Parachute Simulation Model

    NASA Technical Reports Server (NTRS)

    Neuhaus, Jason Richard; Kenney, Patrick Sean

    2006-01-01

    Flight simulation of dynamic atmospheric vehicles with parachute systems is a complex task that is not easily modeled in many simulation frameworks. In the past, the performance of vehicles with parachutes was analyzed by simulations dedicated to parachute operations and were generally not used for any other portion of the vehicle flight trajectory. This approach required multiple simulation resources to completely analyze the performance of the vehicle. Recently, improved software engineering practices and increased computational power have allowed a single simulation to model the entire flight profile of a vehicle employing a parachute.

  10. A Collection of Nonlinear Aircraft Simulations in MATLAB

    NASA Technical Reports Server (NTRS)

    Garza, Frederico R.; Morelli, Eugene A.

    2003-01-01

    Nonlinear six degree-of-freedom simulations for a variety of aircraft were created using MATLAB. Data for aircraft geometry, aerodynamic characteristics, mass / inertia properties, and engine characteristics were obtained from open literature publications documenting wind tunnel experiments and flight tests. Each nonlinear simulation was implemented within a common framework in MATLAB, and includes an interface with another commercially-available program to read pilot inputs and produce a three-dimensional (3-D) display of the simulated airplane motion. Aircraft simulations include the General Dynamics F-16 Fighting Falcon, Convair F-106B Delta Dart, Grumman F-14 Tomcat, McDonnell Douglas F-4 Phantom, NASA Langley Free-Flying Aircraft for Sub-scale Experimental Research (FASER), NASA HL-20 Lifting Body, NASA / DARPA X-31 Enhanced Fighter Maneuverability Demonstrator, and the Vought A-7 Corsair II. All nonlinear simulations and 3-D displays run in real time in response to pilot inputs, using contemporary desktop personal computer hardware. The simulations can also be run in batch mode. Each nonlinear simulation includes the full nonlinear dynamics of the bare airframe, with a scaled direct connection from pilot inputs to control surface deflections to provide adequate pilot control. Since all the nonlinear simulations are implemented entirely in MATLAB, user-defined control laws can be added in a straightforward fashion, and the simulations are portable across various computing platforms. Routines for trim, linearization, and numerical integration are included. The general nonlinear simulation framework and the specifics for each particular aircraft are documented.

  11. Simultaneous calibration of ensemble river flow predictions over an entire range of lead times

    NASA Astrophysics Data System (ADS)

    Hemri, S.; Fundel, F.; Zappa, M.

    2013-10-01

    Probabilistic estimates of future water levels and river discharge are usually simulated with hydrologic models using ensemble weather forecasts as main inputs. As hydrologic models are imperfect and the meteorological ensembles tend to be biased and underdispersed, the ensemble forecasts for river runoff are typically biased and underdispersed, too. Thus, in order to achieve both reliable and sharp predictions, statistical postprocessing is required. In this work, Bayesian model averaging (BMA) is applied to statistically postprocess raw ensemble runoff forecasts for a catchment in Switzerland, at lead times ranging from 1 to 240 h. The raw forecasts have been obtained using deterministic and ensemble forcing meteorological models with different forecast lead-time ranges. First, BMA is applied based on mixtures of univariate normal distributions, under the assumption of independence between distinct lead times. Then, the independence assumption is relaxed in order to estimate multivariate runoff forecasts over the entire range of lead times simultaneously, based on a BMA version that uses multivariate normal distributions. Since river runoff is a highly skewed variable, Box-Cox transformations are applied in order to achieve approximate normality. Both univariate and multivariate BMA approaches are able to generate well-calibrated probabilistic forecasts that are considerably sharper than climatological forecasts. Additionally, multivariate BMA provides a promising approach for incorporating temporal dependencies into the postprocessed forecasts. Its major advantage over univariate BMA is an increase in reliability when the forecast system is changing due to model availability.
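
    A compact sketch of the univariate ingredient follows: a BMA mixture of normals centered on (assumed bias-corrected) ensemble members, with softmax-parameterized weights and a shared spread fitted by direct likelihood maximization rather than the EM algorithm more commonly used for BMA. The Box-Cox transformation of runoff, member-specific variances, and the multivariate extension are omitted, and all data below are synthetic.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      def fit_bma(F, y):
          """Fit p(y) = sum_k w_k N(y; F[:, k], sigma^2) by maximum
          likelihood over softmax weights and a shared sigma."""
          T, K = F.shape
          def nll(theta):
              w = np.exp(theta[:K] - theta[:K].max()); w /= w.sum()
              dens = norm.pdf(y[:, None], loc=F, scale=np.exp(theta[K])) @ w
              return -np.log(np.maximum(dens, 1e-300)).sum()
          res = minimize(nll, np.zeros(K + 1), method="Nelder-Mead")
          w = np.exp(res.x[:K] - res.x[:K].max()); w /= w.sum()
          return w, np.exp(res.x[K])

      rng = np.random.default_rng(2)
      truth = rng.normal(size=200)
      F = truth[:, None] + rng.normal(scale=[0.3, 0.6, 1.2], size=(200, 3))
      w, sigma = fit_bma(F, truth)  # weights should favor the sharpest member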

  12. [Simulating the effects of climate change and fire disturbance on aboveground biomass of boreal forests in the Great Xing'an Mountains, Northeast China].

    PubMed

    Luo, Xu; Wang, Yu Li; Zhang, Jin Quan

    2018-03-01

    Predicting the effects of climate warming and fire disturbance on forest aboveground biomass is a central task in studies of the terrestrial ecosystem carbon cycle. The alteration of temperature, precipitation, and disturbance regimes induced by climate warming will affect the carbon dynamics of forest ecosystems. Boreal forest is an important forest type in China, whose responses to climate warming and fire disturbance are increasingly evident. In this study, we used the forest landscape model LANDIS PRO to simulate the effects of climate change on aboveground biomass of boreal forests in the Great Xing'an Mountains, and compared the direct effects of climate warming with the effects of climate warming-induced fires on forest aboveground biomass. The results showed that the aboveground biomass in this area increased under climate warming scenarios and fire disturbance scenarios with increased intensity. Under the current climate and fire regime scenario, the aboveground biomass in this area was (97.14±5.78) t·hm⁻², and the value would increase to (97.93±5.83) t·hm⁻² under the B1F2 scenario. Under the A2F3 scenario, aboveground biomass at the landscape scale was relatively higher in the simulated periods of years 100-150 and 150-200, with values of (100.02±3.76) t·hm⁻² and (110.56±4.08) t·hm⁻², respectively. Compared to the current fire regime scenario, the predicted biomass at the landscape scale increased by (0.56±1.45) t·hm⁻² under the CF2 scenario (fire intensity increased by 30%) in some simulated periods, and decreased by (7.39±1.79) t·hm⁻² under the CF3 scenario (fire intensity increased by 230%) over the entire simulation period. Coniferous and broadleaved species responded very differently under future climate warming scenarios: the simulated biomass of both Larix gmelinii and Betula platyphylla showed a decreasing trend with climate change, whereas the simulated biomass of Pinus sylvestris var. mongolica, Picea koraiensis and Populus davidiana showed increasing trends to different degrees over the entire simulation period. There was a time lag in the direct effect of climate warming on biomass for coniferous and broadleaved species. The response time of coniferous species to climate warming was 25-30 years, longer than that of broadleaved species. The forest landscape in the Great Xing'an Mountains was sensitive to the interactive effect of climate warming (high CO₂ emissions) and high-intensity fire disturbance. Future climate warming and high-intensity forest fire disturbance would significantly change the composition and structure of the forest ecosystem.

  13. Numerical Study of the Plasticity-Induced Stabilization Effect on Martensitic Transformations in Shape Memory Alloys

    NASA Astrophysics Data System (ADS)

    Junker, Philipp; Hempel, Philipp

    2017-12-01

    It is well known that plastic deformations in shape memory alloys stabilize the martensitic phase. Furthermore, knowledge of the plastic state is crucial for a reliable sustainability analysis of construction parts. Numerical simulations serve as a tool for the realistic investigation of the complex interactions between phase transformations and plastic deformations. To account for irreversible deformations as well, we expand an energy-based material model by including a non-linear isotropic hardening plasticity model. An implementation of this material model in commercial finite element programs, e.g., Abaqus, offers the opportunity to analyze entire structural components at low cost and with fast computation times. Along with the theoretical derivation and expansion of the model, several simulation results for various boundary value problems are presented and interpreted for improved construction design.
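
    The plasticity building block named above can be illustrated with a one-dimensional elastic-predictor/plastic-corrector (return-mapping) step. Linear isotropic hardening is used here so the corrector has a closed form; the paper's model couples a non-linear hardening law to an energy-based phase-transformation model, and all material values below are invented.

      import numpy as np

      def return_map_1d(strain_path, E=70e3, sy0=200.0, H=1.0e3):
          """1-D rate-independent plasticity with linear isotropic
          hardening sy(a) = sy0 + H*a (stresses in MPa)."""
          ep, a, out = 0.0, 0.0, []
          for eps in strain_path:
              sig_tr = E * (eps - ep)              # elastic trial stress
              f = abs(sig_tr) - (sy0 + H * a)      # yield function
              if f > 0.0:                          # plastic corrector
                  dg = f / (E + H)                 # closed form (linear H)
                  ep += dg * np.sign(sig_tr)
                  a += dg
                  out.append(sig_tr - E * dg * np.sign(sig_tr))
              else:
                  out.append(sig_tr)
          return np.array(out)

      path = np.concatenate([np.linspace(0.0, 0.01, 50),
                             np.linspace(0.01, 0.0, 50)])
      stress = return_map_1d(path)    # loading, yield, elastic unloading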

  14. Simulation of the August 21, 2017 Solar Eclipse Using the Whole Atmosphere Community Climate Model - eXtended (WACCM-X)

    NASA Astrophysics Data System (ADS)

    McInerney, J. M.; Liu, H.; Marsh, D. R.; Solomon, S. C.; Vitt, F.; Conley, A. J.

    2017-12-01

    The total solar eclipse of August 21, 2017 transited the entire continental United States. This presented an opportunity for model simulation of eclipse effects on the lower atmosphere, upper atmosphere, and ionosphere. The Community Earth System Model (CESM), v2.0, now includes a functional version of the Whole Atmosphere Community Climate Model - eXtended (WACCM-X) that has a fully interactive ionosphere and thermosphere. WACCM-X, with a model top up to 700 kilometers, is an atmospheric component of CESM and is being developed at the National Center for Atmospheric Research in Boulder, Colorado. Here we present results from simulations using this model during a total solar eclipse. This not only gives insights into the effects of the eclipse through the entire atmosphere from the surface through the ionosphere/thermosphere, but also serves as a validation tool for the model.

  15. Solar Magnetic Carpet III: Coronal Modelling of Synthetic Magnetograms

    NASA Astrophysics Data System (ADS)

    Meyer, K. A.; Mackay, D. H.; van Ballegooijen, A. A.; Parnell, C. E.

    2013-09-01

    This article is the third in a series working towards the construction of a realistic, evolving, non-linear force-free coronal-field model for the solar magnetic carpet. Here, we present preliminary results of 3D time-dependent simulations of the small-scale coronal field of the magnetic carpet. Four simulations are considered, each with the same evolving photospheric boundary condition: a 48-hour time series of synthetic magnetograms produced from the model of Meyer et al. ( Solar Phys. 272, 29, 2011). Three simulations include a uniform, overlying coronal magnetic field of differing strength, the fourth simulation includes no overlying field. The build-up, storage, and dissipation of magnetic energy within the simulations are studied. In particular, we study their dependence upon the evolution of the photospheric magnetic field and the strength of the overlying coronal field. We also consider where energy is stored and dissipated within the coronal field. The free magnetic energy built up is found to be more than sufficient to power small-scale, transient phenomena such as nanoflares and X-ray bright points, with the bulk of the free energy found to be stored low down, between 0.5 - 0.8 Mm. The energy dissipated is currently found to be too small to account for the heating of the entire quiet-Sun corona. However, the form and location of energy-dissipation regions qualitatively agree with what is observed on small scales on the Sun. Future MHD modelling using the same synthetic magnetograms may lead to a higher energy release.

  16. Numerical Simulation Of Silicon-Ribbon Growth

    NASA Technical Reports Server (NTRS)

    Woda, Ben K.; Kuo, Chin-Po; Utku, Senol; Ray, Sujit Kumar

    1987-01-01

    A mathematical model that includes nonlinear effects is in development to simulate the growth of silicon ribbon from the melt. The model takes account of the entire temperature and stress history of the ribbon. Numerical simulations performed with the new model help in the search for the temperature distribution, pulling speed, and other conditions favoring the growth of wide, flat, relatively defect-free silicon ribbons for solar photovoltaic cells at economically attractive, high production rates. The model is also applicable to materials other than silicon.

  17. Data on the mixing of non-Newtonian fluids by a Rushton turbine in a cylindrical tank.

    PubMed

    Khapre, Akhilesh; Munshi, Basudeb

    2016-09-01

    The paper focuses on the data collected from the mixing of shear-thinning non-Newtonian fluids in a cylindrical tank by a Rushton turbine. The data presented are obtained using Computational Fluid Dynamics (CFD) simulation of the fluid flow field in the entire tank volume. The CFD validation data for this study are reported in the research article 'Numerical investigation of hydrodynamic behavior of shear thinning fluids in stirred tank' (Khapre and Munshi, 2015) [1]. The tracer injection method is used for the prediction of the mixing time and mixing efficiency of the Rushton turbine impeller.
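
    On a recorded tracer concentration signal, the mixing-time estimate mentioned above reduces to finding when the response settles permanently inside a tolerance band around the fully mixed value. A minimal sketch with a 5% band and a synthetic first-order response standing in for a CFD probe signal:

      import numpy as np

      def mixing_time(t, c, band=0.05):
          """Last time the tracer signal leaves the +/- band around its
          final (fully mixed) value; the usual 95% mixing-time rule."""
          c_inf = c[-1]
          outside = np.nonzero(np.abs(c - c_inf) > band * abs(c_inf))[0]
          if outside.size == 0:
              return t[0]
          return t[min(outside[-1] + 1, len(t) - 1)]

      t = np.linspace(0.0, 60.0, 601)
      c = 1.0 - np.exp(-t / 8.0)          # toy probe response, c_inf ~ 1
      print(mixing_time(t, c))            # ~ 8*ln(20) ~ 24 time units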

  18. Theory and Circuit Model for Lossy Coaxial Transmission Line

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Genoni, T. C.; Anderson, C. N.; Clark, R. E.

    2017-04-01

    The theory of signal propagation in lossy coaxial transmission lines is revisited, and new approximate analytic formulas for the line impedance and attenuation are derived. The accuracy of these formulas from DC to 100 GHz is demonstrated by comparison with numerical solutions of the exact field equations. Based on this analysis, a new circuit model is described that accurately reproduces the line response over the entire frequency range. Circuit model calculations are in excellent agreement with the numerical and analytic results, and with finite-difference time-domain simulations that resolve the skin depths of the conducting walls.

  19. Visual skills in airport-security screening.

    PubMed

    McCarley, Jason S; Kramer, Arthur F; Wickens, Christopher D; Vidoni, Eric D; Boot, Walter R

    2004-05-01

    An experiment examined visual performance in a simulated luggage-screening task. Observers participated in five sessions of a task requiring them to search for knives hidden in x-ray images of cluttered bags. Sensitivity and response times improved reliably as a result of practice. Eye movement data revealed that sensitivity increases were produced entirely by changes in observers' ability to recognize target objects, and not by changes in the effectiveness of visual scanning. Moreover, recognition skills were in part stimulus-specific, such that performance was degraded by the introduction of unfamiliar target objects. Implications for screener training are discussed.

  20. An improved cylindrical FDTD method and its application to field-tissue interaction study in MRI.

    PubMed

    Chi, Jieru; Liu, Feng; Xia, Ling; Shao, Tingting; Mason, David G; Crozier, Stuart

    2010-01-01

    This paper presents a three-dimensional finite-difference time-domain (FDTD) scheme in cylindrical coordinates with an improved algorithm for accommodating the numerical singularity associated with the polar axis. The regularization of this singularity problem is based entirely on Ampere's law. The proposed algorithm is detailed and verified against a problem with a known solution obtained from a commercial electromagnetic simulation package. The numerical scheme is also illustrated by modeling high-frequency RF field-human body interactions in MRI. The results demonstrate the accuracy and capability of the proposed algorithm.
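
    The axis treatment can be made concrete for the axisymmetric (m = 0) mode: integrating Ampere's law over a small disc of radius dr/2 centered on the axis replaces the singular 1/r curl term with a finite update. The sketch below shows only the Ez half-update on an (r, z) grid; grid sizes and steps are illustrative, and the full scheme in the paper also handles higher azimuthal modes.

      import numpy as np

      eps0 = 8.854e-12
      nr, nz, dr, dz = 64, 64, 1e-3, 1e-3
      dt = 0.5 / (3e8 * np.sqrt(1/dr**2 + 1/dz**2))   # CFL-limited step

      Ez = np.zeros((nr, nz))       # Ez[i, k] sits at r = i*dr
      Hphi = np.zeros((nr, nz))     # Hphi[i, k] sits at r = (i + 0.5)*dr

      def update_Ez(Ez, Hphi):
          # Off axis: (1/r) d(r*Hphi)/dr evaluated at r = i*dr.
          i = np.arange(1, nr - 1)[:, None]
          rh = (i + 0.5) * dr * Hphi[1:nr-1, :]
          rl = (i - 0.5) * dr * Hphi[0:nr-2, :]
          Ez[1:nr-1, :] += dt * (rh - rl) / (eps0 * (i * dr) * dr)
          # On axis: Ampere's law over the disc of radius dr/2 gives
          # dEz/dt = 4*Hphi(dr/2)/(eps0*dr), removing the 1/r singularity.
          Ez[0, :] += dt * 4.0 * Hphi[0, :] / (eps0 * dr)
          return Ez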

  1. Broadband ground-motion simulation using a hybrid approach

    USGS Publications Warehouse

    Graves, R.W.; Pitarka, A.

    2010-01-01

    This paper describes refinements to the hybrid broadband ground-motion simulation methodology of Graves and Pitarka (2004), which combines a deterministic approach at low frequencies (f < 1 Hz) with a semi-stochastic approach at high frequencies (f > 1 Hz). In our approach, fault rupture is represented kinematically and incorporates spatial heterogeneity in slip, rupture speed, and rise time. The prescribed slip distribution is constrained to follow an inverse wavenumber-squared fall-off, and the average rupture speed is set at 80% of the local shear-wave velocity, which is then adjusted such that the rupture propagates faster in regions of high slip and slower in regions of low slip. We use a Kostrov-like slip-rate function having a rise time proportional to the square root of slip, with the average rise time across the entire fault constrained empirically. Recent observations from large surface-rupturing earthquakes indicate a reduction of rupture propagation speed and a lengthening of rise time in the near surface, which we model by applying a 70% reduction of the rupture speed and increasing the rise time by a factor of 2 in a zone extending from the surface to a depth of 5 km. We demonstrate the fidelity of the technique by modeling the strong-motion recordings from the Imperial Valley, Loma Prieta, Landers, and Northridge earthquakes.
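
    The rise-time prescription is simple enough to sketch directly: rise time grows with the square root of slip, the fault average is pinned to an empirical value, and the near-surface zone is lengthened (the accompanying rupture-speed reduction is not shown). The slip model and numbers below are invented; only the scaling rules come from the abstract.

      import numpy as np

      def rise_times(slip, depth_km, mean_rise, shallow_factor=2.0, zcut=5.0):
          """Kostrov-like rise times: tr ~ sqrt(slip), scaled so the
          pre-adjustment fault average equals mean_rise, then lengthened
          by shallow_factor above the near-surface cutoff depth."""
          tr = np.sqrt(np.maximum(slip, 0.0))
          tr *= mean_rise / tr.mean()
          return np.where(depth_km < zcut, shallow_factor * tr, tr)

      rng = np.random.default_rng(3)
      slip = rng.lognormal(mean=0.0, sigma=0.7, size=(20, 40))        # m
      depth = np.linspace(0.0, 15.0, 20)[:, None] * np.ones((1, 40))  # km
      tr = rise_times(slip, depth, mean_rise=1.2)                     # s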

  2. Generation of Plausible Hurricane Tracks for Preparedness Exercises

    DTIC Science & Technology

    2017-04-25

    wind extents are simulated by Poisson regression and temporal filtering. The un-optimized MATLAB code runs in less than a minute and is integrated into...of real hurricanes. After wind radii have been simulated for the entire track, median filtering, attenuation over land, and smoothing clean up the wind

  3. A Review of the Literature on Training Simulators: Translators: Transfer of Training and Simulator Fidelity.

    DTIC Science & Technology

    1984-04-01

    Noise is distracting, especially in complex tasks that require close attention and concentration (Finkelman 1975). Improper lighting (Tinker 1943...before coping with the entire system. However, the functional fidelity may be affected due to the isolation of a particular subsystem. Curry (1981

  4. Study of intermolecular contacts in the proline-rich homeodomain (PRH)-DNA complex using molecular dynamics simulations.

    PubMed

    Jalili, Seifollah; Karami, Leila

    2012-03-01

    The proline-rich homeodomain (PRH)-DNA complex consists of a protein with 60 residues and a 13-base-pair DNA. The PRH protein is a transcription factor that plays a key role in the regulation of gene expression. PRH is a significant member of the Q50 class of homeodomain proteins. The homeodomain section of PRH is essential for binding to DNA and mediates sequence-specific DNA binding. Three 20-ns molecular dynamics (MD) simulations (free protein, free DNA, and protein-DNA complex) in explicit solvent water were performed to elucidate the intermolecular contacts in the PRH-DNA complex and the role of the dynamics of water molecules forming water-mediated contacts. The simulations provide a detailed picture of the trajectories of hydration water molecules and show that some water molecules in the protein-DNA interface exchange with bulk water. Most of the contacts are direct interactions between the protein and DNA, including both specific and non-specific contacts, but several water-mediated polar contacts were also observed. The specific interaction between Gln50 and C18 and the water-mediated hydrogen bond between Gln50 and T7 were found to be present during almost the entire simulation. These results show good consistency with experimental and previous computational studies. Structural properties such as root-mean-square deviations (RMSD), root-mean-square fluctuations (RMSF) and secondary structure were also analyzed as functions of time. Analyses of the trajectories showed that the dynamic fluctuations of both the protein and the DNA were lowered by complex formation.

  5. Stochastic generation of hourly rainstorm events in Johor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nojumuddin, Nur Syereena; Yusof, Fadhilah; Yusop, Zulkifli

    2015-02-03

    Engineers and researchers in water-related studies are often faced with the problem of having insufficiently long rainfall records. Practical and effective methods must be developed to generate unavailable data from the limited available data. Therefore, this paper presents a Monte Carlo-based stochastic hourly rainfall generation model to complement the unavailable data. The Monte Carlo simulation used in this study is based on the best-fit distribution of storm characteristics. Using Maximum Likelihood Estimation (MLE) and the Anderson-Darling goodness-of-fit test, the lognormal distribution was found to fit the rainfall data best, and the Monte Carlo simulation was therefore based on the lognormal distribution. The proposed model was verified by comparing the statistical moments of rainstorm characteristics from the combination of the observed rainstorm events over 10 years and simulated rainstorm events over 30 years of rainfall records with those from the entire 40 years of observed rainfall data, based on the hourly rainfall data at station J1 in Johor over the period 1972-2011. The absolute percentage errors of the duration-depth, duration-inter-event time and depth-inter-event time relationships were used as the accuracy test. The results showed that the first four product-moments of the observed rainstorm characteristics were close to those of the simulated rainstorm characteristics. The proposed model can be used as a basis to derive rainfall intensity-duration-frequency relationships in Johor.
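
    The core fit-then-simulate step is easy to reproduce in outline: fit a lognormal to observed rainstorm depths by maximum likelihood, draw synthetic events, and compare product-moments, mirroring the moment-based verification described above. The toy data below stand in for the hourly Johor records, and fixing the location parameter at zero is an assumption.

      import numpy as np
      from scipy import stats

      def simulate_storm_depths(observed_mm, n_events, seed=None):
          """MLE lognormal fit, Monte Carlo simulation, moment check."""
          shape, loc, scale = stats.lognorm.fit(observed_mm, floc=0.0)
          sim = stats.lognorm.rvs(shape, loc=loc, scale=scale,
                                  size=n_events, random_state=seed)
          for name, x in (("observed", np.asarray(observed_mm)),
                          ("simulated", sim)):
              print(f"{name}: mean={x.mean():.1f} var={x.var():.1f} "
                    f"skew={stats.skew(x):.2f} kurt={stats.kurtosis(x):.2f}")
          return sim

      obs = stats.lognorm.rvs(0.8, scale=25.0, size=300, random_state=0)
      sim = simulate_storm_depths(obs, n_events=1000, seed=1)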

  6. Sound field reconstruction within an entire cavity by plane wave expansions using a spherical microphone array.

    PubMed

    Wang, Yan; Chen, Kean

    2017-10-01

    A spherical microphone array has proved effective in reconstructing an enclosed sound field by a superposition of spherical wave functions in the Fourier domain. It allows successful reconstructions in the vicinity of the array, but the accuracy degrades with distance. In order to extend the effective reconstruction to the entire cavity, a plane-wave basis in the space domain is used, owing to its non-decaying propagating characteristic, and compared with the conventional spherical wave function method in a low-frequency sound field within a cylindrical cavity. The sensitivity to measurement noise and the effects of the number of plane waves and of the measurement positions are discussed. Simulations show that under the same measurement conditions, the plane wave function method is superior in terms of reconstruction accuracy and data processing efficiency; that is, imaging of the entire sound field can be achieved in a single calculation, instead of translating local sets of coefficients with respect to every measurement position into a global one. An experiment was conducted inside an aircraft cabin mock-up for validation. Additionally, this method provides an alternative way to recover the coefficients of high-order spherical wave functions in a global coordinate system without coordinate translations with respect to local origins.
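
    The plane-wave expansion step can be written as a small least-squares problem; a self-contained sketch with synthetic data (the array geometry, frequency and basis size are illustrative, not the paper's):

      import numpy as np

      c0, f = 343.0, 200.0                    # speed of sound [m/s], frequency [Hz]
      k = 2 * np.pi * f / c0                  # wavenumber
      rng = np.random.default_rng(0)

      def basis(points, directions):
          """H[m, n] = exp(-j k d_n . r_m): plane-wave basis at the given points."""
          return np.exp(-1j * k * points @ directions.T)

      dirs = rng.normal(size=(64, 3))
      dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)      # wave directions
      mics = rng.normal(size=(32, 3))
      mics = 0.1 * mics / np.linalg.norm(mics, axis=1, keepdims=True)  # spherical array

      p_meas = basis(mics, np.array([[1.0, 0.0, 0.0]]))[:, 0]  # synthetic measurement

      coeffs, *_ = np.linalg.lstsq(basis(mics, dirs), p_meas, rcond=None)
      # With the coefficients fixed, the field anywhere in the cavity follows from
      # one matrix-vector product (regularize the fit when the data are noisy):
      p_hat = basis(np.array([[0.5, 0.2, 0.0]]), dirs) @ coeffs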

  7. Spatial Evaluation and Verification of Earthquake Simulators

    NASA Astrophysics Data System (ADS)

    Wilson, John Max; Yoder, Mark R.; Rundle, John B.; Turcotte, Donald L.; Schultz, Kasey W.

    2017-06-01

    In this paper, we address the problem of verifying earthquake simulators with observed data. Earthquake simulators are a class of computational simulations which attempt to mirror the topological complexity of fault systems on which earthquakes occur. In addition, the physics of friction and elastic interactions between fault elements are included in these simulations. Simulation parameters are adjusted so that natural earthquake sequences are matched in their scaling properties. Physically based earthquake simulators can generate many thousands of years of simulated seismicity, allowing for a robust capture of the statistical properties of large, damaging earthquakes that have long recurrence time scales. Verification of simulations against currently observed earthquake seismicity is necessary, and following past simulator and forecast-model verification methods, we address the challenges of applying spatial forecast verification to simulators: namely, that simulator outputs are confined to the modeled faults, while observed earthquake epicenters often occur off of known faults. We present two methods for addressing this discrepancy: a simplistic approach whereby observed earthquakes are shifted to the nearest fault element, and a smoothing method based on the power laws of the epidemic-type aftershock sequence (ETAS) model, which distributes the seismicity of each simulated earthquake over the entire test region at a rate that decays with epicentral distance. To test these methods, a receiver operating characteristic plot was produced by comparing the rate maps to observed m>6.0 earthquakes in California since 1980. We found that the nearest-neighbor mapping produced poor forecasts, while the ETAS power-law method produced rate maps that agreed reasonably well with observations.
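
    A sketch of the power-law smoothing idea (the paper's exact ETAS kernel and parameters are not given in the abstract; d0 and q below are illustrative):

      import numpy as np

      def powerlaw_rate_map(epicenters, grid_x, grid_y, d0=5.0, q=1.5):
          """Spread each simulated event over the whole test region at a rate
          decaying with epicentral distance, rate ~ (d^2 + d0^2)^(-q)."""
          X, Y = np.meshgrid(grid_x, grid_y)
          rate = np.zeros_like(X)
          for ex, ey in epicenters:
              rate += ((X - ex) ** 2 + (Y - ey) ** 2 + d0 ** 2) ** (-q)
          return rate / rate.sum()               # normalized rate (probability) map

      # Hypothetical on-fault epicenters in km coordinates:
      sim_events = np.array([[10.0, 20.0], [12.0, 22.0], [80.0, 55.0]])
      rmap = powerlaw_rate_map(sim_events, np.linspace(0, 100, 101), np.linspace(0, 100, 101))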

  8. Time-reversal MUSIC imaging of extended targets.

    PubMed

    Marengo, Edwin A; Gruber, Fred K; Simonetti, Francesco

    2007-08-01

    This paper develops, within a general framework applicable to rather arbitrary electromagnetic and acoustic remote sensing systems, a theory of time-reversal "MUltiple SIgnal Classification" (MUSIC)-based imaging of extended (non-point-like) scatterers (targets). The general analysis applies to arbitrary remote sensing geometry and sheds light on how the singular system of the scattering matrix relates to the geometrical and propagation characteristics of the entire transmitter-target-receiver system, and how to use this effect for imaging. All the developments are derived within exact scattering theory, which includes multiple scattering effects. The derived time-reversal MUSIC methods include both interior sampling and exterior sampling (or enclosure) approaches. For presentation simplicity, particular attention is given to the time-harmonic case, where the informational wave modes employed for target interrogation are purely spatial, but the corresponding generalization to broadband fields is also given. This paper includes computer simulations illustrating the derived theory and algorithms.
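
    For orientation, a sketch of the classical point-target version of time-reversal MUSIC (the paper's contribution extends this to extended targets); the array, wavelength and Born-approximation data below are all synthetic assumptions:

      import numpy as np

      k = 2 * np.pi / 0.1                       # wavenumber for a 0.1 m wavelength

      def steering(array_pts, r):
          """Free-space Green's-function (steering) vector from point r to the array."""
          d = np.linalg.norm(array_pts - r, axis=1)
          return np.exp(1j * k * d) / d

      def music_image(K, array_pts, trial_pts, n_sig):
          """Pseudospectrum 1/||P_noise g(r)||^2 from the multistatic matrix K."""
          U, s, Vh = np.linalg.svd(K)
          Un = U[:, n_sig:]                     # noise subspace
          img = np.empty(len(trial_pts))
          for i, r in enumerate(trial_pts):
              g = steering(array_pts, r)
              g /= np.linalg.norm(g)
              img[i] = 1.0 / np.linalg.norm(Un.conj().T @ g) ** 2
          return img                            # peaks at the scatterer locations

      # Synthetic multistatic matrix for two point scatterers (Born approximation):
      arr = np.column_stack([np.linspace(-1, 1, 16), np.zeros(16)])
      scat = [np.array([0.2, 2.0]), np.array([-0.4, 2.5])]
      K = sum(np.outer(steering(arr, s), steering(arr, s)) for s in scat)
      trial = np.array([[x, y] for x in np.linspace(-1, 1, 21) for y in np.linspace(1.5, 3, 16)])
      img = music_image(K, arr, trial, n_sig=len(scat))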

  9. Simulation of rotor blade element turbulence

    NASA Technical Reports Server (NTRS)

    Mcfarland, R. E.; Duisenberg, Ken

    1995-01-01

    A piloted, motion-based simulation of Sikorsky's Black Hawk helicopter was used as a platform for the investigation of rotorcraft responses to vertical turbulence. By using an innovative temporal and geometrical distribution algorithm that preserved the statistical characteristics of the turbulence over the rotor disc, stochastic velocity components were applied at each of twenty blade-element stations. This model was implemented on NASA Ames' Vertical Motion Simulator (VMS), and ten test pilots were used to establish that the model created realistic cues. The objectives of this research included the establishment of a simulation-technology basis for future investigation into real-time turbulence modeling. This goal was achieved; our extensive additions to the rotor model added less than 10 percent computational overhead. Using a VAX 9000 computer, the entire simulation required a cycle time of less than 12 msec. Pilot opinion during this simulation was generally quite favorable. For low-speed flight the consensus was that SORBET (an acronym for the title) was better than the conventional body-fixed model used for comparison purposes, which was judged too violent (like a washboard). For high-speed flight the pilots could not identify differences between these models. These opinions were something of a surprise because only the vertical turbulence component on the rotor system was implemented in SORBET. Because of the finite-element distribution of the inputs, induced outputs were observed in all translational and rotational axes. Extensive post-simulation spectral analyses of the SORBET model suggest that proper rotorcraft turbulence modeling requires that vertical atmospheric disturbances not be superimposed at the vehicle center of gravity but, rather, be input into the rotor system, where the rotor-to-body transfer function severely attenuates high-frequency rotorcraft responses.

  10. Snake fangs: 3D morphological and mechanical analysis by microCT, simulation, and physical compression testing.

    PubMed

    du Plessis, Anton; Broeckhoven, Chris; le Roux, Stephan G

    2018-01-01

    This Data Note provides data from an experimental campaign to analyse the detailed internal and external morphology and mechanical properties of venomous snake fangs. The aim of the experimental campaign was to investigate the evolutionary development of 3 fang phenotypes and investigate their mechanical behaviour. The study involved the use of load simulations to compare maximum Von Mises stress values when a load is applied to the tip of the fang. The conclusions of this study have been published elsewhere, but in this data note we extend the analysis, providing morphological comparisons including details such as curvature comparisons, thickness, etc. Physical compression results of individual fangs, though reported in the original paper, were also extended here by calculating the effective elastic modulus of the entire snake fang structure including internal cavities for the first time. This elastic modulus of the entire fang is significantly lower than the locally measured values previously reported from indentation experiments, highlighting the possibility that the elastic modulus is higher on the surface than in the rest of the material. The micro-computed tomography (microCT) data are presented both in image stacks and in the form of STL files, which simplifies the handling of the data and allows its re-use for future morphological studies. These fangs might also serve as bio-inspiration for future hypodermic needles.

  11. Valles Marineris as a Cryokarstic Structure Formed by a Giant Dyke System: Support From New Analogue Experiments

    NASA Astrophysics Data System (ADS)

    Ozeren, M. S.; Sengor, A. M. C.; Acar, D.; Ülgen, S. C.; Onsel, I. E.

    2014-12-01

    Valles Marineris is the most significant near-linear depression on Mars. It is some 4000 km long, up to about 200 km wide and some 7 km deep. Although its margins look parallel at first sight, the entire structure has a long spindle shape with a significant enlargement in its middle (Melas Chasma) caused by cuspate slope-retreat mechanisms. Farther to its north is Hebes Chasma, an entirely closed depression with a more pronounced spindle shape. Tithonium Chasma is a parallel, but much narrower, depression to its northeast. All these chasmae have axes parallel with one another, and such structures occur nowhere else on Mars. A scabland surface exists to the east of Valles Marineris, and the causative water mass seems to have issued from it. The great resemblance of these chasmae on Mars to poljes in the karstic regions on Earth has led us to assume that they owe their existence to dissolution of the rock layers underlying them. We assumed that the dissolving layer consisted of water ice forming substantial layers, in fact entirely frozen seas several km deep. We simulated this geometry by using bentonite and flour layers (in different experiments) overlying layers of ice, in which a resistance coil was used to simulate a dyke. We used different thicknesses of bentonite and flour overlying ice layers, again of various thicknesses. The flour seems to simulate the Martian crust better because on Mars, g is only about 3/8ths of its value on Earth, so (for equal crustal density) the depth to which the cohesion term C remains important in the Mohr-Coulomb shear failure criterion is about 8/3 times greater. As examples we show two of those experiments, in which both the rock-analogue and ice layers were 1.5 cm thick. Perfect analogues of Valles Marineris formed above the dyke-analogue thermal source, complete with the near-linear structure, overall flat spindle shape, cuspate margins, a central ridge, parallel side faults, and parallel depressions resembling Tithonium Chasma. When water was allowed to drain from the beginning, closed depressions formed that have an amazing resemblance to Hebes Chasma. We postulate that the entire system of chasmae discussed here formed atop a major dyke swarm some 4000 km in length, not dissimilar to the 3500-km-long Mesoproterozoic (Ectasian) dyke swarm disrupting the Canadian Shield.
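
    In LaTeX form, the gravity-scaling argument above reads (with ρ the crustal density, μ the friction coefficient, and z_C the depth range over which cohesion matters):

      \tau = C + \mu\,\sigma_n,\qquad \sigma_n \sim \rho g z
      \;\Longrightarrow\;
      z_C \sim \frac{C}{\mu\,\rho\,g},\qquad
      \frac{z_C^{\mathrm{Mars}}}{z_C^{\mathrm{Earth}}}
        = \frac{g_{\mathrm{Earth}}}{g_{\mathrm{Mars}}} \approx \frac{8}{3}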

  12. A study of the kinetic energy generation with general circulation models

    NASA Technical Reports Server (NTRS)

    Chen, T.-C.; Lee, Y.-H.

    1983-01-01

    History data from winter simulations with the GLAS climate model and the NCAR Community Climate Model are used to examine the generation of atmospheric kinetic energy. The contrast between the geographic distributions of the generation of kinetic energy and of the divergence of kinetic energy flux shows that kinetic energy is generated on the upstream side of jets, transported to the downstream side, and destroyed there. The contributions of the time-mean and transient modes to the counterbalance between generation of kinetic energy and divergence of kinetic energy flux are also investigated. It is observed that the kinetic energy generated by the time-mean mode is essentially redistributed by the time-mean flow, while that generated by the transient flow is mainly responsible for the maintenance of the kinetic energy of the entire atmospheric flow.

  13. A parallel algorithm for the initial screening of space debris collisions prediction using the SGP4/SDP4 models and GPU acceleration

    NASA Astrophysics Data System (ADS)

    Lin, Mingpei; Xu, Ming; Fu, Xiaoyu

    2017-05-01

    Currently, a tremendous amount of space debris in Earth's orbit imperils operational spacecraft. It is essential to undertake risk assessments of collisions and predict dangerous encounters in space. However, collision predictions for an enormous amount of space debris give rise to large-scale computations. In this paper, a parallel algorithm is established on the Compute Unified Device Architecture (CUDA) platform of NVIDIA Corporation for collision prediction. According to the parallel structure of NVIDIA graphics processors, a block decomposition strategy is adopted in the algorithm. Space debris is divided into batches, and the computation and data transfer operations of adjacent batches overlap. As a consequence, the latency to access shared memory during the entire computing process is significantly reduced, and a higher computing speed is reached. Theoretically, a simulation of collision prediction for space debris of any amount and for any time span can be executed. To verify this algorithm, a simulation example including 1382 pieces of debris, whose operational time scales vary from 1 min to 3 days, is conducted on Tesla C2075 of NVIDIA. The simulation results demonstrate that with the same computational accuracy as that of a CPU, the computing speed of the parallel algorithm on a GPU is 30 times that on a CPU. Based on this algorithm, collision prediction of over 150 Chinese spacecraft for a time span of 3 days can be completed in less than 3 h on a single computer, which meets the timeliness requirement of the initial screening task. Furthermore, the algorithm can be adapted for multiple tasks, including particle filtration, constellation design, and Monte-Carlo simulation of an orbital computation.
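
    The block-decomposition idea can be sketched independently of CUDA; below is a CPU stand-in (NumPy) for the batched initial screening, where on the GPU the computation on one batch would overlap the data transfer of the next (positions and threshold are illustrative, and the SGP4/SDP4 propagation itself is omitted):

      import numpy as np

      def screen_batches(positions, threshold_km=10.0, batch=1024):
          """Flag debris pairs whose separation at one epoch is below threshold_km,
          processing the catalogue in batches as in the block decomposition."""
          n, hits = len(positions), []
          for i0 in range(0, n, batch):
              a = positions[i0:i0 + batch]
              for j0 in range(i0, n, batch):
                  b = positions[j0:j0 + batch]
                  d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
                  for x, y in zip(*np.nonzero(d < threshold_km)):
                      if i0 + x < j0 + y:          # keep each pair once
                          hits.append((i0 + x, j0 + y))
          return hits

      rng = np.random.default_rng(1)
      pos = rng.uniform(-7000.0, 7000.0, size=(4096, 3))  # toy ECI positions [km]
      pairs = screen_batches(pos)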

  14. An Equivalent cross-section Framework for improving computational efficiency in Distributed Hydrologic Modelling

    NASA Astrophysics Data System (ADS)

    Khan, Urooj; Tuteja, Narendra; Ajami, Hoori; Sharma, Ashish

    2014-05-01

    While the potential uses and benefits of distributed catchment simulation models are undeniable, their practical usage is often hindered by the computational resources they demand. To reduce the computational time/effort in distributed hydrological modelling, a new approach of modelling over an equivalent cross-section is investigated, where topographic and physiographic properties of first-order sub-basins are aggregated to constitute modelling elements. To formulate an equivalent cross-section, a homogenization test is conducted to assess the loss in accuracy when averaging topographic and physiographic variables, i.e. length, slope, soil depth and soil type. The homogenization test indicates that the accuracy lost in weighting the soil type is greatest; therefore it needs to be weighted in a systematic manner when formulating equivalent cross-sections. If the soil type remains the same within the sub-basin, a single equivalent cross-section is formulated for the entire sub-basin. If the soil type follows a specific pattern, i.e. different soil types near the centre of the river, the middle of the hillslope and the ridge line, three equivalent cross-sections (left bank, right bank and head water) are required. If the soil types are complex and do not follow any specific pattern, multiple equivalent cross-sections are required based on the number of soil types. The equivalent cross-sections are formulated for a series of first-order sub-basins by implementing different weighting methods for the topographic and physiographic variables of landforms within the entire or part of a hillslope. The formulated equivalent cross-sections are then simulated using a two-dimensional, Richards' equation based distributed hydrological model. The simulated fluxes are multiplied by the weighted area of each equivalent cross-section to calculate the total fluxes from the sub-basins (a sketch of this aggregation step follows below). The simulated fluxes include horizontal flow, transpiration, soil evaporation, deep drainage and soil moisture. To assess the accuracy of the equivalent cross-section approach, the sub-basins are also divided into equally spaced multiple hillslope cross-sections. These cross-sections are simulated in a fully distributed setting using the same two-dimensional, Richards' equation based distributed hydrological model. The simulated fluxes are multiplied by the contributing area of each cross-section to get the total fluxes from each sub-basin, referred to as reference fluxes. The equivalent cross-section approach is investigated for seven first-order sub-basins of the McLaughlin catchment of the Snowy River, NSW, Australia, and evaluated in the Wagga-Wagga experimental catchment. Our results show that the simulated fluxes using the equivalent cross-section approach are very close to the reference fluxes, whereas the computational time is reduced by a factor of ~4 to ~22 in comparison to the fully distributed setting. Transpiration and soil evaporation are the dominant fluxes and constitute ~85% of actual rainfall. Overall, the accuracy achieved in the dominant fluxes is higher than in the other fluxes. The simulated soil moistures from the equivalent cross-section approach are compared with in-situ soil moisture observations in the Wagga-Wagga experimental catchment in NSW, and the results are found to be consistent. Our results illustrate that the equivalent cross-section approach reduces the computational time significantly while maintaining the same order of accuracy in predicting the hydrological fluxes. As a result, this approach provides great potential for the implementation of distributed hydrological models at regional scales.
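
    The aggregation step referred to above is a simple area weighting; a sketch with hypothetical numbers for one sub-basin:

      # Hypothetical simulated fluxes [mm] per equivalent cross-section, with the
      # weighted area fraction each cross-section represents in the sub-basin.
      sections = [
          {"name": "left bank",  "area_frac": 0.41,
           "flux": {"transpiration": 310.0, "soil_evap": 180.0, "deep_drainage": 25.0}},
          {"name": "right bank", "area_frac": 0.37,
           "flux": {"transpiration": 295.0, "soil_evap": 170.0, "deep_drainage": 30.0}},
          {"name": "head water", "area_frac": 0.22,
           "flux": {"transpiration": 260.0, "soil_evap": 150.0, "deep_drainage": 40.0}},
      ]

      def basin_totals(secs):
          """Area-weighted sub-basin fluxes from the equivalent cross-sections."""
          return {key: sum(s["area_frac"] * s["flux"][key] for s in secs)
                  for key in secs[0]["flux"]}

      print(basin_totals(sections))   # compare against the reference fluxes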

  15. Feasibility study for a numerical aerodynamic simulation facility. Volume 1

    NASA Technical Reports Server (NTRS)

    Lincoln, N. R.; Bergman, R. O.; Bonstrom, D. B.; Brinkman, T. W.; Chiu, S. H. J.; Green, S. S.; Hansen, S. D.; Klein, D. L.; Krohn, H. E.; Prow, R. P.

    1979-01-01

    A Numerical Aerodynamic Simulation Facility (NASF) was designed for the simulation of fluid flow around three-dimensional bodies, both in wind tunnel environments and in free space. The application of numerical simulation to this field of endeavor promised to yield economies in aerodynamic and aircraft body designs. A model for a NASF/FMP (Flow Model Processor) ensemble using a possible approach to meeting NASF goals is presented. The computer hardware and software are presented, along with the entire design and performance analysis and evaluation.

  16. Comments on "Use of conditional simulation in nuclear waste site performance assessment" by Carol Gotway

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Downing, D.J.

    1993-10-01

    This paper discusses Carol Gotway's paper, "The Use of Conditional Simulation in Nuclear Waste Site Performance Assessment." The paper centers on the use of conditional simulation and geostatistical methods to simulate an entire field of values for subsequent use in a complex computer model. The issues of sampling designs for geostatistics, semivariogram estimation and anisotropy, the turning-bands method for random field generation, and estimation of the cumulative distribution function are brought out.

  17. Simulation of a microgrid

    NASA Astrophysics Data System (ADS)

    Dulǎu, Lucian Ioan

    2015-12-01

    This paper describes the simulation of a microgrid system with storage technologies. The microgrid comprises 6 distributed generators (DGs), 3 loads and a 150 kW storage unit. The installed capacity of the generators is 1100 kW, while the total load demand is 900 kW. The simulation is performed using SCADA software, considering the power generation costs, the load demand and the system's power losses. The generators access the system in order of their power generation cost (a sketch of this merit-order dispatch follows below). The simulation is performed for the entire day.
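
    A minimal sketch of the cost-ordered (merit-order) dispatch described here; the unit names, capacities and costs are hypothetical, chosen only to match the stated 1100 kW installed capacity and 900 kW demand:

      # (name, capacity_kW, cost_per_kWh) - hypothetical generator data
      generators = [("PV", 200, 0.03), ("wind", 150, 0.04), ("hydro", 250, 0.05),
                    ("CHP", 200, 0.08), ("diesel1", 150, 0.12), ("diesel2", 150, 0.15)]

      def merit_order_dispatch(demand_kw, units):
          """Commit units in ascending cost order until demand (with any losses
          folded into demand_kw) is covered."""
          schedule, remaining = {}, demand_kw
          for name, cap, cost in sorted(units, key=lambda u: u[2]):
              take = min(cap, remaining)
              if take > 0:
                  schedule[name] = take
                  remaining -= take
          return schedule, remaining            # remaining > 0 means unserved load

      plan, unserved = merit_order_dispatch(900, generators)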

  18. Case studies on design, simulation and visualization of control and measurement applications using REX control system

    NASA Astrophysics Data System (ADS)

    Ozana, Stepan; Pies, Martin; Docekal, Tomas

    2016-06-01

    REX Control System is a professional advanced tool for the design and implementation of complex control systems that belongs to the softPLC category. It covers the entire process, starting from simulation of the functionality of the application before deployment, through implementation on a real-time target, to analysis, diagnostics and visualization. Basically it consists of two parts: the development tools and the runtime system. It is also compatible with the Simulink environment, and the way control algorithms are implemented is very similar. The control scheme is finally compiled (using the RexDraw utility) and uploaded to a chosen real-time target (using the RexView utility). A wide variety of hardware platforms and real-time operating systems is supported by REX Control System, for example Windows Embedded and Linux or Linux/Xenomai deployed on SBCs, IPCs, PACs, Raspberry Pi and others, with many I/O interfaces. It is a modern system designed for both measurement and control applications, offering many additional functions concerning data archiving, visualization based on HTML5, and communication standards. The paper sums up the possibilities of its use in the educational process, focusing on case studies of controlling physical models with classical and advanced control algorithms.

  19. Optimal nodal flyby with near-Earth asteroids using electric sail

    NASA Astrophysics Data System (ADS)

    Mengali, Giovanni; Quarta, Alessandro A.

    2014-11-01

    The aim of this paper is to quantify the performance of an Electric Solar Wind Sail for accomplishing flyby missions toward one of the two orbital nodes of a near-Earth asteroid. Assuming a simplified, two-dimensional mission scenario, a preliminary mission analysis has been conducted involving the whole population of such asteroids known at the beginning of 2013. The analysis of each mission scenario has been performed within an optimal framework, by calculating the minimum-time trajectory required to reach each orbital node of the target asteroid. A considerable amount of simulation data has been collected, using the spacecraft characteristic acceleration as a parameter to quantify the Electric Solar Wind Sail propulsive performance. The minimum-time trajectory exhibits a different structure, which may or may not include a solar wind assist maneuver, depending both on the Sun-node distance and on the value of the spacecraft characteristic acceleration. Simulations show that over 60% of near-Earth asteroids can be reached with a total mission time of less than 100 days, whereas the entire population can be reached in less than 10 months with a spacecraft characteristic acceleration of 1 mm/s².

  20. Crosswell electromagnetic modeling from impulsive source: Optimization strategy for dispersion suppression in convolutional perfectly matched layer

    PubMed Central

    Fang, Sinan; Pan, Heping; Du, Ting; Konaté, Ahmed Amara; Deng, Chengxiang; Qin, Zhen; Guo, Bo; Peng, Ling; Ma, Huolin; Li, Gang; Zhou, Feng

    2016-01-01

    This study applied the finite-difference time-domain (FDTD) method to forward modeling of the low-frequency crosswell electromagnetic (EM) method. Specifically, we implemented impulse sources and a convolutional perfectly matched layer (CPML). In the process of strengthening the CPML, we observed that some dispersion was induced by the real stretch κ, together with an angular variation of the phase velocity of the transverse-electric plane wave; this dispersion was positively related to the real stretch and little affected by the grid interval. To suppress the dispersion in the CPML, we first derived the analytical solution for the radiation field of the magneto-dipole impulse source in the time domain. A numerical simulation of CPML absorption with high-frequency pulses then qualitatively illustrated the dispersion behavior through wave-field snapshots, and a numerical simulation using low-frequency pulses suggested an optimal parameter strategy for the CPML based on the established criteria. Given its physical nature of simply warping space-time, the CPML method was predicted to be a promising approach to achieving ideal absorption, although it remained difficult to remove the dispersion entirely.
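
    For readers unfamiliar with the update scheme being absorbed at the boundaries, here is a minimal 1D Yee-grid FDTD loop in normalized units; note this sketch uses a first-order Mur absorbing boundary, not the CPML analysed in the paper:

      import numpy as np

      nx, nt = 400, 800
      c, dx = 1.0, 1.0
      dt = dx / c                            # Courant number 1 in 1D
      Ez, Hy = np.zeros(nx), np.zeros(nx - 1)
      coef = (c * dt - dx) / (c * dt + dx)   # Mur boundary coefficient

      for n in range(nt):
          Hy += (Ez[1:] - Ez[:-1]) * dt / dx              # H update from curl E
          Ez_l, Ez_r = Ez[1], Ez[-2]                      # stored for Mur boundaries
          Ez[1:-1] += (Hy[1:] - Hy[:-1]) * dt / dx        # E update from curl H
          Ez[nx // 4] += np.exp(-((n - 60) / 20.0) ** 2)  # soft Gaussian source
          Ez[0] = Ez_l + coef * (Ez[1] - Ez[0])           # absorbing boundaries
          Ez[-1] = Ez_r + coef * (Ez[-2] - Ez[-1])        # (a CPML would replace these)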

  1. Revisiting the horizontal redistribution of water in soils: Experiments and numerical modeling.

    PubMed

    Zhuang, L; Hassanizadeh, S M; Kleingeld, P J; van Genuchten, M Th

    2017-09-01

    A series of experiments and related numerical simulations were carried out to study one-dimensional water redistribution processes in an unsaturated soil. A long horizontal Plexiglas box was packed as homogeneously as possible with sand. The sandbox was divided into two sections using a very thin metal plate, with one section initially fully saturated and the other only partially saturated. The initial saturation in the dry section was set to 0.2, 0.4, or 0.6 in three different experiments. Redistribution between the wet and dry sections started as soon as the metal plate was removed. Changes in water saturation at various locations along the sandbox were measured as a function of time using a dual-energy gamma system. Air and water pressures were also measured as functions of time at various locations using two different kinds of tensiometers. The saturation discontinuity was found to persist throughout the experiments, while the observed water pressures became continuous immediately after the experiments started. Two models, the standard Richards equation and an interfacial area model, were used to simulate the experiments. Both models showed some deviations between the simulated water pressures and the measured data at early times during redistribution. The standard model could only simulate the observed saturation distributions reasonably well for the experiment with the lowest initial water saturation in the dry section. The interfacial area model could reproduce the observed saturation distributions of all three experiments, albeit by fitting one of the parameters in the surface area production term.

  2. Linking long-term planetary N-body simulations with periodic orbits: application to white dwarf pollution

    NASA Astrophysics Data System (ADS)

    Antoniadou, Kyriaki I.; Veras, Dimitri

    2016-12-01

    Mounting discoveries of debris discs orbiting newly formed stars and white dwarfs (WDs) showcase the importance of modelling the long-term evolution of small bodies in exosystems. WD debris discs are, in particular, thought to form from very long-term (0.1-5.0 Gyr) instability between planets and asteroids. However, the time-consuming nature of N-body integrators which accurately simulate motion over Gyrs necessitates a judicious choice of initial conditions. The analytical tools known as periodic orbits can circumvent the guesswork. Here, we begin a comprehensive analysis directly linking periodic orbits with N-body integration outcomes with an extensive exploration of the planar circular restricted three-body problem (CRTBP) with an outer planet and inner asteroid near or inside of the 2:1 mean motion resonance. We run nearly 1000 focused simulations for the entire age of the Universe (14 Gyr) with initial conditions mapped to the phase space locations surrounding the unstable and stable periodic orbits for that commensurability. In none of our simulations did the planar CRTBP architecture yield a long-time-scale (≳0.25 per cent of the age of the Universe) asteroid-star collision. The pericentre distance of asteroids which survived beyond this time-scale (≈35 Myr) varied by at most about 60 per cent. These results help affirm that collisions occur too quickly to explain WD pollution in the planar CRTBP 2:1 regime, and highlight the need for further periodic orbit studies with the eccentric and inclined TBP architectures and other significant orbital period commensurabilities.
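
    The vector field being explored is the planar CRTBP; a sketch of its equations of motion in the rotating, nondimensional frame (the mass parameter and initial state below are illustrative, not the paper's):

      import numpy as np
      from scipy.integrate import solve_ivp

      mu = 0.001                              # planet/(star+planet) mass ratio

      def crtbp(t, s):
          """Planar CRTBP equations of motion in the rotating frame."""
          x, y, vx, vy = s
          r1 = np.hypot(x + mu, y)            # distance to the star
          r2 = np.hypot(x - 1 + mu, y)        # distance to the planet
          ax = x + 2 * vy - (1 - mu) * (x + mu) / r1**3 - mu * (x - 1 + mu) / r2**3
          ay = y - 2 * vx - (1 - mu) * y / r1**3 - mu * y / r2**3
          return [vx, vy, ax, ay]

      a = 0.63                                # near the interior 2:1 resonance
      vy0 = np.sqrt((1 - mu) / a) - a         # circular speed minus frame rotation
      sol = solve_ivp(crtbp, (0.0, 1000.0), [a, 0.0, 0.0, vy0],
                      rtol=1e-10, atol=1e-12, dense_output=True)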

  3. Measuring Behavioral Learnings: A Study in Consumer Credits.

    ERIC Educational Resources Information Center

    Anderson, C. Raymond

    A social simulation game, Consumer, was used to study the effectiveness of simulation in teaching facts about: (1) installment buying; (2) how to compare available sources of credit; and (3) how to recognize the best credit contract. The entire twelfth grade at one high school participated in the study. Ten class sections were assigned to…

  4. Revised Planning Methodology For Signalized Intersections And Operational Analysis Of Exclusive Left-Turn Lanes, A Simulation-Based Method, Part - I: Literature Review (Final Report)

    DOT National Transportation Integrated Search

    1996-04-01

    The study investigates the application of simulation along with field observations for estimation of exclusive left-turn saturation flow rate and capacity. The entire research has covered the following principal subjects: (1) a saturation flow model ...

  5. Exploring Replica-Exchange Wang-Landau sampling in higher-dimensional parameter space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valentim, Alexandra; Rocha, Julio C. S.; Tsai, Shan-Ho

    We considered a higher-dimensional extension of the replica-exchange Wang-Landau algorithm to perform a random walk in the energy and magnetization space of the two-dimensional Ising model. This hybrid scheme combines the advantages of the Wang-Landau and replica-exchange algorithms, and the one-dimensional version of this approach has been shown to be very efficient and to scale well, up to several thousands of computing cores. The approach allows us to split the parameter space of the system to be simulated into several pieces and still perform a random walk over the entire parameter range, ensuring the ergodicity of the simulation. Previous work, in which a similar scheme of parallel simulation was implemented without using replica exchange and with a different way of combining the results from the pieces, led to discontinuities in the final density of states over the entire range of parameters. From our simulations, it appears that the replica-exchange Wang-Landau algorithm is able to overcome this difficulty, allowing exploration of a higher-dimensional parameter space by keeping track of the joint density of states.
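
    For reference, a single-walker, energy-only Wang-Landau sampler for the 2D Ising model (the paper's scheme extends this to a joint energy-magnetization walk split across replicas); the lattice size and flatness threshold are illustrative:

      import numpy as np

      L = 8
      rng = np.random.default_rng(0)
      spins = rng.choice([-1, 1], size=(L, L))
      E = int(-np.sum(spins * np.roll(spins, 1, 0)) - np.sum(spins * np.roll(spins, 1, 1)))

      N = L * L
      levels = np.arange(-2 * N, 2 * N + 1, 4)    # allowed energies (periodic lattice)
      idx = {e: i for i, e in enumerate(levels)}
      lng = np.zeros(len(levels))                 # running estimate of ln g(E)
      hist = np.zeros(len(levels))
      f = 1.0                                     # ln(f) modification factor

      while f > 1e-3:                             # push to ~1e-8 for production runs
          for _ in range(20000):
              i, j = rng.integers(L, size=2)
              dE = 2 * spins[i, j] * (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                                      + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
              if np.log(rng.random()) < lng[idx[E]] - lng[idx[E + dE]]:
                  spins[i, j] *= -1               # accept with min(1, g(E)/g(E'))
                  E += dE
              lng[idx[E]] += f
              hist[idx[E]] += 1
          seen = hist > 0
          if hist[seen].min() > 0.8 * hist[seen].mean():   # flatness criterion
              hist[:] = 0
              f /= 2.0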

  6. Towards Flange-to-Flange Turbopump Simulations for Liquid Rocket Engines

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Williams, Robert

    2000-01-01

    The primary objective of this research is to support the design of liquid rocket systems for the Advanced Space Transportation System. Since the space launch systems of the near future are likely to rely on liquid rocket engines, increasing the efficiency and reliability of the engine components is an important task. One of the major problems in the liquid rocket engine is to understand the fluid dynamics of fuel and oxidizer flows from the fuel tank to the plume. Understanding the flow through the entire turbopump geometry through numerical simulation will be of significant value to design and will help to improve the safety of future space missions. One of the milestones of this effort is to develop, apply and demonstrate the capability and accuracy of 3D CFD methods as efficient design analysis tools on high-performance computer platforms. The development of the MPI and MLP versions of the INS3D code is currently underway. The serial version of the INS3D code is a multidimensional incompressible Navier-Stokes solver based on overset grid technology. INS3D-MPI is based on the explicit message-passing interface across processors and is primarily suited for distributed-memory systems. INS3D-MLP is based on the multi-level parallel method and is suitable for distributed/shared-memory systems. For the entire turbopump simulations, a moving-boundary capability and efficient time-accurate integration methods are built into the flow solver. To handle the geometric complexity and moving-boundary problems, an overset grid scheme is incorporated into the solver so that new connectivity data are obtained at each time step. The Chimera overlapped grid scheme allows subdomains to move relative to each other and provides great flexibility when the boundary movement creates large displacements. The performance of two time-integration schemes for time-accurate computations is investigated. For an unsteady flow which requires a small physical time step, the pressure projection method was found to be computationally efficient since it does not require any subiteration procedure. It was observed that the artificial compressibility method requires a fast convergence scheme at each physical time step in order to satisfy the incompressibility condition. This was obtained by using a GMRES-ILU(0) solver in our computations. When a line-relaxation scheme was used, the time accuracy was degraded and time-accurate computations became very expensive. The current geometry for the LOX boost turbopump has various rotating and stationary components, such as the inducer, stators, kicker and hydraulic turbine, where the flow is extremely unsteady. Figure 1 shows the geometry and computed surface pressure of the inducer. The inducer and the hydraulic turbine rotate at different rotational speeds.

  7. Wounding effects of the AK-47 rifle used by Patrick Purdy in the Stockton, California, schoolyard shooting of January 17, 1989.

    PubMed

    Fackler, M L; Malinowski, J A; Hoxie, S W; Jason, A

    1990-09-01

    The limited disruption produced in tissue simulant by the rifle and bullets used in the Stockton, California, schoolyard shooting is entirely consistent with the autopsy reports on the five children who died of their wounds. It is also entirely consistent with well-documented battlefield studies and with previous tissue-simulant studies from many laboratories. It is inconsistent with many exaggerated accounts of assault-rifle wounding effects described by the media in the aftermath of this incident. This information should be documented for the historical record. However, the critical reason for correcting the misconceptions produced by media reaction to this incident is to prevent inappropriate gunshot-wound treatment.

  8. Stochastic modelling of microstructure formation in solidification processes

    NASA Astrophysics Data System (ADS)

    Nastac, Laurentiu; Stefanescu, Doru M.

    1997-07-01

    To relax many of the assumptions used in continuum approaches, a general stochastic model has been developed. The stochastic model can be used not only for an accurate description of the fraction of solid evolution, and therefore accurate cooling curves, but also for simulation of microstructure formation in castings. The advantage of using the stochastic approach is to give a time- and space-dependent description of solidification processes. Time- and space-dependent processes can also be described by partial differential equations. Unlike a differential formulation which, in most cases, has to be transformed into a difference equation and solved numerically, the stochastic approach is essentially a direct numerical algorithm. The stochastic model is comprehensive, since the competition between various phases is considered. Furthermore, grain impingement is directly included through the structure of the model. In the present research, all grain morphologies are simulated with this procedure. The relevance of the stochastic approach is that the simulated microstructures can be directly compared with microstructures obtained from experiments. The computer becomes a 'dynamic metallographic microscope'. A comparison between deterministic and stochastic approaches has been performed. An important objective of this research was to answer the following general questions: (1) 'Would fully deterministic approaches continue to be useful in solidification modelling?' and (2) 'Would stochastic algorithms be capable of entirely replacing purely deterministic models?'

  9. SWMF Global Magnetosphere Simulations of January 2005: Geomagnetic Indices and Cross-Polar Cap Potential

    DOE PAGES

    Haiducek, John D.; Welling, Daniel T.; Ganushkina, Natalia Y.; ...

    2017-10-30

    We simulated the entire month of January 2005 using the Space Weather Modeling Framework (SWMF) with observed solar wind data as input. We conducted this simulation with and without an inner magnetosphere model, and tested two different grid resolutions. We evaluated the model's accuracy in predicting Kp, Sym-H, AL, and the cross polar cap potential (CPCP). We find that the model does an excellent job of predicting the Sym-H index, with an RMSE of 17-18 nT. Kp is predicted well during storm-time conditions, but over-predicted during quiet times by a margin of 1 to 1.7 Kp units. AL is predicted reasonably well on average, with an RMSE of 230-270 nT; however, the model reaches the largest negative AL values significantly less often than the observations. The model tended to over-predict CPCP, with RMSE values on the order of 46-48 kV. We found the results to be insensitive to grid resolution, with the exception of the rate of occurrence of strongly negative AL values. The use of the inner magnetosphere component, however, affected the results significantly, with all quantities except CPCP improved notably when the inner magnetosphere model was on.

  11. Space time modelling of air quality for environmental-risk maps: A case study in South Portugal

    NASA Astrophysics Data System (ADS)

    Soares, Amilcar; Pereira, Maria J.

    2007-10-01

    Since the 1960s, there has been strong industrial development in the Sines area, on the southern Atlantic coast of Portugal, including the construction of an important industrial harbour and of mainly petrochemical and energy-related industries. These industries are nowadays responsible for substantial emissions of SO2, NOx, particles, VOCs and part of the ozone polluting the atmosphere. The major industries are spatially concentrated in a restricted area, very close to populated areas and to natural resources such as those protected by the European Natura 2000 network. Air quality parameters are measured at the emission sources and at a few monitoring stations. Although air quality parameters are measured on an hourly basis, the lack of spatial representativeness of these non-homogeneous phenomena makes even their representativeness in time questionable. Hence, in this study, the regional spatial dispersion of contaminants is also evaluated using diffusive-sampler (Radiello Passive Sampler) campaigns during given periods. Diffusive samplers cover the entire space extensively, but just for a limited period of time. In the first step of this study, a space-time model of the pollutants was built, based on stochastic simulation (direct sequential simulation) with a local spatial trend. The spatial dispersion of the contaminants for a given period of time, corresponding to the exposure time of the diffusive samplers, was computed by ordinary kriging. Direct sequential simulation was applied to produce equiprobable spatial maps for each day of that period, using the kriged map as a spatial trend and the daily measurements of pollutants from the monitoring stations as hard data. In the second step, the following environmental-risk and cost maps were computed from the set of simulated realizations of the pollutants: (i) maps of the contribution of each emission to the pollutant concentration at any spatial location; (ii) costs of badly located monitoring stations.
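
    The ordinary-kriging step used for the trend map can be sketched as follows (the variogram model, its parameters and the station values are hypothetical, and the study's direct sequential simulation layer is not reproduced here):

      import numpy as np

      def variogram(h, sill=1.0, vrange=30.0):
          """Exponential variogram model (sill and range are illustrative)."""
          return sill * (1.0 - np.exp(-3.0 * h / vrange))

      def ordinary_kriging(obs_xy, obs_val, target_xy):
          """Ordinary-kriging estimate at target_xy from station observations."""
          n = len(obs_xy)
          d = np.linalg.norm(obs_xy[:, None, :] - obs_xy[None, :, :], axis=-1)
          A = np.ones((n + 1, n + 1))
          A[:n, :n] = variogram(d)
          A[-1, -1] = 0.0                      # Lagrange-multiplier row/column
          b = np.ones(n + 1)
          b[:n] = variogram(np.linalg.norm(obs_xy - target_xy, axis=1))
          w = np.linalg.solve(A, b)
          return w[:n] @ obs_val               # kriged concentration

      stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 12.0], [8.0, 9.0]])
      so2 = np.array([12.0, 9.0, 15.0, 11.0])  # hypothetical SO2 readings
      estimate = ordinary_kriging(stations, so2, np.array([4.0, 5.0]))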

  12. Hydrodynamic simulations of accretion flows with time-varying viscosity

    NASA Astrophysics Data System (ADS)

    Roy, Abhishek; Chakrabarti, Sandip K.

    2017-12-01

    X-ray outbursts of stellar-mass black hole candidates are believed to be due to a sudden rise in viscosity, which transports angular momentum efficiently and increases the accretion rate, causing higher X-ray flux. After the viscosity is reduced, the outburst subsides and the object returns to the pre-outburst quiescent stage. In the absence of a satisfactory understanding of the physical mechanism leading to such a sharp time dependence of viscous processes, we perform numerical simulations where we include the rise and fall of a viscosity parameter at an outer injection grid, assumed to be located at the accumulation radius where matter from the companion piles up before being released by enhanced viscosity. We use a power-law radial dependence of the viscosity parameter (α ∼ r^ε), but the exponent ε is allowed to vary with time to mimic a fast rise and decay of the viscosity parameter. Since the X-ray spectra of a black hole candidate can be explained by a Keplerian disc component in the presence of the post-shock region of an advective flow, our goal here is also to understand whether the flow configurations required to explain the spectral states of an outbursting source could be obtained with a time-varying viscosity. We present the results of our simulations to show that low-angular-momentum (sub-Keplerian) advective flows do form a Keplerian disc in the pre-shock region when the viscosity is enhanced, which disappears on a much longer time-scale after the viscosity is withdrawn. From the variation of the Keplerian disc inside the advective halo, we believe that our result, for the first time, is able to simulate the two-component advective flow dynamics during an entire X-ray outburst and explain the observed hysteresis effects in the hardness-intensity diagram.

  13. Effective conductivity of suspensions of hard spheres by Brownian motion simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chan Kim, I.; Torquato, S.

    1991-02-15

    A generalized Brownian motion simulation technique developed by Kim and Torquato (J. Appl. Phys. 68, 3892 (1990)) is applied to compute "exactly" the effective conductivity σ_e of heterogeneous media composed of regular and random distributions of hard spheres of conductivity σ_2 in a matrix of conductivity σ_1 for virtually the entire volume fraction range and for several values of the conductivity ratio α = σ_2/σ_1, including superconducting spheres (α = ∞) and perfectly insulating spheres (α = 0). A key feature of the procedure is the use of first-passage-time equations in the two homogeneous phases and at the two-phase interface. The method is shown to yield σ_e accurately with a comparatively fast execution time. The microstructure-sensitive analytical approximation of σ_e for dispersions derived by Torquato (J. Appl. Phys. 58, 3790 (1985)) is shown to be in excellent agreement with our data for random suspensions for the wide range of conditions reported here.
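
    The first-passage idea is easiest to see in a homogeneous medium, where it reduces to the classic walk-on-spheres estimator for Laplace's equation; the paper's method adds first-passage rules in each phase and at the two-phase interface, which this sketch omits:

      import numpy as np

      rng = np.random.default_rng(7)

      def walk_on_spheres(x0, boundary_value, dist_to_boundary, eps=1e-4, n_walkers=20000):
          """Estimate the harmonic function u(x0) by first-passage sampling:
          jump to a uniform point on the largest sphere centred at the walker
          that fits in the domain, until within eps of the boundary."""
          total = 0.0
          for _ in range(n_walkers):
              x = np.array(x0, dtype=float)
              while True:
                  r = dist_to_boundary(x)
                  if r < eps:
                      total += boundary_value(x)
                      break
                  v = rng.normal(size=3)
                  x += r * v / np.linalg.norm(v)   # uniform point on the sphere
          return total / n_walkers

      # Check in the unit ball with harmonic data u = z, so u(x) = z exactly:
      u = walk_on_spheres([0.2, 0.1, 0.3],
                          boundary_value=lambda x: x[2],
                          dist_to_boundary=lambda x: 1.0 - np.linalg.norm(x))
      # u should be close to 0.3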

  14. Microscopic mechanism of nanocrystal formation from solution by cluster aggregation and coalescence

    PubMed Central

    Hassan, Sergio A.

    2011-01-01

    Solute-cluster aggregation and particle fusion have recently been suggested as alternative routes to the classical mechanism of nucleation from solution. The role of both processes in the crystallization of an aqueous electrolyte under controlled salt addition is elucidated here by molecular dynamics simulation. The time scale of the simulation allows direct observation of the entire crystallization pathway, from early events in the prenucleation stage to the formation of a nanocrystal in equilibrium with concentrated solution. The precursor originates in a small amorphous aggregate stabilized by hydration forces. The core of the nucleus becomes crystalline over time and grows by coalescence of the amorphous phase deposited at the surface. Imperfections of ion packing during coalescence promote growth of two conjoint crystallites. An order parameter and calculated cohesive energies reflect the increasing crystalline order and stress relief at the grain boundary. Cluster aggregation plays a major role both in the formation of the nucleus and in the early stages of postnucleation growth. The mechanism identified shares common features with the nucleation of solids from the melt and of liquid droplets from the vapor.

  15. Simulation of Glacial Cycles Before and After the Mid-Pleistocene Transition

    NASA Astrophysics Data System (ADS)

    Ganopolski, A.; Willeit, M.; Calov, R.

    2017-12-01

    In spite of significant progress in understanding glacial cycles, the cause of the Mid-Pleistocene transition (MPT) is still not fully understood. To study possible mechanisms of the MPT we used the Earth system model of intermediate complexity CLIMBER-2, which incorporates all major components of the Earth system: atmosphere, ocean, land surface, Northern Hemisphere ice sheets, terrestrial biota and soil carbon, aeolian dust and marine biogeochemistry. We ran the model through the entire Quaternary. The only prescribed forcing in these simulations is the variation in Earth's orbital parameters. In addition, we prescribed a terrestrial sediment cover and global volcanic outgassing that evolve gradually in time. We found that gradual removal of terrestrial sediment from the Northern Hemisphere continents by glacial processes is sufficient to explain the transition from the 40-ka to the 100-ka world around 1 million years ago. By starting the model at different times with the same initial conditions, we found that the modeling results converge to the same solution, which depends only on the orbital forcing and the lower boundary conditions. Our results thus strongly suggest that Quaternary glacial cycles are externally forced and nearly deterministic.

  16. Non-adiabatic dynamics of isolated green fluorescent protein chromophore anion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Li; Gao, Ai-Hua; University of the Chinese Academy of Sciences, Beijing 100049

    2014-12-21

    On-the-fly ab initio molecular dynamics calculations have been performed to investigate the relaxation mechanism of the green fluorescent protein chromophore anion under vacuum. The CASSCF surface-hopping simulation method based on Zhu-Nakamura theory is applied to capture the real-time conformational changes of the target molecule. The static calculations and dynamics simulation results suggest that not only the twisting motion around the bridging bonds between the imidazolinone and phenoxy groups, but also the C=O stretching mode and the pyramidalization of the bridging atom, are major factors in the ultrafast fluorescence quenching of the isolated chromophore anion. These factors bring the molecule to the vicinity of conical intersections on its potential energy surface, where the internal conversion process is completed. A Hula-like twisting pattern is displayed during the relaxation process, and the entire decay process disfavors a photoswitching pattern corresponding to cis-trans photoisomerization.

  17. Convergence experiments with a hydrodynamic model of Port Royal Sound, South Carolina

    USGS Publications Warehouse

    Lee, J.K.; Schaffranek, R.W.; Baltzer, R.A.

    1989-01-01

    A two-dimensional, depth-averaged, finite-difference flow/transport model, SIM2D, is being used to simulate tidal circulation and transport in the Port Royal Sound, South Carolina, estuarine system. Models of a subregion of the Port Royal Sound system have been derived from an earlier-developed model of the entire system having a grid size of 600 ft. The submodels were implemented with grid sizes of 600, 300, and 150 ft in order to determine the effects of changes in grid size on computed flows in the subregion, which is characterized by narrow channels and extensive tidal flats that flood and dewater with each rise and fall of the tide. Tidal amplitudes changed by less than 5 percent as the grid size was decreased. Simulations were performed with the 300-foot submodel for time steps of 60, 30, and 15 s. Study results are discussed.

  18. Galactic wind shells and high redshift radio galaxies. On the nature of associated absorbers

    NASA Astrophysics Data System (ADS)

    Krause, M.

    2005-06-01

    A jet is simulated on the background of a galactic wind headed by a radiative bow shock. The wind shell, which is due to the radiative bow shock, is effectively destroyed by the impact of the jet cocoon, thanks to Rayleigh-Taylor instabilities. Associated strong HI absorption, and possibly also molecular emission, in high redshift radio galaxies which is observed preferentially in the smaller ones may be explained by that model, which is an improvement of an earlier radiative bow shock model. The model requires temperatures of ≈106 K in the proto-clusters hosting these objects, and may be tested by high resolution spectroscopy of the Lyα line. The simulations show that - before destruction - the jet cocoon fills the wind shell entirely for a considerable time with intact absorption system. Therefore, radio imaging of sources smaller than the critical size should reveal the round central bubbles, if the model is correct.

  19. Parallel programming with Easy Java Simulations

    NASA Astrophysics Data System (ADS)

    Esquembre, F.; Christian, W.; Belloni, M.

    2018-01-01

    Nearly all of today's processors are multicore, and ideally programming and algorithm development utilizing the entire processor should be introduced early in the computational physics curriculum. Parallel programming is often not introduced because it requires a new programming environment and uses constructs that are unfamiliar to many teachers. We describe how we decrease the barrier to parallel programming by using a Java-based programming environment to treat problems in the usual undergraduate curriculum. We use the Easy Java Simulations programming and authoring tool to create the program's graphical user interface together with objects based on those developed by Kaminsky [Building Parallel Programs (Course Technology, Boston, 2010)] to handle common parallel programming tasks. Shared-memory parallel implementations of physics problems, such as the time evolution of the Schrödinger equation, are available as source code and as ready-to-run programs from the AAPT-ComPADRE digital library.

  20. Quantifying equation-of-state and opacity errors using integrated supersonic diffusive radiation flow experiments on the National Ignition Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guymer, T. M., E-mail: Thomas.Guymer@awe.co.uk; Moore, A. S.; Morton, J.

    A well-diagnosed campaign of supersonic, diffusive radiation flow experiments has been fielded on the National Ignition Facility. These experiments have used accurate measurements of the delivered laser energy and foam density to enable an investigation into SESAME's tabulated equation-of-state values and CASSANDRA's predicted opacity values for the low-density C8H7Cl foam used throughout the campaign. We report that the results from initial simulations under-predicted the arrival time of the radiation wave through the foam by ≈22%. A simulation study was conducted that artificially scaled the equation-of-state and opacity with the intended aim of quantifying the systematic offsets in both CASSANDRA and SESAME. Two separate hypotheses which describe these errors have been tested using the entire ensemble of data, with one being supported by these data.

  1. Comparison of Four Precipitation Forcing Datasets in Land Information System Simulations over the Continental U.S.

    NASA Technical Reports Server (NTRS)

    Case, Jonathan L.; Kumar, Sujay V.; Kuligowski, Robert J.; Langston, Carrie

    2013-01-01

    The NASA Short-term Prediction Research and Transition (SPoRT) Center in Huntsville, AL is running a real-time configuration of the NASA Land Information System (LIS) with the Noah land surface model (LSM). Output from the SPoRT-LIS run is used to initialize land surface variables for local modeling applications at select National Weather Service (NWS) partner offices, and can be displayed in decision support systems for situational awareness and drought monitoring. The SPoRT-LIS is run over a domain covering the southern and eastern United States, fully nested within the National Centers for Environmental Prediction Stage IV precipitation analysis grid, which provides precipitation forcing to the offline LIS-Noah runs. The SPoRT Center seeks to expand the real-time LIS domain to the entire Continental U.S. (CONUS); however, geographical limitations with the Stage IV analysis product have inhibited this expansion. Therefore, a goal of this study is to test alternative precipitation forcing datasets that can enable the LIS expansion by improving upon the current geographical limitations of the Stage IV product. The four precipitation forcing datasets that are inter-compared on a 4-km resolution CONUS domain include the Stage IV, an experimental GOES quantitative precipitation estimate (QPE) from NESDIS/STAR, the National Mosaic and QPE (NMQ) product from the National Severe Storms Laboratory, and the North American Land Data Assimilation System phase 2 (NLDAS-2) analyses. The NLDAS-2 dataset is used as the control run, with each of the other three datasets considered experimental runs compared against the control. The regional strengths, weaknesses, and biases of each precipitation analysis are identified relative to the NLDAS-2 control in terms of accumulated precipitation pattern and amount, and the impacts on the subsequent LSM spin-up simulations. The ultimate goal is to identify an alternative precipitation forcing dataset that can best support an expansion of the real-time SPoRT-LIS to a domain covering the entire CONUS.
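
    The dataset inter-comparison against the control can be sketched as a per-grid bias/RMSE computation (the grids and values below are synthetic placeholders for the 4-km CONUS analyses):

      import numpy as np

      def compare_to_control(experiment, control, mask=None):
          """Bias and RMSE of an accumulated-precipitation grid against the
          control analysis, optionally restricted to a regional mask."""
          diff = experiment - control
          if mask is not None:
              diff = diff[mask]
          return {"bias_mm": float(diff.mean()),
                  "rmse_mm": float(np.sqrt((diff ** 2).mean()))}

      rng = np.random.default_rng(3)
      control = rng.gamma(2.0, 10.0, size=(700, 1500))     # stand-in control grid [mm]
      candidate = control + rng.normal(0.0, 3.0, size=control.shape)
      print(compare_to_control(candidate, control))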

  2. The role of nontechnical skills in simulated trauma resuscitation.

    PubMed

    Briggs, Alexandra; Raja, Ali S; Joyce, Maurice F; Yule, Steven J; Jiang, Wei; Lipsitz, Stuart R; Havens, Joaquim M

    2015-01-01

    Trauma team training provides instruction on crisis management through debriefing and discussion of teamwork and leadership skills during simulated trauma scenarios. The effects of a team leader's nontechnical skills (NTSs) on technical performance have not been thoroughly studied. We hypothesized that the team's and team leader's NTSs correlate with the technical performance of clinical tasks. This was a retrospective cohort study conducted at the Brigham and Women's Hospital STRATUS Center for Surgical Simulation. A total of 20 teams composed of surgical residents, emergency medicine residents, emergency department nurses, and emergency services assistants underwent 2 separate, high-fidelity, simulated trauma scenarios. Each trauma scenario was recorded on video for analysis and divided into 4 consecutive sections. For each section, 2 raters used the Non-Technical Skills for Surgeons framework to assess the NTSs of the team. To evaluate the entire team's NTSs, 2 additional raters used the Modified Non-Technical Skills Scale for Trauma system. Clinical performance measures, including adherence to guidelines and time to perform critical tasks, were measured independently. NTS performance by both teams and team leaders in all NTS categories decreased from the beginning to the end of the scenario (all p < 0.05). There was a significant correlation between the team's and team leader's cognitive skills and critical task performance, with correlation coefficients between 0.351 and 0.478 (p < 0.05). The NTS performance of the team leader was highly correlated with that of the entire team, with correlation coefficients between 0.602 and 0.785 (p < 0.001). The NTSs of trauma teams and team leaders deteriorate as clinical scenarios progress, and the performance of team leaders and teams is highly correlated. Cognitive NTS scores correlate with critical task performance. Increased attention to NTSs during trauma team training may lead to sustained performance throughout trauma scenarios. Decision making and situation awareness skills are critical for both team leaders and teams and should be specifically addressed to improve performance.

  3. Modeling Large Scale Circuits Using Massively Parallel Discrete-Event Simulation

    DTIC Science & Technology

    2013-06-01

    As supercomputer systems grow to exascale levels of performance, the smallest elements of a single processor can greatly affect the entire computer system (e.g., its power consumption) ... Warp Speed 10.0. 2.0 INTRODUCTION As supercomputer systems approach exascale, the core count will exceed 1024 and the number of transistors used in ...

  4. Design and multifidelity analysis of dual mode scramjet compression system using coupled NPSS and fluent simulation

    NASA Astrophysics Data System (ADS)

    Vijayakumar, Nandakumar

    Hypersonic airbreathing engines mark a potential future development of the aerospace industry, and immense efforts have been made to gain knowledge of them over the past decades. The physical phenomena occurring in the hypersonic flow regime make the design and performance prediction of a scramjet engine hard. Though cutting-edge simulation tools fight their way toward accurate prediction of the environment, the time consumed by the entire process of designing and analyzing a scramjet engine and its components may be exorbitant. A multi-fidelity approach for designing a scramjet with a cruising Mach number of 6 is detailed in this research, where high-order simulations are applied according to the physics involved in each component. Two state-of-the-art simulation tools were used to take the aerodynamic and propulsion disciplines into account for realistic prediction of the individual components as well as the entire scramjet. The specific goal of this research is to create a virtual environment to design and analyze a hypersonic, two-dimensional, planar inlet and isolator to check its operability for a dual-mode scramjet engine. The dual-mode scramjet engine starts at a Mach number of 3.5, where it operates as a ramjet, and accelerates to Mach 6, where it operates as a scramjet. The interaction between the compression components and the rest of the engine is studied by varying the fidelity of the numerical simulation according to the complexity of the situation. Efforts have been made to track the transition Mach number as the engine switches from ramjet to scramjet mode. A complete scramjet assembly was built using the Numerical Propulsion System Simulation (NPSS) and the performance of the engine was evaluated for various scenarios. Different numerical techniques were adopted to vary the fidelity of the analysis, with the highest fidelity consisting of 2D RANS CFD simulation. The interaction between the NPSS elements and the CFD solver is governed by the top-level assembly solver of NPSS. The importance of intercomponent interactions is discussed. The methodology used in this research for design and analysis should provide an efficient way of estimating the design and off-design operating modes of a dual-mode scramjet engine.

  5. Combining neural networks and signed particles to simulate quantum systems more efficiently

    NASA Astrophysics Data System (ADS)

    Sellier, Jean Michel

    2018-04-01

    Recently a new formulation of quantum mechanics has been suggested which describes systems by means of ensembles of classical particles provided with a sign. This novel approach mainly consists of two steps: the computation of the Wigner kernel, a multi-dimensional function describing the effects of the potential over the system, and the field-less evolution of the particles, which eventually create new signed particles in the process. Although this method has proved to be extremely advantageous in terms of computational resources - as a matter of fact, it is able to simulate many-body systems in a time-dependent fashion on relatively small machines - the Wigner kernel can represent the bottleneck of simulations of certain systems. Moreover, storing the kernel can be another issue, as the amount of memory needed is cursed by the dimensionality of the system. In this work, we introduce a new technique, based on an appropriately tailored neural network combined with the signed particle formalism, which drastically reduces the computation time and memory requirement to simulate time-dependent quantum systems. In particular, the suggested neural network is able to compute the Wigner kernel efficiently and reliably without any training, as its entire set of weights and biases is specified by analytical formulas. As a consequence, the amount of memory for quantum simulations radically drops, since the kernel no longer needs to be stored: it is computed by the neural network itself, only on the cells of the (discretized) phase space which are occupied by particles. As is clearly shown in the final part of this paper, this novel approach not only drastically reduces the computational time, it also remains accurate. The author believes this work opens the way towards effective design of quantum devices, with incredible practical implications.
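
    A hedged sketch of the memory-saving strategy described above: the kernel is evaluated lazily, only at phase-space cells occupied by signed particles, instead of being stored over the whole domain. The kernel function below is an invented stand-in; in the paper, the network's weights and biases come from closed-form expressions.

    ```python
    from functools import lru_cache
    import numpy as np

    @lru_cache(maxsize=None)
    def wigner_kernel(x_cell, p_cell):
        # Stand-in for the analytic neural-network evaluation at one cell.
        return float(np.sin(0.1 * x_cell) * np.cos(0.05 * p_cell))

    def kernel_on_occupied(particles):
        """Evaluate the kernel only where signed particles actually sit."""
        return {cell: wigner_kernel(*cell) for cell in set(particles)}

    # Three particles, two of which share a cell: only two evaluations occur.
    print(kernel_on_occupied([(3, 5), (3, 5), (10, 2)]))
    ```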

  6. Correlation between simulations and cavitation-induced erosion damage in Spallation Neutron Source target modules after operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riemer, Bernie; McClintock, David A; Kaminskas, Saulius

    2014-01-01

    An explicit finite element (FE) technique developed for estimating dynamic strain in the Spallation Neutron Source (SNS) mercury target module vessel is now providing insight into cavitation damage patterns observed in used targets. The technique uses an empirically developed material model for the mercury that describes liquid-like volumetric stiffness combined with a tensile pressure cut-off limit that approximates cavitation. The longest period each point in the mercury is at the tensile cut-off threshold is denoted its saturation time. Now, the pattern of saturation time can be obtained from these simulations, is being positively correlated with observed damage patterns, and is interpreted as a qualitative measure of damage potential. Saturation time has been advocated by collaborators at J-Parc as a factor in predicting bubble nuclei growth and collapse intensity. The larger the ratio of maximum bubble size to nucleus, the greater the bubble collapse intensity to be expected; longer saturation times result in greater ratios. With the recent development of a user subroutine for the FE solver, saturation time is now provided over the entire mercury domain. Its pattern agrees with spots of damage seen above and below the beam axis on the SNS inner vessel beam window and elsewhere. The other simulation result being compared to observed damage patterns is mercury velocity at the wall. Related R&D has provided evidence for the damage mitigation that higher wall velocity provides. In comparison to observations in SNS targets, an inverse correlation of high velocity to damage is seen. In effect, it is the combination of the patterns of saturation time and low velocity that seems to match actual damage patterns.
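
    A minimal sketch of the saturation-time bookkeeping described above, assuming access to the per-point pressure history from the FE solution; the cut-off value, tolerance, and array layout are illustrative assumptions.

    ```python
    import numpy as np

    def saturation_time(pressure, dt, cutoff=-0.15e6, tol=1.0):
        """pressure: (n_steps, n_points) history; returns, per point, the
        longest contiguous duration (s) spent at the tensile cut-off."""
        at_cutoff = np.abs(pressure - cutoff) < tol
        longest = np.zeros(pressure.shape[1])
        current = np.zeros(pressure.shape[1])
        for step in at_cutoff:                 # stream over time steps
            current = (current + dt) * step    # reset run where not at cut-off
            longest = np.maximum(longest, current)
        return longest

    # Toy usage: 3 points over 5 steps; point 0 briefly leaves the cut-off.
    p = np.full((5, 3), -0.15e6)
    p[2, 0] = 0.0
    print(saturation_time(p, dt=1e-6))
    ```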

  7. Simulating the Birth of Massive Star Clusters: Is Destruction Inevitable?

    NASA Astrophysics Data System (ADS)

    Rosen, Anna

    2013-10-01

    Very early in its operation, the Hubble Space Telescope (HST) opened an entirely new frontier: study of the demographics and properties of star clusters far beyond the Milky Way. However, interpretation of HST's observations has proven difficult, and has led to the development of two conflicting models. One view is that most massive star clusters are disrupted during their infancy by feedback from newly formed stars (i.e., "infant mortality"), independent of cluster mass or environment. The other model is that most star clusters survive their infancy and are disrupted later by mass-dependent dynamical processes. Since observations at present have failed to discriminate between these views, we propose a theoretical investigation to provide new insight. We will perform radiation-hydrodynamic simulations of the formation of massive star clusters, including for the first time a realistic treatment of the most important stellar feedback processes. These simulations will elucidate the physics of stellar feedback, and allow us to determine whether cluster disruption is mass-dependent or -independent. We will also use our simulations to search for observational diagnostics that can distinguish bound from unbound clusters, and to predict how cluster disruption affects the cluster luminosity function in a variety of galactic environments.

  8. Numerical simulation of ozone concentration profile and flow characteristics in paddy bulks.

    PubMed

    Pandiselvam, Ravi; Chandrasekar, Veerapandian; Thirupathi, Venkatachalam

    2017-08-01

    Ozone has shown the potential to control stored-product insect pests. The high reactivity of ozone leads to special problems when it passes through an organic medium such as stored grains. Thus, there is a need for a simulation study to understand the concentration profile and flow characteristics of ozone in stored paddy bulks as a function of time. Simulation of ozone concentration through the paddy grain bulks was explained by applying the principle of the law of conservation along with a continuity equation. A higher ozone concentration value was observed at regions near the ozone diffuser, whereas a lower concentration value was observed at regions away from the ozone diffuser. The relative error between the experimental and predicted ozone concentration values for the entire bin geometry was less than 42.8%. The simulation model described a non-linear change of ozone concentration in stored paddy bulks. Results of this study provide a valuable source for estimating the parameters needed for effectively designing a storage bin for fumigation of paddy grains in a commercial-scale continuous-flow ozone fumigation system. © 2017 Society of Chemical Industry.
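
    One way to picture the governing equations named above is a one-dimensional advection-diffusion-decay discretization of the conservation law; the coefficients, grid, and boundary treatment below are illustrative assumptions, not the paper's calibrated model for paddy bulks.

    ```python
    import numpy as np

    def ozone_profile(n=100, length=1.0, u=0.01, D=1e-4, k=5e-3,
                      c_in=10.0, dt=0.05, t_end=600.0):
        """Explicit upwind solution of dC/dt = -u dC/dx + D d2C/dx2 - k C,
        with a fixed inlet concentration at the diffuser end."""
        dx = length / n
        c = np.zeros(n)
        for _ in range(int(t_end / dt)):
            adv = -u * (c - np.roll(c, 1)) / dx
            dif = D * (np.roll(c, -1) - 2.0 * c + np.roll(c, 1)) / dx**2
            c += dt * (adv + dif - k * c)
            c[0] = c_in        # diffuser (inlet) boundary
            c[-1] = c[-2]      # zero-gradient outlet
        return c

    profile = ozone_profile()
    print(profile[::20])       # concentration decays away from the diffuser
    ```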

  9. Assessing Climate Change Impacts on Wildfire Exposure in Mediterranean Areas.

    PubMed

    Lozano, Olga M; Salis, Michele; Ager, Alan A; Arca, Bachisio; Alcasena, Fermin J; Monteiro, Antonio T; Finney, Mark A; Del Giudice, Liliana; Scoccimarro, Enrico; Spano, Donatella

    2017-10-01

    We used simulation modeling to assess potential climate change impacts on wildfire exposure in Italy and Corsica (France). Weather data were obtained from a regional climate model for the period 1981-2070 using the IPCC A1B emissions scenario. Wildfire simulations were performed with the minimum travel time fire spread algorithm using predicted fuel moisture, wind speed, and wind direction to simulate expected changes in weather for three climatic periods (1981-2010, 2011-2040, and 2041-2070). Overall, the wildfire simulations showed very slight changes in flame length, while other outputs such as burn probability and fire size increased significantly in the second future period (2041-2070), especially in the southern portion of the study area. The projected changes in fuel moisture could result in a lengthening of the fire season for the entire study area. This work represents the first application in Europe of a methodology based on high-resolution (250 m) landscape wildfire modeling to assess potential impacts of climate changes on wildfire exposure at a national scale. The findings can provide information and support in wildfire management planning and fire risk mitigation activities. © 2016 Society for Risk Analysis.

  10. A large-scale test of free-energy simulation estimates of protein-ligand binding affinities.

    PubMed

    Mikulskis, Paulius; Genheden, Samuel; Ryde, Ulf

    2014-10-27

    We have performed a large-scale test of alchemical perturbation calculations with the Bennett acceptance-ratio (BAR) approach to estimate relative affinities for the binding of 107 ligands to 10 different proteins. Employing 20-Å truncated spherical systems and only one intermediate state in the perturbations, we obtain an error of less than 4 kJ/mol for 54% of the studied relative affinities and a precision of 0.5 kJ/mol on average. However, only four of the proteins gave acceptable errors, correlations, and rankings. The results could be improved by using nine intermediate states in the simulations or including the entire protein in the simulations using periodic boundary conditions. However, 27 of the calculated affinities still gave errors of more than 4 kJ/mol, and for three of the proteins the results were not satisfactory. This shows that the performance of BAR calculations depends on the target protein and that several transformations gave poor results owing to limitations in the molecular-mechanics force field or the restricted sampling possible within a reasonable simulation time. Still, the BAR results are better than docking calculations for most of the proteins.
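
    For readers unfamiliar with the BAR estimator named above, the sketch below solves Bennett's self-consistency equation for a single perturbation step, assuming equal numbers of forward and reverse work samples; the Gaussian work distributions are synthetic, constructed merely to satisfy the Crooks symmetry, not output of any force field.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    def bar_delta_f(w_forward, w_reverse, beta=1.0):
        """Solve sum_i f(beta*(W_F_i - dF)) = sum_j f(beta*(W_R_j + dF))
        for dF, with f the Fermi function (equal sample sizes assumed)."""
        fermi = lambda x: 1.0 / (1.0 + np.exp(np.clip(x, -500.0, 500.0)))
        def objective(df):
            return (fermi(beta * (w_forward - df)).sum()
                    - fermi(beta * (w_reverse + df)).sum())
        return brentq(objective, -100.0, 100.0)   # generous bracket

    # Toy Gaussian work values consistent with dF = 2.0 (beta = 1):
    rng = np.random.default_rng(1)
    dF_true, sigma = 2.0, 1.5
    wf = rng.normal(dF_true + sigma**2 / 2.0, sigma, 5000)
    wr = rng.normal(-dF_true + sigma**2 / 2.0, sigma, 5000)
    print(bar_delta_f(wf, wr))   # close to 2.0
    ```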

  11. Potential Predictability of U.S. Summer Climate with "Perfect" Soil Moisture

    NASA Technical Reports Server (NTRS)

    Yang, Fanglin; Kumar, Arun; Lau, K.-M.

    2004-01-01

    The potential predictability of surface-air temperature and precipitation over the United States continent was assessed for a GCM forced by observed sea surface temperatures and an estimate of observed ground soil moisture contents. The latter was obtained by substituting the GCM-simulated precipitation, which is used to drive the GCM's land-surface component, with observed pentad-mean precipitation at each time step of the model's integration. With this substitution, the simulated soil moisture correlates well with an independent estimate of observed soil moisture in all seasons over the entire US continent. Significant enhancements in the predictability of surface-air temperature and precipitation were found in boreal late spring and summer over the US continent. Anomaly pattern correlations of precipitation and surface-air temperature over the US continent in the June-July-August season averaged for the 1979-2000 period increased from 0.01 and 0.06 for the GCM simulations without precipitation substitution to 0.23 and 0.31, respectively, for the simulations with precipitation substitution. Results provide an estimate for the limits of potential predictability if soil moisture variability were to be perfectly predicted. However, this estimate may be model dependent, and needs to be substantiated by other modeling groups.

  12. Thermodynamic interpretation of reactive processes in Ni-Al nanolayers from atomistic simulations

    NASA Astrophysics Data System (ADS)

    Sandoval, Luis; Campbell, Geoffrey H.; Marian, Jaime

    2014-03-01

    Metals that can form intermetallic compounds by exothermic reactions constitute a class of reactive materials with multiple applications. Ni-Al laminates of thin alternating layers are being considered as model nanometric metallic multilayers for studying various reaction processes. However, the reaction kinetics at short timescales after mixing are not entirely understood. In this work, we calculate the free energies of Ni-Al alloys as a function of composition and temperature for different solid phases using thermodynamic integration based on state-of-the-art interatomic potentials. We use this information to interpret molecular dynamics (MD) simulations of bilayer systems at 800 K and zero pressure, both in isothermal and isenthalpic conditions. We find that a disordered phase always forms upon mixing as a precursor to a more stable nanocrystalline B2 phase. We construe the reactions observed in terms of thermodynamic trajectories governed by the state variables computed. Simulated times of up to 30 ns were achieved, providing a window to phenomena not previously observed in MD simulations. Our results provide insight into the early experimental reaction timescales and suggest that the path (segregated reactants) → (disordered phase) → (B2 structure) is always realized irrespective of the imposed boundary conditions.
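
    A minimal sketch of the thermodynamic-integration step named above, assuming the ensemble average of dU/dλ has already been sampled at a set of coupling values λ; the curve below is a synthetic stand-in, not output of the Ni-Al interatomic potentials.

    ```python
    import numpy as np

    lambdas = np.linspace(0.0, 1.0, 11)
    avg_dU_dlam = 5.0 * (1.0 - lambdas)**2 - 2.0   # illustrative MD averages
    delta_F = np.trapz(avg_dU_dlam, lambdas)        # F(1) - F(0)
    print(f"free-energy difference: {delta_F:.3f} (model units)")
    ```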

  13. Juno Mission Simulation

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Weidner, Richard J.

    2008-01-01

    The Juno spacecraft is planned to launch in August of 2012 and would arrive at Jupiter four years later. The spacecraft would spend more than one year orbiting the planet and investigating the existence of an ice-rock core; determining the amount of global water and ammonia present in the atmosphere; studying convection and deep-wind profiles in the atmosphere; investigating the origin of the Jovian magnetic field; and exploring the polar magnetosphere. Juno mission management is responsible for mission and navigation design, mission operation planning, and ground-data-system development. In order to ensure successful mission management from initial checkout to final de-orbit, it is critical to share a common vision of the entire mission operation phases with the rest of the project teams. Two major challenges are 1) how to develop a shared vision that can be appreciated by all of the project teams of diverse disciplines and expertise, and 2) how to continuously evolve a shared vision as the project lifecycle progresses from formulation phase to operation phase. The Juno mission simulation team addresses these challenges by developing agile and progressive mission models, operation simulations, and real-time visualization products. This paper presents mission simulation visualization network (MSVN) technology that has enabled a comprehensive mission simulation suite (MSVN-Juno) for the Juno project.

  14. Time-Lapse Video of SLS Engine Section Test Article Being Stacked at Michoud

    NASA Image and Video Library

    2017-04-25

    This time-lapse video shows the Space Launch System engine section structural qualification test article being stacked at NASA's Michoud Assembly Facility in New Orleans. The rocket's engine section is the bottom of the core stage and houses the four RS-25 engines. The engine section test article was moved to Michoud's Cell A in Building 110 for vertical stacking with hardware that simulates the rocket's liquid hydrogen tank, which is the fuel tank that joins to the engine section. Once stacked, the entire test article will load onto the barge Pegasus and ship to NASA's Marshall Space Flight Center in Huntsville, Alabama. There, it will be subjected to millions of pounds of force during testing to ensure the hardware can withstand the incredible stresses of launch.

  15. Digitally-bypassed transducers: interfacing digital mockups to real-time medical equipment.

    PubMed

    Sirowy, Scott; Givargis, Tony; Vahid, Frank

    2009-01-01

    Medical device software is sometimes initially developed by using a PC simulation environment that executes models of both the device and a physiological system, and then later by connecting the actual medical device to a physical mockup of the physiological system. An alternative is to connect the medical device to a digital mockup of the physiological system, such that the device believes it is interacting with a physiological system, but in fact all interaction is entirely digital. Developing medical device software by interfacing with a digital mockup enables development without costly or dangerous physical mockups, and enables execution that is faster or slower than real time. We introduce digitally-bypassed transducers, which involve a small amount of hardware and software additions, and which enable interfacing with digital mockups.

  16. LUXSim: A component-centric approach to low-background simulations

    DOE PAGES

    Akerib, D. S.; Bai, X.; Bedikian, S.; ...

    2012-02-13

    Geant4 has been used throughout the nuclear and high-energy physics community to simulate energy depositions in various detectors and materials. These simulations have mostly been run with a source beam outside the detector. In the case of low-background physics, however, a primary concern is the effect on the detector from radioactivity inherent in the detector parts themselves. From this standpoint, there is no single source or beam, but rather a collection of sources with potentially complicated spatial extent. LUXSim is a simulation framework used by the LUX collaboration that takes a component-centric approach to event generation and recording. A new set of classes allows for multiple radioactive sources to be set within any number of components at run time, with the entire collection of sources handled within a single simulation run. Various levels of information can also be recorded from the individual components, with these record levels also being set at runtime. This flexibility in both source generation and information recording is possible without the need to recompile, reducing the complexity of code management and the proliferation of versions. Within the code itself, casting geometry objects within this new set of classes rather than as the default Geant4 classes automatically extends this flexibility to every individual component. No additional work is required on the part of the developer, reducing development time and increasing confidence in the results. Here, we describe the guiding principles behind LUXSim, detail some of its unique classes and methods, and give examples of usage.

  17. Modeling and Simulation of the Off-gas in an Electric Arc Furnace

    NASA Astrophysics Data System (ADS)

    Meier, Thomas; Gandt, Karima; Echterhof, Thomas; Pfeifer, Herbert

    2017-12-01

    The following paper describes an approach to process modeling and simulation of the gas phase in an electric arc furnace (EAF). The work presented represents the continuation of research by Logar, Dovžan, and Škrjanc on modeling the heat and mass transfer and the thermochemistry in an EAF. Due to the lack of off-gas measurements, Logar et al. modeled a simplified gas phase considering five gas components and simplified chemical reactions. The off-gas is one of the main continuously measurable EAF process values, and the off-gas flow represents a heat loss of up to 30 pct of the entire EAF energy input. Therefore, gas phase modeling offers further development opportunities for future EAF optimization. This paper presents the enhancement of the previous EAF gas phase modeling through the consideration of additional gas components and a more detailed heat and mass transfer modeling. To avoid an increase in simulation time due to the more complex modeling, the EAF model has been newly implemented to use an efficient numerical solver for ordinary differential equations. Compared to the original model, the chemical components H2, H2O, and CH4 are included in the gas phase and equilibrium reactions are implemented. The results show high levels of similarity between the measured operational data from an industrial-scale EAF and the theoretical data from the simulation within a reasonable simulation time. In the future, the dynamic EAF model will be applicable for on- and offline optimizations, e.g., to analyze alternative input materials and modes of operation.
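
    A hedged sketch of the modeling approach described above, with the gas phase reduced to a small ODE system handed to an efficient stiff solver; the species subset, lumped rates, and source terms are placeholders, not the calibrated EAF model.

    ```python
    from scipy.integrate import solve_ivp

    def gas_phase(t, y, k_ox=0.8, k_out=0.05, co_in=1.0):
        co, co2, h2 = y
        r = k_ox * co                        # lumped CO -> CO2 oxidation rate
        return [co_in - r - k_out * co,      # CO: source - oxidation - off-gas
                r - k_out * co2,             # CO2
                -k_out * h2]                 # H2 simply vented

    sol = solve_ivp(gas_phase, (0.0, 120.0), [0.2, 0.1, 0.05], method="LSODA")
    print(sol.y[:, -1])                      # species amounts at final time
    ```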

  18. Accurate electrical prediction of memory array through SEM-based edge-contour extraction using SPICE simulation

    NASA Astrophysics Data System (ADS)

    Shauly, Eitan; Rotstein, Israel; Peltinov, Ram; Latinski, Sergei; Adan, Ofer; Levi, Shimon; Menadeva, Ovadya

    2009-03-01

    The continued transistor scaling efforts, toward smaller devices with similar (or larger) drive current per um and faster operation, increase the challenge of predicting and controlling the transistor off-state current. Typically, electrical simulators like SPICE use the design intent (as-drawn GDS data). In more sophisticated cases, the simulators are fed with the pattern after lithography and etch process simulations. As the importance of electrical simulation accuracy increases and leakage becomes more dominant, there is a need to feed these simulators with more accurate information extracted from physical on-silicon transistors. Our methodology to predict changes in device performance due to systematic lithography and etch effects was used in this paper. In general, the methodology consists of using OPCCmaxTM for systematic Edge-Contour-Extraction (ECE) from transistors, following the manufacturing process and including any image distortions like line-end shortening, corner rounding, and line-edge roughness. These measurements are used for SPICE modeling. A possible application of this new metrology is to provide, ahead of time, physical and electrical statistical data, improving time to market. In this work, we applied our methodology to analyze small and large arrays of 2.14um2 6T-SRAM, manufactured using the Tower Standard Logic for General Purposes Platform. 4 out of the 6 transistors used "U-Shape AA", known to have higher variability. The predicted electrical performance of the transistors' drive current and leakage current, in terms of nominal values and variability, is presented. We also used the methodology to analyze an entire SRAM block array. A study of isolation leakage and variability is also presented.

  19. Adaptive frequency-domain equalization for the transmission of the fundamental mode in a few-mode fiber.

    PubMed

    Bai, Neng; Xia, Cen; Li, Guifang

    2012-10-08

    We propose and experimentally demonstrate single-carrier adaptive frequency-domain equalization (SC-FDE) to mitigate multipath interference (MPI) for the transmission of the fundamental mode in a few-mode fiber. The FDE approach reduces computational complexity significantly compared to the time-domain equalization (TDE) approach while maintaining the same performance. Both FDE and TDE methods are evaluated by simulating long-haul fundamental-mode transmission using a few-mode fiber. For the fundamental mode operation, the required tap length of the equalizer depends on the differential mode group delay (DMGD) of a single span rather than DMGD of the entire link.
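
    A minimal sketch of block frequency-domain equalization as contrasted with TDE above: one FFT, a per-bin one-tap equalizer, one inverse FFT. The two-path channel below is a toy stand-in for multipath interference, not measured few-mode-fiber DMGD data.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    N = 1024
    tx = rng.choice([-1.0, 1.0], N) + 1j * rng.choice([-1.0, 1.0], N)  # QPSK-like

    h = np.zeros(N, complex)
    h[0], h[20] = 1.0, 0.15                       # main path + delayed MPI path
    H = np.fft.fft(h)
    rx = np.fft.ifft(np.fft.fft(tx) * H)          # circular channel (CP assumed)

    eq = np.conj(H) / (np.abs(H)**2 + 1e-3)       # MMSE-like one-tap weights
    out = np.fft.ifft(np.fft.fft(rx) * eq)
    print(np.max(np.abs(out - tx)))               # small residual error
    ```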

  20. Transient response in granular quasi-two-dimensional bounded heap flow.

    PubMed

    Xiao, Hongyi; Ottino, Julio M; Lueptow, Richard M; Umbanhowar, Paul B

    2017-10-01

    We study the transition between steady flows of noncohesive granular materials in quasi-two-dimensional bounded heaps by suddenly changing the feed rate. In both experiments and simulations, the primary feature of the transition is a wedge of flowing particles that propagates downstream over the rising free surface with a wedge front velocity inversely proportional to the square root of time. An additional longer duration transient process continues after the wedge front reaches the downstream wall. The entire transition is well modeled as a moving boundary problem with a diffusionlike equation derived from local mass balance and a local relation between the flux and the surface slope.

  1. Near-surface coherent structures explored by large eddy simulation of entire tropical cyclones.

    PubMed

    Ito, Junshi; Oizumi, Tsutao; Niino, Hiroshi

    2017-06-19

    Taking advantage of the huge computational power of a massively parallel supercomputer (the K computer), this study conducts large eddy simulations of entire tropical cyclones by employing a numerical weather prediction model, and explores near-surface coherent structures. The maximum of the near-surface wind changes little from that simulated in coarse-resolution runs. Three kinds of coherent structures appear inside the boundary layer. The first is a Type-A roll, which is caused by an inflection-point instability of the radial flow and prevails outside the radius of maximum wind. The second is a Type-B roll, which also appears to be caused by an inflection-point instability, but of both radial and tangential winds. Its roll axis is almost orthogonal to that of the Type-A roll. The third is a Type-C roll, which occurs inside the radius of maximum wind and only near the surface. It transports horizontal momentum in an up-gradient sense and causes the largest gusts.

  2. Mars Smart Lander Parachute Simulation Model

    NASA Technical Reports Server (NTRS)

    Queen, Eric M.; Raiszadeh, Ben

    2002-01-01

    A multi-body flight simulation for the Mars Smart Lander has been developed that includes six degree-of-freedom rigid-body models for both the supersonically-deployed and subsonically-deployed parachutes. This simulation is designed to be incorporated into a larger simulation of the entire entry, descent and landing (EDL) sequence. The complete end-to-end simulation will provide attitude history predictions of all bodies throughout the flight as well as loads on each of the connecting lines. Other issues such as recontact with jettisoned elements (heat shield, back shield, parachute mortar covers, etc.), design of parachute and attachment points, and desirable line properties can also be addressed readily using this simulation.

  3. Simulation Training Versus Real Time Console Training for New Flight Controllers

    NASA Technical Reports Server (NTRS)

    Heaton, Amanda

    2010-01-01

    For new flight controllers, the two main learning tools are simulations and real-time console performance training. These benefit new flight controllers in different ways and could possibly be improved. Simulations: a) Allow for mistakes without serious consequences. b) Let new flight controllers learn the working style of other new flight controllers. c) Let new flight controllers eventually feel that they have mastered the sim world and therefore must be competent in the real-time world too. Real time: a) Shows new flight controllers some of the unique problems that develop and have to be accounted for when dealing with certain payloads or systems. b) Lets new flight controllers experience handovers - gathering information from the previous shift on what the room needs to be aware of and what still needs to be done. c) Gives new flight controllers confidence that they can succeed in the position they are training for when they solve real anomalies. How sims could be improved to be more like real-time ops for the ISS Operations Controller position: a) Operations Change Requests to review. b) Fewer anomalies (but still more than real time, for practice). c) Payload Planning Manager handover sheet for the E-1 and E-3 reviews. d) A flight note in the system with at least one comment to verify for the E-1 and E-3 reviews. How real-time console performance training could be improved for the ISS Operations Controller position: a) Schedule the new flight controller to be on console for four days but with a different certified person each day. This forces them to be the source of knowledge about every OCR in progress, everything that has happened in those few days, and every activity on the timeline. Constellation program flight controllers will have to learn entirely from simulations, thereby losing some of the elements that they will need to have experience with for real-time ops. It may help them to practice real-time console performance training on the International Space Station or Space Shuttle to gather some general anomaly-resolution and day-to-day task-management skills.

  4. Anelastic sensitivity kernels with parsimonious storage for adjoint tomography and full waveform inversion

    NASA Astrophysics Data System (ADS)

    Komatitsch, Dimitri; Xie, Zhinan; Bozdaǧ, Ebru; Sales de Andrade, Elliott; Peter, Daniel; Liu, Qinya; Tromp, Jeroen

    2016-09-01

    We introduce a technique to compute exact anelastic sensitivity kernels in the time domain using parsimonious disk storage. The method is based on a reordering of the time loop of time-domain forward/adjoint wave propagation solvers combined with the use of a memory buffer. It avoids instabilities that occur when time-reversing dissipative wave propagation simulations. The total number of required time steps is unchanged compared to usual acoustic or elastic approaches. The cost is reduced by a factor of 4/3 compared to the case in which anelasticity is partially accounted for by accommodating the effects of physical dispersion. We validate our technique by performing a test in which we compare the Kα sensitivity kernel to the exact kernel obtained by saving the entire forward calculation. This benchmark confirms that our approach is also exact. We illustrate the importance of including full attenuation in the calculation of sensitivity kernels by showing significant differences with physical-dispersion-only kernels.
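
    A hedged sketch of the parsimonious-storage idea: rather than saving the entire forward wavefield, keep a sparse buffer of checkpoints and recompute short forward segments while stepping the adjoint backwards. The step function and adjoint field below are toy stand-ins, not the spectral-element solver.

    ```python
    import numpy as np

    def forward_step(u):                      # placeholder forward time step
        return 0.99 * u + 0.01

    def adjoint_with_checkpoints(u0, n_steps, buffer_every=100):
        checkpoints = {0: u0.copy()}
        u = u0.copy()
        for t in range(1, n_steps + 1):       # forward sweep, sparse storage
            u = forward_step(u)
            if t % buffer_every == 0:
                checkpoints[t] = u.copy()
        kernel = np.zeros_like(u0)
        for t in range(n_steps, 0, -1):       # backward (adjoint) sweep
            base = (t - 1) // buffer_every * buffer_every
            u = checkpoints[base].copy()
            for _ in range(t - 1 - base):     # recompute state at step t-1
                u = forward_step(u)
            adjoint = 1.0                     # toy adjoint field at step t
            kernel += adjoint * u             # accumulate kernel integrand
        return kernel

    print(adjoint_with_checkpoints(np.ones(4), n_steps=1000)[0])
    ```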

  5. Noise Simulations of the High-Lift Common Research Model

    NASA Technical Reports Server (NTRS)

    Lockard, David P.; Choudhari, Meelan M.; Vatsa, Veer N.; O'Connell, Matthew D.; Duda, Benjamin; Fares, Ehab

    2017-01-01

    The PowerFLOW(TradeMark) code has been used to perform numerical simulations of the high-lift version of the Common Research Model (HL-CRM) that will be used for experimental testing of airframe noise. Time-averaged surface pressure results from PowerFLOW(TradeMark) are found to be in reasonable agreement with those from steady-state computations using FUN3D. Surface pressure fluctuations are highest around the slat break and nacelle/pylon region, and synthetic array beamforming results also indicate that this region is the dominant noise source on the model. The gap between the slat and pylon on the HL-CRM is not realistic for modern aircraft, and most nacelles include a chine that is absent in the baseline model. To account for those effects, additional simulations were completed with a chine and with the slat extended into the pylon. The case with the chine was nearly identical to the baseline, and the slat extension resulted in higher surface pressure fluctuations but slightly reduced radiated noise. The full-span slat geometry without the nacelle/pylon was also simulated and found to be around 10 dB quieter than the baseline over almost the entire frequency range. The current simulations are still considered preliminary as changes in the radiated acoustics are still being observed with grid refinement, and additional simulations with finer grids are planned.

  6. Monte Carlo simulation of chemistry following radiolysis with TOPAS-nBio

    NASA Astrophysics Data System (ADS)

    Ramos-Méndez, J.; Perl, J.; Schuemann, J.; McNamara, A.; Paganetti, H.; Faddegon, B.

    2018-05-01

    Simulation of water radiolysis and the subsequent chemistry provides important information on the effect of ionizing radiation on biological material. The Geant4 Monte Carlo toolkit has added chemical processes via the Geant4-DNA project. The TOPAS tool simplifies the modeling of complex radiotherapy applications with Geant4 without requiring advanced computational skills, extending the pool of users. Thus, a new extension to TOPAS, TOPAS-nBio, is under development to facilitate the configuration of track-structure simulations as well as water radiolysis simulations with Geant4-DNA for radiobiological studies. In this work, radiolysis simulations were implemented in TOPAS-nBio. Users may now easily add chemical species and their reactions, and set parameters including branching ratios, dissociation schemes, diffusion coefficients, and reaction rates. In addition, parameters for the chemical stage were re-evaluated and updated from those used by default in Geant4-DNA to improve the accuracy of chemical yields. Simulation results of time-dependent and LET-dependent primary yields Gx (chemical species per 100 eV deposited) produced at neutral pH and 25 °C by short track-segments of charged particles were compared to published measurements. The LET range was 0.05-230 keV/µm. The calculated Gx values for electrons satisfied the material balance equation within 0.3%; similarly for protons, albeit with a long calculation time. A smaller geometry was used to speed up proton and alpha simulations, with an acceptable difference in the balance equation of 1.3%. Available experimental data of time-dependent G-values agreed with simulated results within 7% ± 8% over the entire time range for some species, within 3% ± 4% over the full time range for others, and for H2O2 from 49% ± 7% at the earliest stages to 3% ± 12% at saturation. For the LET-dependent Gx, the mean ratios to the experimental data were 1.11 ± 0.98, 1.21 ± 1.11, 1.05 ± 0.52, 1.23 ± 0.59, and 1.49 ± 0.63 (1 standard deviation) for the individual species (including H2 and H2O2), respectively. In conclusion, radiolysis and subsequent chemistry with Geant4-DNA has been successfully incorporated in TOPAS-nBio. Results are in reasonable agreement with published measured and simulated data.

  7. Exploding Nitromethane in Silico, in Real Time.

    PubMed

    Fileti, Eudes Eterno; Chaban, Vitaly V; Prezhdo, Oleg V

    2014-10-02

    Nitromethane (NM) is widely applied in chemical technology as a solvent for extraction, cleaning, and chemical synthesis. NM was considered safe for a long time, until a railroad tanker car exploded in 1958. We investigate the detonation kinetics and explosion reaction mechanisms in a variety of systems consisting of NM, molecular oxygen, and water vapor. Reactive molecular dynamics allows us to simulate reactions in the time domain, as they occur in real life. The high polarity of the NM molecule is shown to play a key role, driving the first exothermic step of the reaction. Rapid temperature and pressure growth stimulate the subsequent reaction steps. Oxygen is important for faster oxidation, and its optimal concentration is in agreement with the proposed reaction mechanism. Addition of water (50 mol %) inhibits detonation; however, water does not prevent detonation entirely. The reported results provide important insights for improving applications of NM and preserving the safety of industrial processes.

  8. Hardware-in-the-Loop Rendezvous Tests of a Novel Actuators Command Concept

    NASA Astrophysics Data System (ADS)

    Gomes dos Santos, Willer; Marconi Rocco, Evandro; Boge, Toralf; Benninghoff, Heike; Rems, Florian

    2016-12-01

    Integration, test, and validation results, in a real-time environment, of a novel concept for spacecraft control are presented in this paper. The proposed method commands a group of actuators simultaneously, optimizing a given set of objective functions based on a multiobjective optimization technique. Since close-proximity maneuvers play an important role in orbital servicing missions, the entire GNC system has been integrated and tested at a hardware-in-the-loop (HIL) rendezvous and docking simulator known as the European Proximity Operations Simulator (EPOS). During the test campaign at the EPOS facility, a visual camera was used to provide the necessary measurements for calculating the relative position with respect to the target satellite during closed-loop simulations. In addition, two different configurations of spacecraft control have been considered in this paper: a thruster reaction control system and a mixed actuators mode which includes thrusters, reaction wheels, and magnetic torque rods. At EPOS, results of HIL closed-loop tests have demonstrated that a safe and stable rendezvous approach can be achieved with the proposed GNC loop.

  9. Numerical magnetohydrodynamic simulations of expanding flux ropes: Influence of boundary driving

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tacke, Thomas; Dreher, Jürgen; Sydora, Richard D.

    2013-07-15

    The expansion dynamics of a magnetized, current-carrying plasma arch is studied by means of time-dependent ideal MHD simulations. Initial conditions model the setup used in recent laboratory experiments that in turn simulate coronal loops [J. Tenfelde et al., Phys. Plasmas 19, 072513 (2012); E. V. Stenson and P. M. Bellan, Plasma Phys. Controlled Fusion 54, 124017 (2012)]. Boundary conditions of the electric field at the “lower” boundary, intersected by the arch, are chosen such that poloidal magnetic flux is injected into the domain, either localized at the arch footpoints themselves or halfway between them. These conditions are motivated by the tangential electric field expected to exist in the laboratory experiments due to the external circuit that drives the plasma current. The boundary driving is found to systematically enhance the expansion velocity of the plasma arch. While perturbations at the arch footpoints also deform its legs and create characteristic elongated segments, a perturbation between the footpoints tends to push the entire structure upwards, retaining an ellipsoidal shape.

  10. Modelling the impact of climate change and atmospheric N deposition on French forests biodiversity.

    PubMed

    Rizzetto, Simon; Belyazid, Salim; Gégout, Jean-Claude; Nicolas, Manuel; Alard, Didier; Corcket, Emmanuel; Gaudio, Noémie; Sverdrup, Harald; Probst, Anne

    2016-06-01

    A dynamic coupled biogeochemical-ecological model was used to simulate the effects of nitrogen deposition and climate change on plant communities at three forest sites in France. The three sites had different forest covers (sessile oak, Norway spruce, and silver fir), three nitrogen loads ranging from relatively low to high, different climatic regions, and different soil types. Both the availability of vegetation time series and the environmental niches of the understory species made it possible to evaluate the model's predictions of the composition of the three plant communities. The calibration of the environmental niches was successful, with reasonably high model performance across all three sites. The model simulations of two climatic and two deposition scenarios showed that climate change may entirely compromise the eventual recovery from eutrophication of the simulated plant communities in response to the reductions in nitrogen deposition. The interplay between climate and deposition was strongly governed by site characteristics and histories in the long term, while forest management remained the main driver of change in the short term. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Performance of the Satellite Test Assistant Robot in JPL's Space Simulation Facility

    NASA Technical Reports Server (NTRS)

    Mcaffee, Douglas; Long, Mark; Johnson, Ken; Siebes, Georg

    1995-01-01

    An innovative new telerobotic inspection system called STAR (the Satellite Test Assistant Robot) has been developed to assist engineers as they test new spacecraft designs in simulated space environments. STAR operates inside the ultra-cold, high-vacuum, test chambers and provides engineers seated at a remote Operator Control Station (OCS) with high resolution video and infrared (IR) images of the flight articles under test. STAR was successfully proof tested in JPL's 25-ft (7.6-m) Space Simulation Chamber where temperatures ranged from +85 C to -190 C and vacuum levels reached 5.1 x 10(exp -6) torr. STAR's IR Camera was used to thermally map the entire interior of the chamber for the first time. STAR also made several unexpected and important discoveries about the thermal processes occurring within the chamber. Using a calibrated test fixture arrayed with ten sample spacecraft materials, the IR camera was shown to produce highly accurate surface temperature data. This paper outlines STAR's design and reports on significant results from the thermal vacuum chamber test.

  12. Global ice sheet/RSL simulations using the higher-order Ice Sheet System Model.

    NASA Astrophysics Data System (ADS)

    Larour, E. Y.; Ivins, E. R.; Adhikari, S.; Schlegel, N.; Seroussi, H. L.; Morlighem, M.

    2017-12-01

    Relative sea-level rise is driven by processes that are intimately linked to the evolution of glacial areas and ice sheets in particular. So far, most Earth System models capable of projecting the evolution of RSL on decadal to centennial time scales have relied on offline interactions between RSL and ice sheets. In particular, grounding line and calving front dynamics have not been modeled in a way that is tightly coupled with Elasto-Static Adjustment (ESA) and/or Glacial-Isostatic Adjustment (GIA). Here, we present a new simulation of the entire Earth System in which both the Greenland and Antarctica ice sheets are tightly coupled to an RSL model that includes both ESA and GIA at resolutions and time scales compatible with processes such as grounding line dynamics for Antarctic ice shelves and calving front dynamics for Greenland marine-terminating glaciers. The simulations rely on the Ice Sheet System Model (ISSM) and show the impact of higher-order ice flow dynamics and coupling feedbacks between ice flow and RSL. We quantify the exact impact of ESA and GIA inclusion on grounding line evolution for large ice shelves such as the Ronne and Ross ice shelves, as well as the Amundsen Sea Embayment ice streams, and demonstrate how offline vs. online RSL simulations diverge in the long run, and the consequences for predictions of sea-level rise. This work was performed at the California Institute of Technology's Jet Propulsion Laboratory under a contract with the National Aeronautics and Space Administration's Cryosphere Science Program.

  13. PM2.5 Population Exposure in New Delhi Using a Probabilistic Simulation Framework.

    PubMed

    Saraswat, Arvind; Kandlikar, Milind; Brauer, Michael; Srivastava, Arun

    2016-03-15

    This paper presents a Geographical Information System (GIS) based probabilistic simulation framework to estimate PM2.5 population exposure in New Delhi, India. The framework integrates PM2.5 output from spatiotemporal LUR models and trip distribution data using a Gravity model based on zonal data for population, employment, and enrollment in educational institutions. Time-activity patterns were derived from a survey of randomly sampled individuals (n = 1012) and in-vehicle exposure was estimated using microenvironmental monitoring data based on field measurements. We simulated population exposure for three different scenarios to capture stay-at-home populations (Scenario 1), working population exposed to near-road concentrations during commutes (Scenario 2), and the working population exposed to on-road concentrations during commutes (Scenario 3). Simulated annual average levels of PM2.5 exposure across the entire city were very high, and particularly severe in the winter months: ∼200 μg m(-3) in November, roughly four times higher compared to the lower levels in the monsoon season. Mean annual exposures ranged from 109 μg m(-3) (IQR: 97-120 μg m(-3)) for Scenario 1, to 121 μg m(-3) (IQR: 110-131 μg m(-3)) and 125 μg m(-3) (IQR: 114-136 μg m(-3)) for Scenarios 2 and 3, respectively. Ignoring the effects of mobility causes the average annual PM2.5 population exposure to be underestimated by only 11%.
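
    A minimal sketch of the time-activity weighting implied by the three scenarios above: exposure as a time-weighted average of microenvironment concentrations. The hours and concentration values are illustrative, not the Delhi survey data.

    ```python
    def time_weighted_exposure(schedule):
        """schedule: list of (hours_per_day, pm25_ug_m3) per microenvironment."""
        total = sum(h for h, _ in schedule)
        return sum(h * c for h, c in schedule) / total

    stay_at_home = [(24.0, 109.0)]                             # Scenario 1-like
    commuter = [(21.0, 109.0), (2.0, 150.0), (1.0, 250.0)]     # home, work, road
    print(time_weighted_exposure(stay_at_home))
    print(time_weighted_exposure(commuter))                    # mobility raises it
    ```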

  14. Variational mixed quantum/semiclassical simulation of dihalogen guest and rare-gas solid host dynamics

    NASA Astrophysics Data System (ADS)

    Cheng, Xiaolu; Cina, Jeffrey A.

    2014-07-01

    A variational mixed quantum-semiclassical theory for the internal nuclear dynamics of a small molecule and the induced small-amplitude coherent motion of a low-temperature host medium is developed, tested, and used to simulate the temporal evolution of nonstationary states of the internal molecular and surrounding medium degrees of freedom. In this theory, termed the Fixed Vibrational Basis/Gaussian Bath (FVB/GB) method, the system is treated fully quantum mechanically while Gaussian wave packets are used for the bath degrees of freedom. An approximate time-dependent wave function of the entire model is obtained instead of just a reduced system density matrix, so the theory enables the analysis of the entangled system and bath dynamics that ensues following initial displacement of the internal-molecular (system) coordinate from its equilibrium position. The norm- and energy-conserving properties of the propagation of our trial wave function are natural consequences of the Dirac-Frenkel-McLachlan variational principle. The variational approach also stabilizes the time evolution in comparison to the same ansatz propagated under a previously employed locally quadratic approximation to the bath potential and system-bath interaction terms in the bath-parameter equations of motion. Dynamics calculations are carried out for molecular iodine in a 2D krypton lattice that reveal both the time-course of vibrational decoherence and the details of host-atom motion accompanying energy dissipation and dephasing. This work sets the stage for the comprehensive simulation of ultrafast time-resolved optical experiments on small molecules in low-temperature solids.

  15. Comparison of region-of-interest-averaged and pixel-averaged analysis of DCE-MRI data based on simulations and pre-clinical experiments

    NASA Astrophysics Data System (ADS)

    He, Dianning; Zamora, Marta; Oto, Aytekin; Karczmar, Gregory S.; Fan, Xiaobing

    2017-09-01

    Differences between region-of-interest (ROI) and pixel-by-pixel analysis of dynamic contrast enhanced (DCE) MRI data were investigated in this study with computer simulations and pre-clinical experiments. ROIs were simulated with 10, 50, 100, 200, 400, and 800 different pixels. For each pixel, a contrast agent concentration as a function of time, C(t), was calculated using the Tofts DCE-MRI model with randomly generated physiological parameters (Ktrans and ve) and the Parker population arterial input function. The average C(t) for each ROI was calculated and then Ktrans and ve for the ROI were extracted. The simulations were run 100 times for each ROI with new Ktrans and ve generated. In addition, white Gaussian noise was added to each C(t) with 3, 6, and 12 dB signal-to-noise ratios. For pre-clinical experiments, Copenhagen rats (n = 6) with prostate tumors implanted in the hind limb were used in this study. The DCE-MRI data were acquired with a temporal resolution of ~5 s in a 4.7 T animal scanner, before, during, and after a bolus injection (<5 s) of Gd-DTPA, for a total imaging duration of ~10 min. Ktrans and ve were calculated in two ways: (i) by fitting C(t) for each pixel, and then averaging the pixel values over the entire ROI, and (ii) by averaging C(t) over the entire ROI, and then fitting the averaged C(t) to extract Ktrans and ve. The simulation results showed that in heterogeneous ROIs, the pixel-by-pixel averaged Ktrans was ~25% to ~50% larger (p < 0.01) than the ROI-averaged Ktrans. At higher noise levels, the pixel-averaged Ktrans was greater than the 'true' Ktrans, but the ROI-averaged Ktrans was lower than the 'true' Ktrans. The ROI-averaged Ktrans was closer to the true Ktrans than the pixel-averaged Ktrans at high noise levels. In pre-clinical experiments, the pixel-by-pixel averaged Ktrans was ~15% larger than the ROI-averaged Ktrans. Overall, with the Tofts model, the extracted physiological parameters from the pixel-by-pixel averages were larger than the ROI averages. These differences were dependent on the heterogeneity of the ROI.
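
    A hedged sketch of the two analysis orders compared above, using a toy mono-exponential uptake curve in place of the full Tofts model; because the fit is nonlinear in the rate constant, the two orders disagree on heterogeneous ROIs, in the same direction reported above.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    model = lambda t, a, k: a * (1.0 - np.exp(-k * t))
    t = np.linspace(0.0, 10.0, 120)
    rng = np.random.default_rng(3)

    # Heterogeneous "ROI": per-pixel rate constants over a wide range.
    ks = rng.uniform(0.1, 2.0, 200)
    curves = np.array([model(t, 1.0, k) for k in ks])

    # (i) fit each pixel, then average the fitted rate constants
    k_pixel = np.mean([curve_fit(model, t, c, p0=[1.0, 0.5])[0][1]
                       for c in curves])
    # (ii) average the curves over the ROI, then fit once
    k_roi = curve_fit(model, t, curves.mean(axis=0), p0=[1.0, 0.5])[0][1]
    print(k_pixel, k_roi)   # pixel-averaged rate exceeds ROI-averaged rate
    ```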

  16. The simulators: truth and power in the psychiatry of José Ingenieros.

    PubMed

    Caponi, Sandra

    2016-01-01

    Using Michel Foucault's lectures on "Psychiatric power" as its starting point, this article analyzes the book Simulación de la locura (The simulation of madness), published in 1903 by the Argentine psychiatrist José Ingenieros. Foucault argues that the problem of simulation permeates the entire history of modern psychiatry. After an initial analysis of José Ingenieros's references to the question of simulation in the struggle for existence, the issue of simulation in pathological states in general is examined, and lastly the simulation of madness and the problem of degeneration. Ingenieros participated in the epistemological and political struggle that took place between expert psychiatrists and simulators over the question of truth.

  17. Method matters: impact of in-scenario instruction on simulation-based teamwork training.

    PubMed

    Escher, Cecilia; Rystedt, Hans; Creutzfeldt, Johan; Meurling, Lisbet; Nyström, Sofia; Dahlberg, Johanna; Edelbring, Samuel; Nordahl Amorøe, Torben; Hult, Håkan; Felländer-Tsai, Li; Abrandt-Dahlgren, Madeleine

    2017-01-01

    The rationale for introducing full-scale patient simulators in training to improve patient safety is to recreate clinical situations in a realistic setting. Although high-fidelity simulators mimic a wide range of human features, simulators differ from the body of a sick patient. The gap between the simulator and the human body implies a need for facilitators to provide information to help participants understand scenarios. The authors aimed at describing different methods that facilitators in our dataset used to provide such extra scenario information and how the different methods to convey information affected how scenarios played out. A descriptive qualitative study was conducted to examine the variation of methods to deliver extra scenario information to participants. A multistage approach was employed. The authors selected film clips from a shared database of 31 scenarios from three participating simulation centers. A multidisciplinary research team performed a collaborative analysis of representative film clips focusing on the interplay between participants, facilitators, and the physical environment. After that, the entire material was revisited to further examine and elaborate the initial findings. The material displayed four distinct methods for facilitators to convey information to participants in simulation-based teamwork training. The choice of method had impact on the participating teams regarding flow of work, pace, and team communication. Facilitators' close access to the teams' activities when present in the simulation suite, either embodied or disembodied in the simulation, facilitated the timing for providing information, which was critical for maintaining the flow of activities in the scenario. The mediation of information by a loudspeaker or an earpiece from the adjacent operator room could be disturbing for team communication. In-scenario instruction is an essential component of simulation-based teamwork training that has been largely overlooked in previous research. The ways in which facilitators convey information about the simulated patient have the potential to shape the simulation activities and thereby serve different learning goals. Although immediate timing to maintain an adequate pace is necessary for professionals to engage in training of medical emergencies, novices may gain from a slower tempo to train complex clinical team tasks systematically.

  18. Advancing Physically-Based Flow Simulations of Alluvial Systems Through Atmospheric Noble Gases and the Novel 37Ar Tracer Method

    NASA Astrophysics Data System (ADS)

    Schilling, Oliver S.; Gerber, Christoph; Partington, Daniel J.; Purtschert, Roland; Brennwald, Matthias S.; Kipfer, Rolf; Hunkeler, Daniel; Brunner, Philip

    2017-12-01

    To provide a sound understanding of the sources, pathways, and residence times of groundwater in alluvial river-aquifer systems, a combined multitracer and modeling experiment was carried out in an important alluvial drinking-water wellfield in Switzerland. 222Rn, 3H/3He, atmospheric noble gases, and the novel 37Ar method were used to quantify residence times and mixing ratios of water from different sources. With a half-life of 35.1 days, 37Ar made it possible to close a critical observational time gap between 222Rn and 3H/3He for residence times of weeks to months. Covering the entire range of residence times of groundwater in alluvial systems revealed that atmospheric noble gases and helium isotopes are tracers well suited for end-member mixing analysis to quantify the fractions of water from different sources in such systems. A comparison between the tracer-based mixing ratios and mixing ratios simulated with a fully-integrated, physically-based flow model showed that models calibrated only against hydraulic heads cannot reliably reproduce mixing ratios or residence times of alluvial river-aquifer systems. However, the tracer-based mixing ratios allowed the identification of an appropriate flow model parametrization. Consequently, for alluvial systems, we recommend combining multitracer studies that cover all relevant residence times with fully-coupled, physically-based flow modeling to better characterize the complex interactions of river-aquifer systems.
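
    A minimal sketch of end-member mixing analysis as named above: solve for the fractions of a few water sources from tracer concentrations, with the fractions non-negative and summing to one. The end-member matrix and sample values are invented, not the wellfield data.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Rows: tracers; columns: end-members (e.g., river water, old groundwater,
    # hillslope recharge). Values are illustrative tracer concentrations.
    E = np.array([[1.20, 0.40, 0.85],
                  [0.10, 0.90, 0.30]])
    sample = np.array([0.80, 0.45])

    # Append the sum-to-one constraint as a heavily weighted extra equation;
    # NNLS then enforces non-negative mixing fractions.
    w = 1e3
    A = np.vstack([E, w * np.ones(E.shape[1])])
    b = np.concatenate([sample, [w]])
    fractions, _ = nnls(A, b)
    print(fractions, fractions.sum())   # mixing ratios, total ~1.0
    ```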

  19. Evaluation of dual multi-mission space exploration vehicle operations during simulated planetary surface exploration

    NASA Astrophysics Data System (ADS)

    Abercromby, Andrew F. J.; Gernhardt, Michael L.; Jadwick, Jennifer

    2013-10-01

    Introduction: A pair of small pressurized rovers (multi-mission space exploration vehicles, or MMSEVs) is at the center of the Global Point-of-Departure architecture for future human lunar exploration. Simultaneous operation of multiple crewed surface assets should maximize productive crew time, minimize overhead, and preserve contingency return paths. Methods: A 14-day mission simulation was conducted in the Arizona desert as part of NASA's 2010 Desert Research and Technology Studies (DRATS) field test. The simulation involved two MMSEV earth-gravity prototypes performing geological exploration under varied operational modes affecting both the extent to which the MMSEVs must maintain real-time communications with the mission control center (Continuous [CC] versus Twice-a-Day [2/D]) and their proximity to each other (Lead-and-Follow [L&F] versus Divide-and-Conquer [D&C]). As part of a minimalist lunar architecture, no communication relay satellites were assumed. Two-person crews (an astronaut and a field geologist) operated each MMSEV, day and night, throughout the entire 14-day mission, leaving only via the suit ports to perform simulated extravehicular activities. Metrics and qualitative observations enabled evaluation of the extent to which the operating modes affected productivity and scientific data quality (SDQ). Results and discussion: SDQ was greater during CC mode than during 2/D mode; metrics showed a marginal increase while qualitative assessments suggested a practically significant difference. For the communications architecture evaluated, significantly more crew time (14% per day) was required to maintain communications during D&C than during L&F (5%) or 2/D (2%), increasing the time required to complete all traverse objectives. Situational awareness of the other vehicle's location, activities, and contingency return constraints was qualitatively enhanced during L&F and 2/D modes due to line-of-sight and direct MMSEV-to-MMSEV communication. Future testing will evaluate approaches to operating without real-time space-to-earth communications and will include quantitative evaluation and comparison of the efficacy of mission operations, science operations, and public outreach operations.

  20. New methodology for dynamic lot dispatching

    NASA Astrophysics Data System (ADS)

    Tai, Wei-Herng; Wang, Jiann-Kwang; Lin, Kuo-Cheng; Hsu, Yi-Chin

    1994-09-01

    This paper presents a new dynamic dispatching rule to improve delivery performance. The dynamic dispatching rule, named `SLACK and OTD (on-time delivery)', is developed to focus on due date and target cycle time in an IC manufacturing environment. The idea uses the traditional SLACK policy to control the long-term due date and a new OTD policy to reflect the short-term stage queue time. Through fuzzy theory, these two policies are combined into a dispatching controller that defines lot priority across the entire production line. The system also automatically updates lot priority according to the current line situation. Previously, wafer dispatching was controlled by a critical ratio, which led to low customer satisfaction: the overall slack time in the front end of the process was greater than in the rear end, revealing that machines in the rear end were overloaded by rush orders. With SLACK and OTD in use, due date control has gradually improved. A wafer with either a long stage queue time or an urgent due date is pushed through the production line instead of being jammed in the front end. A demand-pull system is also developed to satisfy not only the due date but also the quantity of monthly demand. The SLACK and OTD rule has been implemented at Taiwan Semiconductor Manufacturing Company for eight months with beneficial results. In order to clearly monitor the SLACK and OTD policy, a method called the box chart is used to simulate the entire production system. From the box chart, we can not only monitor the result of the decision policy but also display the production situation on a density figure. The production cycle time and delivery situation can also be investigated.
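
    To illustrate how such a controller might combine the two signals, the hedged sketch below scores lots from a long-term slack term and a short-term queue-time term. The field names and weights are assumptions, and a weighted sum stands in for the paper's fuzzy controller:

      # Hypothetical SLACK + OTD priority score (the paper's fuzzy membership
      # functions are not specified here; a weighted sum stands in for them).
      def slack_term(due_date, now, remaining_cycle_time):
          """Long-term urgency: normalized time buffer to the due date."""
          return (due_date - now - remaining_cycle_time) / max(remaining_cycle_time, 1e-9)

      def otd_term(stage_queue_time, target_queue_time):
          """Short-term urgency: stage queue time relative to its target."""
          return stage_queue_time / max(target_queue_time, 1e-9)

      def lot_priority(lot, now, w_slack=0.5, w_otd=0.5):
          s = slack_term(lot["due_date"], now, lot["remaining_ct"])
          q = otd_term(lot["queue_time"], lot["target_queue_time"])
          return w_otd * q - w_slack * s   # dispatch lots in descending score

      # A lot with little slack or a long stage queue rises to the top of the
      # dispatch list and is pulled through the line first.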

  1. Visualization and Analysis of Climate Simulation Performance Data

    NASA Astrophysics Data System (ADS)

    Röber, Niklas; Adamidis, Panagiotis; Behrens, Jörg

    2015-04-01

    Visualization is the key process of transforming abstract (scientific) data into a graphical representation to aid in the understanding of the information hidden within the data. Climate simulation data sets are typically quite large, time varying, and consist of many different variables sampled on an underlying grid. A large variety of climate models - and sub-models - exist to simulate various aspects of the climate system. Generally, one is mainly interested in the physical variables produced by the simulation runs, but model developers are also interested in performance data measured along with these simulations. Climate simulation models are carefully developed complex software systems, designed to run in parallel on large HPC systems. An important goal is to utilize the entire hardware as efficiently as possible, that is, to distribute the workload as evenly as possible among the individual components. This is a very challenging task, and detailed performance data, such as timings and cache misses, have to be used to locate and understand performance problems in order to optimize the model implementation. Furthermore, the correlation of performance data to the processes of the application and the sub-domains of the decomposed underlying grid is vital when addressing communication and load imbalance issues. High-resolution climate simulations are carried out on tens to hundreds of thousands of cores, yielding a vast amount of profiling data that cannot be analyzed without appropriate visualization techniques. This PICO presentation displays and discusses the ICON simulation model, which is jointly developed by the Max Planck Institute for Meteorology and the German Weather Service in partnership with DKRZ. The visualization and analysis of the model's performance data allows us to optimize and fine-tune the model, as well as to understand its execution on the HPC system. We show and discuss our workflow, as well as present new ideas and solutions that greatly aided our understanding. The software employed is based on Avizo Green, ParaView, and SimVis, as well as our own software extensions.

  2. Virtual reality in neurosurgical education: part-task ventriculostomy simulation with dynamic visual and haptic feedback.

    PubMed

    Lemole, G Michael; Banerjee, P Pat; Luciano, Cristian; Neckrysh, Sergey; Charbel, Fady T

    2007-07-01

    Mastery of the neurosurgical skill set involves many hours of supervised intraoperative training. Convergence of political, economic, and social forces has limited neurosurgical resident operative exposure. There is a need to develop realistic neurosurgical simulations that reproduce the operative experience, unrestricted by time and patient safety constraints. Computer-based, virtual reality platforms offer just such a possibility. The combination of virtual reality with dynamic three-dimensional stereoscopic visualization and haptic feedback technologies makes realistic procedural simulation possible. Most neurosurgical procedures can be conceptualized and segmented into critical task components, which can be simulated independently or in conjunction with other modules to recreate the experience of a complex neurosurgical procedure. We use the ImmersiveTouch (ImmersiveTouch, Inc., Chicago, IL) virtual reality platform, developed at the University of Illinois at Chicago, to simulate the task of ventriculostomy catheter placement as a proof-of-concept. Computed tomographic data are used to create a virtual anatomic volume. Haptic feedback offers simulated resistance and relaxation with passage of a virtual three-dimensional ventriculostomy catheter through the brain parenchyma into the ventricle. A dynamic three-dimensional graphical interface renders changing visual perspective as the user's head moves. The simulation platform was found to have realistic visual, tactile, and handling characteristics, as assessed by neurosurgical faculty, residents, and medical students. We have developed a realistic, haptics-based virtual reality simulator for neurosurgical education. Our first module recreates a critical component of the ventriculostomy placement task. This approach to task simulation can be assembled in a modular manner to reproduce entire neurosurgical procedures.

  3. Recent progress in plasmonic colour filters for image sensor and multispectral applications

    NASA Astrophysics Data System (ADS)

    Pinton, Nadia; Grant, James; Choubey, Bhaskar; Cumming, David; Collins, Steve

    2016-04-01

    Using nanostructured thin metal films as colour filters offers several important advantages, in particular high tunability across the entire visible spectrum and some of the infrared region, as well as compatibility with conventional CMOS processes. Since 2003, the field of plasmonic colour filters has evolved rapidly and several different designs and materials, or combinations of materials, have been proposed and studied. In this paper we present a simulation study for a single-step lithographically patterned multilayer structure able to provide competitive transmission efficiencies above 40% and, at the same time, a FWHM of the order of 30 nm across the visible spectrum. The total thickness of the proposed filters is less than 200 nm and is constant for every wavelength, unlike e.g. resonant cavity-based filters such as Fabry-Perot, which require a variable stack of several layers according to the working frequency; the passband characteristics are entirely controlled by changing the lithographic pattern. It will also be shown that a key to obtaining a narrow-band optical response lies in the dielectric environment of a nanostructure, and that a symmetric structure is not necessary to ensure good coupling between the SPPs at the top and bottom interfaces. Moreover, an analytical method to evaluate the periodicity, given a specific structure and a desired working wavelength, will be proposed and its accuracy demonstrated. This method conveniently eliminates the need to optimize the design of a filter numerically, i.e. by running several time-consuming simulations with different periodicities.
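
    For context, the usual first-order starting point for such a periodicity estimate is the SPP grating-coupling relation for a square array at normal incidence (a standard relation offered here as an assumption, not necessarily the authors' exact analytical method):

      % SPP momentum-matching estimate for a square lattice of period P:
      \lambda_{\mathrm{res}}(i,j) \approx \frac{P}{\sqrt{i^{2}+j^{2}}}
      \sqrt{\frac{\varepsilon_m\,\varepsilon_d}{\varepsilon_m+\varepsilon_d}}
      % (i,j): grating orders; \varepsilon_m, \varepsilon_d: metal and dielectric
      % permittivities. The dielectric environment enters through \varepsilon_d,
      % consistent with the narrow-band argument above.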

  4. Diagnosis of inconsistencies in multi-year gridded precipitation data over mountainous areas and related impacts on hydrologic simulations

    NASA Astrophysics Data System (ADS)

    Mizukami, N.; Smith, M. B.

    2010-12-01

    It is common for the error characteristics of long-term precipitation data to change over time due to various factors such as gauge relocation and changes in data processing methods. The temporal consistency of precipitation data error characteristics is as important as data accuracy itself for hydrologic model calibration and subsequent use of the calibrated model for streamflow prediction. In mountainous areas, the generation of precipitation grids relies on sparse gage networks, the makeup of which often varies over time. This causes a change in error characteristics of the long-term precipitation data record. We will discuss the diagnostic analysis of the consistency of gridded precipitation time series and illustrate the adverse effect of inconsistent precipitation data on a hydrologic model simulation. We used hourly 4 km gridded precipitation time series over a mountainous basin in the Sierra Nevada Mountains of California from October 1988 through September 2006. The basin is part of the broader study area that served as the focus of the second phase of the Distributed Model Intercomparison Project (DMIP-2), organized by the U.S. National Weather Service (NWS) of the National Oceanographic and Atmospheric Administration (NOAA). To check the consistency of the gridded precipitation time series, double mass analysis was performed using single pixel and basin mean areal precipitation (MAP) values derived from gridded DMIP-2 and Parameter-Elevation Regressions on Independent Slopes Model (PRISM) precipitation data. The analysis leads to the conclusion that, over the entire study period, a clear change in error characteristics in the DMIP-2 data occurred at the beginning of 2003. This matches the timing of one of the major gage network changes. The inconsistency of two MAP time series computed from the gridded precipitation fields over two elevation zones was corrected by adjusting hourly values based on the double mass analysis. We show that model simulations using the adjusted MAP data produce improved streamflow compared to simulations using the inconsistent MAP input data.
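
    A minimal sketch of the double-mass idea used above (illustrative only; the actual DMIP-2 adjustment procedure is more involved, and the breakpoint index is assumed to be known):

      import numpy as np

      def double_mass(test, reference):
          """Cumulative test vs. cumulative reference precipitation; a persistent
          slope change flags a shift in the data's error characteristics."""
          return np.cumsum(test), np.cumsum(reference)

      def breakpoint_adjust(test, reference, break_idx):
          """Rescale post-break values so the post-break double-mass slope
          matches the pre-break slope (hourly values adjusted uniformly)."""
          pre = test[:break_idx].sum() / reference[:break_idx].sum()
          post = test[break_idx:].sum() / reference[break_idx:].sum()
          adjusted = test.astype(float)
          adjusted[break_idx:] *= pre / post
          return adjusted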

  5. Measurement of Meteor Impact Experiments Using Three-Component Particle Image Velocimetry

    NASA Technical Reports Server (NTRS)

    Heineck, James T.; Schultz, Peter H.

    2002-01-01

    The study of hypervelocity impacts has been aggressively pursued for more than 30 years at Ames as a way to simulate meteoritic impacts. Development of experimental methods coupled with new perspectives over this time has greatly improved the understanding of the basic physics and phenomenology of the impact process. These fundamental discoveries have led to novel methods for identifying impact craters and features in craters on both Earth and other planetary bodies. Work done at the Ames Vertical Gun Range led to the description of the mechanics of the Chicxulub crater (a.k.a. K-T crater) on the Yucatan Peninsula, widely considered to be the smoking gun impact that brought an end to the dinosaur era. This is the first attempt worldwide to apply three-component particle image velocimetry (3-D PIV) to measure the trajectory of the entire ejecta curtain simultaneously with the fluid structure resulting from impact dynamics. The science learned in these experiments will build understanding of the entire impact process by simultaneously measuring both ejecta and atmospheric mechanics.

  6. A simple measurement method of molecular relaxation in a gas by reconstructing acoustic velocity dispersion

    NASA Astrophysics Data System (ADS)

    Zhu, Ming; Liu, Tingting; Zhang, Xiangqun; Li, Caiyun

    2018-01-01

    Recently, a decomposition method of acoustic relaxation absorption spectra was used to capture the entire molecular multimode relaxation process of gas. In this method, the acoustic attenuation and phase velocity were measured jointly based on the relaxation absorption spectra. However, fast and accurate measurements of the acoustic attenuation remain challenging. In this paper, we present a method of capturing the molecular relaxation process by only measuring acoustic velocity, without the necessity of obtaining acoustic absorption. The method is based on the fact that the frequency-dependent velocity dispersion of a multi-relaxation process in a gas is the serial connection of the dispersions of interior single-relaxation processes. Thus, one can capture the relaxation times and relaxation strengths of N decomposed single-relaxation dispersions to reconstruct the entire multi-relaxation dispersion using the measurements of acoustic velocity at 2N + 1 frequencies. The reconstructed dispersion spectra are in good agreement with experimental data for various gases and mixtures. The simulations also demonstrate the robustness of our reconstructive method.
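
    As a point of reference, each decomposed process contributes a dispersion step of the textbook single-relaxation form (stated here as an assumption for context, not quoted from the paper):

      % Single-relaxation velocity dispersion with relaxation time \tau:
      c^{2}(\omega) = c_{0}^{2} + \left(c_{\infty}^{2}-c_{0}^{2}\right)
      \frac{\omega^{2}\tau^{2}}{1+\omega^{2}\tau^{2}}
      % c_0: equilibrium (low-frequency) speed; c_\infty: frozen speed.
      % Serially connecting N such steps leaves 2N + 1 unknowns (N relaxation
      % times, N strengths, and c_0), matching the 2N + 1 velocity
      % measurements required above.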

  7. A Hydrological Modeling Framework for Flood Risk Assessment for Japan

    NASA Astrophysics Data System (ADS)

    Ashouri, H.; Chinnayakanahalli, K.; Chowdhary, H.; Sen Gupta, A.

    2016-12-01

    Flooding has been the most frequent natural disaster that claims lives and imposes significant economic losses on human societies worldwide. Japan, with an annual rainfall of up to approximately 4000 mm, is extremely vulnerable to flooding. The focus of this research is to develop a macroscale hydrologic model for simulating flooding toward an improved understanding and assessment of flood risk across Japan. The framework employs a conceptual hydrological model, known as the Probability Distributed Model (PDM), as well as the Muskingum-Cunge flood routing procedure for simulating streamflow. In addition, a Temperature-Index model is incorporated to account for snowmelt and its contribution to streamflow. For an efficient calibration of the model, in terms of computational timing and convergence of the parameters, a set of a priori parameters is obtained based on the relationships between the model parameters and the physical properties of watersheds. In this regard, we have implemented a particle tracking algorithm and a statistical model which use high-resolution Digital Terrain Models to estimate different time-related parameters of the model, such as the time to peak of the unit hydrograph. In addition, global soil moisture and depth data are used to generate a priori estimates of maximum soil moisture capacity, an important parameter of the PDM model. Once the model is calibrated, its performance is examined for Typhoon Nabi, which struck Japan in September 2005 and caused severe flooding throughout the country. The model is also validated for the extreme precipitation event in 2012 which affected Kyushu. In both cases, quantitative measures show that simulated streamflow is in good agreement with gauge-based observations. The model is employed to simulate thousands of possible flood events for all of Japan, which makes a basis for a comprehensive flood risk assessment and loss estimation for the flood insurance industry.
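
    The Temperature-Index (degree-day) snowmelt term mentioned above has a simple canonical form; the sketch below is illustrative, and the melt-factor value is an assumption rather than the calibrated one:

      # Degree-day snowmelt: melt is proportional to air temperature above a base.
      def degree_day_melt(temp_c, melt_factor=3.0, t_base_c=0.0):
          """Daily melt in mm for a given air temperature (deg C); melt_factor
          in mm per deg C per day is a typical but illustrative value."""
          return max(0.0, melt_factor * (temp_c - t_base_c))

      # Example: a 5 deg C day yields 15 mm of melt, which is added to the
      # effective rainfall driving the PDM soil-moisture accounting.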

  8. Procedural wound geometry and blood flow generation for medical training simulators

    NASA Astrophysics Data System (ADS)

    Aras, Rifat; Shen, Yuzhong; Li, Jiang

    2012-02-01

    Efficient application of wound treatment procedures is vital in both emergency room and battle zone scenes. In order to train first responders for such situations, physical casualty simulation kits, which are composed of tens of individual items, are commonly used. As in other training scenarios, computer simulations can be an effective means for wound treatment training. For immersive and high-fidelity virtual reality applications, realistic 3D models are key components. However, creation of such models is a labor-intensive process. In this paper, we propose a procedural wound geometry generation technique that parameterizes key simulation inputs to establish the variability of the training scenarios without the need for labor-intensive remodeling of the 3D geometry. The procedural techniques described in this work are entirely handled by the graphics processing unit (GPU) to enable interactive real-time operation of the simulation and to relieve the CPU for other computational tasks. The visible human dataset is processed and used as a volumetric texture for the internal visualization of the wound geometry. To further enhance the fidelity of the simulation, we also employ a surface flow model for blood visualization. This model is realized as a dynamic texture that is composed of a height field and a normal map and animated at each simulation step on the GPU. The procedural wound geometry and the blood flow model are applied to a thigh model and the efficiency of the technique is demonstrated in a virtual surgery scene.

  9. Performance Summary of the 2006 Community Multiscale Air Quality (CMAQ) Simulation for the AQMEII Project: North American Application

    EPA Science Inventory

    The CMAQ modeling system has been used to simulate the CONUS using 12-km by 12-km horizontal grid spacing for the entire year of 2006 as part of the Air Quality Model Evaluation International initiative (AQMEII). The operational model performance for O3 and PM2.5 …

  10. A simulation environment for assisting system design of coherent laser doppler wind sensor for active wind turbine pitch control

    NASA Astrophysics Data System (ADS)

    Shinohara, Leilei; Pham Tran, Tuan Anh; Beuth, Thorsten; Umesh Babu, Harsha; Heussner, Nico; Bogatscher, Siegwart; Danilova, Svetlana; Stork, Wilhelm

    2013-05-01

    In order to assist the system design of a coherent laser Doppler wind sensor for active pitch control of wind turbine systems (WTS), we developed a numerical simulation environment for modeling and simulation of the sensor system. In this paper we present this simulation concept. In previous works, we have shown the general idea and the possibility of using a low-cost coherent laser Doppler wind sensing system for active pitch control of WTS in order to reduce mechanical stress, increase the WTS lifetime, and therefore reduce the price of electricity from wind energy. Such a system is based on a 1.55 μm continuous-wave (CW) laser plus an erbium-doped fiber amplifier (EDFA) with an output power of 1 W. Within this system, an optical coherent detection method is chosen for the Doppler frequency measurement in the megahertz range. A comparatively low-cost short-coherence-length laser with a fiber delay line is used to achieve multiple-range measurement. In this paper, we show the current results on the improvement of our simulation: we apply a Monte Carlo random generation method for positioning the random particles in the atmosphere, and we extend the simulation to the entire beam-penetrated space by introducing a cylindrical coordinate concept and meshing the entire volume into small elements in order to achieve a faster calculation and more realistic simulation results. In addition, by applying different atmospheric parameters, such as particle sizes and distributions, we can simulate different weather and wind situations.
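
    The megahertz figure quoted above follows directly from the coherent Doppler relation (standard physics, restated here for orientation):

      % Doppler shift for backscatter from particles moving at line-of-sight
      % speed v_los, probed at wavelength \lambda = 1.55 \mu m:
      f_D = \frac{2\,v_{\mathrm{los}}}{\lambda}
      % e.g. v_los = 1 m/s gives f_D = 2/(1.55\times10^{-6}\,\mathrm{m})
      % \approx 1.3 MHz, so wind speeds of a few m/s land in the megahertz band.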

  11. Comparative Analysis of Disruption Tolerant Network Routing Simulations in the One and NS-3

    DTIC Science & Technology

    2017-12-01

    ...real systems with less work compared to ns-2. In order to meet the design goals of ns-3, the entire code structure was changed to a modular design. (Naval Postgraduate School thesis, Monterey, California, March 2016 - December 2017: Comparative Analysis of Disruption Tolerant Network Routing Simulations in the ONE and NS-3.)

  12. Simulation of streamflow, evapotranspiration, and groundwater recharge in the middle Nueces River watershed, south Texas, 1961-2008

    USGS Publications Warehouse

    Dietsch, Benjamin J.; Wehmeyer, Loren L.

    2012-01-01

    Selected results of the model include streamflow yields for the subwatersheds and water-balance information for the Carrizo–Wilcox aquifer outcrop area. For the entire model domain, the area-weighted mean streamflow yield from 1961 to 2008 was 1.12 inches/year. The mean annual rainfall on the outcrop area during the 1961–2008 simulation period was 21.7 inches. Of this rainfall, an annual mean of 20.1 inches (about 93 percent) was simulated as evapotranspiration, 1.2 inches (about 6 percent) was simulated as groundwater recharge, and 0.5 inches (about 2 percent) was simulated as surface runoff.

  13. Case studies on design, simulation and visualization of control and measurement applications using REX control system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ozana, Stepan, E-mail: stepan.ozana@vsb.cz; Pies, Martin, E-mail: martin.pies@vsb.cz; Docekal, Tomas, E-mail: docekalt@email.cz

    REX Control System is a professional advanced tool for the design and implementation of complex control systems that belongs to the softPLC category. It covers the entire process, starting from simulation of the functionality of the application before deployment, through implementation on a real-time target, to analysis, diagnostics, and visualization. Basically it consists of two parts: the development tools and the runtime system. It is also compatible with the Simulink environment, and the way control algorithms are implemented is very similar. The control scheme is finally compiled (using the RexDraw utility) and uploaded onto a chosen real-time target (using the RexView utility). There is a wide variety of hardware platforms and real-time operating systems supported by REX Control System, such as Windows Embedded, Linux, and Linux/Xenomai deployed on SBC, IPC, PAC, Raspberry Pi, and others with many I/O interfaces. It is a modern system designed for both measurement and control applications, offering many additional functions concerning data archiving, visualization based on HTML5, and communication standards. The paper sums up the possibilities of its use in the educational process, focusing on control case studies of physical models with classical and advanced control algorithms.

  14. Molecular dynamics with rigid bodies: Alternative formulation and assessment of its limitations when employed to simulate liquid water

    NASA Astrophysics Data System (ADS)

    Silveira, Ana J.; Abreu, Charlles R. A.

    2017-09-01

    Sets of atoms collectively behaving as rigid bodies are often used in molecular dynamics to model entire molecules or parts thereof. This is a coarse-graining strategy that eliminates degrees of freedom and supposedly admits larger time steps without abandoning the atomistic character of a model. In this paper, we rely on a particular factorization of the rotation matrix to simplify the mechanical formulation of systems containing rigid bodies. We then propose a new derivation for the exact solution of torque-free rotations, which are employed as part of a symplectic numerical integration scheme for rigid-body dynamics. We also review methods for calculating pressure in systems of rigid bodies with pairwise-additive potentials and periodic boundary conditions. Finally, simulations of liquid phases, with special focus on water, are employed to analyze the numerical aspects of the proposed methodology. Our results show that energy drift is avoided for time step sizes up to 5 fs, but only if a proper smoothing is applied to the interatomic potentials. Despite this, the effects of discretization errors are relevant, even for smaller time steps. These errors induce, for instance, a systematic failure of the expected equipartition of kinetic energy between translational and rotational degrees of freedom.

  15. Using triple gamma coincidences with a pixelated semiconductor Compton-PET scanner: a simulation study

    NASA Astrophysics Data System (ADS)

    Kolstein, M.; Chmeissani, M.

    2016-01-01

    The Voxel Imaging PET (VIP) Pathfinder project presents a novel design using pixelated semiconductor detectors for nuclear medicine applications to achieve the intrinsic image quality limits set by physics. The conceptual design can be extended to a Compton gamma camera. The use of a pixelated CdTe detector with voxel sizes of 1 × 1 × 2 mm³ guarantees optimal energy and spatial resolution. However, the limited time resolution of semiconductor detectors makes it impossible to use Time Of Flight (TOF) with VIP PET. TOF is used in order to improve the signal-to-noise ratio (SNR) by using only the most probable portion of the Line-Of-Response (LOR) instead of its entire length. To overcome the limitation of CdTe time resolution, we present in this article a simulation study using β+-γ emitting isotopes with a Compton-PET scanner. When the β+ annihilates with an electron it produces two gammas which form a LOR in the PET scanner, while the additional gamma, when scattered in the scatter detector, provides a Compton cone that intersects with the aforementioned LOR. The intersection indicates, within a few mm of uncertainty along the LOR, the origin of the β+-γ decay. Hence, one can limit the part of the LOR used by the image reconstruction algorithm.

  16. On the performance of updating Stochastic Dynamic Programming policy using Ensemble Streamflow Prediction in a snow-covered region

    NASA Astrophysics Data System (ADS)

    Martin, A.; Pascal, C.; Leconte, R.

    2014-12-01

    Stochastic Dynamic Programming (SDP) is known to be an effective technique for finding the optimal operating policy of hydropower systems. In order to improve the performance of SDP, this project evaluates the impact of re-updating the policy at every time step using Ensemble Streamflow Prediction (ESP). We present a case study of the Kemano hydropower system on the Nechako River in British Columbia, Canada. Managed by Rio Tinto Alcan (RTA), this system is subject to large streamflow volumes in spring due to the substantial snowpack that accumulates during the winter season. Therefore, the operating policy should not only maximize production but also minimize the risk of flooding. The hydrological behavior of the system is simulated with CEQUEAU, a distributed and deterministic hydrological model developed by the Institut national de la recherche scientifique - Eau, Terre et Environnement (INRS-ETE) in Quebec, Canada. At each decision time step, CEQUEAU is used to generate ESP scenarios based on historical meteorological sequences and the current state of the hydrological model. These scenarios are fed into the SDP to optimize a new release policy for the next time steps. This routine is then repeated over the entire simulation period. Results are compared with those obtained by using SDP on historical inflow scenarios.
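
    The routine described above amounts to a rolling-horizon loop. The sketch below illustrates its shape with toy stand-ins for CEQUEAU and the SDP optimizer; every function, number, and the simple release rule here is illustrative, not RTA's or INRS-ETE's code:

      import random

      def esp_scenarios(state, n=30):
          # Stand-in for CEQUEAU runs driven by historical meteorological sequences.
          return [state["inflow"] * random.uniform(0.5, 1.5) for _ in range(n)]

      def sdp_release(scenarios, storage, target_storage=100.0):
          # Stand-in policy: release the expected inflow plus a fraction of any
          # storage excess, hedging production against flood risk.
          expected = sum(scenarios) / len(scenarios)
          return max(0.0, expected + 0.1 * (storage - target_storage))

      state = {"inflow": 50.0, "storage": 120.0}
      for t in range(12):                                     # decision time steps
          scenarios = esp_scenarios(state)                    # 1. regenerate the ESP ensemble
          release = sdp_release(scenarios, state["storage"])  # 2. re-optimize the policy
          inflow = random.gauss(50.0, 10.0)                   # 3. observe the actual inflow
          state["storage"] += inflow - release                # 4. advance the system state
          state["inflow"] = inflow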

  17. Payload training methodology study

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The results of the Payload Training Methodology Study (PTMS) are documented. Methods and procedures are defined for the development of payload training programs to be conducted at the Marshall Space Flight Center Payload Training Complex (PCT) for the Space Station Freedom program. The study outlines the overall training program concept as well as the six methodologies associated with the program implementation. The program concept outlines the entire payload training program from initial identification of training requirements to the development of detailed design specifications for simulators and instructional material. The following six methodologies are defined: (1) The Training and Simulation Needs Assessment Methodology; (2) The Simulation Approach Methodology; (3) The Simulation Definition Analysis Methodology; (4) The Simulator Requirements Standardization Methodology; (5) The Simulator Development Verification Methodology; and (6) The Simulator Validation Methodology.

  18. Distributed Dynamic Host Configuration Protocol (D2HCP)

    PubMed Central

    Villalba, Luis Javier García; Matesanz, Julián García; Orozco, Ana Lucila Sandoval; Díaz, José Duván Márquez

    2011-01-01

    Mobile Ad Hoc Networks (MANETs) are multihop wireless networks of mobile nodes without any fixed or preexisting infrastructure. The topology of these networks can change randomly due to the unpredictable mobility of nodes and their propagation characteristics. In most networks, including MANETs, each node needs a unique identifier to communicate. This work presents a distributed protocol for dynamic node IP address assignment in MANETs. Nodes of a MANET synchronize from time to time to maintain a record of IP address assignments in the entire network and detect any IP address leaks. The proposed stateful autoconfiguration scheme uses the OLSR proactive routing protocol for synchronization and guarantees unique IP addresses under a variety of network conditions, including message losses and network partitioning. Simulation results show that the protocol incurs low latency and communication overhead for IP address assignment. PMID:22163856

  19. Neutral buoyancy test evaluation of hardware and extravehicular activity procedures for on-orbit assembly of a 14 meter precision reflector

    NASA Technical Reports Server (NTRS)

    Heard, Walter L., Jr.; Lake, Mark S.

    1993-01-01

    A procedure that enables astronauts in extravehicular activity (EVA) to perform efficient on-orbit assembly of large paraboloidal precision reflectors is presented. The procedure and associated hardware are verified in simulated 0-g (neutral buoyancy) assembly tests of a 14 m diameter precision reflector mockup. The test article represents a precision reflector having a reflective surface which is segmented into 37 individual panels. The panels are supported on a doubly curved tetrahedral truss consisting of 315 struts. The entire truss and seven reflector panels were assembled in three hours and seven minutes by two pressure-suited test subjects. The average time to attach a panel was two minutes and three seconds. These efficient assembly times were achieved because all hardware and assembly procedures were designed to be compatible with EVA assembly capabilities.

  1. A novel toolpath force prediction algorithm using CAM volumetric data for optimizing robotic arthroplasty.

    PubMed

    Kianmajd, Babak; Carter, David; Soshi, Masakazu

    2016-10-01

    Robotic total hip arthroplasty is a procedure in which milling operations are performed on the femur to remove material for the insertion of a prosthetic implant. The robot performs the milling operation by following a sequential list of tool motions, also known as a toolpath, generated by computer-aided manufacturing (CAM) software. The purpose of this paper is to explain a new toolpath force prediction algorithm that predicts cutting forces, thereby improving the quality and safety of surgical systems. With a custom macro developed in the CAM system's native application programming interface, cutting contact patch volume was extracted from CAM simulations. A time-domain cutting force model was then developed through the use of a cutting force prediction algorithm. The second portion of the study validated the algorithm by machining a hip canal in simulated bone using a CNC machine. Average cutting forces were measured during machining using a dynamometer and compared to the values predicted from CAM simulation data using the proposed method. The results showed the predicted forces matched the measured forces in both magnitude and overall pattern shape. However, due to inconsistent motion control, the time duration of the forces was slightly distorted. Nevertheless, the algorithm effectively predicted the forces throughout an entire hip canal procedure. This method provides a fast and easy technique for predicting cutting forces during orthopedic milling by utilizing data within a CAM software.
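
    A hedged sketch of the kind of volume-to-force relation involved follows; this is a textbook specific-cutting-energy model standing in for the paper's algorithm, and the coefficient value is illustrative, not taken from the study:

      # Mean cutting force from CAM-extracted removed volume per tool move:
      # power = specific cutting energy x material removal rate; force = power / speed.
      def mean_cutting_force(removed_volume_mm3, move_time_s, cutting_speed_m_s,
                             specific_energy_j_mm3=0.05):  # illustrative value
          mrr = removed_volume_mm3 / move_time_s   # material removal rate, mm^3/s
          power_w = specific_energy_j_mm3 * mrr    # J/mm^3 * mm^3/s = W
          return power_w / cutting_speed_m_s       # W / (m/s) = N

      # Example: removing 20 mm^3 in 0.5 s at 1.0 m/s with U = 0.05 J/mm^3
      # gives mrr = 40 mm^3/s, P = 2 W, and a mean force of 2 N.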

  2. Hydrogen mobility in transition zone silicates

    NASA Astrophysics Data System (ADS)

    Caracas, R.; Panero, W. R.

    2016-12-01

    Hydrogen defects in mantle silicates adopt a variety of charge-balanced defects, including VMg''+2(H*), VSi''''+4(H*), and VSi'+(Mg+2H*). Constraining the defect mechanism experimentally can be quite difficult, as it relies almost entirely on vibrational spectroscopy, whose interpretation can often be controversial. Here we use a computational alternative: we study the above-mentioned defect mechanisms using molecular dynamics simulations based on density-functional theory, in the VASP implementation. We perform isokinetic NVT simulations over a 1500-2500 K temperature range using supercells containing 16 formula units of Mg2SiO4. Our results show that temperature has a tremendous effect on mobility. H is significantly more mobile when incorporated as VMg''+2H* defects than as hydrogarnet defects, and VMg''+2H* defects are more mobile in wadsleyite than in ringwoodite. This result is the opposite of the proton conductivity inferences of Yoshino et al. [2008] and Huang et al. [2006], as well as the observed increase in electrical conductivity with depth through the transition zone [e.g. Kuvshinov et al., 2005; Olsen, 1998]. Over the simulation time of several tens of picoseconds the H atoms travel across several lattice sites. However, along their paths they spend a considerable amount of time pinned at defect sites. The lowest mobility is for the VSi''''+4(H*) defect, where the H atoms remain inside the octahedron from which they replaced the Si.
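
    Mobility statements like these are typically quantified through the Einstein relation; a minimal post-processing sketch is given below (illustrative analysis code under simple assumptions, not the study's VASP setup; a production analysis would average the MSD over time origins):

      import numpy as np

      def diffusion_coefficient(positions, dt_fs):
          """positions: (n_frames, 3) unwrapped H coordinates in Angstrom;
          dt_fs: time between frames in fs. Returns D in cm^2/s from the
          long-time slope of the mean-square displacement (D = slope / 6)."""
          msd = np.sum((positions - positions[0])**2, axis=1)  # A^2
          t = np.arange(len(positions)) * dt_fs                # fs
          half = len(t) // 2                                   # fit the tail only
          slope = np.polyfit(t[half:], msd[half:], 1)[0]       # A^2/fs
          return slope / 6.0 * 0.1                             # 1 A^2/fs = 0.1 cm^2/s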

  3. Influence of Ship Emissions on Urban Air Quality: A Comprehensive Study Using Highly Time-Resolved Online Measurements and Numerical Simulation in Shanghai.

    PubMed

    Liu, Zhanmin; Lu, Xiaohui; Feng, Junlan; Fan, Qianzhu; Zhang, Yan; Yang, Xin

    2017-01-03

    Shanghai has become one of the world's major international shipping centers. In this study, multiyear measurements and a high-resolution air quality model with an hourly ship emission inventory were combined to determine the influence of ship emissions on urban Shanghai. The aerosol time-of-flight mass spectrometer (ATOFMS) measurements were carried out at an urban site from April 2009 to January 2013. During the entire sampling time, most of the half-hourly averaged number fractions of primary ship-emitted particles varied between 1.0% and 10.0%. However, the number fraction could reach up to 50% during ship plume cases. Ship-plume-influenced periods usually occurred in spring and summer. The simulation of the Weather Research and Forecasting/Community Multiscale Air Quality model (WRF/CMAQ) with an hourly ship emission inventory provided highly time-resolved concentrations of ship-related air pollutants during a ship plume case. It showed that ships could contribute 20-30% (2-7 μg/m³) of the total PM2.5 within tens of kilometers of coastal and riverside Shanghai during ship-plume-influenced periods. Our results show that ship emissions contribute substantially to air pollution in urban Shanghai. Control measures for ship emissions should be taken, considering their negative environmental and human health effects.

  4. Assessing sequential data assimilation techniques for integrating GRACE data into a hydrological model

    NASA Astrophysics Data System (ADS)

    Khaki, M.; Hoteit, I.; Kuhn, M.; Awange, J.; Forootan, E.; van Dijk, A. I. J. M.; Schumacher, M.; Pattiaratchi, C.

    2017-09-01

    The time-variable terrestrial water storage (TWS) products from the Gravity Recovery And Climate Experiment (GRACE) have been increasingly used in recent years to improve the simulation of hydrological models by applying data assimilation techniques. In this study, for the first time, we assess the performance of the most popular sequential data assimilation techniques for integrating GRACE TWS into the World-Wide Water Resources Assessment (W3RA) model. We implement and test stochastic and deterministic ensemble-based Kalman filters (EnKF), as well as Particle filters (PF) using two different resampling approaches, Multinomial Resampling and Systematic Resampling. These choices provide various opportunities for weighting observations and model simulations during the assimilation and for accounting for error distributions. In particular, the deterministic EnKF is tested to avoid perturbing observations before assimilation (as is done in an ordinary EnKF). Gaussian-based random updates in the EnKF approaches likely do not fully represent the statistical properties of the model simulations and TWS observations. Therefore, the fully non-Gaussian PF is also applied to estimate more realistic updates. Monthly GRACE TWS are assimilated into W3RA covering all of Australia. To evaluate the filters' performance and analyze their impact on model simulations, their estimates are validated against independent in-situ measurements. Our results indicate that all implemented filters improve the estimation of water storage simulations of W3RA. The best results are obtained using two versions of the deterministic EnKF, the Square Root Analysis (SQRA) scheme and the Ensemble Square Root Filter (EnSRF), which reduce the model's groundwater estimation errors by 34% and 31%, respectively, compared to a model run without assimilation. Applying the PF with Systematic Resampling successfully decreases the model estimation error by 23%.
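
    For orientation, the stochastic EnKF analysis step mentioned above has the textbook form sketched below: a generic implementation under simple assumptions (uncorrelated observation errors and a linear observation operator), not the study's W3RA code. The deterministic variants tested in the paper (SQRA, EnSRF) replace the perturbed-observation step with a square-root transform of the ensemble anomalies.

      import numpy as np

      def enkf_update(ensemble, obs, H, obs_err_std, rng=None):
          """ensemble: (n_state, n_members); obs: (n_obs,); H: (n_obs, n_state).
          Returns the analysis ensemble, using perturbed observations."""
          rng = rng or np.random.default_rng()
          n_obs, n_mem = len(obs), ensemble.shape[1]
          hx = H @ ensemble                                  # members in obs space
          xp = ensemble - ensemble.mean(axis=1, keepdims=True)
          hxp = hx - hx.mean(axis=1, keepdims=True)
          p_hh = hxp @ hxp.T / (n_mem - 1) + obs_err_std**2 * np.eye(n_obs)
          p_xh = xp @ hxp.T / (n_mem - 1)
          gain = p_xh @ np.linalg.inv(p_hh)                  # Kalman gain
          obs_pert = obs[:, None] + rng.normal(0.0, obs_err_std, (n_obs, n_mem))
          return ensemble + gain @ (obs_pert - hx)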

  5. Replica exchange simulation of reversible folding/unfolding of the Trp-cage miniprotein in explicit solvent: on the structure and possible role of internal water.

    PubMed

    Paschek, Dietmar; Nymeyer, Hugh; García, Angel E

    2007-03-01

    We simulate the folding/unfolding equilibrium of the 20-residue miniprotein Trp-cage. We use replica exchange molecular dynamics simulations of the AMBER94 atomic-detail model of the protein explicitly solvated by water, starting from a completely unfolded configuration. We employ a total of 40 replicas, covering the temperature range between 280 and 538 K. Individual simulation lengths of 100 ns sum up to a total simulation time of about 4 μs. Without any bias, we observe the folding of the protein into the native state with an unfolding-transition temperature of about 440 K. The native state is characterized by a distribution of root mean square distances (RMSD) from the NMR data that peaks at 1.8 Å and extends down to 0.4 Å. We show that equilibration times of about 40 ns are required to yield convergence. A folded configuration in the entire extended ensemble is found to have a lifetime of about 31 ns. In a clamp-like motion, the Trp-cage opens up during thermal denaturation. In line with fluorescence quenching experiments, the Trp-residue sidechain gets hydrated when the protein opens up, roughly doubling the number of water molecules in the first solvation shell. We find the helical propensity of the helical domain of Trp-cage rather well preserved even at very high temperatures. In the folded state, we can identify states with one and two buried internal water molecules interconnecting parts of the Trp-cage molecule by hydrogen bonds. The loss of hydrogen bonds of these buried water molecules with increasing temperature is likely to destabilize the folded state at elevated temperatures.
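
    Replica exchange across such a temperature ladder swaps neighbouring replicas with the standard Metropolis criterion (a generic REMD relation, quoted here for context):

      % Acceptance probability for exchanging replicas i and j at inverse
      % temperatures \beta = 1/(k_B T), with potential energies E_i, E_j:
      P_{\mathrm{acc}} = \min\left\{1,\ \exp\!\left[(\beta_i-\beta_j)(E_i-E_j)\right]\right\}
      % The 40 replicas spanning 280-538 K are spaced so that neighbouring
      % energy distributions overlap enough for frequent accepted swaps.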

  6. Cloud Properties Simulated by a Single-Column Model. Part II: Evaluation of Cumulus Detrainment and Ice-phase Microphysics Using a Cloud Resolving Model

    NASA Technical Reports Server (NTRS)

    Luo, Yali; Krueger, Steven K.; Xu, Kuan-Man

    2005-01-01

    This paper is the second in a series in which kilometer-scale-resolving observations from the Atmospheric Radiation Measurement program and a cloud-resolving model (CRM) are used to evaluate the single-column model (SCM) version of the National Centers for Environmental Prediction Global Forecast System model. Part I demonstrated that kilometer-scale cirrus properties simulated by the SCM significantly differ from the cloud radar observations while the CRM simulation reproduced most of the cirrus properties as revealed by the observations. The present study describes an evaluation, through a comparison with the CRM, of the SCM's representation of detrainment from deep cumulus and ice-phase microphysics in an effort to better understand the findings of Part I. It is found that detrainment occurs too infrequently at a single level at a time in the SCM, although the detrainment rate averaged over the entire simulation period is somewhat comparable to that of the CRM simulation. Relatively too much detrained ice is sublimated when first detrained. Snow falls over too deep of a layer due to the assumption that snow source and sink terms exactly balance within one time step in the SCM. These characteristics in the SCM parameterizations may explain many of the differences in the cirrus properties between the SCM and the observations (or between the SCM and the CRM). A possible improvement for the SCM consists of the inclusion of multiple cumulus cloud types as in the original Arakawa-Schubert scheme, prognostically determining the stratiform cloud fraction and snow mixing ratio. This would allow better representation of the detrainment from deep convection, better coupling of the volume of detrained air with cloud fraction, and better representation of snow field.

  7. Development of a simulation of the surficial groundwater system for the CONUS

    NASA Astrophysics Data System (ADS)

    Zell, W.; Sanford, W. E.

    2016-12-01

    Water resource and environmental managers across the country face a variety of questions involving groundwater availability and/or groundwater transport pathways. Emerging management questions require prediction of groundwater response to changing climate regimes (e.g., how drought-induced water-table recession may degrade near-stream vegetation and result in increased wildfire risks), while existing questions can require identification of current groundwater contributions to surface water (e.g., groundwater linkages between landscape contaminant inputs and receiving streams may help explain in-stream phenomena such as fish intersex). At present, few national-coverage simulation tools exist to help characterize groundwater contributions to receiving streams and predict potential changes in base-flow regimes under changing climate conditions. We will describe the Phase 1 development of a simulation of the water table and shallow groundwater system for the entire CONUS. We use national-scale datasets such as the National Recharge Map and the Map Database for Surficial Materials in the CONUS to develop groundwater flow (MODFLOW) and transport (MODPATH) models that are calibrated against groundwater level and stream elevation data from NWIS and NHD, respectively. Phase 1 includes the development of a national transmissivity map for the surficial groundwater system and examines the impact of model-grid resolution on the simulated steady-state discharge network (and associated recharge areas) and base-flow travel time distributions for different HUC scales. In the course of developing the transmissivity map we show that transmissivity in fractured bedrock systems is dependent on depth to water. Subsequent phases of this work will simulate water table changes at a monthly time step (using MODIS-dependent recharge estimates) and serve as a critical complement to surface-water-focused USGS efforts to provide national coverage hydrologic modeling tools.

  8. Ecological DYnamics Simulation Model - Light (EDYS-L): User’s Guide Version 4.6.4

    DTIC Science & Technology

    2011-08-01

    ...dead), utilization potential, and competitive success for each specified species (e.g., insects, rodents, native ungulates, livestock, predators) ... available disturbances. The default native herbivores are insects, rabbits, and deer. While multiple species occur within each category, and ... native herbivores (insects, rabbits, and deer) is simulated as a uniform consumption rate across the entire landscape. The user has the choice of ...

  9. First results from the IllustrisTNG simulations: the galaxy colour bimodality

    NASA Astrophysics Data System (ADS)

    Nelson, Dylan; Pillepich, Annalisa; Springel, Volker; Weinberger, Rainer; Hernquist, Lars; Pakmor, Rüdiger; Genel, Shy; Torrey, Paul; Vogelsberger, Mark; Kauffmann, Guinevere; Marinacci, Federico; Naiman, Jill

    2018-03-01

    We introduce the first two simulations of the IllustrisTNG project, a next generation of cosmological magnetohydrodynamical simulations, focusing on the optical colours of galaxies. We explore TNG100, a rerun of the original Illustris box, and TNG300, which includes 2 × 2500^3 resolution elements in a volume 20 times larger. Here, we present first results on the galaxy colour bimodality at low redshift. Accounting for the attenuation of stellar light by dust, we compare the simulated (g - r) colours of 10^9 < M⋆/M⊙ < 10^12.5 galaxies to the observed distribution from the Sloan Digital Sky Survey. We find a striking improvement with respect to the original Illustris simulation, as well as excellent quantitative agreement with the observations, with a sharp transition in median colour from blue to red at a characteristic M⋆ ~ 10^10.5 M⊙. Investigating the build-up of the colour-mass plane and the formation of the red sequence, we demonstrate that the primary driver of galaxy colour transition is supermassive black hole feedback in its low accretion state. Across the entire population the median colour transition time-scale Δt_green is ~1.6 Gyr, a value which drops for increasingly massive galaxies. We find signatures of the physical process of quenching: at fixed stellar mass, redder galaxies have lower star formation rates, gas fractions, and gas metallicities; their stellar populations are also older and their large-scale interstellar magnetic fields weaker than in bluer galaxies. Finally, we measure the amount of stellar mass growth on the red sequence. Galaxies with M⋆ > 10^11 M⊙ which redden at z < 1 accumulate on average ~25 per cent of their final z = 0 mass post-reddening; at the same time, ~18 per cent of such massive galaxies acquire half or more of their final stellar mass while on the red sequence.

  10. Disruption of Giant Molecular Clouds by Massive Star Clusters

    NASA Astrophysics Data System (ADS)

    Harper-Clark, Elizabeth

    The lifetime of a Giant Molecular Cloud (GMC) and the total mass of stars that form within it are crucial to the understanding of star formation rates across a whole galaxy. In particular, the stars within a GMC may dictate its disruption and the quenching of further star formation. Indeed, observations show that the Milky Way contains GMCs with extensive expanding bubbles while the most massive stars are still alive. Simulating entire GMCs is challenging, due to the large variety of physics that needs to be included, and the computational power required to accurately simulate a GMC over tens of millions of years. Using the radiative-magneto-hydrodynamic code Enzo, I have run many simulations of GMCs. I obtain robust results for the fraction of gas converted into stars and the lifetimes of the GMCs: (A) In simulations with no stellar outputs (or "feedback"), clusters form at a rate of 30% of GMC mass per free-fall time; the GMCs were not disrupted but contained forming stars. (B) When ionization gas pressure or radiation pressure was included in the simulations, both separately and together, star formation was quenched at between 5% and 21% of the original GMC mass. The clouds were fully disrupted within two dynamical times after the first cluster formed. The radiation pressure contributed the most to the disruption of the GMC and fully quenched star formation even without ionization. (C) Simulations that included supernovae showed that they are not dynamically important to GMC disruption and have only minor effects on subsequent star formation. (D) The inclusion of a magnetic field of a few microgauss across the cloud slightly reduced the star formation rate but accelerated GMC disruption by reducing bubble shell disruption and leaking. These simulations show that newborn stars quench further star formation and completely disrupt the parent GMC. The low star formation rate and the short lifetimes of GMCs shown here can explain the low star formation rate across the whole galaxy.
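
    The per-free-fall-time rate quoted above refers to the standard gravitational free-fall time; the definitions below are stated for context (the 30% figure is the simulation result, not derived here):

      % Free-fall time of a cloud of mean density \rho, and the implied star
      % formation rate at an efficiency \epsilon_{ff} per free-fall time:
      t_{\mathrm{ff}} = \sqrt{\frac{3\pi}{32\,G\,\rho}}, \qquad
      \dot{M}_{*} \approx \epsilon_{\mathrm{ff}}\,\frac{M_{\mathrm{GMC}}}{t_{\mathrm{ff}}}
      % The no-feedback runs above correspond to \epsilon_{ff} ~ 0.3, while
      % ionization and radiation pressure cap the integrated efficiency at 5-21%.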

  11. Single-shot T2 mapping using overlapping-echo detachment planar imaging and a deep convolutional neural network.

    PubMed

    Cai, Congbo; Wang, Chao; Zeng, Yiqing; Cai, Shuhui; Liang, Dong; Wu, Yawen; Chen, Zhong; Ding, Xinghao; Zhong, Jianhui

    2018-04-24

    An end-to-end deep convolutional neural network (CNN) based on a deep residual network (ResNet) was proposed to efficiently reconstruct reliable T2 mapping from single-shot overlapping-echo detachment (OLED) planar imaging. The training dataset was obtained from simulations carried out with the SPROM (Simulation with PRoduct Operator Matrix) software developed by our group. The relationship between the original OLED image containing two echo signals and the corresponding T2 mapping was learned by ResNet training. After the ResNet was trained, it was applied to reconstruct the T2 mapping from simulation and in vivo human brain data. Although the ResNet was trained entirely on simulated data, the trained network generalized well to real human brain data. The results from simulation and in vivo human brain experiments show that the proposed method significantly outperforms the echo-detachment-based method. Reliable T2 mapping with higher accuracy is achieved within 30 ms after the network has been trained, while the echo-detachment-based OLED reconstruction method took approximately 2 min. The proposed method will facilitate real-time dynamic and quantitative MR imaging via the OLED sequence, and deep convolutional neural networks have the potential to reconstruct maps from complex MRI sequences efficiently. © 2018 International Society for Magnetic Resonance in Medicine.
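
    The building block that such ResNet-style mappers stack is the residual unit; below is a generic sketch in PyTorch, an illustration of the idea rather than the paper's architecture, channel counts, or trained weights:

      import torch
      import torch.nn as nn

      class ResidualBlock(nn.Module):
          """y = relu(x + F(x)): the identity shortcut lets deep stacks train."""
          def __init__(self, channels=64):
              super().__init__()
              self.body = nn.Sequential(
                  nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                  nn.ReLU(inplace=True),
                  nn.Conv2d(channels, channels, kernel_size=3, padding=1),
              )

          def forward(self, x):
              return torch.relu(x + self.body(x))

      # A T2 mapper in this spirit might place an input conv (2 channels for the
      # two overlapping echoes) and an output conv (1 channel for the T2 map)
      # around a stack of such blocks.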

  12. Finite element simulation and experimental verification of steel cord extraction of steel cord conveyor belt splice

    NASA Astrophysics Data System (ADS)

    Li, X. G.; Long, X. Y.; Jiang, H. Q.; Long, H. B.

    2018-05-01

    The splice is the weakest part of the entire steel cord conveyor belt, and steel cord twitch faults occur there frequently. If such a fault is not dealt with promptly and accurately, broken-belt accidents can occur, seriously affecting production safety. In this paper, we investigate steel cord pullout from the steel cord conveyor belt splice using the ABAQUS software. We modeled a belt of strength grade ST630, the same type specification as the experimental sample. The finite element model consists of rubber, steel cord, and failure units, where the failure units are used to simulate the bonding relationship between the steel cord and the rubber. The Mooney-Rivlin hyperelastic model for rubber was employed in the numerical simulations. The pullout forces for a single 50.0 mm steel cord, for a single steel cord with cords on both sides, and for double steel cords with cords on both sides at the splice were numerically computed, and typical results were validated against experiment. The relative error between the simulation and experimental results is within 10%, so the simulation model can be considered reliable. A new method is thus provided for studying the steel cord twitch fault of the steel cord conveyor belt splice.
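
    The Mooney-Rivlin model referenced above is defined by a two-parameter strain-energy density (standard form, given for context; the calibrated constants for this belt rubber are not quoted in the abstract):

      % Two-parameter Mooney-Rivlin strain-energy density for (nearly)
      % incompressible rubber, in terms of the first two strain invariants:
      W = C_{10}\,(\bar{I}_1 - 3) + C_{01}\,(\bar{I}_2 - 3)
      % C_{10}, C_{01}: material constants fitted to rubber test data.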

  13. Simulations of turbulent asymptotic suction boundary layers

    NASA Astrophysics Data System (ADS)

    Bobke, Alexandra; Örlü, Ramis; Schlatter, Philipp

    2016-02-01

    A series of large-eddy simulations of a turbulent asymptotic suction boundary layer (TASBL) was performed in a periodic domain, in which uniform suction was applied over a flat plate. Three Reynolds numbers (defined as the ratio of free-stream to suction velocity) of Re = 333, 400 and 500 and a variety of domain sizes were considered in temporal simulations in order to investigate the turbulence statistics, the importance of the computational domain size, the arising flow structures, as well as the temporal development length required to achieve the asymptotic state. The effect of these two important parameters was assessed in terms of their influence on integral quantities, mean velocity, Reynolds stresses, higher-order statistics, amplitude modulation, and spectral maps. While the near-wall region up to the buffer region appears to scale irrespective of Re and domain size, the parameters of the logarithmic law (i.e., the von Kármán and additive coefficients) decrease with increasing Re, while the wake strength decreases with increasing spanwise domain size and vanishes entirely once the spanwise domain size exceeds approximately two boundary-layer thicknesses, irrespective of Re. The wake strength also reduces with increasing simulation time. The asymptotic state of the TASBL is characterised by surprisingly large friction Reynolds numbers and inherits features of wall turbulence at nominally high Re. Compared to a turbulent boundary layer (TBL) or a channel flow without suction, the components of the Reynolds-stress tensor are overall reduced, but exhibit a logarithmic increase with decreasing suction rates, i.e. increasing Re. At the same time, the anisotropy is increased compared to canonical wall-bounded flows without suction. The reduced amplitudes in turbulence quantities are discussed in light of the amplitude modulation due to the weakened larger outer structures. The inner peak in the spectral maps is shifted to higher wavelengths and the strength of the outer peak is much less than for TBLs. An additional spatial simulation was performed in order to relate the simulation results to wind tunnel experiments; in accordance with the results from the temporal simulation, it indicates that a true TASBL is practically impossible to realise in a wind tunnel. Our unique data set agrees qualitatively with existing literature results for both numerical and experimental studies, and at the same time sheds light on why the asymptotic state could not be established in a wind tunnel experiment, namely that experimental set-ups correspond to simulations with too-small boxes or insufficient development times.
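
    The logarithmic-law parameters discussed above enter through the classical inner scaling (a standard relation, restated here for reference):

      % Mean velocity in the logarithmic layer, in viscous units
      % (U^+ = U/u_\tau, y^+ = y\,u_\tau/\nu):
      U^{+} = \frac{1}{\kappa}\,\ln y^{+} + B
      % \kappa: von Karman coefficient; B: additive coefficient. The
      % simulations above find both decreasing with increasing Re.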

  14. Simulation modeling of high-throughput cryopreservation of aquatic germplasm: a case study of blue catfish sperm processing

    PubMed Central

    Hu, E; Liao, T. W.; Tiersch, T. R.

    2013-01-01

    Emerging commercial-level technology for aquatic sperm cryopreservation has not been modeled by computer simulation. Commercially available software (ARENA; Rockwell Automation, Inc., Milwaukee, WI) was applied to simulate high-throughput sperm cryopreservation of blue catfish (Ictalurus furcatus) based on existing processing capabilities. The goal was to develop a simulation model suitable for production planning and decision making. The objectives were to: 1) predict the maximum output for an 8-hr workday; 2) analyze the bottlenecks within the process; and 3) estimate operational costs when run for daily maximum output. High-throughput cryopreservation was divided into six major steps, modeled with times, resources and logic structures. The modeled production processed 18 fish and produced 1164 ± 33 (mean ± SD) 0.5-ml straws containing one billion cryopreserved sperm. Two such production lines could support all hybrid catfish production in the US, and 15 such lines could support the entire channel catfish industry if it were to adopt artificial spawning techniques. Evaluations were made to improve efficiency, such as increasing scale, optimizing resources, and eliminating underutilized equipment. This model can serve as a template for other aquatic species and assist decision making in industrial application of aquatic germplasm in aquaculture, stock enhancement, conservation, and biomedical model fishes. PMID:25580079
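
    The ARENA model itself is proprietary, but the structure it describes (timed steps, shared resources, queueing logic) maps directly onto discrete-event simulation. A minimal sketch with the simpy package follows; every duration, capacity and straw count here is invented for illustration, not taken from the paper's measured process data.

      import random
      import simpy

      STRAWS_PER_FISH = 60                           # assumed batch size per fish
      produced = 0

      def process_fish(env, filler):
          global produced
          yield env.timeout(random.uniform(5, 10))   # collect/dilute (assumed, min)
          with filler.request() as req:              # queue at the filling station
              yield req
              yield env.timeout(2.0)                 # fill straws (assumed, min)
          produced += STRAWS_PER_FISH

      def arrivals(env, filler):
          for _ in range(18):                        # 18 fish, as in the modeled run
              env.process(process_fish(env, filler))
              yield env.timeout(random.uniform(10, 20))  # sequential arrivals (assumed)

      random.seed(1)
      env = simpy.Environment()
      filler = simpy.Resource(env, capacity=1)       # single filler = candidate bottleneck
      env.process(arrivals(env, filler))
      env.run(until=8 * 60)                          # one 8-hr workday, in minutes
      print(f"straws produced: {produced}")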

  15. Impacts of insect disturbance on the structure, composition, and functioning of oak-pine forests

    NASA Astrophysics Data System (ADS)

    Medvigy, D.; Schafer, K. V.; Clark, K. L.

    2011-12-01

    Episodic disturbance is an essential feature of terrestrial ecosystems, and strongly modulates their structure, composition, and functioning. However, dynamic global vegetation models that are commonly used to make ecosystem and terrestrial carbon budget predictions rarely have an explicit representation of disturbance. One reason why disturbance is seldom included is that disturbance tends to operate on spatial scales that are much smaller than typical model resolutions. In response to this problem, the Ecosystem Demography model 2 (ED2) was developed as a way of tracking the fine-scale heterogeneity arising from disturbances. In this study, we used ED2 to simulate an oak-pine forest that experiences episodic defoliation by gypsy moth (Lymantria dispar L.). The model was carefully calibrated against site-level data, and then used to simulate changes in ecosystem composition, structure, and functioning on century time scales. Compared to simulations that include gypsy moth defoliation, we show that simulations that ignore defoliation events lead to much larger ecosystem carbon stores and a larger fraction of deciduous trees relative to evergreen trees. Furthermore, we find that it is essential to preserve the fine-scale nature of the disturbance. Attempts to "smooth out" the defoliation event over an entire grid cell led to large biases in ecosystem structure and functioning.

  16. Experimental verification of a thermal equivalent circuit dynamic model on an extended range electric vehicle battery pack

    NASA Astrophysics Data System (ADS)

    Ramotar, Lokendra; Rohrauer, Greg L.; Filion, Ryan; MacDonald, Kathryn

    2017-03-01

    The development of a dynamic thermal battery model for hybrid and electric vehicles is realized. A thermal equivalent circuit model is created which aims to capture and understand the heat propagation from the cells through the entire pack and to the environment using a production vehicle battery pack for model validation. The inclusion of production hardware and the liquid battery thermal management system components into the model considers physical and geometric properties to calculate thermal resistances of components (conduction, convection and radiation) along with their associated heat capacity. Various heat sources/sinks comprise the remaining model elements. Analog equivalent circuit simulations using PSpice are compared to experimental results to validate internal temperature nodes and heat rates measured through various elements, which are then employed to refine the model further. Agreement with experimental results indicates the proposed method allows for a comprehensive real-time battery pack analysis at little computational expense when compared to other types of computer based simulations. Elevated road and ambient conditions in Mesa, Arizona are simulated on a parked vehicle with varying quiescent cooling rates to examine the effect on the diurnal battery temperature for longer term static exposure. A typical daily driving schedule is also simulated and examined.
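
    A thermal equivalent circuit of this kind reduces to a small system of ODEs: each node carries a heat capacity, each link a thermal resistance. The following two-node sketch (cell -> case -> ambient) uses forward-Euler integration; the R, C and heat-generation numbers are assumptions for illustration, not the validated pack parameters.

      # Two-node thermal RC network integrated with forward Euler.
      R_cell_case, R_case_amb = 2.0, 1.5     # thermal resistances, K/W (assumed)
      C_cell, C_case = 800.0, 2000.0         # heat capacities, J/K (assumed)
      q_gen = 5.0                            # cell heat generation, W (assumed)
      T_amb = 40.0                           # hot ambient, deg C

      dt = 1.0                               # time step, s (well below RC constants)
      T_cell = T_case = T_amb
      for _ in range(int(4 * 3600 / dt)):    # four hours of soak
          q_cc = (T_cell - T_case) / R_cell_case   # W flowing cell -> case
          q_ca = (T_case - T_amb) / R_case_amb     # W flowing case -> ambient
          T_cell += dt * (q_gen - q_cc) / C_cell
          T_case += dt * (q_cc - q_ca) / C_case

      print(f"after 4 h: cell = {T_cell:.1f} C, case = {T_case:.1f} C")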

  17. Tool Support for Parametric Analysis of Large Software Simulation Systems

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Gundy-Burlet, Karen; Pasareanu, Corina; Menzies, Tim; Barrett, Tony

    2008-01-01

    The analysis of large and complex parameterized software systems, e.g., systems simulation in aerospace, is very complicated and time-consuming due to the large parameter space, and the complex, highly coupled nonlinear nature of the different system components. Thus, such systems are generally validated only in regions local to anticipated operating points rather than through characterization of the entire feasible operational envelope of the system. We have addressed the factors deterring such an analysis with a tool to support envelope assessment: we utilize a combination of advanced Monte Carlo generation with n-factor combinatorial parameter variations to limit the number of cases, but still explore important interactions in the parameter space in a systematic fashion. Additional test-cases, automatically generated from models (e.g., UML, Simulink, Stateflow) improve the coverage. The distributed test runs of the software system produce vast amounts of data, making manual analysis impossible. Our tool automatically analyzes the generated data through a combination of unsupervised Bayesian clustering techniques (AutoBayes) and supervised learning of critical parameter ranges using the treatment learner TAR3. The tool has been developed around the Trick simulation environment, which is widely used within NASA. We will present this tool with a GN&C (Guidance, Navigation and Control) simulation of a small satellite system.
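
    The n-factor combinatorial idea can be shown compactly for n = 2: instead of running the full cross-product of parameter levels, generate a suite that covers every pair of parameter values at least once. A greedy Python sketch with hypothetical parameter names and levels:

      import itertools
      import random

      params = {                      # hypothetical simulation parameters
          "mass":   [0.9, 1.0, 1.1],
          "thrust": ["low", "nominal", "high"],
          "sensor": ["ok", "noisy"],
      }
      names = list(params)
      pairs = list(itertools.combinations(names, 2))

      # every (parameter-pair, value-pair) that a covering suite must hit
      uncovered = {(a, va, b, vb) for a, b in pairs
                   for va, vb in itertools.product(params[a], params[b])}

      random.seed(0)
      suite = []
      while uncovered:
          best, gain = None, -1      # keep the candidate covering most remaining pairs
          for _ in range(50):
              cand = {n: random.choice(params[n]) for n in names}
              g = sum((a, cand[a], b, cand[b]) in uncovered for a, b in pairs)
              if g > gain:
                  best, gain = cand, g
          if gain == 0:              # safeguard: build one from an uncovered pair
              a, va, b, vb = next(iter(uncovered))
              best[a], best[b] = va, vb
          suite.append(best)
          for a, b in pairs:
              uncovered.discard((a, best[a], b, best[b]))

      print(f"{len(suite)} test cases cover all pairs; exhaustive would need 18")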

  18. High-performance biocomputing for simulating the spread of contagion over large contact networks

    PubMed Central

    2012-01-01

    Background Many important biological problems can be modeled as contagion diffusion processes over interaction networks. This article shows how the EpiSimdemics interaction-based simulation system can be applied to the general contagion diffusion problem. Two specific problems, computational epidemiology and human immune system modeling, are given as examples. We then show how the graphics processing unit (GPU) within each compute node of a cluster can effectively be used to speed-up the execution of these types of problems. Results We show that a single GPU can accelerate the EpiSimdemics computation kernel by a factor of 6 and the entire application by a factor of 3.3, compared to the execution time on a single core. When 8 CPU cores and 2 GPU devices are utilized, the speed-up of the computational kernel increases to 9.5. When combined with effective techniques for inter-node communication, excellent scalability can be achieved without significant loss of accuracy in the results. Conclusions We show that interaction-based simulation systems can be used to model disparate and highly relevant problems in biology. We also show that offloading some of the work to GPUs in distributed interaction-based simulations can be an effective way to achieve increased intra-node efficiency. PMID:22537298
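
    At its core, the contagion kernel being accelerated is a per-timestep sweep over infectious individuals and their contacts. Below is a serial Python sketch of a discrete-time SIR process on a synthetic contact network; the network and the rates are stand-ins, not EpiSimdemics' person-location model.

      import random

      random.seed(42)
      N, BETA, GAMMA, STEPS = 500, 0.05, 0.1, 100
      contacts = {i: random.sample(range(N), 8) for i in range(N)}  # synthetic graph

      state = ["S"] * N
      for seed in random.sample(range(N), 5):       # a few initial infections
          state[seed] = "I"

      for _ in range(STEPS):
          new_state = state[:]
          for i in range(N):
              if state[i] != "I":
                  continue
              if random.random() < GAMMA:           # recovery
                  new_state[i] = "R"
              for j in contacts[i]:                 # one transmission try per contact
                  if state[j] == "S" and random.random() < BETA:
                      new_state[j] = "I"
          state = new_state

      print("final counts:", {s: state.count(s) for s in "SIR"})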

  19. Stability analysis for a multi-camera photogrammetric system.

    PubMed

    Habib, Ayman; Detchev, Ivan; Kwak, Eunju

    2014-08-18

    Consumer-grade digital cameras suffer from geometrical instability that may cause problems when used in photogrammetric applications. This paper provides a comprehensive review of this issue of interior orientation parameter variation over time, explains the common ways used for coping with the issue, and describes the existing methods for performing stability analysis for a single camera. The paper then points out the lack of coverage of stability analysis for multi-camera systems, suggests a modification of the collinearity model to be used for the calibration of an entire photogrammetric system, and proposes three methods for system stability analysis. The proposed methods explore the impact of the changes in interior orientation and relative orientation/mounting parameters on the reconstruction process. Rather than relying on ground truth in real datasets to check the system calibration stability, the proposed methods are simulation-based. Experiment results are shown, where a multi-camera photogrammetric system was calibrated three times, and stability analysis was performed on the system calibration parameters from the three sessions. The proposed simulation-based methods provided results that were compatible with a real-data based approach for evaluating the impact of changes in the system calibration parameters on the three-dimensional reconstruction.
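
    For readers unfamiliar with the collinearity model the paper builds on, the standard single-camera form projects an object point through the perspective centre into image coordinates given the interior (f, xp, yp) and exterior (R, C) orientation. A minimal sketch with illustrative numbers; sign conventions differ between photogrammetric texts.

      import numpy as np

      def project(X, R, C, f, xp=0.0, yp=0.0):
          """Collinearity equations: image coords of object point X (3-vector)."""
          d = R @ (X - C)                  # object point in the camera frame
          x = xp - f * d[0] / d[2]
          y = yp - f * d[1] / d[2]
          return np.array([x, y])

      R = np.eye(3)                        # camera aligned with object axes (toy case)
      C = np.array([0.0, 0.0, -5.0])       # perspective centre, 5 m from the plane
      print(project(np.array([0.2, 0.1, 0.0]), R, C, f=0.035))  # f = 35 mm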

  20. Numerical optimization of the ramp-down phase with the RAPTOR code

    NASA Astrophysics Data System (ADS)

    Teplukhina, Anna; Sauter, Olivier; Felici, Federico; The Tcv Team; The ASDEX-Upgrade Team; The Eurofusion Mst1 Team

    2017-10-01

    The ramp-down optimization goal in this work is defined as the fastest possible decrease of the plasma current while avoiding any disruptions caused by reaching physical or technical limits. Numerical simulations and preliminary experiments on TCV and AUG have shown that a fast decrease of the plasma elongation and an adequate timing of the H-L transition during the current ramp-down can help to avoid reaching high values of the plasma internal inductance. The RAPTOR code (F. Felici et al., 2012 PPCF 54; F. Felici, 2011 EPFL PhD thesis), developed for real-time plasma control, has been used to solve the optimization problem. Recently the transport model has been extended to include the ion temperature and electron density transport equations in addition to the electron temperature and current density transport equations, increasing the physical applications of the code. Gradient-based models for the transport coefficients (O. Sauter et al., 2014 PPCF 21; D. Kim et al., 2016 PPCF 58) have been implemented in RAPTOR and tested during this work. Simulations of entire AUG and TCV plasma discharges will be presented. See the author list of S. Coda et al., Nucl. Fusion 57 2017 102011.

  1. Efficiency Benefits Using the Terminal Area Precision Scheduling and Spacing System

    NASA Technical Reports Server (NTRS)

    Thipphavong, Jane; Swenson, Harry N.; Lin, Paul; Seo, Anthony Y.; Bagasol, Leonard N.

    2011-01-01

    NASA has developed a capability for terminal area precision scheduling and spacing (TAPSS) to increase the use of fuel-efficient arrival procedures during periods of traffic congestion at a high-density airport. Sustained use of fuel-efficient procedures throughout the entire arrival phase of flight reduces overall fuel burn, greenhouse gas emissions and noise pollution. The TAPSS system is a 4D trajectory-based strategic planning and control tool that computes schedules and sequences for arrivals to facilitate optimal profile descents. This paper focuses on quantifying the efficiency benefits associated with using the TAPSS system, measured by reduction of level segments during aircraft descent and flight distance and time savings. The TAPSS system was tested in a series of human-in-the-loop simulations and compared to current procedures. Compared to the current use of the TMA system, simulation results indicate a reduction of total level segment distance by 50% and flight distance and time savings by 7% in the arrival portion of flight (200 nm from the airport). The TAPSS system resulted in aircraft maintaining continuous descent operations longer and with more precision, both achieved under heavy traffic demand levels.

  2. Theoretical and experimental investigations of coincidences in Poisson distributed pulse trains and spectral distortion caused by pulse pileup

    NASA Astrophysics Data System (ADS)

    Bristow, Quentin

    1990-03-01

    The occurrence rates of pulse strings, or sequences of pulses with interarrival times less than the resolving time of the pulse-height analysis system used to acquire spectra, are derived from theoretical considerations. Logic circuits were devised to make experimental measurements of multiple pulse string occurrence rates in the output from a scintillation detector over a wide range of count rates. Markov process theory was used to predict state transition rates in the logic circuits, enabling the experimental data to be checked rigorously for conformity with those predicted for a Poisson distribution. No fundamental discrepancies were observed. Monte Carlo simulations, incorporating criteria for pulse pileup inherent in the operation of modern analog to digital converters, were used to generate pileup spectra due to coincidences between two pulses (first order pileup) and three pulses (second order pileup) for different semi-Gaussian pulse shapes. Coincidences between pulses in a single channel produced a basic probability density function spectrum. The use of a flat spectrum showed the first order pileup distorted the spectrum to a linear ramp with a pileup tail. A correction algorithm was successfully applied to correct entire spectra (simulated and real) for first and second order pileups.
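
    For a Poisson train, the first-order pileup fraction follows directly from the exponential interarrival distribution: the probability that a pulse arrives within the resolving time tau of its predecessor is 1 - exp(-rate*tau). A short Monte Carlo check in Python, with an assumed count rate and resolving time:

      import numpy as np

      rng = np.random.default_rng(0)
      rate, tau, n = 5.0e4, 2.0e-6, 1_000_000   # counts/s and s, both assumed

      dt = rng.exponential(1.0 / rate, size=n)  # Poisson interarrival times
      piled = np.count_nonzero(dt < tau)        # first-order coincidences
      print(f"simulated pileup fraction:  {piled / n:.4f}")
      print(f"theory 1 - exp(-rate*tau): {1 - np.exp(-rate * tau):.4f}")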

  3. Stability Analysis for a Multi-Camera Photogrammetric System

    PubMed Central

    Habib, Ayman; Detchev, Ivan; Kwak, Eunju

    2014-01-01

    Consumer-grade digital cameras suffer from geometrical instability that may cause problems when used in photogrammetric applications. This paper provides a comprehensive review of this issue of interior orientation parameter variation over time, explains the common ways used for coping with the issue, and describes the existing methods for performing stability analysis for a single camera. The paper then points out the lack of coverage of stability analysis for multi-camera systems, suggests a modification of the collinearity model to be used for the calibration of an entire photogrammetric system, and proposes three methods for system stability analysis. The proposed methods explore the impact of the changes in interior orientation and relative orientation/mounting parameters on the reconstruction process. Rather than relying on ground truth in real datasets to check the system calibration stability, the proposed methods are simulation-based. Experiment results are shown, where a multi-camera photogrammetric system was calibrated three times, and stability analysis was performed on the system calibration parameters from the three sessions. The proposed simulation-based methods provided results that were compatible with a real-data based approach for evaluating the impact of changes in the system calibration parameters on the three-dimensional reconstruction. PMID:25196012

  4. Molecular simulation of the thermodynamic, structural, and vapor-liquid equilibrium properties of neon

    NASA Astrophysics Data System (ADS)

    Vlasiuk, Maryna; Frascoli, Federico; Sadus, Richard J.

    2016-09-01

    The thermodynamic, structural, and vapor-liquid equilibrium properties of neon are comprehensively studied using ab initio, empirical, and semi-classical intermolecular potentials and classical Monte Carlo simulations. Path integral Monte Carlo simulations for isochoric heat capacity and structural properties are also reported for two empirical potentials and one ab initio potential. The isobaric and isochoric heat capacities, thermal expansion coefficient, thermal pressure coefficient, isothermal and adiabatic compressibilities, Joule-Thomson coefficient, and the speed of sound are reported and compared with experimental data for the entire range of liquid densities from the triple point to the critical point. Lustig's thermodynamic approach is formally extended for temperature-dependent intermolecular potentials. Quantum effects are incorporated using the Feynman-Hibbs quantum correction, which results in significant improvement in the accuracy of predicted thermodynamic properties. The new Feynman-Hibbs version of the Hellmann-Bich-Vogel potential predicts the isochoric heat capacity to an accuracy of 1.4% over the entire range of liquid densities. It also predicts other thermodynamic properties more accurately than alternative intermolecular potentials.
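
    The quadratic Feynman-Hibbs correction mentioned above augments a classical pair potential with a curvature term weighted by hbar^2/(24*mu*kB*T), where mu is the reduced mass of the pair. The sketch below applies it to a Lennard-Jones form, used here only as a stand-in for the paper's ab initio and empirical neon potentials, with approximate literature LJ parameters and numerical derivatives.

      import numpy as np

      HBAR, KB = 1.054571817e-34, 1.380649e-23     # J s, J/K
      M_NE = 20.18 * 1.66053907e-27                # neon atomic mass, kg
      EPS, SIG = 36.8 * KB, 2.79e-10               # approximate LJ parameters for Ne

      def u_lj(r):
          return 4 * EPS * ((SIG / r) ** 12 - (SIG / r) ** 6)

      def u_fh(r, T, h=1e-13):
          """Quadratic Feynman-Hibbs correction via central finite differences."""
          mu = M_NE / 2.0                          # reduced mass of a Ne pair
          pref = HBAR**2 / (24.0 * mu * KB * T)
          u1 = (u_lj(r + h) - u_lj(r - h)) / (2 * h)             # U'(r)
          u2 = (u_lj(r + h) - 2 * u_lj(r) + u_lj(r - h)) / h**2  # U''(r)
          return u_lj(r) + pref * (u2 + 2.0 * u1 / r)

      r = 3.1e-10
      print(f"U/kB = {u_lj(r) / KB:.3f} K, FH-corrected = {u_fh(r, 30.0) / KB:.3f} K")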

  5. Two graphical user interfaces for managing and analyzing MODFLOW groundwater-model scenarios

    USGS Publications Warehouse

    Banta, Edward R.

    2014-01-01

    Scenario Manager and Scenario Analyzer are graphical user interfaces that facilitate the use of calibrated, MODFLOW-based groundwater models for investigating possible responses to proposed stresses on a groundwater system. Scenario Manager allows a user, starting with a calibrated model, to design and run model scenarios by adding or modifying stresses simulated by the model. Scenario Analyzer facilitates the process of extracting data from model output and preparing such display elements as maps, charts, and tables. Both programs are designed for users who are familiar with the science on which groundwater modeling is based but who may not have a groundwater modeler’s expertise in building and calibrating a groundwater model from start to finish. With Scenario Manager, the user can manipulate model input to simulate withdrawal or injection wells, time-variant specified hydraulic heads, recharge, and such surface-water features as rivers and canals. Input for stresses to be simulated comes from user-provided geographic information system files and time-series data files. A Scenario Manager project can contain multiple scenarios and is self-documenting. Scenario Analyzer can be used to analyze output from any MODFLOW-based model; it is not limited to use with scenarios generated by Scenario Manager. Model-simulated values of hydraulic head, drawdown, solute concentration, and cell-by-cell flow rates can be presented in display elements. Map data can be represented as lines of equal value (contours) or as a gradated color fill. Charts and tables display time-series data obtained from output generated by a transient-state model run or from user-provided text files of time-series data. A display element can be based entirely on output of a single model run, or, to facilitate comparison of results of multiple scenarios, an element can be based on output from multiple model runs. Scenario Analyzer can export display elements and supporting metadata as a Portable Document Format file.

  6. The effect of different training exercises on the performance outcome on the da Vinci Skills Simulator.

    PubMed

    Walliczek-Dworschak, U; Schmitt, M; Dworschak, P; Diogo, I; Ecke, A; Mandapathil, M; Teymoortash, A; Güldner, C

    2017-06-01

    Increasing usage of robotic surgery presents surgeons with the question of how to acquire the special skills required. This study aimed to analyze the effect of different exercises on their performance outcomes. This prospective study was conducted on the da Vinci Skills Simulator from December 2014 till August 2015. Sixty robotic novices were included and randomized to three groups of 20 participants each. Each group performed three different exercises with comparable difficulty levels. The exercises were performed three times in a row within two training sessions, with an interval of 1 week in between. On the final training day, two new exercises were added and a questionnaire was completed. Technical metrics of performance (overall score, time to complete, economy of motion, instrument collisions, excessive instrument force, instruments out of view, master work space range, drops, missed targets, misapplied energy time, blood loss and broken vessels) were recorded by the simulator software for further analysis. Training with different exercises led to comparable results in performance metrics for the final exercises among the three groups. A significant skills gain was recorded between the first and last exercises, with improved performance in overall score, time to complete and economy of motion for all exercises in all three groups. As training with different exercises led to comparable results in robotic training, the type of exercise seems to play a minor role in the outcome. For a robotic training curriculum, it might be important to choose exercises with comparable difficulty levels. In addition, it seems to be advantageous to limit the duration of the training to maintain the concentration throughout the entire session.

  7. Rapid and sensitive detection of Zika virus by reverse transcription loop-mediated isothermal amplification.

    PubMed

    Wang, Xuan; Yin, Fenggui; Bi, Yuhai; Cheng, Gong; Li, Jing; Hou, Lidan; Li, Yunlong; Yang, Baozhi; Liu, Wenjun; Yang, Limin

    2016-12-01

    Zika virus (ZIKV) is an arbovirus that recently emerged and has expanded worldwide, posing a global threat and raising international concern. Current molecular diagnostics, e.g., real-time PCR and reverse transcription PCR (RT-PCR), are time-consuming, expensive, and can only be deployed in a laboratory rather than in the field. This study aimed to develop a one-step reverse transcription loop-mediated isothermal amplification (RT-LAMP) platform that is more sensitive, specific, and convenient than previous methods and can be easily distributed and implemented. Specific primers were designed and screened to target the entire ZIKV genome. The analytical sensitivity and specificity of the assay were evaluated and compared with traditional PCR and quantitative real-time PCR. Three different simulated clinical sample quick-preparation protocols were evaluated to establish a rapid and straightforward treatment procedure for clinical specimens in open-field detection. The RT-LAMP assay for detection of ZIKV demonstrated superior specificity and sensitivity compared to traditional PCR at the optimum reaction temperature. For the ZIKV RNA standard, the limit of detection was 20 copies/test. For the simulated ZIKV clinical samples, the limit of detection was 0.02 pfu/test, an order of magnitude more sensitive than RT-PCR and similar to real-time PCR. The detection limit of simulated ZIKV specimens prepared using a protease quick-processing method was consistent with that of samples prepared using commercial nucleic acid extraction kits, indicating that our ZIKV detection method could be used in point-of-care testing. The RT-LAMP assay had excellent sensitivity and specificity for detecting ZIKV and can be deployed together with a rapid specimen processing method, offering the possibility of ZIKV diagnosis outside the laboratory. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Tensor network methods for the simulation of open quantum dynamics in multichromophore systems: Application to singlet fission in novel pentacene dimers

    NASA Astrophysics Data System (ADS)

    Chin, Alex

    Singlet fission (SF) is an ultrafast process in which a singlet exciton spontaneously converts into a pair of entangled triplet excitons on neighbouring organic molecules. As a mechanism of multiple exciton generation, it has been suggested as a way to increase the efficiency of organic photovoltaic devices, and its underlying photophysics across a wide range of molecules and materials has attracted significant theoretical attention. Recently, a number of studies using ultrafast nonlinear optics have underscored the importance of intramolecular vibrational dynamics in efficient SF systems, prompting a need for methods capable of simulating open quantum dynamics in the presence of highly structured and strongly coupled environments. Here, a combination of ab initio electronic structure techniques and a new tensor-network methodology for simulating open vibronic dynamics is presented and applied to a recently synthesised dimer of pentacene (DP-Mes). We show that ultrafast (300 fs) SF in this system is driven entirely by symmetry-breaking vibrations, and our many-body approach enables the real-time identification and tracking of the 'functional' vibrational dynamics and the role of the 'bath'-like parts of the environment. Deeper analysis of the emerging wave functions points to interesting links between the time at which parts of the environment become relevant to the SF process and the optimal topology of the tensor networks, highlighting the additional insight provided by moving the problem into the natural language of correlated quantum states and how this could lead to simulations of much larger multichromophore systems. Supported by The Winton Programme for the Physics of Sustainability.

  9. A non-hydrostatic flat-bottom ocean model entirely based on Fourier expansion

    NASA Astrophysics Data System (ADS)

    Wirth, A.

    2005-01-01

    We show how to implement free-slip and no-slip boundary conditions in a three-dimensional Boussinesq flat-bottom ocean model based on Fourier expansion. Our method is inspired by the immersed or virtual boundary technique, in which the effect of boundaries on the flow field is modeled by a virtual force field. Our method, however, explicitly depletes the velocity on the boundary induced by the pressure, while at the same time respecting the incompressibility of the flow field. Spurious spatial oscillations remain at a negligible level in the simulated flow field when using our technique, and no filtering of the flow field is necessary. We furthermore show that with the method presented here the residual velocities at the boundaries are easily reduced to a negligible value. This stands in contradistinction to previous calculations using the immersed or virtual boundary technique. The efficiency is demonstrated by simulating a Rayleigh impulsive flow, for which the time evolution of the simulated flow is compared to an analytic solution, and a three-dimensional Boussinesq simulation of ocean convection. The second instance is taken from a well-studied oceanographic context: a free-slip boundary condition is applied on the upper surface, the modeled sea surface, and a no-slip boundary condition on the lower boundary, the modeled ocean floor. Convergence properties of the method are investigated by solving a two-dimensional stationary problem at different spatial resolutions. The work presented here is restricted to a flat ocean floor. Extensions of our method to ocean models with a realistic topography are discussed.
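
    The building block of such a solver is spectral differentiation of periodic fields. A minimal illustration with numpy's FFT, showing the Fourier-expansion machinery the model rests on (the boundary treatment itself is of course more involved):

      import numpy as np

      n, L = 64, 2 * np.pi
      x = np.arange(n) * L / n
      k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi       # angular wavenumbers

      u = np.sin(3 * x)
      dudx = np.fft.ifft(1j * k * np.fft.fft(u)).real  # spectral derivative

      print("max error vs 3*cos(3x):", np.abs(dudx - 3 * np.cos(3 * x)).max())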

  10. Tablet-based cardiac arrest documentation: a pilot study.

    PubMed

    Peace, Jack M; Yuen, Trevor C; Borak, Meredith H; Edelson, Dana P

    2014-02-01

    Conventional paper-based resuscitation transcripts are notoriously inaccurate, often lacking the precision that is necessary for recording a fast-paced resuscitation. The aim of this study was to evaluate whether a tablet computer-based application could improve upon conventional practices for resuscitation documentation. Nurses used either the conventional paper code sheet or a tablet application during simulated resuscitation events. Recorded events were compared to a gold-standard record generated from video recordings of the simulations and a CPR-sensing defibrillator/monitor. Events compared included defibrillations, medication deliveries, and other interventions. During the study period, 199 unique interventions were observed in the gold-standard record. Of these, 102 occurred during simulations recorded by the tablet application, 78 by the paper code sheet, and 19 during scenarios captured simultaneously by both documentation methods. These occurred over 18 simulated resuscitation scenarios, in which 9 nurses participated. The tablet application had a mean sensitivity of 88.0% for all interventions, compared to 67.9% for the paper code sheet (P=0.001). The median time discrepancy was 3 s for the tablet and 77 s for the paper code sheet when compared to the gold standard (P<0.001). Similar to prior studies, we found that conventional paper-based documentation practices are inaccurate, often misreporting intervention delivery times or missing their delivery entirely. However, our study also demonstrated that a tablet-based documentation method may represent a means to substantially improve resuscitation documentation quality, which could have implications for resuscitation quality improvement and research. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  11. Confirmation of Elevated Methane Emissions in Utah's Uintah Basin With Ground-Based Observations and a High-Resolution Transport Model

    NASA Astrophysics Data System (ADS)

    Foster, C. S.; Crosman, E. T.; Holland, L.; Mallia, D. V.; Fasoli, B.; Bares, R.; Horel, J.; Lin, J. C.

    2017-12-01

    Large CH4 leak rates have been observed in the Uintah Basin of eastern Utah, an area with over 10,000 active and producing natural gas and oil wells. In this paper, we model CH4 concentrations at four sites in the Uintah Basin and compare the simulated results to in situ observations at these sites during two spring time periods in 2015 and 2016. These sites include a baseline location (Fruitland), two sites near oil wells (Roosevelt and Castlepeak), and a site near natural gas wells (Horsepool). To interpret these measurements and relate observed CH4 variations to emissions, we carried out atmospheric simulations using the Stochastic Time-Inverted Lagrangian Transport model driven by meteorological fields simulated by the Weather Research and Forecasting and High Resolution Rapid Refresh models. These simulations were combined with two different emission inventories: (1) aircraft-derived basin-wide emissions allocated spatially using oil and gas well locations, from the National Oceanic and Atmospheric Administration (NOAA), and (2) a bottom-up inventory for the entire U.S., from the Environmental Protection Agency (EPA). At both Horsepool and Castlepeak, the diurnal cycle of modeled CH4 concentrations was captured using NOAA emission estimates but was underestimated using the EPA inventory. These findings corroborate emission estimates from the NOAA inventory, based on daytime mass balance estimates, and provide additional support for a suggested leak rate from the Uintah Basin that is higher than most other regions with natural gas and oil development.
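
    In a Lagrangian framework of this kind, a receptor's modeled enhancement is the footprint (the sensitivity of the measured concentration to surface fluxes) multiplied element-wise by the gridded inventory and summed over the domain. A schematic numpy sketch; the arrays and units are placeholders, and real STILT footprints are also resolved in time.

      import numpy as np

      rng = np.random.default_rng(7)
      footprint = rng.random((50, 60)) * 1e-3   # ppm per unit flux, per grid cell
      emissions = rng.random((50, 60)) * 5.0    # gridded CH4 flux (inventory map)

      enhancement = float(np.sum(footprint * emissions))
      print(f"modeled CH4 enhancement above background: {enhancement:.2f} ppm")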

  12. Traversing the folding pathway of proteins using temperature-aided cascade molecular dynamics with conformation-dependent charges.

    PubMed

    Jani, Vinod; Sonavane, Uddhavesh; Joshi, Rajendra

    2016-07-01

    Protein folding is a multi-microsecond time scale event and involves many conformational transitions. Crucial conformational transitions responsible for the biological functions of biomolecules are difficult to capture using current state-of-the-art molecular dynamics (MD) simulations. Protein folding, being a stochastic process, witnesses these transitions as rare events. Many new methodologies have been proposed for observing these rare events. In this work, temperature-aided cascade MD is proposed as a technique for studying conformational transitions. Folding studies for the Engrailed homeodomain and the Immunoglobulin domain B of protein A have been carried out. Using this methodology, unfolded structures with an RMSD of 20 Å were folded to structures with an RMSD of 2 Å. Three sets of cascade MD runs were carried out using implicit solvation, explicit solvation, and a charge-updating scheme. In the charge-updating scheme, charges based on the conformation obtained are calculated and updated in the topology file. In all the simulations, a structure within 2 Å was reached within a few nanoseconds using these methods. Umbrella sampling has been performed using snapshots from the temperature-aided cascade MD simulation trajectory to build the entire conformational transition pathway. The advantage of the method is that the possible pathways for a particular reaction can be explored within a short simulation time; the disadvantage is that knowledge of the start and end states is required. The charge-updating scheme adds polarization effects to the force field. This improves the electrostatic interactions among the atoms, which may help the protein fold faster.

  13. Pentium Pro inside. 1; A treecode at 430 Gigaflops on ASCI Red

    NASA Technical Reports Server (NTRS)

    Warren, M. S.; Becker, D. J.; Sterling, T.; Salmon, J. K.; Goda, M. P.

    1997-01-01

    As an entry for the 1997 Gordon Bell performance prize, we present results from two methods of solving the gravitational N-body problem on the Intel Teraflops system at Sandia National Laboratory (ASCI Red). The first method, an O(N²) algorithm, obtained 635 Gigaflops for a 1 million particle problem on 6800 Pentium Pro processors. The second solution method, a tree-code which scales as O(N log N), sustained 170 Gigaflops over a continuous 9.4 hour period on 4096 processors, integrating the motion of 322 million mutually interacting particles in a cosmology simulation, while saving over 100 Gigabytes of raw data. Additionally, the tree-code sustained 430 Gigaflops on 6800 processors for the first 5 time-steps of that simulation. This tree-code solution is approximately 10⁵ times more efficient than the O(N²) algorithm for this problem. As an entry for the 1997 Gordon Bell price/performance prize, we present two calculations from the disciplines of astrophysics and fluid dynamics. The simulations were performed on two 16-processor Pentium Pro Beowulf-class computers (Loki and Hyglac), constructed entirely from commodity personal computer technology at a cost of roughly $50k each in September 1996; the price of an equivalent system in August 1997 is less than $30k. At Los Alamos, Loki performed a gravitational tree-code N-body simulation of galaxy formation using 9.75 million particles, which sustained an average of 879 Mflops over a ten day period, and produced roughly 10 Gbytes of raw data.

  14. Starshade orbital maneuver study for WFIRST

    NASA Astrophysics Data System (ADS)

    Soto, Gabriel; Sinha, Amlan; Savransky, Dmitry; Delacroix, Christian; Garrett, Daniel

    2017-09-01

    The Wide Field Infrared Survey Telescope (WFIRST) mission, scheduled for launch in the mid-2020s will perform exoplanet science via both direct imaging and a microlensing survey. An internal coronagraph is planned to perform starlight suppression for exoplanet imaging, but an external starshade could be used to achieve the required high contrasts with potentially higher throughput. This approach would require a separately-launched occulter spacecraft to be positioned at exact distances from the telescope along the line of sight to a target star system. We present a detailed study to quantify the Δv requirements and feasibility of deploying this additional spacecraft as a means of exoplanet imaging. The primary focus of this study is the fuel use of the occulter while repositioning between targets. Based on its design, the occulter is given an offset distance from the nominal WFIRST halo orbit. Target star systems and look vectors are generated using Exoplanet Open-Source Imaging Simulator (EXOSIMS); a boundary value problem is then solved between successive targets. On average, 50 observations are achievable with randomly selected targets given a 30-day transfer time. Individual trajectories can be optimized for transfer time as well as fuel usage to be used in mission scheduling. Minimizing transfer time reduces the total mission time by up to 4.5 times in some simulations before expending the entire fuel budget. Minimizing Δv can generate starshade missions that achieve over 100 unique observations within the designated mission lifetime of WFIRST.

  15. Real-time hydrological early warning system at national scale for surface water and groundwater with stakeholder involvement

    NASA Astrophysics Data System (ADS)

    He, X.; Stisen, S.; Henriksen, H. J.

    2015-12-01

    Hydrological models have been important tools for supporting decision making in water resource management over the past few decades. Nowadays, the frequent occurrence of extreme hydrological events has put focus on the development of real-time hydrological modeling and forecasting systems. Among the various types of hydrological models, only rainfall-runoff models for surface water are commonly used in an online, real-time fashion; there is no tradition of using integrated hydrological models for both surface water and groundwater from a large-scale perspective. At the Geological Survey of Denmark and Greenland (GEUS), we have set up and calibrated an integrated hydrological model that covers the entire nation, namely the DK-model. So far, the DK-model has only been used in offline mode for historical and future scenario simulations. Therefore, challenges arise when operating the DK-model in real-time mode, due to a lack of technical experience and stakeholder awareness. In the present study, we demonstrate the process of bringing the DK-model online while actively involving the stakeholders. Although the system is not yet fully operational, a prototype has been finished and presented to the stakeholders; it can simulate groundwater levels, streamflow and water content in the root zone with a lead time of 48 hours, refreshed every 6 hours. The active involvement of stakeholders has provided very valuable insights and feedback for future improvements.

  16. Monte Carlo simulation of chemistry following radiolysis with TOPAS-nBio.

    PubMed

    Ramos-Méndez, J; Perl, J; Schuemann, J; McNamara, A; Paganetti, H; Faddegon, B

    2018-05-17

    Simulation of water radiolysis and the subsequent chemistry provides important information on the effect of ionizing radiation on biological material. The Geant4 Monte Carlo toolkit has added chemical processes via the Geant4-DNA project. The TOPAS tool simplifies the modeling of complex radiotherapy applications with Geant4 without requiring advanced computational skills, extending the pool of users. Thus, a new extension to TOPAS, TOPAS-nBio, is under development to facilitate the configuration of track-structure simulations as well as water radiolysis simulations with Geant4-DNA for radiobiological studies. In this work, radiolysis simulations were implemented in TOPAS-nBio. Users may now easily add chemical species and their reactions, and set parameters including branching ratios, dissociation schemes, diffusion coefficients, and reaction rates. In addition, parameters for the chemical stage were re-evaluated and updated from those used by default in Geant4-DNA to improve the accuracy of chemical yields. Simulation results of time-dependent and LET-dependent primary yields G_X (chemical species per 100 eV deposited) produced at neutral pH and 25 °C by short track-segments of charged particles were compared to published measurements. The LET range was 0.05-230 keV µm⁻¹. The calculated G_X values for electrons satisfied the material balance equation within 0.3%, and similarly for protons, albeit with a long calculation time. A smaller geometry was used to speed up proton and alpha simulations, with an acceptable difference in the balance equation of 1.3%. Available experimental data of time-dependent G-values for [Formula: see text] agreed with simulated results within 7% ± 8% over the entire time range; for [Formula: see text] within 3% ± 4% over the full time range; for H2O2 from 49% ± 7% at the earliest stages to 3% ± 12% at saturation. For the LET-dependent G_X, the mean ratios to the experimental data were 1.11 ± 0.98, 1.21 ± 1.11, 1.05 ± 0.52, 1.23 ± 0.59 and 1.49 ± 0.63 (1 standard deviation) for [Formula: see text], [Formula: see text], H2, H2O2 and [Formula: see text], respectively. In conclusion, radiolysis and subsequent chemistry with Geant4-DNA has been successfully incorporated in TOPAS-nBio. Results are in reasonable agreement with published measured and simulated data.
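
    The G-value bookkeeping itself is simple: species counts normalized per 100 eV of deposited energy. A two-line Python helper with hypothetical numbers:

      def g_value(n_species, e_dep_ev):
          """Chemical species produced per 100 eV of deposited energy."""
          return 100.0 * n_species / e_dep_ev

      # e.g. 270 OH radicals scored after depositing 10 keV (hypothetical counts)
      print(f"G(OH) = {g_value(270, 1.0e4):.2f} per 100 eV")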

  17. Neutron residual stress measurement and numerical modeling in a curved thin-walled structure by laser powder bed fusion additive manufacturing

    DOE PAGES

    An, Ke; Yuan, Lang; Dial, Laura; ...

    2017-09-11

    Severe residual stresses in metal parts made by laser powder bed fusion additive manufacturing (LPBFAM) can cause both distortion and cracking during fabrication. Limited data are currently available both for iterating through process conditions and design and, in particular, for validating numerical models to accelerate process certification. In this work, residual stresses of a curved thin-walled structure, made of the Ni-based superalloy Inconel 625™ and fabricated by LPBFAM, were resolved by neutron diffraction along both the build and the transverse directions, without measuring the stress-free lattices. The stresses of the entire part during fabrication and after cooling down were predicted by a simplified layer-by-layer finite-element-based numerical model. The simulated and measured stresses were found to be in good quantitative agreement. The validated simplified simulation methodology will make it possible to assess residual stresses in more complex structures and to significantly reduce manufacturing cycle time.

  18. Brief, Why the Launch Equipment Test Facility Needs a Laser Tracker

    NASA Technical Reports Server (NTRS)

    Yue, Shiu H.

    2011-01-01

    The NASA Kennedy Space Center Launch Equipment Test Facility (LETF) supports a wide spectrum of testing and development activities. This capability was originally established in the 1970s to allow full-scale qualification of Space Shuttle umbilicals and T-0 release mechanisms. The LETF has leveraged these unique test capabilities to evolve into a versatile test and development area that supports the entire spectrum of operational programs at KSC. These capabilities are historically aerospace-related, but can certainly be adapted for other industries. One of the more unique test fixtures is the Vehicle Motion Simulator, or VMS. The VMS simulates all of the motions that a launch vehicle experiences from the time of its roll-out to the launch pad through roughly the first X seconds of launch. The VMS enables the development and qualification testing of umbilical systems in both pre-launch and launch environments. The VMS can be used to verify operations procedures, clearances, disconnect system performance and margins, and vehicle loads through processing-flow motion excursions.

  19. ON HYDRODYNAMIC MOTIONS IN DEAD ZONES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oishi, Jeffrey S.; Mac Low, Mordecai-Mark, E-mail: jsoishi@astro.berkeley.ed, E-mail: mordecai@amnh.or

    We investigate fluid motions near the midplane of vertically stratified accretion disks with highly resistive midplanes. In such disks, the magnetorotational instability drives turbulence in thin layers surrounding a resistive, stable dead zone. The turbulent layers in turn drive motions in the dead zone. We examine the properties of these motions using three-dimensional, stratified, local, shearing-box, non-ideal, magnetohydrodynamical simulations. Although the turbulence in the active zones provides a source of vorticity to the midplane, no evidence for coherent vortices is found in our simulations. It appears that this is because of strong vertical oscillations in the dead zone. By analyzing time series of azimuthally averaged flow quantities, we identify an axisymmetric wave mode particular to models with dead zones. This mode is reduced in amplitude, but not suppressed entirely, by changing the equation of state from isothermal to ideal. These waves are too low frequency to affect sedimentation of dust to the midplane, but may have significance for the gravitational stability of the resulting midplane dust layers.

  20. Neutron residual stress measurement and numerical modeling in a curved thin-walled structure by laser powder bed fusion additive manufacturing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    An, Ke; Yuan, Lang; Dial, Laura

    Severe residual stresses in metal parts made by laser powder bed fusion additive manufacturing (LPBFAM) can cause both distortion and cracking during fabrication. Limited data are currently available both for iterating through process conditions and design and, in particular, for validating numerical models to accelerate process certification. In this work, residual stresses of a curved thin-walled structure, made of the Ni-based superalloy Inconel 625™ and fabricated by LPBFAM, were resolved by neutron diffraction along both the build and the transverse directions, without measuring the stress-free lattices. The stresses of the entire part during fabrication and after cooling down were predicted by a simplified layer-by-layer finite-element-based numerical model. The simulated and measured stresses were found to be in good quantitative agreement. The validated simplified simulation methodology will make it possible to assess residual stresses in more complex structures and to significantly reduce manufacturing cycle time.

  1. Dynamic Monte Carlo simulations of radiatively accelerated GRB fireballs

    NASA Astrophysics Data System (ADS)

    Chhotray, Atul; Lazzati, Davide

    2018-05-01

    We present a novel Dynamic Monte Carlo code (DynaMo code) that self-consistently simulates the Compton-scattering-driven dynamic evolution of a plasma. We use the DynaMo code to investigate the time-dependent expansion and acceleration of dissipationless gamma-ray burst fireballs by varying their initial opacities and baryonic content. We study the opacity and energy density evolution of an initially optically thick, radiation-dominated fireball across its entire phase space - in particular during the R_ph < R_sat regime. Our results reveal new phases of fireball evolution: a transition phase with a radial extent of several orders of magnitude - the fireball transitions from Γ ∝ R to Γ ∝ R⁰, a post-photospheric acceleration phase - where fireballs accelerate beyond the photosphere, and a Thomson-dominated acceleration phase - characterized by slow acceleration of optically thick, matter-dominated fireballs due to Thomson scattering. We quantify the new phases by providing analytical expressions of Lorentz factor evolution, which will be useful for deriving jet parameters.

  2. Hybrid-PIC Computer Simulation of the Plasma and Erosion Processes in Hall Thrusters

    NASA Technical Reports Server (NTRS)

    Hofer, Richard R.; Katz, Ira; Mikellides, Ioannis G.; Gamero-Castano, Manuel

    2010-01-01

    HPHall software simulates and tracks the time-dependent evolution of the plasma and erosion processes in the discharge chamber and near-field plume of Hall thrusters. HPHall is an axisymmetric solver that employs a hybrid fluid/particle-in-cell (Hybrid-PIC) numerical approach. HPHall, originally developed by MIT in 1998, was upgraded to HPHall-2 by the Polytechnic University of Madrid in 2006. The Jet Propulsion Laboratory has continued the development of HPHall-2 through upgrades to the physical models employed in the code, and the addition of entirely new ones. Primary among these are the inclusion of a three-region electron mobility model that more accurately depicts the cross-field electron transport, and the development of an erosion sub-model that allows for the tracking of the erosion of the discharge chamber wall. The code is being developed to provide NASA science missions with a predictive tool of Hall thruster performance and lifetime that can be used to validate Hall thrusters for missions.

  3. mizer: an R package for multispecies, trait-based and community size spectrum ecological modelling.

    PubMed

    Scott, Finlay; Blanchard, Julia L; Andersen, Ken H

    2014-10-01

    Size spectrum ecological models are representations of a community of individuals which grow and change trophic level. A key emergent feature of these models is the size spectrum: the total abundance of all individuals, which scales negatively with size. The models we focus on are designed to capture fish community dynamics useful for assessing the community impacts of fishing. We present mizer, an R package for implementing dynamic size spectrum ecological models of an entire aquatic community subject to fishing. Multiple fishing gears can be defined, and fishing mortality can change through time, making it possible to simulate a range of exploitation strategies and management options. mizer implements three versions of the size spectrum modelling framework: the community model, where individuals are only characterized by their size; the trait-based model, where individuals are further characterized by their asymptotic size; and the multispecies model, where additional trait differences are resolved. A range of plot, community indicator and summary methods are available to inspect the results of the simulations.

  4. Vorticity Transport and Wave Emission in the Protoplanetary Nebula

    NASA Technical Reports Server (NTRS)

    Davis, S. S.; DeVincenzi, Donald (Technical Monitor)

    2001-01-01

    Higher order numerical algorithms (4th order in time, 3rd order in space) are applied to the Euler/Energy equations and are used to examine vorticity transport and wave motion in a non-self gravitating, initially isentropic Keplerian disk. In this talk we will examine the response of the nebula to an isolated vortex with a circulation about equal to the rotation rate of Jupiter. The vortex is located on the 4 AU circle and the nebula is simulated from 1 to 24 AU. We show that the vortex emits pressure-supported density and Rossby-type wave packets before it decays within a few orbits. The acoustic density waves evolve into weak (non entropy preserving) shock waves that propagate over the entire disk. The Rossby waves remain in the vicinity of the initial vortex disturbance, but are rapidly damped. Temporal frequencies and spatial wavenumbers are derived using the simulation data and compared with analytical dispersion relations from the linearized Euler/Energy equations.

  5. Vorticity Transport and Wave Emission In A Protoplanetary Disk

    NASA Technical Reports Server (NTRS)

    Davis, S. S.; Davis, Sanford (Technical Monitor)

    2002-01-01

    Higher order numerical algorithms (4th order in time, 3rd order in space) are applied to the Euler equations and are used to examine vorticity transport and wave motion in a non-self gravitating, initially isentropic Keplerian disk. In this talk we will examine the response of the disk to an isolated vortex with a circulation about equal to the rotation rate of Jupiter. The vortex is located on the 4 AU circle and the nebula is simulated from 1 to 24 AU. We show that the vortex emits pressure-supported density and Rossby-type wave packets before it decays within a few orbits. The acoustic density waves evolve into weak (non entropy preserving) shock waves that propagate over the entire disk. The Rossby waves remain in the vicinity of the initial vortex disturbance, but are rapidly damped. Temporal frequencies and spatial wavenumbers are derived from the nonlinear simulation data and correlated with analytical dispersion relations from the linearized Euler and energy equations.

  6. Monte Carlo simulation of the neutron monitor yield function

    NASA Astrophysics Data System (ADS)

    Mangeard, P.-S.; Ruffolo, D.; Sáiz, A.; Madlee, S.; Nutaro, T.

    2016-08-01

    Neutron monitors (NMs) are ground-based detectors that measure variations of the Galactic cosmic ray flux at GV range rigidities. Differences in configuration, electronics, surroundings, and location induce systematic effects on the calculation of the yield functions of NMs worldwide. Different estimates of NM yield functions can differ by a factor of 2 or more. In this work, we present new Monte Carlo simulations to calculate NM yield functions and perform an absolute (not relative) comparison with the count rate of the Princess Sirindhorn Neutron Monitor (PSNM) at Doi Inthanon, Thailand, both for the entire monitor and for individual counter tubes. We model the atmosphere using profiles from the Global Data Assimilation System database and the Naval Research Laboratory Mass Spectrometer, Incoherent Scatter Radar Extended model. Using FLUKA software and the detailed geometry of PSNM, we calculated the PSNM yield functions for protons and alpha particles. An agreement better than 9% was achieved between the PSNM observations and the simulated count rate during the solar minimum of December 2009. The systematic effect from the electronic dead time was studied as a function of primary cosmic ray rigidity at the top of the atmosphere up to 1 TV. We show that the effect is not negligible and can reach 35% at high rigidity for a dead time >1 ms. We analyzed the response function of each counter tube at PSNM using its actual dead time, and we provide normalization coefficients between count rates for various tube configurations in the standard NM64 design that are valid to within ˜1% for such stations worldwide.
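
    The dead-time effect discussed above is often illustrated with the textbook non-paralyzable correction, which recovers the true event rate as m/(1 - m*tau); this is a generic formula, not the PSNM-specific analysis. A minimal Python helper with hypothetical rates shows why a dead time above 1 ms becomes significant:

      def true_rate(measured_hz, tau_s):
          """Non-paralyzable dead-time correction (requires measured*tau < 1)."""
          return measured_hz / (1.0 - measured_hz * tau_s)

      m = 900.0                       # measured counts/s (hypothetical)
      for tau in (20e-6, 1e-3):       # short vs long electronic dead time
          print(f"tau = {tau:g} s -> true rate {true_rate(m, tau):.1f} counts/s")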

  7. Why do high-redshift galaxies show diverse gas-phase metallicity gradients?

    NASA Astrophysics Data System (ADS)

    Ma, Xiangcheng; Hopkins, Philip F.; Feldmann, Robert; Torrey, Paul; Faucher-Giguère, Claude-André; Kereš, Dušan

    2017-04-01

    Recent spatially resolved observations of galaxies at z ˜ 0.6-3 reveal that high-redshift galaxies show complex kinematics and a broad distribution of gas-phase metallicity gradients. To understand these results, we use a suite of high-resolution cosmological zoom-in simulations from the Feedback in Realistic Environments project, which include physically motivated models of the multiphase interstellar medium, star formation and stellar feedback. Our simulations reproduce the observed diversity of kinematic properties and metallicity gradients, broadly consistent with observations at z ˜ 0-3. Strong negative metallicity gradients only appear in galaxies with a rotating disc, but not all rotationally supported galaxies have significant gradients. Strongly perturbed galaxies with little rotation always have flat gradients. The kinematic properties and metallicity gradient of a high-redshift galaxy can vary significantly on short time-scales, associated with starburst episodes. Feedback from a starburst can destroy the gas disc, drive strong outflows and flatten a pre-existing negative metallicity gradient. The time variability of a single galaxy is statistically similar to the entire simulated sample, indicating that the observed metallicity gradients in high-redshift galaxies reflect the instantaneous state of the galaxy rather than the accretion and growth history on cosmological time-scales. We find weak dependence of metallicity gradient on stellar mass and specific star formation rate (sSFR). Low-mass galaxies and galaxies with high sSFR tend to have flat gradients, likely due to the fact that feedback is more efficient in these galaxies. We argue that it is important to resolve feedback on small scales in order to produce the diverse metallicity gradients observed.

  8. Interactive Exploration and Analysis of Large-Scale Simulations Using Topology-Based Data Segmentation.

    PubMed

    Bremer, Peer-Timo; Weber, Gunther; Tierny, Julien; Pascucci, Valerio; Day, Marcus S; Bell, John B

    2011-09-01

    Large-scale simulations are increasingly being used to study complex scientific and engineering phenomena. As a result, advanced visualization and data analysis are also becoming an integral part of the scientific process. Often, a key step in extracting insight from these large simulations involves the definition, extraction, and evaluation of features in the space and time coordinates of the solution. However, in many applications, these features involve a range of parameters and decisions that will affect the quality and direction of the analysis. Examples include particular level sets of a specific scalar field, or local inequalities between derived quantities. A critical step in the analysis is to understand how these arbitrary parameters/decisions impact the statistical properties of the features, since such a characterization will help to evaluate the conclusions of the analysis as a whole. We present a new topological framework that in a single pass extracts and encodes entire families of possible feature definitions as well as their statistical properties. For each time step we construct a hierarchical merge tree, a highly compact yet flexible feature representation. While this data structure is more than two orders of magnitude smaller than the raw simulation data, it allows us to extract a set of features for any given parameter selection in a post-processing step. Furthermore, we augment the trees with additional attributes, making it possible to gather a large number of useful global, local, and conditional statistics that would otherwise be extremely difficult to compile. We also use this representation to create tracking graphs that describe the temporal evolution of the features. Our system provides a linked-view interface to explore the time evolution of the graph interactively alongside the segmentation, thus making it possible to perform extensive data analysis in a very efficient manner. We demonstrate our framework by extracting and analyzing burning cells from a large-scale turbulent combustion simulation. In particular, we show how the statistical analysis enabled by our techniques provides new insight into the combustion process.
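
    The merge-tree construction underlying this framework can be sketched in a few lines: sweep the scalar field from high to low values, union-find adjacent cells already above the sweep level, and record each merge event. A 1-D toy version in Python (the paper's implementation is volumetric and augmented with attributes):

      import numpy as np

      rng = np.random.default_rng(3)
      field = rng.random(20)                   # toy 1-D scalar field

      parent = list(range(field.size))
      def find(i):
          while parent[i] != i:
              parent[i] = parent[parent[i]]    # path halving
              i = parent[i]
          return i

      active, merges = set(), []
      for i in np.argsort(field)[::-1]:        # descending threshold sweep
          active.add(int(i))
          for j in (i - 1, i + 1):             # 1-D neighbourhood
              if j in active:
                  ri, rj = find(int(i)), find(int(j))
                  if ri != rj:                 # two components join at this level
                      parent[ri] = rj
                      merges.append((float(field[i]), ri, rj))

      print(f"{len(merges)} merge events; first at level {merges[0][0]:.3f}")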

  9. Midwest Structural Sciences Center 2009 Annual Report

    DTIC Science & Technology

    2010-08-01

    simulations. Numerical simulations were carried out with a single edge notch beam using an ABAQUS user-element subroutine in conjunction with bilinear and...this effort Digital Image Correlation (DIC) has been applied to measure the coefficient of thermal expansion of the nickel-based super alloy...between 30 and 650°C, the thermal expansion coefficient of Hastelloy X was measured over this entire range and found to be in good agreement with

  10. Radiative Feedback of Forming Star Clusters on Their GMC Environments: Theory and Simulation

    NASA Astrophysics Data System (ADS)

    Howard, C. S.; Pudritz, R. E.; Harris, W. E.

    2013-07-01

    Star clusters form from dense clumps within a molecular cloud. Radiation from these newly formed clusters feeds back on their natal molecular cloud through heating and ionization, which ultimately stops gas accretion into the cluster. Recent studies suggest that radiative feedback effects from a single cluster may be sufficient to disrupt an entire cloud over a short timescale. Simulating cluster formation on a large scale, however, is computationally demanding due to the large number of stars involved. For this reason, we present a model for representing the radiative output of an entire cluster which involves randomly sampling an initial mass function (IMF) as the cluster accretes mass. We show that this model is able to reproduce the star formation histories of observed clusters. To examine the degree to which radiative feedback shapes the evolution of a molecular cloud, we use the FLASH adaptive-mesh refinement hydrodynamics code to simulate cluster formation in a turbulent cloud. Unlike previous studies, sink particles are used to represent a forming cluster rather than individual stars. Our cluster model is then coupled with a raytracing scheme to treat radiative transfer as the clusters grow in mass. This poster will outline the details of our model and present preliminary results from our 3D hydrodynamical simulations.
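
    The cluster radiation model rests on drawing stars from an IMF as the sink particle accretes mass. A minimal sketch of that sampling step, assuming a single-slope Salpeter IMF and inverse-transform sampling (the paper's actual IMF form and mass limits may differ):

```python
import numpy as np

def sample_imf(target_mass, alpha=2.35, m_min=0.1, m_max=100.0, seed=0):
    """Draw stellar masses from a power-law IMF, dN/dm ~ m**(-alpha),
    until the accreted cluster mass is reached (inverse-transform
    sampling of the truncated power law; masses in solar units)."""
    rng = np.random.default_rng(seed)
    e = 1.0 - alpha                      # exponent after integrating the pdf
    a, b = m_min**e, m_max**e
    masses, total = [], 0.0
    while total < target_mass:
        u = rng.random()
        m = (a + u * (b - a)) ** (1.0 / e)
        masses.append(m)
        total += m
    return np.array(masses)

stars = sample_imf(1.0e3)                # a 1000 Msun cluster
print(len(stars), "stars; most massive:", stars.max())
```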

  11. Modeling Magma Mixing: Evidence from U-series age dating and Numerical Simulations

    NASA Astrophysics Data System (ADS)

    Philipp, R.; Cooper, K. M.; Bergantz, G. W.

    2007-12-01

    Magma mixing and recharge is a ubiquitous process in the shallow crust, which can trigger eruption and cause magma hybridization. Phenocrysts in mixed magmas are recorders of magma mixing and can be studied by in-situ techniques and analyses of bulk mineral separates. To better understand if micro-textural and compositional information reflects local or reservoir-scale events, a physical model for gathering and dispersal of crystals is necessary. We present the results of a combined geochemical and fluid dynamical study of magma mixing processes at Volcan Quizapu, Chile; two large (1846/47 AD and 1932 AD) dacitic eruptions from the same vent area were triggered by andesitic recharge magma and show various degrees of magma mixing. Employing a multiphase numerical fluid dynamic model, we simulated a simple mixing process of vesiculated mafic magma intruded into a crystal-bearing silicic reservoir. This unstable condition leads to overturn and mixing. In a second step we use the velocity field obtained to calculate the flow paths of 5000 crystals randomly distributed over the entire system. These particles mimic the phenocryst response to the convective motion. There is little local relative motion between silicate liquid and crystals due to the high viscosity of the melts and the rapid overturn rate of the system. Of special interest is the crystal dispersal and gathering, which is quantified by comparing the distance at the beginning and end of the simulation for all particle pairs that are initially closer than a length scale chosen between 1 and 10 m. At the start of the simulation, both the resident and the newly intruding (mafic) magmas have a unique particle population. Depending on the Reynolds number (Re) and the chosen characteristic length scale of different phenocryst pairs, we statistically describe the heterogeneity of crystal populations on the thin-section scale. For large Re (approx. 25) and a short characteristic length scale of particle pairs, heterogeneity of particle populations is large. After one overturn event, even the "thin section scale" can contain phenocrysts that derive from the entire magmatic system. We combine these results with time-scale information from U-series plagioclase age dating. Apparent crystal residence times from the most evolved and therefore least hybridized rocks for the 1846/47 and 1932 eruptions of Volcan Quizapu are about 5000 and about 3000 yrs, respectively. Based on whole-rock chemistry as well as textural and crystal-chemical data, both eruptions tapped the same reservoir and therefore should record similar crystal residence times. Instead, the discordance of these two ages can be explained by magma mixing as modeled above, if some young plagioclase derived from the andesitic recharge magma which triggered the 1846/47 AD eruption got mixed into the dacite remaining in the reservoir after eruption, thus lowering the apparent crystal residence time for magma that was evacuated from the reservoir in 1932.
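
    The gathering/dispersal statistic is straightforward to compute once particle trajectories are available. A minimal sketch, assuming particle positions are stored as (N, 3) arrays at the start and end of the simulation (array names and the synthetic displacements are illustrative):

```python
import numpy as np

def pair_dispersal(x0, x1, length_scale):
    """For all particle pairs initially closer than `length_scale`,
    return their initial and final separations, so dispersal can be
    quantified as the change in pair distance over the simulation."""
    n = len(x0)
    i, j = np.triu_indices(n, k=1)               # all unordered pairs
    d0 = np.linalg.norm(x0[i] - x0[j], axis=1)
    close = d0 < length_scale
    d1 = np.linalg.norm(x1[i[close]] - x1[j[close]], axis=1)
    return d0[close], d1

rng = np.random.default_rng(1)
x0 = rng.random((500, 3)) * 100.0                    # initial positions, m
x1 = x0 + rng.normal(scale=20.0, size=x0.shape)      # after one overturn
d0, d1 = pair_dispersal(x0, x1, length_scale=5.0)
print(f"mean pair separation grew from {d0.mean():.2f} to {d1.mean():.2f} m")
```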

  12. cellGPU: Massively parallel simulations of dynamic vertex models

    NASA Astrophysics Data System (ADS)

    Sussman, Daniel M.

    2017-10-01

    Vertex models represent confluent tissue by polygonal or polyhedral tilings of space, with the individual cells interacting via force laws that depend on both the geometry of the cells and the topology of the tessellation. This dependence on the connectivity of the cellular network introduces several complications to performing molecular-dynamics-like simulations of vertex models, and in particular makes parallelizing the simulations difficult. cellGPU addresses this difficulty and lays the foundation for massively parallelized, GPU-based simulations of these models. This article discusses its implementation for a pair of two-dimensional models, and compares the typical performance that can be expected when running cellGPU entirely on the CPU versus on a range of commercial and server-grade graphics cards. By implementing the calculation of topological changes and forces on cells in a highly parallelizable fashion, cellGPU enables researchers to simulate time- and length-scales previously inaccessible via existing single-threaded CPU implementations. Program Files doi:http://dx.doi.org/10.17632/6j2cj29t3r.1 Licensing provisions: MIT Programming language: CUDA/C++ Nature of problem: Simulations of off-lattice "vertex models" of cells, in which the interaction forces depend on both the geometry and the topology of the cellular aggregate. Solution method: Highly parallelized GPU-accelerated dynamical simulations in which the force calculations and the topological features can be handled on either the CPU or GPU. Additional comments: The code is hosted at https://gitlab.com/dmsussman/cellGPU, with documentation additionally maintained at http://dmsussman.gitlab.io/cellGPUdocumentation
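
    For readers unfamiliar with vertex models, the commonly used two-dimensional area/perimeter energy functional is easy to state. The sketch below evaluates it in plain Python; this is the generic functional form, not a transcription of cellGPU's CUDA/C++ source, and the parameter values are illustrative:

```python
import numpy as np

def vertex_model_energy(vertices, cells, ka=1.0, a0=1.0, kp=1.0, p0=3.8):
    """Standard 2D vertex-model energy: each cell pays quadratic penalties
    for deviating from a preferred area a0 and perimeter p0. `cells` lists
    each cell's vertex indices in counter-clockwise order."""
    e = 0.0
    for cell in cells:
        poly = vertices[cell]
        x, y = poly[:, 0], poly[:, 1]
        # Shoelace formula for the polygon area.
        area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
        perim = np.linalg.norm(poly - np.roll(poly, -1, axis=0), axis=1).sum()
        e += ka * (area - a0) ** 2 + kp * (perim - p0) ** 2
    return e

# A single unit-square cell: area penalty vanishes, perimeter penalty small.
verts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
print(vertex_model_energy(verts, [np.array([0, 1, 2, 3])]))
```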

  13. Primary T-cell Lymphoma of the Colon

    PubMed Central

    Son, Hee Jung; Rhee, Poong Lyul; Kim, Jae-Jun; Koh, Kwang Choel; Paik, Seong Woon; Rhee, Jong Chul; Koh, Young Hae

    1997-01-01

    A 40-year-old woman had been diagnosed with Crohn's disease in September 1994, but later examinations revealed a primary T-cell lymphoma of the colon. Colonoscopic and histological examination showed ulcerative lesions simulating Crohn's disease involving the entire colon and the terminal ileum, and she was first diagnosed as having Crohn's disease. Differential therapeutic strategies, including corticosteroids, had improved the symptoms, which were dominated by abdominal pain. When she visited our institute in April 1995, she presented with bloody stool twice a day, 7 kg weight loss in a period of six months and a slightly painful abdomen. Colonoscopic findings showed geographic ulceration of the entire colon, especially the rectum, and the terminal ileum. The histologic examination of specimens from colonoscopic biopsy showed primary peripheral T-cell lymphoma of the colon. Any dense lymphocyte infiltrates seen in biopsy specimens obtained from lesions simulating ulcerative colitis or Crohn's disease should be assessed to exclude intestinal lymphoma. PMID:9439161

  14. Simulations of heart mechanics over the cardiac cycle

    NASA Astrophysics Data System (ADS)

    Tavoularis, Stavros; Doyle, Matthew; Bourgault, Yves

    2009-11-01

    This study is concerned with the numerical simulation of blood flow and myocardium motion with fluid-structure interaction of the left ventricle (LV) of a canine heart over the entire cardiac cycle. The LV geometry is modeled as a series of nested prolate ellipsoids and is capped with cylindrical tubes representing the inflow and outflow tracts. The myocardium is modeled as a multi-layered, slightly compressible, transversely isotropic, hyperelastic material, with each layer having different principal directions to approximate the fibrous structure. Blood is modeled as a slightly compressible Newtonian fluid. Blood flow into and out of the LV is driven by left atrial and aortic pressures applied at the distal ends of the inflow and outflow tracts, respectively, along with changes in the stresses in the myocardium caused by time-dependent changes in its material properties, which simulate the cyclic contraction and relaxation of the muscle fibers. Numerical solutions are obtained with the use of a finite element code. The computed temporal and spatial variations of pressure and velocity in the blood and stresses and strains in the myocardium will be discussed and compared to physiological data. The variation of the LV cavity volume over the cardiac cycle will also be discussed.

  15. 3D Realistic Radiative Hydrodynamic Modeling of a Moderate-Mass Star: Effects of Rotation

    NASA Astrophysics Data System (ADS)

    Kitiashvili, Irina; Kosovichev, Alexander G.; Mansour, Nagi N.; Wray, Alan A.

    2018-01-01

    Recent progress in stellar observations opens new perspectives in understanding stellar evolution and structure. However, complex interactions in the turbulent radiating plasma together with effects of magnetic fields and rotation make inferences of stellar properties uncertain. The standard 1D mixing-length-based evolutionary models are not able to capture many physical processes of stellar interior dynamics, but they provide an initial approximation of the stellar structure that can be used to initialize 3D time-dependent radiative hydrodynamics simulations, based on first physical principles, that take into account the effects of turbulence, radiation, and others. In this presentation we will show simulation results from a 3D realistic modeling of an F-type main-sequence star with mass 1.47 Msun, in which the computational domain includes the upper layers of the radiation zone, the entire convection zone, and the photosphere. The simulation results provide new insight into the formation and properties of the convective overshoot region, the dynamics of the near-surface, highly turbulent layer, the structure and dynamics of granulation, and the excitation of acoustic and gravity oscillations. We will discuss the thermodynamic structure, oscillations, and effects of rotation on the dynamics of the star across these layers.

  16. Determination of the effective diffusivity of water in a poly (methyl methacrylate) membrane containing carbon nanotubes using kinetic Monte Carlo simulations.

    PubMed

    Mermigkis, Panagiotis G; Tsalikis, Dimitrios G; Mavrantzas, Vlasis G

    2015-10-28

    A kinetic Monte Carlo (kMC) simulation algorithm is developed for computing the effective diffusivity of water molecules in a poly(methyl methacrylate) (PMMA) matrix containing carbon nanotubes (CNTs) at several loadings. The simulations are conducted on a cubic lattice to the bonds of which rate constants are assigned governing the elementary jump events of water molecules from one lattice site to another. Lattice sites belonging to PMMA domains of the membrane are assigned different rates than lattice sites belonging to CNT domains. Values of these two rate constants are extracted from available numerical data for water diffusivity within a PMMA matrix and a CNT pre-computed on the basis of independent atomistic molecular dynamics simulations, which show that water diffusivity in CNTs is 3 orders of magnitude faster than in PMMA. Our discrete-space, continuum-time kMC simulation results for several PMMA-CNT nanocomposite membranes (characterized by different values of CNT length L and diameter D and by different loadings of the matrix in CNTs) demonstrate that the overall or effective diffusivity, D(eff), of water in the entire polymeric membrane is of the same order of magnitude as its diffusivity in PMMA domains and increases only linearly with the concentration C (vol. %) in nanotubes. For a constant value of the concentration C, D(eff) is found to vary practically linearly also with the CNT aspect ratio L/D. The kMC data allow us to propose a simple bilinear expression for D(eff) as a function of C and L/D that can describe the numerical data for water mobility in the membrane extremely accurately. Additional simulations with two different CNT configurations (completely random versus aligned) show that CNT orientation in the polymeric matrix has only a minor effect on D(eff) (as long as CNTs do not fully penetrate the membrane). We have also extensively analyzed and quantified sublinear (anomalous) diffusive phenomena over small to moderate times and correlated them with the time needed for penetrant water molecules to explore the available large, fast-diffusing CNT pores before Fickian diffusion is reached.

  17. Determination of the effective diffusivity of water in a poly (methyl methacrylate) membrane containing carbon nanotubes using kinetic Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Mermigkis, Panagiotis G.; Tsalikis, Dimitrios G.; Mavrantzas, Vlasis G.

    2015-10-01

    A kinetic Monte Carlo (kMC) simulation algorithm is developed for computing the effective diffusivity of water molecules in a poly(methyl methacrylate) (PMMA) matrix containing carbon nanotubes (CNTs) at several loadings. The simulations are conducted on a cubic lattice to the bonds of which rate constants are assigned governing the elementary jump events of water molecules from one lattice site to another. Lattice sites belonging to PMMA domains of the membrane are assigned different rates than lattice sites belonging to CNT domains. Values of these two rate constants are extracted from available numerical data for water diffusivity within a PMMA matrix and a CNT pre-computed on the basis of independent atomistic molecular dynamics simulations, which show that water diffusivity in CNTs is 3 orders of magnitude faster than in PMMA. Our discrete-space, continuum-time kMC simulation results for several PMMA-CNT nanocomposite membranes (characterized by different values of CNT length L and diameter D and by different loadings of the matrix in CNTs) demonstrate that the overall or effective diffusivity, Deff, of water in the entire polymeric membrane is of the same order of magnitude as its diffusivity in PMMA domains and increases only linearly with the concentration C (vol. %) in nanotubes. For a constant value of the concentration C, Deff is found to vary practically linearly also with the CNT aspect ratio L/D. The kMC data allow us to propose a simple bilinear expression for Deff as a function of C and L/D that can describe the numerical data for water mobility in the membrane extremely accurately. Additional simulations with two different CNT configurations (completely random versus aligned) show that CNT orientation in the polymeric matrix has only a minor effect on Deff (as long as CNTs do not fully penetrate the membrane). We have also extensively analyzed and quantified sublinear (anomalous) diffusive phenomena over small to moderate times and correlated them with the time needed for penetrant water molecules to explore the available large, fast-diffusing CNT pores before Fickian diffusion is reached.
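
    The essential algorithm, a continuum-time random walk on a lattice whose jump rates differ between polymer and nanotube sites, can be sketched compactly. A one-dimensional toy version, assuming a roughly 1000x rate contrast and estimating the effective diffusivity from the mean-square displacement (all parameters illustrative, and the real model is three-dimensional):

```python
import numpy as np

def kmc_effective_diffusivity(rates, n_walkers=200, n_steps=5000, a=1.0, seed=0):
    """Continuum-time kinetic Monte Carlo on a 1D periodic lattice.
    `rates[i]` is the total escape rate out of site i (slow in polymer
    sites, much faster in nanotube sites). D_eff is estimated from
    <x^2> = 2 D t in one dimension."""
    rng = np.random.default_rng(seed)
    n = len(rates)
    msd, total_t = 0.0, 0.0
    for _ in range(n_walkers):
        site = rng.integers(n)
        x, t = 0.0, 0.0
        for _ in range(n_steps):
            t += rng.exponential(1.0 / rates[site])  # waiting time at site
            step = a if rng.random() < 0.5 else -a   # unbiased jump
            x += step
            site = (site + (1 if step > 0 else -1)) % n
        msd += x * x
        total_t += t
    return (msd / n_walkers) / (2.0 * total_t / n_walkers)

# 10% fast (CNT-like) sites with 1000x the polymer jump rate.
rates = np.where(np.arange(1000) < 100, 1.0e3, 1.0)
print("D_eff ~", kmc_effective_diffusivity(rates))
```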

  18. Preliminary validation of a new methodology for estimating dose reduction protocols in neonatal chest computed radiographs

    NASA Astrophysics Data System (ADS)

    Don, Steven; Whiting, Bruce R.; Hildebolt, Charles F.; Sehnert, W. James; Ellinwood, Jacquelyn S.; Töpfer, Karin; Masoumzadeh, Parinaz; Kraus, Richard A.; Kronemer, Keith A.; Herman, Thomas; McAlister, William H.

    2006-03-01

    The risk of radiation exposure is greatest for pediatric patients and, thus, there is a great incentive to reduce the radiation dose used in diagnostic procedures for children to "as low as reasonably achievable" (ALARA). Testing of low-dose protocols presents a dilemma, as it is unethical to repeatedly expose patients to ionizing radiation in order to determine optimum protocols. To overcome this problem, we have developed a computed-radiography (CR) dose-reduction simulation tool that takes existing images and adds synthetic noise to create realistic images that correspond to images generated with lower doses. The objective of our study was to determine the extent to which simulated low-dose images corresponded with original (non-simulated) low-dose images. To make this determination, we created pneumothoraces of known volumes in five neonate cadavers and obtained images of the neonates at 10 mR, 1 mR and 0.1 mR (as measured at the cassette plate). The 10-mR exposures were considered "relatively noise-free" images. We used these 10-mR images and our simulation tool to create simulated 0.1- and 1-mR images. For the simulated and original images, we identified regions of interest (ROIs) of the entire chest, the free-in-air region, and the liver. We compared the means and standard deviations of the ROI grey-scale values of the simulated and original images with paired t tests. We also had observers rate simulated and original images for image quality and for the presence or absence of pneumothoraces. There was no statistically significant difference in grey-scale-value means or standard deviations between simulated and original entire-chest ROIs. The observer performance suggests that an exposure >=0.2 mR is required to detect the presence or absence of pneumothoraces. These preliminary results indicate that the use of the simulation tool is promising for achieving ALARA exposures in children.
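
    The simulation tool's central operation is injecting synthetic quantum noise into a high-dose image so that the total noise matches a target lower dose. A minimal sketch under the simple assumption of a linear detector with Poisson (dose-proportional) noise statistics; the actual tool models the CR imaging chain in more detail, and `gain` here is a hypothetical calibration constant:

```python
import numpy as np

def simulate_low_dose(image, dose_ratio, gain=1.0, seed=0):
    """Create a simulated lower-dose radiograph from a (nearly noise-free)
    high-dose image. Scaling dose by r keeps the mean signal but raises
    relative quantum noise by 1/sqrt(r), so we add Gaussian noise whose
    variance makes up the difference. `gain` converts pixel values to
    detected quanta (an assumed calibration)."""
    rng = np.random.default_rng(seed)
    quanta = image * gain
    extra_var = quanta * (1.0 / dose_ratio - 1.0)   # variance to add
    noisy = quanta + rng.normal(scale=np.sqrt(np.maximum(extra_var, 0.0)))
    return noisy / gain

# 10 mR "relatively noise-free" image down to a simulated 1 mR image.
high = np.full((256, 256), 500.0)
low = simulate_low_dose(high, dose_ratio=0.1)
print(low.mean(), low.std())
```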

  19. Effect of Saturation Pressure Difference on Metal–Silicide Nanopowder Formation in Thermal Plasma Fabrication

    PubMed Central

    Shigeta, Masaya; Watanabe, Takayuki

    2016-01-01

    A computational investigation using a unique model and a solution algorithm was conducted, changing only the saturation pressure of one material artificially during nanopowder formation in thermal plasma fabrication, to highlight the effects of the saturation pressure difference between a metal and silicon. The model can not only express any profile of particle size–composition distribution for a metal–silicide nanopowder even with widely ranging sizes from sub-nanometers to a few hundred nanometers, but it can also simulate the entire growth process involving binary homogeneous nucleation, binary heterogeneous co-condensation, and coagulation among nanoparticles with different compositions. Greater differences in saturation pressures cause a greater time lag for co-condensation of two material vapors during the collective growth of the metal–silicide nanopowder. The greater time lag for co-condensation results in a wider range of composition of the mature nanopowder. PMID:28344300

  20. Aerodynamic optimization of supersonic compressor cascade using differential evolution on GPU

    NASA Astrophysics Data System (ADS)

    Aissa, Mohamed Hasanine; Verstraete, Tom; Vuik, Cornelis

    2016-06-01

    Differential Evolution (DE) is a powerful stochastic optimization method. Compared to gradient-based algorithms, DE is able to avoid local minima but requires at the same time more function evaluations. In turbomachinery applications, function evaluations are performed with time-consuming CFD simulation, which results in a long, unaffordable design cycle. Modern High Performance Computing systems, especially Graphic Processing Units (GPUs), are able to alleviate this inconvenience by accelerating the design evaluation itself. In this work we present a validated CFD Solver running on GPUs, able to accelerate the design evaluation and thus the entire design process. An achieved speedup of 20x to 30x enabled the DE algorithm to run on a high-end computer instead of a costly large cluster. The GPU-enhanced DE was used to optimize the aerodynamics of a supersonic compressor cascade, achieving an aerodynamic loss minimization of 20%.
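
    The optimizer itself is the standard DE/rand/1/bin scheme; what the paper accelerates is the objective evaluation. A minimal sketch with a toy objective standing in for the expensive CFD loss evaluation (population size and the F and CR values are illustrative defaults):

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=32, F=0.8, CR=0.9,
                           generations=200, seed=0):
    """Minimal DE/rand/1/bin minimizer. `f` plays the role of the costly
    CFD evaluation; on a GPU-accelerated solver, each generation's
    evaluations would be dispatched in parallel."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    cost = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            idx = rng.choice([j for j in range(pop_size) if j != i],
                             size=3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)   # mutation
            cross = rng.random(dim) < CR                # binomial crossover
            cross[rng.integers(dim)] = True             # at least one gene
            trial = np.where(cross, mutant, pop[i])
            tc = f(trial)
            if tc <= cost[i]:                           # greedy selection
                pop[i], cost[i] = trial, tc
    return pop[cost.argmin()], cost.min()

# Toy stand-in objective for an aerodynamic loss coefficient.
best_x, best_f = differential_evolution(lambda x: float(np.sum(x**2)),
                                        bounds=[(-5, 5)] * 4)
print(best_x, best_f)
```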

  1. Use of Spacecraft Command Language for Advanced Command and Control Applications

    NASA Technical Reports Server (NTRS)

    Mims, Tikiela L.

    2008-01-01

    The purpose of this work is to evaluate the use of SCL in building and monitoring command and control applications in order to determine its fitness for space operations. Approximately 24,325 lines of PCG2 code were converted to SCL, yielding a 90% reduction in the number of lines of code, as many of the functions and scripts utilized in SCL could be ported and reused. Automated standalone testing, simulating the actual production environment, was performed in order to generalize and gauge the relative time it takes for SCL to update and write a given display. The use of SCL rules, functions, and scripts allowed the creation of several test cases permitting measurement of the amount of time it takes to update a given set of measurements when a globally existing CUI changes. It took the SCL system an average of 926.09 ticks to update the entire display of 323 measurements.

  2. On a phase diagram for random neural networks with embedded spike timing dependent plasticity.

    PubMed

    Turova, Tatyana S; Villa, Alessandro E P

    2007-01-01

    This paper presents an original mathematical framework based on graph theory which is a first attempt to investigate the dynamics of a model of neural networks with embedded spike timing dependent plasticity. The neurons correspond to integrate-and-fire units located at the vertices of a finite subset of a 2D lattice. There are two types of vertices, corresponding to the inhibitory and the excitatory neurons. The edges are directed and labelled by the discrete values of the synaptic strength. We assume that there is an initial firing pattern corresponding to a subset of units that generate a spike. The number of externally activated vertices is a small fraction of the entire network. The model presented here describes how such a pattern propagates throughout the network as a random walk on a graph. Several results are compared with computational simulations and new data are presented for identifying critical parameters of the model.
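
    A toy version of such a propagation process is easy to simulate directly. The sketch below is a loose abstraction, not the paper's model: it uses a random directed graph rather than a 2D lattice, and all parameter values are invented. It shows an initial firing pattern spreading through a network of integrate-and-fire units with excitatory and inhibitory edges:

```python
import numpy as np

def propagate(w, initial, v_thresh=1.0, steps=20):
    """Discrete-time propagation of an initial firing pattern through a
    directed weighted network of integrate-and-fire units: each step,
    spikes deposit their synaptic weight on target units, and any unit
    crossing threshold fires and resets."""
    v = np.zeros(w.shape[0])
    firing = np.zeros(w.shape[0], dtype=bool)
    firing[initial] = True
    counts = []
    for _ in range(steps):
        v += w[firing].sum(axis=0)       # integrate presynaptic input
        firing = v >= v_thresh           # fire...
        v[firing] = 0.0                  # ...and reset
        counts.append(int(firing.sum()))
    return counts

rng = np.random.default_rng(6)
n = 400
adj = rng.random((n, n)) < 0.02                        # sparse directed edges
weights = np.where(rng.random((n, n)) < 0.2, -0.5, 0.3) * adj  # inhib./excit.
initial = rng.choice(n, size=10, replace=False)        # small activated fraction
print(propagate(weights, initial))
```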

  3. User's manual SIG: a general-purpose signal processing program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lager, D.; Azevedo, S.

    1983-10-25

    SIG is a general-purpose signal processing, analysis, and display program. Its main purpose is to perform manipulations on time- and frequency-domain signals. However, it has been designed to ultimately accommodate other representations for data such as multiplexed signals and complex matrices. Many of the basic operations one would perform on digitized data are contained in the core SIG package. Out of these core commands, more powerful signal processing algorithms may be built. Many different operations on time- and frequency-domain signals can be performed by SIG. They include operations on the samples of a signal, such as adding a scalar to each sample, operations on the entire signal such as digital filtering, and operations on two or more signals such as adding two signals. Signals may be simulated, such as a pulse train or a random waveform. Graphics operations display signals and spectra.
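
    The kinds of core operations described (per-sample operations, whole-signal operations, two-signal operations, and simulated signals) map naturally onto a few lines of NumPy. A sketch with hypothetical function names; SIG's actual command set and semantics differ:

```python
import numpy as np

def add_scalar(signal, c):             # per-sample operation
    return signal + c

def add_signals(a, b):                 # operation on two signals
    return a + b

def digital_filter(signal, kernel):    # operation on the entire signal
    return np.convolve(signal, kernel, mode="same")

def spectrum(signal, dt):              # time domain -> frequency domain
    freqs = np.fft.rfftfreq(len(signal), dt)
    return freqs, np.abs(np.fft.rfft(signal))

# Simulated signals: a pulse train plus a random waveform.
dt = 1e-3
t = np.arange(0.0, 1.0, dt)
pulses = (np.sin(2 * np.pi * 50 * t) > 0.99).astype(float)
noise = np.random.default_rng(2).normal(scale=0.1, size=t.size)
sig = add_signals(pulses, noise)
smoothed = digital_filter(sig, np.ones(5) / 5.0)   # moving-average filter
freqs, mag = spectrum(smoothed, dt)
print(freqs[mag.argmax()], "Hz dominant component")
```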

  4. Effect of Saturation Pressure Difference on Metal-Silicide Nanopowder Formation in Thermal Plasma Fabrication.

    PubMed

    Shigeta, Masaya; Watanabe, Takayuki

    2016-03-07

    A computational investigation using a unique model and a solution algorithm was conducted, changing only the saturation pressure of one material artificially during nanopowder formation in thermal plasma fabrication, to highlight the effects of the saturation pressure difference between a metal and silicon. The model can not only express any profile of particle size-composition distribution for a metal-silicide nanopowder even with widely ranging sizes from sub-nanometers to a few hundred nanometers, but it can also simulate the entire growth process involving binary homogeneous nucleation, binary heterogeneous co-condensation, and coagulation among nanoparticles with different compositions. Greater differences in saturation pressures cause a greater time lag for co-condensation of two material vapors during the collective growth of the metal-silicide nanopowder. The greater time lag for co-condensation results in a wider range of composition of the mature nanopowder.

  5. Filamentation effect in a gas attenuator for high-repetition-rate X-ray FELs.

    PubMed

    Feng, Yiping; Krzywinski, Jacek; Schafer, Donald W; Ortiz, Eliazar; Rowen, Michael; Raubenheimer, Tor O

    2016-01-01

    A sustained filamentation or density depression phenomenon in an argon gas attenuator servicing a high-repetition-rate femtosecond X-ray free-electron laser has been studied using a finite-difference method applied to the thermal diffusion equation for an ideal gas. A steady-state solution was obtained by assuming continuous-wave input of an equivalent time-averaged beam power and that the pressure of the entire gas volume has reached equilibrium. Both radial and axial temperature/density gradients were found and are describable as the filamentation or density depression previously reported for a femtosecond optical laser of similar attributes. The effect exhibits complex dependence on the input power, the desired attenuation, and the geometries of the beam and the attenuator. Time-dependent simulations were carried out to further elucidate the evolution of the temperature/density gradients in between pulses, from which the actual attenuation received by any given pulse can be properly calculated.
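
    The steady-state limit reduces to a radial heat-conduction balance between on-axis beam absorption and the cooled wall. A minimal sketch of that balance, assuming a Gaussian heat source and constant conductivity (the paper's finite-difference model also treats axial gradients and realistic gas properties; all numbers here are illustrative):

```python
import numpy as np

def radial_temperature(r_max=0.01, n=400, q0=5.0e6, w=1.0e-3,
                       kappa=0.017, t_wall=300.0):
    """Steady-state radial gas temperature for an axial heat source:
    solve (1/r) d/dr (r * kappa * dT/dr) = -q(r) by direct quadrature,
    with a Gaussian source of 1/e radius w (SI units throughout)."""
    r = np.linspace(0.0, r_max, n)
    dr = r[1] - r[0]
    q = q0 * np.exp(-(r / w) ** 2)          # absorbed power density, W/m^3
    F = np.cumsum(q * r) * dr               # F(r) = integral_0^r q r' dr'
    dTdr = -F / (kappa * np.maximum(r, dr)) # kappa r dT/dr = -F(r)
    # Integrate dT/dr inward from the cooled wall at r_max.
    T = t_wall + np.cumsum((-dTdr)[::-1])[::-1] * dr
    return r, T

r, T = radial_temperature()
print("on-axis temperature:", round(float(T[0]), 1), "K")
# At constant pressure, density scales as 1/T, so this hot core is the
# on-axis density depression ("filament") seen by subsequent pulses.
```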

  6. Aerodynamic optimization of supersonic compressor cascade using differential evolution on GPU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aissa, Mohamed Hasanine; Verstraete, Tom; Vuik, Cornelis

    Differential Evolution (DE) is a powerful stochastic optimization method. Compared to gradient-based algorithms, DE is able to avoid local minima but requires at the same time more function evaluations. In turbomachinery applications, function evaluations are performed with time-consuming CFD simulation, which results in a long, unaffordable design cycle. Modern High Performance Computing systems, especially Graphic Processing Units (GPUs), are able to alleviate this inconvenience by accelerating the design evaluation itself. In this work we present a validated CFD Solver running on GPUs, able to accelerate the design evaluation and thus the entire design process. An achieved speedup of 20x to 30x enabled the DE algorithm to run on a high-end computer instead of a costly large cluster. The GPU-enhanced DE was used to optimize the aerodynamics of a supersonic compressor cascade, achieving an aerodynamic loss minimization of 20%.

  7. A carbon footprint simulation model for the cork oak sector.

    PubMed

    Demertzi, Martha; Paulo, Joana Amaral; Arroja, Luís; Dias, Ana Cláudia

    2016-10-01

    In the present study, a simulation model for the calculation of the carbon footprint of the cork oak sector (CCFM) is developed for the first time. A life cycle approach is adopted including the forest management, manufacturing, use and end-of-life stages. CCFM allows the user to insert the cork type used as raw material and its respective quantity and the distances in between the various stages. The user can choose among different end-of-life destination options for the used cork products. The option of inserting different inputs allows the use of the present simulation model for different cork oak systems, in different countries and with different conditions. CCFM allows the identification of the stages and products with the greatest carbon footprint and thus a better management of the sector from an environmental perspective. The Portuguese cork oak sector is used as an application example of the model. The results obtained showed that the agglomeration industry is the hotspot for the carbon footprint of the cork sector, mainly due to the production of the resins that are mixed with the cork granules for the production of agglomerated cork products. Including the biogenic carbon emissions and the carbon sequestered at the forest in the carbon footprint resulted in a large decrease of the sector's carbon footprint. Future actions for improvement are suggested in order to decrease the carbon footprint of the entire cork sector. It was found that decreasing the emission factor of the agglomeration and transformation industries by 10%, replacing the transport trucks with more recent ones, and reducing by 10% the cork products sent to landfill (while increasing the quantities incinerated and recycled) would decrease the total carbon footprint (excluding biogenic emissions and sequestration) of the entire cork industry by 10%. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Global simulation of formation and evolution of plasmoid and flux-rope in the Earth's Magnetotail

    NASA Astrophysics Data System (ADS)

    Ge, Y.; Raeder, J.; Du, A.

    2014-12-01

    The observation of plasmoids and flux ropes in the Earth's magnetotail was crucial to establish the simultaneous presence of multiple x-lines in the tail, and has become the basis for the Near Earth Neutral Line (NENL) model of substorms. While the "classical" NENL model envisions x-lines that extend across the entire tail, recent observations have shown that neither do the x-lines and resulting plasmoids encompass the entire tail, nor do the x-lines have to lie along the y-axis. The fragmentation of the tail by spatially and temporally limited x-lines has important consequences for the mass and energy budget of the tail. Recent ARTEMIS observations have shown that the plasmoids in the distant tail are limited in the Y direction and some flux ropes are tilted during their tailward propagation. Understanding their formation and evolution during their propagation through the magnetotail should shed more light on the general energy and flux transport of the Earth's magnetosphere. In this study we simulate plasmoids and flux ropes in the Earth's magnetotail using the Open Global Geospace Circulation Model (OpenGGCM). We investigate the generation mechanisms for tail plasmoids and flux ropes and their evolution as they propagate in the magnetotail. The simulation results show that the limited extent of the NENL controls the length, or Y scale, of tail plasmoids and flux ropes. In addition, by studying their 3D magnetic topology we find that a tilted flux rope forms due to a progressive spreading of the reconnection line along the east-west direction, which produces and releases the two ends of the flux rope at different times and at different speeds. By constructing a catalogue of observational signatures of plasmoids and flux ropes we compare the differences in their signatures and find that large-scale plasmoids have much weaker core fields than those inside small-scale flux ropes.

  9. Large scale spatially explicit modeling of blue and green water dynamics in a temperate mid-latitude basin

    NASA Astrophysics Data System (ADS)

    Du, Liuying; Rajib, Adnan; Merwade, Venkatesh

    2018-07-01

    Looking only at climate change impacts provides partial information about a changing hydrologic regime. Understanding the spatio-temporal nature of change in hydrologic processes, and the explicit contributions from both climate and land use drivers, holds more practical value for water resources management and policy intervention. This study presents a comprehensive assessment of the spatio-temporal trends of Blue Water (BW) and Green Water (GW) in a 490,000 km2 temperate mid-latitude basin (Ohio River Basin) over the past 80 years (1935-2014), and from there quantifies the combined as well as relative contributions of climate and land use changes. The Soil and Water Assessment Tool (SWAT) is adopted to simulate hydrologic fluxes. Mann-Kendall and Theil-Sen statistical tests are performed on the modeled outputs to detect, respectively, the trend and magnitude of changes at three different spatial scales: the entire basin, the regional level, and the sub-basin level. Despite the overall volumetric increase of both BW and GW in the entire basin, changes in their annual average values during the period of simulation reveal a distinctive spatial pattern. GW has increased significantly in the upper and lower parts of the basin, which can be related to the prominent land use change in those areas. BW has increased significantly only in the lower part, likely being associated with the notable precipitation change there. Furthermore, the simulation under a time-varying climate but constant land use scenario identifies climate change in the Ohio River Basin to be influential on BW, while the impact is relatively nominal on GW; whereas land use change increases GW markedly but is counterproductive for BW. The approach to quantify combined/relative effects of climate and land use change as shown in this study can be replicated to understand BW-GW dynamics in similar large basins around the globe.
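
    Both trend tests are compact enough to sketch. The example below implements the Mann-Kendall z statistic (no-ties variance) and the Theil-Sen slope, applied to a synthetic annual series standing in for a modeled BW or GW output:

```python
import numpy as np

def mann_kendall_theil_sen(y):
    """Mann-Kendall trend statistic (with continuity correction, assuming
    no ties) and Theil-Sen slope for an annual series y."""
    n = len(y)
    i, j = np.triu_indices(n, k=1)                 # all pairs with i < j
    diffs = y[j] - y[i]
    s = np.sign(diffs).sum()                       # MK S statistic
    var_s = n * (n - 1) * (2 * n + 5) / 18.0       # variance without ties
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    slope = np.median(diffs / (j - i))             # Theil-Sen estimator
    return z, slope

rng = np.random.default_rng(3)
years = np.arange(1935, 2015)
bw = 0.5 * (years - 1935) + rng.normal(scale=10.0, size=years.size)
z, slope = mann_kendall_theil_sen(bw)
print(f"MK z = {z:.2f}, Theil-Sen slope = {slope:.3f} per yr")
```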

  10. Simulating a 40-year flood event climatology of Australia with a view to ocean-land teleconnections

    NASA Astrophysics Data System (ADS)

    Schumann, Guy J.-P.; Andreadis, Konstantinos; Stampoulis, Dimitrios; Bates, Paul

    2015-04-01

    We develop, for the first time, a proof-of-concept version of a high-resolution global flood inundation model to generate a flood inundation climatology of the past 40 years (1973-2012) for the entire Australian continent at a native 1 km resolution. The objectives of our study include (1) deriving an inundation climatology for a continent (Australia) as a demonstrator case to understand the requirements for expanding globally; (2) developing a test bed to assess the potential and value of current and future satellite missions (GRACE, SMAP, ICESat-2, AMSR-2, Sentinels and SWOT) in flood monitoring; and (3) answering science questions such as the linking of inundation to ocean circulation teleconnections. We employ the LISFLOOD-FP hydrodynamic model to generate a flood inundation climatology. The model is built from freely available SRTM-derived data (channel widths, bank heights and floodplain topography corrected for vegetation canopy using ICESat canopy heights). Lakes and reservoirs are represented, and channel hydraulics are resolved using actual channel data with bathymetry inferred from hydraulic geometry. Simulations are run with gauged flows, and the floodplain inundation climatology is compared to observations from GRACE and flood maps from Landsat, SAR, and MODIS. Simulations have been completed for the entire Australian continent. Additionally, changes in flood inundation have been correlated with indices related to global ocean circulation, such as the El Niño Southern Oscillation index. We will produce data layers on flood event climatology and other derived (default) products from the proposed model, including channel and floodplain depths, flow direction, velocity vectors, floodplain water volume, shoreline extent and flooded area. These data layers will be in the form of simple vector and raster formats. Since outputs will be large in size, we propose to upload them onto Google Earth under the GEE API license.

  11. Enforcing elemental mass and energy balances for reduced order models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, J.; Agarwal, K.; Sharma, P.

    2012-01-01

    Development of economically feasible gasification and carbon capture, utilization and storage (CCUS) technologies requires a variety of software tools to optimize the designs of not only the key devices involved (e.g., gasifier, CO{sub 2} adsorber) but also the entire power generation system. High-fidelity models such as Computational Fluid Dynamics (CFD) models are capable of accurately simulating the detailed flow dynamics, heat transfer, and chemistry inside the key devices. However, the integration of CFD models within steady-state process simulators, and subsequent optimization of the integrated system, still presents significant challenges due to the scale differences in both time and length, as well as the high computational cost. A reduced order model (ROM) generated from a high-fidelity model can serve as a bridge between the models of different scales. While high-fidelity models are built upon the principles of mass, momentum, and energy conservation, ROMs are usually developed based on regression-type equations and hence their predictions may violate the mass and energy conservation laws. A high-fidelity model may also have the mass and energy balance problem if it is not tightly converged. Conservation of mass and energy is important when a ROM is integrated into a flowsheet for the process simulation of the entire chemical or power generation system, especially when recycle streams are connected to the modeled device. As a part of the Carbon Capture Simulation Initiative (CCSI) project supported by the U.S. Department of Energy, we developed a software framework for generating ROMs from CFD simulations and integrating them with Process Modeling Environments (PMEs) for system-wide optimization. This paper presents a method to correct the results of a high-fidelity model or a ROM such that the elemental mass and energy are conserved perfectly. Correction factors for the flow rates of individual species in the product streams are solved using a minimization algorithm based on the Lagrangian multiplier method. Enthalpies of product streams are also modified to enforce the energy balance. The approach is illustrated for two ROMs, one based on a CFD model of an entrained-flow gasifier and the other based on the CFD model of a multiphase CO{sub 2} adsorber.
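
    The correction step can be illustrated as a small constrained least-squares problem: find the smallest flow-rate corrections that close the elemental balances exactly. A minimal sketch solving the KKT system of the Lagrange-multiplier formulation (a toy two-species example, not the CCSI implementation; energy-balance enforcement would add analogous constraints):

```python
import numpy as np

def balance_correct(flows, A, target):
    """Minimally adjust product-stream species flows so elemental balances
    close exactly: minimize sum(c**2) subject to A @ (flows * (1 + c)) ==
    target, solved via the KKT system of the Lagrange-multiplier method."""
    n, m = len(flows), A.shape[0]
    B = A * flows                       # constraint Jacobian w.r.t. c
    kkt = np.block([[2.0 * np.eye(n), B.T],
                    [B, np.zeros((m, m))]])
    rhs = np.concatenate([np.zeros(n), target - A @ flows])
    sol = np.linalg.solve(kkt, rhs)
    return flows * (1.0 + sol[:n])      # corrected species flow rates

# Two product species (CO, CO2); force C and O totals to match exactly.
A = np.array([[1.0, 1.0],               # mol C per mol of each species
              [1.0, 2.0]])              # mol O per mol of each species
flows = np.array([0.98, 0.51])          # slightly unbalanced model output
print(balance_correct(flows, A, target=np.array([1.5, 2.0])))
```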

  12. En route position and time control of aircraft using Kalman filtering of radio aid data

    NASA Technical Reports Server (NTRS)

    Mcgee, L. A.; Christensen, J. V.

    1973-01-01

    Fixed-time-of-arrival (FTA) guidance and navigation is investigated as a possible technique capable of operation within much more stringent en route separation standards and offering significant advantages in safety, higher traffic densities, and improved scheduling reliability, both en route and in the terminal areas. This study investigated the application of FTA guidance previously used in spacecraft guidance. These FTA guidance techniques have been modified and are employed to compute the velocity corrections necessary to return an aircraft to a specified great-circle reference path in order to exercise en route time and position control throughout the entire flight. The necessary position and velocity estimates to accomplish this task are provided by Kalman filtering of data from Loran-C, VORTAC/TACAN, Doppler radar, radio or barometric altitude, and altitude rate. The guidance and navigation system was evaluated using a digital simulation of the cruise phase of supersonic and subsonic flights between San Francisco and New York City, and between New York City and London.
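
    The estimation core is a conventional Kalman filter. A single-axis, constant-velocity sketch fusing noisy position fixes (the study's filter blends Loran-C, VORTAC/TACAN, Doppler radar and altitude data in several dimensions; the noise levels below are illustrative):

```python
import numpy as np

def kalman_track(z, dt=1.0, sigma_a=0.5, sigma_z=100.0):
    """Minimal 1D constant-velocity Kalman filter: state is (position,
    velocity), measurements are noisy position fixes."""
    F = np.array([[1.0, dt], [0.0, 1.0]])                 # state transition
    Q = sigma_a**2 * np.array([[dt**4 / 4, dt**3 / 2],
                               [dt**3 / 2, dt**2]])       # process noise
    H = np.array([[1.0, 0.0]])                            # measure position
    R = np.array([[sigma_z**2]])
    x = np.array([z[0], 0.0])
    P = np.diag([sigma_z**2, 100.0])
    out = []
    for zk in z[1:]:
        x = F @ x                                         # predict
        P = F @ P @ F.T + Q
        y = zk - H @ x                                    # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                    # Kalman gain
        x = x + (K @ y).ravel()                           # update
        P = (np.eye(2) - K @ H) @ P
        out.append(x.copy())
    return np.array(out)

# Noisy radio-aid position fixes along a constant-velocity cruise leg.
rng = np.random.default_rng(5)
truth = 250.0 * np.arange(200)                            # m, 250 m/s
fixes = truth + rng.normal(scale=100.0, size=200)
est = kalman_track(fixes)
print("estimated speed:", est[-1, 1], "m/s")
```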

  13. Exclusive queueing model including the choice of service windows

    NASA Astrophysics Data System (ADS)

    Tanaka, Masahiro; Yanagisawa, Daichi; Nishinari, Katsuhiro

    2018-01-01

    In a queueing system involving multiple service windows, choice behavior is a significant concern. This paper incorporates the choice of service windows into a queueing model with a floor represented by discrete cells. We contrived a logit-based choice algorithm for agents considering the numbers of agents and the distances to all service windows. Simulations were conducted with various parameters of agent choice preference for these two elements and for different floor configurations, including the floor length and the number of service windows. We investigated the model from the viewpoint of transit times and entrance block rates. The influences of the parameters on these factors were surveyed in detail and we determined that there are optimum floor lengths that minimize the transit times. In addition, we observed that the transit times were determined almost entirely by the entrance block rates. The results of the presented model are relevant to understanding queueing systems including the choice of service windows and can be employed to optimize facility design and floor management.
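
    A logit choice over service windows, with utility decreasing in both the queue length at and the distance to each window, can be sketched in a few lines (the parameter names and preference weights are illustrative, not the paper's calibrated values):

```python
import numpy as np

def choose_window(queue_lengths, distances, beta_q=1.0, beta_d=0.5, seed=None):
    """Logit-based window choice: utility falls with the number of waiting
    agents and the distance to each window; the chosen window is drawn
    from the resulting softmax probabilities."""
    rng = np.random.default_rng(seed)
    u = -beta_q * np.asarray(queue_lengths, float) \
        - beta_d * np.asarray(distances, float)
    p = np.exp(u - u.max())              # numerically stable softmax
    p /= p.sum()
    return rng.choice(len(p), p=p)

# Three windows: a short queue far away versus a long queue nearby.
print(choose_window([1, 4, 2], [12.0, 3.0, 6.0], seed=4))
```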

  14. Investigation of flowfields found in typical combustor geometries

    NASA Technical Reports Server (NTRS)

    Lilley, D. G.

    1985-01-01

    Activities undertaken during the entire course of research are summarized. Studies were concerned with experimental and theoretical research on 2-D axisymmetric geometries under low speed, nonreacting, turbulent, swirling flow conditions typical of gas turbine and ramjet combustion chambers. They included recirculation zone characterization, time-mean and turbulence simulation in swirling recirculating flow, sudden and gradual expansion flowfields, and further complexities and parameter influences. The study included the investigation of: a complete range of swirl strengths; swirler performance; downstream contraction nozzle sizes and locations; expansion ratios; and inlet side-wall angles. Their individual and combined effects on the test section flowfield were observed, measured and characterized. Experimental methods included flow visualization (with smoke and neutrally-buoyant helium-filled soap bubbles), five-hole pitot probe time-mean velocity field measurements, and single-, double-, and triple-wire hot-wire anemometry measurements of time-mean velocities and normal and shear Reynolds stresses. Computational methods included development of the STARPIC code from the primitive-variable TEACH computer code, and its use in flowfield prediction and turbulence model development.

  15. Time warp operating system version 2.7 internals manual

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The Time Warp Operating System (TWOS) is an implementation of the Time Warp synchronization method proposed by David Jefferson. In addition, it serves as an actual platform for running discrete event simulations. The code comprising TWOS can be divided into several different sections. TWOS typically relies on an existing operating system to furnish some very basic services. This existing operating system is referred to as the Base OS. The existing operating system varies depending on the hardware TWOS is running on. It is Unix on the Sun workstations, Chrysalis or Mach on the Butterfly, and Mercury on the Mark 3 Hypercube. The base OS could be an entirely new operating system, written to meet the special needs of TWOS, but, to this point, existing systems have been used instead. The base OS's used for TWOS on various platforms are not discussed in detail in this manual, as they are well covered in their own manuals. Appendix G discusses the interface between one such OS, Mach, and TWOS.

  16. Sensitivity vector fields in time-delay coordinate embeddings: theory and experiment.

    PubMed

    Sloboda, A R; Epureanu, B I

    2013-02-01

    Identifying changes in the parameters of a dynamical system can be vital in many diagnostic and sensing applications. Sensitivity vector fields (SVFs) are one way of identifying such parametric variations by quantifying their effects on the morphology of a dynamical system's attractor. In many cases, SVFs are a more effective means of identification than commonly employed modal methods. Previously, it has only been possible to construct SVFs for a given dynamical system when a full set of state variables is available. This severely restricts SVF applicability because it may be cost prohibitive, or even impossible, to measure the entire state in high-dimensional systems. Thus, the focus of this paper is constructing SVFs with only partial knowledge of the state by using time-delay coordinate embeddings. Local models are employed in which the embedded states of a neighborhood are weighted in a way referred to as embedded point cloud averaging. Application of the presented methodology to both simulated and experimental time series demonstrates its utility and reliability.
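
    The enabling construction is the time-delay coordinate embedding, which rebuilds a state-space picture of the attractor from a single measured variable. A minimal sketch (the embedding dimension and delay are illustrative; in practice they are chosen by criteria such as false nearest neighbors and mutual information):

```python
import numpy as np

def delay_embed(series, dim, tau):
    """Time-delay coordinate embedding of a scalar series: row k is
    (x[k], x[k+tau], ..., x[k+(dim-1)*tau])."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau:i * tau + n] for i in range(dim)])

# A scalar measurement from a system whose full state is unavailable.
t = np.linspace(0.0, 40.0, 4000)
x = np.sin(t) + 0.5 * np.sin(2.2 * t)
attractor = delay_embed(x, dim=3, tau=25)
print(attractor.shape)       # (3950, 3) reconstructed state vectors
```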

  17. The Impact of NSCAT Data on Simulating Ocean Circulation

    NASA Technical Reports Server (NTRS)

    Chao, Y.; Cheng, B.; Liu, W.

    1998-01-01

    Wind taken from the National Aeronautics and Space Administration (NASA) scatterometer (NSCAT) is compared with the operational analysis from the European Centre for Medium-Range Weather Forecasts (ECMWF) for the entire duration (about 9 months) of the NSCAT mission.

  18. Most robust estimate of the Transient Climate Response yet?

    NASA Astrophysics Data System (ADS)

    Haustein, Karsten; Venema, Victor; Schurer, Andrew

    2017-04-01

    Estimates of the Transient Climate Response (TCR) often lack a coherent hemispheric or otherwise spatio-temporal representation. In the light of recent work that highlights the importance of inhomogeneous forcing considerations (Shindell et al 2014; Marvel et al 2015) and tas/tos-related inaccuracies (Richardson et al. 2016), here we present results from a well-tested two-box response model that takes these effects carefully into account. All external forcing data are updated based on the latest emission estimates as well as recent TSI and volcanic AOD estimates. Observed GMST data are updated likewise and include the entire year of 2016; hence we also provide one of the first TCR estimates that takes the latest El Nino into account. We demonstrate that short-term climate variability is not going to change the TCR estimate beyond very minor fluctuations. The method is therefore shown to be robust within surprisingly small uncertainty estimates. Using PMIP3 simulations and an extended ensemble of HadCM3 simulations (Euro500; Schurer et al. 2014) for the pre-industrial period, we test the fast and slow response time constants that are tailored for observational data (Rypdal 2012). We also test the hemispheric response as well as the response over land and ocean separately. The TCR/ECS ratio is taken from a selected sub-set of CMIP5 simulations. The selection criterion is the best spatio-temporal match over 4 different time periods between 1860 and 2010. We will argue that this procedure should also be the standard procedure to estimate ECS from observations, rather than relying on OHC estimates only. Finally, we demonstrate that PMIP3-type simulations that are initialised at least a century before 1850 (as is the standard initialisation for CMIP5-type simulations) are to be preferred. A remaining long-term radiative imbalance due to strong volcanic eruptions (e.g. Gleckler et al. 2006) tends to make CMIP5-type simulations slightly more sensitive to forcing, which leads to detectably stronger warming up to the present day.
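
    The two-box response model family referred to here has a simple generic form: a fast surface box coupled to a slow deep-ocean box, driven by radiative forcing. A minimal sketch with illustrative (not the authors') parameter values, run through an idealized 70-year linear-ramp TCR experiment:

```python
import numpy as np

def two_box_response(forcing, dt=1.0, c1=8.0, c2=100.0, lam=1.2, gamma=0.7):
    """Generic two-box energy-balance response model: a fast mixed-layer
    box (heat capacity c1, W yr m-2 K-1) coupled to a slow deep-ocean
    box (c2), with climate feedback parameter lam (W m-2 K-1) and
    inter-box exchange coefficient gamma. Values are illustrative."""
    n = len(forcing)
    T1 = np.zeros(n)                 # surface temperature anomaly, K
    T2 = np.zeros(n)                 # deep-ocean temperature anomaly, K
    for k in range(1, n):
        q = gamma * (T1[k-1] - T2[k-1])            # heat uptake by deep box
        T1[k] = T1[k-1] + dt / c1 * (forcing[k-1] - lam * T1[k-1] - q)
        T2[k] = T2[k-1] + dt / c2 * q
    return T1

# TCR-style experiment: forcing ramps to ~2xCO2 (3.7 W m-2) over 70 years.
F = np.linspace(0.0, 3.7, 70)
print("TCR estimate ~", round(float(two_box_response(F)[-1]), 2), "K")
```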

  19. Study of magnetic helicity injection in the active region NOAA 9236 producing multiple flare-associated coronal mass ejection events

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Sung-Hong; Cho, Kyung-Suk; Bong, Su-Chan

    To better understand a preferred magnetic field configuration and its evolution during coronal mass ejection (CME) events, we investigated the spatial and temporal evolution of photospheric magnetic fields in the active region NOAA 9236 that produced eight flare-associated CMEs during the time period of 2000 November 23-26. The time variations of the total magnetic helicity injection rate and the total unsigned magnetic flux are determined and examined not only in the entire active region but also in some local regions such as the main sunspots and the CME-associated flaring regions using SOHO/MDI magnetogram data. As a result, we found that (1) in the sunspots, a large amount of positive (right-handed) magnetic helicity was injected during most of the examined time period, (2) in the flare region, there was a continuous injection of negative (left-handed) magnetic helicity during the entire period, accompanied by a large increase of the unsigned magnetic flux, and (3) the flaring regions were mainly composed of emerging bipoles of magnetic fragments in which magnetic field lines have substantially favorable conditions for making reconnection with large-scale, overlying, and oppositely directed magnetic field lines connecting the main sunspots. These observational findings can also be well explained by some MHD numerical simulations for CME initiation (e.g., reconnection-favored emerging flux models). We therefore conclude that reconnection-favored magnetic fields in the flaring emerging flux regions play a crucial role in producing the multiple flare-associated CMEs in NOAA 9236.

  20. Towards real-time regional earthquake simulation I: real-time moment tensor monitoring (RMT) for regional events in Taiwan

    NASA Astrophysics Data System (ADS)

    Lee, Shiann-Jong; Liang, Wen-Tzong; Cheng, Hui-Wen; Tu, Feng-Shan; Ma, Kuo-Fong; Tsuruoka, Hiroshi; Kawakatsu, Hitoshi; Huang, Bor-Shouh; Liu, Chun-Chi

    2014-01-01

    We have developed a real-time moment tensor monitoring system (RMT) which takes advantage of a grid-based moment tensor inversion technique and real-time broad-band seismic recordings to automatically monitor earthquake activity in the vicinity of Taiwan. The centroid moment tensor (CMT) inversion technique and a grid search scheme are applied to obtain the earthquake source parameters, including the event origin time, hypocentral location, moment magnitude and focal mechanism. All of these source parameters can be determined simultaneously within 117 s after the occurrence of an earthquake. The monitoring area covers the entire Taiwan Island and the offshore region, from 119.3°E to 123.0°E and 21.0°N to 26.0°N, with depths from 6 to 136 km. A 3-D grid system is implemented in the monitoring area with a uniform horizontal interval of 0.1° and a vertical interval of 10 km. The inversion procedure is based on a 1-D Green's function database calculated by the frequency-wavenumber (fk) method. We compare our results with the Central Weather Bureau (CWB) catalogue data for earthquakes that occurred between 2010 and 2012. The average differences in event origin time and hypocentral location are less than 2 s and 10 km, respectively. The focal mechanisms determined by RMT are also comparable with the Broadband Array in Taiwan for Seismology (BATS) CMT solutions. These results indicate that the RMT system is realizable and efficient for monitoring local seismic activity. In addition, the time needed to obtain all the point source parameters is reduced substantially compared to routine earthquake reports. By connecting RMT with a real-time online earthquake simulation (ROS) system, all the source parameters will be forwarded to the ROS to make real-time earthquake simulation feasible. The RMT has operated offline (2010-2011) and online (from January 2012 to the present) at the Institute of Earth Sciences (IES), Academia Sinica (http://rmt.earth.sinica.edu.tw). The long-term goal of this system is to provide real-time source information for rapid seismic hazard assessment during large earthquakes.

  1. An unconditionally stable method for numerically solving solar sail spacecraft equations of motion

    NASA Astrophysics Data System (ADS)

    Karwas, Alex

    Solar sails use the endless supply of the Sun's radiation to propel spacecraft through space. The sails use the momentum transfer from the impinging solar radiation to provide thrust to the spacecraft while expending zero fuel. Recently, the first solar sail spacecraft, or sailcraft, named IKAROS completed a successful mission to Venus and proved the concept of solar sail propulsion. Sailcraft experimental data are difficult to gather due to the large expense of space travel; therefore, a reliable and accurate computational method is needed to make the process more efficient. Presented in this document is a new approach to simulating solar sail spacecraft trajectories. The new method provides unconditionally stable numerical solutions for trajectory propagation and includes an improved physical description over other methods. The unconditional stability of the new method means that a unique numerical solution is always determined. The improved physical description of the trajectory provides a numerical solution and time derivatives that are continuous throughout the entire trajectory. The error of the continuous numerical solution is also known for the entire trajectory. Optimal control for maximizing thrust is also provided within the framework of the new method. Verification of the new approach is presented through a mathematical description and through numerical simulations. The mathematical description provides details of the sailcraft equations of motion, the numerical method used to solve the equations, and the formulation for implementing the equations of motion into the numerical solver. Previous work in the field is summarized to show that the new approach can act as a replacement for previous trajectory propagation methods. A code was developed to perform the simulations and it is also described in this document. Results of the simulations are compared to the flight data from the IKAROS mission. Comparison of the two sets of data shows that the new approach is capable of accurately simulating sailcraft motion. Sailcraft and spacecraft simulations are compared to flight data and to other numerical solution techniques. The new formulation shows an increase in accuracy over a widely used trajectory propagation technique. Simulations for two-dimensional, three-dimensional, and variable attitude trajectories are presented to show the multiple capabilities of the new technique. An element of optimal control is also part of the new technique. An additional equation is added to the sailcraft equations of motion that maximizes thrust in a specific direction. A technical description and results of an example optimization problem are presented. The spacecraft attitude dynamics equations take the simulation a step further by providing control torques using the angular rate and acceleration outputs of the numerical formulation.

  2. UOE Pipe Manufacturing Process Simulation: Equipment Designing and Construction

    NASA Astrophysics Data System (ADS)

    Delistoian, Dmitri; Chirchor, Mihael

    2017-12-01

    The UOE pipe manufacturing process directly influences pipeline resilience and operating capacity. At present, the most widespread pipe manufacturing method is UOE, which is based on cold forming. After each technological step, a certain stress and strain level appears. To study pipe stress and strain, special equipment was designed and constructed that simulates the entire technological process. The UOE pipe equipment is dedicated to manufacturing longitudinally submerged-arc-welded DN 400 (16 inch) steel pipe.

  3. Space telescope neutral buoyancy simulations: The first two years

    NASA Technical Reports Server (NTRS)

    Sanders, F. G.

    1982-01-01

    Neutral buoyancy simulations which were conducted to validate the crew systems interface as it relates to space telescope on-orbit maintenance and contingency operations are discussed. The initial concept validation tests using low fidelity mockups are described. The entire spectrum of proposed space telescope refurbishment and selected contingencies, exercised using upgraded mockups which reflect flight hardware, is reported. Findings which may be applicable to future efforts of a similar nature are presented.

  4. New cellular automaton designed to simulate geometration in gel electrophoresis

    NASA Astrophysics Data System (ADS)

    Krawczyk, M. J.; Kułakowski, K.; Maksymowicz, A. Z.

    2002-08-01

    We propose a new kind of cellular automaton to simulate transportation of molecules of DNA through agarose gel. Two processes are taken into account: reptation at strong electric field E, described in the particle model, and geometration, i.e. subsequent hookings and releases of long molecules at and from gel fibres. The automaton rules are deterministic and they are designed to describe both processes within one unified approach. Thermal fluctuations are not taken into account. The number of simultaneous hookings is limited by the molecule length. The features of the automaton are: (i) the size of the cell neighbourhood for the automaton rule varies dynamically, from nearest neighbours to the entire molecule; (ii) the length of the time step is determined at each step according to dynamic rules. Calculations are made up to N=244 reptons in a molecule. Two subsequent stages of the motion are found. Firstly, an initial set of random configurations of molecules is transformed into a more ordered phase, where most molecules are elongated along the applied field direction. After some transient time, the mobility μ reaches a constant value; it then varies with N as 1/N for long molecules. The band dispersion varies with time t approximately as N·t^{1/2}. Our results indicate that the well-known plateau of the mobility μ vs. N does not hold at large electric fields.

  5. Lesion size estimator of cardiac radiofrequency ablation at different common locations with different tip temperatures.

    PubMed

    Lai, Yu-Chi; Choy, Young Bin; Haemmerich, Dieter; Vorperian, Vicken R; Webster, John G

    2004-10-01

    Finite element method (FEM) analysis has become a common method to analyze the lesion formation during temperature-controlled radiofrequency (RF) cardiac ablation. We present a process of FEM modeling a system including blood, myocardium, and an ablation catheter with a thermistor embedded at the tip. The simulation used a simple proportional-integral (PI) controller to control the entire process operated in temperature-controlled mode. Several factors affect the lesion size such as target temperature, blood flow rate, and application time. We simulated the time response of RF ablation at different locations by using different target temperatures. The applied sites were divided into two groups each with a different convective heat transfer coefficient. The first group was high-flow such as the atrioventricular (AV) node and the atrial aspect of the AV annulus, and the other was low-flow such as beneath the valve or inside the coronary sinus. Results showed the change of lesion depth and lesion width with time, under different conditions. We collected data for all conditions and used it to create a database. We implemented a user-interface, the lesion size estimator, where the user enters set temperature and location. Based on the database, the software estimated lesion dimensions during different applied durations. This software could be used as a first-step predictor to help the electrophysiologist choose treatment parameters.
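
    As a hedged illustration of the temperature-controlled mode described above, the sketch below runs a discrete PI loop against a lumped first-order thermal model of the catheter tip. The gains (kp, ki), time constant (tau), and heating coefficient (gain) are invented for illustration and are not taken from the paper's FEM model.

    ```python
    # Toy PI temperature control of an ablation tip; all parameters are
    # illustrative assumptions, not values from the paper.
    def simulate_pi_ablation(t_target=70.0, t_blood=37.0, duration=60.0,
                             dt=0.1, kp=0.5, ki=0.05, tau=5.0, gain=2.0):
        temp, integral = t_blood, 0.0
        history = []
        for step in range(int(duration / dt)):
            error = t_target - temp
            integral += error * dt
            power = max(0.0, kp * error + ki * integral)   # W, clamped at 0
            # First-order tip model: RF heating versus convective cooling.
            temp += (gain * power - (temp - t_blood)) / tau * dt
            history.append((step * dt, temp, power))
        return history

    for t, temp, power in simulate_pi_ablation()[::100]:
        print(f"t={t:5.1f} s  tip={temp:5.1f} C  power={power:4.1f} W")
    ```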

  6. Thin Shell Model for NIF capsule stagnation studies

    NASA Astrophysics Data System (ADS)

    Hammer, J. H.; Buchoff, M.; Brandon, S.; Field, J. E.; Gaffney, J.; Kritcher, A.; Nora, R. C.; Peterson, J. L.; Spears, B.; Springer, P. T.

    2015-11-01

    We adapt the thin shell model of Ott et al. to asymmetric ICF capsule implosions on NIF. Through much of an implosion, the shell aspect ratio is large so the thin shell approximation is well satisfied. Asymmetric pressure drive is applied using an analytic form for ablation pressure as a function of the x-ray flux, as well as time-dependent 3D drive asymmetry from hohlraum calculations. Since deviations from a sphere are small through peak velocity, we linearize the equations, decompose them by spherical harmonics and solve ODE's for the coefficients. The model gives the shell position, velocity and areal mass variations at the time of peak velocity, near 250 microns radius. The variables are used to initialize 3D rad-hydro calculations with the HYDRA and ARES codes. At link time the cold fuel shell and ablator are each characterized by a density, adiabat and mass. The thickness, position and velocity of each point are taken from the thin shell model. The interior of the shell is filled with a uniform gas density and temperature consistent with the 3/2PV energy found from 1D rad-hydro calculations. 3D linked simulations compare favorably with integrated simulations of the entire implosion. Through generating synthetic diagnostic data, the model offers a method for quickly testing hypothetical sources of asymmetry and comparing with experiment. Prepared by LLNL under Contract DE-AC52-07NA27344.

  7. Microfluidic droplet-based liquid-liquid extraction.

    PubMed

    Mary, Pascaline; Studer, Vincent; Tabeling, Patrick

    2008-04-15

    We study microfluidic systems in which mass exchanges take place between moving water droplets, formed on-chip, and an external phase (octanol). Here, no chemical reaction takes place, and the mass exchanges are driven by a contrast in chemical potential between the dispersed and continuous phases. We analyze the case where the microfluidic droplets, occupying the entire width of the channel, extract a solute (fluorescein) from the external phase (extraction) and the opposite case, where droplets reject a solute (rhodamine) into the external phase (purification). Four flow configurations are investigated, based on straight or zigzag microchannels. In addition to the experimental work, we performed two-dimensional numerical simulations. In the experiments, we analyze the influence of different parameters on the process (channel dimensions, fluid viscosities, flow rates, drop size, droplet spacing, ...). Several regimes are singled out. In agreement with the mass transfer theory of Young et al. (Young, W.; Pumir, A.; Pomeau, Y. Phys. Fluids A 1989, 1, 462), we find that, after a short transient, the amount of matter transferred across the droplet interface grows as the square root of time and that the time it takes for the transfer process to be completed decreases as Pe^{-2/3}, where Pe is the Peclet number based on droplet velocity and radius. The numerical simulation is found to be in excellent agreement with the experiment. In practice, the transfer time ranges between a fraction of a second and a few seconds, which is much faster than conventional systems.
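
    A quick worked example of the quoted scaling: assuming the completion time has the form t ∝ Pe^{-2/3} (the prefactor C below is a placeholder, not a fitted value), doubling the Peclet number shortens the transfer time by a factor of 2^{2/3} ≈ 1.59.

    ```python
    # Illustration of the assumed scaling t_transfer = C * Pe**(-2/3);
    # C is a placeholder prefactor, not a value from the paper.
    C = 1.0
    for Pe in (100, 200, 400):
        print(f"Pe={Pe:4d}  relative transfer time = {C * Pe ** (-2 / 3):.4f}")
    ```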

  8. Towards simulating and quantifying the light-cone EoR 21-cm signal

    NASA Astrophysics Data System (ADS)

    Mondal, Rajesh; Bharadwaj, Somnath; Datta, Kanan K.

    2018-02-01

    The light-cone (LC) effect causes the Epoch of Reionization (EoR) 21-cm signal T_b(n̂, ν) to evolve significantly along the line-of-sight (LoS) direction ν. In the first part of this paper, we present a method to properly incorporate the LC effect in simulations of the EoR 21-cm signal that includes peculiar velocities. Subsequently, we discuss how to quantify the second-order statistics of the EoR 21-cm signal in the presence of the LC effect. We demonstrate that the 3D power spectrum P(k) fails to quantify the entire information because it assumes the signal to be ergodic and periodic, whereas the LC effect breaks these conditions along the LoS. Considering an LC simulation centred at redshift 8 where the mean neutral fraction drops from 0.65 to 0.35 across the box, we find that P(k) misses out on ~40 per cent of the information at the two ends of the 17.41 MHz simulation bandwidth. The multifrequency angular power spectrum (MAPS) C_ℓ(ν_1, ν_2) quantifies the statistical properties of T_b(n̂, ν) without assuming the signal to be ergodic and periodic along the LoS. We expect this to quantify the entire statistical information of the EoR 21-cm signal. We apply MAPS to our LC simulation and present preliminary results for the EoR 21-cm signal.
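
    To make the MAPS definition concrete, here is a hedged flat-sky sketch of an estimator C_ℓ(ν1, ν2) computed from a toy Cartesian light-cone cube. The binning and normalization conventions are illustrative assumptions; the paper's estimator is defined on the sky rather than on this periodic grid.

    ```python
    # Flat-sky MAPS sketch for a simulated cube T[ix, iy, inu]; conventions
    # here are illustrative, not the paper's.
    import numpy as np

    def maps_estimator(cube, box_deg, n_lbins=10):
        npix, nnu = cube.shape[0], cube.shape[2]
        dtheta = np.deg2rad(box_deg) / npix          # pixel size in radians
        # 2D Fourier modes of every frequency slice, with pixel-area weight.
        ft = np.fft.fft2(cube, axes=(0, 1)) * dtheta ** 2
        lx = 2 * np.pi * np.fft.fftfreq(npix, d=dtheta)
        lmod = np.hypot(*np.meshgrid(lx, lx, indexing="ij"))
        bins = np.linspace(lmod.min(), lmod.max(), n_lbins + 1)
        which = np.digitize(lmod.ravel(), bins) - 1
        area = np.deg2rad(box_deg) ** 2
        cl = np.zeros((n_lbins, nnu, nnu))
        for b in range(n_lbins):
            idx = np.flatnonzero(which == b)
            if idx.size == 0:
                continue
            modes = ft.reshape(-1, nnu)[idx]         # (n_modes, n_nu)
            # Cross-correlate every pair of frequency channels in this l bin.
            cl[b] = (modes.conj().T @ modes).real / (idx.size * area)
        return 0.5 * (bins[:-1] + bins[1:]), cl

    ell, cl = maps_estimator(np.random.randn(64, 64, 16), box_deg=5.0)
    print(cl.shape)  # (n_lbins, n_nu, n_nu); no ergodicity assumed along nu
    ```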

  9. 3D Global Fluid Simulations of Turbulence in LAPD

    NASA Astrophysics Data System (ADS)

    Rogers, Barrett; Ricci, Paolo; Li, Bo

    2009-05-01

    We present 3D global fluid simulations of the UCLA upgraded Large Plasma Device (LAPD). This device confines an 18-m-long, cylindrically symmetric plasma with a uniform magnetic field. The plasma in the simulations is generated by density and temperature sources inside the computational domain, and sheath boundary conditions are applied at the ends of the plasma column. In 3D simulations of the entire plasma, we observe strong, rotating intermittent density and temperature fluctuations driven by resistive driftwave turbulence with finite parallel wavenumbers. Analogous simulations carried out in the 2D limit (that is, assuming that the motions are purely interchange-like) display much weaker mode activity driven by a Kelvin-Helmholtz instability. The properties and scaling of the turbulence and transport will be discussed.

  10. ROMI-RIP: Rough mill rip-first simulator. Forest Service general technical report (Final)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, R.E.

    1995-07-01

    The ROugh Mill Rip-First Simulator (ROMI-RIP) is a computer software package that simulates the gang-ripping of lumber. ROMI-RIP was designed to closely simulate current machines and industrial practice. This simulator allows the user to perform `what if` analyses on various gang-rip-first rough mill operations with fixed, floating outer blade, and all-movable blade arbors. ROMI-RIP accepts cutting bills with up to 300 different part sizes. Plots of processed boards are easily viewed or printed. Detailed summaries of processing steps (number of rips and crosscuts) and yields (single boards or entire board files) can also be viewed or printed. ROMI-RIP requires IBM personal computers with 80286 or higher processors.

  11. 3d visualization of atomistic simulations on every desktop

    NASA Astrophysics Data System (ADS)

    Peled, Dan; Silverman, Amihai; Adler, Joan

    2013-08-01

    Once upon a time, after making simulations, one had to go to a visualization center with fancy SGI machines to run a GL visualization and make a movie. More recently, OpenGL and its mesa clone have let us create 3D on simple desktops (or laptops), whether or not a Z-buffer card is present. Today, 3D a la Avatar is a commodity technique, presented in cinemas and sold for home TV. However, only a few special research centers have systems large enough for entire classes to view 3D, or special immersive facilities like visualization CAVEs or walls, and not everyone finds 3D immersion easy to view. For maximum physics with minimum effort a 3D system must come to each researcher and student. So how do we create 3D visualization cheaply on every desktop for atomistic simulations? After several months of attempts to select commodity equipment for a whole room system, we selected an approach that goes back a long time, even predating GL. The old concept of anaglyphic stereo relies on two images, slightly displaced, and viewed through colored glasses, or two squares of cellophane from a regular screen/projector or poster. We have added this capability to our AViz atomistic visualization code in its new, 6.1 version, which is RedHat, CentOS and Ubuntu compatible. Examples using data from our own research and that of other groups will be given.
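
    The anaglyph construction described above is simple enough to sketch directly: take two renders of the same scene with the camera displaced, feed the left image into the red channel and the right image into the green and blue channels. The snippet below is a minimal illustration, not the AViz implementation.

    ```python
    # Minimal red-cyan anaglyph: red from the left eye, green/blue from
    # the right eye. Inputs are any two equally sized RGB arrays.
    import numpy as np

    def anaglyph(left_rgb: np.ndarray, right_rgb: np.ndarray) -> np.ndarray:
        out = right_rgb.copy()
        out[..., 0] = left_rgb[..., 0]   # red channel from the left eye
        return out

    # Toy usage with random "renders"; real use would load two images of an
    # atomistic scene rendered with the camera shifted a few degrees.
    left = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
    right = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
    print(anaglyph(left, right).shape)
    ```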

  12. The effect of simulated ostracism on physical activity behavior in children.

    PubMed

    Barkley, Jacob E; Salvy, Sarah-Jeanne; Roemmich, James N

    2012-03-01

    To assess the effects of simulated ostracism on children's physical activity behavior, time allocated to sedentary behavior, and liking of physical activity. Nineteen children (11 boys, 8 girls; age 11.7 ± 1.3 years) completed 2 experimental sessions. During each session, children played a virtual ball-toss computer game (Cyberball). In one session, children played Cyberball and experienced ostracism; in the other session, they were exposed to the inclusion/control condition. The order of conditions was randomized. After playing Cyberball, children were taken to a gymnasium where they had free-choice access to physical and sedentary activities for 30 minutes. Children could participate in the activities, in any pattern they chose, for the entire period. Physical activity during the free-choice period was assessed via accelerometry and sedentary time via observation. Finally, children reported their liking for the activity session via a visual analog scale. Children accumulated 22% fewer (P < .01) accelerometer counts and 41% more (P < .04) minutes of sedentary activity in the ostracized condition (8.9 × 10⁴ ± 4.5 × 10⁴ counts, 11.1 ± 9.3 minutes) relative to the included condition (10.8 × 10⁴ ± 4.7 × 10⁴ counts, 7.9 ± 7.9 minutes). Liking (8.8 ± 1.5 cm included, 8.1 ± 1.9 cm ostracized) of the activity sessions was not significantly different (P > .10) between conditions. Simulated ostracism elicits decreased subsequent physical activity participation in children. Ostracism may contribute to children's lack of physical activity.

  13. Two-dimensional time dependent hurricane overwash and erosion modeling at Santa Rosa Island

    USGS Publications Warehouse

    McCall, R.T.; Van Theil de Vries, J. S. M.; Plant, N.G.; Van Dongeren, A. R.; Roelvink, J.A.; Thompson, D.M.; Reniers, A.J.H.M.

    2010-01-01

    A 2DH numerical model, which is capable of computing nearshore circulation and morphodynamics, including dune erosion, breaching and overwash, is used to simulate overwash caused by Hurricane Ivan (2004) on a barrier island. The model is forced using parametric wave and surge time series based on field data and large-scale numerical model results. The model predicted beach face and dune erosion reasonably well, as well as the development of washover fans. Furthermore, the model demonstrated considerable quantitative skill (upwards of 66% of variance explained, maximum bias -0.21 m) in hindcasting the post-storm shape and elevation of the subaerial barrier island when a sheet flow sediment transport limiter was applied. The prediction skill ranged between 0.66 and 0.77 in a series of sensitivity tests in which several hydraulic forcing parameters were varied. The sensitivity studies showed that variations in the incident wave height and wave period affected the entire simulated island morphology, while variations in the surge level gradient between the ocean and the back barrier bay affected the amount of deposition on the back barrier and in the back barrier bay. The model sensitivity to the sheet flow sediment transport limiter, which served as a proxy for unknown factors controlling the resistance to erosion, was significantly greater than the sensitivity to the hydraulic forcing parameters. If no limiter was applied, the simulated morphological response of the barrier island was an order of magnitude greater than the measured morphological response.

  14. Nonisothermal glass molding for the cost-efficient production of precision freeform optics

    NASA Astrophysics Data System (ADS)

    Vu, Anh-Tuan; Kreilkamp, Holger; Dambon, Olaf; Klocke, Fritz

    2016-07-01

    Glass molding has become a key replication-based technology to satisfy the intensively growing demand for complex precision optics in today's photonics market. However, the state-of-the-art replicative technologies are still limited, mainly due to their insufficiency to meet the requirements of mass production. This paper introduces a newly developed nonisothermal glass molding process in which a complex-shaped optic is produced in a very short process cycle. The innovative molding technology promises cost-efficient production because of increased mold lifetime, lower energy consumption, and high throughput from a fast process chain. At this early stage of process development, the research focuses on integrating finite element simulation into the process chain to reduce time- and labor-intensive cost. By virtue of numerical modeling, defects including chill ripples and glass sticking in the nonisothermal molding process can be predicted and their consequent effects avoided. In addition, the influences of process parameters and glass preforms on the surface quality, form accuracy, and residual stress are discussed. A series of experiments was carried out to validate the simulation results. The successful modeling therefore provides a systematic strategy for glass preform design, mold compensation, and optimization of the process parameters. In conclusion, the integration of simulation into the entire nonisothermal glass molding process chain will significantly increase manufacturing efficiency as well as reduce the time-to-market for the mass production of complex precision yet low-cost glass optics.

  15. Nonparametric estimation of groundwater residence time distributions: What can environmental tracer data tell us about groundwater residence time?

    NASA Astrophysics Data System (ADS)

    McCallum, James L.; Engdahl, Nicholas B.; Ginn, Timothy R.; Cook, Peter. G.

    2014-03-01

    Residence time distributions (RTDs) have been used extensively for quantifying flow and transport in subsurface hydrology. In geochemical approaches, environmental tracer concentrations are used in conjunction with simple lumped parameter models (LPMs). Conversely, numerical simulation techniques require extensive parameterization, and the estimated RTDs are limited by the associated uncertainties. In this study, we apply a nonparametric deconvolution approach to estimate RTDs using environmental tracer concentrations. The model is based only on the assumption that flow is steady enough that the observed concentrations are well approximated by linear superposition of the input concentrations with the RTD; that is, the convolution integral holds. Even with large amounts of environmental tracer concentration data, the entire shape of an RTD remains highly nonunique. However, accurate estimates of mean ages and, in some cases, prediction of young portions of the RTD may be possible. The most useful type of data was found to be a time series of tritium, owing to the sharp variations in atmospheric concentrations and a short half-life. Conversely, the use of CFC compounds with smoothly varying atmospheric concentrations was more prone to nonuniqueness. This work highlights the benefits and limitations of using environmental tracer data to estimate whole RTDs with either LPMs or through numerical simulation. However, the ability of the nonparametric approach developed here to correct for mixing biases in mean ages appears promising.
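
    As a sketch of the nonparametric idea (not the authors' exact algorithm), the convolution relation c_out(t) = ∫ g(τ) c_in(t − τ) dτ can be discretized into a matrix equation and solved for a nonnegative RTD g with a smoothing penalty. The synthetic tracer input and regularization weight below are illustrative assumptions.

    ```python
    # Discretized deconvolution with nonnegativity and a roughness penalty;
    # the tracer input and smoothing weight are illustrative, not the
    # paper's data or tuning.
    import numpy as np
    from scipy.optimize import nnls

    def estimate_rtd(c_in, c_out, dtau=1.0, smooth=1e-2):
        n = len(c_in)
        # Convolution matrix: A[i, j] = c_in[i - j] * dtau for j <= i.
        A = np.array([[c_in[i - j] * dtau if j <= i else 0.0
                       for j in range(n)] for i in range(n)])
        # Second-difference operator penalizes rough (non-unique) solutions.
        D = np.diff(np.eye(n), n=2, axis=0)
        A_aug = np.vstack([A, smooth * D])
        b_aug = np.concatenate([c_out, np.zeros(D.shape[0])])
        g, _ = nnls(A_aug, b_aug)
        return g

    # Synthetic check: recover an exponential RTD from its own convolution.
    t = np.arange(60.0)
    g_true = np.exp(-t / 10.0); g_true /= g_true.sum()
    c_in = np.exp(-((t - 15) / 5.0) ** 2)        # pulse-like tracer input
    c_out = np.convolve(c_in, g_true)[:60]
    g_est = estimate_rtd(c_in, c_out)
    print("mean age true/est:",
          (t * g_true).sum(), (t * g_est / g_est.sum()).sum())
    ```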

  16. A decoy chain deployment method based on SDN and NFV against penetration attack

    PubMed Central

    Zhao, Qi; Zhang, Chuanhao

    2017-01-01

    Penetration attacks are one of the most serious network security threats. However, existing network defense technologies cannot entirely block the penetration behavior of intruders; the network therefore needs additional defenses. In this paper, a decoy chain deployment (DCD) method based on SDN+NFV is proposed to address this problem. The method takes into account the security status of the network and deploys decoy chains subject to resource constraints. DCD changes the attack surface of the network and makes it difficult for intruders to discern its current state. Simulation experiments and analyses show that DCD can effectively resist penetration attacks by increasing the time cost and complexity of a penetration attack. PMID:29216257

  17. A decoy chain deployment method based on SDN and NFV against penetration attack.

    PubMed

    Zhao, Qi; Zhang, Chuanhao; Zhao, Zheng

    2017-01-01

    Penetration attacks are one of the most serious network security threats. However, existing network defense technologies cannot entirely block the penetration behavior of intruders; the network therefore needs additional defenses. In this paper, a decoy chain deployment (DCD) method based on SDN+NFV is proposed to address this problem. The method takes into account the security status of the network and deploys decoy chains subject to resource constraints. DCD changes the attack surface of the network and makes it difficult for intruders to discern its current state. Simulation experiments and analyses show that DCD can effectively resist penetration attacks by increasing the time cost and complexity of a penetration attack.

  18. Gender in Science and Engineering Faculties: Demographic Inertia Revisited.

    PubMed

    Thomas, Nicole R; Poole, Daniel J; Herbers, Joan M

    2015-01-01

    The under-representation of women on faculties of science and engineering is ascribed in part to demographic inertia, the lag between retirement of current faculty and future hires. The assumption of demographic inertia implies that, given enough time, gender parity will be achieved. We examine that assumption via a semi-Markov model of the future faculty, with simulations that predict the demographic state at convergence. Our model shows that existing practices that produce gender gaps in recruitment, retention, and career progression preclude eventual gender parity. Further, we examine the sensitivity of the convergence state to current gender gaps to show that all sources of disparity across the entire faculty career must be erased to produce parity: we cannot blame demographic inertia.
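
    A much-simplified, discrete-time caricature of the paper's argument (the authors use a richer semi-Markov model; all rates below are invented): if hiring and attrition rates differ by gender, the faculty composition converges to a steady state determined by those rates, not to parity, no matter how long one waits.

    ```python
    # Toy projection: replacement hiring keeps the faculty size fixed, but
    # gendered attrition drives the long-run share of women below the hire
    # fraction. Rates are illustrative assumptions, not the paper's values.
    def project(frac_women=0.2, hire_frac_women=0.4,
                attrition_w=0.06, attrition_m=0.05, years=200):
        n_w, n_m = 100 * frac_women, 100 * (1 - frac_women)
        for _ in range(years):
            leavers = attrition_w * n_w + attrition_m * n_m
            n_w += hire_frac_women * leavers - attrition_w * n_w
            n_m += (1 - hire_frac_women) * leavers - attrition_m * n_m
        return n_w / (n_w + n_m)

    print(f"long-run share of women: {project():.3f}")  # below the 0.4 hire share
    ```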

  19. Deep-Earth reactor: nuclear fission, helium, and the geomagnetic field.

    PubMed

    Hollenbach, D F; Herndon, J M

    2001-09-25

    Geomagnetic field reversals and changes in intensity are understandable from an energy standpoint as natural consequences of intermittent and/or variable nuclear fission chain reactions deep within the Earth. Moreover, deep-Earth production of helium, having (3)He/(4)He ratios within the range observed from deep-mantle sources, is demonstrated to be a consequence of nuclear fission. Numerical simulations of a planetary-scale geo-reactor were made by using the SCALE sequence of codes. The results clearly demonstrate that such a geo-reactor (i) would function as a fast-neutron fuel breeder reactor; (ii) could, under appropriate conditions, operate over the entire period of geologic time; and (iii) would function in such a manner as to yield variable and/or intermittent output power.

  20. Real time wide area radiation surveillance system (REWARD) based on 3D silicon and (Cd,Zn)Te for neutron and gamma-ray detection

    NASA Astrophysics Data System (ADS)

    Disch, C.

    2014-09-01

    Mobile surveillance systems are used to find lost radioactive sources and possible nuclear threats in urban areas. The REWARD collaboration [1] aims to develop such a complete radiation monitoring system that can be installed in mobile or stationary setups across a wide area. The scenarios include nuclear terrorism threats, lost radioactive sources, radioactive contamination and nuclear accidents. This paper will show the performance capabilities of the REWARD system in different scenarios. The results include both Monte Carlo simulations and neutron and gamma-ray detection performance in terms of efficiency and nuclide identification. The outcomes of several radiation mapping surveys with the entire REWARD system will also be presented.

  1. Numeric simulation of bone remodelling patterns after implantation of a cementless straight stem.

    PubMed

    Lerch, Matthias; Windhagen, Henning; Stukenborg-Colsman, Christina M; Kurtz, Agnes; Behrens, Bernd A; Almohallami, Amer; Bouguecha, Anas

    2013-12-01

    For further development of better bone-preserving implants in total hip arthroplasty (THA), we need to look back and analyse established and clinically approved implants to find out what made them successful. Finite element analysis can help do this by simulating periprosthetic bone remodelling under different conditions. Our aim was thus to establish a numerical model of the cementless straight stem for which good long-term results have been obtained. We performed a numeric simulation of a cementless straight stem, which has been successfully used in its unaltered form since 1986/1987. We have 20 years of experience with this THA system and implanted it 555 times in 2012. We performed qualitative and quantitative validation using bone density data derived from a prospective dual-energy X-ray absorptiometry (DEXA) investigation. Bone mass loss converged to 9.25% for the entire femur. No change in bone density was calculated distal to the tip of the prosthesis. Bone mass decreased by 46.2% around the proximal half of the implant and by 7.6% in the diaphysis. The numeric model was in excellent agreement with DEXA data except for the calcar region, where deviation was 67.7%. The higher deviation in the calcar region is possibly a sign of the complex interactions between the titanium coating on the stem and the surrounding bone. We developed a validated numeric model to simulate bone remodelling for different stem-design modifications. We recommend that new THA implants undergo critical numeric simulation before clinical application.

  2. The systems biology simulation core algorithm

    PubMed Central

    2013-01-01

    Background With the increasing availability of high dimensional time course data for metabolites, genes, and fluxes, the mathematical description of dynamical systems has become an essential aspect of research in systems biology. Models are often encoded in formats such as SBML, whose structure is very complex and difficult to evaluate due to many special cases. Results This article describes an efficient algorithm to solve SBML models that are interpreted in terms of ordinary differential equations. We begin our consideration with a formal representation of the mathematical form of the models and explain all parts of the algorithm in detail, including several preprocessing steps. We provide a flexible reference implementation as part of the Systems Biology Simulation Core Library, a community-driven project providing a large collection of numerical solvers and a sophisticated interface hierarchy for the definition of custom differential equation systems. To demonstrate the capabilities of the new algorithm, it has been tested with the entire SBML Test Suite and all models of BioModels Database. Conclusions The formal description of the mathematics behind the SBML format facilitates the implementation of the algorithm within specifically tailored programs. The reference implementation can be used as a simulation backend for Java™-based programs. Source code, binaries, and documentation can be freely obtained under the terms of the LGPL version 3 from http://simulation-core.sourceforge.net. Feature requests, bug reports, contributions, or any further discussion can be directed to the mailing list simulation-core-development@lists.sourceforge.net. PMID:23826941

  3. SU-F-T-370: A Fast Monte Carlo Dose Engine for Gamma Knife

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, T; Zhou, L; Li, Y

    2016-06-15

    Purpose: To develop a fast Monte Carlo dose calculation algorithm for Gamma Knife. Methods: To make the simulation more efficient, we implemented the track repeating technique on GPU. We first use EGSnrc to pre-calculate the photon and secondary electron tracks in water from the two mono-energy photons of 60Co. The total photon mean free paths for different materials and energies are obtained from NIST. During simulation, each entire photon track was first loaded to shared memory for each block; the incident original photon was then split into Nthread sub-photons, each thread transporting one sub-photon, and the Russian roulette technique was applied for scattered and bremsstrahlung photons. The resultant electrons from photon interactions are simulated by repeating the recorded electron tracks. The electron step length is stretched/shrunk proportionally based on the local density and stopping power ratios of the local material. Energy deposition in a voxel is proportional to the fraction of the equivalent step length in that voxel. To evaluate its accuracy, dose deposition in a 300 mm × 300 mm × 300 mm water phantom is calculated and compared to EGSnrc results. Results: Both PDD and OAR showed great agreement (within 0.5%) between our dose engine and the EGSnrc result. Each simulation takes less than 1 min, a speed-up of up to ∼40 times compared to EGSnrc simulations. Conclusion: We have successfully developed a fast Monte Carlo dose engine for Gamma Knife.
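
    A toy one-dimensional illustration of the track-repeating idea (the real engine is 3D, GPU-resident, and splits each step's energy across voxels): a pre-computed water track is replayed through a voxelized phantom, with each step length rescaled by an assumed water-to-medium stopping-power ratio.

    ```python
    # 1D track repeating; geometry, track data, and the lung-like ratio are
    # illustrative assumptions, not the paper's values.
    import numpy as np

    # Pre-computed electron track in water: (step length in cm, energy in MeV).
    water_track = [(0.2, 0.05), (0.3, 0.08), (0.25, 0.06), (0.15, 0.04)]
    voxel_cm, n_vox = 0.1, 100
    # Assumed step-stretch factor per voxel (lung-like medium beyond 5 cm).
    stretch = np.where(np.arange(n_vox) < 50, 1.0, 1.0 / 0.3)

    def repeat_track(track, start_cm):
        """Replay the water track, stretching each step by the local ratio
        and (for brevity) depositing each step's energy at its endpoint."""
        dose = np.zeros(n_vox)
        pos = start_cm
        for step_cm, edep in track:
            v = min(int(pos / voxel_cm), n_vox - 1)
            pos += step_cm * stretch[v]
            dose[min(int(pos / voxel_cm), n_vox - 1)] += edep
        return dose

    print(repeat_track(water_track, start_cm=4.9).nonzero()[0])
    ```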

  4. Multi-level methods and approximating distribution functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, D., E-mail: daniel.wilson@dtc.ox.ac.uk; Baker, R. E.

    2016-07-15

    Biochemical reaction networks are often modelled using discrete-state, continuous-time Markov chains. System statistics of these Markov chains usually cannot be calculated analytically, and therefore estimates must be generated via simulation techniques. There is a well documented class of simulation techniques known as exact stochastic simulation algorithms, an example of which is Gillespie's direct method. These algorithms often come with high computational costs, therefore approximate stochastic simulation algorithms such as the tau-leap method are used. However, in order to minimise the bias in the estimates generated using them, a relatively small value of tau is needed, rendering the computational costs comparable to Gillespie's direct method. The multi-level Monte Carlo method (Anderson and Higham, Multiscale Model. Simul. 10:146-179, 2012) provides a reduction in computational costs whilst minimising or even eliminating the bias in the estimates of system statistics. This is achieved by first crudely approximating required statistics with many sample paths of low accuracy. Then correction terms are added until a required level of accuracy is reached. Recent literature has primarily focussed on implementing the multi-level method efficiently to estimate a single system statistic. However, it is clearly also of interest to be able to approximate entire probability distributions of species counts. We present two novel methods that combine known techniques for distribution reconstruction with the multi-level method. We demonstrate the potential of our methods using a number of examples.
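
    To illustrate the telescoping-sum structure only (not the coupled tau-leap construction treated in the paper), the sketch below applies a multi-level estimator to a toy SDE, with coarse and fine Euler paths sharing the same Brownian increments so the correction terms have small variance.

    ```python
    # Multi-level Monte Carlo on geometric Brownian motion as a stand-in
    # for the coupled tau-leap chains discussed above.
    import numpy as np

    rng = np.random.default_rng(0)

    def euler_pair(nsteps, T=1.0, mu=0.05, sigma=0.2, x0=1.0):
        """Fine path with nsteps Euler steps and coarse path with nsteps/2,
        driven by the same Brownian increments."""
        dt = T / nsteps
        dw = rng.normal(0.0, np.sqrt(dt), nsteps)
        xf = xc = x0
        for i in range(nsteps):
            xf += mu * xf * dt + sigma * xf * dw[i]
            if i % 2 == 1:              # one coarse step per two fine steps
                xc += mu * xc * 2 * dt + sigma * xc * (dw[i - 1] + dw[i])
        return xf, xc

    levels, n0 = 5, 2000
    # Level 0: crude estimate from the coarsest (single-step) discretization.
    estimate = np.mean([euler_pair(2)[1] for _ in range(n0)])
    # Levels 1..L: telescoping corrections E[P_l - P_{l-1}], fewer samples each.
    for l in range(1, levels):
        pairs = [euler_pair(2 ** l) for _ in range(n0 // 2 ** l)]
        estimate += np.mean([f - c for f, c in pairs])
    print("MLMC estimate of E[X_T]:", estimate)  # exact: exp(mu*T) ~ 1.0513
    ```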

  5. The Clinical Health Economics System Simulation (CHESS): a teaching tool for systems- and practice-based learning.

    PubMed

    Voss, John D; Nadkarni, Mohan M; Schectman, Joel M

    2005-02-01

    Academic medical centers face barriers to training physicians in the systems- and practice-based learning competencies needed to function in the changing health care environment. To address these problems, the authors at the University of Virginia School of Medicine developed the Clinical Health Economics System Simulation (CHESS), a computerized, team-based, quasi-competitive simulator to teach the principles and practical application of health economics. CHESS simulates treatment costs to patients and society as well as physician reimbursement. It is scenario based, with residents grouped into three teams, each team playing CHESS using a different (fee-for-service or capitated) reimbursement model. Teams view scenarios and select from two or three treatment options that are medically justifiable yet have different potential cost implications. CHESS displays physician reimbursement and patient and societal costs for each scenario, as well as costs and income summarized across all scenarios and extrapolated to a physician's entire patient panel. The learners are asked to explain these findings and may change treatment options and other variables such as panel size and case mix to conduct sensitivity analyses in real time. In evaluations completed in 2003, 68 (94%) CHESS resident and faculty participants at 19 U.S. residency programs preferred CHESS to a traditional lecture-and-discussion format for learning about medical decision making, physician reimbursement, patient costs, and societal costs. Ninety-eight percent reported increased knowledge of health economics after viewing the simulation. CHESS demonstrates the potential of computer simulation to teach health economics and other key elements of practice- and systems-based competencies.

  6. Efficient sampling over rough energy landscapes with high barriers: A combination of metadynamics with integrated tempering sampling.

    PubMed

    Yang, Y Isaac; Zhang, Jun; Che, Xing; Yang, Lijiang; Gao, Yi Qin

    2016-03-07

    In order to efficiently overcome high free energy barriers embedded in a complex energy landscape and calculate overall thermodynamics properties using molecular dynamics simulations, we developed and implemented a sampling strategy by combining the metadynamics with (selective) integrated tempering sampling (ITS/SITS) method. The dominant local minima on the potential energy surface (PES) are partially exalted by accumulating history-dependent potentials as in metadynamics, and the sampling over the entire PES is further enhanced by ITS/SITS. With this hybrid method, the simulated system can be rapidly driven across the dominant barrier along selected collective coordinates. Then, ITS/SITS ensures a fast convergence of the sampling over the entire PES and an efficient calculation of the overall thermodynamic properties of the simulation system. To test the accuracy and efficiency of this method, we first benchmarked this method in the calculation of ϕ - ψ distribution of alanine dipeptide in explicit solvent. We further applied it to examine the design of template molecules for aromatic meta-C-H activation in solutions and investigate solution conformations of the nonapeptide Bradykinin involving slow cis-trans isomerizations of three proline residues.
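
    As a sketch of the metadynamics ingredient alone (the ITS/SITS component is not reproduced here), the snippet below runs overdamped Langevin dynamics on a one-dimensional double well and deposits Gaussian bias along the trajectory so the walker is driven across the barrier; all parameters are illustrative.

    ```python
    # 1D metadynamics toy: history-dependent Gaussian hills exalt the
    # visited minima, as described above. Parameters are illustrative.
    import numpy as np

    rng = np.random.default_rng(1)
    hills = []                               # deposited bias: (center, height, width)

    def bias_force(x):
        """Force from the sum of deposited Gaussian hills, -dV_bias/dx."""
        f = 0.0
        for c, h, w in hills:
            f += h * (x - c) / w**2 * np.exp(-(x - c) ** 2 / (2 * w**2))
        return f

    x, dt, kT = -1.0, 1e-3, 0.5
    for step in range(20000):
        f = -4 * x * (x**2 - 1) + bias_force(x)  # V(x) = (x^2 - 1)^2 plus bias
        x += f * dt + np.sqrt(2 * kT * dt) * rng.normal()
        if step % 500 == 499:                    # deposit a hill every 500 steps
            hills.append((x, 0.5, 0.2))
    print(f"hills deposited: {len(hills)}, final position: {x:.2f}")
    ```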

  7. Efficient sampling over rough energy landscapes with high barriers: A combination of metadynamics with integrated tempering sampling

    NASA Astrophysics Data System (ADS)

    Yang, Y. Isaac; Zhang, Jun; Che, Xing; Yang, Lijiang; Gao, Yi Qin

    2016-03-01

    In order to efficiently overcome high free energy barriers embedded in a complex energy landscape and calculate overall thermodynamics properties using molecular dynamics simulations, we developed and implemented a sampling strategy by combining the metadynamics with (selective) integrated tempering sampling (ITS/SITS) method. The dominant local minima on the potential energy surface (PES) are partially exalted by accumulating history-dependent potentials as in metadynamics, and the sampling over the entire PES is further enhanced by ITS/SITS. With this hybrid method, the simulated system can be rapidly driven across the dominant barrier along selected collective coordinates. Then, ITS/SITS ensures a fast convergence of the sampling over the entire PES and an efficient calculation of the overall thermodynamic properties of the simulation system. To test the accuracy and efficiency of this method, we first benchmarked this method in the calculation of ϕ - ψ distribution of alanine dipeptide in explicit solvent. We further applied it to examine the design of template molecules for aromatic meta-C—H activation in solutions and investigate solution conformations of the nonapeptide Bradykinin involving slow cis-trans isomerizations of three proline residues.

  8. Efficient sampling over rough energy landscapes with high barriers: A combination of metadynamics with integrated tempering sampling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Y. Isaac; Zhang, Jun; Che, Xing

    2016-03-07

    In order to efficiently overcome high free energy barriers embedded in a complex energy landscape and calculate overall thermodynamics properties using molecular dynamics simulations, we developed and implemented a sampling strategy by combining the metadynamics with (selective) integrated tempering sampling (ITS/SITS) method. The dominant local minima on the potential energy surface (PES) are partially exalted by accumulating history-dependent potentials as in metadynamics, and the sampling over the entire PES is further enhanced by ITS/SITS. With this hybrid method, the simulated system can be rapidly driven across the dominant barrier along selected collective coordinates. Then, ITS/SITS ensures a fast convergence of the sampling over the entire PES and an efficient calculation of the overall thermodynamic properties of the simulation system. To test the accuracy and efficiency of this method, we first benchmarked this method in the calculation of ϕ − ψ distribution of alanine dipeptide in explicit solvent. We further applied it to examine the design of template molecules for aromatic meta-C—H activation in solutions and investigate solution conformations of the nonapeptide Bradykinin involving slow cis-trans isomerizations of three proline residues.

  9. Water Energy Simulation Toolset

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Thuy; Jeffers, Robert

    The Water-Energy Simulation Toolset (WEST) is an interactive simulation model that helps visualize the impacts of different stakeholders on the water quantity and quality of a watershed. The case study is applied to the Snake River Basin, given the fictional name Cutthroat River Basin. There are four groups of stakeholders of interest: hydropower, agriculture, flood control, and environmental protection. Currently, the quality component represents the nitrogen-nitrate contaminant. Users can easily interact with the model by changing certain inputs (climate change, fertilizer inputs, etc.) to observe the change over the entire system. Users can also change certain parameters to test their management policy.

  10. Effects of Simulated Surface Effect Ship Motions on Crew Habitability. Phase II. Volume 1. Summary Report and Comments

    DTIC Science & Technology

    1981-04-01

    one 24-hour exposure to that condition may be regarded as the most complete and unbiased for determining some effects of a type of simulated SES... eliminated entirely. The ability to predict in advance the resultant effects of motion exposure thus seems to depend on the existence of a given...

  11. An optimization model to agroindustrial sector in antioquia (Colombia, South America)

    NASA Astrophysics Data System (ADS)

    Fernandez, J.

    2015-06-01

    This paper develops a general optimization model for the flower industry, defined using discrete simulation and nonlinear optimization; the mathematical models are solved using ProModel simulation tools and GAMS optimization. The paper defines the operations that constitute the production and marketing of the sector, presents statistically validated data taken directly from each operation through field work, and formulates the discrete simulation model of the operations and the linear optimization model of the entire industry chain. The model is solved with the tools described above, and the results are validated in a case study.

  12. Biogeography-based combinatorial strategy for efficient autonomous underwater vehicle motion planning and task-time management

    NASA Astrophysics Data System (ADS)

    Zadeh, S. M.; Powers, D. M. W.; Sammut, K.; Yazdani, A. M.

    2016-12-01

    Autonomous Underwater Vehicles (AUVs) are capable of spending long periods of time carrying out various underwater missions and marine tasks. In this paper, a novel conflict-free motion planning framework is introduced to enhance the underwater vehicle's mission performance by completing the maximum number of highest-priority tasks in a limited time across a large-scale waypoint-cluttered operating field, while ensuring safe deployment during the mission. The proposed combinatorial route-path planner model takes advantage of the Biogeography-Based Optimization (BBO) algorithm to satisfy the objectives of both the higher- and lower-level motion planners and guarantees maximization of mission productivity for a single-vehicle operation. The performance of the model is investigated under different scenarios, including particular cost constraints in time-varying operating fields. To show the reliability of the proposed model, the performance of each motion planner is assessed separately, and statistical analysis is then undertaken to evaluate the total performance of the entire model. The simulation results indicate the stability of the contributed model and its feasibility for real experiments.
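
    For readers unfamiliar with BBO, the sketch below shows the core migration/mutation mechanism on a toy continuous cost function; it is a generic illustration, not the paper's combinatorial route-path formulation.

    ```python
    # Minimal biogeography-based optimization: good habitats emigrate
    # features to poor ones; random mutation preserves diversity.
    import numpy as np

    rng = np.random.default_rng(2)

    def bbo_minimize(cost, dim=5, pop=20, iters=200, pmut=0.05):
        x = rng.uniform(-5, 5, (pop, dim))
        for _ in range(iters):
            x = x[np.argsort([cost(h) for h in x])]   # best habitat first
            mu = np.linspace(1, 0, pop)               # emigration rates
            lam = 1 - mu                              # immigration rates
            new = x.copy()
            for i in range(pop):
                for d in range(dim):
                    if rng.random() < lam[i]:         # immigrate a feature
                        j = rng.choice(pop, p=mu / mu.sum())
                        new[i, d] = x[j, d]
                    if rng.random() < pmut:           # random mutation
                        new[i, d] = rng.uniform(-5, 5)
            x = new
        return min(x, key=cost)

    sphere = lambda h: float(np.sum(h ** 2))
    best = bbo_minimize(sphere)
    print("best cost:", sphere(best))
    ```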

  13. Stratospheric temperatures and tracer transport in a nudged 4-year middle atmosphere GCM simulation

    NASA Astrophysics Data System (ADS)

    van Aalst, M. K.; Lelieveld, J.; Steil, B.; Brühl, C.; Jöckel, P.; Giorgetta, M. A.; Roelofs, G.-J.

    2005-02-01

    We have performed a 4-year simulation with the Middle Atmosphere General Circulation Model MAECHAM5/MESSy, while slightly nudging the model's meteorology in the free troposphere (below 113 hPa) towards ECMWF analyses. We show that the nudging technique, which leaves the middle atmosphere almost entirely free, enables comparisons with synoptic observations. The model successfully reproduces many specific features of the interannual variability, including details of the Antarctic vortex structure. In the Arctic, the model captures general features of the interannual variability, but falls short in reproducing the timing of sudden stratospheric warmings. A detailed comparison of the nudged model simulations with ECMWF data shows that the model simulates realistic stratospheric temperature distributions and variabilities, including the temperature minima in the Antarctic vortex. Some small (a few K) model biases were also identified, including a summer cold bias at both poles, and a general cold bias in the lower stratosphere, most pronounced in midlatitudes. A comparison of tracer distributions with HALOE observations shows that the model successfully reproduces specific aspects of the instantaneous circulation. The main tracer transport deficiencies occur in the polar lowermost stratosphere. These are related to the tropopause altitude as well as the tracer advection scheme and model resolution. The additional nudging of equatorial zonal winds, forcing the quasi-biennial oscillation, significantly improves stratospheric temperatures and tracer distributions.

  14. Converging free energies of binding in cucurbit[7]uril and octa-acid host-guest systems from SAMPL4 using expanded ensemble simulations

    NASA Astrophysics Data System (ADS)

    Monroe, Jacob I.; Shirts, Michael R.

    2014-04-01

    Molecular containers such as cucurbit[7]uril (CB7) and the octa-acid (OA) host are ideal simplified model test systems for optimizing and analyzing methods for computing free energies of binding intended for use with biologically relevant protein-ligand complexes. To this end, we have performed initially blind free energy calculations to determine the free energies of binding for ligands of both the CB7 and OA hosts. A subset of the selected guest molecules were those included in the SAMPL4 prediction challenge. Using expanded ensemble simulations in the dimension of coupling host-guest intermolecular interactions, we are able to show that our estimates in most cases can be demonstrated to fully converge and that the errors in our estimates are due almost entirely to the assigned force field parameters and the choice of environmental conditions used to model experiment. We confirm the convergence through the use of alternative simulation methodologies and thermodynamic pathways, analyzing sampled conformations, and directly observing changes of the free energy with respect to simulation time. Our results demonstrate the benefits of enhanced sampling of multiple local free energy minima made possible by the use of expanded ensemble molecular dynamics and may indicate the presence of significant problems with current transferable force fields for organic molecules when used for calculating binding affinities, especially in non-protein chemistries.

  15. Converging free energies of binding in cucurbit[7]uril and octa-acid host-guest systems from SAMPL4 using expanded ensemble simulations.

    PubMed

    Monroe, Jacob I; Shirts, Michael R

    2014-04-01

    Molecular containers such as cucurbit[7]uril (CB7) and the octa-acid (OA) host are ideal simplified model test systems for optimizing and analyzing methods for computing free energies of binding intended for use with biologically relevant protein-ligand complexes. To this end, we have performed initially blind free energy calculations to determine the free energies of binding for ligands of both the CB7 and OA hosts. A subset of the selected guest molecules were those included in the SAMPL4 prediction challenge. Using expanded ensemble simulations in the dimension of coupling host-guest intermolecular interactions, we are able to show that our estimates in most cases can be demonstrated to fully converge and that the errors in our estimates are due almost entirely to the assigned force field parameters and the choice of environmental conditions used to model experiment. We confirm the convergence through the use of alternative simulation methodologies and thermodynamic pathways, analyzing sampled conformations, and directly observing changes of the free energy with respect to simulation time. Our results demonstrate the benefits of enhanced sampling of multiple local free energy minima made possible by the use of expanded ensemble molecular dynamics and may indicate the presence of significant problems with current transferable force fields for organic molecules when used for calculating binding affinities, especially in non-protein chemistries.

  16. Simulation of a data archival and distribution system at GSFC

    NASA Technical Reports Server (NTRS)

    Bedet, Jean-Jacques; Bodden, Lee; Dwyer, AL; Hariharan, P. C.; Berbert, John; Kobler, Ben; Pease, Phil

    1993-01-01

    A version-0 of a Data Archive and Distribution System (DADS) is being developed at GSFC to support existing and pre-EOS Earth science datasets and test Earth Observing System Data and Information System (EOSDIS) concepts. The performance of DADS is predicted using a discrete event simulation model. The goals of the simulation were to estimate the amount of disk space needed and the time required to fulfill the DADS requirements for ingestion (14 GB/day) and distribution (48 GB/day). The model has demonstrated that 4 mm and 8 mm stackers can play a critical role in improving the performance of the DADS, since it takes, on average, 3 minutes to manually mount/dismount tapes compared to less than a minute with stackers. With two 4 mm stackers and two 8 mm stackers, and a single operator per shift, the DADS requirements can be met within 16 hours using a total of 9 GB of disk space. When the DADS has no stacker, and the DADS depends entirely on operators to handle the distribution tapes, the simulation has shown that the DADS requirements can still be met within 16 hours, but a minimum of 4 operators per shift were required. The compression/decompression of data sets is very CPU intensive, and relatively slow when performed in software, thereby contributing to an increase in the amount of disk space needed.
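
    A back-of-the-envelope version of the mount-time trade-off quantified above, using the 3 minutes per manual mount versus under a minute with stackers from the abstract; the tape capacity and transfer rate below are invented assumptions, and the actual study used a full discrete event model of both ingestion and distribution.

    ```python
    # Rough daily handling estimate for the 48 GB/day distribution load;
    # tape_gb and transfer_mb_s are illustrative assumptions.
    def handling_hours(daily_gb=48, tape_gb=1.0, mount_min=3.0,
                       transfer_mb_s=1.0, operators=1):
        """Hours per day spent on mounts vs. raw transfer for one workload."""
        mounts = int(-(-daily_gb // tape_gb))          # ceil: tapes per day
        mount_h = mounts * mount_min / 60.0 / operators
        transfer_h = daily_gb * 1024.0 / transfer_mb_s / 3600.0
        return mount_h, transfer_h

    for label, mnt in (("manual", 3.0), ("stacker", 0.75)):
        mount_h, transfer_h = handling_hours(mount_min=mnt)
        print(f"{label:8s} mounts: {mount_h:4.1f} h/day  "
              f"transfer: {transfer_h:4.1f} h/day")
    ```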

  17. Laboratory-scale experiments and numerical modeling of cosolvent flushing of multi-component NAPLs in saturated porous media

    NASA Astrophysics Data System (ADS)

    Agaoglu, Berken; Scheytt, Traugott; Copty, Nadim K.

    2012-10-01

    This study examines the mechanistic processes governing multiphase flow in a water-cosolvent-NAPL system in saturated porous media. Laboratory batch and column flushing experiments were conducted to determine the equilibrium properties of pure NAPL and synthetically prepared NAPL mixtures, as well as NAPL recovery mechanisms for different water-ethanol contents. The effect of contact time was investigated by considering different steady and intermittent flow velocities. A modified version of the multiphase flow simulator UTCHEM was used to compare the multiphase model simulations with the column experiment results. The effect of employing different grid geometries (1D, 2D, 3D), heterogeneity, and different initial NAPL saturation configurations was also examined in the model. It is shown that the change in velocity affects the mass transfer rate between phases as well as the ultimate NAPL recovery percentage. The experiments with low flow rate flushing of pure NAPL and the 3D UTCHEM simulations gave similar effluent concentrations and NAPL cumulative recoveries. Model simulations over-estimated NAPL recovery for high specific discharges and rate-limited mass transfer, suggesting that a constant mass transfer coefficient for the entire flushing experiment may not be valid. When multi-component NAPLs are present, the dissolution rates of the individual organic compounds (namely, toluene and benzene) into the ethanol-water flushing solution are found not to correlate with their equilibrium solubility values.

  18. The Influence of 150-Cavity Binders on the Dynamics of Influenza A Neuraminidases as Revealed by Molecular Dynamics Simulations and Combined Clustering

    PubMed Central

    Greenway, Kyle T.; LeGresley, Eric B.; Pinto, B. Mario

    2013-01-01

    Neuraminidase inhibitors are the main pharmaceutical agents employed for treatments of influenza infections. The neuraminidase structures typically exhibit a 150-cavity, an exposed pocket that is adjacent to the catalytic site. This site offers promising additional contact points for improving potency of existing pharmaceuticals, as well as generating entirely new candidate inhibitors. Several inhibitors based on known compounds and designed to interact with 150-cavity residues have been reported. However, the dynamics of any of these inhibitors remains unstudied and their viability remains unknown. This work reports the outcome of long-term, all-atom molecular dynamics simulations of four such inhibitors, along with three standard inhibitors for comparison. Each is studied in complex with four representative neuraminidase structures, which are also simulated in the absence of ligands for comparison, resulting in a total simulation time of 9.6µs. Our results demonstrate that standard inhibitors characteristically reduce the mobility of these dynamic proteins, while the 150-binders do not, instead giving rise to many unique conformations. We further describe an improved RMSD-based clustering technique that isolates these conformations – the structures of which are provided to facilitate future molecular docking studies – and reveals their interdependence. We find that this approach confers many advantages over previously described techniques, and the implications for rational drug design are discussed. PMID:23544106

  19. The influence of 150-cavity binders on the dynamics of influenza A neuraminidases as revealed by molecular dynamics simulations and combined clustering.

    PubMed

    Greenway, Kyle T; LeGresley, Eric B; Pinto, B Mario

    2013-01-01

    Neuraminidase inhibitors are the main pharmaceutical agents employed for treatments of influenza infections. The neuraminidase structures typically exhibit a 150-cavity, an exposed pocket that is adjacent to the catalytic site. This site offers promising additional contact points for improving potency of existing pharmaceuticals, as well as generating entirely new candidate inhibitors. Several inhibitors based on known compounds and designed to interact with 150-cavity residues have been reported. However, the dynamics of any of these inhibitors remains unstudied and their viability remains unknown. This work reports the outcome of long-term, all-atom molecular dynamics simulations of four such inhibitors, along with three standard inhibitors for comparison. Each is studied in complex with four representative neuraminidase structures, which are also simulated in the absence of ligands for comparison, resulting in a total simulation time of 9.6 µs. Our results demonstrate that standard inhibitors characteristically reduce the mobility of these dynamic proteins, while the 150-binders do not, instead giving rise to many unique conformations. We further describe an improved RMSD-based clustering technique that isolates these conformations--the structures of which are provided to facilitate future molecular docking studies--and reveals their interdependence. We find that this approach confers many advantages over previously described techniques, and the implications for rational drug design are discussed.

  20. Emergence of resonant mode-locking via delayed feedback in quantum dot semiconductor lasers.

    PubMed

    Tykalewicz, B; Goulding, D; Hegarty, S P; Huyet, G; Erneux, T; Kelleher, B; Viktorov, E A

    2016-02-22

    With conventional semiconductor lasers undergoing external optical feedback, a chaotic output is typically observed even for moderate levels of the feedback strength. In this paper we examine single mode quantum dot lasers under strong optical feedback conditions and show that an entirely new dynamical regime is found consisting of spontaneous mode-locking via a resonance between the relaxation oscillation frequency and the external cavity repetition rate. Experimental observations are supported by detailed numerical simulations of rate equations appropriate for this laser type. The phenomenon constitutes an entirely new mode-locking mechanism in semiconductor lasers.
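
    The abstract does not reproduce the rate equations, so as a generic stand-in the sketch below integrates dimensionless Lang-Kobayashi-type equations, a standard delayed-feedback laser model, with fixed-step Euler and a history buffer for the delayed field; parameter values are illustrative and this is not the quantum-dot model used in the paper.

    ```python
    # Generic delayed-feedback laser model (Lang-Kobayashi-type), integrated
    # with a history buffer; all parameters are illustrative assumptions.
    import numpy as np

    alpha, kappa, tau_ext = 3.0, 0.05, 1000.0  # linewidth factor, feedback, delay
    T, P, dt = 1000.0, 0.5, 0.1                # carrier lifetime ratio, pump, step
    nsteps = 200000
    delay_steps = int(tau_ext / dt)

    E = np.full(nsteps + delay_steps, 0.1, dtype=complex)  # field + history
    n = 0.0                                                # excess carrier density
    for i in range(delay_steps, nsteps + delay_steps - 1):
        feedback = kappa * E[i - delay_steps]              # delayed re-injection
        dE = (1 + 1j * alpha) * n * E[i] + feedback
        dn = (P - n - (1 + 2 * n) * abs(E[i]) ** 2) / T
        E[i + 1] = E[i] + dE * dt
        n += dn * dt
    print("mean intensity:", np.mean(np.abs(E[-50000:]) ** 2))
    ```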

  1. TASS Model Application for Testing the TDWAP Model

    NASA Technical Reports Server (NTRS)

    Switzer, George F.

    2009-01-01

    One of the operational modes of the Terminal Area Simulation System (TASS) model simulates the three-dimensional interaction of wake vortices within turbulent domains in the presence of thermal stratification. The model allows the investigation of the effects of turbulence and stratification on vortex transport and decay. The model simulations for this work all assumed fully-periodic boundary conditions to remove the effects of any surface interaction. During the Base Period of this contract, NWRA completed generation of these datasets but only presented analysis for the neutral stratification runs of that set (Task 3.4.1). Phase 1 work began with the analysis of the remaining stratification datasets, and in the analysis we discovered discrepancies with the vortex time-to-link predictions. This finding necessitated investigating the source of the anomaly, and we found a problem with the background turbulence. Using the most up-to-date version of TASS, with some important defect fixes, we regenerated a larger turbulence domain and verified the vortex time to link with a few cases before proceeding to regenerate the entire 25-case set (Task 3.4.2). The effort of Phase 2 (Task 3.4.3) concentrated on analysis of several scenarios investigating the effects of closely spaced aircraft. The objective was to quantify the minimum aircraft separations necessary to avoid vortex interactions between neighboring aircraft. The results consist of spreadsheets of wake data and presentation figures prepared for NASA technical exchanges. For these formation cases, NASA carried out the actual TASS simulations and NWRA performed the analysis of the results by making animations, line plots, and other presentation figures. This report contains the description of the work performed during this final phase of the contract, the analysis procedures adopted, and sample plots of the results from the analysis performed.

  2. Coupled hydro-meteorological modelling on a HPC platform for high-resolution extreme weather impact study

    NASA Astrophysics Data System (ADS)

    Zhu, Dehua; Echendu, Shirley; Xuan, Yunqing; Webster, Mike; Cluckie, Ian

    2016-11-01

    Impact-focused studies of extreme weather require coupling accurate simulations of weather and climate systems with impact-measuring hydrological models, which themselves demand large computing resources. In this paper, we present a preliminary analysis of a high-performance computing (HPC)-based hydrological modelling approach, aimed at utilizing and maximizing HPC resources to support the study of extreme weather impacts due to climate change. Four case studies are presented through implementation on the HPC Wales platform of the UK mesoscale meteorological Unified Model (UM) with the high-resolution simulation suite UKV, alongside a Linux-based hydrological model, Hydrological Predictions for the Environment (HYPE). The results of this study suggest that the coupled hydro-meteorological model was still able to capture the major flood peaks, compared with the conventional gauge- or radar-driven forecast, but with the added value of a much extended forecast lead time. The high-resolution rainfall estimation produced by the UKV performs similarly to radar rainfall products in the first 2-3 days of the tested flood events, but the uncertainties increase markedly as the forecast horizon goes beyond 3 days. This study takes a step toward identifying how an online-mode approach can be used, in which both the numerical weather prediction and the hydrological model are executed simultaneously or on the same hardware infrastructure, so that more effective interaction and communication can be achieved and maintained between the models. We conclude, however, that running the entire system on a reasonably powerful HPC platform does not yet allow for real-time simulations, even without the most complex and demanding data simulation part.

  3. Comparison and evaluation of model structures for the simulation of pollution fluxes in a tile-drained river basin.

    PubMed

    Hoang, Linh; van Griensven, Ann; van der Keur, Peter; Refsgaard, Jens Christian; Troldborg, Lars; Nilsson, Bertel; Mynett, Arthur

    2014-01-01

    The European Union Water Framework Directive requires an integrated pollution prevention plan at the river basin level. Hydrological river basin models are therefore promising tools to support the quantification of pollution originating from different sources. A limited number of studies have reported on the use of these models to predict pollution fluxes in tile-drained basins. This study focused on evaluating different modeling tools and modeling concepts to quantify the flow and nitrate fluxes in the Odense River basin using DAISY-MIKE SHE (DMS) and the Soil and Water Assessment Tool (SWAT). The results show that SWAT accurately predicted flow at daily and monthly time steps, whereas simulation of nitrate fluxes was more accurate at a monthly time step. In comparison to the DMS model, which takes into account the uncertainty of soil hydraulic and slurry parameters, SWAT results for flow and nitrate fit well within the range of DMS-simulated values in high-flow periods but were slightly lower in low-flow periods. Despite the similarities of simulated flow and nitrate fluxes at the basin outlet, the two models predicted very different separations into flow components (overland flow, tile drainage, and groundwater flow) as well as nitrate fluxes from those components. It was concluded that assessing which model better represents reality in terms of flow paths should not be based only on standard statistical metrics for the entire river basin but also needs to consider additional data, field experiments, and the opinions of field experts. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
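
    As an illustration of the kind of skill evaluation described above (daily versus monthly assessment of flow and nitrate simulations), the sketch below computes the Nash-Sutcliffe efficiency at both time steps; the metric choice and all series values are assumptions for the example, not data from the study.

        import numpy as np
        import pandas as pd

        def nse(obs, sim):
            # Nash-Sutcliffe efficiency: 1 is perfect; below 0 is worse than the mean
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        # Hypothetical daily observed and simulated nitrate fluxes
        idx = pd.date_range("2000-01-01", periods=730, freq="D")
        rng = np.random.default_rng(0)
        obs = pd.Series(rng.gamma(2.0, 1.0, len(idx)), index=idx)
        sim = obs + rng.normal(0.0, 0.8, len(idx))  # stand-in model output

        print("daily NSE:  ", nse(obs, sim))
        # Monthly aggregation smooths timing errors, typically raising the score
        print("monthly NSE:", nse(obs.resample("MS").sum(), sim.resample("MS").sum()))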

  4. Using a whole farm model to determine the impacts of mating management on the profitability of pasture-based dairy farms.

    PubMed

    Beukes, P C; Burke, C R; Levy, G; Tiddy, R M

    2010-08-01

    An approach to assessing the likely impacts of altering reproductive performance on productivity and profitability in pasture-based dairy farms is described. The basis is the development of a whole farm model (WFM) that simulates the entire farm system and holistically links multiple physical performance factors to profitability. The WFM consists of a framework that links a mechanistic cow model, a pasture model, a crop model, management policies, and climate. It simulates individual cows and paddocks and runs on a daily time step. The WFM was upgraded to include reproductive modeling capability using reference tables and empirical equations describing published relationships between cow factors, physiology, and mating management. It predicts reproductive status at any time point for individual cows within a modeled herd. The performance of six commercial pasture-based dairy farms was simulated for the 12 months beginning 1 June 2005 (the 05/06 year) to evaluate the accuracy of the model by comparison with actual outcomes. The model predicted most key performance indicators within an acceptable range of error (residual < 10% of observed). The evaluated WFM was then used for the six farms to estimate the profitability of changes in farm "set-up" (farm conditions at the start of the farming year on 1 June) and mating management from the 05/06 to the 06/07 year. Among the six farms simulated, the 4-week calving rate emerged as an important set-up factor influencing profitability, while reproductive performance during natural bull mating was identified as the area with the greatest opportunity for improvement. The WFM is thus a useful tool for exploring alternative management strategies and predicting the likely outcomes of proposed changes to a pasture-based farm system. Copyright (c) 2010 Elsevier B.V. All rights reserved.

  5. A satellite and model based flood inundation climatology of Australia

    NASA Astrophysics Data System (ADS)

    Schumann, G.; Andreadis, K.; Castillo, C. J.

    2013-12-01

    To date there is no coherent and consistent database on observed or simulated flood event inundation and magnitude at large scales (continental to global). The only compiled data set showing a consistent history of flood inundation area and extent at a near-global scale is provided by the MODIS-based Dartmouth Flood Observatory. However, MODIS satellite imagery is only available from 2000 and is hampered by a number of issues associated with flood mapping using optical images (e.g. classification algorithms, cloud cover, vegetation). Here, we present for the first time a proof-of-concept study in which we employ a computationally efficient 2-D hydrodynamic model (LISFLOOD-FP), complemented with a sub-grid channel formulation, to generate a complete flood inundation climatology of the past 40 years (1973-2012) for the entire Australian continent. The model was built entirely from freely available SRTM-derived data, including channel widths, bank heights, and floodplain topography, which was corrected for vegetation canopy height using a global ICESat canopy dataset. Channel hydraulics were resolved using actual channel data, and bathymetry was estimated within the model using hydraulic geometry. On the floodplain, the model simulated flow paths and inundation variables at 1 km resolution. The model was run over the 40-year period, and the resulting floodplain inundation climatology was compared to satellite flood event observations. Our proof-of-concept study demonstrates that this type of model can reliably simulate past flood events with reasonable accuracy in both time and space. The Australian model was forced with both observed flow climatology and VIC-simulated flows in order to assess the feasibility of a model-based flood inundation climatology at the global scale.
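
    The abstract notes that bathymetry was estimated within the model using hydraulic geometry. A minimal sketch of that idea, assuming a power-law depth-width relation; the coefficients are illustrative placeholders, not values from the study.

        import numpy as np

        def depth_from_width(width_m, a=0.27, b=0.39):
            # Hydraulic geometry: bankfull depth d = a * W**b, with a and b
            # normally fit to regional channel surveys (values here are assumed)
            return a * np.power(width_m, b)

        # SRTM-derived widths for a few hypothetical reaches (metres)
        widths = np.array([35.0, 120.0, 480.0])
        print(depth_from_width(widths))  # depths used as in-channel bathymetry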

  6. Inference from habitat-selection analysis depends on foraging strategies.

    PubMed

    Bastille-Rousseau, Guillaume; Fortin, Daniel; Dussault, Christian

    2010-11-01

    1. Several methods have been developed to assess habitat selection, most of which are based on a comparison between habitat attributes in used vs. unused or random locations, such as the popular resource selection functions (RSFs). Spatial evaluation of residency time has recently been proposed as a promising avenue for studying habitat selection. Residency-time analyses assume a positive relationship between residency time within habitat patches and selection. We demonstrate that RSF and residency-time analyses provide different information about the process of habitat selection. Further, we show how consideration of the switching rate between habitat patches (interpatch movements) together with residency-time analysis can reveal habitat-selection strategies. 2. Spatially explicit, individual-based modelling was used to simulate foragers displaying one of six foraging strategies in a heterogeneous environment; a toy version of the departure rules is sketched below. The strategies combined one of three patch-departure rules (fixed-quitting-harvest-rate, fixed-time, and fixed-amount), together with one of two interpatch-movement rules (random or biased). Habitat selection of simulated foragers was then assessed using RSF, residency-time, and interpatch-movement analyses. 3. Our simulations showed that RSFs and residency times are not always equivalent. When foragers move in a non-random manner and do not increase residency time in richer patches, residency-time analysis can provide misleading assessments of habitat selection. This is because the overall time spent in the various patch types depends not only on residency times but also on interpatch-movement decisions. 4. We suggest that RSFs capture the outcome of the entire selection process, whereas residency-time and interpatch-movement analyses can be used in combination to reveal the mechanisms behind the selection process. 5. We showed that there is a risk in using residency-time analysis alone to infer habitat selection. Residency-time analyses, however, may illuminate the mechanisms of habitat selection by revealing central components of resource-use strategies. Given that management decisions are often based on resource-selection analyses, the evaluation of resource-use strategies can be key information for the development of efficient habitat-management strategies. Combining RSF, residency-time, and interpatch-movement analyses is a simple and efficient way to gain a more comprehensive understanding of habitat selection. © 2010 The Authors. Journal compilation © 2010 British Ecological Society.
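
    A toy version of the three patch-departure rules, as referenced above: a forager leaves a depleting patch when the harvest rate drops below a threshold, after a fixed time, or after a fixed amount has been gained. All parameter values are illustrative; the published model is far richer (movement rules, spatial structure).

        import numpy as np

        rng = np.random.default_rng(1)

        def forage(strategy, n_patches=1000):
            # Returns mean residency time in rich vs. poor patches for a given
            # departure rule; interpatch movement is ignored for brevity.
            residency = {"rich": [], "poor": []}
            for _ in range(n_patches):
                kind = "rich" if rng.random() < 0.5 else "poor"
                rate = 2.0 if kind == "rich" else 1.0   # initial harvest rate
                t, gained = 0.0, 0.0
                while True:
                    t += 1.0
                    gained += rate
                    rate *= 0.9                          # patch depletion
                    if strategy == "fixed-rate" and rate < 0.5:
                        break
                    if strategy == "fixed-time" and t >= 8:
                        break
                    if strategy == "fixed-amount" and gained >= 8.0:
                        break
                residency[kind].append(t)
            return {k: round(float(np.mean(v)), 1) for k, v in residency.items()}

        for s in ("fixed-rate", "fixed-time", "fixed-amount"):
            print(s, forage(s))
        # Note: fixed-amount foragers stay LONGER in poor patches, illustrating
        # why residency time alone can misrepresent habitat selection.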

  7. A Comparative Study of High and Low Fidelity Fan Models for Turbofan Engine System Simulation

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Afjeh, Abdollah A.

    1991-01-01

    In this paper, a heterogeneous propulsion system simulation method is presented. The method is based on the formulation of a cycle model of a gas turbine engine. The model includes the nonlinear characteristics of the engine components through the use of empirical data. The potential to simulate the entire engine operation on a computer without the aid of empirical data is demonstrated by numerically generating "performance maps" for a fan component using two flow models of varying fidelity. The suitability of the fan models was evaluated by comparing the computed performance with experimental data. A discussion of the potential benefits and difficulties in coupling simulations of differing fidelity is given.

  8. Extraction of temporal information in functional MRI

    NASA Astrophysics Data System (ADS)

    Singh, M.; Sungkarat, W.; Jeong, Jeong-Won; Zhou, Yongxia

    2002-10-01

    The temporal resolution of functional MRI (fMRI) is limited by the shape of the haemodynamic response function (hrf) and the vascular architecture underlying the activated regions. Typically, the temporal resolution of fMRI is on the order of 1 s. We have developed a new data processing approach to extract temporal information on a pixel-by-pixel basis at the level of 100 ms from fMRI data. Instead of correlating or fitting the time-course of each pixel to a single reference function, which is the common practice in fMRI, we correlate each pixel's time-course to a series of reference functions that are shifted with respect to each other by 100 ms. The reference function yielding the highest correlation coefficient for a pixel is then used as a time marker for that pixel. A Monte Carlo simulation and experimental study of this approach were performed to estimate the temporal resolution as a function of signal-to-noise ratio (SNR) in the time-course of a pixel. Assuming a known and stationary hrf, the simulation and experimental studies suggest a lower limit in the temporal resolution of approximately 100 ms at an SNR of 3. The multireference function approach was also applied to extract timing information from an event-related motor movement study where the subjects flexed a finger on cue. The event was repeated 19 times with the event's presentation staggered to yield an approximately 100-ms temporal sampling of the haemodynamic response over the entire presentation cycle. The timing differences among different regions of the brain activated by the motor task were clearly visualized and quantified by this method. The results suggest that it is possible to achieve a temporal resolution of ~200 ms in practice with this approach.
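
    A minimal sketch of the multireference-function idea described above: each pixel's time-course is correlated against a bank of reference functions shifted in 100-ms steps, and the best-correlating shift becomes that pixel's time marker. The gamma-shaped hrf and the timings are simplifying assumptions, not the authors' exact implementation.

        import numpy as np

        def hrf(t):
            # Simple gamma-function haemodynamic response (illustrative shape)
            return t ** 5 * np.exp(-t) / 120.0

        dt = 0.1                                   # 100-ms sampling grid
        t = np.arange(0.0, 30.0, dt)
        shifts = np.arange(0.0, 2.0, dt)           # candidate onset delays (s)
        refs = np.array([hrf(np.clip(t - s, 0.0, None)) for s in shifts])

        def time_marker(pixel_ts):
            # Shift whose reference correlates best with the pixel time-course
            r = [np.corrcoef(pixel_ts, ref)[0, 1] for ref in refs]
            return shifts[int(np.argmax(r))]

        # Synthetic pixel: response delayed by 0.7 s, noise set for SNR ~ 3
        clean = hrf(np.clip(t - 0.7, 0.0, None))
        noisy = clean + np.random.default_rng(2).normal(0, clean.max() / 3, t.size)
        print("estimated onset:", time_marker(noisy), "s")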

  9. Development of a method for simulating surge in a multi-stage compressor (Développement d'une méthode de simulation de pompage au sein d'un compresseur multi-étage)

    NASA Astrophysics Data System (ADS)

    Dumas, Martial

    Surge is an unsteady phenomenon which appears when a compressor operates at a mass flow that is too low relative to its design point. This aerodynamic instability is characterized by large oscillations in pressure and mass flow, resulting in a sudden drop in power delivered by a gas turbine engine and possibly serious damage to engine components. The methodology developed in this thesis allows the flow behavior inside a multi-stage compressor to be simulated during surge and, by extension, the time variation of the aerodynamic forces on the blades, and of the pressure and temperature at the bleed locations used for turbine cooling, to be predicted at the design phase. While the compressor is the component of interest and the trigger for surge, the flow behavior during this event also depends on the other engine components (combustion chamber, turbine, ducts). However, the simulation of the entire gas turbine engine cannot be carried out in a practical manner with existing computational technologies. The approach taken consists of coupling 3-D RANS CFD simulations of the compressor with 1-D equations modeling the behavior of the other components, applied as dynamic boundary conditions; a schematic of this coupling is sketched below. The method was put into practice in a commercial RANS CFD code (ANSYS CFX) whose integrated options facilitated the implementation of the 1-D equations into the dynamic boundary conditions of the computational domain. In addition, in order to limit computational time, only one blade passage was simulated per blade row to capture surge, which is essentially a one-dimensional phenomenon. This methodology was applied to several compressor geometries with distinct features. Simulations on a low-speed (incompressible) three-stage axial compressor allowed for validation with experimental data, which showed that the pressure and mass flow oscillations are captured well. This comparison also highlighted the strong dependence of the oscillation frequency on the volume of the downstream plenum (combustion chamber). The simulations of the second compressor demonstrated the adaptability of the approach to a multi-stage compressor with an axial-centrifugal configuration. Finally, application of the method to a transonic compressor geometry from Pratt & Whitney Canada demonstrated the tool on a mixed-flow/centrifugal compressor configuration operating in a highly compressible regime. These last simulations highlighted certain limitations of the tool, namely the numerical robustness associated with the use of multiple stator/rotor interfaces in a high-speed compressor with high rates of change of mass flow, and the computational time required to simulate several surge cycles.
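
    As a schematic of the CFD/1-D coupling described above: a common form for the downstream volume (combustion chamber) is a lumped plenum whose pressure is advanced each time step from a mass balance and then imposed as the compressor-exit boundary condition, Greitzer-style. All constants and the throttle law below are illustrative assumptions, not values from the thesis.

        import numpy as np

        V_plenum = 0.05      # plenum volume, m^3 (illustrative)
        a0 = 340.0           # speed of sound, m/s
        p_amb = 101325.0     # ambient pressure, Pa
        p = p_amb            # initial plenum pressure
        dt = 1e-4            # coupling time step, s

        def throttle_mdot(p_plenum, k=1e-3):
            # 1-D throttle law for flow leaving the plenum (illustrative)
            return k * np.sqrt(max(p_plenum - p_amb, 0.0))

        for step in range(1000):
            # mdot_in would come from the 3-D RANS solution at the exit plane;
            # here a stand-in oscillation mimics a surge cycle.
            mdot_in = 1.0 + 0.8 * np.sin(2.0 * np.pi * 20.0 * step * dt)
            # Plenum mass balance: dp/dt = (a0^2 / V) * (mdot_in - mdot_out)
            p += dt * a0**2 / V_plenum * (mdot_in - throttle_mdot(p))
            # p would now be imposed as the CFD outlet boundary condition.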

  10. Simulations of precipitation using the Community Earth System Model (CESM): Sensitivity to microphysics time step

    NASA Astrophysics Data System (ADS)

    Murthi, A.; Menon, S.; Sednev, I.

    2011-12-01

    An inherent difficulty in the ability of global climate models to accurately simulate precipitation lies in the use of a large time step, Δt (usually 30 minutes), to solve the governing equations. Since microphysical processes are characterized by small time scales compared to Δt, finite difference approximations used to advance the microphysics equations suffer from numerical instability and large time truncation errors. With this in mind, the sensitivity of precipitation simulated by the atmospheric component of CESM, namely the Community Atmosphere Model (CAM 5.1), to the microphysics time step (τ) is investigated; the sub-stepping this implies is sketched below. Model integrations are carried out for a period of five years with a spin-up time of about six months, at a horizontal resolution of 2.5 × 1.9 degrees with 30 levels in the vertical and Δt = 1800 s. The control simulation with τ = 900 s is compared with one using τ = 300 s for accumulated precipitation and radiation budgets at the surface and top of the atmosphere (TOA), while keeping Δt fixed. Our choice of τ = 300 s is motivated by previous work on warm rain processes wherein it was shown that a value of τ around 300 s was necessary, but not sufficient, to ensure positive definiteness and numerical stability of the explicit time integration scheme used to integrate the microphysical equations. However, since the entire suite of microphysical processes is represented in our case, we suspect that this might impose additional restrictions on τ. The τ = 300 s case produces differences in large-scale accumulated rainfall from the τ = 900 s case of as much as 200 mm over certain regions of the globe. The spatial patterns of total accumulated precipitation using τ = 300 s are in closer agreement with satellite-observed precipitation than the τ = 900 s case. Differences are also seen in the radiation budget, with the τ = 300 (900) s cases producing surpluses that range between 1-3 W/m2 at both the TOA and the surface in the global means. In order to gain some insight into the possible causes of the observed differences, future work will involve performing additional sensitivity tests using the single-column model version of CAM 5.1 to gauge the effect of τ on calculations of the source terms and mixing ratios used to calculate precipitation in the budget equations.
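
    As sketched below, the sensitivity being tested is simply how many microphysics sub-steps of length τ fit inside the host-model step Δt. The tendency update is a schematic placeholder, not CAM's actual microphysics scheme.

        def advance_with_substeps(state, rhs, dt_host=1800.0, tau=300.0):
            # Advance microphysical quantities through one host step Delta-t
            # using n_sub explicit sub-steps of length tau.
            n_sub = int(round(dt_host / tau))
            for _ in range(n_sub):
                # Forward-Euler placeholder; smaller tau reduces truncation
                # error and helps keep mixing ratios positive-definite.
                state = [q + tau * dq for q, dq in zip(state, rhs(state))]
                state = [max(q, 0.0) for q in state]  # crude positivity clip
            return state

        # Toy cloud-water/rain pair with an autoconversion-like tendency
        rhs = lambda s: (-0.001 * s[0], 0.001 * s[0])
        print(advance_with_substeps([1e-3, 0.0], rhs, tau=300.0))
        print(advance_with_substeps([1e-3, 0.0], rhs, tau=900.0))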

  11. Evaluation of Dual Pressurized Rover Operations During Simulated Planetary Surface Exploration

    NASA Technical Reports Server (NTRS)

    Abercromby, Andrew F. J.; Gernhardt, Michael L.

    2010-01-01

    Introduction: A pair of small pressurized rovers (Space Exploration Vehicles, or SEVs) is at the center of the Global Point-of-Departure architecture for future human planetary exploration. Simultaneous operation of multiple crewed surface assets should maximize productive crew time, minimize overhead, and preserve contingency return paths. Methods: A 14-day mission simulation was conducted in the Arizona desert as part of NASA's 2010 Desert Research and Technology Studies (DRATS). The simulation involved two SEV concept vehicles performing geological exploration under varied operational modes affecting both the extent to which the SEVs must maintain real-time communications with mission control ("Continuous" vs. "Twice-a-Day") and their proximity to each other ("Lead-and-Follow" vs. "Divide-and-Conquer"). As part of a minimalist lunar architecture, no communications relay satellites were assumed. Two-person crews consisting of an astronaut and a field geologist operated each SEV, day and night, throughout the entire 14-day mission, leaving only via the suit ports to perform simulated extravehicular activities. Standard metrics enabled quantification of the habitability and usability of all aspects of the SEV concept vehicles throughout the mission, as well as comparison of the extent to which the operating modes affected crew productivity and performance. Practically significant differences in the relevant metrics were prospectively defined for the testing of all hypotheses. Results and Discussion: Data showed a significant 14% increase in available science time (AST) during Lead-and-Follow mode compared with Divide-and-Conquer, primarily because of the minimal overhead required to maintain communications during Lead-and-Follow. In Lead-and-Follow mode, there was a non-significant 2% increase in AST during Twice-a-Day vs. Continuous communications. Situational awareness of the other vehicle's location, activities, and contingency return constraints was enhanced during the Lead-and-Follow and Twice-a-Day communications modes due to line-of-sight and direct SEV-to-SEV communication. Preliminary analysis of Scientific Data Quality and Observation Quality metrics showed no significant differences between modes.

  12. NRL 1989 Beam Propagation Studies in Support of the ATA Multi-Pulse Propagation Experiment

    DTIC Science & Technology

    1990-08-31

    The papers presented here were all written prior to the completion of the experiment. The first of these papers presents simulation results which modeled ... beam stability and channel evolution for an entire five-pulse burst. The second paper describes a new air chemistry model used in the SARLAC ... Experiment: A new air chemistry model for use in the propagation codes simulating the MPPE was developed by making analytic fits to benchmark runs with ...

  13. Astronaut Thomas P. Stafford Next to Training Simulator

    NASA Image and Video Library

    1968-09-05

    Stafford standing next to a training simulator. Note: 1968-L-08723 is the same image in black and white. General Stafford was commander of Apollo 10 in May 1969, the first flight of the lunar module to the Moon; he descended to within nine miles of the lunar surface, performing the entire lunar landing mission except the actual landing. He performed the first rendezvous around the Moon and designated the first lunar landing site. https://www.nasa.gov/astronauts/biographies/former

  14. Development of High Fidelity Mobility Simulation of an Autonomous Vehicle in an Off-Road Scenario Using Integrated Sensor, Controller, and Multi-Body Dynamics

    DTIC Science & Technology

    2011-08-04

    Jayakumar, Smith, Ross, Jategaonkar, Konarzewski; 4 August 2011. UNCLASSIFIED: Distribution Statement A, approved for public release. [Only report-documentation and briefing fragments survive; the recoverable points are that performance evaluation of an autonomous vehicle in an off-road scenario requires simulating the entire system, and that vehicle dynamics cannot be neglected.]

  15. Mapping conduction velocity of early embryonic hearts with a robust fitting algorithm

    PubMed Central

    Gu, Shi; Wang, Yves T; Ma, Pei; Werdich, Andreas A; Rollins, Andrew M; Jenkins, Michael W

    2015-01-01

    Cardiac conduction maturation is an important and integral component of heart development. Optical mapping with voltage-sensitive dyes allows sensitive measurements of electrophysiological signals over the entire heart. However, accurate measurements of conduction velocity during early cardiac development are typically hindered by low signal-to-noise ratio (SNR) measurements of action potentials. Here, we present a novel image processing approach based on least-squares optimizations, which enables high-resolution, low-noise conduction velocity mapping of smaller tubular hearts. First, the action potential trace measured at each pixel is fit to a curve consisting of two cumulative normal distribution functions. Then, the activation time at each pixel is determined based on the fit, and the spatial gradient of activation time is determined with a two-dimensional (2D) linear fit over a square-shaped window. The size of the window is adaptively enlarged until the gradients can be determined within a preset precision. Finally, the conduction velocity is calculated based on the activation time gradient and further corrected for the three-dimensional (3D) geometry, which can be obtained by optical coherence tomography (OCT). We validated the approach using published action potential traces based on computer simulations. We further validated the method by adding artificially generated noise to the signal to simulate various SNR conditions using a curved simulated image (digital phantom) that resembles a tubular heart. The method proved to be robust, even at very low SNR (SNR = 2-5). We also established an empirical equation to estimate the maximum conduction velocity that can be accurately measured under different conditions (e.g. sampling rate, SNR, and pixel size). Finally, we demonstrated high-resolution conduction velocity maps of the quail embryonic heart at a looping stage of development. PMID:26114034
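
    A condensed sketch of the two key steps described above: fitting each pixel's trace to a sum of two cumulative normal distributions, and obtaining the activation-time gradient from a 2-D linear (plane) fit over a window. Parameterization and the adaptive window logic are simplified relative to the paper.

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import norm

        def ap_model(t, a1, mu1, s1, a2, mu2, s2, base):
            # Action potential as two cumulative normals: upstroke + repolarization
            return base + a1 * norm.cdf(t, mu1, s1) + a2 * norm.cdf(t, mu2, s2)

        def activation_time(t, trace):
            # Fit the trace; take the upstroke midpoint mu1 as activation time
            p0 = (np.ptp(trace), t[np.argmax(np.gradient(trace))], 1.0,
                  -np.ptp(trace), t.mean() + 20.0, 5.0, trace.min())
            popt, _ = curve_fit(ap_model, t, trace, p0=p0, maxfev=20000)
            return popt[1]

        def conduction_speed(act_window, px_mm=0.025):
            # Plane fit act(x, y) = gx*x + gy*y + c; speed = 1 / |grad act|
            ny, nx = act_window.shape
            X, Y = np.meshgrid(np.arange(nx) * px_mm, np.arange(ny) * px_mm)
            A = np.column_stack([X.ravel(), Y.ravel(), np.ones(act_window.size)])
            gx, gy, _ = np.linalg.lstsq(A, act_window.ravel(), rcond=None)[0]
            return 1.0 / np.hypot(gx, gy)   # mm/ms if activation times are in ms

        # Demo: synthetic planar activation map, 0.04 ms per pixel along x
        act = np.fromfunction(lambda y, x: 0.04 * x, (11, 11))
        print(conduction_speed(act))   # expected: 1 / (0.04 / 0.025) = 0.625 mm/ms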

  16. Transient responses of phosphoric acid fuel cell power plant system. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Lu, Cheng-Yi

    1983-01-01

    An analytical and computerized study of the steady-state and transient response of a phosphoric acid fuel cell (PAFC) system was completed. Parametric studies and sensitivity analyses of the PAFC system's operation were accomplished. Four non-linear dynamic models of the fuel cell stack, reformer, shift converters, and heat exchangers were developed based on nonhomogeneous non-linear partial differential equations, which include the material, component, and energy balances and electrochemical kinetic features. Due to a lack of experimental data for the dynamic response of the components, only the steady-state results were compared with data from other sources, indicating reasonably good agreement. A steady-state simulation of the entire system was developed using nonlinear ordinary differential equations. The finite difference method and trial-and-error procedures were used to obtain a solution. Using the model, a PAFC system that had been developed under NASA Grant NCC3-17 was improved through optimization of the heat exchanger network. Three types of cooling configurations for cell plates were evaluated to obtain the best current density and temperature distributions. The steady-state solutions were used as the initial conditions in the dynamic model. The transient response of a simplified PAFC system, which included all of the major components, subjected to a load change was obtained. Due to the length of the computation time for the transient response calculations, analysis on a real-time computer was not possible; instead, the real-time calculations were simulated on a batch-type computer. The transient response characteristics are needed for the optimization of the design and control of the whole PAFC system. All of the models, procedures, and simulations were programmed in Fortran and run on IBM 370 computers at Cleveland State University and the NASA Lewis Research Center.

  17. Modeling of the motion of automobile elastic wheel in real-time for creation of wheeled vehicles motion control electronic systems

    NASA Astrophysics Data System (ADS)

    Balakina, E. V.; Zotov, N. M.; Fedin, A. P.

    2018-02-01

    Modeling the motion of the elastic wheel of a vehicle in real time is used in constructing different models for wheeled-vehicle motion-control electronic systems, in building automobile test-stand simulators, etc. The accuracy and reliability of simulating the parameters of wheel motion in real time, when rolling with slip under given road conditions, are determined not only by the choice of the model but also by the inaccuracy and instability of the numerical calculation. It is established that this inaccuracy and instability depend on the size of the integration step and the numerical method being used. An analysis of the inaccuracy and instability during wheel rolling with slip was made, and recommendations for reducing them were developed. It is established that the total allowable range of integration steps is 0.001-0.005 s; the strongest instability is manifested in the calculation of the angular and linear accelerations of the wheel; the weakest instability is manifested in the calculation of the translational velocity of the wheel and the displacement of the wheel center; and the instability is smaller at large slip angles and on more slippery surfaces. A new average-acceleration method is suggested, which significantly reduces (by up to 100%) the instability of the solution in the calculation of all motion parameters of the elastic wheel, for different braking conditions and for the entire range of integration steps. The results of this research can be applied to the selection of control algorithms in vehicle motion-control electronic systems and in testing stand-simulators.
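
    The paper's equations are not reproduced in the abstract; a plausible reading of the "average acceleration" idea is the implicit trapezoidal update below, in which the step uses the mean of the accelerations at its start and end. The wheel-spin equation and tyre law are illustrative stand-ins.

        import numpy as np

        I, r, m, g, mu = 1.2, 0.3, 400.0, 9.81, 0.8   # illustrative parameters

        def Fx(slip):
            # Placeholder longitudinal tyre force vs. slip (Magic-Formula-like)
            return mu * m * g * np.sin(1.6 * np.arctan(8.0 * slip))

        def omega_dot(omega, v, T_brake):
            slip = (v - omega * r) / max(v, 0.1)
            return (r * Fx(slip) - T_brake) / I

        def step_average_acceleration(omega, v, T_brake, dt, iters=3):
            # Advance wheel spin using the average of start- and end-of-step
            # accelerations (implicit trapezoidal rule); this damps the
            # oscillatory instability of a plain explicit update.
            a0 = omega_dot(omega, v, T_brake)
            omega_new = omega + dt * a0                # explicit predictor
            for _ in range(iters):                     # fixed-point corrector
                a1 = omega_dot(omega_new, v, T_brake)
                omega_new = omega + 0.5 * dt * (a0 + a1)
            return omega_new

        # Demo: braking at 20 m/s with constant brake torque
        omega, v, dt = 20.0 / r, 20.0, 0.002
        for _ in range(100):
            omega = step_average_acceleration(omega, v, T_brake=500.0, dt=dt)
        print("wheel speed:", omega * r, "m/s; vehicle speed:", v, "m/s")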

  18. The accuracy of seminumerical reionization models in comparison with radiative transfer simulations

    NASA Astrophysics Data System (ADS)

    Hutter, Anne

    2018-06-01

    We have developed a modular seminumerical code that computes the time- and spatially dependent ionization of neutral hydrogen (H I), neutral helium (He I), and singly ionized helium (He II) in the intergalactic medium (IGM). The model accounts for recombinations and provides different descriptions for the photoionization rate that are used to calculate the residual H I fraction in ionized regions. We compare different seminumerical reionization schemes to a radiative transfer (RT) simulation. We use the RT simulation as a benchmark, and find that the seminumerical approaches produce similar H II and He II morphologies and power spectra of the H I 21 cm signal throughout reionization. As we do not track partial ionization of He II, the extent of the double-ionized helium (He III) regions is consistently smaller. In contrast to previous comparison projects, the ionizing emissivity in our seminumerical scheme is not adjusted to reproduce the redshift evolution of the RT simulation, but is derived directly from the RT simulation spectra. Among schemes that identify the ionized regions by the ratio of the number of ionization and absorption events on different spatial smoothing scales, we find that those that mark the entire sphere as ionized when the ionization criterion is fulfilled result in significantly accelerated reionization compared to the RT simulation. Conversely, those that flag only the central cell as ionized yield a very similar but slightly delayed redshift evolution of reionization, with up to 20 per cent of ionizing photons lost. Despite the overall agreement with the RT simulation, our results suggest that constraining ionizing-emissivity-sensitive parameters with seminumerical galaxy formation-reionization models is subject to photon nonconservation.
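
    A compact sketch of the scheme family compared above: smooth the ionizing-photon and absorption fields on decreasing scales and flag cells where the photon-to-absorption ratio reaches unity, marking only the central cell (the variant found to track the RT benchmark most closely). The field names and top-hat filtering are generic assumptions, not this code's actual interfaces.

        import numpy as np

        def tophat_smooth(field, R, box_size):
            # Smooth a 3-D field with a spherical top-hat of radius R via FFT
            n = field.shape[0]
            k = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
            kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
            kr = np.sqrt(kx**2 + ky**2 + kz**2) * R
            kr[0, 0, 0] = 1e-10                    # avoid division by zero
            W = 3.0 * (np.sin(kr) - kr * np.cos(kr)) / kr**3
            return np.real(np.fft.ifftn(np.fft.fftn(field) * W))

        def ionize_central_cell(n_ion, n_abs, box_size, R_max=50.0, R_min=1.0):
            # Flag a cell as ionized if <n_ion>/<n_abs> >= 1 on any scale
            ionized = np.zeros(n_ion.shape, dtype=bool)
            R = R_max
            while R >= R_min:
                smooth_abs = np.maximum(tophat_smooth(n_abs, R, box_size), 1e-30)
                ionized |= tophat_smooth(n_ion, R, box_size) / smooth_abs >= 1.0
                R /= 1.5                           # shrink the smoothing scale
            return ionized

        rng = np.random.default_rng(3)
        n_ion = rng.lognormal(size=(32, 32, 32))
        n_abs = rng.lognormal(size=(32, 32, 32))
        print("ionized fraction:", ionize_central_cell(n_ion, n_abs, 100.0).mean())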

  19. Hybrid Multiscale Simulation of Hydrologic and Biogeochemical Processes in the River-Groundwater Interaction Zone

    NASA Astrophysics Data System (ADS)

    Yang, X.; Scheibe, T. D.; Chen, X.; Hammond, G. E.; Song, X.

    2015-12-01

    The zone in which river water and groundwater mix plays an important role in natural ecosystems as it regulates the mixing of nutrients that control biogeochemical transformations. Subsurface heterogeneity leads to local hotspots of microbial activity that are important to system function yet difficult to resolve computationally. To address this challenge, we are testing a hybrid multiscale approach that couples models at two distinct scales, based on field research at the U.S. Department of Energy's Hanford Site. The region of interest is a 400 x 400 x 20 m macroscale domain that intersects the aquifer and the river and contains a contaminant plume. However, biogeochemical activity is high in a thin zone (mud layer, <1 m thick) immediately adjacent to the river. This microscale domain is highly heterogeneous and requires fine spatial resolution to adequately represent the effects of local mixing on reactions. It is not computationally feasible to resolve the full macroscale domain at the fine resolution needed in the mud layer, and the reaction network needed in the mud layer is much more complex than that needed in the rest of the macroscale domain. Hence, a hybrid multiscale approach is used to efficiently and accurately predict flow and reactive transport at both scales. In our simulations, models at both scales are simulated using the PFLOTRAN code. Multiple microscale simulations in dynamically defined sub-domains (fine resolution, complex reaction network) are executed and coupled with a macroscale simulation over the entire domain (coarse resolution, simpler reaction network). The objectives of the research include: 1) comparing accuracy and computing cost of the hybrid multiscale simulation with a single-scale simulation; 2) identifying hot spots of microbial activity; and 3) defining macroscopic quantities such as fluxes, residence times and effective reaction rates.
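
    A schematic of the coupling loop described above, with toy stand-in models (this is not the PFLOTRAN API): each macro step provides boundary conditions for the fine-grid sub-domain simulations, whose effective reaction rates are passed back up to the coarse grid.

        import numpy as np

        class MacroModel:
            # Coarse grid: one concentration per cell, simple first-order decay
            def __init__(self, n):
                self.c = np.ones(n)
                self.rate = np.full(n, 0.01)
            def advance(self, dt):
                self.c *= np.exp(-self.rate * dt)
            def interpolate_to(self, cell):
                return self.c[cell]           # boundary value for a micro model

        class MicroModel:
            # Fine grid inside one coarse cell, with a richer (here: faster,
            # concentration-dependent) reaction placeholder
            def __init__(self, cell, m=10):
                self.cell, self.c = cell, np.ones(m)
            def advance(self, dt, boundary):
                self.c[0] = boundary          # driven by the macro-scale state
                self.c *= np.exp(-0.05 * self.c * dt)
            def mean_rate(self):
                return 0.05 * self.c.mean()   # effective (upscaled) rate

        macro = MacroModel(100)
        micros = [MicroModel(cell) for cell in (3, 4, 5)]   # "mud layer" cells
        for _ in range(10):                                  # coupling loop
            macro.advance(1.0)
            for mm in micros:
                mm.advance(1.0, macro.interpolate_to(mm.cell))
                macro.rate[mm.cell] = mm.mean_rate()         # upscale rates
        print(macro.c[:8])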

  20. The accuracy of semi-numerical reionization models in comparison with radiative transfer simulations

    NASA Astrophysics Data System (ADS)

    Hutter, Anne

    2018-03-01

    We have developed a modular semi-numerical code that computes the time- and spatially dependent ionization of neutral hydrogen (H I), neutral helium (He I) and singly ionized helium (He II) in the intergalactic medium (IGM). The model accounts for recombinations and provides different descriptions for the photoionization rate that are used to calculate the residual H I fraction in ionized regions. We compare different semi-numerical reionization schemes to a radiative transfer (RT) simulation. We use the RT simulation as a benchmark, and find that the semi-numerical approaches produce similar H II and He II morphologies and power spectra of the H I 21cm signal throughout reionization. As we do not track partial ionization of He II, the extent of the double-ionized helium (He III) regions is consistently smaller. In contrast to previous comparison projects, the ionizing emissivity in our semi-numerical scheme is not adjusted to reproduce the redshift evolution of the RT simulation, but is derived directly from the RT simulation spectra. Among schemes that identify the ionized regions by the ratio of the number of ionization and absorption events on different spatial smoothing scales, we find that those that mark the entire sphere as ionized when the ionization criterion is fulfilled result in significantly accelerated reionization compared to the RT simulation. Conversely, those that flag only the central cell as ionized yield a very similar but slightly delayed redshift evolution of reionization, with up to 20% of ionizing photons lost. Despite the overall agreement with the RT simulation, our results suggest that constraining ionizing-emissivity-sensitive parameters with semi-numerical galaxy formation-reionization models is subject to photon nonconservation.

  1. Gradient rotating outer volume excitation (GROOVE): A novel method for single-shot two-dimensional outer volume suppression.

    PubMed

    Powell, Nathaniel J; Jang, Albert; Park, Jang-Yeon; Valette, Julien; Garwood, Michael; Marjańska, Małgorzata

    2015-01-01

    The purpose of this work was to introduce a new outer volume suppression (OVS) technique that uses a single pulse and rotating gradients to accomplish frequency-swept excitation. This new technique, called gradient rotating outer volume excitation (GROOVE), produces a circular or elliptical suppression band rather than suppressing the entire outer volume. Theoretical and k-space descriptions of GROOVE are provided. The properties of GROOVE were investigated with simulations and with phantom and human experiments performed using a 4T horizontal-bore magnet equipped with a TEM coil. Similar suppression performance was obtained in phantom and human brain using GROOVE with circular and elliptical shapes. Simulations indicate that GROOVE requires less SAR and time than traditional OVS schemes, but traditional schemes provide a sharper transition zone and less residual signal. GROOVE represents a new way of performing OVS in which spins are excited temporally in space along a trajectory that can be tailored to fit the shape of the suppression region. In addition, GROOVE is capable of suppressing tailored regions of space with more flexibility and in a shorter period of time than conventional methods. GROOVE provides a fast, low-SAR alternative to conventional OVS methods in some applications (e.g., scalp suppression). © 2014 Wiley Periodicals, Inc.

  2. Measurements of impurity concentrations and transport in the Lithium Tokamak Experiment

    NASA Astrophysics Data System (ADS)

    Boyle, D. P.; Bell, R. E.; Kaita, R.; Lucia, M.; Schmitt, J. C.; Scotti, F.; Kubota, S.; Hansen, C.; Biewer, T. M.; Gray, T. K.

    2016-10-01

    The Lithium Tokamak Experiment (LTX) is a modest-sized spherical tokamak with all-metal plasma facing components (PFCs), uniquely capable of operating with large-area solid and/or liquid lithium coatings essentially surrounding the entire plasma. This work presents measurements of core plasma impurity concentrations and transport in LTX. In discharges with solid Li coatings, volume-averaged impurity concentrations were low but non-negligible, with 2-4% Li, 0.6-2% C, 0.4-0.7% O, and Zeff < 1.2. Transport was assessed using the TRANSP, NCLASS, and MIST codes. Collisions with the main H ions dominated the neoclassical impurity transport, and neoclassical transport coefficients calculated with NCLASS were similar across all impurity species, differing by no more than a factor of two. However, time-independent simulations with MIST indicated that neoclassical theory did not fully capture the impurity transport and that anomalous transport likely played a significant role in determining impurity profiles. Progress on additional analysis, including time-dependent impurity transport simulations and impurity measurements with liquid lithium coatings, and plans for diagnostic upgrades and future experiments in LTX-β will also be presented. This work was supported by US DOE contracts DE-AC02-09CH11466 and DE-AC05-00OR22725.

  3. GN and C Design Overview and Flight Test Results from NASA's Max Launch Abort System (MLAS)

    NASA Technical Reports Server (NTRS)

    Dennehy, Cornelius J.; Lanzi, Raymond J.; Ward, Philip R.

    2010-01-01

    The National Aeronautics and Space Administration (NASA) Engineering and Safety Center (NESC) designed, developed, and flew the alternative Max Launch Abort System (MLAS) as risk mitigation for the baseline Orion spacecraft launch abort system (LAS) already in development. The NESC was tasked both with formulating a conceptual objective system (OS) design of this alternative MLAS and with demonstrating the concept in a simulated pad abort flight test. The goal was to obtain sufficient flight test data to assess performance, validate models and tools, and reduce the design and development risks for an MLAS OS. Less than 2 years after project start, the MLAS simulated pad abort flight test was successfully conducted from Wallops Island on July 8, 2009. The entire flight test duration was 88 seconds, during which multiple staging events were performed and nine separate critically timed parachute deployments occurred as scheduled. Overall, the as-flown flight performance was as predicted prior to launch. This paper provides an overview of the guidance, navigation, and control (GN&C) technical approaches employed on this rapid prototyping activity and describes the methodology used to design the MLAS flight test vehicle (FTV). Lessons learned during this rapid prototyping project are also summarized.

  4. Differential Fault Analysis on CLEFIA with 128, 192, and 256-Bit Keys

    NASA Astrophysics Data System (ADS)

    Takahashi, Junko; Fukunaga, Toshinori

    This paper describes a differential fault analysis (DFA) attack against CLEFIA. The proposed attack can be applied to CLEFIA with all supported key lengths: 128, 192, and 256 bits. DFA is a type of side-channel attack that enables the recovery of secret keys by injecting faults into a secure device during its computation of the cryptographic algorithm and comparing the correct ciphertext with the faulty one. CLEFIA is a 128-bit blockcipher with 128-, 192-, and 256-bit keys, developed by the Sony Corporation in 2007. CLEFIA employs a generalized Feistel structure with four data lines. We developed a new attack method that uses this characteristic structure of the CLEFIA algorithm. On the basis of the proposed attack, only 2 pairs of correct and faulty ciphertexts are needed to retrieve the 128-bit key, and 10.78 pairs on average are needed to retrieve the 192- and 256-bit keys. The proposed attack is more efficient than any previously reported. In order to verify the proposed attack and estimate the calculation time needed to recover the secret key, we conducted an attack simulation on a PC. The simulation results show that we can obtain each secret key within three minutes on average, demonstrating that the entire key can be obtained within a feasible computational time.

  5. Simulation of 20-year deterioration of acrylic IOLs using severe accelerated deterioration tests.

    PubMed

    Kawai, Kenji; Hayakawa, Kenji; Suzuki, Takahiro

    2012-09-20

    The purpose of this study was to investigate IOL deterioration by conducting severe accelerated deterioration testing of acrylic IOLs at the Department of Ophthalmology, Tokai University School of Medicine. Severe accelerated deterioration tests performed on 7 types of acrylic IOLs simulated 20 years of deterioration. IOLs were placed in a screw-tube bottle containing ultra-pure water and kept in an oven (100°C) for 115 days. Deterioration was determined based on the outer appearance of the IOL in water and under air-dried conditions using an optical microscope. For accelerated deterioration of polymeric material, the elapse of 115 days was considered to be equivalent to 20 years based on the Arrhenius equation. All of the IOLs in the hydrophobic acrylic group except for AU6 showed glistening-like opacity. The entire optical sections of MA60BM and SA60AT became yellowish-white in color. The hydrophilic acrylic IOL HP60M showed no opacity at any of the time points examined. Our data based on accelerated testing showed that differences in water content play a major role in transparency. There were differences in opacity among manufacturers. The method we used for determining the relative time of IOL deterioration might not represent the exact clinical setting, but the appearance of the materials would presumably be very similar to that seen in patients.
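
    To illustrate the Arrhenius-based time equivalence used above: compressing 20 years into 115 days is an acceleration factor of roughly 63.5, and for an assumed in-eye temperature one can back out the activation energy that factor implies. The temperatures and the resulting energy below are assumptions for the sketch, not values reported in the paper.

        import numpy as np

        R = 8.314                        # gas constant, J/(mol K)
        T_use, T_test = 308.15, 373.15   # assumed in-eye (35 C) and oven (100 C), K

        AF = (20 * 365.25) / 115         # acceleration factor, ~63.5

        # Arrhenius: AF = exp[(Ea/R) * (1/T_use - 1/T_test)]  =>  solve for Ea
        Ea = R * np.log(AF) / (1.0 / T_use - 1.0 / T_test)
        print(f"implied activation energy: {Ea / 1000:.0f} kJ/mol")  # ~61 kJ/mol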

  6. The impact of magnetic fields on thermal instability

    NASA Astrophysics Data System (ADS)

    Ji, Suoqing; Peng Oh, S.; McCourt, Michael

    2018-02-01

    Cold (T ~ 10^4 K) gas is very commonly found in both galactic and cluster halos. There is no clear consensus on its origin. Such gas could be uplifted from the central galaxy by galactic or AGN winds. Alternatively, it could form in situ by thermal instability. Fragmentation into a multi-phase medium has previously been shown in hydrodynamic simulations to take place once t_cool/t_ff, the ratio of the cooling time to the free-fall time, falls below a threshold value. Here, we use 3D plane-parallel MHD simulations to investigate the influence of magnetic fields. We find that because magnetic tension suppresses buoyant oscillations of condensing gas, it destabilizes all scales below l_A^cool ~ v_A t_cool, enhancing thermal instability. This effect is surprisingly independent of magnetic field orientation or cooling curve shape, and sets in even at very low magnetic field strengths. Magnetic fields critically modify both the amplitude and morphology of thermal instability, with δρ/ρ ∝ β^(-1/2), where β is the ratio of thermal to magnetic pressure. In galactic halos, magnetic fields can render gas throughout the entire halo thermally unstable, and may be an attractive explanation for the ubiquity of cold gas, even in the halos of passive, quenched galaxies.

  7. Ice cover affects the growth of a stream-dwelling fish.

    PubMed

    Watz, Johan; Bergman, Eva; Piccolo, John J; Greenberg, Larry

    2016-05-01

    Protection provided by shelter is important for survival and affects the time and energy budgets of animals. It has been suggested that in fresh waters at high latitudes and altitudes, surface ice during winter functions as overhead cover for fish, reducing the predation risk from terrestrial piscivores. We simulated ice cover by suspending plastic sheeting over five 30-m-long stream sections in a boreal forest stream and examined its effects on the growth and habitat use of brown trout (Salmo trutta) during winter. Trout that spent the winter under the artificial ice cover grew more than those in the control (uncovered) sections. Moreover, tracking of trout tagged with passive integrated transponders showed that in the absence of the artificial ice cover, habitat use during the day was restricted to the stream edges, often under undercut banks, whereas under the simulated ice cover condition, trout used the entire width of the stream. These results indicate that the presence of surface ice cover may improve the energetic status and broaden habitat use of stream fish during winter. It is therefore likely that reductions in the duration and extent of ice cover due to climate change will alter time and energy budgets, with potentially negative effects on fish production.

  8. Release from or through a wax matrix system. I. Basic release properties of the wax matrix system.

    PubMed

    Yonezawa, Y; Ishida, S; Sunada, H

    2001-11-01

    The release properties of a wax matrix tablet were examined. To obtain basic release properties, the wax matrix tablet was prepared from a physical mixture of drug and wax powder (hydrogenated castor oil) at a fixed mixing ratio. Release from the single flat-faced surface or the curved side surface of the wax matrix tablet was examined. The applicability of the square-root-of-time law and the Higuchi equation was confirmed. The release rate constant expressed as g/min^(1/2) changed with the release direction; however, the release rate constant expressed as g/(cm^2·min^(1/2)) was almost the same. Hence it was suggested that the release property was essentially the same and the wax matrix structure was uniform, independent of release surface or direction, at a fixed mixing ratio. However, these equations could not describe the entire release process. The applicability of a semilogarithmic equation was not as good as that of the square-root-of-time law or the Higuchi equation, but the semilogarithmic equation could simulate the entire release process, even though the fit was somewhat poor. Hence it was suggested that the semilogarithmic equation was sufficient to describe the release process. The release rate constant varied with release direction; however, these release rate constants could be expressed as a function of the effective surface area and the initial amount, independent of the release direction.
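
    A short sketch contrasting the two empirical models discussed above: the Higuchi square-root-of-time law, Q(t) = k*sqrt(t), and a semilogarithmic (first-order) form, Q(t) = M*(1 - exp(-k*t)). The release data are hypothetical.

        import numpy as np
        from scipy.optimize import curve_fit

        t = np.array([15, 30, 60, 120, 240, 480, 960.0])       # minutes
        Q = np.array([4.8, 6.9, 9.5, 13.2, 18.0, 24.1, 30.5])  # mg released

        higuchi = lambda t, k: k * np.sqrt(t)                   # Q = k sqrt(t)
        semilog = lambda t, M, k: M * (1.0 - np.exp(-k * t))    # first-order

        for name, f, p0 in [("Higuchi", higuchi, (1.0,)),
                            ("semilog", semilog, (35.0, 0.002))]:
            p, _ = curve_fit(f, t, Q, p0=p0)
            rss = np.sum((Q - f(t, *p)) ** 2)   # residual sum of squares
            print(f"{name}: params={np.round(p, 4)}, RSS={rss:.2f}")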

  9. A statistical approach to the life cycle analysis of cumulus clouds selected in a virtual reality environment

    NASA Astrophysics Data System (ADS)

    Heus, Thijs; Jonker, Harm J. J.; van den Akker, Harry E. A.; Griffith, Eric J.; Koutek, Michal; Post, Frits H.

    2009-03-01

    In this study, a new method is developed to investigate the entire life cycle of shallow cumuli in large eddy simulations. Although trained observers have no problem distinguishing the different life stages of a cloud, this process proves difficult to automate, because cloud-splitting and cloud-merging events complicate the distinction between a single system divided into several cloudy parts and two independent systems that collided. Because human perception is well equipped to capture and make sense of these time-dependent three-dimensional features, a combination of automated constraints and human inspection in a three-dimensional virtual reality environment is used to select clouds that are exemplary in their behavior throughout their entire life span. Three specific cases (ARM, BOMEX, and BOMEX without large-scale forcings) are analyzed in this way, and the considerable number of selected clouds warrants reliable statistics of cloud properties conditioned on the phase of their life cycle. The most dominant feature in this statistical life cycle analysis is the pulsating growth that is present throughout the entire lifetime of the cloud, independent of the case and of the large-scale forcings. The pulses are a self-sustained phenomenon, driven by a balance between buoyancy and horizontal convergence of dry air. The convective inhibition just above the cloud base plays a crucial role as a barrier for the cloud to overcome in its infancy stage, and as a buffer region later on, ensuring a steady supply of buoyancy into the cloud.

  10. A comparison of results from two simulators used for studies of astronaut maneuvering units. [with application to Skylab program

    NASA Technical Reports Server (NTRS)

    Stewart, E. C.; Cannaday, R. L.

    1973-01-01

    A comparison of the results from a fixed-base, six-degree-of-freedom simulator and a moving-base, three-degree-of-freedom simulator was made for a close-in, EVA-type maneuvering task in which visual cues of a target spacecraft were used for guidance. The maneuvering unit (the foot-controlled maneuvering unit of Skylab Experiment T020) employed an on-off acceleration-command control system operated entirely by the feet. Maneuvers by two test subjects were made in the fixed-base simulator in six and three degrees of freedom, and in the moving-base simulator under uncontrolled and controlled EVA-type visual cue conditions. Comparisons of pilot ratings and 13 different quantitative parameters from the two simulators are made. Different results were obtained from the two simulators, and the effects of limited degrees of freedom and uncontrolled visual cues are discussed.

  11. Simulations of hydrologic response in the Apalachicola-Chattahoochee-Flint River Basin, Southeastern United States

    USGS Publications Warehouse

    LaFontaine, Jacob H.; Jones, L. Elliott; Painter, Jaime A.

    2017-12-29

    A suite of hydrologic models has been developed for the Apalachicola-Chattahoochee-Flint River Basin (ACFB) as part of the National Water Census, a U.S. Geological Survey research program that focuses on developing new water accounting tools and assessing water availability and use at the regional and national scales. Seven hydrologic models were developed using the Precipitation-Runoff Modeling System (PRMS), a deterministic, distributed-parameter, process-based system that simulates the effects of precipitation, temperature, land cover, and water use on basin hydrology. A coarse-resolution PRMS model was developed for the entire ACFB, and six fine-resolution PRMS models were developed for six subbasins of the ACFB. The coarse-resolution model was loosely coupled with a groundwater model to better assess the effects of water use on streamflow in the lower ACFB, a complex geologic setting with karst features. The PRMS coarse-resolution model was used to provide inputs of recharge to the groundwater model, which in turn provide simulations of groundwater flow that were aggregated with PRMS-based simulations of surface runoff and shallow-subsurface flow. Simulations without the effects of water use were developed for each model for at least the calendar years 1982–2012 with longer periods for the Potato Creek subbasin (1942–2012) and the Spring Creek subbasin (1952–2012). Water-use-affected flows were simulated for 2008–12. Water budget simulations showed heterogeneous distributions of precipitation, actual evapotranspiration, recharge, runoff, and storage change across the ACFB. Streamflow volume differences between no-water-use and water-use simulations were largest along the main stem of the Apalachicola and Chattahoochee River Basins, with streamflow percentage differences largest in the upper Chattahoochee and Flint River Basins and Spring Creek in the lower Flint River Basin. Water-use information at a shorter time step and a fully coupled simulation in the lower ACFB may further improve water availability estimates and hydrologic simulations in the basin.

  12. Textbook Multigrid Efficiency for Computational Fluid Dynamics Simulations

    NASA Technical Reports Server (NTRS)

    Brandt, Achi; Thomas, James L.; Diskin, Boris

    2001-01-01

    Considerable progress over the past thirty years has been made in the development of large-scale computational fluid dynamics (CFD) solvers for the Euler and Navier-Stokes equations. Computations are used routinely to design the cruise shapes of transport aircraft through complex-geometry simulations involving the solution of 25-100 million equations; in this arena the number of wind-tunnel tests for a new design has been substantially reduced. However, simulations of the entire flight envelope of the vehicle, including maximum lift, buffet onset, flutter, and control effectiveness, have not been as successful in eliminating the reliance on wind-tunnel testing. These simulations involve unsteady flows with more separation and stronger shock waves than at cruise. The main reasons limiting further inroads of CFD into the design process are: (1) the reliability of turbulence models; and (2) the time and expense of the numerical simulation. Because of the prohibitive resolution requirements of direct simulations at high Reynolds numbers, transition and turbulence modeling is expected to remain an issue for the near term. This paper addresses the latter problem by attempting to attain optimal efficiency in solving the governing equations. Current CFD codes based on multigrid acceleration techniques and multistage Runge-Kutta time-stepping schemes are typically able to converge lift and drag values for cruise configurations within approximately 1000 residual evaluations. An optimally convergent method is defined as having textbook multigrid efficiency (TME), meaning the solutions to the governing system of equations are attained in a computational work that is a small (less than 10) multiple of the operation count in the discretized system of equations (the residual equations). In this paper, a distributed relaxation approach to achieving TME for the Reynolds-averaged Navier-Stokes (RANS) equations is discussed, along with the foundations that form the basis of this approach. Because the governing equations are a set of coupled nonlinear conservation equations with discontinuities (shocks, slip lines, etc.) and singularities (flow- or grid-induced), the difficulties are many. This paper summarizes recent progress towards the attainment of TME in basic CFD simulations.
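
    To make the TME benchmark concrete, the sketch below is a textbook V-cycle for a 1-D Poisson problem: a few cycles, each costing a handful of residual evaluations, reduce the error to the discretization level. This is a generic illustration of textbook multigrid efficiency, not the paper's distributed-relaxation scheme for RANS.

        import numpy as np

        def vcycle(u, f, nu=3):
            # One V-cycle for -u'' = f on [0, 1] with zero boundary values
            n = u.size - 1
            h2 = (1.0 / n) ** 2
            for _ in range(nu):                    # pre-smoothing (Gauss-Seidel)
                for i in range(1, n):
                    u[i] = 0.5 * (u[i - 1] + u[i + 1] + h2 * f[i])
            r = np.zeros_like(u)                   # residual r = f + u''
            r[1:n] = f[1:n] + (u[:n - 1] - 2.0 * u[1:n] + u[2:]) / h2
            if n > 2:
                ec = vcycle(np.zeros(n // 2 + 1), r[::2].copy(), nu)
                e = np.zeros_like(u)               # prolong coarse correction
                e[::2] = ec
                e[1::2] = 0.5 * (ec[:-1] + ec[1:])
                u += e
            for _ in range(nu):                    # post-smoothing
                for i in range(1, n):
                    u[i] = 0.5 * (u[i - 1] + u[i + 1] + h2 * f[i])
            return u

        n = 64
        x = np.linspace(0.0, 1.0, n + 1)
        f = np.pi**2 * np.sin(np.pi * x)           # exact solution: sin(pi x)
        u = np.zeros_like(x)
        for _ in range(8):                         # a few V-cycles suffice
            u = vcycle(u, f)
        print("max error:", np.abs(u - np.sin(np.pi * x)).max())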

  13. Dynamical Sequestration of the Moon-Forming Impactor in Co-Orbital Resonance with Earth

    NASA Astrophysics Data System (ADS)

    Kortenkamp, Stephen J.; Hartmann, William J.

    2015-11-01

    Recent concerns about the giant impact hypothesis for the origin of the moon, and an associated “isotope crisis,” are assuaged if the impactor was a local object that formed near Earth and the impact occurred relatively late. We investigated a scenario that may meet these criteria, with the moon-forming impactor originating in 1:1 co-orbital resonance with Earth. Using N-body numerical simulations we explored the dynamical consequences of placing Mars-mass companions in various co-orbital configurations with a proto-Earth having 90% of its current mass. We modeled configurations that include the four terrestrial planets as well as configurations that also include the four giant planets. In both the 4- and 8-planet models we found that a single additional Mars-mass companion typically remains a stable co-orbital of Earth for the entire 250 million year (Myr) duration of our simulations (33 of 34 simulations). In an effort to destabilize such a system we carried out an additional 45 simulations that included a second Mars-mass co-orbital companion. Even with two Mars-mass companions sharing Earth’s orbit, most of these models (28) also remained stable for the entire 250 Myr duration of the simulations. Of the 17 two-companion models that eventually became unstable, 12 produced impacts between Earth and an escaping co-orbital companion. The average delay we observed for an impact of a Mars-mass companion with Earth was 101 Myr, and the longest delay was 221 Myr. Several of the stable simulations involved unusual 3-planet co-orbital configurations that could exhibit interesting observational signatures in planetary transit surveys.

  14. Au nanostructure-decorated TiO2 nanowires exhibiting photoactivity across entire UV-visible region for photoelectrochemical water splitting.

    PubMed

    Pu, Ying-Chih; Wang, Gongming; Chang, Kao-Der; Ling, Yichuan; Lin, Yin-Kai; Fitzmorris, Bob C; Liu, Chia-Ming; Lu, Xihong; Tong, Yexiang; Zhang, Jin Z; Hsu, Yung-Jung; Li, Yat

    2013-08-14

    Here we demonstrate that the photoactivity of Au-decorated TiO2 electrodes for photoelectrochemical water oxidation can be effectively enhanced in the entire UV-visible region from 300 to 800 nm by manipulating the shape of the decorated Au nanostructures. The samples were prepared by carefully depositing Au nanoparticles (NPs), Au nanorods (NRs), and a mixture of Au NPs and NRs on the surface of TiO2 nanowire arrays. As compared with bare TiO2, Au NP-decorated TiO2 nanowire electrodes exhibited significantly enhanced photoactivity in both the UV and visible regions. For Au NR-decorated TiO2 electrodes, the photoactivity enhancement was, however, observed in the visible region only, with the largest photocurrent generation achieved at 710 nm. Significantly, TiO2 nanowires deposited with a mixture of Au NPs and NRs showed enhanced photoactivity in the entire UV-visible region. Monochromatic incident photon-to-electron conversion efficiency measurements indicated that excitation of the surface plasmon resonance (SPR) of Au is responsible for the enhanced photoactivity of Au nanostructure-decorated TiO2 nanowires. Photovoltage experiments showed that the enhanced photoactivity of Au NP-decorated TiO2 in the UV region was attributable to effective surface passivation by the Au NPs. Furthermore, 3D finite-difference time-domain simulation was performed to investigate the electric field amplification at the interface between the Au nanostructures and TiO2 upon SPR excitation. The results suggested that the enhanced photoactivity of Au NP-decorated TiO2 in the UV region was partially due to the increased optical absorption of TiO2 associated with SPR electric field amplification. The current study could provide a new paradigm for designing plasmonic metal/semiconductor composite systems to effectively harvest the entire UV-visible light spectrum for solar fuel production.

  15. A Java-Enabled Interactive Graphical Gas Turbine Propulsion System Simulator

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Afjeh, Abdollah A.

    1997-01-01

    This paper describes a gas turbine simulation system which utilizes the newly developed Java language environment software system. The system provides an interactive graphical environment which allows the quick and efficient construction and analysis of arbitrary gas turbine propulsion systems. The simulation system couples a graphical user interface, developed using the Java Abstract Window Toolkit, and a transient, space-averaged, aero-thermodynamic gas turbine analysis method, both entirely coded in the Java language. The combined package provides analytical, graphical and data management tools which allow the user to construct and control engine simulations by manipulating graphical objects on the computer display screen. Distributed simulations, including parallel processing and distributed database access across the Internet and World-Wide Web (WWW), are made possible through services provided by the Java environment.

  16. Model-free estimation of the effective correlation time for C–H bond reorientation in amphiphilic bilayers: ¹H–¹³C solid-state NMR and MD simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferreira, Tiago Mendes, E-mail: tiago.ferreira@fkem1.lu.se; Physical Chemistry, Lund University, P.O. Box 124, SE-221 00 Lund; Ollila, O. H. Samuli

    2015-01-28

    Molecular dynamics (MD) simulations give atomically detailed information on structure and dynamics in amphiphilic bilayer systems on timescales up to about 1 μs. The reorientational dynamics of the C–H bonds is conventionally verified by measurements of ¹³C or ²H nuclear magnetic resonance (NMR) longitudinal relaxation rates R₁, which are more sensitive to motional processes with correlation times close to the inverse Larmor frequency, typically around 1-10 ns on standard NMR instrumentation, and are thus less sensitive to the 10-1000 ns timescale motion that can be observed in the MD simulations. We propose an experimental procedure for atomically resolved, model-free estimation of the C–H bond effective reorientational correlation time τₑ, which includes contributions from the entire range of all-atom MD timescales and can be calculated directly from the MD trajectories. The approach is based on measurements of ¹³C R₁ and R₁ρ relaxation rates, as well as ¹H–¹³C dipolar couplings, and is applicable to anisotropic liquid crystalline lipid or surfactant systems using a conventional solid-state NMR spectrometer and samples with natural isotopic composition. The procedure is demonstrated on a fully hydrated lamellar phase of 1-palmitoyl-2-oleoyl-phosphatidylcholine, yielding values of τₑ from 0.1 ns for the methyl groups in the choline moiety and at the end of the acyl chains to 3 ns for the g₁ methylene group of the glycerol backbone. MD simulations performed with a widely used united-atom force field reproduce the τₑ-profile of the major part of the acyl chains but underestimate the dynamics of the glycerol backbone and adjacent molecular segments. The measurement of experimental τₑ-profiles can be used to study subtle effects on C–H bond reorientational motions in anisotropic liquid crystals, as well as to validate the C–H bond reorientation dynamics predicted by MD simulations of amphiphilic bilayers such as lipid membranes.
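
    The effective correlation time defined above can be computed from a trajectory with a few lines of numpy. The sketch below is our own illustrative reading of the model-free definition: τₑ as the time integral of the normalized P₂ autocorrelation of the C–H unit vector above its order-parameter plateau. The plateau estimator and array layout are assumptions, not the authors' implementation.

```python
import numpy as np

def effective_correlation_time(u, dt):
    """tau_e from C-H unit vectors u, shape (n_frames, 3), sampled every dt ns."""
    n = len(u)
    max_lag = n // 2
    c = np.empty(max_lag)
    for lag in range(max_lag):
        cos = np.einsum('ij,ij->i', u[:n - lag], u[lag:n])
        c[lag] = np.mean(1.5 * cos**2 - 0.5)       # P2 Legendre polynomial
    tail = max(1, max_lag // 10)
    s2 = c[-tail:].mean()                          # plateau estimate -> S^2
    return np.trapz((c - s2) / (1.0 - s2), dx=dt)  # tau_e in ns
```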

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mermigkis, Panagiotis G.; Tsalikis, Dimitrios G.; Institute of Chemical Engineering and High Temperature Chemical Processes, GR 26500 Patras

    A kinetic Monte Carlo (kMC) simulation algorithm is developed for computing the effective diffusivity of water molecules in a poly(methyl methacrylate) (PMMA) matrix containing carbon nanotubes (CNTs) at several loadings. The simulations are conducted on a cubic lattice whose bonds are assigned rate constants governing the elementary jump events of water molecules from one lattice site to another. Lattice sites belonging to PMMA domains of the membrane are assigned different rates than lattice sites belonging to CNT domains. Values of these two rate constants are extracted from available numerical data for water diffusivity within a PMMA matrix and a CNT, pre-computed on the basis of independent atomistic molecular dynamics simulations, which show that water diffusivity in CNTs is 3 orders of magnitude faster than in PMMA. Our discrete-space, continuous-time kMC simulation results for several PMMA-CNT nanocomposite membranes (characterized by different values of CNT length L and diameter D and by different loadings of the matrix in CNTs) demonstrate that the overall or effective diffusivity, D_eff, of water in the entire polymeric membrane is of the same order of magnitude as its diffusivity in PMMA domains and increases only linearly with the CNT concentration C (vol.%). For a constant value of the concentration C, D_eff is found to vary practically linearly also with the CNT aspect ratio L/D. The kMC data allow us to propose a simple bilinear expression for D_eff as a function of C and L/D that describes the numerical data for water mobility in the membrane extremely accurately. Additional simulations with two different CNT configurations (completely random versus aligned) show that CNT orientation in the polymeric matrix has only a minor effect on D_eff (as long as CNTs do not fully penetrate the membrane). We have also extensively analyzed and quantified sublinear (anomalous) diffusive phenomena over small to moderate times and correlated them with the time needed for penetrant water molecules to explore the available large, fast-diffusing CNT pores before Fickian diffusion is reached.
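
    The essence of such a two-rate lattice walk can be reproduced in a few dozen lines. The sketch below is a toy version under stated assumptions (a single axially spanning CNT-like block, site-based rather than bond-based rates, and illustrative rate values); it estimates an effective diffusivity from the mean-squared displacement of independent continuous-time walkers.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 64
k_pmma, k_cnt = 1.0, 1000.0            # CNT jumps ~3 orders of magnitude faster
is_cnt = np.zeros((L, L, L), bool)
is_cnt[28:36, 28:36, :] = True         # one fast channel spanning the box (toy)

def run_walker(n_steps):
    pos = rng.integers(0, L, 3)
    t, disp = 0.0, np.zeros(3)
    for _ in range(n_steps):
        k = k_cnt if is_cnt[tuple(pos % L)] else k_pmma
        t += rng.exponential(1.0 / (6 * k))        # residence time, 6 neighbors
        step = np.zeros(3, int)
        step[rng.integers(3)] = rng.choice((-1, 1))
        pos = pos + step                           # periodic via pos % L above
        disp = disp + step
    return disp @ disp / (6.0 * t)                 # D = <r^2> / (6 t)

D_eff = np.mean([run_walker(20_000) for _ in range(50)])
print("effective diffusivity (lattice units):", D_eff)
```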

  18. Towards a Computational Framework for Modeling the Impact of Aortic Coarctations Upon Left Ventricular Load

    PubMed Central

    Karabelas, Elias; Gsell, Matthias A. F.; Augustin, Christoph M.; Marx, Laura; Neic, Aurel; Prassl, Anton J.; Goubergrits, Leonid; Kuehne, Titus; Plank, Gernot

    2018-01-01

    Computational fluid dynamics (CFD) models of blood flow in the left ventricle (LV) and aorta are important tools for analyzing the mechanistic links between myocardial deformation and flow patterns. Typically, the use of image-based kinematic CFD models prevails in applications such as predicting the acute response to interventions which alter LV afterload conditions. However, such models are limited in their ability to analyze any impacts upon LV load or key biomarkers known to be implicated in driving remodeling processes, as LV function is not accounted for in a mechanistic sense. This study addresses these limitations by reporting on progress made toward a novel electro-mechano-fluidic (EMF) model that represents the entire physics of LV electromechanics (EM) based on first principles. A biophysically detailed finite element (FE) model of LV EM was coupled with an FE-based CFD solver for moving domains using an arbitrary Eulerian-Lagrangian (ALE) formulation. Two clinical cases of patients suffering from aortic coarctation (CoA) were built and parameterized based on clinical data under pre-treatment conditions. For one patient case, simulations under post-treatment conditions, after geometric repair of the CoA by a virtual stenting procedure, were compared against pre-treatment results. Numerical stability of the approach was demonstrated by analyzing mesh quality and solver performance under the significantly large deformations of the LV blood pool. Further, computational tractability and compatibility with clinical time scales were investigated by performing strong scaling benchmarks up to 1536 compute cores. The overall cost of the entire workflow for building, fitting, and executing EMF simulations was comparable to that reported for image-based kinematic models, suggesting that EMF models show potential of evolving into a viable clinical research tool. PMID:29892227

  19. The Commonality Between Approaches to Determine Jump Fatigue During Basketball Activity in Junior Players: In-Game Versus Across-Game Decrements.

    PubMed

    Scanlan, Aaron T; Fox, Jordan L; Borges, Nattai R; Dalbo, Vincent J

    2017-02-01

    Declines in high-intensity activity during game play (in-game approach) and performance tests measured pre- and postgame (across-game approach) have been used to assess player fatigue in basketball. However, a direct comparison of these approaches is not available. Consequently, this study examined the commonality between in- and across-game jump fatigue during simulated basketball game play. Australian, state-level, junior male basketball players (n = 10; 16.6 ± 1.1 y, 182.4 ± 4.3 cm, 68.3 ± 10.2 kg) completed 4 × 10-min standardized quarters of simulated basketball game play. In-game jump height during game play was measured using video analysis, while across-game jump height was determined pre-, mid-, and postgame play using an in-ground force platform. Jump height was determined using the flight-time method, with jump decrement calculated for each approach across the first half, second half, and entire game. A greater jump decrement was apparent for the in-game approach than for the across-game approach in the first half (37.1% ± 11.6% vs 1.7% ± 6.2%; P = .005; d = 3.81, large), while nonsignificant, large differences were evident between approaches in the second half (d = 1.14) and entire game (d = 1.83). Nonsignificant associations were evident between in-game and across-game jump decrement, with shared variances of 3-26%. Large differences and a low commonality were observed between in- and across-game jump fatigue during basketball game play, suggesting that these approaches measure different constructs. Based on our findings, it is not recommended that basketball coaches use these approaches interchangeably to monitor player fatigue across the season.
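
    The flight-time method referenced above reduces to a one-line formula: with symmetric takeoff and landing, jump height follows from the flight time t as h = g·t²/8. A small sketch with hypothetical numbers:

```python
G = 9.81  # m/s^2

def jump_height(flight_time_s):
    """Jump height (m) from flight time, h = g * t^2 / 8."""
    return G * flight_time_s**2 / 8.0

def decrement_pct(h_ref, h_now):
    """Percentage decline relative to a reference jump height."""
    return 100.0 * (h_ref - h_now) / h_ref

print(jump_height(0.55))            # ~0.37 m for a 0.55 s flight
print(decrement_pct(0.40, 0.25))    # 37.5% decline
```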

  20. Mathematical model of whole-process calculation for bottom-blowing copper smelting

    NASA Astrophysics Data System (ADS)

    Li, Ming-zhou; Zhou, Jie-min; Tong, Chang-ren; Zhang, Wen-hai; Li, He-song

    2017-11-01

    The distribution law of materials in smelting products is key to cost accounting and contaminant control. However, this distribution is difficult to determine quickly and accurately by sampling and analysis alone. Mathematical models for material and heat balance in bottom-blowing smelting, converting, anode furnace refining, and electrolytic refining were established based on the principles of material (element) conservation, energy conservation, and control-index constraints in copper bottom-blowing smelting. A simulation of the entire bottom-blowing copper smelting process was implemented on the self-developed MetCal software platform, and a whole-process simulation for an enterprise in China was then conducted. Results indicated that the quantity and composition of unknown materials, as well as heat balance information, can be quickly calculated using the model. Comparison with production data revealed that the model essentially reproduces the distribution law of the materials in bottom-blowing copper smelting. This finding provides theoretical guidance for mastering the performance of the entire process.
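
    The element-conservation idea underlying such balance models can be illustrated with a toy linear solve: given assays (mass fractions) of a few elements in the feed and in each product stream, the unknown product masses follow from one matrix equation. Every species, stream, and number below is hypothetical, not taken from the paper.

```python
import numpy as np

feed_mass = 100.0                       # t/h of concentrate (hypothetical)
feed = np.array([0.25, 0.28, 0.30])     # Cu, Fe, S mass fractions in the feed
# Columns: matte, slag, off-gas; rows: Cu, Fe, S fractions in each product.
A = np.array([[0.60, 0.005, 0.00],
              [0.12, 0.400, 0.00],
              [0.22, 0.010, 0.95]])
masses = np.linalg.solve(A, feed_mass * feed)   # element balance per product
print(dict(zip(("matte", "slag", "off-gas"), masses.round(1))))
```

    A real whole-process model adds energy balances and further constraints (for instance, oxygen and flux inputs), but the conservation core has this linear structure.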

  1. Scripting Module for the Satellite Orbit Analysis Program (SOAP)

    NASA Technical Reports Server (NTRS)

    Carnright, Robert; Paget, Jim; Coggi, John; Stodden, David

    2008-01-01

    This add-on module to the SOAP software can perform changes to simulation objects based on the occurrence of specific conditions. This allows the software to encompass the simulation response to scheduled or physical events. Users can manipulate objects in the simulation environment under programmatic control. Inputs to the scripting module are Actions, Conditions, and the Script. Actions are arbitrary modifications to constructs such as Platform Objects (i.e., satellites), Sensor Objects (representing instruments or communication links), or Analysis Objects (user-defined logical or numeric variables). Examples of actions include changes to a satellite orbit (Δv), changing a sensor-pointing direction, and the manipulation of a numerical expression. Conditions represent the circumstances under which Actions are performed and can be couched in If-Then-Else logic, like performing a Δv at specific times or adding to the spacecraft power only when it is being illuminated by the Sun. The SOAP script represents the entire set of conditions being considered over a specific time interval. The output of the scripting module is a series of events, which are changes to objects at specific times. As the SOAP simulation clock runs forward, the scheduled events are performed. If the user sets the clock back in time, the events within that interval are automatically undone. The scripting module offers an interface for defining scripts in which the user does not have to remember the vocabulary of various keywords. Actions can be captured by employing the same user interface that is used to define the objects themselves. Conditions can be set to invoke Actions by selecting them from pull-down lists. Users define the script by selecting from the pool of defined conditions. Many space systems have to react to arbitrary events that can occur from scheduling or from the environment. For example, an instrument may cease to draw power when the area that it is tasked to observe is not in view. The contingency of the planetary body blocking the line of sight is a condition upon which the power being drawn is set to zero. It remains at zero until the observation objective is again in view. Computing the total power drawn by the instrument over a period of days or weeks can now take such factors into consideration. What makes the architecture especially powerful is that the scripting module can look ahead and behind in simulation time, and this temporal versatility can be leveraged in displays such as x-y plots. For example, a plot of a satellite's altitude as a function of time can take changes to the orbit into account.
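
    The condition/action pattern described above can be sketched in a few lines. The class below is our illustration of the idea (predicates fire actions and log undo callbacks so the event timeline can be rolled back when the clock rewinds), not the SOAP interface itself.

```python
class Script:
    """Toy condition/action scripting engine with a rewindable event log."""
    def __init__(self):
        self.rules = []    # (condition, action, undo) triples
        self.events = []   # (time, undo) log of fired events

    def when(self, condition, action, undo):
        self.rules.append((condition, action, undo))

    def step(self, t, state):
        for condition, action, undo in self.rules:
            if condition(t, state):
                action(state)
                self.events.append((t, undo))

    def rewind(self, t, state):
        # Undo every event that fired later than the new clock time t.
        while self.events and self.events[-1][0] > t:
            _, undo = self.events.pop()
            undo(state)

# Example rule: zero the instrument power whenever the target is out of view.
script = Script()
script.when(lambda t, s: not s["in_view"],
            lambda s: s.update(power=0.0),
            lambda s: s.update(power=s["nominal_power"]))
```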

  2. Stellar tracking attitude reference system

    NASA Technical Reports Server (NTRS)

    Klestadt, B.

    1974-01-01

    A satellite precision attitude control system was designed, based on the use of STARS as the principal sensing system. The entire system was analyzed and simulated in detail, considering the nonideal properties of the control and sensing components and realistic spacecraft mass properties. Experimental results were used to improve the star tracker noise model. The results of the simulation indicate that STARS performs in general as predicted in a realistic application and should be a strong contender in most precision earth pointing applications.

  3. A simulation of streaming flows associated with acoustic levitators

    NASA Astrophysics Data System (ADS)

    Rednikov, A.; Riley, N.

    2002-04-01

    Steady-state acoustic streaming flow patterns have been observed by Trinh and Robey [Phys. Fluids 6, 3567 (1994)] during the operation of a variety of single-axis ultrasonic levitators in a gaseous environment. Microstreaming around levitated samples is superimposed on the streaming flow which is observed in the levitator even in the absence of any particle therein. In this paper, through physical arguments and numerical and analytical simulations, we provide entirely satisfactory interpretations of the observed flow patterns in both isothermal and nonisothermal situations.

  4. Multiscale modeling of fluid flow and mass transport

    NASA Astrophysics Data System (ADS)

    Masuoka, K.; Yamamoto, H.; Bijeljic, B.; Lin, Q.; Blunt, M. J.

    2017-12-01

    In recent years, several studies have reported simulations of fluid flow in the pore spaces of rocks using the Navier-Stokes equations. These studies mostly adopt X-ray CT to create 3-D numerical grids of the pores at the micro-scale. However, the results may be of low accuracy when the rock has a wide pore-size distribution, because pores smaller than the resolution of the X-ray CT may be neglected. In recent laboratory tracer tests, in which fresh water was injected into brine-saturated Ryukyu limestone, we found that the chloride concentration took an unexpectedly long time to decrease. This phenomenon can be explained by weak connectivity of the pore networks. It is therefore important to simulate the entire pore space, including very small pores in which diffusion is dominant. We have developed a new methodology for multi-level modeling of pore-scale fluid flow in porous media. The approach combines pore-scale analysis with Darcy-flow analysis using two types of X-ray CT images at different resolutions. Results of the numerical simulations showed a close match with the experimental results. The proposed methodology is an enhancement for analyzing mass transport and flow phenomena in rocks with complicated pore structures.
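
    One simple way to realize such a two-resolution coupling, given here as our own illustrative assumption rather than the authors' workflow, is to coarsen the fine-resolution pore mask into a porosity field and convert each coarse cell's porosity into a permeability for the Darcy solver, for example with a Kozeny-Carman estimate:

```python
import numpy as np

def kozeny_carman(phi, d_grain=1e-4):
    """Permeability (m^2) from porosity; the grain size d_grain is assumed."""
    return phi**3 * d_grain**2 / (180.0 * (1.0 - phi)**2)

rng = np.random.default_rng(2)
fine = rng.random((256, 256, 256)) < 0.3     # toy pore mask standing in for fine CT
block = 32                                   # coarsening factor
n = fine.shape[0] // block
phi = fine.reshape(n, block, n, block, n, block).mean(axis=(1, 3, 5))
k = kozeny_carman(phi)                       # permeability field for the Darcy grid
print(k.shape, k.mean())
```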

  5. Simulating Dissolution of Intravitreal Triamcinolone Acetonide Suspensions in an Anatomically Accurate Rabbit Eye Model

    PubMed Central

    Horner, Marc; Muralikrishnan, R.

    2010-01-01

    Purpose: A computational fluid dynamics (CFD) study examined the impact of particle size on the dissolution rate and residence of intravitreal suspension depots of Triamcinolone Acetonide (TAC). Methods: A model of the rabbit eye was constructed using insights from high-resolution NMR imaging studies (Sawada 2002). The current model was compared to other published simulations in its ability to predict clearance of various intravitreally injected materials. Suspension depots were constructed by explicitly rendering individual particles in various configurations: 4 or 16 mg drug confined to a 100 μL spherical depot, or 4 mg exploded to fill the entire vitreous. Particle size was reduced systematically in each configuration. The convective diffusion/dissolution process was simulated using a multiphase model. Results: Release rate became independent of particle diameter below a certain value. The size-independent limits occurred for particle diameters ranging from 77 to 428 μm, depending upon the depot configuration. Residence time predicted for the spherical depots in the size-independent limit was comparable to that observed in vivo. Conclusions: Since the size-independent limit was several-fold greater than the particle size of commercially available pharmaceutical TAC suspensions, differences in particle size amongst such products are predicted to be immaterial to their duration or performance. PMID:20467888

  6. Numerical Simulations of Dynamical Mass Transfer in Binaries

    NASA Astrophysics Data System (ADS)

    Motl, P. M.; Frank, J.; Tohline, J. E.

    1999-05-01

    We will present results from our ongoing research project to simulate dynamically unstable mass transfer in near-contact binaries with mass ratios different from one. We employ a fully three-dimensional self-consistent field technique to generate synchronously rotating polytropic binaries. With our self-consistent field code we can create equilibrium binaries where one component is, by radius, within about 99% of filling its Roche lobe, for example. These initial configurations are evolved using a three-dimensional, Eulerian hydrodynamics code. We make no assumptions about the symmetry of the subsequent flow, and the entire binary system is evolved self-consistently under the influence of its own gravitational potential. For a given mass ratio and polytropic index of the binary components, mass transfer via Roche lobe overflow can be predicted to be stable or unstable through simple theoretical arguments. The validity of the approximations made in the stability calculations is tested against our numerical simulations. We acknowledge support from the U.S. National Science Foundation through grants AST-9720771, AST-9528424, and DGE-9355007. This research has been supported, in part, by grants of high-performance computing time on NPACI facilities at the San Diego Supercomputer Center, the Texas Advanced Computing Center and through the PET program of the NAVOCEANO DoD Major Shared Resource Center in Stennis, MS.

  7. Mesoscale model response to random, surface-based perturbations — A sea-breeze experiment

    NASA Astrophysics Data System (ADS)

    Garratt, J. R.; Pielke, R. A.; Miller, W. F.; Lee, T. J.

    1990-09-01

    The introduction into a mesoscale model of random (in space) variations in roughness length, or random (in space and time) surface perturbations of temperature and friction velocity, produces a measurable, but barely significant, response in the simulated flow dynamics of the lower atmosphere. The perturbations are an attempt to include the effects of sub-grid variability into the ensemble-mean parameterization schemes used in many numerical models. Their magnitude is set in our experiments by appeal to real-world observations of the spatial variations in roughness length and daytime surface temperature over the land on horizontal scales of one to several tens of kilometers. With sea-breeze simulations, comparisons of a number of realizations forced by roughness-length and surface-temperature perturbations with the standard simulation reveal no significant change in ensemble mean statistics, and only small changes in the sea-breeze vertical velocity. Changes in the updraft velocity for individual runs, of up to several cm s⁻¹ (compared to a mean of 14 cm s⁻¹), are directly the result of prefrontal temperature changes of 0.1 to 0.2 K, produced by the random surface forcing. The correlation and magnitude of the changes are entirely consistent with a gravity-current interpretation of the sea breeze.

  8. Large-eddy simulations of a solid-rocket booster jet

    NASA Astrophysics Data System (ADS)

    Paoli, Roberto; Poubeau, Adele; Cariolle, Daniel

    2014-11-01

    Emissions from solid-rocket boosters are responsible for a severe decrease in ozone concentration in the rocket plume during the first hours after a launch. The main source of ozone depletion is hydrogen chloride that is converted into chlorine in the high-temperature regions of the jet (afterburning). The objective of this study is to evaluate the active chlorine concentration in the plume of a solid-rocket booster using large-eddy simulations (LES). The gas is injected through the entire nozzle of the booster, and a local time-stepping method based on coupling multiple instances of a fluid solver is used to extend the computational domain up to 600 nozzle exit diameters. The methodology is validated for a non-reactive case by analyzing the flow characteristics of supersonic co-flowing underexpanded jets. Then, the chemistry of chlorine is studied offline using a complex chemistry solver and the LES data extracted from the mean trajectories of sample fluid particles. Finally, the online chemistry is analyzed by means of the multispecies version of the LES solver using a reduced chemistry scheme. The LES are able to capture the mixing of the exhaust with ambient air and the species concentrations, which is also useful for initializing atmospheric simulations on larger domains.

  9. Chemical Memory Reactions Induced Bursting Dynamics in Gene Expression

    PubMed Central

    Tian, Tianhai

    2013-01-01

    Memory is a ubiquitous phenomenon in biological systems, in which the present system state is not entirely determined by the current conditions but also depends on the time-evolutionary path of the system. Specifically, many memory phenomena are characterized by chemical memory reactions that may fire under particular system conditions. These conditional chemical reactions contradict the extant stochastic approaches for modeling chemical kinetics and have increasingly posed significant challenges to mathematical modeling and computer simulation. To tackle the challenge, I proposed a novel theory consisting of the memory chemical master equations and a memory stochastic simulation algorithm. A stochastic model for single-gene expression was proposed to illustrate the key function of memory reactions in inducing the bursting dynamics of gene expression that has been observed in experiments recently. The importance of memory reactions has been further validated by the stochastic model of the p53-MDM2 core module. Simulations showed that memory reactions are a major mechanism for realizing both sustained oscillations of p53 protein numbers in single cells and damped oscillations over a population of cells. These successful applications of the memory modeling framework suggest that this innovative theory is an effective and powerful tool to study memory processes and conditional chemical reactions in a wide range of complex biological systems. PMID:23349679

  10. Chemical memory reactions induced bursting dynamics in gene expression.

    PubMed

    Tian, Tianhai

    2013-01-01

    Memory is a ubiquitous phenomenon in biological systems, in which the present system state is not entirely determined by the current conditions but also depends on the time-evolutionary path of the system. Specifically, many memory phenomena are characterized by chemical memory reactions that may fire under particular system conditions. These conditional chemical reactions contradict the extant stochastic approaches for modeling chemical kinetics and have increasingly posed significant challenges to mathematical modeling and computer simulation. To tackle the challenge, I proposed a novel theory consisting of the memory chemical master equations and a memory stochastic simulation algorithm. A stochastic model for single-gene expression was proposed to illustrate the key function of memory reactions in inducing the bursting dynamics of gene expression that has been observed in experiments recently. The importance of memory reactions has been further validated by the stochastic model of the p53-MDM2 core module. Simulations showed that memory reactions are a major mechanism for realizing both sustained oscillations of p53 protein numbers in single cells and damped oscillations over a population of cells. These successful applications of the memory modeling framework suggest that this innovative theory is an effective and powerful tool to study memory processes and conditional chemical reactions in a wide range of complex biological systems.
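
    To make the notion of a conditional "memory reaction" concrete, the sketch below extends a plain Gillespie loop with one propensity gated by the system's history (the cumulative time spent above a threshold). The reactions, rates, and gating rule are invented for illustration; they are not the model of the paper.

```python
import math, random

random.seed(1)
x, t = 10, 0.0          # molecule count, time
time_high = 0.0         # memory: cumulative time spent with x > 15

while t < 100.0:
    a1 = 2.0                                  # production
    a2 = 0.1 * x                              # degradation
    a3 = 5.0 if time_high > 1.0 else 0.0      # memory-gated bursting
    a0 = a1 + a2 + a3
    dt = -math.log(random.random()) / a0      # exponential waiting time
    if x > 15:
        time_high += dt                       # accumulate the history variable
    t += dt
    r = random.random() * a0                  # pick which reaction fires
    if r < a1:
        x += 1
    elif r < a1 + a2:
        x -= 1
    else:
        x += 20                               # burst
```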

  11. Micron-scale Reactive Atomistic Simulation of Void Collapse and Hotspot Growth in PETN

    NASA Astrophysics Data System (ADS)

    Thompson, Aidan; Shan, Tzu-Ray; Wixom, Ryan

    2015-06-01

    Material defects and other heterogeneities such as dislocations, micro-porosity, and grain boundaries play key roles in the shock-induced initiation of detonation in energetic materials. We performed non-equilibrium molecular dynamics simulations to explore the effect of nanoscale voids on hotspot growth and initiation in micron-scale pentaerythritol tetranitrate (PETN) crystals under weak shock loading (Up = 1.25 km/s; Us = 4.5 km/s). We used the ReaxFF potential implemented in LAMMPS. We built a pseudo-2D PETN crystal with dimensions 0.3 μm × 0.22 μm × 1.3 nm containing a 20 nm cylindrical void. Once the initial shockwave traversed the entire sample, the shock-front absorbing boundary condition was applied, allowing the simulation to continue beyond 1 nanosecond. Results show an exponentially increasing hotspot growth rate. The hotspot morphology is initially symmetric about the void axis, but strong asymmetry develops at later times, due to strong coupling between exothermic chemistry, temperature, and divergent secondary shockwaves emanating from the collapsing void. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE National Nuclear Security Administration under Contract DE-AC04-94AL85000.

  12. Quantifying learning in medical students during a critical care medicine elective: a comparison of three evaluation instruments.

    PubMed

    Rogers, P L; Jacob, H; Rashwan, A S; Pinsky, M R

    2001-06-01

    To compare three different evaluative instruments and determine which is able to measure different aspects of medical student learning. Student learning was evaluated by using written examinations, an objective structured clinical examination, and a patient simulator that used two clinical scenarios before and after a structured critical care elective, using a crossover design. Twenty-four 4th-yr students enrolled in the critical care medicine elective. All students took a multiple-choice written examination; evaluated a live simulated critically ill patient, requested data from a nurse, and intervened as appropriate at different stations (objective structured clinical examination); and evaluated the computer-controlled patient simulator and intervened as appropriate. Students' knowledge was assessed by using a multiple-choice examination containing the same data incorporated into the other examinations. Student performance on the objective structured clinical examination was evaluated at five stations. Both the objective structured clinical examination and simulator tests were videotaped for subsequent scoring of responses, response quality, and response time. The videotapes were reviewed for specific behaviors by faculty masked to the time of examination. Students were expected to perform the following: a) assess airway, breathing, and circulation; b) prepare a mannequin for intubation; c) provide appropriate ventilator settings; d) manage hypotension; and e) request, interpret, and provide appropriate intervention for pulmonary artery catheter data. Students were expected to perform identical behaviors during the simulator examination; however, the entire examination was performed on the whole-body computer-controlled mannequin. The primary outcome measure was the difference in examination scores before and after the rotation. The mean preelective scores were 77 +/- 16%, 47 +/- 15%, and 41 +/- 14% for the written examination, objective structured clinical examination, and simulator, respectively, compared with 89 +/- 11%, 76 +/- 12%, and 62 +/- 15% after the elective (p < .0001). Prerotation scores for the written examination were significantly higher than those for the objective structured clinical examination or the simulator; postrotation scores were highest for the written examination and lowest for the simulator. Written examinations measure acquisition of knowledge but fail to predict whether students can apply knowledge to problem solving, whereas both the objective structured clinical examination and the computer-controlled patient simulator can be used as effective performance evaluation tools.

  13. Connecting source aggregating areas with distributive regions via Optimal Transportation theory.

    NASA Astrophysics Data System (ADS)

    Lanzoni, S.; Putti, M.

    2016-12-01

    We study the application of Optimal Transport (OT) theory to the transfer of water and sediments from a distributed aggregating source to a distributing area connected by an erodible hillslope. Starting from the Monge-Kantorovich equations, we derive a global energy functional that nonlinearly combines the cost of constructing the drainage network over the entire domain and the cost of water and sediment transportation through the network. It can be shown that the minimization of this functional is equivalent to the infinite-time solution of a system of diffusion partial differential equations coupled with transient ordinary differential equations, which closely resemble the classical conservation laws for water and sediment mass and momentum. We present several numerical simulations applied to realistic test cases. For example, the solution of the proposed model forms network configurations that share strong similarities with rill channels formed on a hillslope. At a larger scale, we obtain promising results in simulating the network patterns that ensure a progressive and continuous transition from a drainage area to a distributive receiving region.
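
    The abstract does not write the functional out; one plausible form, consistent with the description and with dynamic Monge-Kantorovich models in the literature (an assumption on our part, not a quotation from the paper), is

```latex
% \mu: network conductivity (transport capacity); u: potential driving the
% water/sediment fluxes; f: source/sink distribution; 0 < \gamma < 1 makes
% channelized (branched) configurations cheaper than diffuse transport.
\begin{equation}
  E[\mu] \;=\; \underbrace{\tfrac{1}{2}\int_{\Omega} \mu^{\gamma}\, d\Omega}_{\text{network construction}}
  \;+\; \underbrace{\tfrac{1}{2}\int_{\Omega} \mu\,\lvert\nabla u\rvert^{2}\, d\Omega}_{\text{transport cost}},
  \qquad -\nabla\cdot(\mu\,\nabla u) = f .
\end{equation}
```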

  14. Measurement of the shell decompression in direct-drive inertial-confinement-fusion implosions

    DOE PAGES

    Michel, D. T.; Hu, S. X.; Davis, A. K.; ...

    2017-05-10

    Measurements of the effect of adiabat (α) on the shell thickness were performed in direct-drive implosions. When the adiabat of the shell was reduced from α = 6 to α = 4.5, the shell thickness was measured to decrease from 75 μm to 60 μm, but when the adiabat was decreased further (α = 1.8), the shell thickness was measured to increase to 75 μm. The measured shell thickness, shell trajectories, neutron bang time, and neutron yield were reproduced by two-dimensional simulations that include laser imprint, nonlocal thermal transport, cross-beam energy transfer, and first-principles equation-of-state models. The minimum core size was measured to decrease from 40 μm to 30 μm, consistent with the reduction of the adiabat from α = 6 to α = 1.8. Simulations that neglected imprint reproduced the measured core size over the entire adiabat scan, but significantly underestimated the shell thickness for adiabats below ~3. These results show that the decompression of the shell measured for low-adiabat implosions was a result of laser imprint.

  15. Measurement of the shell decompression in direct-drive inertial-confinement-fusion implosions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michel, D. T.; Hu, S. X.; Davis, A. K.

    Measurements of the effect of adiabat (α) on the shell thickness were performed in direct-drive implosions. When the adiabat of the shell was reduced from α = 6 to α = 4.5, the shell thickness was measured to decrease from 75 μm to 60 μm, but when the adiabat was decreased further (α = 1.8), the shell thickness was measured to increase to 75 μm. The measured shell thickness, shell trajectories, neutron bang time, and neutron yield were reproduced by two-dimensional simulations that include laser imprint, nonlocal thermal transport, cross-beam energy transfer, and first-principles equation-of-state models. The minimum core size was measured to decrease from 40 μm to 30 μm, consistent with the reduction of the adiabat from α = 6 to α = 1.8. Simulations that neglected imprint reproduced the measured core size over the entire adiabat scan, but significantly underestimated the shell thickness for adiabats below ~3. These results show that the decompression of the shell measured for low-adiabat implosions was a result of laser imprint.

  16. Study of CPM Device used for Rehabilitation and Effective Pain Management Following Knee Alloplasty

    NASA Astrophysics Data System (ADS)

    Trochimczuk, R.; Kuźmierowski, T.; Anchimiuk, P.

    2017-02-01

    This paper defines the design assumptions for the construction of an original demonstrator of a CPM (continuous passive motion) device, from which a solid virtual model will be created in a CAD software environment. The overall dimensions and other input parameters of the design were determined for the entire patient population according to an anatomical atlas of human measures. The medical and physiotherapeutic communities were also consulted with respect to the proposed engineering solutions. The virtual model of the CPM device will be used for computer simulations of changes in motion parameters as a function of time, accounting for loads and static states. The results obtained from the computer simulations will be used to confirm the correctness of the adopted design assumptions and of the accepted structure of the CPM mechanism, and potentially to introduce necessary corrections. They will also provide a basis for the development of a control strategy for the laboratory prototype and for the future selection of patient rehabilitation strategies. The paper closes by identifying directions for further research.

  17. The application of an MPM-MFM method for simulating weapon-target interaction.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, X.; Zou, Q.; Zhang, D. Z.

    2005-01-01

    During the past two decades, Los Alamos National Laboratory (LANL) has developed computational algorithms and software for the analysis of multiphase flow suitable for high-speed projectile penetration of metallic and nonmetallic materials, using a material point method (MPM)-multiphase flow method (MFM). Recently, ACTA has teamed with LANL to advance a computational algorithm for simulating complex weapon-target interaction for penetrating and exploding munitions, such as tank rounds and artillery shells, as well as non-exploding kinetic energy penetrators. This paper outlines the mathematical basis for the MPM-MFM method as implemented in LANL's CartaBlanca code. CartaBlanca, written entirely in Java using object-oriented design, is used to solve complex problems involving (a) failure and penetration of solids, (b) heat transfer, (c) phase change, (d) chemical reactions, and (e) multiphase flow. We present its application to the penetration of a steel target by a tungsten cylinder and compare results with time-resolved experimental data published by Anderson et al., Int. J. Impact Engng., Vol. 16, No. 1, pp. 1-18, 1995.

  18. Nonlocal response with local optics

    NASA Astrophysics Data System (ADS)

    Kong, Jiantao; Shvonski, Alexander J.; Kempa, Krzysztof

    2018-04-01

    For plasmonic systems too small for classical, local simulations to be valid, but too large for ab initio calculations to be computationally feasible, we developed a practical approach: a nonlocal-to-local mapping that enables the use of a modified local system to obtain the response due to nonlocal effects to lowest order, at the cost of higher structural complexity. In this approach, the nonlocal surface region of a metallic structure is mapped onto a local dielectric film, mathematically preserving the nonlocality of the entire system. The most significant feature of this approach is its full compatibility with conventional, highly efficient finite-difference time-domain (FDTD) simulation codes. Our optimized choice of mapping is based on Feibelman's d-function formalism, and it produces an effective dielectric function of the local film that obeys all required sum rules, as well as the Kramers-Kronig causality relations. We demonstrate the power of our approach, combined with an FDTD scheme, in a series of comparisons with experiments and ab initio density functional theory calculations from the literature, for structures with dimensions from the subnanoscopic to microscopic range.

  19. Study of the Cooldown and Warmup for the Eight Sectors of the Large Hadron Collider

    NASA Astrophysics Data System (ADS)

    Liu, L.; Riddone, G.; Tavian, L.

    2004-06-01

    The LHC cryogenic system is based on a five-point feed scheme with eight refrigerators serving the eight sectors of the LHC machine. The paper presents the simplified flow scheme of the eight sectors and the mathematical methods, including the program flowchart and boundary conditions, used to simulate the cooldown and warmup of these sectors. The methods take into account the effect of the pressure drop across the valves as well as the pressure evolution in the different headers of the cryogenic distribution line. The simulated pressure and temperature profiles of the headers of an LHC sector during cooldown and warmup are given, and the temperature evolution over the entire cooldown and warmup processes is presented. In conclusion, the input-temperature functions for normal and fast cooldown and warmup, the cooldown and warmup times of each sector, and the distributions of mass flow rates in each sector are summarized. The results indicate that it is possible to cool down any of the LHC sectors within 12.7 days in normal operation and 6.8 days in fast operation.
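
    The order of magnitude of such cooldown times can be checked with a lumped heat balance, dT/dt = -ṁ·cp·(T − T_He)/(M·c(T)). The sketch below integrates this for a single sector down to the 80 K stage only; the cold mass, helium flow rate, and the crude specific-heat model are our own rough assumptions, not values from the paper.

```python
# Lumped cooldown estimate for one sector; all numbers are rough assumptions.
M = 4.5e6                          # kg cold mass per sector (assumed)
mdot = 4.0                         # kg/s helium flow (assumed)
cp_he = 5200.0                     # J/(kg K), helium
T_he = 80.0                        # K, supply temperature of this stage
c = lambda T: max(5.0, 0.4 * T)    # J/(kg K), crude cold-mass specific heat

T, t, dt = 300.0, 0.0, 60.0        # start at room temperature, 1 min steps
while T > 81.0:
    T -= mdot * cp_he * (T - T_he) / (M * c(T)) * dt
    t += dt
print(f"300 K -> 81 K in {t / 86400:.1f} days (80 K stage only)")
```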

  20. Integrated Tokamak modeling: When physics informs engineering and research planning

    NASA Astrophysics Data System (ADS)

    Poli, Francesca Maria

    2018-05-01

    Modeling tokamaks enables a deeper understanding of how to run and control our experiments and how to design stable and reliable reactors. We model tokamaks to understand the nonlinear dynamics of plasmas embedded in magnetic fields and contained by finite-size, conducting structures, and the interplay between turbulence, magneto-hydrodynamic instabilities, and wave propagation. This tutorial guides the reader through the components of a tokamak simulator, highlighting how high-fidelity simulations can guide the development of reduced models that can be used to understand how dynamics at small spatial scales and short time scales affects macroscopic transport and the global stability of plasmas. It discusses the important role that reduced models play in the modeling of an entire plasma discharge from startup to termination, the limits of these models, and how they can be improved. It discusses the important role that efficient workflows play in the coupling between codes, in the validation of models against experiments, and in the verification of theoretical models. Finally, it reviews the status of integrated modeling and addresses the gaps and needs on the path towards predictive modeling of future devices and fusion reactors.
