Sample records for large simulation study

  1. Optimizing for Large Planar Fractures in Multistage Horizontal Wells in Enhanced Geothermal Systems Using a Coupled Fluid and Geomechanics Simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Xiexiaomen; Tutuncu, Azra; Eustes, Alfred

Enhanced Geothermal Systems (EGS) could potentially use technological advancements in coupled implementation of horizontal drilling and multistage hydraulic fracturing techniques in tight oil and shale gas reservoirs, along with improvements in reservoir simulation techniques, to design and create EGS reservoirs. In this study, a commercial hydraulic fracture simulation package, Mangrove by Schlumberger, was used in an EGS model with widely distributed pre-existing natural fractures to model fracture propagation during the creation of a complex fracture network. The main goal of this study is to investigate optimum treatment parameters for creating multiple large, planar fractures to hydraulically connect a horizontal injection well and a horizontal production well that are 10,000 ft deep and spaced 500 ft apart. A matrix of simulations was carried out to determine the influence of reservoir and treatment parameters on preventing (or aiding) the creation of large planar fractures. The reservoir parameters investigated include the in-situ stress state and properties of the natural fracture set, such as the primary and secondary fracture orientation, average fracture length, and average fracture spacing. The treatment parameters investigated were fluid viscosity, proppant concentration, pump rate, and pump volume. A final simulation with optimized design parameters was performed; it indicated that high fluid viscosity, high proppant concentration, and large pump volume and pump rate tend to minimize the complexity of the created fracture network.
Additionally, a reservoir with 'friendly' formation characteristics, such as large stress anisotropy, a natural fracture set parallel to the maximum horizontal principal stress (SHmax), and large natural fracture spacing, also promotes the creation of large planar fractures while minimizing fracture complexity.
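A matrix of simulations over reservoir and treatment parameters, like the one described in this record, can be enumerated as a full-factorial sweep. A minimal sketch; the parameter names and levels below are illustrative, not the study's actual values:

```python
from itertools import product

# Illustrative parameter levels (hypothetical, not the study's values).
levels = {
    "fluid_viscosity_cp": [1, 100, 500],
    "proppant_conc_ppa": [0.5, 2.0],
    "pump_rate_bpm": [40, 80],
    "pump_volume_bbl": [5000, 20000],
}

# Each run is one combination of one level per parameter.
runs = [dict(zip(levels, combo)) for combo in product(*levels.values())]
print(len(runs))  # 3 * 2 * 2 * 2 = 24 simulation cases
```

Each entry of `runs` is then a complete input deck for one simulator invocation, which is how such parameter matrices are typically scripted.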

  2. A qualitative analysis of bus simulator training on transit incidents: a case study in Florida. [Summary].

    DOT National Transportation Integrated Search

    2013-01-01

    The simulator was once a very expensive, large-scale mechanical device for training military pilots or astronauts. Modern computers, linking sophisticated software and large-screen displays, have yielded simulators for the desktop or configured as sm...

  3. Large eddy simulation in a rotary blood pump: Viscous shear stress computation and comparison with unsteady Reynolds-averaged Navier-Stokes simulation.

    PubMed

    Torner, Benjamin; Konnigk, Lucas; Hallier, Sebastian; Kumar, Jitendra; Witte, Matthias; Wurm, Frank-Hendrik

    2018-06-01

Numerical flow analysis (computational fluid dynamics) in combination with the prediction of blood damage is an important procedure to investigate the hemocompatibility of a blood pump, since blood trauma due to shear stresses remains a problem in these devices. Today, the numerical damage prediction is conducted using unsteady Reynolds-averaged Navier-Stokes simulations. Investigations with large eddy simulations are rarely performed for blood pumps. Hence, the aim of the study is to examine the viscous shear stresses of a large eddy simulation in a blood pump and compare the results with an unsteady Reynolds-averaged Navier-Stokes simulation. The simulations were carried out at two operation points of a blood pump. The flow was simulated on a 100M element mesh for the large eddy simulation and a 20M element mesh for the unsteady Reynolds-averaged Navier-Stokes simulation. As a first step, the large eddy simulation was verified by analyzing internal dissipative losses within the pump. Then, the pump characteristics and mean and turbulent viscous shear stresses were compared between the two simulation methods. The verification showed that the large eddy simulation is able to reproduce the significant portion of dissipative losses, which is a global indication that the equivalent viscous shear stresses are adequately resolved. The comparison with the unsteady Reynolds-averaged Navier-Stokes simulation revealed that the hydraulic parameters were in agreement, but differences for the shear stresses were found. The results show the potential of the large eddy simulation as a high-quality comparative case to check the suitability of a chosen Reynolds-averaged Navier-Stokes setup and turbulence model. Furthermore, the results suggest that large eddy simulations are superior to unsteady Reynolds-averaged Navier-Stokes simulations when instantaneous stresses are applied for the blood damage prediction.
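The viscous shear stresses compared in this record are derived from the resolved velocity-gradient field. A sketch of computing the viscous stress tensor and one common scalar equivalent at a single mesh point; the viscosity and gradient values are made up for illustration, and conventions for the scalar stress vary between studies:

```python
import numpy as np

mu = 3.5e-3  # dynamic viscosity of blood, Pa*s (typical literature value)

# Example velocity-gradient tensor du_i/dx_j at one mesh point, 1/s
# (illustrative numbers, not simulation output).
grad_u = np.array([[100.,  50.,   0.],
                   [ 20., -80.,  10.],
                   [  0.,  30., -20.]])

S = 0.5 * (grad_u + grad_u.T)   # strain-rate tensor (symmetric part)
tau = 2.0 * mu * S              # viscous stress tensor, Pa
# One common scalar equivalent: sqrt(tau_ij tau_ij / 2)
tau_scalar = np.sqrt(0.5 * np.sum(tau * tau))
print(round(float(tau_scalar), 4))  # ~0.70 Pa for these numbers
```

In an actual pump simulation this computation runs over every cell, and for LES the resolved stresses are supplemented by modeled subgrid contributions.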

  4. On the scaling of small-scale jet noise to large scale

    NASA Technical Reports Server (NTRS)

    Soderman, Paul T.; Allen, Christopher S.

    1992-01-01

An examination was made of several published jet noise studies for the purpose of evaluating scale effects important to the simulation of jet aeroacoustics. Several studies confirmed that small conical jets, one as small as 59 mm in diameter, could be used to correctly simulate the overall noise or perceived noise level (PNL) of large jets dominated by mixing noise. However, the detailed acoustic spectra of large jets are more difficult to simulate because of the lack of broadband turbulence spectra in small jets. One study indicated that a jet Reynolds number of 5 × 10^6 based on exhaust diameter enabled the generation of broadband noise representative of large-jet mixing noise. Jet suppressor aeroacoustics is even more difficult to simulate at small scale because of the small mixer nozzles, with flows sensitive to Reynolds number. Likewise, one study showed incorrect ejector mixing and entrainment using a small-scale, short ejector that led to poor acoustic scaling. Conversely, fairly good results were found with a longer ejector and, in a different study, with a 32-chute suppressor nozzle. Finally, it was found that small-scale aeroacoustic resonance produced by jets impacting ground boards does not reproduce at large scale.
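The Reynolds-number threshold cited above suggests why small jets struggle to produce representative broadband noise. A quick calculation with standard sea-level air properties (assumed here, not taken from the report):

```python
# Jet Reynolds number Re = rho * U * D / mu, with assumed standard
# sea-level air properties.
rho = 1.225    # kg/m^3
mu = 1.81e-5   # Pa*s
D = 0.059      # m, the smallest jet diameter cited

def reynolds(U, D, rho=rho, mu=mu):
    return rho * U * D / mu

# Exhaust velocity needed to reach the Re = 5e6 broadband-noise threshold:
U_needed = 5e6 * mu / (rho * D)
print(round(U_needed))  # ~1252 m/s: far supersonic, so a 59 mm cold jet
                        # cannot reach this Re at realistic exhaust speeds
```

This is consistent with the report's point that detailed spectra of large jets are hard to reproduce at small scale.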

  6. Large-Eddy/Lattice Boltzmann Simulations of Micro-blowing Strategies for Subsonic and Supersonic Drag Control

    NASA Technical Reports Server (NTRS)

    Menon, Suresh

    2003-01-01

This report summarizes the progress made in the first 8 to 9 months of this research. The Lattice Boltzmann Equation (LBE) methodology for Large-Eddy Simulations (LES) of micro-blowing has been validated using a jet-in-crossflow test configuration. In this study, the flow intake is also simulated to allow the interaction to occur naturally. The Lattice Boltzmann Equation Large-Eddy Simulation (LBELES) approach is capable of capturing not only flow features such as hairpin vortices and recirculation behind the jet, but also shows better agreement with experiments than previous RANS predictions. The LBELES is shown to be computationally very efficient and therefore a viable method for simulating the injection process. Two strategies have been developed to simulate the multi-hole injection process as in the experiment. In order to allow natural interaction between the injected fluid and the primary stream, the flow intakes for all the holes have to be simulated. The LBE method is computationally efficient but is still 3D in nature, and therefore there may be some computational penalty. In order to study a large number of holes, a new 1D subgrid model has been developed that simulates a reduced form of the Navier-Stokes equations in these holes.

  7. High Fidelity Simulations of Large-Scale Wireless Networks (Plus-Up)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Onunkwo, Uzoma

Sandia has built a strong reputation in scalable network simulation and emulation for cyber security studies to protect our nation's critical information infrastructures. Georgia Tech has a preeminent reputation in academia for excellence in scalable discrete event simulations, with a strong emphasis on simulating cyber networks. Many of the experts in this field, such as Dr. Richard Fujimoto, Dr. George Riley, and Dr. Chris Carothers, have strong affiliations with Georgia Tech. The collaborative relationship that we intend to immediately pursue is in high fidelity simulations of practical large-scale wireless networks using the ns-3 simulator via Dr. George Riley. This project will have mutual benefits in bolstering both institutions' expertise and reputation in the field of scalable simulation for cyber-security studies. This project promises to address high fidelity simulations of large-scale wireless networks. This proposed collaboration is directly in line with Georgia Tech's goals for developing and expanding the Communications Systems Center, the Georgia Tech Broadband Institute, and the Georgia Tech Information Security Center, along with its yearly Emerging Cyber Threats Report. At Sandia, this work benefits the defense systems and assessment area, with promise for large-scale assessment of cyber security needs and vulnerabilities of our nation's critical cyber infrastructures exposed to wireless communications.

  8. A Study on Fast Gates for Large-Scale Quantum Simulation with Trapped Ions

    PubMed Central

    Taylor, Richard L.; Bentley, Christopher D. B.; Pedernales, Julen S.; Lamata, Lucas; Solano, Enrique; Carvalho, André R. R.; Hope, Joseph J.

    2017-01-01

Large-scale digital quantum simulations require thousands of fundamental entangling gates to construct the simulated dynamics. Despite success in a variety of small-scale simulations, quantum information processing platforms have hitherto failed to demonstrate the combination of precise control and scalability required to systematically outmatch classical simulators. We analyse how fast gates could enable trapped-ion quantum processors to achieve the requisite scalability to outperform classical computers without error correction. We analyse the performance of a large-scale digital simulator, and find that a fidelity of around 70% is realizable for π-pulse infidelities below 10^-5 in traps subject to realistic rates of heating and dephasing. This scalability relies on fast gates: entangling gates faster than the trap period. PMID:28401945

  9. A Study on Fast Gates for Large-Scale Quantum Simulation with Trapped Ions.

    PubMed

    Taylor, Richard L; Bentley, Christopher D B; Pedernales, Julen S; Lamata, Lucas; Solano, Enrique; Carvalho, André R R; Hope, Joseph J

    2017-04-12

Large-scale digital quantum simulations require thousands of fundamental entangling gates to construct the simulated dynamics. Despite success in a variety of small-scale simulations, quantum information processing platforms have hitherto failed to demonstrate the combination of precise control and scalability required to systematically outmatch classical simulators. We analyse how fast gates could enable trapped-ion quantum processors to achieve the requisite scalability to outperform classical computers without error correction. We analyse the performance of a large-scale digital simulator, and find that a fidelity of around 70% is realizable for π-pulse infidelities below 10^-5 in traps subject to realistic rates of heating and dephasing. This scalability relies on fast gates: entangling gates faster than the trap period.
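The interplay between per-gate infidelity and achievable circuit depth quoted in these records can be illustrated with an independent-error fidelity budget. This is a simplification of the paper's full noise model, assuming errors compound multiplicatively:

```python
import math

# If each entangling gate has infidelity eps, total fidelity after N gates
# is roughly (1 - eps)**N under an independent-error assumption.
eps = 1e-5  # the pi-pulse infidelity scale cited in the abstract

def total_fidelity(n_gates, eps=eps):
    return (1.0 - eps) ** n_gates

# How many gates fit in the budget before fidelity falls to ~70%?
n_max = math.log(0.7) / math.log(1.0 - eps)
print(int(n_max))  # ~35667 gates, i.e. tens of thousands
```

This back-of-envelope number is consistent with the abstract's claim that thousands of entangling gates at 10^-5-level infidelity leave an overall fidelity near 70%.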

  10. High performance computing in biology: multimillion atom simulations of nanoscale systems

    PubMed Central

    Sanbonmatsu, K. Y.; Tung, C.-S.

    2007-01-01

    Computational methods have been used in biology for sequence analysis (bioinformatics), all-atom simulation (molecular dynamics and quantum calculations), and more recently for modeling biological networks (systems biology). Of these three techniques, all-atom simulation is currently the most computationally demanding, in terms of compute load, communication speed, and memory load. Breakthroughs in electrostatic force calculation and dynamic load balancing have enabled molecular dynamics simulations of large biomolecular complexes. Here, we report simulation results for the ribosome, using approximately 2.64 million atoms, the largest all-atom biomolecular simulation published to date. Several other nanoscale systems with different numbers of atoms were studied to measure the performance of the NAMD molecular dynamics simulation program on the Los Alamos National Laboratory Q Machine. We demonstrate that multimillion atom systems represent a 'sweet spot' for the NAMD code on large supercomputers. NAMD displays an unprecedented 85% parallel scaling efficiency for the ribosome system on 1024 CPUs. We also review recent targeted molecular dynamics simulations of the ribosome that prove useful for studying conformational changes of this large biomolecular complex in atomic detail. PMID:17187988
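The 85% parallel scaling efficiency quoted for the ribosome run is speedup relative to ideal linear scaling. A sketch of the usual definition, with made-up timings (the record does not give the raw step times):

```python
# Parallel scaling efficiency relative to a baseline run:
# eff = (t_base * n_base) / (t_n * n). Timings below are hypothetical.
def efficiency(n_base, t_base, n, t_n):
    return (t_base * n_base) / (t_n * n)

# Hypothetical: 256 CPUs -> 40 s/step, 1024 CPUs -> 11.76 s/step
eff = efficiency(256, 40.0, 1024, 11.76)
print(round(eff, 3))  # ~0.85, i.e. 85% scaling efficiency
```

An efficiency of 1.0 would mean doubling the CPU count halves the time per step; values above ~0.8 at a thousand CPUs are what the abstract calls unprecedented for this system size.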

  11. Simulating the impact of the large-scale circulation on the 2-m temperature and precipitation climatology

    NASA Astrophysics Data System (ADS)

    Bowden, Jared H.; Nolte, Christopher G.; Otte, Tanya L.

    2013-04-01

    The impact of the simulated large-scale atmospheric circulation on the regional climate is examined using the Weather Research and Forecasting (WRF) model as a regional climate model. The purpose is to understand the potential need for interior grid nudging for dynamical downscaling of global climate model (GCM) output for air quality applications under a changing climate. In this study we downscale the NCEP-Department of Energy Atmospheric Model Intercomparison Project (AMIP-II) Reanalysis using three continuous 20-year WRF simulations: one simulation without interior grid nudging and two using different interior grid nudging methods. The biases in 2-m temperature and precipitation for the simulation without interior grid nudging are unreasonably large with respect to the North American Regional Reanalysis (NARR) over the eastern half of the contiguous United States (CONUS) during the summer when air quality concerns are most relevant. This study examines how these differences arise from errors in predicting the large-scale atmospheric circulation. It is demonstrated that the Bermuda high, which strongly influences the regional climate for much of the eastern half of the CONUS during the summer, is poorly simulated without interior grid nudging. In particular, two summers when the Bermuda high was west (1993) and east (2003) of its climatological position are chosen to illustrate problems in the large-scale atmospheric circulation anomalies. For both summers, WRF without interior grid nudging fails to simulate the placement of the upper-level anticyclonic (1993) and cyclonic (2003) circulation anomalies. The displacement of the large-scale circulation impacts the lower atmosphere moisture transport and precipitable water, affecting the convective environment and precipitation. 
Using interior grid nudging improves the large-scale circulation aloft and moisture transport/precipitable water anomalies, thereby improving the simulated 2-m temperature and precipitation. The results demonstrate that constraining the RCM to the large-scale features in the driving fields improves the overall accuracy of the simulated regional climate, and suggest that in the absence of such a constraint, the RCM will likely misrepresent important large-scale shifts in the atmospheric circulation under a future climate.
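Interior grid nudging of the kind compared in this study is typically implemented as Newtonian relaxation: a tendency term that pulls the model state toward the driving analysis over a chosen timescale. A toy scalar sketch, assuming illustrative values rather than WRF's actual scheme:

```python
# Newtonian relaxation ("nudging"): x gains a tendency (x_analysis - x)/tau,
# pulling the model state toward the driving analysis with timescale tau.
def step(x, x_analysis, dt, tau):
    return x + dt * (x_analysis - x) / tau

x, x_ref = 300.0, 290.0   # model vs analysis temperature, K (made up)
dt, tau = 600.0, 3600.0   # 10-min step, 1-h relaxation timescale
for _ in range(36):       # six hours of nudging
    x = step(x, x_ref, dt, tau)
print(round(x, 2))        # the 10 K error has decayed toward 290 K
```

In a real RCM the same relaxation is applied field-by-field, often only above the boundary layer and only to selected variables, which is the kind of design choice the two nudging methods in the study differ on.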

  12. Got power? A systematic review of sample size adequacy in health professions education research.

    PubMed

    Cook, David A; Hatala, Rose

    2015-03-01

Many education research studies employ small samples, which in turn lowers statistical power. We re-analyzed the results of a meta-analysis of simulation-based education to determine study power across a range of effect sizes, and the smallest effect that could be plausibly excluded. We systematically searched multiple databases through May 2011, and included all studies evaluating simulation-based education for health professionals in comparison with no intervention or another simulation intervention. Reviewers working in duplicate abstracted information to calculate standardized mean differences (SMDs). We included 897 original research studies. Among the 627 no-intervention-comparison studies the median sample size was 25. Only two studies (0.3%) had ≥80% power to detect a small difference (SMD > 0.2 standard deviations) and 136 (22%) had power to detect a large difference (SMD > 0.8). 110 no-intervention-comparison studies failed to find a statistically significant difference, but none excluded a small difference and only 47 (43%) excluded a large difference. Among 297 studies comparing alternate simulation approaches the median sample size was 30. Only one study (0.3%) had ≥80% power to detect a small difference and 79 (27%) had power to detect a large difference. Of the 128 studies that did not detect a statistically significant effect, 4 (3%) excluded a small difference and 91 (71%) excluded a large difference. In conclusion, most education research studies are powered only to detect effects of large magnitude. For most studies that do not reach statistical significance, the possibility of large and important differences still exists.
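The power figures in this review can be approximated with a normal-approximation power formula for a two-sample comparison. A sketch, not the authors' exact computation:

```python
import math

def phi(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power(d, n_per_group, z_crit=1.96):
    # Two-sided test at alpha = 0.05 (z_crit ~ 1.96); d is the SMD.
    ncp = d * math.sqrt(n_per_group / 2.0)
    return (1.0 - phi(z_crit - ncp)) + phi(-z_crit - ncp)

# At the review's median sample size of 25 per group:
print(round(power(0.2, 25), 2))  # ~0.11: badly underpowered for a small SMD
print(round(power(0.8, 25), 2))  # ~0.81: adequate only for a large SMD
```

Those two numbers mirror the review's conclusion: at typical sample sizes, only large effects are detectable with conventional 80% power.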

  13. A simulation study of Large Area Crop Inventory Experiment (LACIE) technology

    NASA Technical Reports Server (NTRS)

    Ziegler, L. (Principal Investigator); Potter, J.

    1979-01-01

The author has identified the following significant results. The LACIE performance predictor (LPP) was used to replicate LACIE phase 2 for a 15-year period, using accuracy assessment results for phase 2 error components. Results indicated that the LPP simulated the LACIE phase 2 procedures reasonably well. For the 15-year simulation, only 7 of the 15 production estimates were within 10 percent of the true production. The simulations indicated that the acreage estimator, based on CAMS phase 2 procedures, has a negative bias. This bias was too large to support the 90/90 criterion with the CV observed and simulated for the phase 2 production estimator. Results of this simulation study validate the theory that the acreage variance estimator in LACIE was conservative.
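The 90/90 criterion referenced here (roughly, a 90% chance that the production estimate falls within 10% of the true value) can be checked for a hypothetical estimator by Monte Carlo. The bias and CV values below are illustrative, not the LACIE phase 2 numbers:

```python
import random

random.seed(0)

# Fraction of trials in which a Gaussian estimator with the given relative
# bias and CV lands within 10% of the true value.
def fraction_within_10pct(bias_frac, cv, n_trials=100_000, truth=100.0):
    mean = truth * (1.0 + bias_frac)
    sd = cv * mean
    hits = sum(abs(random.gauss(mean, sd) - truth) <= 0.10 * truth
               for _ in range(n_trials))
    return hits / n_trials

unbiased = fraction_within_10pct(0.0, 0.05)
biased = fraction_within_10pct(-0.06, 0.05)  # a negative acreage-type bias
print(round(unbiased, 2), round(biased, 2))  # bias erodes the success rate
```

The drop in the second number shows how a modest negative bias can sink an estimator below the 90/90 threshold even when its variance alone would satisfy it.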

  14. Annual Research Briefs: 1995

    NASA Technical Reports Server (NTRS)

    1995-01-01

    This report contains the 1995 annual progress reports of the Research Fellows and students of the Center for Turbulence Research (CTR). In 1995 CTR continued its concentration on the development and application of large-eddy simulation to complex flows, development of novel modeling concepts for engineering computations in the Reynolds averaged framework, and turbulent combustion. In large-eddy simulation, a number of numerical and experimental issues have surfaced which are being addressed. The first group of reports in this volume are on large-eddy simulation. A key finding in this area was the revelation of possibly significant numerical errors that may overwhelm the effects of the subgrid-scale model. We also commissioned a new experiment to support the LES validation studies. The remaining articles in this report are concerned with Reynolds averaged modeling, studies of turbulence physics and flow generated sound, combustion, and simulation techniques. Fundamental studies of turbulent combustion using direct numerical simulations which started at CTR will continue to be emphasized. These studies and their counterparts carried out during the summer programs have had a noticeable impact on combustion research world wide.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rizzi, Silvio; Hereld, Mark; Insley, Joseph

In this work we perform in-situ visualization of molecular dynamics simulations, which can help scientists visualize simulation output on the fly, without incurring storage overheads. We present a case study coupling LAMMPS, the large-scale molecular dynamics simulation code, with vl3, our parallel framework for large-scale visualization and analysis. Our motivation is to identify effective approaches for co-visualization and exploration of large-scale atomistic simulations at interactive frame rates. We propose a system of coupled libraries and describe its architecture, with an implementation that runs on GPU-based clusters. We present the results of strong and weak scalability experiments, as well as future research avenues based on our results.

  16. Nonlinear Control of Large Disturbances in Magnetic Bearing Systems

    NASA Technical Reports Server (NTRS)

    Jiang, Yuhong; Zmood, R. B.

    1996-01-01

In this paper, the nonlinear operation of magnetic bearing control methods is reviewed. For large disturbances, the effects of displacement constraints and power amplifier current and di/dt limits on bearing control system performance are analyzed. The operation of magnetic bearings exhibiting self-excited large-scale oscillations has been studied both experimentally and by simulation. The simulation of the bearing system has been extended to include the effects of eddy currents in the actuators, so as to improve the accuracy of the simulation results. The results of these experiments and simulations are compared, and some useful conclusions are drawn for improving bearing system robustness.

  17. State-resolved Thermal/Hyperthermal Dynamics of Atmospheric Species

    DTIC Science & Technology

    2015-06-23

    Research was performed in three areas: 1) diode laser and LIF studies of hyperthermal CO2 and NO collisions at gas-room temperature ionic liquid (RTIL) interfaces; 2) large-scale trajectory simulations for theoretical analysis of gas-liquid scattering studies; and 3) LIF data for state-resolved scattering of hyperthermal NO at...

  18. Large Eddy Simulation of Gravitational Effects on Transitional and Turbulent Gas-Jet Diffusion Flames

    NASA Technical Reports Server (NTRS)

    Givi, Peyman; Jaberi, Farhad A.

    2001-01-01

The basic objective of this work is to assess the influence of gravity on the compositional and spatial structures of transitional and turbulent diffusion flames via large eddy simulation (LES) and direct numerical simulation (DNS). The DNS is conducted for appraisal of the various closures employed in LES, and to study the effect of buoyancy on small-scale flow features. The LES is based on our "filtered mass density function" (FMDF) model. The novelty of the methodology is that it allows for reliable simulations with inclusion of realistic physics. It also allows for detailed analysis of the unsteady large-scale flow evolution and compositional flame structure, which is not usually possible via Reynolds-averaged simulations.

  19. Simulation studies using multibody dynamics code DART

    NASA Technical Reports Server (NTRS)

    Keat, James E.

    1989-01-01

DART is a multibody dynamics code developed by Photon Research Associates for the Air Force Astronautics Laboratory (AFAL). The code is intended primarily to simulate the dynamics of large space structures, particularly during the deployment phase of their missions. DART integrates nonlinear equations of motion numerically. The number of bodies in the system being simulated is arbitrary. The bodies' interconnection joints can have an arbitrary number of degrees of freedom between 0 and 6. Motions across the joints can be large. Provision for simulating on-board control systems is provided. Conservation of energy and momentum, when applicable, is used to evaluate DART's performance. After a brief description of DART, studies made to test the program prior to its delivery to AFAL are described. The first is a large-angle reorientation of a flexible spacecraft consisting of a rigid central hub and four flexible booms. Reorientation was accomplished by a single-cycle sine-wave torque input. In the second study, an appendage mounted on a spacecraft was slewed through a large angle. Four closed-loop control systems provided control of this appendage and of the spacecraft's attitude. The third study simulated the deployment of the rim of a bicycle-wheel-configuration large space structure. This system contained 18 bodies. An interesting and unexpected feature of the dynamics was a pulsing phenomenon experienced by the stays whose payout was used to control the deployment. A short description of the current status of DART is given.

  20. Simulator study of flight characteristics of several large, dissimilar, cargo transport airplanes during approach and landing

    NASA Technical Reports Server (NTRS)

    Grantham, W. D.; Smith, P. M.; Deal, P. L.; Neely, W. R., Jr.

    1984-01-01

A six-degree-of-freedom, ground-based simulator study is conducted to evaluate the low-speed flight characteristics of four dissimilar cargo transport airplanes. These characteristics are compared with those of a large, present-day (reference) transport configuration similar to the Lockheed C-5A airplane. The four very large transport concepts evaluated consist of single-fuselage, twin-fuselage, triple-fuselage, and span-loader configurations. The primary piloting task is the approach and landing operation. The results of this study indicate that all four concepts evaluated have unsatisfactory longitudinal and lateral-directional low-speed flight characteristics and that considerable stability and control augmentation would be required to improve these characteristics (handling qualities) to a satisfactory level. Through the use of rate-command/attitude-hold augmentation in the pitch and roll axes, and the use of several turn-coordination features, the handling qualities of all four large transports simulated are improved appreciably.

  1. Lagrangian large eddy simulations of boundary layer clouds on ERA-Interim and ERA5 trajectories

    NASA Astrophysics Data System (ADS)

    Kazil, J.; Feingold, G.; Yamaguchi, T.

    2017-12-01

    This exploratory study examines Lagrangian large eddy simulations of boundary layer clouds along wind trajectories from the ERA-Interim and ERA5 reanalyses. The study is motivated by the need for statistically representative sets of high resolution simulations of cloud field evolution in realistic meteorological conditions. The study will serve as a foundation for the investigation of biomass burning effects on the transition from stratocumulus to shallow cumulus clouds in the South-East Atlantic. Trajectories that pass through a location with radiosonde data (St. Helena) and which exhibit a well-defined cloud structure and evolution were identified in satellite imagery, and sea surface temperature and atmospheric vertical profiles along the trajectories were extracted from the reanalysis data sets. The System for Atmospheric Modeling (SAM) simulated boundary layer turbulence and cloud properties along the trajectories. Mean temperature and moisture (in the free troposphere) and mean wind speed (at all levels) were nudged towards the reanalysis data. Atmospheric and cloud properties in the large eddy simulations were compared with those from the reanalysis products, and evaluated with satellite imagery and radiosonde data. Simulations using ERA-Interim data and the higher resolution ERA5 data are contrasted.

  2. Statistical analysis of large simulated yield datasets for studying climate effects

    USDA-ARS?s Scientific Manuscript database

    Ensembles of process-based crop models are now commonly used to simulate crop growth and development for climate scenarios of temperature and/or precipitation changes corresponding to different projections of atmospheric CO2 concentrations. This approach generates large datasets with thousands of de...

  3. Why build a virtual brain? Large-scale neural simulations as jump start for cognitive computing

    NASA Astrophysics Data System (ADS)

    Colombo, Matteo

    2017-03-01

Despite the impressive amount of financial resources recently invested in carrying out large-scale brain simulations, the pay-offs of pursuing this project remain controversial. One idea is that from designing, building, and running a large-scale neural simulation, scientists acquire knowledge about the computational performance of the simulating system, rather than about the neurobiological system represented in the simulation. It has been claimed that this knowledge may usher in a new era of neuromorphic, cognitive computing systems. This study elucidates this claim and argues that the main challenge this era faces is not the lack of biological realism. The challenge lies in identifying general neurocomputational principles for the design of artificial systems, which could display the robust flexibility characteristic of biological intelligence.

  4. A reduced basis method for molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Vincent-Finley, Rachel Elisabeth

In this dissertation, we develop a method for molecular simulation based on principal component analysis (PCA) of a molecular dynamics trajectory and least squares approximation of a potential energy function. Molecular dynamics (MD) simulation is a computational tool used to study molecular systems as they evolve through time. With respect to protein dynamics, local motions, such as bond stretching, occur within femtoseconds, while rigid-body and large-scale motions occur over a range of nanoseconds to seconds. To capture motion at all levels, time steps on the order of a femtosecond are employed when solving the equations of motion, and simulations must continue long enough to capture the desired large-scale motion. To date, simulations of solvated proteins on the order of nanoseconds have been reported. It is typically the case that simulations of a few nanoseconds do not provide adequate information for the study of large-scale motions. Thus, the development of techniques that allow longer simulation times can advance the study of protein function and dynamics. In this dissertation we use principal component analysis (PCA) to identify the dominant characteristics of an MD trajectory and to represent the coordinates with respect to these characteristics. We augment PCA with an updating scheme based on a reduced representation of a molecule and consider equations of motion with respect to the reduced representation. We apply our method to butane and BPTI and compare the results to standard MD simulations of these molecules. Our results indicate that the molecular activity with respect to our simulation method is analogous to that observed in the standard MD simulation, with simulations on the order of picoseconds.
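The PCA step described in this abstract amounts to diagonalizing the covariance of the trajectory coordinates and keeping the dominant modes. A minimal sketch on synthetic data; real MD trajectories are first aligned to remove rigid-body motion, a step omitted here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "trajectory": n_frames snapshots of n_coords Cartesian coordinates,
# with one slow collective mode planted in the first coordinate.
n_frames, n_coords = 500, 30
traj = rng.normal(size=(n_frames, n_coords))
traj[:, 0] += 5.0 * np.sin(np.linspace(0, 6 * np.pi, n_frames))

X = traj - traj.mean(axis=0)            # center each coordinate
cov = X.T @ X / (n_frames - 1)          # covariance matrix
evals, evecs = np.linalg.eigh(cov)      # eigh returns ascending eigenvalues
order = np.argsort(evals)[::-1]         # sort descending
evals, evecs = evals[order], evecs[:, order]

# Fraction of total variance captured by the first principal component:
print(round(float(evals[0] / evals.sum()), 2))
```

Projecting the trajectory onto the leading eigenvectors (`X @ evecs[:, :k]`) gives the reduced representation in which the dissertation's equations of motion are posed.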

  5. Simulation of hydrodynamics using large eddy simulation-second-order moment model in circulating fluidized beds

    NASA Astrophysics Data System (ADS)

    Juhui, Chen; Yanjia, Tang; Dan, Li; Pengfei, Xu; Huilin, Lu

    2013-07-01

    Flow behavior of gas and particles in a circulating fluidized bed (CFB) is predicted using a large eddy simulation coupled with a second-order moment model for the solid phase (LES-SOM model). This study shows that the solid volume fractions simulated along the bed height using a two-dimensional model are in agreement with experiments. The velocity, volume fraction, and second-order moments of particles are computed, and the second-order moments of clusters are calculated. The solid volume fraction, velocity, and second-order moments are compared for three different model constants.

  6. Large-eddy simulation of the urban boundary layer in the MEGAPOLI Paris Plume experiment

    NASA Astrophysics Data System (ADS)

    Esau, Igor

    2010-05-01

    This study presents results from a dedicated large-eddy simulation study of the urban boundary layer in the MEGAPOLI Paris Plume field campaign. We used the LESNIC and PALM codes, the MEGAPOLI city morphology database, nudging to the meteorological conditions observed during the Paris Plume campaign, and some concentration measurements from that campaign to simulate and better understand the nature of the urban boundary layer on scales larger than the street-canyon scale. Primary attention was paid to turbulence self-organization and structure-to-surface interaction. The study was aimed at demonstrating feasibility and estimating the resources required for such research; therefore, at this stage we neither compare the simulation with other relevant studies nor formulate theoretical conclusions.

  7. Random number generators for large-scale parallel Monte Carlo simulations on FPGA

    NASA Astrophysics Data System (ADS)

    Lin, Y.; Wang, F.; Liu, B.

    2018-05-01

    Through parallelization, field programmable gate arrays (FPGAs) can achieve unprecedented speeds in large-scale parallel Monte Carlo (LPMC) simulations. FPGAs present both new constraints and new opportunities for the implementation of random number generators (RNGs), which are key elements of any Monte Carlo (MC) simulation system. Using empirical and application-based tests, this study evaluates all four RNGs used in previous FPGA-based MC studies together with newly proposed FPGA implementations of two well-known high-quality RNGs suitable for LPMC studies on FPGA. One of the newly proposed FPGA implementations, a parallel version of the additive lagged Fibonacci generator (parallel ALFG), is found to be the best among the evaluated RNGs in fulfilling the needs of LPMC simulations on FPGA.
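    For readers unfamiliar with the generator family named in this record, a serial additive lagged Fibonacci generator is only a few lines. The sketch below uses small textbook lags (5, 17) for illustration, whereas production LPMC work uses much larger lags; it makes no claim about the paper's parallel FPGA implementation.

```python
from collections import deque
import random

class ALFG:
    """Additive lagged Fibonacci generator:
    x[n] = (x[n-j] + x[n-k]) mod 2**m, with lags j < k."""

    def __init__(self, seed_words, j=5, k=17, m=32):
        assert len(seed_words) == k, "need k seed words"
        self.state = deque(seed_words, maxlen=k)
        self.j, self.mask = j, (1 << m) - 1

    def next(self):
        # state[0] is x[n-k]; state[-j] is x[n-j]
        x = (self.state[-self.j] + self.state[0]) & self.mask
        self.state.append(x)  # maxlen deque drops the oldest word
        return x

# At least one seed word must be odd to reach the maximal period;
# forcing every word odd is a simple way to guarantee that.
random.seed(42)
seeds = [random.getrandbits(32) | 1 for _ in range(17)]
gen = ALFG(seeds)
sample = [gen.next() for _ in range(5)]
```

    Parallel use (as in the paper) would additionally require giving each processing element an independently parameterized or leapfrogged lag table.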

  8. Computer Simulation in Mass Emergency and Disaster Response: An Evaluation of Its Effectiveness as a Tool for Demonstrating Strategic Competency in Emergency Department Medical Responders

    ERIC Educational Resources Information Center

    O'Reilly, Daniel J.

    2011-01-01

    This study examined the capability of computer simulation as a tool for assessing the strategic competency of emergency department nurses as they responded to authentically computer simulated biohazard-exposed patient case studies. Thirty registered nurses from a large, urban hospital completed a series of computer-simulated case studies of…

  9. An effective online data monitoring and saving strategy for large-scale climate simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xian, Xiaochen; Archibald, Rick; Mayer, Benjamin

    Large-scale climate simulation models have been developed and widely used to generate historical data and study future climate scenarios. These simulation models often have to run for a couple of months to understand the changes in the global climate over the course of decades. This long-duration simulation process creates a huge amount of data with both high temporal and spatial resolution information; however, how to effectively monitor and record the climate changes based on these large-scale simulation results that are continuously produced in real time still remains to be resolved. Due to the slow process of writing data to disk, the current practice is to save a snapshot of the simulation results at a constant, slow rate although the data generation process runs at a very high speed. This study proposes an effective online data monitoring and saving strategy over the temporal and spatial domains with the consideration of practical storage and memory capacity constraints. Finally, our proposed method is able to intelligently select and record the most informative extreme values in the raw data generated from real-time simulations in the context of better monitoring climate changes.

  10. An effective online data monitoring and saving strategy for large-scale climate simulations

    DOE PAGES

    Xian, Xiaochen; Archibald, Rick; Mayer, Benjamin; ...

    2018-01-22

    Large-scale climate simulation models have been developed and widely used to generate historical data and study future climate scenarios. These simulation models often have to run for a couple of months to understand the changes in the global climate over the course of decades. This long-duration simulation process creates a huge amount of data with both high temporal and spatial resolution information; however, how to effectively monitor and record the climate changes based on these large-scale simulation results that are continuously produced in real time still remains to be resolved. Due to the slow process of writing data to disk, the current practice is to save a snapshot of the simulation results at a constant, slow rate although the data generation process runs at a very high speed. This study proposes an effective online data monitoring and saving strategy over the temporal and spatial domains with the consideration of practical storage and memory capacity constraints. Finally, our proposed method is able to intelligently select and record the most informative extreme values in the raw data generated from real-time simulations in the context of better monitoring climate changes.
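    The storage-constrained selection problem described above can be illustrated with a toy version: keep only a fixed budget of the most extreme values from a real-time stream, using a min-heap so memory stays bounded. This is a generic sketch under assumed requirements, not the authors' actual selection criterion.

```python
import heapq
import random

def stream_top_extremes(stream, budget):
    """Keep only the `budget` largest values seen so far from a
    stream, using O(budget) memory (a min-heap whose root is the
    smallest value currently kept)."""
    heap = []
    for value in stream:
        if len(heap) < budget:
            heapq.heappush(heap, value)
        elif value > heap[0]:           # beats the smallest kept value
            heapq.heapreplace(heap, value)
    return sorted(heap, reverse=True)

# Example: 10,000 simulated outputs, storage for only 5 records
random.seed(1)
data = [random.gauss(0.0, 1.0) for _ in range(10_000)]
kept = stream_top_extremes(iter(data), budget=5)
```

    The same pattern extends to per-grid-cell or per-time-window budgets by keeping one small heap per region.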

  11. High Fidelity Simulations of Large-Scale Wireless Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Onunkwo, Uzoma; Benz, Zachary

    The worldwide proliferation of wirelessly connected devices continues to accelerate. There are tens of billions of wireless links across the planet, with an additional explosion of new wireless usage anticipated as the Internet of Things develops. Wireless technologies not only provide convenience for mobile applications, but are also extremely cost-effective to deploy. Thus, the trend towards wireless connectivity will only continue, and Sandia must develop the necessary simulation technology to proactively analyze the associated emerging vulnerabilities. Wireless networks are marked by mobility and proximity-based connectivity. The de facto standard for exploratory studies of wireless networks is discrete event simulation (DES). However, the simulation of large-scale wireless networks is extremely difficult due to prohibitively large turnaround times. A path forward is to expedite simulations with parallel discrete event simulation (PDES) techniques. However, the mobility and distance-based connectivity associated with wireless simulations typically doom PDES approaches to poor scaling (e.g., in the OPNET and ns-3 simulators). We propose a PDES-based tool aimed at reducing the communication overhead between processors. The proposed solution will use lightweight processes to dynamically distribute computational workload while mitigating the communication overhead associated with synchronization. This work is vital to the analytics and validation capabilities of simulation and emulation at Sandia. We have years of experience in Sandia’s simulation and emulation projects (e.g., MINIMEGA and FIREWHEEL). Sandia’s current highly regarded capabilities in large-scale emulation have focused on wired networks, where two assumptions prevent scalable wireless studies: (a) the connections between objects are mostly static, and (b) the nodes have fixed locations.

  12. Large Scale Simulation Platform for NODES Validation Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sotorrio, P.; Qin, Y.; Min, L.

    2017-04-27

    This report summarizes the Large Scale (LS) simulation platform created for the Eaton NODES project. The simulation environment consists of both a wholesale market simulator and a distribution simulator; it includes the CAISO wholesale market model and a PG&E footprint of 25-75 feeders to validate scalability under a scenario of 33% RPS in California with an additional 17% of DERs coming from distribution and customers. The simulator can generate hourly unit commitment, 5-minute economic dispatch, and 4-second AGC regulation signals. The simulator is also capable of simulating more than 10k individual controllable devices. Simulated DERs include water heaters, EVs, residential and light commercial HVAC/buildings, and residential-level battery storage. Feeder-level voltage regulators and capacitor banks are also simulated for feeder-level real and reactive power management and Volt/Var control.

  13. Molecular dynamics simulations of large macromolecular complexes.

    PubMed

    Perilla, Juan R; Goh, Boon Chong; Cassidy, C Keith; Liu, Bo; Bernardi, Rafael C; Rudack, Till; Yu, Hang; Wu, Zhe; Schulten, Klaus

    2015-04-01

    Connecting dynamics to structural data from diverse experimental sources, molecular dynamics simulations permit the exploration of biological phenomena in unparalleled detail. Advances in simulations are moving the atomic resolution descriptions of biological systems into the million-to-billion atom regime, in which numerous cell functions reside. In this opinion, we review the progress, driven by large-scale molecular dynamics simulations, in the study of viruses, ribosomes, bioenergetic systems, and other diverse applications. These examples highlight the utility of molecular dynamics simulations in the critical task of relating atomic detail to the function of supramolecular complexes, a task that cannot be achieved by smaller-scale simulations or existing experimental approaches alone. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Investigating the dependence of SCM simulated precipitation and clouds on the spatial scale of large-scale forcing at SGP [Investigating the scale dependence of SCM simulated precipitation and cloud by using gridded forcing data at SGP

    DOE PAGES

    Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng

    2017-08-05

    Large-scale forcing data, such as vertical velocity and advective tendencies, are required to drive single-column models (SCMs), cloud-resolving models, and large-eddy simulations. Previous studies suggest that some errors in these model simulations could be attributed to the lack of spatial variability in the specified domain-mean large-scale forcing. This study investigates the spatial variability of the forcing and explores its impact on SCM-simulated precipitation and clouds. A gridded large-scale forcing data set from the March 2000 Cloud Intensive Operational Period at the Atmospheric Radiation Measurement program's Southern Great Plains site is used for analysis and to drive the single-column version of the Community Atmospheric Model Version 5 (SCAM5). When the gridded forcing data show large spatial variability, such as during a frontal passage, SCAM5 with the domain-mean forcing is not able to capture the convective systems that are partly located in the domain or that only occupy part of the domain. This problem has been largely reduced by using the gridded forcing data, which allow running SCAM5 in each subcolumn and then averaging the results within the domain, because the subcolumns have a better chance of capturing the timing of the frontal propagation and the small-scale systems. Other potential uses of the gridded forcing data, such as understanding and testing scale-aware parameterizations, are also discussed.

  15. Coniferous canopy BRF simulation based on 3-D realistic scene.

    PubMed

    Wang, Xin-Yun; Guo, Zhi-Feng; Qin, Wen-Han; Sun, Guo-Qing

    2011-09-01

    It is difficult for computer simulation methods to study the radiation regime at large scales. A simplified coniferous model was investigated in the present study. It makes computer simulation methods such as L-systems and the radiosity-graphics combined method (RGM) more powerful in remote sensing of heterogeneous coniferous forests over a large-scale region. L-systems are applied to render 3-D coniferous forest scenarios, and the RGM model was used to calculate the BRF (bidirectional reflectance factor) in the visible and near-infrared regions. Results in this study show that in most cases the two agreed well. Meanwhile, at the tree and forest levels, the results are also good.
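    The L-systems mentioned in this record generate plant geometry by parallel string rewriting. A minimal sketch follows, using a textbook binary-tree rule set for illustration rather than the paper's conifer model.

```python
def lsystem(axiom, rules, iterations):
    """Expand an L-system string by parallel rule rewriting:
    every symbol is replaced simultaneously on each iteration;
    symbols without a rule are copied unchanged."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Classic binary-tree grammar: "0" is a leaf, "1" a branch segment,
# "[" / "]" push and pop the drawing (turtle) state.
rules = {"1": "11", "0": "1[0]0"}
tree = lsystem("0", rules, iterations=3)
```

    A renderer then walks the final string, drawing segments for "1"/"0" and branching at each "[".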

  16. Coniferous Canopy BRF Simulation Based on 3-D Realistic Scene

    NASA Technical Reports Server (NTRS)

    Wang, Xin-yun; Guo, Zhi-feng; Qin, Wen-han; Sun, Guo-qing

    2011-01-01

    It is difficult for computer simulation methods to study the radiation regime at large scales. A simplified coniferous model was investigated in the present study. It makes computer simulation methods such as L-systems and the radiosity-graphics combined method (RGM) more powerful in remote sensing of heterogeneous coniferous forests over a large-scale region. L-systems are applied to render 3-D coniferous forest scenarios, and the RGM model was used to calculate the BRF (bidirectional reflectance factor) in the visible and near-infrared regions. Results in this study show that in most cases the two agreed well. Meanwhile, at the tree and forest levels, the results are also good.

  17. Lattice Boltzmann method for simulating the viscous flow in large distensible blood vessels

    NASA Astrophysics Data System (ADS)

    Fang, Haiping; Wang, Zuowei; Lin, Zhifang; Liu, Muren

    2002-05-01

    A lattice Boltzmann method for simulating the viscous flow in large distensible blood vessels is presented by introducing a boundary condition for elastic and moving boundaries. The mass conservation for the boundary condition is tested in detail. The viscous flow in elastic vessels is simulated with a pressure-radius relationship similar to that of the pulmonary blood vessels. The numerical results for steady flow agree with the analytical prediction to very high accuracy, and the simulation results for pulsatile flow are comparable with those of the aortic flows observed experimentally. The model is expected to find many applications for studying blood flows in large distensible arteries, especially in those suffering from atherosclerosis, stenosis, aneurysm, etc.

  18. Experimental study and large eddy simulation of effect of terrain slope on marginal burning in shrub fuel beds

    Treesearch

    Xiangyang Zhou; Shankar Mahalingam; David Weise

    2007-01-01

    This paper presents a combined study of laboratory scale fire spread experiments and a three-dimensional large eddy simulation (LES) to analyze the effect of terrain slope on marginal burning behavior in live chaparral shrub fuel beds. Line fire was initiated in single species fuel beds of four common chaparral plants under various fuel bed configurations and ambient...

  19. Dependability analysis of parallel systems using a simulation-based approach. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Sawyer, Darren Charles

    1994-01-01

    The analysis of dependability in large, complex, parallel systems executing real applications or workloads is examined in this thesis. To effectively demonstrate the wide range of dependability problems that can be analyzed through simulation, the analysis of three case studies is presented. For each case, the organization of the simulation model used is outlined, and the results from simulated fault injection experiments are explained, showing the usefulness of this method in dependability modeling of large parallel systems. The simulation models are constructed using DEPEND and C++. Where possible, methods to increase dependability are derived from the experimental results. Another interesting facet of all three cases is the presence of some kind of workload or application executing in the simulation while faults are injected. This provides a completely new dimension to this type of study, not possible to model accurately with analytical approaches.

  20. On the influences of key modelling constants of large eddy simulations for large-scale compartment fires predictions

    NASA Astrophysics Data System (ADS)

    Yuen, Anthony C. Y.; Yeoh, Guan H.; Timchenko, Victoria; Cheung, Sherman C. P.; Chan, Qing N.; Chen, Timothy

    2017-09-01

    An in-house large eddy simulation (LES) based fire field model has been developed for large-scale compartment fire simulations. The model incorporates four major components: fully coupled subgrid-scale turbulence, combustion, soot, and radiation models. It is designed to simulate the temporal and fluid-dynamical effects of turbulent reacting flow for non-premixed diffusion flames. Parametric studies were performed based on a large-scale fire experiment carried out in a 39-m long test hall facility. Turbulent Prandtl and Schmidt numbers ranging from 0.2 to 0.5 and Smagorinsky constants ranging from 0.18 to 0.23 were investigated. It was found that the temperature and flow field predictions were most accurate with both the turbulent Prandtl and Schmidt numbers set to 0.3 and a Smagorinsky constant of 0.2. In addition, by utilising a set of numerically verified key modelling parameters, the smoke filling process was successfully captured by the present LES model.
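    The Smagorinsky constant varied in this study enters the subgrid-scale model through the eddy viscosity nu_t = (C_s * delta)^2 * |S|, with |S| = sqrt(2 S_ij S_ij). The short sketch below evaluates that standard formula for a pure-shear velocity gradient; the numbers are illustrative, not values from the paper.

```python
import numpy as np

def smagorinsky_nu_t(grad_u, delta, c_s=0.2):
    """Smagorinsky subgrid-scale eddy viscosity:
    nu_t = (C_s * delta)**2 * |S|, with |S| = sqrt(2 S_ij S_ij).

    grad_u: 3x3 velocity-gradient tensor du_i/dx_j.
    delta:  grid filter width.
    """
    s = 0.5 * (grad_u + grad_u.T)          # strain-rate tensor S_ij
    s_mag = np.sqrt(2.0 * np.sum(s * s))   # |S|
    return (c_s * delta) ** 2 * s_mag

# Pure shear du/dy = 10 s^-1 on a 0.1 m filter width
grad_u = np.array([[0.0, 10.0, 0.0],
                   [0.0,  0.0, 0.0],
                   [0.0,  0.0, 0.0]])
nu_t = smagorinsky_nu_t(grad_u, delta=0.1, c_s=0.2)
```

    Raising C_s from 0.18 to 0.23, as in the parametric study, scales nu_t by (0.23/0.18)^2, i.e. roughly 63% more subgrid dissipation.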

  1. Modifying a dynamic global vegetation model for simulating large spatial scale land surface water balance

    NASA Astrophysics Data System (ADS)

    Tang, G.; Bartlein, P. J.

    2012-01-01

    Water balance models of simple structure are easier to grasp and more clearly connect cause and effect than models of complex structure. Such models are essential for studying large spatial scale land surface water balance in the context of climate and land cover change, both natural and anthropogenic. This study aims to (i) develop a large spatial scale water balance model by modifying a dynamic global vegetation model (DGVM), and (ii) test the model's performance in simulating actual evapotranspiration (ET), soil moisture and surface runoff for the coterminous United States (US). Toward these ends, we first introduce the "LPJ-Hydrology" (LH) model, developed by incorporating satellite-based land covers into the Lund-Potsdam-Jena (LPJ) DGVM instead of simulating them dynamically. We then ran LH using historical (1982-2006) climate data and satellite-based land covers at 2.5 arc-min grid cells. The simulated ET, soil moisture and surface runoff were compared to existing sets of observed or simulated data for the US. The results indicate that LH captures well the variation of monthly actual ET (R2 = 0.61, p < 0.01) in the Everglades of Florida over the years 1996-2001. The modeled monthly soil moisture for Illinois agrees well (R2 = 0.79, p < 0.01) with observations over the years 1984-2001. The modeled monthly stream flow for most of the 12 major rivers in the US is consistent (R2 > 0.46, p < 0.01; Nash-Sutcliffe coefficients > 0.52) with observed values over the years 1982-2006. The modeled spatial patterns of annual ET and surface runoff are in accordance with previously published data. Compared to its predecessor, LH simulates monthly stream flow in winter and early spring better by incorporating the effects of solar radiation on snowmelt. Overall, this study proves the feasibility of incorporating satellite-based land covers into a DGVM for simulating large spatial scale land surface water balance. The LH model developed in this study should be a useful tool for studying the effects of climate and land cover change on land surface hydrology at large spatial scales.
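    The Nash-Sutcliffe coefficient reported for the streamflow comparison is straightforward to compute: it is one minus the ratio of the model's squared error to the variance of the observations. The sketch below uses made-up monthly flows, not LH output.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / SST.

    NSE = 1 is a perfect fit; NSE <= 0 means the model predicts no
    better than the mean of the observations.
    """
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    sse = np.sum((observed - simulated) ** 2)
    sst = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - sse / sst

# Hypothetical monthly streamflow (arbitrary units)
obs = np.array([12.0, 30.0, 55.0, 80.0, 60.0, 35.0, 20.0, 15.0])
sim = np.array([14.0, 28.0, 50.0, 84.0, 57.0, 38.0, 18.0, 16.0])
nse = nash_sutcliffe(obs, sim)
```

    Values above 0.5, like those reported in the record, are commonly read as satisfactory hydrological model performance.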

  2. Evaluating the Performance of the Goddard Multi-Scale Modeling Framework against GPM, TRMM and CloudSat/CALIPSO Products

    NASA Astrophysics Data System (ADS)

    Chern, J. D.; Tao, W. K.; Lang, S. E.; Matsui, T.; Mohr, K. I.

    2014-12-01

    Four six-month (March-August 2014) experiments with the Goddard Multi-scale Modeling Framework (MMF) were performed to study the impacts of different Goddard one-moment bulk microphysical schemes and large-scale forcings on the performance of the MMF. Recently, a new Goddard one-moment bulk microphysics scheme with four ice classes (cloud ice, snow, graupel, and frozen drops/hail) was developed based on cloud-resolving model simulations with large-scale forcings from field campaign observations. The new scheme has been successfully implemented in the MMF, and two MMF experiments were carried out with this new scheme and the old three-ice-class (cloud ice, snow, graupel) scheme. The MMF has global coverage and can rigorously evaluate microphysics performance for different cloud regimes. The results show that the MMF with the new scheme outperformed the old one. The MMF simulations are also strongly affected by the interaction between large-scale and cloud-scale processes. Two MMF sensitivity experiments, with and without nudging of large-scale forcings to those of the ERA-Interim reanalysis, were carried out to study the impacts of large-scale forcings. The simulated mean and variability of surface precipitation, cloud types, and cloud properties such as cloud amount, hydrometeor vertical profiles, and cloud water contents in different geographic locations and climate regimes are evaluated against GPM, TRMM, and CloudSat/CALIPSO satellite observations. The Goddard MMF has also been coupled with the Goddard Satellite Data Simulation Unit (G-SDSU), a system with multi-satellite, multi-sensor, and multi-spectrum satellite simulators. The statistics of MMF-simulated radiances and backscattering can be directly compared with satellite observations to assess the strengths and/or deficiencies of MMF simulations and provide guidance on how to improve the MMF and its microphysics.

  3. Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware

    PubMed Central

    Knight, James C.; Tully, Philip J.; Kaplan, Bernhard A.; Lansner, Anders; Furber, Steve B.

    2016-01-01

    SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 2.0 × 10^4 neurons and 5.1 × 10^7 plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, to match the run-time of our SpiNNaker simulation, the supercomputer uses approximately 45× more power. This suggests that cheaper, more power-efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models. PMID:27092061

  4. Wall-Resolved Large-Eddy Simulation of Flow Separation Over NASA Wall-Mounted Hump

    NASA Technical Reports Server (NTRS)

    Uzun, Ali; Malik, Mujeeb R.

    2017-01-01

    This paper reports the findings from a study that applies wall-resolved large-eddy simulation to investigate flow separation over the NASA wall-mounted hump geometry. Despite its conceptually simple flow configuration, this benchmark problem has proven to be a challenging test case for various turbulence simulation methods that have attempted to predict flow separation arising from the adverse pressure gradient on the aft region of the hump. The momentum-thickness Reynolds number of the incoming boundary layer has a value that is near the upper limit achieved by recent direct numerical simulation and large-eddy simulation of incompressible turbulent boundary layers. The high Reynolds number of the problem necessitates a significant number of grid points for wall-resolved calculations. The present simulations show a significant improvement in the separation-bubble length prediction compared to Reynolds-Averaged Navier-Stokes calculations. The current simulations also provide good overall prediction of the skin-friction distribution, including the relaminarization observed over the front portion of the hump due to the strong favorable pressure gradient. We discuss a number of problems that were encountered during the course of this work and present possible solutions. A systematic study regarding the effect of domain span, subgrid-scale model, tunnel back pressure, upstream boundary layer conditions and grid refinement is performed. The predicted separation-bubble length is found to be sensitive to the span of the domain. Despite the large number of grid points used in the simulations, some differences between the predictions and experimental observations still exist (particularly for Reynolds stresses) in the case of the wide-span simulation, suggesting that additional grid resolution may be required.

  5. Transferability of optimally-selected climate models in the quantification of climate change impacts on hydrology

    NASA Astrophysics Data System (ADS)

    Chen, Jie; Brissette, François P.; Lucas-Picher, Philippe

    2016-11-01

    Given the ever-increasing number of climate change simulations being carried out, it has become impractical to use all of them to cover the uncertainty of climate change impacts. Various methods have been proposed to optimally select subsets of a large ensemble of climate simulations for impact studies. However, the behaviour of optimally-selected subsets of climate simulations for climate change impacts is unknown, since the transfer process from climate projections to the impact study world is usually highly non-linear. Consequently, this study investigates the transferability of optimally-selected subsets of climate simulations in the case of hydrological impacts. Two different methods were used for the optimal selection of subsets of climate scenarios, and both were found to be capable of adequately representing the spread of selected climate model variables contained in the original large ensemble. However, in both cases, the optimal subsets had limited transferability to hydrological impacts. To capture a similar variability in the impact model world, many more simulations have to be used than those that are needed to simply cover variability from the climate model variables' perspective. Overall, both optimal subset selection methods were better than random selection when small subsets were selected from a large ensemble for impact studies. However, as the number of selected simulations increased, random selection often performed better than the two optimal methods. To ensure adequate uncertainty coverage, the results of this study imply that selecting as many climate change simulations as possible is the best avenue. Where this was not possible, the two optimal methods were found to perform adequately.

  6. Galaxy clusters in local Universe simulations without density constraints: a long uphill struggle

    NASA Astrophysics Data System (ADS)

    Sorce, Jenny G.

    2018-06-01

    Galaxy clusters are excellent cosmological probes provided that their formation and evolution within the large-scale environment are precisely understood. Studies with simulated galaxy clusters have therefore flourished. However, detailed comparisons between simulated and observed clusters and their population - the galaxies - are complicated by the diversity of clusters and their surrounding environments. An original way to legitimize the one-to-one comparison exercise down to the details, initiated by Bertschinger as early as 1987, is to produce simulations constrained to resemble the cluster under study within its large-scale environment. Subsequently, several methods have emerged to produce simulations that look like the local Universe. This paper highlights one of these methods and the essential steps to obtain simulations that not only resemble the local large-scale structure but also host the local clusters. It includes a new modeling of the radial peculiar velocity uncertainties to remove the observed correlation between the decrease of the simulated cluster masses and the decrease, with distance from us, of the amount of data used as constraints. This method has the particularity of using solely radial peculiar velocities as constraints: no additional density constraints are required to obtain local cluster simulacra. The new resulting simulations host dark matter halos that match the most prominent local clusters, such as Coma. Zoom-in simulations of the latter, and of a volume larger than the inner sphere of radius 30 h^-1 Mpc, now become possible for studying local clusters and their effects. Mapping the local Sunyaev-Zel'dovich and Sachs-Wolfe effects can follow.

  7. Technology-enhanced simulation in emergency medicine: a systematic review and meta-analysis.

    PubMed

    Ilgen, Jonathan S; Sherbino, Jonathan; Cook, David A

    2013-02-01

    Technology-enhanced simulation is used frequently in emergency medicine (EM) training programs. Evidence for its effectiveness, however, remains unclear. The objective of this study was to evaluate the effectiveness of technology-enhanced simulation for training in EM and to identify instructional design features associated with improved outcomes by conducting a systematic review. The authors systematically searched MEDLINE, EMBASE, CINAHL, ERIC, PsycINFO, Scopus, key journals, and previous review bibliographies through May 2011. Original research articles in any language were selected if they compared simulation to no intervention or another educational activity for the purposes of training EM health professionals (including student and practicing physicians, midlevel providers, nurses, and prehospital providers). Reviewers evaluated study quality and abstracted information on learners, instructional design (curricular integration, feedback, repetitive practice, mastery learning), and outcomes. From a collection of 10,903 articles, 85 eligible studies enrolling 6,099 EM learners were identified. Of these, 56 studies compared simulation to no intervention, 12 compared simulation with another form of instruction, and 19 compared two forms of simulation. Effect sizes were pooled using a random-effects model. Heterogeneity among these studies was large (I^2 ≥ 50%). Among studies comparing simulation to no intervention, pooled effect sizes were large (range = 1.13 to 1.48) for knowledge, time, and skills and small to moderate for behaviors with patients (0.62) and patient effects (0.43; all p < 0.02 except patient effects, p = 0.12). Among comparisons between simulation and other forms of instruction, the pooled effect sizes were small (≤ 0.33) for knowledge, time, and process skills (all p > 0.1).
Qualitative comparisons of different simulation curricula are limited, although feedback, mastery learning, and higher fidelity were associated with improved learning outcomes. Technology-enhanced simulation for EM learners is associated with moderate or large favorable effects in comparison with no intervention and generally small and nonsignificant benefits in comparison with other instruction. Future research should investigate the features that lead to effective simulation-based instructional design. © 2013 by the Society for Academic Emergency Medicine.
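    The pooled estimates and I² statistic above come from a random-effects meta-analysis. A minimal illustrative implementation, assuming the common DerSimonian-Laird estimator (the review does not state which estimator it used) and hypothetical inputs:

```python
def pool_random_effects(effects, variances):
    """DerSimonian-Laird random-effects pooling of per-study effect
    sizes; returns the pooled effect and the I^2 heterogeneity share.

    Illustrative only: the review reports pooled effects and I^2 but
    does not state which random-effects estimator it used.
    """
    k = len(effects)
    w = [1.0 / v for v in variances]                  # inverse-variance weights
    sw = sum(w)
    mean_fe = sum(wi * e for wi, e in zip(w, effects)) / sw
    q = sum(wi * (e - mean_fe) ** 2 for wi, e in zip(w, effects))
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)                # between-study variance
    i2 = max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0
    w_re = [1.0 / (v + tau2) for v in variances]      # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    return pooled, i2
```

With low between-study heterogeneity the result collapses to the fixed-effect (inverse-variance) mean; with high heterogeneity the weights flatten toward equality.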

  8. Large-eddy simulations of a Salt Lake Valley cold-air pool

    NASA Astrophysics Data System (ADS)

    Crosman, Erik T.; Horel, John D.

    2017-09-01

    Persistent cold-air pools are often poorly forecast by mesoscale numerical weather prediction models, in part due to inadequate parameterization of planetary boundary-layer physics in stable atmospheric conditions, and also because of errors in the initialization and treatment of the model surface state. In this study, an improved numerical simulation of the 27-30 January 2011 cold-air pool in Utah's Great Salt Lake Basin is obtained using a large-eddy simulation with more realistic surface state characterization. Compared to a Weather Research and Forecasting model configuration run as a mesoscale model with a planetary boundary-layer scheme where turbulence is highly parameterized, the large-eddy simulation more accurately captured turbulent interactions between the stable boundary layer and the flow aloft. The simulations were also found to be sensitive to variations in the Great Salt Lake temperature and Salt Lake Valley snow cover, illustrating the importance of land surface state in modelling cold-air pools.

  9. Large blast and thermal simulator advanced concept driver design by computational fluid dynamics. Final report, 1987-1989

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Opalka, K.O.

    1989-08-01

    The construction of a large test facility has been proposed for simulating the blast and thermal environment resulting from nuclear explosions. This facility would be used to test the survivability and vulnerability of military equipment such as trucks, tanks, and helicopters in a simulated thermal and blast environment, and to perform research into nuclear blast phenomenology. The proposed advanced design concepts, heating of the driver gas and fast-acting throat valves for wave shaping, are described, and the results of CFD studies to advance these new technical concepts for simulating decaying blast waves are reported.

  10. SU-E-T-586: Field Size Dependence of Output Factor for Uniform Scanning Proton Beams: A Comparison of TPS Calculation, Measurement and Monte Carlo Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Y; Singh, H; Islam, M

    2014-06-01

    Purpose: Output dependence on field size for uniform scanning beams, and the accuracy of treatment planning system (TPS) calculation, are not well studied. The purpose of this work is to investigate the dependence of output on field size for uniform scanning beams and to compare it among TPS calculation, measurements, and Monte Carlo simulations. Methods: Field size dependence was studied using various field sizes between 2.5 cm and 10 cm in diameter. The field size factor was studied for a number of proton range and modulation combinations based on output at the center of the spread-out Bragg peak, normalized to a 10 cm diameter field. Three methods were used and compared in this study: 1) TPS calculation, 2) ionization chamber measurement, and 3) Monte Carlo simulation. The XiO TPS (Elekta, St. Louis) was used to calculate the output factor using a pencil beam algorithm; a pinpoint ionization chamber was used for measurements; and the Fluka code was used for Monte Carlo simulations. Results: The field size factor varied with proton beam parameters, such as range, modulation, and calibration depth, and could decrease by over 10% from a 10 cm to a 3 cm diameter field for a large range proton beam. The XiO TPS predicted the field size factor relatively well at large field sizes, but could differ from measurements by 5% or more for small field and large range beams. Monte Carlo simulations predicted the field size factor within 1.5% of measurements. Conclusion: The output factor can vary largely with field size and needs to be accounted for to ensure accurate proton beam delivery. This is especially important for small field beams such as in stereotactic proton therapy, where the field size dependence is large and TPS calculation is inaccurate. Measurements or Monte Carlo simulations are recommended for output determination in such cases.

  11. Wall Modeled Large Eddy Simulation of Airfoil Trailing Edge Noise

    NASA Astrophysics Data System (ADS)

    Kocheemoolayil, Joseph; Lele, Sanjiva

    2014-11-01

    Large eddy simulation (LES) of airfoil trailing edge noise has largely been restricted to low Reynolds numbers due to prohibitive computational cost. Wall modeled LES (WMLES) is a computationally cheaper alternative that makes full-scale Reynolds numbers relevant to large wind turbines accessible. A systematic investigation of trailing edge noise prediction using WMLES is conducted. Detailed comparisons are made with experimental data. The stress boundary condition from a wall model does not constrain the fluctuating velocity to vanish at the wall. This limitation has profound implications for trailing edge noise prediction. The simulation over-predicts the intensity of fluctuating wall pressure and far-field noise. An improved wall model formulation that minimizes the over-prediction of fluctuating wall pressure is proposed and carefully validated. The flow configurations chosen for the study are from the workshop on benchmark problems for airframe noise computations. The large eddy simulation database is used to examine the adequacy of scaling laws that quantify the dependence of trailing edge noise on Mach number, Reynolds number and angle of attack. Simplifying assumptions invoked in engineering approaches towards predicting trailing edge noise are critically evaluated. We gratefully acknowledge financial support from GE Global Research and thank Cascade Technologies Inc. for providing access to their massively-parallel large eddy simulation framework.

  12. Interprofessional simulated learning: short-term associations between simulation and interprofessional collaboration

    PubMed Central

    2011-01-01

    Background Health professions education programs use simulation for teaching and maintaining clinical procedural skills. Simulated learning activities are also becoming useful methods of instruction for interprofessional education. The simulation environment for interprofessional training allows participants to explore collaborative ways of improving communicative aspects of clinical care. Simulation has shown communication improvement within and between health care professions, but the impacts of teamwork simulation on perceptions of others' interprofessional practices and one's own attitudes toward teamwork are largely unknown. Methods A single-arm intervention study tested the association between simulated team practice and measures of interprofessional collaboration, nurse-physician relationships, and attitudes toward health care teams. Participants were 154 post-licensure nurses, allied health professionals, and physicians. Self- and proxy-report survey measurements were taken before simulation training and two and six weeks after. Results Multilevel modeling revealed little change over the study period. Variation in interprofessional collaboration and attitudes was largely attributable to between-person characteristics. A constructed categorical variable indexing 'leadership capacity' found that participants with highest and lowest values were more likely to endorse shared team leadership over physician centrality. Conclusion Results from this study indicate that focusing interprofessional simulation education on shared leadership may provide the most leverage to improve interprofessional care. PMID:21443779

  13. A Very Large Eddy Simulation of the Nonreacting Flow in a Single-Element Lean Direct Injection Combustor Using PRNS with a Nonlinear Subscale Model

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2009-01-01

    Very large eddy simulation (VLES) of the nonreacting turbulent flow in a single-element lean direct injection (LDI) combustor has been successfully performed via the approach known as partially resolved numerical simulation (PRNS/VLES) using a nonlinear subscale model. The grid is the same as the one used in a previous RANS simulation, which was considered too coarse for a traditional LES. In this study, we first carry out a steady RANS simulation to provide the initial flow field for the subsequent PRNS/VLES simulation. We have also carried out an unsteady RANS (URANS) simulation for the purpose of comparing its results with those of the PRNS/VLES simulation. In addition, these calculated results are compared with the experimental data. The present effort has demonstrated that the PRNS/VLES approach, while using a RANS type of grid, is able to reveal the dynamically important, unsteady large-scale turbulent structures occurring in the flow field of a single-element LDI combustor. The interactions of these coherent structures play a critical role in the dispersion of the fuel and, hence, the mixing between the fuel and the oxidizer in a combustor.

  14. Efficient Constant-Time Complexity Algorithm for Stochastic Simulation of Large Reaction Networks.

    PubMed

    Thanh, Vo Hong; Zunino, Roberto; Priami, Corrado

    2017-01-01

    Exact stochastic simulation is an indispensable tool for a quantitative study of biochemical reaction networks. The simulation realizes the time evolution of the model by randomly choosing a reaction to fire and updating the system state accordingly, with a probability proportional to the reaction propensity. Two computationally expensive tasks in simulating large biochemical networks are the selection of next reaction firings and the update of reaction propensities due to state changes. We present in this work a new exact algorithm to optimize both of these simulation bottlenecks. Our algorithm employs composition-rejection on the propensity bounds of reactions to select the next reaction firing. The selection of next reaction firings is independent of the number of reactions, while the update of propensities is skipped and performed only when necessary. It therefore provides favorable scaling of the computational complexity in simulating large reaction networks. We benchmark our new algorithm against state-of-the-art algorithms available in the literature to demonstrate its applicability and efficiency.
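    The selection step described above can be sketched as follows; this is an illustrative reading of composition-rejection on propensity bounds, not the authors' code:

```python
import math
import random

def select_reaction_cr(propensities, rng=random):
    """Composition-rejection selection of the next reaction firing.

    Reactions are bucketed by propensity into power-of-two ranges
    [2**(k-1), 2**k); a bucket is chosen proportionally to its summed
    propensity (composition), then a member is chosen by rejection
    sampling against the bucket's upper bound.
    """
    buckets = {}   # exponent k -> list of reaction indices
    totals = {}    # exponent k -> summed propensity of the bucket
    for i, a in enumerate(propensities):
        if a <= 0.0:
            continue                   # disabled reactions are skipped
        k = math.frexp(a)[1]           # a lies in [2**(k-1), 2**k)
        buckets.setdefault(k, []).append(i)
        totals[k] = totals.get(k, 0.0) + a
    # composition step: pick a bucket with probability total_k / total
    r = rng.random() * sum(totals.values())
    for k, t in totals.items():
        if r < t:
            break
        r -= t
    # rejection step: uniform candidate, accept with prob a_j / 2**k
    ub = 2.0 ** k
    members = buckets[k]
    while True:
        j = members[rng.randrange(len(members))]
        if rng.random() * ub <= propensities[j]:
            return j
```

Because each candidate's propensity is at least half the bucket's upper bound, the rejection loop accepts with probability above one half, giving the constant-time behavior the title refers to.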

  15. Impact of spectral nudging on the downscaling of tropical cyclones in regional climate simulations

    NASA Astrophysics Data System (ADS)

    Choi, Suk-Jin; Lee, Dong-Kyou

    2016-06-01

    This study investigated the simulations of three months of seasonal tropical cyclone (TC) activity over the western North Pacific using the Advanced Research WRF Model. In the control experiment (CTL), the TC frequency was considerably overestimated. Additionally, the tracks of some TCs tended to have larger radii of curvature and were shifted eastward. The large-scale environments of westerly monsoon flows and subtropical Pacific highs were unreasonably simulated. The overestimated frequency of TC formation was attributed to a strengthened westerly wind field in the southern quadrants of the TC center. In comparison with the experiment with the spectral nudging method, the strengthened wind speed was mainly modulated by large-scale flow greater than approximately 1000 km in the model domain. The spurious formation and undesirable tracks of TCs in the CTL were considerably improved by reproducing realistic large-scale atmospheric monsoon circulation, with substantial adjustment between the large-scale flow in the model domain and the large-scale boundary forcing modified by the spectral nudging method. The realistic monsoon circulation played a vital role in simulating realistic TCs. This reveals that, in downscaling from large-scale fields for regional climate simulations, the scale interaction between model-generated regional features and forced large-scale fields should be considered, and spectral nudging is a desirable tool for such downscaling.
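    The core idea of spectral nudging, relaxing only the scales larger than a cutoff toward the driving fields, can be sketched in a few lines. A minimal 2-D illustration, not the WRF implementation (function and parameter names are assumptions):

```python
import numpy as np

def spectral_nudge(model_field, driver_field, cutoff_k, alpha):
    """Relax only the large scales (|k| <= cutoff_k) of a 2-D model
    field toward the driving (large-scale forcing) field.

    alpha is the nudging strength per call (0..1); names and the
    sharp spectral cutoff are illustrative choices.
    """
    fm = np.fft.fft2(model_field)
    fd = np.fft.fft2(driver_field)
    ny, nx = model_field.shape
    ky = np.fft.fftfreq(ny) * ny          # integer wavenumbers
    kx = np.fft.fftfreq(nx) * nx
    kmag = np.sqrt(ky[:, None] ** 2 + kx[None, :] ** 2)
    low = kmag <= cutoff_k                # the scales to be nudged
    fm[low] += alpha * (fd[low] - fm[low])
    return np.real(np.fft.ifft2(fm))
```

The small scales (above the cutoff) are left untouched, so the regional model remains free to develop its own fine-scale features while its large-scale circulation is kept close to the boundary forcing.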

  16. Replicable Interprofessional Competency Outcomes from High-Volume, Inter-Institutional, Interprofessional Simulation

    PubMed Central

    Bambini, Deborah; Emery, Matthew; de Voest, Margaret; Meny, Lisa; Shoemaker, Michael J.

    2016-01-01

    There are significant limitations among the few prior studies that have examined the development and implementation of interprofessional education (IPE) experiences to accommodate a high volume of students from several disciplines and from different institutions. The present study addressed these gaps by seeking to determine the extent to which a single, large, inter-institutional, and IPE simulation event improves student perceptions of the importance and relevance of IPE and simulation as a learning modality, whether there is a difference in students’ perceptions among disciplines, and whether the results are reproducible. A total of 290 medical, nursing, pharmacy, and physical therapy students participated in one of two large, inter-institutional, IPE simulation events. Measurements included student perceptions about their simulation experience using the Attitude Towards Teamwork in Training Undergoing Designed Educational Simulation (ATTITUDES) Questionnaire and open-ended questions related to teamwork and communication. Results demonstrated a statistically significant improvement across all ATTITUDES subscales, while time management, role confusion, collaboration, and mutual support emerged as significant themes. Results of the present study indicate that a single IPE simulation event can reproducibly result in significant and educationally meaningful improvements in student perceptions towards teamwork, IPE, and simulation as a learning modality. PMID:28970407

  17. Large Eddy Simulation of Flame-Turbulence Interactions in a LOX-CH4 Shear Coaxial Injector

    DTIC Science & Technology

    2012-01-01

    This work presents a Large Eddy Simulation of a cryogenic flame issued from a LOX-CH4 shear coaxial injector; the operating pressure is above the critical pressure of both propellants. A previous study on LOX/H2 flames pointed out the limitations of central schemes in predicting the large heat transfer from dense to light fluids.

  18. Using Emulation and Simulation to Understand the Large-Scale Behavior of the Internet.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adalsteinsson, Helgi; Armstrong, Robert C.; Chiang, Ken

    2008-10-01

    We report on the work done in the late-start LDRD "Using Emulation and Simulation to Understand the Large-Scale Behavior of the Internet." We describe the creation of a research platform that emulates many thousands of machines to be used for the study of large-scale internet behavior. We describe a proof-of-concept simple attack we performed in this environment. We describe the successful capture of a Storm bot and, from the study of the bot and further literature search, establish large-scale aspects we seek to understand via emulation of Storm on our research platform in possible follow-on work. Finally, we discuss possible future work.

  19. Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Givi, Peyman; Madnia, Cyrus K.; Steinberger, Craig J.

    1990-01-01

    This research is concerned with the implementation of advanced computational schemes based on large eddy simulations (LES) and direct numerical simulations (DNS) to study the phenomenon of mixing and its coupling with chemical reactions in compressible turbulent flows. In the efforts related to LES, a research program was initiated to extend the present capabilities of this method for the treatment of chemically reacting flows. In the DNS efforts, the focus is on detailed investigations of the effects of compressibility, heat release, and non-equilibrium kinetics modeling in high speed reacting flows. Emphasis was placed on the simulation of simple flows, namely homogeneous compressible flows and temporally developing high speed mixing layers.

  20. Impact of Spatial Soil and Climate Input Data Aggregation on Regional Yield Simulations

    PubMed Central

    Hoffmann, Holger; Zhao, Gang; Asseng, Senthold; Bindi, Marco; Biernath, Christian; Constantin, Julie; Coucheney, Elsa; Dechow, Rene; Doro, Luca; Eckersten, Henrik; Gaiser, Thomas; Grosz, Balázs; Heinlein, Florian; Kassie, Belay T.; Kersebaum, Kurt-Christian; Klein, Christian; Kuhnert, Matthias; Lewan, Elisabet; Moriondo, Marco; Nendel, Claas; Priesack, Eckart; Raynal, Helene; Roggero, Pier P.; Rötter, Reimund P.; Siebert, Stefan; Specka, Xenia; Tao, Fulu; Teixeira, Edmar; Trombi, Giacomo; Wallach, Daniel; Weihermüller, Lutz; Yeluripati, Jagadeesh; Ewert, Frank

    2016-01-01

    We show the error in water-limited yields simulated by crop models which is associated with spatially aggregated soil and climate input data. Crop simulations at large scales (regional, national, continental) frequently use input data of low resolution. Therefore, climate and soil data are often generated via averaging and sampling by area majority. This may bias simulated yields at large scales, varying widely across models. Thus, we evaluated the error associated with spatially aggregated soil and climate data for 14 crop models. Yields of winter wheat and silage maize were simulated under water-limited production conditions. We calculated this error from crop yields simulated at spatial resolutions from 1 to 100 km for the state of North Rhine-Westphalia, Germany. Most models showed yields biased by <15% when aggregating only soil data. The relative mean absolute error (rMAE) of most models using aggregated soil data was in the range of, or larger than, the inter-annual or inter-model variability in yields. This error increased further when both climate and soil data were aggregated. Distinct error patterns indicate that the rMAE may be estimated from few soil variables. Illustrating the range of these aggregation effects across models, this study is a first step towards an ex-ante assessment of aggregation errors in large-scale simulations. PMID:27055028
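    The rMAE used above can be written down directly; a minimal sketch assuming the common definition (the abstract does not give the paper's exact normalization):

```python
import numpy as np

def rmae(yields_aggregated, yields_reference):
    """Relative mean absolute error of yields simulated with
    aggregated inputs against high-resolution reference yields.

    A common definition (MAE divided by the reference mean) is
    assumed here for illustration.
    """
    ya = np.asarray(yields_aggregated, dtype=float)
    yr = np.asarray(yields_reference, dtype=float)
    return float(np.mean(np.abs(ya - yr)) / np.mean(yr))
```

For example, aggregated yields of 110 and 90 against reference yields of 100 and 100 give an rMAE of 0.1, i.e. a 10% relative error.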

  1. Impact of Spatial Soil and Climate Input Data Aggregation on Regional Yield Simulations.

    PubMed

    Hoffmann, Holger; Zhao, Gang; Asseng, Senthold; Bindi, Marco; Biernath, Christian; Constantin, Julie; Coucheney, Elsa; Dechow, Rene; Doro, Luca; Eckersten, Henrik; Gaiser, Thomas; Grosz, Balázs; Heinlein, Florian; Kassie, Belay T; Kersebaum, Kurt-Christian; Klein, Christian; Kuhnert, Matthias; Lewan, Elisabet; Moriondo, Marco; Nendel, Claas; Priesack, Eckart; Raynal, Helene; Roggero, Pier P; Rötter, Reimund P; Siebert, Stefan; Specka, Xenia; Tao, Fulu; Teixeira, Edmar; Trombi, Giacomo; Wallach, Daniel; Weihermüller, Lutz; Yeluripati, Jagadeesh; Ewert, Frank

    2016-01-01

    We show the error in water-limited yields simulated by crop models which is associated with spatially aggregated soil and climate input data. Crop simulations at large scales (regional, national, continental) frequently use input data of low resolution. Therefore, climate and soil data are often generated via averaging and sampling by area majority. This may bias simulated yields at large scales, varying widely across models. Thus, we evaluated the error associated with spatially aggregated soil and climate data for 14 crop models. Yields of winter wheat and silage maize were simulated under water-limited production conditions. We calculated this error from crop yields simulated at spatial resolutions from 1 to 100 km for the state of North Rhine-Westphalia, Germany. Most models showed yields biased by <15% when aggregating only soil data. The relative mean absolute error (rMAE) of most models using aggregated soil data was in the range of, or larger than, the inter-annual or inter-model variability in yields. This error increased further when both climate and soil data were aggregated. Distinct error patterns indicate that the rMAE may be estimated from few soil variables. Illustrating the range of these aggregation effects across models, this study is a first step towards an ex-ante assessment of aggregation errors in large-scale simulations.

  2. Unsteady flow simulations around complex geometries using stationary or rotating unstructured grids

    NASA Astrophysics Data System (ADS)

    Sezer-Uzol, Nilay

    In this research, the computational analysis of three-dimensional, unsteady, separated, vortical flows around complex geometries is performed using stationary or moving unstructured grids. Two main engineering problems are investigated. The first is the unsteady simulation of a ship airwake, in which helicopter operations become even more challenging, using stationary unstructured grids. The second is the unsteady simulation of wind turbine rotor flow fields using moving unstructured grids that rotate with the whole three-dimensional rigid rotor geometry. The three-dimensional, unsteady, parallel, unstructured, finite volume flow solver PUMA2 is used for the computational fluid dynamics (CFD) simulations considered in this research. The code is modified to have a moving-grid capability to perform three-dimensional, time-dependent rotor simulations. An instantaneous log-law wall model for Large Eddy Simulations is also implemented in PUMA2 to investigate the very large Reynolds number flow fields of rotating blades. To verify the code modifications, several sample test cases are also considered. In addition, interdisciplinary studies, aimed at providing new tools and insights to the aerospace and wind energy scientific communities, are carried out by focusing on the coupling of ship airwake CFD simulations with helicopter flight dynamics and control analysis, the coupling of wind turbine rotor CFD simulations with aeroacoustic analysis, and the analysis of these time-dependent, large-scale CFD simulations with the help of a computational monitoring, steering and visualization tool, POSSE.

  3. Large-scale dynamo action precedes turbulence in shearing box simulations of the magnetorotational instability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhat, Pallavi; Ebrahimi, Fatima; Blackman, Eric G.

    Here, we study the dynamo generation (exponential growth) of large-scale (planar averaged) fields in unstratified shearing box simulations of the magnetorotational instability (MRI). In contrast to previous studies restricted to horizontal (x–y) averaging, we also demonstrate the presence of large-scale fields when vertical (y–z) averaging is employed instead. By computing space–time planar averaged fields and power spectra, we find large-scale dynamo action in the early MRI growth phase – a previously unidentified feature. Non-axisymmetric linear MRI modes with low horizontal wavenumbers and vertical wavenumbers near that of expected maximal growth amplify the large-scale fields exponentially before turbulence and high wavenumber fluctuations arise. Thus the large-scale dynamo requires only linear fluctuations but not non-linear turbulence (as defined by mode–mode coupling). Vertical averaging also allows for monitoring the evolution of the large-scale vertical field and we find that a feedback from horizontal low wavenumber MRI modes provides a clue as to why the large-scale vertical field sustains against turbulent diffusion in the non-linear saturation regime. We compute the terms in the mean field equations to identify the individual contributions to large-scale field growth for both types of averaging. The large-scale fields obtained from vertical averaging are found to compare well with global simulations and quasi-linear analytical analysis from a previous study by Ebrahimi & Blackman. We discuss the potential implications of these new results for understanding the large-scale MRI dynamo saturation and turbulence.
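    The two averaging choices, horizontal (x–y) and vertical (y–z), amount to planar means over different axes of the simulated field. A minimal NumPy sketch (the array layout is an assumption for illustration):

```python
import numpy as np

# B has shape (nx, ny, nz, 3): a vector field on a 3-D grid;
# the (x, y, z, component) axis order is an illustrative assumption.
rng = np.random.default_rng(0)
B = rng.standard_normal((16, 16, 16, 3))

B_xy = B.mean(axis=(0, 1))  # horizontal (x-y) average: profile in z
B_yz = B.mean(axis=(1, 2))  # vertical (y-z) average: profile in x
```

Each choice collapses two directions and leaves a one-dimensional "large-scale" profile, which is why the two averagings can reveal different mean-field structure in the same data.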

  4. Large-scale dynamo action precedes turbulence in shearing box simulations of the magnetorotational instability

    DOE PAGES

    Bhat, Pallavi; Ebrahimi, Fatima; Blackman, Eric G.

    2016-07-06

    Here, we study the dynamo generation (exponential growth) of large-scale (planar averaged) fields in unstratified shearing box simulations of the magnetorotational instability (MRI). In contrast to previous studies restricted to horizontal (x–y) averaging, we also demonstrate the presence of large-scale fields when vertical (y–z) averaging is employed instead. By computing space–time planar averaged fields and power spectra, we find large-scale dynamo action in the early MRI growth phase – a previously unidentified feature. Non-axisymmetric linear MRI modes with low horizontal wavenumbers and vertical wavenumbers near that of expected maximal growth amplify the large-scale fields exponentially before turbulence and high wavenumber fluctuations arise. Thus the large-scale dynamo requires only linear fluctuations but not non-linear turbulence (as defined by mode–mode coupling). Vertical averaging also allows for monitoring the evolution of the large-scale vertical field and we find that a feedback from horizontal low wavenumber MRI modes provides a clue as to why the large-scale vertical field sustains against turbulent diffusion in the non-linear saturation regime. We compute the terms in the mean field equations to identify the individual contributions to large-scale field growth for both types of averaging. The large-scale fields obtained from vertical averaging are found to compare well with global simulations and quasi-linear analytical analysis from a previous study by Ebrahimi & Blackman. We discuss the potential implications of these new results for understanding the large-scale MRI dynamo saturation and turbulence.

  5. Physical properties of the HIV-1 capsid from all-atom molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Perilla, Juan R.; Schulten, Klaus

    2017-07-01

    Human immunodeficiency virus type 1 (HIV-1) infection is highly dependent on its capsid. The capsid is a large container, made of ~1,300 proteins with altogether 4 million atoms. Although the capsid proteins are all identical, they nevertheless arrange themselves into a largely asymmetric structure made of hexamers and pentamers. The large number of degrees of freedom and lack of symmetry pose a challenge to studying the chemical details of the HIV capsid. Simulations of over 64 million atoms for over 1 μs allow us to conduct a comprehensive study of the chemical-physical properties of an empty HIV-1 capsid, including its electrostatics, vibrational and acoustic properties, and the effects of solvent (ions and water) on the capsid. The simulations reveal critical details about the capsid with implications to biological function.

  6. Large-eddy simulation using the finite element method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCallen, R.C.; Gresho, P.M.; Leone, J.M. Jr.

    1993-10-01

    In a large-eddy simulation (LES) of turbulent flows, the large-scale motion is calculated explicitly while the small-scale motion is modeled (i.e., approximated with semi-empirical relations). Typically, finite difference or spectral numerical schemes are used to generate an LES; the use of finite element methods (FEM) has been far less prominent. In this study, we demonstrate that FEM in combination with LES provides a viable tool for the study of turbulent, separating channel flows, specifically the flow over a two-dimensional backward-facing step. The combination of these methodologies brings together the advantages of each: LES provides a high degree of accuracy with a minimum of empiricism for turbulence modeling, and FEM provides a robust way to simulate flow in very complex domains of practical interest. Such a combination should prove very valuable to the engineering community.
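    The division of labor described above, explicit large scales plus a semi-empirical subgrid model, is classically closed with a Smagorinsky eddy viscosity. The abstract does not name the subgrid model used, so the following is an illustrative sketch only (2-D, with an assumed constant Cs):

```python
import numpy as np

def smagorinsky_nu_t(dudx, dudy, dvdx, dvdy, delta, cs=0.17):
    """Smagorinsky subgrid eddy viscosity in 2-D:
    nu_t = (Cs * Delta)**2 * |S|, with |S| = sqrt(2 S_ij S_ij).

    Smagorinsky is assumed here as the classical closure; Cs = 0.17
    is a commonly quoted illustrative value, and Delta is the
    filter (grid) width.
    """
    s11 = dudx
    s22 = dvdy
    s12 = 0.5 * (dudy + dvdx)          # symmetric strain-rate tensor
    smag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
    return (cs * delta) ** 2 * smag
```

In an FEM setting the velocity gradients above would be evaluated from the element shape-function derivatives at quadrature points, which is one reason the combination is attractive for complex domains.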

  7. Large-scale Density Structures in Magneto-rotational Disk Turbulence

    NASA Astrophysics Data System (ADS)

    Youdin, Andrew; Johansen, A.; Klahr, H.

    2009-01-01

    Turbulence generated by the magneto-rotational instability (MRI) is a strong candidate to drive accretion flows in disks, including sufficiently ionized regions of protoplanetary disks. The MRI is often studied in local shearing boxes, which model a small section of the disk at high resolution. I will present simulations of large, stratified shearing boxes which extend up to 10 gas scale-heights across. These simulations are a useful bridge to fully global disk simulations. We find that MRI turbulence produces large-scale, axisymmetric density perturbations. These structures are part of a zonal flow --- analogous to the banded flow in Jupiter's atmosphere --- which survives in near geostrophic balance for tens of orbits. The launching mechanism is large-scale magnetic tension generated by an inverse cascade. We demonstrate the robustness of these results by careful study of various box sizes, grid resolutions, and microscopic diffusion parameterizations. These gas structures can trap solid material (in the form of large dust or ice particles) with important implications for planet formation. Resolved disk images at mm-wavelengths (e.g. from ALMA) will verify or constrain the existence of these structures.

  8. An Analytical Comparison of the Fidelity of "Large Motion" Versus "Small Motion" Flight Simulators in a Rotorcraft Side-Step Task

    NASA Technical Reports Server (NTRS)

    Hess, Ronald A.

    1999-01-01

    This paper presents an analytical and experimental methodology for studying flight simulator fidelity. The task was a rotorcraft bob-up/down maneuver in which vertical acceleration constituted the motion cue. The task considered here is a side-step maneuver that differs from the bob-up in one important way: both roll and lateral acceleration cues are available to the pilot. It has been communicated to the author that in some Vertical Motion Simulator (VMS) studies, the lateral acceleration cue has been found to be the most important. It is of some interest to hypothesize how this motion cue associated with "outer-loop" lateral translation fits into the modeling procedure where only "inner-loop" motion cues were considered. This Note is an attempt at formulating such a hypothesis and analytically comparing a large-motion simulator, e.g., the VMS, with a small-motion simulator, e.g., a hexapod.

  9. Visual Data-Analytics of Large-Scale Parallel Discrete-Event Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ross, Caitlin; Carothers, Christopher D.; Mubarak, Misbah

    Parallel discrete-event simulation (PDES) is an important tool in the codesign of extreme-scale systems because PDES provides a cost-effective way to evaluate designs of high-performance computing systems. Optimistic synchronization algorithms for PDES, such as Time Warp, allow events to be processed without global synchronization among the processing elements. A rollback mechanism is provided when events are processed out of timestamp order. Although optimistic synchronization protocols enable the scalability of large-scale PDES, the performance of the simulations must be tuned to reduce the number of rollbacks and provide an improved simulation runtime. To enable efficient large-scale optimistic simulations, one has to gain insight into the factors that affect the rollback behavior and simulation performance. We developed a tool for ROSS model developers that gives them detailed metrics on the performance of their large-scale optimistic simulations at varying levels of simulation granularity. Model developers can use this information for parameter tuning of optimistic simulations in order to achieve better runtime and fewer rollbacks. In this work, we instrument the ROSS optimistic PDES framework to gather detailed statistics about the simulation engine. We have also developed an interactive visualization interface that uses the data collected by the ROSS instrumentation to understand the underlying behavior of the simulation engine. The interface connects real time to virtual time in the simulation and provides the ability to view simulation data at different granularities. We demonstrate the usefulness of our framework by performing a visual analysis of the dragonfly network topology model provided by the CODES simulation framework built on top of ROSS. The instrumentation needs to minimize overhead in order to accurately collect data about the simulation performance.
To ensure that the instrumentation does not introduce unnecessary overhead, we perform a scaling study that compares instrumented ROSS simulations with their noninstrumented counterparts in order to determine the amount of perturbation when running at different simulation scales.
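The optimistic processing and rollback mechanism described above can be sketched in a few lines. This is a toy single-process illustration of state saving, straggler detection, and rollback counting under assumed simplifications; it is not the ROSS engine or its API:

```python
import heapq

class LogicalProcess:
    def __init__(self):
        self.lvt = 0.0       # local virtual time
        self.state = 0       # toy model state: counts committed events
        self.processed = []  # (timestamp, state before processing), for rollback
        self.pending = []    # heap of events undone by a rollback, to re-run
        self.rollbacks = 0   # the kind of metric such instrumentation reports

    def handle(self, ts):
        if ts < self.lvt:                 # straggler: arrived out of timestamp order
            self.rollback(ts)
        self.processed.append((ts, self.state))  # save state before processing
        self.state += 1                   # "process" the event
        self.lvt = ts

    def rollback(self, ts):
        self.rollbacks += 1
        while self.processed and self.processed[-1][0] >= ts:
            ev_ts, saved = self.processed.pop()
            self.state = saved                   # restore saved state
            heapq.heappush(self.pending, ev_ts)  # re-schedule the undone event
        self.lvt = self.processed[-1][0] if self.processed else 0.0

lp = LogicalProcess()
for ts in [1.0, 2.0, 4.0, 3.0, 5.0]:      # 3.0 is the straggler
    lp.handle(ts)
    while lp.pending:                     # re-process events undone by a rollback
        lp.handle(heapq.heappop(lp.pending))
print(lp.rollbacks, lp.state)             # 1 rollback, 5 committed events
```

Tuning, in this picture, means arranging event delivery so the straggler branch fires rarely.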

  10. JSC-1: A new lunar regolith simulant

    NASA Technical Reports Server (NTRS)

    Mckay, David S.; Carter, James L.; Boles, Walter W.; Allen, Carlton C.; Allton, Judith H.

    1993-01-01

Simulants of lunar rocks and soils with appropriate properties, although difficult to produce in some cases, will be essential to meeting the system requirements for lunar exploration. In order to address this need a new lunar regolith simulant, JSC-1, has been developed. JSC-1 is a glass-rich basaltic ash which approximates the bulk chemical composition and mineralogy of some lunar soils. It has been ground to produce a grain size distribution approximating that of lunar regolith samples. The simulant is available in large quantities (greater than 2000 lb; 907 kg). JSC-1 was produced specifically for large- and medium-scale engineering studies in support of future human activities on the Moon. Such studies include material handling, construction, excavation, and transportation. The simulant is also appropriate for research on dust control and spacesuit durability. JSC-1 can be used as a chemical or mineralogical analog to some lunar soils for resource studies such as oxygen or metal production, sintering, and radiation shielding.

  11. Supermassive Black Hole Binaries in High Performance Massively Parallel Direct N-body Simulations on Large GPU Clusters

    NASA Astrophysics Data System (ADS)

    Spurzem, R.; Berczik, P.; Zhong, S.; Nitadori, K.; Hamada, T.; Berentzen, I.; Veles, A.

    2012-07-01

Astrophysical computer simulations of dense star clusters in galactic nuclei with supermassive black holes are presented, using new cost-efficient supercomputers in China accelerated by graphics processing units (GPUs). We use large high-accuracy direct N-body simulations with a Hermite scheme and block time steps, parallelised across a large number of nodes on the large scale and across many GPU thread processors on each node on the small scale. A sustained performance of more than 350 Tflop/s is reached for a science run using 1600 Fermi C2050 GPUs simultaneously; a detailed performance model is presented, along with studies for the largest GPU clusters in China, with up to Petaflop/s performance and 7000 Fermi GPU cards. In our case study we look at two supermassive black holes with equal and unequal masses embedded in a dense stellar cluster in a galactic nucleus. The hardening processes due to interactions between black holes and stars, effects of rotation in the stellar system, and relativistic forces between the black holes are simultaneously taken into account. The simulation stops at the complete relativistic merger of the black holes.
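The computational core that the GPUs accelerate in such direct N-body codes is the all-pairs force evaluation. A minimal sketch in G = 1 units with Plummer softening `eps`; the Hermite predictor-corrector, block time steps, and the actual parallelisation are omitted, and the function name is illustrative:

```python
# O(N^2) direct-summation gravitational acceleration (G = 1 units).
def accelerations(pos, mass, eps=1e-3):
    n = len(pos)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = [pos[j][k] - pos[i][k] for k in range(3)]
            r2 = sum(d * d for d in dx) + eps * eps   # softened distance^2
            inv_r3 = r2 ** -1.5
            for k in range(3):
                acc[i][k] += mass[j] * dx[k] * inv_r3
    return acc

# Two unit masses one unit apart: each feels acceleration ~1 toward the other.
acc = accelerations([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]], [1.0, 1.0])
print(acc[0][0])   # ~ 1.0
```

Because every particle interacts with every other, the inner loop maps naturally onto GPU thread processors, which is what makes the sustained Tflop/s figures above achievable.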

  12. Performance of informative priors skeptical of large treatment effects in clinical trials: A simulation study.

    PubMed

    Pedroza, Claudia; Han, Weilu; Thanh Truong, Van Thi; Green, Charles; Tyson, Jon E

    2018-01-01

One of the main advantages of Bayesian analyses of clinical trials is their ability to formally incorporate skepticism about large treatment effects through the use of informative priors. We conducted a simulation study to assess the performance of informative normal, Student-t, and beta distributions in estimating relative risk (RR) or odds ratio (OR) for binary outcomes. Simulation scenarios varied the prior standard deviation (SD; level of skepticism of large treatment effects), outcome rate in the control group, true treatment effect, and sample size. We compared the priors with regard to bias, mean squared error (MSE), and coverage of 95% credible intervals. Simulation results show that the prior SD influenced the posterior to a greater degree than the particular distributional form of the prior. For RR, priors with a 95% interval of 0.50-2.0 performed well in terms of bias, MSE, and coverage under most scenarios. For OR, priors with a wider 95% interval of 0.23-4.35 had good performance. We recommend the use of informative priors that exclude implausibly large treatment effects in analyses of clinical trials, particularly for major outcomes such as mortality.
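As a rough illustration of how such a skeptical prior pulls an estimate toward no effect, here is a normal-approximation sketch on the log-RR scale. The trial counts are hypothetical and this is not the authors' simulation code; a prior 95% interval of 0.50-2.0 for RR translates to a normal prior on log RR with mean 0 and SD = ln(2)/1.96:

```python
import math

# Skeptical prior on log RR: 95% interval 0.50-2.0 => mean 0, sd = ln(2)/1.96.
prior_mean, prior_sd = 0.0, math.log(2.0) / 1.96

# Hypothetical trial: 18/100 events on treatment, 30/100 on control.
a, n1 = 18, 100
b, n2 = 30, 100
log_rr = math.log((a / n1) / (b / n2))
se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)   # delta-method SE of log RR

# Precision-weighted posterior (normal-normal conjugacy).
w_prior, w_lik = 1 / prior_sd**2, 1 / se**2
post_mean = (w_prior * prior_mean + w_lik * log_rr) / (w_prior + w_lik)
post_sd = math.sqrt(1 / (w_prior + w_lik))

rr = math.exp(post_mean)
ci = (math.exp(post_mean - 1.96 * post_sd), math.exp(post_mean + 1.96 * post_sd))
print(f"posterior RR {rr:.2f}, 95% CrI {ci[0]:.2f}-{ci[1]:.2f}")
```

The crude RR of 0.60 is shrunk toward 1.0, which is exactly the intended effect of excluding implausibly large treatment effects a priori.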

  13. TOWARDS ICE FORMATION CLOSURE IN MIXED-PHASE BOUNDARY LAYER CLOUDS DURING ISDAC

    NASA Astrophysics Data System (ADS)

    Avramov, A.; Ackerman, A. S.; Fridlind, A. M.; van Diedenhoven, B.; Korolev, A. V.

    2009-12-01

    Mixed-phase stratus clouds are ubiquitous in the Arctic during the winter and transition seasons. Despite their important role in various climate feedback mechanisms they are not well understood and are difficult to represent faithfully in cloud models. In particular, models of all types experience difficulties reproducing observed ice concentrations and liquid/ice water partitioning in these clouds. Previous studies have demonstrated that simulated ice concentrations and ice water content are critically dependent on ice nucleation modes and ice crystal habit assumed in simulations. In this study we use large-eddy simulations with size-resolved microphysics to determine whether uncertainties in ice nucleus concentrations, ice nucleation mechanisms, ice crystal habits and large-scale forcing are sufficient to account for the difference between simulated and observed quantities. We present results of simulations of two case studies based on observations taken during the recent Indirect and Semi-Direct Aerosol Campaign (ISDAC) on April 8 and 26, 2008. The model simulations are evaluated through extensive comparison with in-situ observations and ground-based remote sensing measurements.

  14. Simulation training for breast and pelvic physical examination: a systematic review and meta-analysis.

    PubMed

    Dilaveri, C A; Szostek, J H; Wang, A T; Cook, D A

    2013-09-01

Breast and pelvic examinations are challenging intimate examinations. Technology-based simulation may help to overcome these challenges. To synthesise the evidence regarding the effectiveness of technology-based simulation training for breast and pelvic examination, our systematic search included MEDLINE, EMBASE, CINAHL, PsycINFO, Scopus, and key journals and review articles; the date of the last search was January 2012. We included original research studies evaluating technology-enhanced simulation of breast and pelvic examination to teach learners, compared with no intervention or with other educational activities. The reviewers evaluated study eligibility and abstracted data on methodological quality, learners, instructional design, and outcomes, and used random-effects models to pool weighted effect sizes. In total, 11 272 articles were identified for screening, and 22 studies were eligible, enrolling 2036 trainees. In eight studies comparing simulation for breast examination training with no intervention, simulation was associated with a significant improvement in skill, with a pooled effect size of 0.86 (95% CI 0.52-1.19; P < 0.001). Four studies comparing simulation training for pelvic examination with no intervention had a large and significant benefit, with a pooled effect size of 1.18 (95% CI 0.40-1.96; P = 0.003). Among breast examination simulation studies, dynamic models providing feedback were associated with improved outcomes. In pelvic examination simulation studies, the addition of a standardised patient to the simulation model and the use of an electronic model with enhanced feedback improved outcomes. In comparison with no intervention, breast and pelvic examination simulation training is associated with moderate to large effects for skills outcomes. Enhanced feedback appears to improve learning. © 2013 RCOG.
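The pooling step referred to above ("random-effects models to pool weighted effect sizes") is commonly done with the DerSimonian-Laird estimator. A sketch with invented per-study effects and variances (not the data from this review):

```python
import math

# Per-study standardized effect sizes and their variances (illustrative values).
effects = [1.2, 0.3, 1.0, 0.5]
variances = [0.04, 0.09, 0.06, 0.05]

w = [1 / v for v in variances]                             # fixed-effect weights
fe = sum(wi * e for wi, e in zip(w, effects)) / sum(w)     # fixed-effect mean
q = sum(wi * (e - fe) ** 2 for wi, e in zip(w, effects))   # Cochran's Q
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(effects) - 1)) / c)              # between-study variance

w_re = [1 / (v + tau2) for v in variances]                 # random-effects weights
pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))
print(f"pooled effect {pooled:.2f} "
      f"(95% CI {pooled - 1.96*se:.2f} to {pooled + 1.96*se:.2f})")
```

When the studies are heterogeneous (tau2 > 0), the random-effects weights are more equal across studies and the confidence interval widens relative to a fixed-effect pool.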

  15. Study report on interfacing major physiological subsystem models: An approach for developing a whole-body algorithm

    NASA Technical Reports Server (NTRS)

    Fitzjerrell, D. G.; Grounds, D. J.; Leonard, J. I.

    1975-01-01

    Using a whole body algorithm simulation model, a wide variety and large number of stresses as well as different stress levels were simulated including environmental disturbances, metabolic changes, and special experimental situations. Simulation of short term stresses resulted in simultaneous and integrated responses from the cardiovascular, respiratory, and thermoregulatory subsystems and the accuracy of a large number of responding variables was verified. The capability of simulating significantly longer responses was demonstrated by validating a four week bed rest study. In this case, the long term subsystem model was found to reproduce many experimentally observed changes in circulatory dynamics, body fluid-electrolyte regulation, and renal function. The value of systems analysis and the selected design approach for developing a whole body algorithm was demonstrated.

  16. Review of Dynamic Modeling and Simulation of Large Scale Belt Conveyor System

    NASA Astrophysics Data System (ADS)

    He, Qing; Li, Hong

    Belt conveyor is one of the most important devices to transport bulk-solid material for long distance. Dynamic analysis is the key to decide whether the design is rational in technique, safe and reliable in running, feasible in economy. It is very important to study dynamic properties, improve efficiency and productivity, guarantee conveyor safe, reliable and stable running. The dynamic researches and applications of large scale belt conveyor are discussed. The main research topics, the state-of-the-art of dynamic researches on belt conveyor are analyzed. The main future works focus on dynamic analysis, modeling and simulation of main components and whole system, nonlinear modeling, simulation and vibration analysis of large scale conveyor system.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaurov, Alexander A., E-mail: kaurov@uchicago.edu

The methods for studying the epoch of cosmic reionization vary from full radiative transfer simulations to purely analytical models. While numerical approaches are computationally expensive and are not suitable for generating many mock catalogs, analytical methods are based on assumptions and approximations. We explore the interconnection between both methods. First, we ask how the analytical framework of excursion set formalism can be used for statistical analysis of numerical simulations and visual representation of the morphology of ionization fronts. Second, we explore the methods of training the analytical model on a given numerical simulation. We present a new code which emerged from this study. Its main application is to match the analytical model with a numerical simulation. Then, it allows one to generate mock reionization catalogs with volumes exceeding the original simulation quickly and computationally inexpensively, meanwhile reproducing large-scale statistical properties. These mock catalogs are particularly useful for cosmic microwave background polarization and 21 cm experiments, where large volumes are required to simulate the observed signal.

  18. Evaluation of Kirkwood-Buff integrals via finite size scaling: a large scale molecular dynamics study

    NASA Astrophysics Data System (ADS)

    Dednam, W.; Botha, A. E.

    2015-01-01

Solvation of bio-molecules in water is severely affected by the presence of co-solvent within the hydration shell of the solute structure. Furthermore, since solute molecules can range from small molecules, such as methane, to very large protein structures, it is imperative to understand the detailed structure-function relationship on the microscopic level. For example, it is useful to know the conformational transitions that occur in protein structures. Although such an understanding can be obtained through large-scale molecular dynamics simulations, it is often the case that such simulations would require excessively large simulation times. In this context, Kirkwood-Buff theory, which connects the microscopic pair-wise molecular distributions to global thermodynamic properties, together with the recently developed technique called finite size scaling, may provide a better method to reduce system sizes, and hence also the computational times. In this paper, we present molecular dynamics trial simulations of biologically relevant low-concentration solvents, solvated by aqueous co-solvent solutions. In particular, we compare two different methods of calculating the relevant Kirkwood-Buff integrals. The first (traditional) method computes running integrals over the radial distribution functions, which must be obtained from large system-size NVT or NpT simulations. The second, newer method employs finite size scaling to obtain the Kirkwood-Buff integrals directly by counting the particle number fluctuations in small, open sub-volumes embedded within a larger reservoir that can be well approximated by a much smaller simulation cell.
In agreement with previous studies, which made a similar comparison for aqueous co-solvent solutions, without the additional solvent, we conclude that the finite size scaling method is also applicable to the present case, since it can produce computationally more efficient results which are equivalent to the more costly radial distribution function method.
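The fluctuation route described above can be illustrated in miniature: for a single species, G = V(⟨N²⟩ − ⟨N⟩² − ⟨N⟩)/⟨N⟩², computed from particle counts in an open sub-volume of volume V. The counts below are synthetic Poisson draws (ideal-gas statistics, for which G ≈ 0), not a real MD trajectory:

```python
import math
import random

random.seed(0)

def kb_integral(counts, volume):
    # Single-species Kirkwood-Buff integral from number fluctuations
    # in an open sub-volume: G = V * (<N^2> - <N>^2 - <N>) / <N>^2.
    n = len(counts)
    mean = sum(counts) / n
    mean_sq = sum(c * c for c in counts) / n
    return volume * (mean_sq - mean * mean - mean) / mean**2

def poisson(lam):
    # Knuth's algorithm; adequate for a demo trajectory.
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= l:
            return k - 1

# Synthetic "trajectory" of particle counts in the sub-volume.
counts = [poisson(50.0) for _ in range(5000)]
g = kb_integral(counts, volume=1.0)
print(g)   # near 0 for ideal-gas statistics (variance equals the mean)
```

In a real application the counts would come from sub-volumes embedded in a larger simulation cell, and the sub-volume size would be scaled to extrapolate to the thermodynamic limit.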

  19. Large-Scale Modeling of Epileptic Seizures: Scaling Properties of Two Parallel Neuronal Network Simulation Algorithms

    DOE PAGES

    Pesce, Lorenzo L.; Lee, Hyong C.; Hereld, Mark; ...

    2013-01-01

Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.

  20. A Single Column Model Ensemble Approach Applied to the TWP-ICE Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davies, Laura; Jakob, Christian; Cheung, K.

    2013-06-27

Single column models (SCM) are useful testbeds for investigating the parameterisation schemes of numerical weather prediction and climate models. The usefulness of SCM simulations is limited, however, by the accuracy of the best-estimate large-scale data prescribed. One method to address this uncertainty is to perform ensemble simulations of the SCM. This study first derives an ensemble of large-scale data for the Tropical Warm Pool International Cloud Experiment (TWP-ICE) based on an estimate of a possible source of error in the best-estimate product. These data are then used to carry out simulations with 11 SCM and 2 cloud-resolving models (CRM). Best-estimate simulations are also performed. All models show that moisture-related variables are close to observations and there are limited differences between the best-estimate and ensemble mean values. The models, however, show different sensitivities to changes in the forcing, particularly when weakly forced. The ensemble simulations highlight important differences in the moisture budget between the SCM and CRM. Systematic differences are also apparent in the ensemble mean vertical structure of cloud variables. The ensemble is further used to investigate relations between cloud variables and precipitation, identifying large differences between CRM and SCM. This study highlights that additional information can be gained by performing ensemble simulations, enhancing the information derived from models using the more traditional single best-estimate simulation.

  1. Uncertainties of Large-Scale Forcing Caused by Surface Turbulence Flux Measurements and the Impacts on Cloud Simulations at the ARM SGP Site

    NASA Astrophysics Data System (ADS)

    Tang, S.; Xie, S.; Tang, Q.; Zhang, Y.

    2017-12-01

    Two types of instruments, the eddy correlation flux measurement system (ECOR) and the energy balance Bowen ratio system (EBBR), are used at the Atmospheric Radiation Measurement (ARM) program Southern Great Plains (SGP) site to measure surface latent and sensible fluxes. ECOR and EBBR typically sample different land surface types, and the domain-mean surface fluxes derived from ECOR and EBBR are not always consistent. The uncertainties of the surface fluxes will have impacts on the derived large-scale forcing data and further affect the simulations of single-column models (SCM), cloud-resolving models (CRM) and large-eddy simulation models (LES), especially for the shallow-cumulus clouds which are mainly driven by surface forcing. This study aims to quantify the uncertainties of the large-scale forcing caused by surface turbulence flux measurements and investigate the impacts on cloud simulations using long-term observations from the ARM SGP site.

  2. Automatic Selection of Order Parameters in the Analysis of Large Scale Molecular Dynamics Simulations.

    PubMed

    Sultan, Mohammad M; Kiss, Gert; Shukla, Diwakar; Pande, Vijay S

    2014-12-09

    Given the large number of crystal structures and NMR ensembles that have been solved to date, classical molecular dynamics (MD) simulations have become powerful tools in the atomistic study of the kinetics and thermodynamics of biomolecular systems on ever increasing time scales. By virtue of the high-dimensional conformational state space that is explored, the interpretation of large-scale simulations faces difficulties not unlike those in the big data community. We address this challenge by introducing a method called clustering based feature selection (CB-FS) that employs a posterior analysis approach. It combines supervised machine learning (SML) and feature selection with Markov state models to automatically identify the relevant degrees of freedom that separate conformational states. We highlight the utility of the method in the evaluation of large-scale simulations and show that it can be used for the rapid and automated identification of relevant order parameters involved in the functional transitions of two exemplary cell-signaling proteins central to human disease states.
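The core idea, scoring candidate order parameters by how well they separate already-identified conformational states, can be sketched with a simple Fisher-style separation score. The two "states" and features below are synthetic stand-ins for MD order parameters, not the CB-FS implementation:

```python
import random

random.seed(1)

# Two pre-clustered "conformational states", each frame described by two
# candidate order parameters. Feature 0 (e.g. a key dihedral) separates the
# states; feature 1 is uninformative noise.
state_a = [[random.gauss(-60, 5), random.gauss(0, 10)] for _ in range(500)]
state_b = [[random.gauss(60, 5), random.gauss(0, 10)] for _ in range(500)]

def fisher_score(idx):
    # Between-state separation relative to within-state spread for one feature.
    xa = [f[idx] for f in state_a]
    xb = [f[idx] for f in state_b]
    ma, mb = sum(xa) / len(xa), sum(xb) / len(xb)
    va = sum((x - ma) ** 2 for x in xa) / len(xa)
    vb = sum((x - mb) ** 2 for x in xb) / len(xb)
    return (ma - mb) ** 2 / (va + vb)

scores = [fisher_score(i) for i in range(2)]
best = max(range(2), key=lambda i: scores[i])
print(f"most discriminative feature: {best}")   # feature 0
```

The published method couples this kind of supervised scoring to Markov state model cluster labels, so the selection is automatic rather than hand-picked.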

  3. Studying Turbulence Using Numerical Simulation Databases - X Proceedings of the 2004 Summer Program

    NASA Technical Reports Server (NTRS)

    Moin, Parviz; Mansour, Nagi N.

    2004-01-01

    This Proceedings volume contains 32 papers that span a wide range of topics that reflect the ubiquity of turbulence. The papers have been divided into six groups: 1) Solar Simulations; 2) Magnetohydrodynamics (MHD); 3) Large Eddy Simulation (LES) and Numerical Simulations; 4) Reynolds Averaged Navier Stokes (RANS) Modeling and Simulations; 5) Stability and Acoustics; 6) Combustion and Multi-Phase Flow.

  4. Annual Research Briefs

    NASA Technical Reports Server (NTRS)

    Spinks, Debra (Compiler)

    1997-01-01

    This report contains the 1997 annual progress reports of the research fellows and students supported by the Center for Turbulence Research (CTR). Titles include: Invariant modeling in large-eddy simulation of turbulence; Validation of large-eddy simulation in a plain asymmetric diffuser; Progress in large-eddy simulation of trailing-edge turbulence and aeronautics; Resolution requirements in large-eddy simulations of shear flows; A general theory of discrete filtering for LES in complex geometry; On the use of discrete filters for large eddy simulation; Wall models in large eddy simulation of separated flow; Perspectives for ensemble average LES; Anisotropic grid-based formulas for subgrid-scale models; Some modeling requirements for wall models in large eddy simulation; Numerical simulation of 3D turbulent boundary layers using the V2F model; Accurate modeling of impinging jet heat transfer; Application of turbulence models to high-lift airfoils; Advances in structure-based turbulence modeling; Incorporating realistic chemistry into direct numerical simulations of turbulent non-premixed combustion; Effects of small-scale structure on turbulent mixing; Turbulent premixed combustion in the laminar flamelet and the thin reaction zone regime; Large eddy simulation of combustion instabilities in turbulent premixed burners; On the generation of vorticity at a free-surface; Active control of turbulent channel flow; A generalized framework for robust control in fluid mechanics; Combined immersed-boundary/B-spline methods for simulations of flow in complex geometries; and DNS of shock boundary-layer interaction - preliminary results for compression ramp flow.

  5. Simulation of Spiral Waves and Point Sources in Atrial Fibrillation with Application to Rotor Localization

    PubMed Central

    Ganesan, Prasanth; Shillieto, Kristina E.; Ghoraani, Behnaz

    2018-01-01

Cardiac simulations play an important role in studies involving understanding and investigating the mechanisms of cardiac arrhythmias. Today, studies of arrhythmogenesis and maintenance are largely being performed by creating simulations of a particular arrhythmia with high accuracy comparable to the results of clinical experiments. Atrial fibrillation (AF), the most common arrhythmia in the United States and many other parts of the world, is one of the major fields where simulation and modeling are widely used. AF simulations not only assist in understanding its mechanisms but also help to develop, evaluate and improve the computer algorithms used in electrophysiology (EP) systems for ablation therapies. In this paper, we begin with a brief overview of some common techniques used to simulate two major AF mechanisms, spiral waves (or rotors) and point (or focal) sources. We particularly focus on 2D simulations using Nygren et al.'s mathematical model of the human atrial cell. Then, we elucidate an application of the developed AF simulation to an algorithm designed for localizing AF rotors to improve current AF ablation therapies. Our simulation methods and results, along with the other discussions presented in this paper, are aimed to provide engineers and professionals with a working knowledge of application-specific simulations of spirals and foci. PMID:29629398

  6. Intercomparison of terrestrial carbon fluxes and carbon use efficiency simulated by CMIP5 Earth System Models

    NASA Astrophysics Data System (ADS)

    Kim, Dongmin; Lee, Myong-In; Jeong, Su-Jong; Im, Jungho; Cha, Dong Hyun; Lee, Sanggyun

    2017-12-01

    This study compares historical simulations of the terrestrial carbon cycle produced by 10 Earth System Models (ESMs) that participated in the fifth phase of the Coupled Model Intercomparison Project (CMIP5). Using MODIS satellite estimates, this study validates the simulation of gross primary production (GPP), net primary production (NPP), and carbon use efficiency (CUE), which depend on plant function types (PFTs). The models show noticeable deficiencies compared to the MODIS data in the simulation of the spatial patterns of GPP and NPP and large differences among the simulations, although the multi-model ensemble (MME) mean provides a realistic global mean value and spatial distributions. The larger model spreads in GPP and NPP compared to those of surface temperature and precipitation suggest that the differences among simulations in terms of the terrestrial carbon cycle are largely due to uncertainties in the parameterization of terrestrial carbon fluxes by vegetation. The models also exhibit large spatial differences in their simulated CUE values and at locations where the dominant PFT changes, primarily due to differences in the parameterizations. While the MME-simulated CUE values show a strong dependence on surface temperatures, the observed CUE values from MODIS show greater complexity, as well as non-linear sensitivity. This leads to the overall underestimation of CUE using most of the PFTs incorporated into current ESMs. The results of this comparison suggest that more careful and extensive validation is needed to improve the terrestrial carbon cycle in terms of ecosystem-level processes.
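Carbon use efficiency is simply the fraction of gross primary production retained as net primary production, CUE = NPP/GPP. A toy per-PFT computation with invented flux values, just to make the definition concrete:

```python
# Illustrative annual carbon fluxes per plant functional type (gC m-2 yr-1);
# the numbers are invented, not MODIS or CMIP5 values.
gpp = {"evergreen_broadleaf": 2800.0, "grassland": 900.0}
npp = {"evergreen_broadleaf": 1150.0, "grassland": 430.0}

# CUE = NPP / GPP for each PFT.
cue = {pft: npp[pft] / gpp[pft] for pft in gpp}
for pft, v in cue.items():
    print(f"{pft}: CUE = {v:.2f}")
```

Because CUE is a ratio of two simulated fluxes, biases in either GPP or NPP parameterizations propagate directly into the per-PFT CUE differences the comparison describes.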

  7. Effects of Aircraft Wake Dynamics on Measured and Simulated NO(x) and HO(x) Wake Chemistry. Appendix B

    NASA Technical Reports Server (NTRS)

    Lewellen, D. C.; Lewellen, W. S.

    2001-01-01

    High-resolution numerical large-eddy simulations of the near wake of a B757 including simplified NOx and HOx chemistry were performed to explore the effects of dynamics on chemistry in wakes of ages from a few seconds to several minutes. Dilution plays an important basic role in the NOx-O3 chemistry in the wake, while a more interesting interaction between the chemistry and dynamics occurs for the HOx species. These simulation results are compared with published measurements of OH and HO2 within a B757 wake under cruise conditions in the upper troposphere taken during the Subsonic Aircraft Contrail and Cloud Effects Special Study (SUCCESS) mission in May 1996. The simulation provides a much finer grained representation of the chemistry and dynamics of the early wake than is possible from the 1 s data samples taken in situ. The comparison suggests that the previously reported discrepancy of up to a factor of 20 - 50 between the SUCCESS measurements of the [HO2]/[OH] ratio and that predicted by simplified theoretical computations is due to the combined effects of large mixing rates around the wake plume edges and averaging over volumes containing large species fluctuations. The results demonstrate the feasibility of using three-dimensional unsteady large-eddy simulations with coupled chemistry to study such phenomena.

  8. Adaptive smart simulator for characterization and MPPT construction of PV array

    NASA Astrophysics Data System (ADS)

    Ouada, Mehdi; Meridjet, Mohamed Salah; Dib, Djalel

    2016-07-01

Partial shading conditions are among the most important problems in large photovoltaic arrays. Much of the literature addresses the modeling, control and optimization of photovoltaic conversion of solar energy under partial shading conditions. The aim of this study is to build a software simulator, similar to a hardware simulator, and to produce a shading pattern of the proposed photovoltaic array in order to use the delivered information to obtain an optimal configuration of the PV array and construct an MPPT algorithm. Graphical user interfaces (Matlab GUI) are built using a developed script; this tool is simple, easy to use, and rapidly responsive. The simulator supports large array simulations that can be interfaced with MPPT and power electronic converters.

  9. Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Givi, Peyman; Madnia, C. K.; Steinberger, C. J.; Tsai, A.

    1991-01-01

This research is involved with the implementation of advanced computational schemes based on large eddy simulations (LES) and direct numerical simulations (DNS) to study the phenomenon of mixing and its coupling with chemical reactions in compressible turbulent flows. In the efforts related to LES, a research program was initiated to extend the present capabilities of this method for the treatment of chemically reacting flows, whereas in the DNS efforts, focus was on detailed investigations of the effects of compressibility, heat release, and nonequilibrium kinetics modeling in high speed reacting flows. The efforts to date were primarily focused on simulations of simple flows, namely, homogeneous compressible flows and temporally developing high speed mixing layers. A summary of the accomplishments is provided.

  10. FLAME: A platform for high performance computing of complex systems, applied for three case studies

    DOE PAGES

    Kiran, Mariam; Bicak, Mesude; Maleki-Dizaji, Saeedeh; ...

    2011-01-01

FLAME allows complex models to be automatically parallelised on High Performance Computing (HPC) grids, enabling large numbers of agents to be simulated over short periods of time. Modellers are typically hindered by the complexity of porting models to parallel platforms and by the time taken to run large simulations on a single machine; FLAME overcomes both. Three case studies from different disciplines were modelled using FLAME and are presented along with their performance results on a grid.

  11. Two-bead polarizable water models combined with a two-bead multipole force field (TMFF) for coarse-grained simulation of proteins.

    PubMed

    Li, Min; Zhang, John Z H

    2017-03-08

    The development of polarizable water models at coarse-grained (CG) levels is of much importance to CG molecular dynamics simulations of large biomolecular systems. In this work, we combined the newly developed two-bead multipole force field (TMFF) for proteins with the two-bead polarizable water models to carry out CG molecular dynamics simulations for benchmark proteins. In our simulations, two different two-bead polarizable water models are employed, the RTPW model representing five water molecules by Riniker et al. and the LTPW model representing four water molecules. The LTPW model is developed in this study based on the Martini three-bead polarizable water model. Our simulation results showed that the combination of TMFF with the LTPW model significantly stabilizes the protein's native structure in CG simulations, while the use of the RTPW model gives better agreement with all-atom simulations in predicting the residue-level fluctuation dynamics. Overall, the TMFF coupled with the two-bead polarizable water models enables one to perform an efficient and reliable CG dynamics study of the structural and functional properties of large biomolecules.

  12. Radiation from Large Gas Volumes and Heat Exchange in Steam Boiler Furnaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makarov, A. N., E-mail: tgtu-kafedra-ese@mail.ru

    2015-09-15

    Radiation from large cylindrical gas volumes is studied as a means of simulating the flare in steam boiler furnaces. Calculations of heat exchange in a furnace by the zonal method and by simulation of the flare with cylindrical gas volumes are described. The latter method is more accurate and yields more reliable information on heat transfer processes taking place in furnaces.

  13. Large scale simulation of liquid water transport in a gas diffusion layer of polymer electrolyte membrane fuel cells using the lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Sakaida, Satoshi; Tabe, Yutaka; Chikahisa, Takemi

    2017-09-01

A method for large-scale simulation with the lattice Boltzmann method (LBM) is proposed for liquid water movement in a gas diffusion layer (GDL) of polymer electrolyte membrane fuel cells. The LBM is able to analyze two-phase flows in complex structures; however, the simulation domain is limited due to heavy computational loads. This study investigates a variety of means to reduce computational loads and increase the simulation area. One is applying an LBM that treats the two phases as having the same density, together with keeping numerical stability with large time steps. The applicability of this approach is confirmed by comparing the results with rigorous simulations using the actual density. The second is establishing the maximum limit of the Capillary number that maintains flow patterns similar to the precise simulation; this is attempted because the computational load is inversely proportional to the Capillary number. The results show that the Capillary number can be increased to 3.0 × 10^-3, where actual operation corresponds to Ca = 10^-5 to 10^-8. The limit is also investigated experimentally using an enlarged-scale model satisfying similarity conditions for the flow. Finally, a demonstration is made of the effects of pore uniformity in the GDL as an example of a large-scale simulation covering a channel.
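The claimed cost saving follows from the definition of the Capillary number, Ca = μu/σ (viscous over surface-tension forces), and the stated inverse scaling of computational load with Ca. A back-of-envelope sketch with water-like properties; the operating velocity is chosen for illustration:

```python
# Capillary number Ca = mu * u / sigma, with water-like properties.
mu = 1.0e-3        # dynamic viscosity, Pa s
sigma = 0.072      # surface tension, N/m

def capillary_number(u):
    return mu * u / sigma

u_real = 7.2e-4                  # illustrative velocity giving Ca ~ 1e-5
ca_real = capillary_number(u_real)
ca_sim = 3.0e-3                  # the upper limit found in the study
speedup = ca_sim / ca_real       # computational load scales ~ 1/Ca
print(f"Ca_real = {ca_real:.1e}, speedup ~ {speedup:.0f}x")
```

Running at the Ca limit rather than the physical operating point thus cuts the cost by a few orders of magnitude while, per the study, preserving the flow pattern.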

  14. A demonstration of motion base design alternatives for the National Advanced Driving Simulator

    NASA Technical Reports Server (NTRS)

    Mccauley, Michael E.; Sharkey, Thomas J.; Sinacori, John B.; Laforce, Soren; Miller, James C.; Cook, Anthony

    1992-01-01

A demonstration of the capability of NASA's Vertical Motion Simulator (VMS) to simulate two alternative motion base designs for the National Advanced Driving Simulator (NADS) is reported. The VMS is located at Ames Research Center (ARC). The motion base conditions used in this demonstration were as follows: (1) a large translational motion base; and (2) a motion base design with limited translational capability, representative of a typical synergistic motion platform. These alternatives were selected to test the prediction that large-amplitude translational motion would result in a lower incidence or severity of simulator-induced sickness (SIS) than would a limited translational motion base. A total of 10 drivers performed two tasks, slaloms and quick stops, using each of the motion bases. Physiological, objective, and subjective measures were collected. No reliable differences in SIS between the motion base conditions were found in this demonstration. However, in light of the cost considerations and engineering challenges associated with implementing a large translational motion base, performance of a formal study is recommended.

  15. Impact of a large density gradient on linear and nonlinear edge-localized mode simulations

    DOE PAGES

    Xi, P. W.; Xu, X. Q.; Xia, T. Y.; ...

    2013-09-27

Here, the impact of a large density gradient on edge-localized modes (ELMs) is studied linearly and nonlinearly by employing both two-fluid and gyro-fluid simulations. In two-fluid simulations, the ion diamagnetic stabilization on high-n modes disappears when the large density gradient is taken into account. But gyro-fluid simulations show that the finite Larmor radius (FLR) effect can effectively stabilize high-n modes, so the ion diamagnetic effect alone is not sufficient to represent the FLR stabilizing effect. We further demonstrate that additional gyroviscous terms must be kept in the two-fluid model to recover the linear results from the gyro-fluid model. Nonlinear simulations show that the density variation significantly weakens the E × B shearing at the top of the pedestal and thus leads to more energy loss during ELMs. The turbulence spectrum after an ELM crash is measured and follows the power law P(k_z) ∝ k_z^(-3.3).

  16. Investigating wind turbine impacts on near-wake flow using profiling Lidar data and large-eddy simulations with an actuator disk model

    DOE PAGES

    Mirocha, Jeffrey D.; Rajewski, Daniel A.; Marjanovic, Nikola; ...

    2015-08-27

In this study, wind turbine impacts on the atmospheric flow are investigated using data from the Crop Wind Energy Experiment (CWEX-11) and large-eddy simulations (LES) utilizing a generalized actuator disk (GAD) wind turbine model. CWEX-11 employed velocity-azimuth display (VAD) data from two Doppler lidar systems to sample vertical profiles of flow parameters across the rotor depth, both upstream and in the wake of an operating 1.5 MW wind turbine. Lidar and surface observations obtained during four days of July 2011 are analyzed to characterize the turbine impacts on wind speed and flow variability, and to examine the sensitivity of these changes to atmospheric stability. Significant velocity deficits (VD) are observed at the downstream location during both convective and stable portions of four diurnal cycles, with large, sustained deficits occurring during stable conditions. Variances of the streamwise velocity component, σu, likewise show large increases downstream during both stable and unstable conditions, with stable conditions supporting sustained small increases of σu, while convective conditions featured both larger magnitudes and increased variability, due to the large coherent structures in the background flow. Two representative case studies, one stable and one convective, are simulated using LES with a GAD model at 6 m resolution to evaluate the compatibility of the simulation framework with validation using vertically profiling lidar data in the near-wake region. Virtual lidars were employed to sample the simulated flow field in a manner consistent with the VAD technique. Simulations reasonably reproduced aggregated wake VD characteristics, albeit with smaller magnitudes than observed, while σu values in the wake are more significantly underestimated. The results illuminate the limitations of using a GAD in combination with coarse model resolution in the simulation of near-wake physics, and validation thereof using VAD data.

  17. A sequential coalescent algorithm for chromosomal inversions

    PubMed Central

    Peischl, S; Koch, E; Guerrero, R F; Kirkpatrick, M

    2013-01-01

    Chromosomal inversions are common in natural populations and are believed to be involved in many important evolutionary phenomena, including speciation, the evolution of sex chromosomes and local adaptation. While recent advances in sequencing and genotyping methods are leading to rapidly increasing amounts of genome-wide sequence data that reveal interesting patterns of genetic variation within inverted regions, efficient simulation methods to study these patterns are largely missing. In this work, we extend the sequential Markovian coalescent, an approximation to the coalescent with recombination, to include the effects of polymorphic inversions on patterns of recombination. Results show that our algorithm is fast, memory-efficient and accurate, making it feasible to simulate large inversions in large populations for the first time. The SMC algorithm enables studies of patterns of genetic variation (for example, linkage disequilibria) and tests of hypotheses (using simulation-based approaches) that were previously intractable. PMID:23632894

  18. Physical properties of the HIV-1 capsid from all-atom molecular dynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perilla, Juan R.; Schulten, Klaus

Human immunodeficiency virus type 1 (HIV-1) infection is highly dependent on its capsid. The capsid is a large container, made of roughly 1,300 proteins with altogether 4 million atoms. Though the capsid proteins are all identical, they nevertheless arrange themselves into a largely asymmetric structure made of hexamers and pentamers. The large number of degrees of freedom and lack of symmetry pose a challenge to studying the chemical details of the HIV capsid. Simulations of over 64 million atoms for over 1 μs allow us to conduct a comprehensive study of the chemical-physical properties of an empty HIV-1 capsid, including its electrostatics, vibrational and acoustic properties, and the effects of solvent (ions and water) on the capsid. Furthermore, the simulations reveal critical details about the capsid with implications for biological function.

  19. Physical properties of the HIV-1 capsid from all-atom molecular dynamics simulations

    DOE PAGES

    Perilla, Juan R.; Schulten, Klaus

    2017-07-19

Human immunodeficiency virus type 1 (HIV-1) infection is highly dependent on its capsid. The capsid is a large container, made of roughly 1,300 proteins with altogether 4 million atoms. Though the capsid proteins are all identical, they nevertheless arrange themselves into a largely asymmetric structure made of hexamers and pentamers. The large number of degrees of freedom and lack of symmetry pose a challenge to studying the chemical details of the HIV capsid. Simulations of over 64 million atoms for over 1 μs allow us to conduct a comprehensive study of the chemical-physical properties of an empty HIV-1 capsid, including its electrostatics, vibrational and acoustic properties, and the effects of solvent (ions and water) on the capsid. Furthermore, the simulations reveal critical details about the capsid with implications for biological function.

  20. Field-scale simulation of chemical flooding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saad, N.

    1989-01-01

A three-dimensional compositional chemical flooding simulator (UTCHEM) has been improved. The new mathematical formulation, boundary conditions, and a description of the physicochemical models of the simulator are presented. This improved simulator has been used for the study of the low-tension pilot project at the Big Muddy field near Casper, Wyoming. Both the tracer injection conducted prior to the injection of the chemical slug and the chemical flooding stages of the pilot project have been analyzed. Not only the oil recovery but also the tracer, polymer, alcohol, and chloride histories have been successfully matched with field results. Simulation results indicate that, for this fresh-water reservoir, the salinity gradient during the preflush and the resulting calcium pickup by the surfactant slug played a major role in the success of the project. In addition, analysis of the effects of crossflow on the performance of the pilot project indicates that, for the well spacing of the pilot, crossflow does not play as important a role as it might for a large-scale project. To improve the numerical efficiency of the simulator, a third-order convective differencing scheme has been applied. This method can be used with non-uniform meshes and is therefore suited for simulation studies of large-scale, multiwell, heterogeneous reservoirs. Comparison of the results with one- and two-dimensional analytical solutions shows that this method is effective in eliminating numerical dispersion using relatively large grid blocks. Results of one-, two-, and three-dimensional miscible water/tracer flow, water flooding, polymer flooding, and micellar-polymer flooding test problems, and results of grid orientation studies, are presented.
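The accuracy gain from a third-order convective scheme over first-order upwinding can be sketched with a one-dimensional derivative stencil. The upwind-biased stencil below is a generic textbook example chosen as an assumption; the abstract does not specify which third-order scheme UTCHEM uses.

```python
import math

def upwind_first(f, x, h):
    """First-order upwind derivative: O(h) error, strongly diffusive."""
    return (f(x) - f(x - h)) / h

def upwind_third(f, x, h):
    """Third-order upwind-biased derivative: O(h^3) error, far less
    numerical smearing on comparable grids."""
    return (f(x - 2 * h) - 6 * f(x - h) + 3 * f(x) + 2 * f(x + h)) / (6 * h)

# Compare truncation errors for f = sin at x = 1 on a coarse grid.
h = 0.01
exact = math.cos(1.0)
err1 = abs(upwind_first(math.sin, 1.0, h) - exact)
err3 = abs(upwind_third(math.sin, 1.0, h) - exact)
# err3 is several orders of magnitude smaller than err1 at the same h,
# which is why the higher-order scheme tolerates relatively large grid blocks.
```

The same trade-off drives the result quoted above: less numerical dispersion per grid block means coarser meshes suffice for large multiwell models.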

  1. Investigating the Impact of Surface Heterogeneity on the Convective Boundary Layer Over Urban Areas Through Coupled Large-Eddy Simulation and Remote Sensing

    NASA Technical Reports Server (NTRS)

    Dominguez, Anthony; Kleissl, Jan P.; Luvall, Jeffrey C.

    2011-01-01

    Large-eddy Simulation (LES) was used to study convective boundary layer (CBL) flow through suburban regions with both large and small scale heterogeneities in surface temperature. Constant remotely sensed surface temperatures were applied at the surface boundary at resolutions of 10 m, 90 m, 200 m, and 1 km. Increasing the surface resolution from 1 km to 200 m had the most significant impact on the mean and turbulent flow characteristics as the larger scale heterogeneities became resolved. While previous studies concluded that scales of heterogeneity much smaller than the CBL inversion height have little impact on the CBL characteristics, we found that further increasing the surface resolution (resolving smaller scale heterogeneities) results in an increase in mean surface heat flux, thermal blending height, and potential temperature profile. The results of this study will help to better inform sub-grid parameterization for meso-scale meteorological models. The simulation tool developed through this study (combining LES and high resolution remotely sensed surface conditions) is a significant step towards future studies on the micro-scale meteorology in urban areas.

  2. Higher-level simulations of turbulent flows

    NASA Technical Reports Server (NTRS)

    Ferziger, J. H.

    1981-01-01

    The fundamentals of large eddy simulation are considered and the approaches to it are compared. Subgrid scale models and the development of models for the Reynolds-averaged equations are discussed as well as the use of full simulation in testing these models. Numerical methods used in simulating large eddies, the simulation of homogeneous flows, and results from full and large scale eddy simulations of such flows are examined. Free shear flows are considered with emphasis on the mixing layer and wake simulation. Wall-bounded flow (channel flow) and recent work on the boundary layer are also discussed. Applications of large eddy simulation and full simulation in meteorological and environmental contexts are included along with a look at the direction in which work is proceeding and what can be expected from higher-level simulation in the future.

  3. WarpIV: In situ visualization and analysis of ion accelerator simulations

    DOE PAGES

    Rubel, Oliver; Loring, Burlen; Vay, Jean -Luc; ...

    2016-05-09

The generation of short pulses of ion beams through the interaction of an intense laser with a plasma sheath offers the possibility of compact and cheaper ion sources for many applications, from fast ignition and radiography of dense targets to hadron therapy and injection into conventional accelerators. To enable the efficient analysis of large-scale, high-fidelity particle accelerator simulations using the Warp simulation suite, the authors introduce the Warp In situ Visualization Toolkit (WarpIV). WarpIV integrates state-of-the-art in situ visualization and analysis using VisIt with Warp, supports management and control of complex in situ visualization and analysis workflows, and implements integrated analytics to facilitate query- and feature-based data analytics and efficient large-scale data analysis. WarpIV enables for the first time distributed, parallel, in situ visualization of the full simulation data using high-performance compute resources as the data is being generated by Warp. The authors describe the application of WarpIV to study and compare large 2D and 3D ion accelerator simulations, demonstrating significant differences in the acceleration process in 2D and 3D simulations. WarpIV is available to the public via https://bitbucket.org/berkeleylab/warpiv. Supplemental material at https://extras.computer.org/extra/mcg2016030022s1.pdf provides more details regarding the memory profiling and optimization and the Yee grid recentering optimization results discussed in the main article.

  4. Unstructured LES of Reacting Multiphase Flows in Realistic Gas Turbine Combustors

    NASA Technical Reports Server (NTRS)

    Ham, Frank; Apte, Sourabh; Iaccarino, Gianluca; Wu, Xiao-Hua; Herrmann, Marcus; Constantinescu, George; Mahesh, Krishnan; Moin, Parviz

    2003-01-01

    As part of the Accelerated Strategic Computing Initiative (ASCI) program, an accurate and robust simulation tool is being developed to perform high-fidelity LES studies of multiphase, multiscale turbulent reacting flows in aircraft gas turbine combustor configurations using hybrid unstructured grids. In the combustor, pressurized gas from the upstream compressor is reacted with atomized liquid fuel to produce the combustion products that drive the downstream turbine. The Large Eddy Simulation (LES) approach is used to simulate the combustor because of its demonstrated superiority over RANS in predicting turbulent mixing, which is central to combustion. This paper summarizes the accomplishments of the combustor group over the past year, concentrating mainly on the two major milestones achieved this year: 1) Large scale simulation: A major rewrite and redesign of the flagship unstructured LES code has allowed the group to perform large eddy simulations of the complete combustor geometry (all 18 injectors) with over 100 million control volumes; 2) Multi-physics simulation in complex geometry: The first multi-physics simulations including fuel spray breakup, coalescence, evaporation, and combustion are now being performed in a single periodic sector (1/18th) of an actual Pratt & Whitney combustor geometry.

  5. Static tool influence function for fabrication simulation of hexagonal mirror segments for extremely large telescopes.

    PubMed

    Kim, Dae Wook; Kim, Sug-Whan

    2005-02-07

We present a novel simulation technique that offers efficient mass fabrication strategies for 2 m-class hexagonal mirror segments of extremely large telescopes. As the first of two studies in a series, we establish the theoretical basis of the tool influence function (TIF) for precessing-tool polishing simulation of non-rotating workpieces. These theoretical TIFs were then used to confirm the reproducibility of the material-removal footprints (measured TIFs) of the bulged precessing tooling reported elsewhere. This is followed by a reverse-computation technique that traces, employing the simplex search method, the real polishing pressure from the empirical TIF. The technical details, together with the results and implications described here, provide the theoretical material-removal tool essential to the successful polishing simulation that will be reported in the second study.

  6. Validating empirical force fields for molecular-level simulation of cellulose dissolution

    USDA-ARS?s Scientific Manuscript database

    The calculations presented here, which include dynamics simulations using analytical force fields and first principles studies, indicate that the COMPASS force field is preferred over the Dreiding and Universal force fields for studying dissolution of large cellulose structures. The validity of thes...

  7. DIF Analysis with Multilevel Data: A Simulation Study Using the Latent Variable Approach

    ERIC Educational Resources Information Center

    Jin, Ying; Eason, Hershel

    2016-01-01

    The effects of mean ability difference (MAD) and short tests on the performance of various DIF methods have been studied extensively in previous simulation studies. Their effects, however, have not been studied under multilevel data structure. MAD was frequently observed in large-scale cross-country comparison studies where the primary sampling…

  8. Large-scale particle acceleration by magnetic reconnection during solar flares

    NASA Astrophysics Data System (ADS)

    Li, X.; Guo, F.; Li, H.; Li, G.; Li, S.

    2017-12-01

Magnetic reconnection that triggers explosive magnetic energy release has been widely invoked to explain the large-scale particle acceleration during solar flares. While great effort has been spent studying the acceleration mechanism in small-scale kinetic simulations, few studies have made predictions for acceleration at scales comparable to the flare reconnection region. Here we present a new approach to study this problem. We solve the large-scale energetic-particle transport equation in the fluid velocity and magnetic fields from high-Lundquist-number MHD simulations of reconnection layers. This approach is based on examining the dominant acceleration mechanism and pitch-angle scattering in kinetic simulations. Due to the fluid compression in reconnection outflows and merging magnetic islands, particles are accelerated to high energies and develop power-law energy distributions. We find that the acceleration efficiency and power-law index depend critically on the upstream plasma beta and the magnitude of the guide field (the magnetic field component perpendicular to the reconnecting component), as they influence the compressibility of the reconnection layer. We also find that the accelerated high-energy particles are mostly concentrated in large magnetic islands, making the islands a source of energetic particles and high-energy emissions. These findings may provide explanations for the acceleration process in large-scale magnetic reconnection during solar flares and the temporal and spatial emission properties observed in different flare events.

  9. Molecular dynamics simulations with electronic stopping can reproduce experimental sputtering yields of metals impacted by large cluster ions

    NASA Astrophysics Data System (ADS)

    Tian, Jiting; Zhou, Wei; Feng, Qijie; Zheng, Jian

    2018-03-01

An unsolved problem in research on sputtering from metals induced by energetic large cluster ions is that molecular dynamics (MD) simulations often produce sputtering yields much higher than experimental results. Unlike previous simulations that considered only elastic atomic interactions (nuclear stopping), here we incorporate inelastic electron-atom interactions (electronic stopping, ES) into MD simulations using a friction model. In this way we have simulated continuous 45° impacts of 10-20 keV C60 on a Ag(111) surface and found that the calculated sputtering yields can be very close to the experimental results when the model parameter is appropriately assigned. Conversely, when we ignore the effect of ES, the yields are much higher, as in previous studies. We further extend our research to the sputtering of Au induced by continuous keV C60 or Ar100 bombardments and obtain quite similar results. Our study indicates that the gap between the experimental and the simulated sputtering yields is probably caused by the neglect of ES in the simulations, and that careful treatment of this issue is important for simulations of cluster-ion-induced sputtering, especially those aiming to compare with experiments.
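A friction model for electronic stopping can be sketched in one dimension: fast atoms feel an extra drag force proportional to their velocity, draining kinetic energy to the electron system. The sketch below is an illustrative assumption, not the paper's implementation; the friction coefficient `GAMMA` and the cutoff energy `E_CUT` are invented parameters.

```python
import math

EV = 1.602176634e-19   # J per eV

# Illustrative assumptions, not the paper's fitted values:
GAMMA = 5.0e-13        # kg/s, friction coefficient for electronic stopping
E_CUT = 1.0            # eV, kinetic-energy threshold for applying ES

def es_step(m, x, v, force, dt):
    """One velocity-Verlet step; atoms above E_CUT feel F_es = -GAMMA*v.
    (Using the half-step velocity in the final force evaluation is a
    common approximation for velocity-dependent forces.)"""
    def total_force(xi, vi):
        f = force(xi)
        if 0.5 * m * vi * vi / EV > E_CUT:
            f -= GAMMA * vi        # drain kinetic energy to the electrons
        return f

    a = total_force(x, v) / m
    x_new = x + v * dt + 0.5 * a * dt * dt
    v_half = v + 0.5 * a * dt
    v_new = v_half + 0.5 * (total_force(x_new, v_half) / m) * dt
    return x_new, v_new

# A free, fast Ag atom (~10 eV) decelerates until it drops below E_CUT.
m_ag = 1.79e-25        # kg, mass of a silver atom
v0 = math.sqrt(2 * 10.0 * EV / m_ag)
x, v = 0.0, v0
for _ in range(2000):
    x, v = es_step(m_ag, x, v, lambda xi: 0.0, 1.0e-15)
```

Without the friction term the atom would keep all 10 eV; with it, the energy available for sputtering is capped near `E_CUT`, which is qualitatively how ES suppresses the simulated yields.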

  10. Effectiveness of patient simulation in nursing education: meta-analysis.

    PubMed

    Shin, Sujin; Park, Jin-Hwa; Kim, Jung-Hee

    2015-01-01

The use of simulation as an educational tool is becoming increasingly prevalent in nursing education, and a variety of simulators are utilized. Nursing facilitators must find ways to promote effective learning among students in clinical practice and classrooms. To identify the best available evidence about the effects of patient simulation in nursing education, we conducted a meta-analysis of quantitative evidence published in the electronic databases EBSCO, Medline, ScienceDirect, and ERIC. Using a search strategy, we identified 2503 potentially relevant articles; twenty studies were included in the final analysis. We found significant post-intervention improvements in various domains for participants who received simulation education compared to the control groups, with a pooled random-effects standardized mean difference of 0.71, a medium-to-large effect size. In the subgroup analysis, we found that simulation education in nursing had larger benefits, in terms of effect sizes, when the effects were evaluated through performance, the evaluation outcome was psychomotor skills, the subject of learning was clinical, learners were clinical nurses and senior undergraduate nursing students, and simulators were high fidelity. These results indicate that simulation education demonstrated medium to large effect sizes and could guide nurse educators with regard to the conditions under which patient simulation is more effective than traditional learning methods.
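A pooled random-effects standardized mean difference of the kind reported here is conventionally computed with the DerSimonian-Laird method. The sketch below shows that computation on two invented studies; the numbers are assumptions for illustration and are not the review's data.

```python
# Sketch of a DerSimonian-Laird random-effects pooled effect size, the
# standard way a meta-analysis combines per-study standardized mean
# differences. Example inputs are invented, not the review's data.

def pool_random_effects(effects, variances):
    """Return (pooled_effect, tau_squared) by the DerSimonian-Laird method."""
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)          # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]           # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    return pooled, tau2

# Two hypothetical studies with SMDs 0.2 and 1.2, each with variance 0.04:
pooled, tau2 = pool_random_effects([0.2, 1.2], [0.04, 0.04])
```

A positive tau² signals between-study heterogeneity, which widens the weights relative to a fixed-effect pool; with homogeneous studies tau² collapses to zero and the two models agree.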

  11. The Universe at Moderate Redshift

    NASA Technical Reports Server (NTRS)

    Cen, Renyue; Ostriker, Jeremiah P.

    1997-01-01

The report covers the work done in the past year across a wide range of fields, including: properties of clusters of galaxies; topological properties of galaxy distributions in terms of galaxy types; patterns of the gravitational nonlinear clustering process; development of a ray-tracing algorithm to study gravitational lensing by galaxies, clusters, and large-scale structure, one application of which is the effect of weak gravitational lensing by large-scale structure on the determination of q(0); the origin of magnetic fields on galactic and cluster scales; the topological properties of Ly(alpha) clouds; the Ly(alpha) optical depth distribution; clustering properties of Ly(alpha) clouds; and a determination (lower bound) of Omega(b) based on the observed Ly(alpha) forest flux distribution. In the coming year, we plan to continue the investigation of Ly(alpha) clouds using larger dynamic range (about a factor of two) and better simulations (with more input physics included) than we have now. We will study the properties of galaxies on 1-100h(sup -1) Mpc scales using our state-of-the-art large-scale galaxy formation simulations of various cosmological models, which will have a resolution about a factor of 5 (in each dimension) better than our current best simulations. We also plan to study the properties of X-ray clusters using unprecedented, very high dynamic range (20,000) simulations, which will enable us to resolve the cores of clusters while keeping the simulation volume sufficiently large to ensure a statistically fair sample of the objects of interest. The details of the past year's work are described below.

  12. Atmospheric stability effects on wind farm performance using large-eddy simulation

    NASA Astrophysics Data System (ADS)

    Archer, C. L.; Ghaisas, N.; Xie, S.

    2014-12-01

Atmospheric stability has been recently found to have significant impacts on wind farm performance, especially since offshore and onshore wind farms are known to operate often under non-neutral conditions. Recent field observations have revealed that changes in stability are accompanied by changes in wind speed, direction, and turbulent kinetic energy (TKE). In order to isolate the effects of stability, large-eddy simulations (LES) are performed under neutral, stable, and unstable conditions, keeping the wind speed and direction unchanged at a fixed height. The Lillgrund wind farm, comprising 48 turbines, is studied in this research with the Simulator for Offshore/Onshore Wind Farm Applications (SOWFA) developed by the National Renewable Energy Laboratory. Unlike most previous numerical simulations, this study does not impose periodic boundary conditions and therefore is ideal for evaluating the effects of stability in large, but finite, wind farms. Changes in power generation, velocity deficit, rate of wake recovery, TKE, and surface temperature are quantified as a function of atmospheric stability. The sensitivity of these results to wind direction is also discussed.

  13. Study of Near-Surface Models in Large-Eddy Simulations of a Neutrally Stratified Atmospheric Boundary Layer

    NASA Technical Reports Server (NTRS)

    Senocak, I.; Ackerman, A. S.; Kirkpatrick, M. P.; Stevens, D. E.; Mansour, N. N.

    2004-01-01

Large-eddy simulation (LES) is a widely used technique in atmospheric modeling research. In LES, large, unsteady, three-dimensional structures are resolved, and small structures that are not resolved on the computational grid are modeled. A filtering operation is applied to distinguish between resolved and unresolved scales. We present two near-surface models that have found use in atmospheric modeling. We also suggest a simpler eddy viscosity model that adopts Prandtl's mixing-length model (Prandtl 1925) in the vicinity of the surface and blends with the dynamic Smagorinsky model (Germano et al., 1991) away from the surface. We evaluate the performance of these surface models by simulating a neutrally stratified atmospheric boundary layer.
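A blended eddy-viscosity model of this kind can be sketched as follows. The tanh blending function and the 50 m blending height are assumptions for illustration; the abstract does not specify the actual blending form used.

```python
import math

KAPPA = 0.4   # von Karman constant

def mixing_length_nu_t(z, dudz):
    """Prandtl mixing-length eddy viscosity: nu_t = (kappa*z)^2 * |dU/dz|."""
    return (KAPPA * z) ** 2 * abs(dudz)

def blended_nu_t(z, dudz, nu_t_dynamic, z_blend=50.0):
    """Smoothly transition from the mixing-length value at the surface to
    a dynamic-Smagorinsky value aloft. The tanh form and z_blend are
    hypothetical choices for this sketch."""
    w = math.tanh(z / z_blend)          # 0 at the surface -> 1 aloft
    return (1.0 - w) * mixing_length_nu_t(z, dudz) + w * nu_t_dynamic
```

Near the wall the mixing-length term dominates, supplying the eddy viscosity the dynamic model underpredicts there; well above the blending height the dynamic value takes over.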

  14. Simulation of turbulent shear flows at Stanford and NASA-Ames - What can we do and what have we learned?

    NASA Technical Reports Server (NTRS)

    Reynolds, W. C.

    1983-01-01

    The capabilities and limitations of large eddy simulation (LES) and full turbulence simulation (FTS) are outlined. It is pointed out that LES, although limited at the present time by the need for periodic boundary conditions, produces large-scale flow behavior in general agreement with experiments. What is more, FTS computations produce small-scale behavior that is consistent with available experiments. The importance of the development work being done on the National Aerodynamic Simulator is emphasized. Studies at present are limited to situations in which periodic boundary conditions can be applied on boundaries of the computational domain where the flow is turbulent.

  15. Adaptive smart simulator for characterization and MPPT construction of PV array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ouada, Mehdi, E-mail: mehdi.ouada@univ-annaba.org; Meridjet, Mohamed Salah; Dib, Djalel

    2016-07-25

Partial shading is among the most important problems in large photovoltaic arrays. Much of the literature addresses modeling, control, and optimization of photovoltaic conversion of solar energy under partial shading conditions. The aim of this study is to build a software simulator, analogous to a hardware simulator, that produces a shading pattern for the proposed photovoltaic array; the delivered information is used to obtain an optimal configuration of the PV array and to construct an MPPT algorithm. Graphical user interfaces (Matlab GUI) are built using a developed script. The tool is easy to use, simple, and responsive, and the simulator supports large-array simulations that can be interfaced with MPPT and power electronic converters.

  16. Simulation study of interactions of Space Shuttle-generated electron beams with ambient plasmas

    NASA Technical Reports Server (NTRS)

    Lin, Chin S.

    1992-01-01

    This report summarizes results obtained through the support of NASA Grant NAGW-1936. The objective of this report is to conduct large scale simulations of electron beams injected into space. The topics covered include the following: (1) simulation of radial expansion of an injected electron beam; (2) simulations of the active injections of electron beams; (3) parameter study of electron beam injection into an ionospheric plasma; and (4) magnetosheath-ionospheric plasma interactions in the cusp.

  17. Experiment-scale molecular simulation study of liquid crystal thin films

    NASA Astrophysics Data System (ADS)

    Nguyen, Trung Dac; Carrillo, Jan-Michael Y.; Matheson, Michael A.; Brown, W. Michael

    2014-03-01

    Supercomputers have now reached a performance level adequate for studying thin films with molecular detail at the relevant scales. By exploiting the power of GPU accelerators on Titan, we have been able to perform simulations of characteristic liquid crystal films that provide remarkable qualitative agreement with experimental images. We have demonstrated that key features of spinodal instability can only be observed with sufficiently large system sizes, which were not accessible with previous simulation studies. Our study emphasizes the capability and significance of petascale simulations in providing molecular-level insights in thin film systems as well as other interfacial phenomena.

  18. Development of Residential Prototype Building Models and Analysis System for Large-Scale Energy Efficiency Studies Using EnergyPlus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mendon, Vrushali V.; Taylor, Zachary T.

Recent advances in residential building energy efficiency and codes have resulted in increased interest in detailed residential building energy models using the latest energy simulation software. One challenge in developing residential building models to characterize new residential building stock is allowing the flexibility to address variability in house features such as geometry, configuration, and HVAC systems. Researchers solved this problem in a novel way by creating a simulation structure capable of creating fully functional EnergyPlus batch runs using a completely scalable residential EnergyPlus template system. This system was used to create a set of thirty-two residential prototype building models covering single- and multifamily buildings, four common foundation types, and four common heating system types found in the United States (US). A weighting scheme with detailed state-wise and national weighting factors was designed to supplement the residential prototype models. The complete set is designed to represent the majority of new residential construction stock. The entire structure consists of a system of utility programs developed around the core EnergyPlus simulation engine to automate the creation and management of large-scale simulation studies with minimal human effort. The simulation structure and the residential prototype building models have been used for numerous large-scale studies, one of which is briefly discussed in this paper.

  19. Analysis of the Influence of Construction Insulation Systems on Public Safety in China

    PubMed Central

    Zhang, Guowei; Zhu, Guoqing; Zhao, Guoxiang

    2016-01-01

    With the Government of China’s proposed Energy Efficiency Regulations (GB40411-2007), the implementation of external insulation systems will become mandatory in China. Frequent external insulation system fires cause large numbers of casualties and extensive property damage, and have rapidly become a pressing issue in construction evacuation safety in China. This study attempts to reconstruct an actual fire scene and proposes a quantitative risk assessment method for upward insulation system fires using thermal analysis tests and large eddy simulations (with the Fire Dynamics Simulator (FDS) software). First, the pyrolysis and combustion characteristics of extruded polystyrene (XPS) board, including ignition temperature, heat of combustion, limiting oxygen index, thermogravimetric behaviour, and thermal radiation, were studied experimentally. Based on these experimental data, large eddy simulation was then applied to reconstruct insulation system fires. The results show that upward insulation system fires can be accurately reconstructed by combining thermal analysis tests with large eddy simulation. Insulation system fires spread faster in the vertical direction than in the horizontal direction. Moreover, we find that flashover may occur in enclosures affected by insulation system fires once the smoke temperature exceeds 600 °C. The simulation methods and experimental results obtained in this paper could provide valuable references for fire evacuation, hazard assessment, and fire-resistant construction design studies. PMID:27589774

  20. Analysis of the Influence of Construction Insulation Systems on Public Safety in China.

    PubMed

    Zhang, Guowei; Zhu, Guoqing; Zhao, Guoxiang

    2016-08-30

    With the Government of China's proposed Energy Efficiency Regulations (GB40411-2007), the implementation of external insulation systems will become mandatory in China. Frequent external insulation system fires cause large numbers of casualties and extensive property damage, and have rapidly become a pressing issue in construction evacuation safety in China. This study attempts to reconstruct an actual fire scene and proposes a quantitative risk assessment method for upward insulation system fires using thermal analysis tests and large eddy simulations (with the Fire Dynamics Simulator (FDS) software). First, the pyrolysis and combustion characteristics of extruded polystyrene (XPS) board, including ignition temperature, heat of combustion, limiting oxygen index, thermogravimetric behaviour, and thermal radiation, were studied experimentally. Based on these experimental data, large eddy simulation was then applied to reconstruct insulation system fires. The results show that upward insulation system fires can be accurately reconstructed by combining thermal analysis tests with large eddy simulation. Insulation system fires spread faster in the vertical direction than in the horizontal direction. Moreover, we find that flashover may occur in enclosures affected by insulation system fires once the smoke temperature exceeds 600 °C. The simulation methods and experimental results obtained in this paper could provide valuable references for fire evacuation, hazard assessment, and fire-resistant construction design studies.

  1. Large-Eddy Simulation of Wind-Plant Aerodynamics: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Churchfield, M. J.; Lee, S.; Moriarty, P. J.

    In this work, we present results of a large-eddy simulation of the 48 multi-megawatt turbines composing the Lillgrund wind plant. Turbulent inflow wind is created by performing an atmospheric boundary layer precursor simulation and turbines are modeled using a rotating, variable-speed actuator line representation. The motivation for this work is that few others have done wind plant large-eddy simulations with a substantial number of turbines, and the methods for carrying out the simulations are varied. We wish to draw upon the strengths of the existing simulations and our growing atmospheric large-eddy simulation capability to create a sound methodology for performing this type of simulation. We have used the OpenFOAM CFD toolbox to create our solver.

  2. RACORO continental boundary layer cloud investigations. 2. Large-eddy simulations of cumulus clouds and evaluation with in-situ and ground-based observations

    DOE PAGES

    Endo, Satoshi; Fridlind, Ann M.; Lin, Wuyin; ...

    2015-06-19

    A 60-hour case study of continental boundary layer cumulus clouds is examined using two large-eddy simulation (LES) models. The case is based on observations obtained during the RACORO Campaign (Routine Atmospheric Radiation Measurement [ARM] Aerial Facility [AAF] Clouds with Low Optical Water Depths [CLOWD] Optical Radiative Observations) at the ARM Climate Research Facility's Southern Great Plains site. The LES models are driven by continuous large-scale and surface forcings, and are constrained by multi-modal and temporally varying aerosol number size distribution profiles derived from aircraft observations. We compare simulated cloud macrophysical and microphysical properties with ground-based remote sensing and aircraft observations. The LES simulations capture the observed transitions of the evolving cumulus-topped boundary layers during the three daytime periods, and generally reproduce variations of droplet number concentration with liquid water content (LWC), corresponding to the gradient between the cloud centers and cloud edges at given heights. The observed LWC values fall within the range of simulated values; the observed droplet number concentrations are commonly higher than simulated, but differences remain on par with potential estimation errors in the aircraft measurements. Sensitivity studies examine the influences of bin microphysics versus bulk microphysics, aerosol advection, supersaturation treatment, and aerosol hygroscopicity. Simulated macrophysical cloud properties are found to be insensitive in this non-precipitating case, but microphysical properties are especially sensitive to bulk microphysics supersaturation treatment and aerosol hygroscopicity.

  3. Use of Direct Dynamics Simulations to Determine Unimolecular Reaction Paths and Arrhenius Parameters for Large Molecules.

    PubMed

    Yang, Li; Sun, Rui; Hase, William L

    2011-11-08

    In a previous study (J. Chem. Phys. 2008, 129, 094701) it was shown that for a large molecule, with a total energy much greater than its barrier for decomposition and whose vibrational modes are harmonic oscillators, the expressions for the classical Rice-Ramsperger-Kassel-Marcus (RRKM) (i.e., RRK) and classical transition-state theory (TST) rate constants become equivalent. Using this relationship, a molecule's unimolecular rate constants versus temperature may be determined from chemical dynamics simulations of microcanonical ensembles for the molecule at different total energies. The simulation identifies the molecule's unimolecular pathways and their Arrhenius parameters. In the work presented here, this approach is used to study the thermal decomposition of CH3-NH-CH═CH-CH3, an important constituent of the polymer in cross-linked epoxy resins. Direct dynamics simulations, at the MP2/6-31+G* level of theory, were used to investigate the decomposition of microcanonical ensembles for this molecule. The Arrhenius A and Ea parameters determined from the direct dynamics simulations are in very good agreement with the TST Arrhenius parameters for the MP2/6-31+G* potential energy surface. The simulation method applied here may be particularly useful for large molecules with a multitude of decomposition pathways, whose transition states may be difficult to determine and whose structures are not readily obvious.
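    The final reduction from simulated rate constants to Arrhenius parameters is a linear least-squares fit of ln k against 1/T, since ln k = ln A - Ea/(R T). A minimal sketch of that fit using synthetic, illustrative numbers (not values from the paper):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def fit_arrhenius(temps, rates):
    """Least-squares fit of ln k = ln A - Ea/(R T) to (T, k) data.
    Returns (A, Ea) with A in the units of k and Ea in J/mol."""
    xs = [1.0 / t for t in temps]
    ys = [math.log(k) for k in rates]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    return math.exp(intercept), -slope * R

# Synthetic check with assumed, illustrative parameters
A_true, Ea_true = 1.0e13, 2.0e5
temps = [1000.0, 1200.0, 1400.0, 1600.0]
rates = [A_true * math.exp(-Ea_true / (R * t)) for t in temps]
A_fit, Ea_fit = fit_arrhenius(temps, rates)
```

    On exact synthetic data the fit recovers A and Ea to floating-point precision; with noisy simulation-derived rate constants the same fit yields the reported Arrhenius parameters with uncertainties.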

  4. Effectiveness of online simulation training: Measuring faculty knowledge, perceptions, and intention to adopt.

    PubMed

    Kim, Sujeong; Park, Chang; O'Rourke, Jennifer

    2017-04-01

    Best practice standards of simulation recommend standardized simulation training for nursing faculty. Online training may offer an effective and more widely available alternative to in-person training. Using the Theory of Planned Behavior, this study evaluated the effectiveness of an online simulation training program, examining faculty's foundational knowledge of simulation as well as their perceptions and intention to adopt it. One-group pretest-posttest design. A large school of nursing with a main campus and five regional campuses in the Midwestern United States. Convenience sample of 52 faculty participants. Knowledge of foundational simulation principles was measured by pre/post-training module quizzes. Perceptions and the intention to adopt simulation were measured using the Faculty Attitudes and Intent to Use Related to the Human Patient Simulator questionnaire. There was a significant improvement in faculty knowledge after training and observable improvements in attitudes. Attitudes significantly influenced the intention to adopt simulation (B=2.54, p<0.001). Online simulation training provides an effective alternative for training large numbers of nursing faculty who seek to implement best practice standards within their institutions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Shock-induced transformations in crystalline RDX: a uniaxial constant-stress Hugoniostat molecular dynamics simulation study.

    PubMed

    Bedrov, Dmitry; Hooper, Justin B; Smith, Grant D; Sewell, Thomas D

    2009-07-21

    Molecular dynamics (MD) simulations of uniaxial shock compression along the [100] and [001] directions in the alpha polymorph of hexahydro-1,3,5-trinitro-1,3,5-triazine (alpha-RDX) have been conducted over a wide range of shock pressures using the uniaxial constant stress Hugoniostat method [Ravelo et al., Phys. Rev. B 70, 014103 (2004)]. We demonstrate that the Hugoniostat method is suitable for studying shock compression in atomic-scale models of energetic materials without the necessity to consider the extremely large simulation cells required for an explicit shock wave simulation. Specifically, direct comparison of results obtained using the Hugoniostat approach to those reported by Thompson and co-workers [Phys. Rev. B 78, 014107 (2008)] based on large-scale MD simulations of shocks using the shock front absorbing boundary condition (SFABC) approach indicates that Hugoniostat simulations of systems containing several thousand molecules reproduced the salient features observed in the SFABC simulations involving roughly a quarter-million molecules, namely, nucleation and growth of nanoscale shear bands for shocks propagating along the [100] direction and the polymorphic alpha-gamma phase transition for shocks directed along the [001] direction. The Hugoniostat simulations yielded predictions of the Hugoniot elastic limit for the [100] shock direction consistent with SFABC simulation results.
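    The Hugoniostat method constrains the system to final shocked states satisfying the Rankine-Hugoniot jump conditions instead of resolving a travelling shock wave, which is why far smaller cells suffice. A minimal numeric sketch of those standard jump conditions in specific-volume form (the input state is arbitrary and illustrative, not RDX data):

```python
import math

def hugoniot_state(p0, v0, e0, p, v):
    """Rankine-Hugoniot jump conditions in specific-volume form:
    returns (energy, shock velocity, particle velocity) of the
    shocked state given initial (p0, v0, e0) and final (p, v)."""
    e = e0 + 0.5 * (p + p0) * (v0 - v)        # energy jump condition
    us = v0 * math.sqrt((p - p0) / (v0 - v))  # shock velocity
    up = math.sqrt((p - p0) * (v0 - v))       # particle velocity
    return e, us, up

# Arbitrary illustrative state in consistent units
e, us, up = hugoniot_state(p0=0.0, v0=1.0, e0=0.0, p=10.0, v=0.8)
```

    The momentum condition provides a built-in consistency check: Us * up must equal v0 * (p - p0) for any state returned by these relations.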

  6. Using high-fidelity computational fluid dynamics to help design a wind turbine wake measurement experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Churchfield, M.; Wang, Q.; Scholbrock, A.

    Here, we describe the process of using large-eddy simulations of wind turbine wake flow to help design a wake measurement campaign. The main goal of the experiment is to measure wakes and wake deflection that result from intentional yaw misalignment under a variety of atmospheric conditions at the Scaled Wind Farm Technology facility operated by Sandia National Laboratories in Lubbock, Texas. Prior simulation studies have shown that wake deflection may be used for wind-plant control that maximizes plant power output. In this study, simulations are performed to characterize wake deflection and general behavior before the experiment is performed to ensure better upfront planning. Beyond characterizing the expected wake behavior, we also use the large-eddy simulation to test a virtual version of the lidar we plan to use to measure the wake and better understand our lidar scan strategy options. This work is an excellent example of a 'simulation-in-the-loop' measurement campaign.

  7. Using High-Fidelity Computational Fluid Dynamics to Help Design a Wind Turbine Wake Measurement Experiment

    NASA Astrophysics Data System (ADS)

    Churchfield, M.; Wang, Q.; Scholbrock, A.; Herges, T.; Mikkelsen, T.; Sjöholm, M.

    2016-09-01

    We describe the process of using large-eddy simulations of wind turbine wake flow to help design a wake measurement campaign. The main goal of the experiment is to measure wakes and wake deflection that result from intentional yaw misalignment under a variety of atmospheric conditions at the Scaled Wind Farm Technology facility operated by Sandia National Laboratories in Lubbock, Texas. Prior simulation studies have shown that wake deflection may be used for wind-plant control that maximizes plant power output. In this study, simulations are performed to characterize wake deflection and general behavior before the experiment is performed to ensure better upfront planning. Beyond characterizing the expected wake behavior, we also use the large-eddy simulation to test a virtual version of the lidar we plan to use to measure the wake and better understand our lidar scan strategy options. This work is an excellent example of a “simulation-in-the-loop” measurement campaign.

  8. Using high-fidelity computational fluid dynamics to help design a wind turbine wake measurement experiment

    DOE PAGES

    Churchfield, M.; Wang, Q.; Scholbrock, A.; ...

    2016-10-03

    Here, we describe the process of using large-eddy simulations of wind turbine wake flow to help design a wake measurement campaign. The main goal of the experiment is to measure wakes and wake deflection that result from intentional yaw misalignment under a variety of atmospheric conditions at the Scaled Wind Farm Technology facility operated by Sandia National Laboratories in Lubbock, Texas. Prior simulation studies have shown that wake deflection may be used for wind-plant control that maximizes plant power output. In this study, simulations are performed to characterize wake deflection and general behavior before the experiment is performed to ensure better upfront planning. Beyond characterizing the expected wake behavior, we also use the large-eddy simulation to test a virtual version of the lidar we plan to use to measure the wake and better understand our lidar scan strategy options. This work is an excellent example of a 'simulation-in-the-loop' measurement campaign.

  9. Contribution of Road Grade to the Energy Use of Modern Automobiles Across Large Datasets of Real-World Drive Cycles: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, E.; Burton, E.; Duran, A.

    Understanding the real-world power demand of modern automobiles is of critical importance to engineers using modeling and simulation to inform the intelligent design of increasingly efficient powertrains. Increased use of global positioning system (GPS) devices has made large-scale data collection of vehicle speed (and associated power demand) a reality. While the availability of real-world GPS data has improved the industry's understanding of in-use vehicle power demand, relatively little attention has been paid to the incremental power requirements imposed by road grade. This analysis quantifies the incremental efficiency impacts of real-world road grade by appending high-fidelity elevation profiles to GPS speed traces and performing a large simulation study. Employing a large real-world dataset from the National Renewable Energy Laboratory's Transportation Secure Data Center, vehicle powertrain simulations are performed with and without road grade for five vehicle models. Aggregate results of this study suggest that road grade could be responsible for 1% to 3% of fuel use in light-duty automobiles.
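    The incremental tractive power imposed by grade reduces to elementary mechanics: P = m g v sin(arctan(grade)). A quick sketch with illustrative numbers (not drawn from the study's dataset):

```python
import math

def grade_power_watts(mass_kg, speed_mps, grade):
    """Incremental tractive power needed to climb a road grade
    (grade = rise/run), ignoring drivetrain losses:
    P = m * g * v * sin(atan(grade))."""
    g = 9.81  # gravitational acceleration, m/s^2
    return mass_kg * g * speed_mps * math.sin(math.atan(grade))

# Illustrative case: a 1500 kg car at 25 m/s (~56 mph) on a 3% grade
p = grade_power_watts(1500.0, 25.0, 0.03)  # roughly 11 kW extra demand
```

    Appending an elevation profile to a GPS speed trace amounts to evaluating this term at every timestep and adding it to the grade-free road-load power.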

  10. Large-Eddy Simulations of Atmospheric Flows Over Complex Terrain Using the Immersed-Boundary Method in the Weather Research and Forecasting Model

    NASA Astrophysics Data System (ADS)

    Ma, Yulong; Liu, Heping

    2017-12-01

    Atmospheric flow over complex terrain, particularly recirculation flows, greatly influences wind-turbine siting, forest-fire behaviour, and trace-gas and pollutant dispersion. However, there is a large uncertainty in the simulation of flow over complex topography, which is attributable to the type of turbulence model, the subgrid-scale (SGS) turbulence parametrization, terrain-following coordinates, and numerical errors in finite-difference methods. Here, we upgrade the large-eddy simulation module within the Weather Research and Forecasting model by incorporating the immersed-boundary method into the module to improve simulations of the flow and recirculation over complex terrain. Simulations over Bolund Hill indicate reduced mean absolute speed-up errors relative to previous studies, as well as an improved simulation of the recirculation zone behind the escarpment of the hill. With regard to the SGS parametrization, the Lagrangian-averaged scale-dependent Smagorinsky model performs better than the classic Smagorinsky model in reproducing both velocity and turbulent kinetic energy. A finer grid resolution also improves the strength of the recirculation in flow simulations, with a higher horizontal grid resolution improving simulations just behind the escarpment, and a higher vertical grid resolution improving results on the lee side of the hill. Our modelling approach has broad applications for the simulation of atmospheric flows over complex topography.

  11. Using a Computerized Classroom Simulation to Prepare Pre-Service Teachers

    ERIC Educational Resources Information Center

    McPherson, Rebekah; Tyler-Wood, Tandra; McEnturff Ellison, Amber; Peak, Pamela

    2011-01-01

    This study at a large midwestern university evaluated the use of a web-based simulated classroom, simSchool, with pre-service and in-service special education students, to determine if use of the simulated classroom influences students' perceptions of inclusion and teacher preparation. The project used a nonequivalent comparison group,…

  12. Impacts of different characterizations of large-scale background on simulated regional-scale ozone over the continental United States

    EPA Science Inventory

    This study analyzes simulated regional-scale ozone burdens both near the surface and aloft, estimates process contributions to these burdens, and calculates the sensitivity of the simulated regional-scale ozone burden to several key model inputs with a particular emphasis on boun...

  13. Field scale test of multi-dimensional flow and morphodynamic simulations used for restoration design analysis

    USGS Publications Warehouse

    McDonald, Richard R.; Nelson, Jonathan M.; Fosness, Ryan L.; Nelson, Peter O.; Constantinescu, George; Garcia, Marcelo H.; Hanes, Dan

    2016-01-01

    Two- and three-dimensional morphodynamic simulations are becoming common in studies of channel form and process. The performance of these simulations is often validated against measurements from laboratory studies. Collecting channel-change information in natural settings for model validation is difficult because it can be expensive, and under most channel-forming flows the resulting channel change is generally small. Several channel restoration projects on the Kootenai River, ID, designed in part to armor large meanders with several large spurs constructed of wooden piles, have resulted in rapid bed elevation change following construction. Monitoring of these restoration projects includes post-restoration (as-built) Digital Elevation Models (DEMs) as well as additional channel surveys following high channel-forming flows after construction. The resulting sequence of measured bathymetry provides excellent validation data for morphodynamic simulations at the reach scale of a real river. In this paper we test the performance of a quasi-three-dimensional morphodynamic simulation against the measured elevation change. The resulting simulations predict the pattern of channel change reasonably well, but many of the details, such as the maximum scour, are underpredicted.

  14. Simulation of a large size inductively coupled plasma generator and comparison with experimental data

    NASA Astrophysics Data System (ADS)

    Lei, Fan; Li, Xiaoping; Liu, Yanming; Liu, Donglin; Yang, Min; Yu, Yuanyuan

    2018-01-01

    A two-dimensional axisymmetric inductively coupled plasma (ICP) model and its implementation on the COMSOL Multiphysics simulation platform are described. Specifically, a large-size ICP generator filled with argon is simulated in this study. Distributions of electron number density and temperature are obtained and compared for various input power and pressure settings. In addition, the electron trajectory distribution is obtained from the simulation. Finally, the simulation results are compared against experimental data to validate the two-dimensional fluid model. Approximate agreement was found, with the same variation tendencies. The main reasons for the discrepancies in numerical magnitude are the assumption of Maxwellian and Druyvesteyn electron energy distributions and the limited collision cross-section and reaction-rate data available for argon plasma.

  15. Computational study of noise in a large signal transduction network.

    PubMed

    Intosalmi, Jukka; Manninen, Tiina; Ruohonen, Keijo; Linne, Marja-Leena

    2011-06-21

    Biochemical systems are inherently noisy due to the discrete reaction events that occur in a random manner. Although noise is often perceived as a disturbing factor, the system might actually benefit from it. In order to understand the role of noise better, its quality must be studied in a quantitative manner. Computational analysis and modeling play an essential role in this demanding endeavor. We implemented a large nonlinear signal transduction network combining protein kinase C, mitogen-activated protein kinase, phospholipase A2, and β isoform of phospholipase C networks. We simulated the network in 300 different cellular volumes using the exact Gillespie stochastic simulation algorithm and analyzed the results in both the time and frequency domain. In order to perform simulations in a reasonable time, we used modern parallel computing techniques. The analysis revealed that time and frequency domain characteristics depend on the system volume. The simulation results also indicated that there are several kinds of noise processes in the network, all of them representing different kinds of low-frequency fluctuations. In the simulations, the power of noise decreased on all frequencies when the system volume was increased. We concluded that basic frequency domain techniques can be applied to the analysis of simulation results produced by the Gillespie stochastic simulation algorithm. This approach is suited not only to the study of fluctuations but also to the study of pure noise processes. Noise seems to have an important role in biochemical systems and its properties can be numerically studied by simulating the reacting system in different cellular volumes. Parallel computing techniques make it possible to run massive simulations in hundreds of volumes and, as a result, accurate statistics can be obtained from computational studies. © 2011 Intosalmi et al; licensee BioMed Central Ltd.
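    The exact Gillespie algorithm cited above draws an exponential waiting time from the total reaction propensity and then picks the next reaction in proportion to its propensity. A minimal direct-method sketch for a one-species birth-death toy system (not the signalling network itself):

```python
import random

def gillespie_birth_death(k_prod, k_deg, x0, t_end, seed=1):
    """Direct-method Gillespie SSA: zeroth-order production at rate
    k_prod and first-order degradation at rate k_deg * x.
    Returns the molecule count at time t_end."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    while t < t_end:
        a1, a2 = k_prod, k_deg * x      # reaction propensities
        a0 = a1 + a2                    # total propensity
        if a0 == 0.0:
            break
        t += rng.expovariate(a0)        # exponential waiting time
        if rng.random() * a0 < a1:
            x += 1                      # production event fired
        else:
            x -= 1                      # degradation event fired
    return x

# Steady-state mean is k_prod / k_deg = 100 molecules
x_final = gillespie_birth_death(k_prod=10.0, k_deg=0.1, x0=0, t_end=200.0)
```

    Scaling the propensities with system volume reproduces the effect analysed in the paper: relative fluctuations shrink as copy numbers grow, which is why the noise power falls at all frequencies in larger volumes.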

  16. Motion simulator study of longitudinal stability requirements for large delta wing transport airplanes during approach and landing with stability augmentation systems failed

    NASA Technical Reports Server (NTRS)

    Snyder, C. T.; Fry, E. B.; Drinkwater, F. J., III; Forrest, R. D.; Scott, B. C.; Benefield, T. D.

    1972-01-01

    A ground-based simulator investigation was conducted in preparation for, and in correlation with, an in-flight simulator program. The objective of these studies was to define minimum acceptable levels of static longitudinal stability for the landing approach following stability augmentation system failures. The airworthiness authorities are presently attempting to establish the requirements for civil transports with only the backup flight control system operating. Using a baseline configuration representative of a large delta wing transport, 20 different configurations, many representing negative static margins, were assessed by three research test pilots in 33 hours of piloted operation. Verification of the baseline model to be used in the TIFS experiment was provided by computed and piloted comparisons with a well-validated reference airplane simulation. Pilot comments and ratings are included, as well as preliminary tracking performance and workload data.

  17. An experimental study of the aerodynamics of a NACA 0012 airfoil with a simulated glaze ice accretion

    NASA Technical Reports Server (NTRS)

    Bragg, M. B.

    1986-01-01

    An experimental study was conducted in the Ohio State University subsonic wind tunnel to measure the detailed aerodynamic characteristics of an airfoil with a simulated glaze ice accretion. A NACA 0012 model with interchangeable leading edges and pressure taps every one percent chord was used. Surface pressure and wake data were taken on the airfoil clean, with forced transition and with a simulated glaze ice shape. Lift and drag penalties due to the ice shape were found and the surface pressure clearly showed that large separation bubbles were present. Both total pressure and split-film probes were used to measure velocity profiles, both for the clean model and for the model with a simulated ice accretion. A large region of flow separation was seen in the velocity profiles and was correlated to the pressure measurements. Clean airfoil data were found to compare well to existing airfoil analysis methods.
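    With pressure taps at every one percent chord, sectional force coefficients follow from trapezoidal integration of the surface pressure coefficient. A sketch of the normal-force integration on idealized, hypothetical tap data (at small angles of attack the normal-force coefficient Cn approximates the lift coefficient):

```python
def normal_force_coefficient(x_over_c, cp_upper, cp_lower):
    """Trapezoidal integration of surface pressure coefficients:
    Cn = integral of (Cp_lower - Cp_upper) d(x/c) over the chord."""
    cn = 0.0
    for i in range(len(x_over_c) - 1):
        dx = x_over_c[i + 1] - x_over_c[i]
        avg = 0.5 * ((cp_lower[i] - cp_upper[i]) +
                     (cp_lower[i + 1] - cp_upper[i + 1]))
        cn += avg * dx
    return cn

# Idealized taps every 1% chord (hypothetical constant-Cp surfaces)
xs = [i / 100.0 for i in range(101)]
cp_u = [-1.0] * 101   # upper-surface pressure coefficient
cp_l = [1.0] * 101    # lower-surface pressure coefficient
cn = normal_force_coefficient(xs, cp_u, cp_l)
```

    A separation bubble shows up in such data as a plateau of nearly constant Cp, which is how the pressure distributions in the study reveal the bubbles behind the ice shape.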

  18. Computational Study of Uniaxial Deformations in Silica Aerogel Using a Coarse-Grained Model.

    PubMed

    Ferreiro-Rangel, Carlos A; Gelb, Lev D

    2015-07-09

    Simulations of a flexible coarse-grained model are used to study silica aerogels. This model, introduced in a previous study (J. Phys. Chem. C 2007, 111, 15792), consists of spherical particles which interact through weak nonbonded forces and strong interparticle bonds that may form and break during the simulations. Small-deformation simulations are used to determine the elastic moduli of a wide range of material models, and large-deformation simulations are used to probe structural evolution and plastic deformation. Uniaxial deformation at constant transverse pressure is simulated using two methods: a hybrid Monte Carlo approach combining molecular dynamics for the motion of individual particles and stochastic moves for transverse stress equilibration, and isothermal molecular dynamics simulations at fixed Poisson ratio. Reasonable agreement on elastic moduli is obtained except at very low densities. The model aerogels exhibit Poisson ratios between 0.17 and 0.24, with higher-density gels clustered around 0.20, and Young's moduli that vary with aerogel density according to a power-law dependence with an exponent near 3.0. These results are in agreement with reported experimental values. The models are shown to satisfy the expected homogeneous isotropic linear-elastic relationship between bulk and Young's moduli at higher densities, but there are systematic deviations at the lowest densities. Simulations of large compressive and tensile strains indicate that these materials display a ductile-to-brittle transition as the density is increased, and that the tensile strength varies with density according to a power law, with an exponent in reasonable agreement with experiment. Auxetic behavior is observed at large tensile strains in some models. Finally, at maximum tensile stress very few broken bonds are found in the materials, in accord with the theory that only a small fraction of the material structure is actually load-bearing.
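    The reported power-law dependence of Young's modulus on density, E proportional to rho^n with n near 3.0, is conventionally extracted as the slope of a log-log fit. A minimal sketch on synthetic data generated with an assumed exponent of 3.0 (illustrative values, not the study's gels):

```python
import math

def power_law_exponent(densities, moduli):
    """Slope of log(E) versus log(rho): the exponent n in E = C * rho**n."""
    xs = [math.log(d) for d in densities]
    ys = [math.log(e) for e in moduli]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    return (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
            / sum((x - xbar) ** 2 for x in xs))

# Synthetic density/modulus pairs obeying E = 5 * rho**3 exactly
rhos = [0.1, 0.2, 0.3, 0.4]
es = [5.0 * r ** 3.0 for r in rhos]
n_fit = power_law_exponent(rhos, es)
```

    The same reduction applies to the tensile-strength scaling, whose exponent the study likewise compares against experiment.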

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rubel, Oliver; Loring, Burlen; Vay, Jean -Luc

    The generation of short pulses of ion beams through the interaction of an intense laser with a plasma sheath offers the possibility of compact and cheaper ion sources for many applications--from fast ignition and radiography of dense targets to hadron therapy and injection into conventional accelerators. To enable the efficient analysis of large-scale, high-fidelity particle accelerator simulations using the Warp simulation suite, the authors introduce the Warp In situ Visualization Toolkit (WarpIV). WarpIV integrates state-of-the-art in situ visualization and analysis using VisIt with Warp, supports management and control of complex in situ visualization and analysis workflows, and implements integrated analytics to facilitate query- and feature-based data analytics and efficient large-scale data analysis. WarpIV enables for the first time distributed parallel, in situ visualization of the full simulation data using high-performance compute resources as the data is being generated by Warp. The authors describe the application of WarpIV to study and compare large 2D and 3D ion accelerator simulations, demonstrating significant differences in the acceleration process in 2D and 3D simulations. WarpIV is available to the public via https://bitbucket.org/berkeleylab/warpiv. Supplemental material at https://extras.computer.org/extra/mcg2016030022s1.pdf provides more details regarding the memory profiling and Yee grid recentering optimization results discussed in the main article.

  20. A holistic approach for large-scale derived flood frequency analysis

    NASA Astrophysics Data System (ADS)

    Dung Nguyen, Viet; Apel, Heiko; Hundecha, Yeshewatesfa; Guse, Björn; Vorogushyn, Sergiy; Merz, Bruno

    2017-04-01

    Spatial consistency, which has usually been disregarded because of reported methodological difficulties, is increasingly demanded in regional flood hazard (and risk) assessments. This study aims at developing a holistic approach for consistently deriving flood frequency at large scales. A large-scale, two-component model has been established for simulating very long multisite synthetic meteorological fields and flood flows at many gauged and ungauged locations, thereby reflecting the spatially inherent heterogeneity. The model has been applied to a region of nearly half a million km2 covering Germany and parts of neighbouring countries. The model performance has been examined against multiple objectives, with a focus on extremes. Through this continuous simulation approach, flood quantiles for the studied region have been derived successfully and provide useful input for a comprehensive flood risk study.
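    In such a continuous-simulation (derived flood frequency) approach, quantiles come directly from the empirical distribution of simulated annual maxima rather than from a fitted extreme-value law. A minimal sketch using Weibull plotting positions (the input series is illustrative, not model output):

```python
def flood_quantile(annual_maxima, return_period_years):
    """Empirical flood quantile from a long simulated annual-maximum
    series, using Weibull plotting positions p_i = i / (n + 1)."""
    xs = sorted(annual_maxima)
    n = len(xs)
    p_target = 1.0 - 1.0 / return_period_years  # non-exceedance probability
    # index of the plotting position closest to the target probability
    i = round(p_target * (n + 1))
    i = min(max(i, 1), n)
    return xs[i - 1]

# Illustrative series: 999 synthetic "annual maxima" with values 1..999
series = list(range(1, 1000))
q100 = flood_quantile(series, 100.0)  # empirical 100-year flood
```

    A multi-thousand-year synthetic series makes even the 100-year quantile an interior point of the sample, which is what lets the method sidestep distribution-choice uncertainty.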

  1. Simulation-based training for nurses: Systematic review and meta-analysis.

    PubMed

    Hegland, Pål A; Aarlie, Hege; Strømme, Hilde; Jamtvedt, Gro

    2017-07-01

    Simulation-based training is a widespread strategy to improve health-care quality. However, its effect on registered nurses has not previously been established in systematic reviews. The aim of this systematic review is to evaluate the effect of simulation-based training on nurses' skills and knowledge. We searched CDSR, DARE, HTA, CENTRAL, CINAHL, MEDLINE, Embase, ERIC, and SveMed+ for randomised controlled trials (RCTs) evaluating the effect of simulation-based training among nurses. Searches were completed in December 2016. Two reviewers independently screened abstracts and full-text articles, extracted data, and assessed risk of bias. We compared simulation-based training to other learning strategies, high-fidelity simulation to other simulation strategies, and different organisations of simulation training. Data were analysed through meta-analysis and narrative synthesis. GRADE was used to assess the quality of evidence. Fifteen RCTs met the inclusion criteria. For the comparison of simulation-based training to other learning strategies on nurses' skills, six studies in the meta-analysis showed a significant effect in favour of simulation (SMD -1.09, CI -1.72 to -0.47). There was large heterogeneity (I² = 85%). For the other comparisons, there was large between-study variation in results. The quality of evidence for all comparisons was graded as low. The effect of simulation-based training varies substantially between studies. Our meta-analysis showed a significant effect of simulation training compared to other learning strategies, but the quality of evidence was low, indicating uncertainty. Other comparisons showed inconsistent results. Based on our findings, simulation training appears to be an effective strategy to improve nurses' skills, but further good-quality RCTs with adequate sample sizes are needed. Copyright © 2017 Elsevier Ltd. All rights reserved.
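    The pooled effect in such a meta-analysis is typically an inverse-variance weighted average of the per-study standardized mean differences. A fixed-effect sketch with hypothetical effect sizes (not the six trials from this review, whose high heterogeneity would favour a random-effects model):

```python
def pooled_smd(smds, variances):
    """Inverse-variance (fixed-effect) pooled standardized mean
    difference and its 95% confidence interval."""
    weights = [1.0 / v for v in variances]       # weight = 1 / variance
    w_sum = sum(weights)
    est = sum(w * d for w, d in zip(weights, smds)) / w_sum
    se = (1.0 / w_sum) ** 0.5                    # standard error of estimate
    return est, (est - 1.96 * se, est + 1.96 * se)

# Hypothetical per-study effect sizes and variances, illustrative only
est, (lo, hi) = pooled_smd([-1.2, -0.8, -1.0], [0.04, 0.05, 0.03])
```

    A random-effects model would add a between-study variance component to each weight, widening the interval when heterogeneity (I²) is large.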

  2. ViscoSim Earthquake Simulator

    USGS Publications Warehouse

    Pollitz, Fred

    2012-01-01

    Synthetic seismicity simulations have been explored by the Southern California Earthquake Center (SCEC) Earthquake Simulators Group in order to guide long-term forecasting efforts related to the Unified California Earthquake Rupture Forecast (Tullis et al., 2012a). In this study I describe the viscoelastic earthquake simulator (ViscoSim) of Pollitz (2009). Recapitulating to a large extent material previously presented by Pollitz (2009, 2011), I describe its implementation of synthetic ruptures and how it differs from other simulators used by the group.

  3. Effect of simulation training on the development of nurses and nursing students' critical thinking: A systematic literature review.

    PubMed

    Adib-Hajbaghery, Mohsen; Sharifi, Najmeh

    2017-03-01

    To gain insight into the existing scientific evidence on the effect of simulation on critical thinking in nursing education. A systematic literature review of original research publications. In this systematic review, papers published from 1975 to 2015 in the English-language databases PubMed, Science Direct, ProQuest, ERIC, Google Scholar and Ovid, and the Farsi databases MagIran and SID, were reviewed by two independent researchers. Original research publications were eligible for review when they described a simulation program directed at nursing students and nurses; used a control group or a pretest post-test design; and gave information about the effects of simulation on critical thinking. Two reviewers independently assessed the studies for inclusion. Methodological quality of the included studies was also independently assessed by the reviewers, using a checklist developed by Greenhalgh et al. and the checklist of the Cochrane Center. Data related to the original publications were extracted by one reviewer and checked by a second reviewer. No statistical pooling of outcomes was performed, due to the large heterogeneity of outcomes. After screening the titles and abstracts of 787 papers, 16 were included in the review according to the inclusion criteria. These used experimental or quasi-experimental designs. The studies used a variety of instruments and a wide range of simulation methods with differences in duration and number of exposures to simulation. Eight of the studies reported that simulation training positively affected critical thinking skills. However, eight studies reported ineffectiveness of simulation on critical thinking. Studies are conflicting about the effect of simulation on nurses' and nursing students' critical thinking. Also, a large heterogeneity exists between the studies in terms of the instruments and the methods used. 
Thus, more studies with careful designs are needed to produce more credible evidence on the effectiveness of simulation on critical thinking. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Parallel Large-Scale Molecular Dynamics Simulation Opens New Perspective to Clarify the Effect of a Porous Structure on the Sintering Process of Ni/YSZ Multiparticles.

    PubMed

    Xu, Jingxiang; Higuchi, Yuji; Ozawa, Nobuki; Sato, Kazuhisa; Hashida, Toshiyuki; Kubo, Momoji

    2017-09-20

    Ni sintering in the Ni/YSZ porous anode of a solid oxide fuel cell changes the porous structure, leading to degradation. Preventing sintering and degradation during operation is a great challenge. Usually, a sintering molecular dynamics (MD) simulation model consisting of two particles on a substrate is used; however, such a model cannot reflect the effect of the porous structure on sintering. In our previous study, a multi-nanoparticle sintering modeling method with tens of thousands of atoms revealed the effect of the particle framework and porosity on sintering. However, that method cannot reveal the effect of the particle size on sintering or the effect of sintering on the change in the porous structure. In the present study, we report a strategy to reveal these effects in the porous structure by using our multi-nanoparticle modeling method and a parallel large-scale multimillion-atom MD simulator. We used this method to investigate the effect of YSZ particle size and tortuosity on sintering and degradation in Ni/YSZ anodes. Our parallel large-scale MD simulation showed that the sintering degree decreased as the YSZ particle size decreased. The gas fuel diffusion path, which reflects the overpotential, was blocked by pore coalescence during sintering. The degradation of gas diffusion performance increased as the YSZ particle size increased. Furthermore, the gas diffusion performance was quantified by a tortuosity parameter, and an optimal YSZ particle size, equal to that of Ni, was found for good diffusion after sintering. These findings cannot be obtained by previous MD sintering studies with tens of thousands of atoms. The present parallel large-scale multimillion-atom MD simulation makes it possible to clarify the effects of particle size and tortuosity on sintering and degradation.

  5. Using adaptive-mesh refinement in SCFT simulations of surfactant adsorption

    NASA Astrophysics Data System (ADS)

    Sides, Scott; Kumar, Rajeev; Jamroz, Ben; Crockett, Robert; Pletzer, Alex

    2013-03-01

    Adsorption of surfactants at interfaces is relevant to many applications such as detergents, adhesives, emulsions and ferrofluids. Atomistic simulations of interface adsorption are challenging due to the difficulty of modeling the wide range of length scales in these problems: the thin interface region in equilibrium with a large bulk region that serves as a reservoir for the adsorbed species. Self-consistent field theory (SCFT) has been extremely useful for studying the morphologies of dense block copolymer melts. Field-theoretic simulations such as these are able to access large length and time scales that are difficult or impossible for particle-based simulations such as molecular dynamics. However, even SCFT methods can be difficult to apply to systems in which small spatial regions might require finer resolution than most of the simulation grid (e.g., interface adsorption and confinement). We will present results on interface adsorption simulations using PolySwift++, an object-oriented polymer SCFT simulation code, aided by the Tech-X Chompst library, which enables block-structured AMR calculations via PETSc.

  6. Statewide mesoscopic simulation for Wyoming.

    DOT National Transportation Integrated Search

    2013-10-01

    This study developed a mesoscopic simulator which is capable of representing both city-level and statewide roadway networks. The key feature of such models is the integration of (i) a traffic flow model which is efficient enough to scale to larg...

  7. Statistical Analysis of Large Simulated Yield Datasets for Studying Climate Effects

    NASA Technical Reports Server (NTRS)

    Makowski, David; Asseng, Senthold; Ewert, Frank; Bassu, Simona; Durand, Jean-Louis; Martre, Pierre; Adam, Myriam; Aggarwal, Pramod K.; Angulo, Carlos; Baron, Christian; et al.

    2015-01-01

    Many studies have been carried out during the last decade to study the effect of climate change on crop yields and other key crop characteristics. In these studies, one or several crop models were used to simulate crop growth and development for different climate scenarios that correspond to different projections of atmospheric CO2 concentration, temperature, and rainfall changes (Semenov et al., 1996; Tubiello and Ewert, 2002; White et al., 2011). The Agricultural Model Intercomparison and Improvement Project (AgMIP; Rosenzweig et al., 2013) builds on these studies with the goal of using an ensemble of multiple crop models in order to assess effects of climate change scenarios for several crops in contrasting environments. These studies generate large datasets, including thousands of simulated crop yield data. They include series of yield values obtained by combining several crop models with different climate scenarios that are defined by several climatic variables (temperature, CO2, rainfall, etc.). Such datasets potentially provide useful information on the possible effects of different climate change scenarios on crop yields. However, it is sometimes difficult to analyze these datasets and to summarize them in a useful way due to their structural complexity; simulated yield data can differ among contrasting climate scenarios, sites, and crop models. Another issue is that it is not straightforward to extrapolate the results obtained for the scenarios to alternative climate change scenarios not initially included in the simulation protocols. Additional dynamic crop model simulations for new climate change scenarios are an option but this approach is costly, especially when a large number of crop models are used to generate the simulated data, as in AgMIP. 
Statistical models have been used to analyze responses of measured yield data to climate variables in past studies (Lobell et al., 2011), but the use of a statistical model to analyze yields simulated by complex process-based crop models is a rather new idea. We demonstrate here that statistical methods can play an important role in analyzing simulated yield datasets obtained from ensembles of process-based crop models. Formal statistical analysis is helpful to estimate the effects of different climatic variables on yield, and to describe the between-model variability of these effects.
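    The statistical-emulation idea described above can be sketched minimally: fit a linear response surface to an ensemble of simulated yields, then evaluate it at climate scenarios that were never simulated. The data below are synthetic placeholders, not AgMIP output:

```python
import numpy as np

# Hypothetical ensemble: yields produced by several crop-model runs under
# varying warming (dT, K) and CO2 concentration (ppm). Synthetic stand-ins.
rng = np.random.default_rng(0)
n = 200
dT = rng.uniform(0.0, 5.0, n)
co2 = rng.uniform(350.0, 700.0, n)
# "Simulated yield": declines with warming, rises with CO2, plus noise that
# stands in for between-model spread.
yields = 8.0 - 0.4 * dT + 0.004 * (co2 - 350.0) + rng.normal(0.0, 0.3, n)

# Fit yield ~ b0 + b1*dT + b2*(CO2 - 350) by ordinary least squares; the
# fitted surface can then be queried at scenarios outside the protocol.
X = np.column_stack([np.ones(n), dT, co2 - 350.0])
beta, *_ = np.linalg.lstsq(X, yields, rcond=None)
print("intercept, dT effect, CO2 effect:", np.round(beta, 3))
```

    A real analysis would add terms for site, crop model, and their interactions, which is how the between-model variability mentioned above is quantified.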

  8. Study of cosmic ray events with high muon multiplicity using the ALICE detector at the CERN Large Hadron Collider

    DOE PAGES

    Adam, J.

    2016-01-19

    ALICE is one of four large experiments at the CERN Large Hadron Collider near Geneva, specially designed to study particle production in ultra-relativistic heavy-ion collisions. Located 52 meters underground with 28 meters of overburden rock, it has also been used to detect muons produced by cosmic ray interactions in the upper atmosphere. Here, we present the multiplicity distribution of these atmospheric muons and its comparison with Monte Carlo simulations. Our analysis exploits the large size and excellent tracking capability of the ALICE Time Projection Chamber. A special emphasis is given to the study of high multiplicity events containing more than 100 reconstructed muons and corresponding to a muon areal density ρ_μ > 5.9 m^-2. Similar events have been studied in previous underground experiments such as ALEPH and DELPHI at LEP. While these experiments were able to reproduce the measured muon multiplicity distribution with Monte Carlo simulations at low and intermediate multiplicities, their simulations failed to describe the frequency of the highest multiplicity events. In this work we show that the high multiplicity events observed in ALICE stem from primary cosmic rays with energies above 10^16 eV and that the frequency of these events can be successfully described by assuming a heavy mass composition of primary cosmic rays in this energy range. Furthermore, the development of the resulting air showers was simulated using the latest version of QGSJET to model hadronic interactions. This observation places significant constraints on alternative, more exotic, production mechanisms for these events.

  9. Developing Present-day Proxy Cases Based on NARVAL Data for Investigating Low Level Cloud Responses to Future Climate Change.

    NASA Astrophysics Data System (ADS)

    Reilly, Stephanie

    2017-04-01

    The energy budget of the entire global climate is significantly influenced by the presence of boundary layer clouds. The main aim of the High Definition Clouds and Precipitation for Advancing Climate Prediction (HD(CP)2) project is to improve climate model predictions by means of process studies of clouds and precipitation. This study makes use of observed elevated moisture layers as a proxy of future changes in tropospheric humidity. The associated impact on radiative transfer triggers fast responses in boundary layer clouds, providing a framework for investigating this phenomenon. The investigation will be carried out using data gathered during the Next-generation Aircraft Remote-sensing for VALidation (NARVAL) South campaigns. Observational data will be combined with ECMWF reanalysis data to derive the large scale forcings for the Large Eddy Simulations (LES). Simulations will be generated for a range of elevated moisture layers, spanning a multi-dimensional phase space in depth, amplitude, elevation, and cloudiness. The NARVAL locations will function as anchor-points. The results of the large eddy simulations and the observations will be studied and compared in an attempt to determine how simulated boundary layer clouds react to changes in radiative transfer from the free troposphere. Preliminary LES results will be presented and discussed.

  10. Dynamic simulation of Static Var Compensators in distribution systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koessler, R.J.

    1992-08-01

    This paper is a system study guide for the correction of voltage dips due to large motor startups with Static Var Compensators (SVCs). The method utilizes time simulations, which are an important aid in the equipment design and specification. The paper illustrates the process of setting-up a computer model and performing time simulations. The study process is demonstrated through an example, the Shawnee feeder in the Niagara Mohawk Power Corporation service area.

  11. Simulating Daily and Sub-daily Water Flow in Large, Semi-arid Watershed Using SWAT: A Case Study of Nueces River Basin, Texas

    NASA Astrophysics Data System (ADS)

    Bassam, S.; Ren, J.

    2015-12-01

    Runoff generated during heavy rainfall imposes quick, but often intense, changes in the flow of streams, which increase the chance of flash floods in the vicinity of the streams. Understanding the temporal response of streams to heavy rainfall requires a hydrological model that considers meteorological, hydrological, and geological components of the streams and their watersheds. SWAT is a physically-based, semi-distributed model that is capable of simulating water flow within watersheds at both long-term (annual and monthly) and short-term (daily and sub-daily) time scales. However, the capability of SWAT in sub-daily water flow modeling within large watersheds has not been studied much, compared to long-term and daily time scales. In this study we investigate the water flow in a large, semi-arid watershed, the Nueces River Basin (NRB), with a drainage area of 16,950 mi² located in South Texas, at daily and sub-daily time scales. The objectives of this study are: (1) simulating the response of streams to heavy, and often quick, rainfall, (2) evaluating SWAT performance in sub-daily modeling of water flow within a large watershed, and (3) examining means for model performance improvement during model calibration and verification based on results of sensitivity and uncertainty analysis. The results of this study can provide important information for water resources planning during flood seasons.

  12. Lewis Research Center studies of multiple large wind turbine generators on a utility network

    NASA Technical Reports Server (NTRS)

    Gilbert, L. J.; Triezenberg, D. M.

    1979-01-01

    A NASA-Lewis program to study the anticipated performance of a wind turbine generator farm on an electric utility network is surveyed. The paper describes the approach of the Lewis Wind Energy Project Office to developing analysis capabilities in the area of wind turbine generator-utility network computer simulations. Attention is given to areas such as the Lewis-Purdue hybrid simulation, an independent stability study, the DOE multiunit plant study, and the WEST simulator. Also covered are the Lewis Mod-2 simulation, including an analog simulation of a two-wind-turbine system and comparison with Boeing simulation results, and the gust response of a two-machine model. Finally, future work is noted, and it is concluded that the study shows little interaction between the generators and between the generators and the bus.

  13. Numerical Studies of Flow Past Two Side-by-Side Circular Cylinders

    NASA Astrophysics Data System (ADS)

    Shao, J.; Zhang, C.

    Multiple circular cylindrical configurations are widely used in engineering applications. The fluid dynamics of the flow around two identical circular cylinders in side-by-side arrangement has been investigated by both experiments and numerical simulations. The center-to-center transverse pitch ratio T/D plays an important role in determining the flow features. It is observed that for 1 < T/D < 1.1 to 1.2, a single vortex street is formed; for 1.2 < T/D < 2 to 2.2, bi-stable narrow and wide wakes are formed; for 2.7 < T/D < 4 or 5, anti-phase or in-phase vortex streets are formed. In the current study, the vortex structures of turbulent flows past two slightly heated side-by-side circular cylinders are investigated employing large eddy simulation (LES). Simulations are performed using a commercial CFD software package, FLUENT. The Smagorinsky-Lilly subgrid-scale model is employed for the large eddy simulation. The Reynolds number based on free-stream velocity and cylinder diameter is 5800, which is in the subcritical regime. The transverse pitch ratio T/D = 3 is investigated. Laminar boundary layers, transition in the shear layer, flow separation, large vortex structures and flow interference in the wake are all involved in the flow. Such complex flow features make the current study a challenging task. Both the flow field and the temperature field are investigated. The calculated results are analyzed and compared with experimental data. The simulation results are qualitatively in accordance with experimental observations. Two anti-phase vortex streets are obtained by the large-eddy simulation, which agrees with the experimental observation. At this transverse pitch ratio, the two cylinders behave as independent, isolated single cylinders in cross flow. The time-averaged streamwise velocity and temperature at x/D = 10 are in good agreement with the experimental data. Figure 1 displays the instantaneous spanwise vorticity at the center plane.

  14. Parameter studies on the energy balance closure problem using large-eddy simulation

    NASA Astrophysics Data System (ADS)

    De Roo, Frederik; Banerjee, Tirtha; Mauder, Matthias

    2017-04-01

    The imbalance of the surface energy budget in eddy-covariance measurements is still an open problem. A possible cause is the presence of land surface heterogeneity. Heterogeneities of the boundary layer scale or larger are most effective in influencing the boundary layer turbulence, and large-eddy simulations have shown that secondary circulations within the boundary layer can affect the surface energy budget. However, the precise influence of the surface characteristics on the energy imbalance and its partitioning is still unknown. To investigate the influence of surface variables on all the components of the flux budget under convective conditions, we set up a systematic parameter study by means of large-eddy simulation. For the study we use a virtual control volume approach, and we focus on idealized heterogeneity by considering spatially variable surface fluxes. The surface fluxes vary locally in intensity, and these patches have different length scales. The main focus lies on heterogeneities at the kilometer scale and one decade smaller. For each simulation, virtual measurement towers are positioned at functionally different positions. We discriminate between locally homogeneous towers, located within land-use patches, and more heterogeneous towers, and find, among other things, that the flux divergence and the advection are strongly linearly related within each class. Furthermore, we seek correlators for the energy balance ratio and the energy residual in the simulations. Besides the expected correlation with measurable atmospheric quantities such as the friction velocity, boundary-layer depth and temperature and moisture gradients, we have also found an unexpected correlation with the difference between sonic temperature and surface temperature. In additional simulations with a large number of virtual towers, we investigate higher-order correlations, which can be linked to secondary circulations. 
In a companion presentation (EGU2017-2130) these correlations are investigated and confirmed with the help of micrometeorological measurements from the TERENO sites where the effects of landscape scale surface heterogeneities are deemed to be important.

  15. Enhanced nonlinearity interval mapping scheme for high-performance simulation-optimization of watershed-scale BMP placement

    NASA Astrophysics Data System (ADS)

    Zou, Rui; Riverson, John; Liu, Yong; Murphy, Ryan; Sim, Youn

    2015-03-01

    Integrated continuous simulation-optimization models can be effective predictors of process-based responses for cost-benefit optimization of best management practice (BMP) selection and placement. However, practical application of simulation-optimization models is computationally prohibitive for large-scale systems. This study proposes an enhanced Nonlinearity Interval Mapping Scheme (NIMS) to solve large-scale watershed simulation-optimization problems several orders of magnitude faster than other commonly used algorithms. An efficient interval response coefficient (IRC) derivation method was incorporated into the NIMS framework to overcome a computational bottleneck. The proposed algorithm was evaluated using a case study watershed in the Los Angeles County Flood Control District. Using a continuous simulation watershed/stream-transport model, Loading Simulation Program in C++ (LSPC), three nested in-stream compliance points (CPs), each with multiple Total Maximum Daily Load (TMDL) targets, were selected to derive optimal treatment levels for each of the 28 subwatersheds, so that the TMDL targets at all the CPs were met at the lowest possible BMP implementation cost. A Genetic Algorithm (GA) and NIMS were both applied and compared. The results showed that NIMS took 11 iterations (about 11 min) to complete, with the resulting optimal solution having a total cost of 67.2 million, while each of the multiple GA executions took 21-38 days to reach near-optimal solutions. The best solution obtained among all the GA executions had a minimized cost of 67.7 million, marginally higher but approximately equal to that of the NIMS solution. The results highlight the utility for decision making in large-scale watershed simulation-optimization formulations.

  16. Simulating the large-scale structure of HI intensity maps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seehars, Sebastian; Paranjape, Aseem; Witzemann, Amadeus

    Intensity mapping of neutral hydrogen (HI) is a promising observational probe of cosmology and large-scale structure. We present wide-field simulations of HI intensity maps based on N-body simulations of a 2.6 Gpc/h box with 2048^3 particles (particle mass 1.6 × 10^11 M_⊙/h). Using a conditional mass function to populate the simulated dark matter density field with halos below the mass resolution of the simulation (10^8 M_⊙/h < M_halo < 10^13 M_⊙/h), we assign HI to those halos according to a phenomenological halo-to-HI mass relation. The simulations span a redshift range of 0.35 ≲ z ≲ 0.9 in redshift bins of width Δz ≈ 0.05 and cover a quarter of the sky at an angular resolution of about 7'. We use the simulated intensity maps to study the impact of non-linear effects and redshift space distortions on the angular clustering of HI. Focusing on the autocorrelations of the maps, we apply and compare several estimators for the angular power spectrum and its covariance. We verify that these estimators agree with analytic predictions on large scales and study the validity of approximations based on Gaussian random fields, particularly in the context of the covariance. We discuss how our results and the simulated maps can be useful for planning and interpreting future HI intensity mapping surveys.
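    The power-spectrum estimation described above can be illustrated with a flat-sky toy version: generate a Gaussian random field with a known spectrum, then recover its slope by azimuthally binning |FFT|². Grid size, spectrum and binning below are arbitrary choices, not the paper's setup:

```python
import numpy as np

# Toy flat-sky power-spectrum estimator: draw a Gaussian random field with
# P(k) ~ k^-2, then recover the slope from annular averages of |FFT|^2.
n = 256
rng = np.random.default_rng(3)
kx = np.fft.fftfreq(n)[:, None]
ky = np.fft.fftfreq(n)[None, :]
k = np.hypot(kx, ky)
k[0, 0] = 1.0                 # placeholder to avoid division by zero
p_true = k ** -2.0
p_true[0, 0] = 0.0            # no power in the mean mode

# Realize the field (taking the real part halves the power, not the slope)
modes = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) * np.sqrt(p_true / 2)
field = np.fft.ifft2(modes).real

# Estimator: average |FFT|^2 of the map in annular k-bins
power = np.abs(np.fft.fft2(field)) ** 2
bins = np.logspace(np.log10(2.0 / n), np.log10(0.4), 10)
centers, est = [], []
for lo, hi in zip(bins[:-1], bins[1:]):
    sel = (k >= lo) & (k < hi)
    centers.append(k[sel].mean())
    est.append(power[sel].mean())
slope = np.polyfit(np.log(centers), np.log(est), 1)[0]
print(f"recovered spectral slope: {slope:.2f} (input slope: -2)")
```

    Curved-sky estimators over maps like those in the paper follow the same average-the-squared-harmonic-coefficients logic, with spherical harmonics in place of the FFT.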

  17. Simulating large-scale pedestrian movement using CA and event driven model: Methodology and case study

    NASA Astrophysics Data System (ADS)

    Li, Jun; Fu, Siyao; He, Haibo; Jia, Hongfei; Li, Yanzhong; Guo, Yi

    2015-11-01

    Large-scale regional evacuation is an important part of national security emergency response plans. Emergency evacuation of large commercial shopping areas, as typical service systems, is an active research topic. A systematic methodology based on Cellular Automata with a Dynamic Floor Field and an event-driven model is proposed, and the methodology is examined in the context of a case study involving evacuation within a commercial shopping mall. Pedestrian movement is based on Cellular Automata and the event-driven model. In this paper, the event-driven model is adopted to simulate pedestrian movement patterns, and the simulation process is divided into a normal situation and emergency evacuation. The model is composed of four layers: an environment layer, a customer layer, a clerk layer and a trajectory layer. In simulating the movement routes of pedestrians, the model takes into account the purchase intentions of customers and the density of pedestrians. Based on the evacuation model combining Cellular Automata with a Dynamic Floor Field and the event-driven model, we can reflect the behavioral characteristics of customers and clerks in both normal and emergency situations. The distribution of individual evacuation times as a function of initial position and the dynamics of the evacuation process are studied. Our results indicate that the evacuation model using the combination of Cellular Automata with a Dynamic Floor Field and event-driven scheduling can be used to simulate the evacuation of pedestrian flows in indoor areas with complicated surroundings and to investigate the layout of shopping malls.
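    The floor-field CA idea above can be sketched in a few lines: a static field stores each cell's distance to the exit, and agents step toward decreasing field values. Grid size, exit location and agent count below are made-up, and the paper's four-layer customer/clerk structure is not reproduced:

```python
import random
from collections import deque

# Minimal floor-field cellular automaton for evacuation (illustrative only).
W, H = 10, 10
exit_cell = (0, 5)

# Static floor field: breadth-first-search distance from the exit.
field = {exit_cell: 0}
queue = deque([exit_cell])
while queue:
    x, y = queue.popleft()
    for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nb[0] < W and 0 <= nb[1] < H and nb not in field:
            field[nb] = field[(x, y)] + 1
            queue.append(nb)

# Each step, every agent moves to the free neighbour with the smallest
# field value; an agent stepping onto the exit cell leaves the building.
random.seed(1)
agents = set(random.sample(sorted(c for c in field if c != exit_cell), 20))
steps = 0
while agents:
    steps += 1
    next_agents = set()
    for a in sorted(agents, key=field.get):  # agents nearest the exit move first
        x, y = a
        nbrs = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        free = [c for c in nbrs
                if c in field and c not in agents and c not in next_agents]
        if not free:
            next_agents.add(a)  # blocked: stay put this step
            continue
        best = min(free, key=lambda c: (field[c], random.random()))
        if best != exit_cell:
            next_agents.add(best)  # move closer; exiting agents just vanish
    agents = next_agents
print(f"all 20 agents evacuated in {steps} time steps")
```

    A dynamic floor field would add a second, decaying layer recording where agents have recently walked, so that followers are attracted to trails; the static field alone already reproduces queueing at the exit.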

  18. Topology of Large-Scale Structure by Galaxy Type: Hydrodynamic Simulations

    NASA Astrophysics Data System (ADS)

    Gott, J. Richard, III; Cen, Renyue; Ostriker, Jeremiah P.

    1996-07-01

    The topology of large-scale structure is studied as a function of galaxy type using the genus statistic. In hydrodynamical cosmological cold dark matter simulations, galaxies form on caustic surfaces (Zeldovich pancakes) and then slowly drain onto filaments and clusters. The earliest forming galaxies in the simulations (defined as "ellipticals") are thus seen at the present epoch preferentially in clusters (tending toward a meatball topology), while the latest forming galaxies (defined as "spirals") are seen currently in a spongelike topology. The topology is measured by the genus (number of "doughnut" holes minus number of isolated regions) of the smoothed density-contour surfaces. The measured genus curve for all galaxies as a function of density obeys approximately the theoretical curve expected for random-phase initial conditions, but the early-forming elliptical galaxies show a shift toward a meatball topology relative to the late-forming spirals. Simulations using standard biasing schemes fail to show such an effect. Large observational samples separated by galaxy type could be used to test for this effect.
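    A minimal 2D analogue of the genus curve is the Euler characteristic (isolated regions minus holes) of density excursion sets swept over thresholds; the 3D genus used in the paper is the volumetric version of the same idea. The field below is a smoothed random field, not simulation output:

```python
import numpy as np

def euler_characteristic(mask):
    """chi = V - E + F for the union of occupied pixels as closed unit
    squares: connected components minus holes."""
    p = np.pad(mask.astype(bool), 1)
    F = p.sum()
    E = (p[:-1, :] | p[1:, :]).sum() + (p[:, :-1] | p[:, 1:]).sum()
    V = (p[:-1, :-1] | p[:-1, 1:] | p[1:, :-1] | p[1:, 1:]).sum()
    return int(V) - int(E) + int(F)

# Smoothed Gaussian random field as a stand-in for the density contrast
rng = np.random.default_rng(2)
noise = rng.normal(size=(128, 128))
kernel = np.exp(-0.5 * (np.arange(-8, 9) / 3.0) ** 2)
kernel /= kernel.sum()
field = np.apply_along_axis(lambda r: np.convolve(r, kernel, "same"), 0, noise)
field = np.apply_along_axis(lambda r: np.convolve(r, kernel, "same"), 1, field)
field = (field - field.mean()) / field.std()

# Sweep thresholds: meatballs (chi > 0) at high nu, holes (chi < 0) at low nu
for nu in (-1.0, 0.0, 1.0):
    print(f"threshold nu = {nu:+.1f}: chi = {euler_characteristic(field > nu)}")
```

    For random-phase initial conditions the curve is antisymmetric about the median threshold, which is the baseline the paper's elliptical/spiral shifts are measured against.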

  19. Large-eddy simulation of subtropical cloud-topped boundary layers: 1. A forcing framework with closed surface energy balance

    NASA Astrophysics Data System (ADS)

    Tan, Zhihong; Schneider, Tapio; Teixeira, João; Pressel, Kyle G.

    2016-12-01

    Large-eddy simulation (LES) of clouds has the potential to resolve a central question in climate dynamics, namely, how subtropical marine boundary layer (MBL) clouds respond to global warming. However, large-scale processes need to be prescribed or represented parametrically in the limited-area LES domains. It is important that the representation of large-scale processes satisfies constraints such as a closed energy balance in a manner that is realizable under climate change. For example, LES with fixed sea surface temperatures usually do not close the surface energy balance, potentially leading to spurious surface fluxes and cloud responses to climate change. Here a framework for forcing LES of subtropical MBL clouds is presented that enforces a closed surface energy balance by coupling atmospheric LES to an ocean mixed layer with a sea surface temperature (SST) that depends on radiative fluxes and sensible and latent heat fluxes at the surface. A variety of subtropical MBL cloud regimes (stratocumulus, cumulus, and stratocumulus over cumulus) are simulated successfully within this framework. However, unlike in conventional frameworks with fixed SST, feedbacks between cloud cover and SST arise, which can lead to sudden transitions between cloud regimes (e.g., stratocumulus to cumulus) as forcing parameters are varied. The simulations validate this framework for studies of MBL clouds and establish its usefulness for studies of how the clouds respond to climate change.
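    The slab-ocean coupling described above amounts to integrating a mixed-layer energy budget, dSST/dt = (R_net - SHF - LHF) / (rho_w * c_w * h). A sketch with placeholder flux values and mixed-layer depth, not numbers from the paper's simulations:

```python
# Slab-ocean SST update that closes the surface energy balance.
rho_w = 1000.0   # sea water density, kg/m^3
c_w = 4186.0     # specific heat of water, J/(kg K)
h = 50.0         # mixed-layer depth, m (placeholder)
sst = 290.0      # initial SST, K
dt = 3600.0      # time step, s

for hour in range(24):
    net_rad = 420.0         # net absorbed radiation at the surface, W/m^2
    shf, lhf = 15.0, 100.0  # sensible and latent heat fluxes, W/m^2
    # Any surface imbalance warms or cools the slab instead of vanishing
    sst += dt * (net_rad - shf - lhf) / (rho_w * c_w * h)

print(f"SST after one day: {sst:.3f} K")  # warms by ~0.13 K
```

    In the coupled framework the fluxes would themselves be diagnosed from the LES each step, which is what allows the cloud-SST feedbacks and regime transitions noted above.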

  20. Beam-Beam Study on the Upgrade of Beijing Electron Positron Collider

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, S.; /Beijing, Inst. High Energy Phys.; Cai, Y.

    2006-02-10

    Studying the beam-beam interaction is an important issue in the design and performance of a high-luminosity collider such as BEPCII, the upgrade of the Beijing Electron Positron Collider. Weak-strong simulation is generally used during the design of a collider. To perform a large-scale tune scan, weak-strong simulation studies of the beam-beam interaction were carried out, with geometric effects taken into account. Strong-strong simulation studies were done to investigate the luminosity goal and the dependence of the luminosity on the beam parameters.

  1. Large-Eddy Simulation of Propeller Crashback

    NASA Astrophysics Data System (ADS)

    Kumar, Praveen; Mahesh, Krishnan

    2013-11-01

    Crashback is an operating condition to quickly stop a propelled vehicle, where the propeller is rotated in the reverse direction to yield negative thrust. The crashback condition is dominated by the interaction of the free-stream flow with strong reverse flow. Crashback causes highly unsteady loads and flow separation on the blade surfaces. This study uses Large-Eddy Simulation to predict the highly unsteady flow field in propeller crashback. Results are shown for a stand-alone open propeller, a hull-attached open propeller and a ducted propeller. The simulations are compared to experiment and used to discuss the essential physics behind the unsteady loads. This work is supported by the Office of Naval Research.

  2. An ensemble constrained variational analysis of atmospheric forcing data and its application to evaluate clouds in CAM5: Ensemble 3DCVA and Its Application

    DOE PAGES

    Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng

    2016-01-05

    Large-scale atmospheric forcing data can greatly impact the simulations of atmospheric process models, including Large Eddy Simulations (LES), Cloud Resolving Models (CRMs) and Single-Column Models (SCMs), and impact the development of physical parameterizations in global climate models. This study describes the development of an ensemble variationally constrained objective analysis of atmospheric large-scale forcing data and its application to evaluate the cloud biases in the Community Atmosphere Model (CAM5). Sensitivities of the variational objective analysis to background data, the error covariance matrix and constraint variables are described and used to quantify the uncertainties in the large-scale forcing data. Application of the ensemble forcing in the CAM5 SCM during the March 2000 intensive operational period (IOP) at the Southern Great Plains (SGP) site of the Atmospheric Radiation Measurement (ARM) program shows systematic biases in the model simulations that cannot be explained by the uncertainty of the large-scale forcing data, which points to deficiencies in the physical parameterizations. The SCM is shown to overestimate high clouds and underestimate low clouds. These biases are found to also exist in the global simulation of CAM5 when it is compared with satellite data.

  3. An ensemble constrained variational analysis of atmospheric forcing data and its application to evaluate clouds in CAM5: Ensemble 3DCVA and Its Application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng

    Large-scale atmospheric forcing data can greatly impact the simulations of atmospheric process models including Large Eddy Simulations (LES), Cloud Resolving Models (CRMs) and Single-Column Models (SCMs), and impact the development of physical parameterizations in global climate models. This study describes the development of an ensemble variationally constrained objective analysis of atmospheric large-scale forcing data and its application to evaluate the cloud biases in the Community Atmospheric Model (CAM5). Sensitivities of the variational objective analysis to background data, error covariance matrix and constraint variables are described and used to quantify the uncertainties in the large-scale forcing data. Application of the ensemble forcing in the CAM5 SCM during the March 2000 intensive operational period (IOP) at the Southern Great Plains (SGP) of the Atmospheric Radiation Measurement (ARM) program shows systematic biases in the model simulations that cannot be explained by the uncertainty of large-scale forcing data, which points to the deficiencies of physical parameterizations. The SCM is shown to overestimate high clouds and underestimate low clouds. These biases are found to also exist in the global simulation of CAM5 when it is compared with satellite data.

  4. Acceleration techniques for dependability simulation. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Barnette, James David

    1995-01-01

    As computer systems increase in complexity, the need to project system performance from the earliest design and development stages increases. Simulation must be employed for detailed dependability studies of large systems. However, as the complexity of the simulation model increases, the time required to obtain statistically significant results also increases. This paper discusses an acceleration approach that is application-independent and can be readily applied to any process-based simulation model. Topics include background on classical discrete-event simulation and techniques for random variate generation and statistics gathering to support simulation.
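
    The building blocks named above, event-driven time advance and random variate generation, can be sketched in a few lines. The following toy M/M/1 queue simulation is illustrative only (none of the names or parameters come from the thesis); it uses inverse-transform sampling for the exponential variates:

```python
import math
import random

def exponential_variate(rng, rate):
    """Inverse-transform sampling: if U ~ Uniform(0,1), then
    -ln(U)/rate is exponentially distributed with the given rate."""
    return -math.log(rng.random()) / rate

def simulate_mm1(arrival_rate, service_rate, num_customers, seed=0):
    """Minimal discrete-event simulation of an M/M/1 queue.
    Returns the mean time customers spend in the system."""
    rng = random.Random(seed)
    clock = 0.0            # time of the current arrival event
    server_free_at = 0.0   # time at which the server next becomes idle
    total_sojourn = 0.0
    for _ in range(num_customers):
        clock += exponential_variate(rng, arrival_rate)  # next arrival
        start = max(clock, server_free_at)               # wait if busy
        service = exponential_variate(rng, service_rate)
        server_free_at = start + service
        total_sojourn += server_free_at - clock
    return total_sojourn / num_customers
```

    For an M/M/1 queue the analytic mean sojourn time is 1/(μ−λ), which gives a quick sanity check on the statistics-gathering side.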

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng

    Large-scale forcing data, such as vertical velocity and advective tendencies, are required to drive single-column models (SCMs), cloud-resolving models, and large-eddy simulations. Previous studies suggest that some errors of these model simulations could be attributed to the lack of spatial variability in the specified domain-mean large-scale forcing. This study investigates the spatial variability of the forcing and explores its impact on SCM-simulated precipitation and clouds. Gridded large-scale forcing data during the March 2000 Cloud Intensive Operational Period at the Atmospheric Radiation Measurement program's Southern Great Plains site are used for analysis and to drive the single-column version of the Community Atmospheric Model Version 5 (SCAM5). When the gridded forcing data show large spatial variability, such as during a frontal passage, SCAM5 with the domain-mean forcing is not able to capture the convective systems that are partly located in the domain or that only occupy part of the domain. This problem is largely reduced by using the gridded forcing data, which allow running SCAM5 in each subcolumn and then averaging the results within the domain, because the subcolumns have a better chance of capturing the timing of the frontal propagation and the small-scale systems. Finally, other potential uses of the gridded forcing data, such as understanding and testing scale-aware parameterizations, are also discussed.

  6. Flow-induced vibration analysis of a helical coil steam generator experiment using large eddy simulation

    DOE PAGES

    Yuan, Haomin; Solberg, Jerome; Merzari, Elia; ...

    2017-08-01

    This paper describes a numerical study of flow-induced vibration in a helical coil steam generator experiment conducted at Argonne National Laboratory in the 1980s. In the experiment, a half-scale sector model of a steam generator helical coil tube bank was subjected to still and flowing air and water, and the vibrational characteristics were recorded. The research detailed in this document utilizes the multi-physics simulation toolkit SHARP, developed at Argonne National Laboratory in cooperation with Lawrence Livermore National Laboratory, to simulate the experiment. SHARP uses the spectral element code Nek5000 for fluid dynamics analysis and the finite element code DIABLO for structural analysis. The flow around the coil tubes is modeled in Nek5000 using a large eddy simulation turbulence model. Transient pressure data on the tube surfaces are sampled and transferred to DIABLO for the structural simulation. The structural response is simulated in DIABLO via an implicit time-marching algorithm and a combination of continuum elements and structural shells. Tube vibration data (acceleration and frequency) are sampled and compared with the experimental data. Currently, only one-way coupling is used: pressure loads from the fluid simulation are transferred to the structural simulation, but the resulting structural displacements are not fed back to the fluid simulation.

  7. Flow-induced vibration analysis of a helical coil steam generator experiment using large eddy simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yuan, Haomin; Solberg, Jerome; Merzari, Elia

    This paper describes a numerical study of flow-induced vibration in a helical coil steam generator experiment conducted at Argonne National Laboratory in the 1980s. In the experiment, a half-scale sector model of a steam generator helical coil tube bank was subjected to still and flowing air and water, and the vibrational characteristics were recorded. The research detailed in this document utilizes the multi-physics simulation toolkit SHARP, developed at Argonne National Laboratory in cooperation with Lawrence Livermore National Laboratory, to simulate the experiment. SHARP uses the spectral element code Nek5000 for fluid dynamics analysis and the finite element code DIABLO for structural analysis. The flow around the coil tubes is modeled in Nek5000 using a large eddy simulation turbulence model. Transient pressure data on the tube surfaces are sampled and transferred to DIABLO for the structural simulation. The structural response is simulated in DIABLO via an implicit time-marching algorithm and a combination of continuum elements and structural shells. Tube vibration data (acceleration and frequency) are sampled and compared with the experimental data. Currently, only one-way coupling is used: pressure loads from the fluid simulation are transferred to the structural simulation, but the resulting structural displacements are not fed back to the fluid simulation.

  8. Numerical Modeling Studies of Wake Vortices: Real Case Simulations

    NASA Technical Reports Server (NTRS)

    Shen, Shao-Hua; Ding, Feng; Han, Jongil; Lin, Yuh-Lang; Arya, S. Pal; Proctor, Fred H.

    1999-01-01

    A three-dimensional large-eddy simulation model, TASS, is used to simulate the behavior of aircraft wake vortices in a real atmosphere. The purpose of this study is to validate the use of TASS for simulating the decay and transport of wake vortices. Three simulations are performed and the results are compared with the observed data from the 1994-1995 Memphis field experiments. The selected cases have an atmospheric environment of weak turbulence and stable stratification. The model simulations are initialized with appropriate meteorological conditions and a post roll-up vortex system. The behavior of wake vortices as they descend within the atmospheric boundary layer and interact with the ground is discussed.

  9. A Virtual Mixture Approach to the Study of Multistate Equilibrium: Application to Constant pH Simulation in Explicit Water

    PubMed Central

    Wu, Xiongwu; Brooks, Bernard R.

    2015-01-01

    Chemical and thermodynamic equilibrium of multiple states is a fundamental phenomenon in biological systems and has been the focus of many experimental and computational studies. This work presents a simulation method to directly study the equilibrium of multiple states. The method constructs a virtual mixture of multiple states (VMMS) to sample the conformational space of all chemical states simultaneously. The VMMS system consists of multiple subsystems, one for each state; each subsystem contains a solute and a solvent environment. The solute molecules in all subsystems share the same conformation but have their own solvent environments. Transition between states is represented by changes in their molar fractions. Simulation of a VMMS system allows efficient calculation of the relative free energies of all states, which in turn determine their equilibrium molar fractions. For systems with a large number of state-transition sites, an implicit-site approximation is introduced to minimize the cost of simulation. A direct application of the VMMS method is constant pH simulation to study protonation equilibrium. Applying the VMMS method to a heptapeptide with 3 ionizable residues, we calculated the pKas of those residues both with all explicit states and with implicit sites and obtained consistent results. For mouse epidermal growth factor, with 9 ionizable groups, our VMMS simulations with implicit sites produced pKas of all 9 ionizable groups, and the results agree qualitatively with NMR measurements. This example demonstrates that the VMMS method can be applied to systems with a large number of ionizable groups and that the computational cost scales linearly with the number of ionizable groups. For one of the most challenging systems in constant pH calculation, SNase Δ+PHS/V66K, our VMMS simulation shows that it is state-dependent water penetration that causes the large deviation in lysine66’s pKa. PMID:26506245

  10. A Virtual Mixture Approach to the Study of Multistate Equilibrium: Application to Constant pH Simulation in Explicit Water.

    PubMed

    Wu, Xiongwu; Brooks, Bernard R

    2015-10-01

    Chemical and thermodynamic equilibrium of multiple states is a fundamental phenomenon in biological systems and has been the focus of many experimental and computational studies. This work presents a simulation method to directly study the equilibrium of multiple states. The method constructs a virtual mixture of multiple states (VMMS) to sample the conformational space of all chemical states simultaneously. The VMMS system consists of multiple subsystems, one for each state; each subsystem contains a solute and a solvent environment. The solute molecules in all subsystems share the same conformation but have their own solvent environments. Transition between states is represented by changes in their molar fractions. Simulation of a VMMS system allows efficient calculation of the relative free energies of all states, which in turn determine their equilibrium molar fractions. For systems with a large number of state-transition sites, an implicit-site approximation is introduced to minimize the cost of simulation. A direct application of the VMMS method is constant pH simulation to study protonation equilibrium. Applying the VMMS method to a heptapeptide with 3 ionizable residues, we calculated the pKas of those residues both with all explicit states and with implicit sites and obtained consistent results. For mouse epidermal growth factor, with 9 ionizable groups, our VMMS simulations with implicit sites produced pKas of all 9 ionizable groups, and the results agree qualitatively with NMR measurements. This example demonstrates that the VMMS method can be applied to systems with a large number of ionizable groups and that the computational cost scales linearly with the number of ionizable groups. For one of the most challenging systems in constant pH calculation, SNase Δ+PHS/V66K, our VMMS simulation shows that it is state-dependent water penetration that causes the large deviation in lysine66's pKa.
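
    The link between relative free energies and equilibrium molar fractions that VMMS exploits is ordinary Boltzmann weighting. A minimal sketch follows; the function names are illustrative and the two-state Henderson-Hasselbalch helper is standard textbook chemistry, not part of the VMMS code itself:

```python
import math

def molar_fractions(free_energies_kcal, temperature=300.0):
    """Boltzmann-weighted equilibrium molar fractions of multiple
    states, given their relative free energies in kcal/mol."""
    R = 0.0019872  # gas constant in kcal/(mol K)
    weights = [math.exp(-g / (R * temperature)) for g in free_energies_kcal]
    z = sum(weights)  # partition function over the virtual mixture
    return [w / z for w in weights]

def protonated_fraction(ph, pka):
    """Henderson-Hasselbalch: fraction of an ionizable site that is
    protonated at a given pH (two-state special case)."""
    return 1.0 / (1.0 + 10.0 ** (ph - pka))
```

    Two states of equal free energy thus occupy equal molar fractions, and a site exactly at its pKa is protonated half the time.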

  11. A Spiking Neural Simulator Integrating Event-Driven and Time-Driven Computation Schemes Using Parallel CPU-GPU Co-Processing: A Case Study.

    PubMed

    Naveros, Francisco; Luque, Niceto R; Garrido, Jesús A; Carrillo, Richard R; Anguita, Mancia; Ros, Eduardo

    2015-07-01

    Time-driven simulation methods in traditional CPU architectures perform well and precisely when simulating small-scale spiking neural networks. Nevertheless, they still have drawbacks when simulating large-scale systems. Conversely, event-driven simulation methods in CPUs and time-driven simulation methods in graphic processing units (GPUs) can outperform CPU time-driven methods under certain conditions. With this performance improvement in mind, we have developed an event-and-time-driven spiking neural network simulator suitable for a hybrid CPU-GPU platform. Our neural simulator is able to efficiently simulate bio-inspired spiking neural networks consisting of different neural models, which can be distributed heterogeneously in both small layers and large layers or subsystems. For the sake of efficiency, the low-activity parts of the neural network can be simulated in CPU using event-driven methods while the high-activity subsystems can be simulated in either CPU (a few neurons) or GPU (thousands or millions of neurons) using time-driven methods. In this brief, we have undertaken a comparative study of these different simulation methods. For benchmarking the different simulation methods and platforms, we have used a cerebellar-inspired neural-network model consisting of a very dense granular layer and a Purkinje layer with a smaller number of cells (according to biological ratios). Thus, this cerebellar-like network includes a dense diverging neural layer (increasing the dimensionality of its internal representation and sparse coding) and a converging neural layer (integration) similar to many other biologically inspired and also artificial neural networks.
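
    The trade-off between the two schemes can be illustrated with a single leaky integrate-and-fire neuron (a toy model only; the simulator described above handles full heterogeneous networks and GPU execution). The time-driven version pays a cost at every step, while the event-driven version jumps between input spikes using the closed-form membrane decay. All constants here are illustrative:

```python
import math

TAU = 20.0   # membrane time constant (ms)
V_TH = 1.0   # spike threshold
W = 0.3      # synaptic weight per input spike

def time_driven(input_spikes, t_end, dt=0.1):
    """Advance the membrane in fixed steps, decaying exponentially
    and adding weight W for each input spike falling in the step."""
    v, out, spikes, i, t = 0.0, [], sorted(input_spikes), 0, 0.0
    while t < t_end:
        v *= math.exp(-dt / TAU)
        while i < len(spikes) and spikes[i] <= t + dt:
            v += W
            i += 1
        if v >= V_TH:
            out.append(t + dt)
            v = 0.0
        t += dt
    return out

def event_driven(input_spikes, t_end):
    """Jump directly between input events, applying the closed-form
    decay exp(-dt/tau); no work is done between spikes."""
    v, out, t_prev = 0.0, [], 0.0
    for t in sorted(input_spikes):
        if t > t_end:
            break
        v = v * math.exp(-(t - t_prev) / TAU) + W
        t_prev = t
        if v >= V_TH:
            out.append(t)
            v = 0.0
    return out
```

    Both versions agree on when the neuron fires (up to the time step dt), which is why a hybrid simulator can route low-activity neurons to the cheap event-driven path.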

  12. Large-Eddy Simulation of Wind-Plant Aerodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Churchfield, M. J.; Lee, S.; Moriarty, P. J.

    In this work, we present results of a large-eddy simulation of the 48 multi-megawatt turbines composing the Lillgrund wind plant. Turbulent inflow wind is created by performing an atmospheric boundary layer precursor simulation, and turbines are modeled using a rotating, variable-speed actuator line representation. The motivation for this work is that few others have performed large-eddy simulations of wind plants with a substantial number of turbines, and the methods for carrying out such simulations vary. We wish to draw upon the strengths of the existing simulations and our growing atmospheric large-eddy simulation capability to create a sound methodology for performing this type of simulation. We used the OpenFOAM CFD toolbox to create our solver. The simulated time-averaged power production of the turbines in the plant agrees well with field observations, except for the sixth turbine and beyond in each wind-aligned column; the power produced by each of those turbines is overpredicted by 25-40%. A direct comparison between simulated and field data is difficult because we simulate one wind direction with a speed and turbulence intensity characteristic of Lillgrund, whereas the field observations were taken over a year of varying conditions. The simulation shows the significant 60-70% decrease in the performance of the turbines behind the front row in this plant, which has a spacing of 4.3 rotor diameters in this direction. The overall plant efficiency is well predicted. This work shows the importance of using local grid refinement to simultaneously capture the meter-scale details of the turbine wake and the kilometer-scale turbulent atmospheric structures. Although this work illustrates the power of large-eddy simulation in producing a time-accurate solution, it required about one million processor-hours, showing the significant cost of large-eddy simulation.

  13. Computational study of 3-D hot-spot initiation in shocked insensitive high-explosive

    NASA Astrophysics Data System (ADS)

    Najjar, F. M.; Howard, W. M.; Fried, L. E.; Manaa, M. R.; Nichols, A., III; Levesque, G.

    2012-03-01

    High-explosive (HE) material consists of large grains with micron-sized embedded impurities and pores. Under various mechanical or thermal insults, these pores collapse, generating high-temperature regions that lead to ignition. A hydrodynamic study has been performed to investigate the mechanisms of pore collapse and hot-spot initiation in TATB crystals, employing a multiphysics code, ALE3D, coupled to the chemistry module Cheetah. This computational study includes reactive dynamics. Two-dimensional, high-resolution, large-scale meso-scale simulations have been performed, and the parameter space is systematically studied by considering various shock strengths, pore diameters, and multiple-pore configurations. Preliminary 3-D simulations are undertaken to quantify the 3-D dynamics.

  14. Large Eddy Simulation of a Turbulent Jet

    NASA Technical Reports Server (NTRS)

    Webb, A. T.; Mansour, Nagi N.

    2001-01-01

    Here we present the results of a Large Eddy Simulation of a non-buoyant jet issuing from a circular orifice in a wall, and developing in neutral surroundings. The effects of the subgrid scales on the large eddies have been modeled with the dynamic large eddy simulation model applied to the fully 3D domain in spherical coordinates. The simulation captures the unsteady motions of the large-scales within the jet as well as the laminar motions in the entrainment region surrounding the jet. The computed time-averaged statistics (mean velocity, concentration, and turbulence parameters) compare well with laboratory data without invoking an empirical entrainment coefficient as employed by line integral models. The use of the large eddy simulation technique allows examination of unsteady and inhomogeneous features such as the evolution of eddies and the details of the entrainment process.

  15. Efficiency of the neighbor-joining method in reconstructing deep and shallow evolutionary relationships in large phylogenies.

    PubMed

    Kumar, S; Gadagkar, S R

    2000-12-01

    The neighbor-joining (NJ) method is widely used in reconstructing large phylogenies because of its computational speed and its high accuracy in phylogenetic inference, as revealed in computer simulation studies. However, most computer simulation studies have quantified the overall performance of the NJ method in terms of the percentage of branches inferred correctly or the percentage of replications in which the correct tree is recovered. We have examined other aspects of its performance, such as the relative efficiency in correctly reconstructing shallow (close to the external branches of the tree) and deep branches in large phylogenies; the contribution of zero-length branches to topological errors in the inferred trees; and the influence of increasing the tree size (number of sequences), evolutionary rate, and sequence length on the efficiency of the NJ method. Results show that the correct reconstruction of deep branches is no more difficult than that of shallower branches. The presence of zero-length branches in realized trees contributes significantly to the overall error observed in the NJ tree, especially in large phylogenies or slowly evolving genes. Furthermore, the tree size does not influence the efficiency of NJ in reconstructing shallow and deep branches in our simulation study, in which the evolutionary process is assumed to be homogeneous in all lineages.
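
    The core of one NJ agglomeration step is the Q-criterion, which picks the pair of taxa to join next. A minimal sketch (distance matrix as nested lists; the branch-length and matrix-reduction steps of the full algorithm are omitted):

```python
def q_matrix(d):
    """Neighbor-joining Q-criterion:
    Q(i,j) = (n-2)*d(i,j) - sum_k d(i,k) - sum_k d(j,k).
    The pair minimizing Q is joined first."""
    n = len(d)
    row = [sum(d[i]) for i in range(n)]  # row sums of the distance matrix
    q = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                q[i][j] = (n - 2) * d[i][j] - row[i] - row[j]
    return q

def best_join(d):
    """Return the pair (i, j), i < j, with the smallest Q value."""
    n = len(d)
    q = q_matrix(d)
    return min(((i, j) for i in range(n) for j in range(i + 1, n)),
               key=lambda p: q[p[0]][p[1]])
```

    Note that the pair with the smallest raw distance is not necessarily the pair joined first; the row-sum terms correct for taxa that are far from everything.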

  16. Characterising large-scale structure with the REFLEX II cluster survey

    NASA Astrophysics Data System (ADS)

    Chon, Gayoung

    2016-10-01

    We study large-scale structure with superclusters from the REFLEX X-ray cluster survey together with cosmological N-body simulations. It is important to construct superclusters with criteria such that they are homogeneous in their properties. We lay out our theoretical concept, which takes the future evolution of superclusters into account in their definition, and show that the X-ray luminosity and halo mass functions of clusters in superclusters are top-heavy, differing from those of clusters in the field. We also show a promising aspect of using superclusters to study the local cluster bias and the mass scaling relation with simulations.

  17. An evaluation of edge effects in nutritional accessibility and availability measures: a simulation study

    PubMed Central

    2010-01-01

    Background This paper addresses the statistical use of accessibility and availability indices and the effect of study boundaries on these measures. The measures are evaluated via an extensive simulation based on cluster models for local outlet density. We define outlet to mean either a food retail store (convenience store, supermarket, gas station) or a restaurant (limited-service or full-service restaurant). We designed a simulation whereby a cluster outlet model is assumed in a large study window and an internal subset of that window is constructed. We performed simulations on various criteria, including one scenario representing an urban area with 2000 outlets as well as a non-urban area simulated with only 300 outlets. A comparison is made between estimates obtained with the full study area and estimates using only the subset area. This allows the study of the effect of edge censoring on accessibility measures. Results The results suggest that considerable bias is found at the edges of study regions, in particular for accessibility measures. Edge effects are smaller for availability measures (when not smoothed) and for short-range accessibility measures. Conclusions It is recommended that any study utilizing these measures correct for edge effects. The use of edge correction via guard areas is recommended, and the avoidance of large-range distance-based accessibility measures is also proposed. PMID:20663199
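
    The censoring mechanism studied in the paper can be reproduced in a few lines: outlets are simulated on a full window, an internal study subset is carved out, and an availability-style count is compared at a point near the subset boundary versus an interior point. All coordinates, window sizes, and counts below are illustrative choices, not the paper's actual simulation design:

```python
import random

def availability(point, outlets, radius):
    """Availability-style measure: count outlets within a fixed
    radius of the point."""
    px, py = point
    return sum(1 for x, y in outlets
               if (x - px) ** 2 + (y - py) ** 2 <= radius ** 2)

# Outlets over the full window [0,10]^2; the internal study subset
# [2,8]^2 discards a guard band of width 2 around the boundary.
rng = random.Random(42)
outlets = [(rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(2000)]
inner = [(x, y) for x, y in outlets if 2 <= x <= 8 and 2 <= y <= 8]

# A point near the edge of the study subset loses part of its search
# circle to censoring; an interior point is unaffected.
edge_point, interior_point = (2.5, 2.5), (5.0, 5.0)
```

    Using the full window as a guard area (computing counts from `outlets`, not `inner`) is exactly the edge correction the paper recommends.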

  18. The topology of large-scale structure. VI - Slices of the universe

    NASA Astrophysics Data System (ADS)

    Park, Changbom; Gott, J. R., III; Melott, Adrian L.; Karachentsev, I. D.

    1992-03-01

    Results of an investigation of the topology of large-scale structure in two observed slices of the universe are presented. Both slices pass through the Coma cluster and their depths are 100 and 230/h Mpc. The present topology study shows that the largest void in the CfA slice is divided into two smaller voids by a statistically significant line of galaxies. The topology of toy models like the white noise and bubble models is shown to be inconsistent with that of the observed slices. A large N-body simulation was made of the biased cold dark matter model and the slices are simulated by matching them in selection functions and boundary conditions. The genus curves for these simulated slices are spongelike and have a small shift in the direction of a meatball topology like those of observed slices.

  19. The topology of large-scale structure. VI - Slices of the universe

    NASA Technical Reports Server (NTRS)

    Park, Changbom; Gott, J. R., III; Melott, Adrian L.; Karachentsev, I. D.

    1992-01-01

    Results of an investigation of the topology of large-scale structure in two observed slices of the universe are presented. Both slices pass through the Coma cluster and their depths are 100 and 230/h Mpc. The present topology study shows that the largest void in the CfA slice is divided into two smaller voids by a statistically significant line of galaxies. The topology of toy models like the white noise and bubble models is shown to be inconsistent with that of the observed slices. A large N-body simulation was made of the biased cold dark matter model and the slices are simulated by matching them in selection functions and boundary conditions. The genus curves for these simulated slices are spongelike and have a small shift in the direction of a meatball topology like those of observed slices.

  20. Land surface modeling in convection permitting simulations

    NASA Astrophysics Data System (ADS)

    van Heerwaarden, Chiel; Benedict, Imme

    2017-04-01

    The next generation of weather and climate models permits convection, albeit at a grid spacing that is not sufficient to resolve all details of the clouds. Whereas much attention is being devoted to the correct simulation of convective clouds and the associated precipitation, the role of the land surface has received far less interest. In our view, convection-permitting simulations pose a set of problems that need to be solved before accurate weather and climate prediction is possible. The heart of the problem lies in direct runoff and in the nonlinearity of the surface stress as a function of soil moisture. In coarse-resolution simulations, where convection is not permitted, precipitation that reaches the land surface is uniformly distributed over the grid cell. Subsequently, a fraction of this precipitation is intercepted by vegetation or leaves the grid cell via direct runoff, whereas the remainder infiltrates into the soil. As soon as we move to convection-permitting simulations, this precipitation often falls locally in large amounts. If the same land-surface model is used as in simulations with parameterized convection, this leads to an increase in direct runoff. Furthermore, spatially non-uniform infiltration leads to a very different surface stress when scaled up to the coarse resolution of simulations without convection. Based on large-eddy simulations of realistic convection events on a large domain, this study presents a quantification of the errors made at the land surface in convection-permitting simulations and compares their magnitude to that of the errors made in the convection itself due to the coarse resolution of the simulation. We find that convection-permitting simulations have less evaporation than simulations with parameterized convection, resulting in an unrealistic drying of the atmosphere. We present solutions to resolve this problem.
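
    The runoff nonlinearity described above is easy to see with a toy bucket model. The 10 mm infiltration capacity and the cell counts below are arbitrary illustrative numbers, not values from the study:

```python
def runoff(precip_mm, capacity_mm=10.0):
    """Direct runoff: precipitation beyond the infiltration capacity
    of a cell runs off instead of entering the soil."""
    return max(0.0, precip_mm - capacity_mm)

# 20 mm of rain over 4 subcells.  Spread uniformly (5 mm each, as a
# parameterized-convection model would do), nothing exceeds the
# 10 mm capacity and all water infiltrates.  Dropped locally on one
# subcell (as resolved convection does), half of it runs off.
uniform_runoff = sum(runoff(5.0) for _ in range(4))
local_runoff = runoff(20.0) + sum(runoff(0.0) for _ in range(3))
```

    Because `runoff` is convex, concentrating the same water volume can only increase the total runoff; this is the systematic difference between the two treatments of precipitation.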

  1. A Parallel Sliding Region Algorithm to Make Agent-Based Modeling Possible for a Large-Scale Simulation: Modeling Hepatitis C Epidemics in Canada.

    PubMed

    Wong, William W L; Feng, Zeny Z; Thein, Hla-Hla

    2016-11-01

    Agent-based models (ABMs) are computer simulation models that define interactions among agents and simulate emergent behaviors that arise from the ensemble of local decisions. ABMs have been increasingly used to examine trends in infectious disease epidemiology. However, the main limitation of ABMs is the high computational cost of large-scale simulation. To improve the computational efficiency of large-scale ABM simulations, we built a parallelizable sliding region algorithm (SRA) for ABM and compared it to a nonparallelizable ABM. We developed a complex agent network and performed two simulations to model hepatitis C epidemics based on real demographic data from Saskatchewan, Canada. The first simulation used the SRA, which processed each postal-code subregion in turn; the second processed the entire population simultaneously. We conclude that the parallelizable SRA offers computational time savings with comparable results in a province-wide simulation. Using the same method, the SRA can be generalized to perform a country-wide simulation. Thus, this parallel algorithm makes it possible to use ABMs for large-scale simulation with limited computational resources.
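
    The organizing idea, partitioning agents by subregion so each partition can be advanced independently (and hence farmed out to parallel workers), can be sketched as follows. The agent representation, the `region_of` key, and the `step` function are placeholders for illustration, not the authors' model:

```python
from collections import defaultdict

def partition_by_region(agents, region_of):
    """Group agents by subregion so each group can be simulated
    independently of the others."""
    groups = defaultdict(list)
    for agent in agents:
        groups[region_of(agent)].append(agent)
    return dict(groups)

def simulate_region(agents, step):
    """Advance one subregion's agents by a single simulation step."""
    return [step(a) for a in agents]

def sliding_region_step(agents, region_of, step):
    """Process subregions one after another (or hand each group to a
    worker process), then merge: the sliding-region idea in miniature."""
    merged = []
    for region, group in sorted(partition_by_region(agents, region_of).items()):
        merged.extend(simulate_region(group, step))
    return merged
```

    Because `simulate_region` touches only its own group, the per-region calls are trivially parallelizable, which is where the time savings of the SRA come from (the real algorithm must additionally handle agents that interact across region boundaries).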

  2. Life as an emergent phenomenon: studies from a large-scale boid simulation and web data.

    PubMed

    Ikegami, Takashi; Mototake, Yoh-Ichi; Kobori, Shintaro; Oka, Mizuki; Hashimoto, Yasuhiro

    2017-12-28

    A large group with a special structure can become the mother of emergence. We discuss this hypothesis in relation to large-scale boid simulations and web data. In the boid swarm simulations, the nucleation, organization and collapse dynamics were found to be more diverse in larger flocks than in smaller flocks. In the second analysis, large web data, consisting of shared photos with descriptive tags, tended to group together users with similar tendencies, allowing the network to develop a core-periphery structure. We show that the generation rate of novel tags and their usage frequencies are high in the higher-order cliques. In this case, novelty is not considered to arise randomly; rather, it is generated as a result of a large and structured network. We contextualize these results in terms of adjacent possible theory and as a new way to understand collective intelligence. We argue that excessive information and material flow can become a source of innovation. This article is part of the themed issue 'Reconceptualizing the origins of life'. © 2017 The Author(s).

  3. Life as an emergent phenomenon: studies from a large-scale boid simulation and web data

    NASA Astrophysics Data System (ADS)

    Ikegami, Takashi; Mototake, Yoh-ichi; Kobori, Shintaro; Oka, Mizuki; Hashimoto, Yasuhiro

    2017-11-01

    A large group with a special structure can become the mother of emergence. We discuss this hypothesis in relation to large-scale boid simulations and web data. In the boid swarm simulations, the nucleation, organization and collapse dynamics were found to be more diverse in larger flocks than in smaller flocks. In the second analysis, large web data, consisting of shared photos with descriptive tags, tended to group together users with similar tendencies, allowing the network to develop a core-periphery structure. We show that the generation rate of novel tags and their usage frequencies are high in the higher-order cliques. In this case, novelty is not considered to arise randomly; rather, it is generated as a result of a large and structured network. We contextualize these results in terms of adjacent possible theory and as a new way to understand collective intelligence. We argue that excessive information and material flow can become a source of innovation. This article is part of the themed issue 'Reconceptualizing the origins of life'.
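
    A one-dimensional toy version of the boid update (cohesion plus alignment only; the simulations above use full swarms with separation as well, and all gain constants here are illustrative) shows how purely local rules contract a flock:

```python
def boid_step(boids, dt=0.1, cohesion=0.5, alignment=0.3):
    """One synchronous update of a 1-D toy flock.  Each boid steers
    toward the flock centre (cohesion) and toward the mean velocity
    (alignment).  boids is a list of (position, velocity) pairs."""
    n = len(boids)
    cx = sum(x for x, _ in boids) / n   # flock centre of mass
    cv = sum(v for _, v in boids) / n   # mean velocity
    out = []
    for x, v in boids:
        v += dt * (cohesion * (cx - x) + alignment * (cv - v))
        out.append((x + dt * v, v))     # semi-implicit Euler update
    return out
```

    Iterating this map on two boids started a fixed distance apart draws them together: the cohesion term acts as a spring toward the centre and the alignment term damps relative motion.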

  4. [Research on adaptive quasi-linear viscoelastic model for nonlinear viscoelastic properties of in vivo soft tissues].

    PubMed

    Wang, Heng; Sang, Yuanjun

    2017-10-01

    Modeling the mechanical behavior of human soft biological tissues is a key issue for a large number of medical applications, such as surgery simulation, surgery planning, and diagnosis. To develop a biomechanical model of human soft tissues under large deformation for surgery simulation, the adaptive quasi-linear viscoelastic (AQLV) model was proposed and applied to human forearm soft tissues via indentation tests. An incremental ramp-and-hold test was carried out to calibrate the model parameters. To verify the predictive ability of the AQLV model, an incremental ramp-and-hold test, a single large-amplitude ramp-and-hold test, and a sinusoidal cyclic test at large strain amplitude were adopted in this study. Results showed that the AQLV model could predict the test results under all three loading conditions. It is concluded that the AQLV model is feasible for describing the nonlinear viscoelastic properties of in vivo soft tissues under large deformation, and it is a promising candidate soft-tissue model in software design for surgery simulation or diagnosis.

  5. Analysis of near-surface relative humidity in a wind turbine array boundary layer using an instrumented unmanned aerial system and large-eddy simulation

    NASA Astrophysics Data System (ADS)

    Adkins, Kevin; Elfajri, Oumnia; Sescu, Adrian

    2016-11-01

    Simulation and modeling have shown that wind farms have an impact on the near-surface atmospheric boundary layer (ABL) as turbulent wakes generated by the turbines enhance vertical mixing. These changes alter downstream atmospheric properties. With a large portion of wind farms hosted within an agricultural context, changes to the environment can potentially have secondary impacts such as to the productivity of crops. With the exception of a few observational data sets that focus on the impact to near-surface temperature, little to no observational evidence exists. These few studies also lack high spatial resolution due to their use of a limited number of meteorological towers or remote sensing techniques. This study utilizes an instrumented small unmanned aerial system (sUAS) to gather in-situ field measurements from two Midwest wind farms, focusing on the impact that large utility-scale wind turbines have on relative humidity. Results are also compared to numerical experiments conducted using large eddy simulation (LES). Wind turbines are found to differentially alter the relative humidity in the downstream, spanwise and vertical directions under a variety of atmospheric stability conditions.

  6. Pulsar simulations for the Fermi Large Area Telescope

    DOE PAGES

    Razzano, M.; Harding, Alice K.; Baldini, L.; ...

    2009-05-21

    Pulsars are among the prime targets for the Large Area Telescope (LAT) aboard the recently launched Fermi observatory. The LAT will study the gamma-ray Universe between 20 MeV and 300 GeV with unprecedented detail. Increasing numbers of gamma-ray pulsars are being firmly identified, yet their emission mechanisms are far from understood. To better investigate and exploit the LAT capabilities for pulsar science, a set of new, detailed pulsar simulation tools has been developed within the LAT collaboration. The structure of the pulsar simulator package (PulsarSpectrum) is presented here. Starting from photon distributions in energy and phase obtained from theoretical calculations or phenomenological considerations, gamma-rays are generated and their arrival times at the spacecraft are determined by taking into account effects such as barycentric corrections and timing noise. Pulsars in binary systems can also be simulated given orbital parameters. Finally, we show how simulations can be used to generate a realistic set of gamma-rays as observed by the LAT, focusing on case studies that illustrate the performance of the LAT for pulsar observations.

  7. Stochastic Simulation of Biomolecular Networks in Dynamic Environments

    PubMed Central

    Voliotis, Margaritis; Thomas, Philipp; Grima, Ramon; Bowsher, Clive G.

    2016-01-01

    Simulation of biomolecular networks is now indispensable for studying biological systems, from small reaction networks to large ensembles of cells. Here we present a novel approach for stochastic simulation of networks embedded in the dynamic environment of the cell and its surroundings. We thus sample trajectories of the stochastic process described by the chemical master equation with time-varying propensities. A comparative analysis shows that existing approaches can either fail dramatically, or else can impose impractical computational burdens due to numerical integration of reaction propensities, especially when cell ensembles are studied. Here we introduce the Extrande method which, given a simulated time course of dynamic network inputs, provides a conditionally exact and several orders-of-magnitude faster simulation solution. The new approach makes it feasible to demonstrate—using decision-making by a large population of quorum sensing bacteria—that robustness to fluctuations from upstream signaling places strong constraints on the design of networks determining cell fate. Our approach has the potential to significantly advance both understanding of molecular systems biology and design of synthetic circuits. PMID:27248512
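    The thinning idea behind simulation with time-varying propensities, which Extrande builds on (a bound on the total propensity, an extra "virtual" reaction absorbing the slack), can be sketched for a toy birth-death process in a dynamic environment. The input signal, rate constants, and bound below are illustrative assumptions, not the networks of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def input_signal(t):
    """Assumed dynamic environment: a bounded, time-varying input in [1, 3]."""
    return 2.0 + np.sin(t)

def propensities(x, t):
    """Birth modulated by the input; first-order degradation."""
    return np.array([5.0 * input_signal(t), 1.0 * x])

def extrande(x0, t_end, window=1.0):
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_end:
        # Bound B on the total propensity over the look-ahead window:
        # input_signal(t) <= 3, and x is constant between events.
        B = 5.0 * 3.0 + 1.0 * x
        dt = rng.exponential(1.0 / B)
        if dt > window:
            t += window                 # bound expired: advance, nothing fires
        else:
            t += dt
            a = propensities(x, t)
            u = rng.uniform(0.0, B)
            if u < a[0]:
                x += 1                  # birth fires
            elif u < a[0] + a[1]:
                x -= 1                  # degradation fires
            # otherwise: thinned "extra" reaction, state unchanged
        times.append(t)
        states.append(x)
    return np.array(times), np.array(states)

ts, xs = extrande(x0=10, t_end=50.0)
```

    Because the candidate event times come from the bound B rather than from numerical integration of the propensities, the per-step cost stays constant even when the input fluctuates rapidly, which is the source of the speedup described above.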

  8. Large Eddy Simulation of a Film Cooling Technique with a Plenum

    NASA Astrophysics Data System (ADS)

    Dharmarathne, Suranga; Sridhar, Narendran; Araya, Guillermo; Castillo, Luciano; Parameswaran, Sivapathasund

    2012-11-01

    Factors that affect the film cooling performance have been categorized into three main groups: (i) coolant & mainstream conditions, (ii) hole geometry & configuration, and (iii) airfoil geometry Bogard et al. (2006). The present study focuses on the second group of factors, namely, the modeling of coolant hole and the plenum. It is required to simulate correct physics of the problem to achieve more realistic numerical results. In this regard, modeling of cooling jet hole and the plenum chamber is highly important Iourokina et al. (2006). Substitution of artificial boundary conditions instead of correct plenum design would yield unrealistic results Iourokina et al. (2006). This study attempts to model film cooling technique with a plenum using a Large Eddy Simulation.Incompressible coolant jet ejects to the surface of the plate at an angle of 30° where it meets compressible turbulent boundary layer which simulates the turbine inflow conditions. Dynamic multi-scale approach Araya (2011) is introduced to prescribe turbulent inflow conditions. Simulations are carried out for two different blowing ratios and film cooling effectiveness is calculated for both cases. Results obtained from LES will be compared with experimental results.

  9. A Compact Synchronous Cellular Model of Nonlinear Calcium Dynamics: Simulation and FPGA Synthesis Results.

    PubMed

    Soleimani, Hamid; Drakakis, Emmanuel M

    2017-06-01

    Recent studies have demonstrated that calcium is a widespread intracellular ion that controls a wide range of temporal dynamics in the mammalian body. The simulation and validation of such studies against experimental data would benefit from a fast large-scale simulation and modelling tool. This paper presents a compact and fully reconfigurable cellular calcium model capable of mimicking the Hopf bifurcation phenomenon and various nonlinear responses of biological calcium dynamics. The proposed cellular model is synthesized on a digital platform for a single unit and a network model. Hardware synthesis, physical implementation on FPGA, and theoretical analysis confirm that the proposed cellular model can mimic biological calcium behaviors with considerably low hardware overhead. The approach has the potential to speed up large-scale simulations of slow intracellular dynamics by sharing more cellular units in real time. To this end, various networks constructed by pipelining 10 k to 40 k cellular calcium units are compared with an equivalent simulation run on a standard PC workstation. Results show that the cellular hardware model is, on average, 83 times faster than the CPU version.

  10. Systematic methods for defining coarse-grained maps in large biomolecules.

    PubMed

    Zhang, Zhiyong

    2015-01-01

    Large biomolecules are involved in many important biological processes. It would be difficult to use large-scale atomistic molecular dynamics (MD) simulations to study the functional motions of these systems because of the computational expense. Therefore various coarse-grained (CG) approaches have attracted rapidly growing interest, which enable simulations of large biomolecules over longer effective timescales than all-atom MD simulations. The first issue in CG modeling is to construct CG maps from atomic structures. In this chapter, we review the recent development of a novel and systematic method for constructing CG representations of arbitrarily complex biomolecules, in order to preserve large-scale and functionally relevant essential dynamics (ED) at the CG level. In this ED-CG scheme, the essential dynamics can be characterized by principal component analysis (PCA) on a structural ensemble, or elastic network model (ENM) of a single atomic structure. Validation and applications of the method cover various biological systems, such as multi-domain proteins, protein complexes, and even biomolecular machines. The results demonstrate that the ED-CG method may serve as a very useful tool for identifying functional dynamics of large biomolecules at the CG level.

  11. Characteristics of Tornado-Like Vortices Simulated in a Large-Scale Ward-Type Simulator

    NASA Astrophysics Data System (ADS)

    Tang, Zhuo; Feng, Changda; Wu, Liang; Zuo, Delong; James, Darryl L.

    2018-02-01

    Tornado-like vortices are simulated in a large-scale Ward-type simulator to further advance the understanding of such flows, and to facilitate future studies of tornado wind loading on structures. Measurements of the velocity fields near the simulator floor and the resulting floor surface pressures are interpreted to reveal the mean and fluctuating characteristics of the flow as well as the characteristics of the static-pressure deficit. We focus on the manner in which the swirl ratio and the radial Reynolds number affect these characteristics. The transition of the tornado-like flow from a single-celled vortex to a dual-celled vortex with increasing swirl ratio and the impact of this transition on the flow field and the surface-pressure deficit are closely examined. The mean characteristics of the surface-pressure deficit caused by tornado-like vortices simulated at a number of swirl ratios compare well with the corresponding characteristics recorded during full-scale tornadoes.

  12. An efficient and reliable predictive method for fluidized bed simulation

    DOE PAGES

    Lu, Liqiang; Benyahia, Sofiane; Li, Tingwen

    2017-06-13

    In past decades, the continuum approach was the only practical technique for simulating large-scale fluidized bed reactors, because discrete approaches suffer from the cost of tracking huge numbers of particles and their collisions. This study significantly improved the computation speed of discrete particle methods in two steps: first, a time-driven hard-sphere (TDHS) algorithm with a larger time step is proposed, allowing a speedup of 20-60 times; second, the number of tracked particles is reduced by adopting the coarse-graining technique, gaining an additional 2-3 orders of magnitude speedup. A new velocity correction term was introduced and validated in TDHS to solve the over-packing issue in dense granular flow. The TDHS was then coupled with the coarse-graining technique to simulate a pilot-scale riser. The simulation results compared well with experimental data and proved that this new approach can be used for efficient and reliable simulations of large-scale fluidized bed systems.
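    The elementary operation of a hard-sphere scheme such as TDHS is the elastic collision update, which exchanges the velocity components of two contacting particles along their line of centers. The sketch below assumes equal masses and illustrative states; it is not the TDHS algorithm itself (which also includes the velocity correction term mentioned above).

```python
import numpy as np

def collide(x1, v1, x2, v2):
    """Elastic hard-sphere collision for two equal-mass particles:
    swap the velocity components along the line of centers."""
    n = (x2 - x1) / np.linalg.norm(x2 - x1)   # unit line-of-centers vector
    rel = np.dot(v1 - v2, n)                  # approach speed along n
    if rel <= 0.0:                            # separating: no collision
        return v1, v2
    return v1 - rel * n, v2 + rel * n         # equal and opposite impulse

# Head-on test case: two particles approaching along x.
x1, v1 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
x2, v2 = np.array([1.0, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0])
v1_new, v2_new = collide(x1, v1, x2, v2)
```

    A time-driven scheme applies this update to every pair found in contact at the end of a time step, rather than advancing event-by-event to each collision time, which is what permits the larger time step quoted above.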

  13. An efficient and reliable predictive method for fluidized bed simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Liqiang; Benyahia, Sofiane; Li, Tingwen

    2017-06-29

    In past decades, the continuum approach was the only practical technique for simulating large-scale fluidized bed reactors, because discrete approaches suffer from the cost of tracking huge numbers of particles and their collisions. This study significantly improved the computation speed of discrete particle methods in two steps: first, a time-driven hard-sphere (TDHS) algorithm with a larger time step is proposed, allowing a speedup of 20-60 times; second, the number of tracked particles is reduced by adopting the coarse-graining technique, gaining an additional 2-3 orders of magnitude speedup. A new velocity correction term was introduced and validated in TDHS to solve the over-packing issue in dense granular flow. The TDHS was then coupled with the coarse-graining technique to simulate a pilot-scale riser. The simulation results compared well with experimental data and proved that this new approach can be used for efficient and reliable simulations of large-scale fluidized bed systems.

  14. Large-eddy simulation of flow in a plane, asymmetric diffuser

    NASA Technical Reports Server (NTRS)

    Kaltenbach, Hans-Jakob

    1993-01-01

    Recent improvements in subgrid-scale modeling as well as increases in computer power make it feasible to investigate flows using large-eddy simulation (LES) which have been traditionally studied with techniques based on Reynolds averaging. However, LES has not yet been applied to many flows of immediate technical interest. Preliminary results from LES of a plane diffuser flow are described. The long term goal of this work is to investigate flow separation as well as separation control in ducts and ramp-like geometries.

  15. Large-Eddy Simulations of Dust Devils and Convective Vortices

    NASA Astrophysics Data System (ADS)

    Spiga, Aymeric; Barth, Erika; Gu, Zhaolin; Hoffmann, Fabian; Ito, Junshi; Jemmett-Smith, Bradley; Klose, Martina; Nishizawa, Seiya; Raasch, Siegfried; Rafkin, Scot; Takemi, Tetsuya; Tyler, Daniel; Wei, Wei

    2016-11-01

    In this review, we address the use of numerical computations called Large-Eddy Simulations (LES) to study dust devils, and the more general class of atmospheric phenomena they belong to (convective vortices). We describe the main elements of the LES methodology. We review the properties, statistics, and variability of dust devils and convective vortices resolved by LES in both terrestrial and Martian environments. The current challenges faced by modelers using LES for dust devils are also discussed in detail.

  16. Architectural Large Constructed Environment. Modeling and Interaction Using Dynamic Simulations

    NASA Astrophysics Data System (ADS)

    Fiamma, P.

    2011-09-01

    How can the simulation derived from a large data model be used for architectural design? The topic concerns the phase that usually follows data acquisition: the construction of the model and, especially, the stage at which designers must interact with the simulation in order to develop and verify their ideas. In this case study, the concept of interaction includes the concept of real-time "flows". The work develops content and results that contribute to the broad current debate on the connection between "architecture" and "movement". The focus of the work is to realize a collaborative and participative virtual environment in which the different specialist actors, the client, and the final users can share knowledge, targets, and constraints to better achieve the intended result. The goal is to use a dynamic micro-simulation digital resource that allows all actors to explore the model in a powerful and realistic way and to interact in a new way with a complex architectural scenario. On the one hand, the work represents a base of knowledge that can be progressively extended; on the other hand, it represents an attempt to understand the simulation of large constructed architecture as a way of life, a way of being in time and space. The architectural design first, and the architectural fact afterwards, both happen in a sort of "Spatial Analysis System". The way is open to offer this "system" knowledge and theories that can support architectural design work at every application and scale. Architecture is a spatial configuration, and one that can also be reconfigured through design.

  17. Multi-scale Modeling of Arctic Clouds

    NASA Astrophysics Data System (ADS)

    Hillman, B. R.; Roesler, E. L.; Dexheimer, D.

    2017-12-01

    The presence and properties of clouds are critically important to the radiative budget in the Arctic, but clouds are notoriously difficult to represent in global climate models (GCMs). The challenge stems partly from a disconnect between the scales at which these models are formulated and the scales of the physical processes important to the formation of clouds (e.g., convection and turbulence). Because of this, these processes are parameterized in large-scale models. Over the past decades, new approaches have been explored in which a cloud-system-resolving model (CSRM), or in the extreme a large eddy simulation (LES), is embedded in each gridcell of a traditional GCM to replace the cloud and convective parameterizations and explicitly simulate more of these important processes. This approach is attractive in that it allows more explicit simulation of small-scale processes while also allowing interaction between the small and large scales. The goal of this study is to quantify the performance of this framework in simulating Arctic clouds relative to a traditional global model, and to explore the limitations of such a framework using coordinated high-resolution (eddy-resolving) simulations. Simulations from the global model are compared with satellite retrievals of cloud fraction partitioned by cloud phase from CALIPSO, and limited-area LES simulations are compared with ground-based and tethered-balloon measurements from the ARM Barrow and Oliktok Point measurement facilities.

  18. Study of cosmic ray events with high muon multiplicity using the ALICE detector at the CERN Large Hadron Collider

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collaboration: ALICE Collaboration

    2016-01-01

    ALICE is one of four large experiments at the CERN Large Hadron Collider near Geneva, specially designed to study particle production in ultra-relativistic heavy-ion collisions. Located 52 meters underground with 28 meters of overburden rock, it has also been used to detect muons produced by cosmic ray interactions in the upper atmosphere. In this paper, we present the multiplicity distribution of these atmospheric muons and its comparison with Monte Carlo simulations. This analysis exploits the large size and excellent tracking capability of the ALICE Time Projection Chamber. A special emphasis is given to the study of high multiplicity events containing more than 100 reconstructed muons and corresponding to a muon areal density ρ_μ > 5.9 m⁻². Similar events have been studied in previous underground experiments such as ALEPH and DELPHI at LEP. While these experiments were able to reproduce the measured muon multiplicity distribution with Monte Carlo simulations at low and intermediate multiplicities, their simulations failed to describe the frequency of the highest multiplicity events. In this work we show that the high multiplicity events observed in ALICE stem from primary cosmic rays with energies above 10¹⁶ eV and that the frequency of these events can be successfully described by assuming a heavy mass composition of primary cosmic rays in this energy range. The development of the resulting air showers was simulated using the latest version of QGSJET to model hadronic interactions. This observation places significant constraints on alternative, more exotic, production mechanisms for these events.

  19. Statistical Analyses of Satellite Cloud Object Data from CERES. Part III; Comparison with Cloud-Resolving Model Simulations of Tropical Convective Clouds

    NASA Technical Reports Server (NTRS)

    Luo, Yali; Xu, Kuan-Man; Wielicki, Bruce A.; Wong, Takmeng; Eitzen, Zachary A.

    2007-01-01

    The present study evaluates the ability of a cloud-resolving model (CRM) to simulate the physical properties of tropical deep convective cloud objects identified from a Clouds and the Earth's Radiant Energy System (CERES) data product. The emphasis of this study is the comparisons among the small-, medium- and large-size categories of cloud objects observed during March 1998 and between the large-size categories of cloud objects observed during March 1998 (strong El Niño) and March 2000 (weak La Niña). Results from the CRM simulations are analyzed in a way that is consistent with the CERES retrieval algorithm and they are averaged to match the scale of the CERES satellite footprints. Cloud physical properties are analyzed in terms of their summary histograms for each category. It is found that there is a general agreement in the overall shapes of all cloud physical properties between the simulated and observed distributions. Each cloud physical property produced by the CRM also exhibits different degrees of disagreement with observations over different ranges of the property. The simulated cloud tops are generally too high and cloud top temperatures are too low except for the large-size category of March 1998. The probability densities of the simulated top-of-the-atmosphere (TOA) albedos for all four categories are underestimated for high albedos, while those of cloud optical depth are overestimated at its lowest bin. These disagreements are mainly related to uncertainties in the cloud microphysics parameterization and inputs such as cloud ice effective size to the radiation calculation. Summary histograms of cloud optical depth and TOA albedo from the CRM simulations of the large-size category of cloud objects do not differ significantly between the March 1998 and 2000 periods, consistent with the CERES observations.
However, the CRM is unable to reproduce the significant differences in the observed cloud top height while it overestimates the differences in the observed outgoing longwave radiation and cloud top temperature between the two periods. Comparisons between the CRM results and the observations for most parameters in March 1998 consistently show that both the simulations and observations have larger differences between the large- and small-size categories than between the large- and medium-size, or between the medium- and small-size categories. However, the simulated cloud properties do not change as much with size as observed. These disagreements are likely related to the spatial averaging of the forcing data and the mismatch in time and in space between the numerical weather prediction model from which the forcing data are produced and the CERES observed cloud systems.

  20. A data-driven dynamics simulation framework for railway vehicles

    NASA Astrophysics Data System (ADS)

    Nie, Yinyu; Tang, Zhao; Liu, Fengjia; Chang, Jian; Zhang, Jianjun

    2018-03-01

    The finite element (FE) method is essential for simulating vehicle dynamics in fine detail, especially for train crash simulations. However, factors such as the complexity of meshes and the distortion involved in large deformations undermine its computational efficiency. An alternative method, multi-body (MB) dynamics simulation, provides satisfying time efficiency but limited accuracy when highly nonlinear dynamic processes are involved. To retain the advantages of both methods, this paper proposes a data-driven simulation framework for the dynamics simulation of railway vehicles. The framework uses machine learning techniques to extract nonlinear features from training data generated by FE simulations, so that specific mesh structures can be represented by one or more surrogate elements that replace the original mechanical elements, and the dynamics simulation can be implemented by co-simulation with the surrogate element(s) embedded in an MB model. The framework consists of a series of techniques including data collection, feature extraction, training data sampling, surrogate element building, and model evaluation and selection. To verify the feasibility of this framework, we present two case studies, a vertical dynamics simulation and a longitudinal dynamics simulation, based on co-simulation with MATLAB/Simulink and Simpack, together with a comparison against a popular data-driven model (the Kriging model). The simulation results show that using a Legendre polynomial regression model to build surrogate elements can largely cut down the simulation time without sacrificing accuracy.
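    The surrogate-element idea can be sketched with the Legendre polynomial regression named above: fit the polynomial to training samples of a nonlinear response, then evaluate the cheap surrogate in place of the expensive element. The "FE response" below is a synthetic stand-in, not data from the paper.

```python
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(0)

def f_true(x):
    """Synthetic stand-in for an FE-computed nonlinear force-displacement law."""
    return np.tanh(3.0 * x) + 0.2 * x ** 3

# Training data: sampled "FE" responses with a little noise, on [-1, 1]
# (the natural domain of the Legendre basis).
x_train = np.linspace(-1.0, 1.0, 50)
y_train = f_true(x_train) + 0.01 * rng.standard_normal(50)

# Train the surrogate: least-squares fit in the Legendre basis.
coef = L.legfit(x_train, y_train, deg=7)

def surrogate(x):
    """Fast surrogate evaluation, replacing the expensive element."""
    return L.legval(x, coef)

x_test = np.linspace(-1.0, 1.0, 200)
max_err = np.max(np.abs(surrogate(x_test) - f_true(x_test)))
```

    In the framework described above, a fit of this kind (in many variables) would stand in for a meshed component inside the multi-body co-simulation loop, so each time step costs a polynomial evaluation instead of an FE solve.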

  1. Hybrid Reynolds-Averaged/Large Eddy Simulation of a Cavity Flameholder; Assessment of Modeling Sensitivities

    NASA Technical Reports Server (NTRS)

    Baurle, R. A.

    2015-01-01

    Steady-state and scale-resolving simulations have been performed for the flow in and around a model scramjet combustor flameholder. The cases simulated corresponded to those used to examine this flowfield experimentally using particle image velocimetry. A variety of turbulence models were used for the steady-state Reynolds-averaged simulations, including both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged/large eddy simulation strategy designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer; hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken to formally assess the performance of the hybrid Reynolds-averaged/large eddy simulation modeling approach in a flowfield of interest to the scramjet research community. The numerical errors were quantified for both the steady-state and scale-resolving simulations prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results showed a high degree of variability across turbulence models, with the non-linear eddy viscosity model (an explicit algebraic stress model) providing the most accurate prediction of the measured values. The hybrid Reynolds-averaged/large eddy simulation results were carefully scrutinized to ensure that even the coarsest grid had an acceptable level of resolution for large eddy simulation, and that the time-averaged statistics were acceptably accurate. The autocorrelation and its Fourier transform were the primary tools used for this assessment. The statistics extracted from the hybrid simulation strategy proved to be more accurate than the Reynolds-averaged results obtained using the linear eddy viscosity models.
However, there was no predictive improvement noted over the results obtained from the explicit Reynolds stress model. Fortunately, the numerical error assessment at most of the axial stations used to compare with measurements clearly indicated that the scale-resolving simulations were improving (i.e. approaching the measured values) as the grid was refined. Hence, unlike a Reynolds-averaged simulation, the hybrid approach provides a mechanism to the end-user for reducing model-form errors.
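    The autocorrelation check described above, used to judge whether time-averaged statistics are converged, can be sketched with an FFT-based estimator and an integral time scale. The AR(1) signal below is a synthetic stand-in for a probe time series from the simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic AR(1) probe signal with known autocorrelation rho(k) = 0.9**k.
n, phi = 20000, 0.9
u = np.empty(n)
u[0] = 0.0
for i in range(1, n):
    u[i] = phi * u[i - 1] + rng.standard_normal()

def autocorr(x):
    """Autocorrelation estimate via FFT (Wiener-Khinchin theorem)."""
    x = x - x.mean()
    m = 2 * len(x)                          # zero-pad to avoid wrap-around
    spec = np.abs(np.fft.rfft(x, m)) ** 2   # power spectrum
    acf = np.fft.irfft(spec)[:len(x)]
    return acf / acf[0]                     # normalize so rho(0) = 1

rho = autocorr(u)

# Integral time scale: sum correlations up to the first zero crossing.
# Averaging-window length divided by T_int gives the effective number of
# independent samples behind a time-averaged statistic.
first_zero = np.argmax(rho < 0.0)
T_int = rho[:first_zero].sum()
```

    A time average is trustworthy only if the averaging window spans many integral time scales; that is the convergence criterion the quoted assessment applies to the hybrid-simulation statistics.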

  2. A novel simulation methodology merging source-sink dynamics and landscape connectivity

    EPA Science Inventory

    Source-sink dynamics are an emergent property of complex species-landscape interactions. This study explores the patterns of source and sink behavior that become established across a large landscape, using a simulation model for the northern spotted owl (Strix occidentalis cauri...

  3. General-relativistic Large-eddy Simulations of Binary Neutron Star Mergers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radice, David, E-mail: dradice@astro.princeton.edu

    The flow inside remnants of binary neutron star (NS) mergers is expected to be turbulent because of magnetohydrodynamic instabilities activated at scales too small to be resolved in simulations. To study the large-scale impact of these instabilities, we develop a new formalism, based on the large-eddy simulation technique, for the modeling of subgrid-scale turbulent transport in general relativity. We apply it, for the first time, to the simulation of the late inspiral and merger of two NSs. We find that turbulence can significantly affect the structure and survival time of the merger remnant, as well as its gravitational-wave (GW) and neutrino emissions. The former will be relevant for GW observations of merging NSs. The latter will affect the composition of the outflow driven by the merger and might influence its nucleosynthetic yields. The accretion rate after black hole formation is also affected. Nevertheless, we find that, for the most likely values of the turbulence mixing efficiency, these effects are relatively small and the GW signal will be affected only weakly by the turbulence. Thus, our simulations provide a first validation of all existing post-merger GW models.

  4. Wind turbine wakes in forest and neutral plane wall boundary layer large-eddy simulations

    NASA Astrophysics Data System (ADS)

    Schröttle, Josef; Piotrowski, Zbigniew; Gerz, Thomas; Englberger, Antonia; Dörnbrack, Andreas

    2016-09-01

    Wind turbine wake flow characteristics are studied in a strongly sheared and turbulent forest boundary layer and in a neutral plane wall boundary layer flow. The reference simulations without a wind turbine yield results similar to earlier large-eddy simulations by Shaw and Schumann (1992) and Porté-Agel et al. (2000). To use the fields from the homogeneous turbulent boundary layers on the fly as inflow fields for the wind turbine wake simulations, a new and efficient methodology was developed for the multiscale geophysical flow solver EULAG. With this method, fully developed turbulent flow fields can be achieved upstream of the wind turbine, independent of the wake flow. The large-eddy simulations reproduce known boundary-layer statistics such as the mean wind profile, the momentum flux profile, and the eddy dissipation rate of the plane wall and forest boundary layers. The wake velocity deficit is more asymmetric above the forest and recovers faster downstream compared with the velocity deficit in the plane wall boundary layer. This is due to the inflection point in the mean streamwise velocity profile, with corresponding turbulent coherent structures of high turbulence intensity in the strong shear flow above the forest.
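    The boundary-layer statistics quoted above rest on Reynolds decomposition; for example, the vertical momentum flux <u'w'> at one height is the covariance of the streamwise and vertical velocity fluctuations. The synthetic, anti-correlated samples below mimic downward momentum transport in a sheared layer; they are not LES output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic velocity samples at one probe height.
n = 100000
w_prime = rng.standard_normal(n) * 0.5                      # vertical fluctuation
u_prime = -0.4 * w_prime + rng.standard_normal(n) * 0.3     # anti-correlated part
u = 8.0 + u_prime                                           # mean wind 8 m/s

# Reynolds decomposition: subtract the mean, then average the product.
u_mean = u.mean()
flux = np.mean((u - u_mean) * w_prime)                      # <u'w'>, in m^2/s^2
```

    Repeating this at every height gives the momentum flux profile mentioned above; the negative sign of the flux indicates downward transport of streamwise momentum, as expected in a sheared boundary layer.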

  5. Near-Surface Meteorology During the Arctic Summer Cloud Ocean Study (ASCOS): Evaluation of Reanalyses and Global Climate Models.

    NASA Technical Reports Server (NTRS)

    De Boer, G.; Shupe, M.D.; Caldwell, P.M.; Bauer, Susanne E.; Persson, O.; Boyle, J.S.; Kelley, M.; Klein, S.A.; Tjernstrom, M.

    2014-01-01

    Atmospheric measurements from the Arctic Summer Cloud Ocean Study (ASCOS) are used to evaluate the performance of three atmospheric reanalyses (the European Centre for Medium-Range Weather Forecasts (ECMWF) Interim reanalysis, the National Centers for Environmental Prediction (NCEP)-National Center for Atmospheric Research (NCAR) reanalysis, and the NCEP-DOE (Department of Energy) reanalysis) and two global climate models (CAM5 (Community Atmosphere Model 5) and NASA GISS (Goddard Institute for Space Studies) ModelE2) in simulating the high Arctic environment. Quantities analyzed include near-surface meteorological variables such as temperature, pressure, humidity and winds, surface-based estimates of cloud and precipitation properties, the surface energy budget, and lower-atmospheric temperature structure. In general, the models perform well in simulating large-scale dynamical quantities such as pressure and winds. Near-surface temperature and lower-atmospheric stability, along with surface energy budget terms, are not as well represented, due largely to errors in the simulation of cloud occurrence, phase and altitude. Additionally, a development version of CAM5, which features improved handling of cloud macrophysics, is shown to improve the simulation of cloud properties and liquid water amount. The ASCOS period additionally provides an excellent example of the benefit of evaluating individual budget terms rather than simply the net end product: large compensating errors between individual surface energy budget terms can yield a deceptively good net energy budget.

  6. Large eddy simulation of transitional flow in an idealized stenotic blood vessel: evaluation of subgrid scale models.

    PubMed

    Pal, Abhro; Anupindi, Kameswararao; Delorme, Yann; Ghaisas, Niranjan; Shetty, Dinesh A; Frankel, Steven H

    2014-07-01

    In the present study, we performed large eddy simulation (LES) of axisymmetric and 75% stenosed eccentric arterial models with steady inflow conditions at a Reynolds number of 1000. The results obtained are compared with the direct numerical simulation (DNS) data (Varghese et al., 2007, "Direct Numerical Simulation of Stenotic Flows. Part 1. Steady Flow," J. Fluid Mech., 582, pp. 253-280). An in-house code (WenoHemo) employing high-order numerical methods for spatial and temporal terms, along with a second-order accurate ghost-point immersed boundary method (IBM) (Mark and van Wachem, 2008, "Derivation and Validation of a Novel Implicit Second-Order Accurate Immersed Boundary Method," J. Comput. Phys., 227(13), pp. 6660-6680) for enforcing boundary conditions on curved geometries, is used for the simulations. Three subgrid scale (SGS) models, namely the classical Smagorinsky model (Smagorinsky, 1963, "General Circulation Experiments With the Primitive Equations," Mon. Weather Rev., 91(10), pp. 99-164), the recently developed Vreman model (Vreman, 2004, "An Eddy-Viscosity Subgrid-Scale Model for Turbulent Shear Flow: Algebraic Theory and Applications," Phys. Fluids, 16(10), pp. 3670-3681), and the Sigma model (Nicoud et al., 2011, "Using Singular Values to Build a Subgrid-Scale Model for Large Eddy Simulations," Phys. Fluids, 23(8), 085106), are evaluated in the present study. The evaluation suggests that the classical constant-coefficient Smagorinsky model gives the best agreement with the DNS data, whereas the Vreman and Sigma models predict an early transition to turbulence in the poststenotic region. Supplementary simulations performed with the open-source OpenFOAM solver ("OpenFOAM," http://www.openfoam.org/) give results in line with those obtained with WenoHemo.
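    The algebraic SGS models compared in this record can be sketched directly from their published formulas: Smagorinsky scales an eddy viscosity with the resolved strain-rate magnitude, while Vreman builds it from invariants of the velocity-gradient tensor and, by construction, vanishes in pure shear (one reason it behaves differently near transition). The coefficients below are typical literature values, not the ones used in the paper.

```python
import numpy as np

def smagorinsky_nu_t(grad_u, delta, c_s=0.17):
    """Classical Smagorinsky eddy viscosity: nu_t = (c_s*delta)^2 * |S|,
    with |S| = sqrt(2 S_ij S_ij) and S the resolved strain-rate tensor."""
    s = 0.5 * (grad_u + grad_u.T)
    s_mag = np.sqrt(2.0 * np.sum(s * s))
    return (c_s * delta) ** 2 * s_mag

def vreman_nu_t(grad_u, delta, c=0.07):
    """Vreman (2004) eddy viscosity: nu_t = c * sqrt(B_beta / (alpha_ij alpha_ij)),
    where alpha_ij = du_j/dx_i and beta_ij = delta^2 alpha_mi alpha_mj."""
    alpha = grad_u.T
    aa = np.sum(alpha * alpha)
    if aa < 1e-30:                       # uniform flow: no subgrid activity
        return 0.0
    beta = delta ** 2 * (alpha.T @ alpha)
    b_beta = (beta[0, 0] * beta[1, 1] - beta[0, 1] ** 2
              + beta[0, 0] * beta[2, 2] - beta[0, 2] ** 2
              + beta[1, 1] * beta[2, 2] - beta[1, 2] ** 2)
    return c * np.sqrt(max(b_beta, 0.0) / aa)

# Pure shear du/dy = 1: Smagorinsky is active, Vreman correctly vanishes.
g = np.zeros((3, 3)); g[0, 1] = 1.0
print(smagorinsky_nu_t(g, delta=0.1))   # (0.017)^2 * 1 ≈ 2.89e-4
print(vreman_nu_t(g, delta=0.1))        # 0.0
```

    The pure-shear check illustrates the qualitative difference the abstract reports: the constant-coefficient Smagorinsky model is dissipative even in laminar shear, whereas Vreman switches off there.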

  7. Sensitivity of CO2 Simulation in a GCM to the Convective Transport Algorithms

    NASA Technical Reports Server (NTRS)

    Zhu, Z.; Pawson, S.; Collatz, G. J.; Gregg, W. W.; Kawa, S. R.; Baker, D.; Ott, L.

    2014-01-01

    Convection plays an important role in the transport of heat, moisture and trace gases. In this study, we simulated CO2 concentrations with an atmospheric general circulation model (GCM). Three different convective transport algorithms were used: one is a modified Arakawa-Schubert scheme native to the GCM; the two others, used in two off-line chemical transport models (CTMs), were added to the GCM here for comparison purposes. Advanced CO2 surface fluxes were used for the simulations. The results were compared to a large quantity of CO2 observations. We find that the simulation results are sensitive to the convective transport algorithm. Overall, the three simulations are quite realistic and similar to each other in remote marine regions, but differ significantly in some land regions with strong fluxes, such as the Amazon and Siberia, during the convective seasons. Large biases against CO2 measurements are found in these regions in the control run, which uses the original GCM; the simulation with the simple diffusive algorithm performs better. The difference between the two simulations is related to their very different convective transport speeds.

  8. Multiscale Simulations of ALD in Cross Flow Reactors

    DOE PAGES

    Yanguas-Gil, Angel; Libera, Joseph A.; Elam, Jeffrey W.

    2014-08-13

    In this study, we have developed a multiscale simulation code that allows us to study the impact of surface chemistry on the coating of large-area substrates with high-surface-area, high-aspect-ratio features. Our code, based on open-source libraries, takes advantage of the ALD surface chemistry to achieve an extremely efficient two-way coupling between the reactor and feature length scales, and it can provide simulated quartz crystal microbalance and mass spectrometry data at any point of the reactor. By combining experimental surface characterization with simple analysis of growth profiles in a tubular cross-flow reactor, we are able to extract a minimal set of reactions that effectively models the surface chemistry, including the presence of spurious CVD, and to evaluate the impact of surface chemistry on the coating of large, high-surface-area substrates.

  9. Simulation studies of self-organization of microtubules and molecular motors.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jian, Z.; Karpeev, D.; Aranson, I. S.

    We perform Monte Carlo-type simulation studies of the self-organization of microtubules interacting with molecular motors. We model microtubules as stiff polar rods of equal length exhibiting anisotropic diffusion in the plane. The molecular motors are introduced implicitly by specifying probabilistic collision rules that result in realignment of the rods. This approximation of the complicated microtubule-motor interaction by a simple instantaneous collision allows us to bypass the computational bottlenecks associated with the details of the diffusion, the dynamics of the motors, and the reorientation of the microtubules. Consequently, we are able to perform simulations of large ensembles of microtubules and motors on a very large time scale. This simple model reproduces all of the important phenomenology observed in in vitro experiments: formation of vortices at low motor density, and ray-like asters and bundles at higher motor density.
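    The flavor of such a probabilistic collision rule can be sketched in a few lines. The specific rule below (snap both rods to their common bisector with probability p_align) and all parameter values are our illustrative choices, not the paper's calibrated rules; the point is that a purely local, stochastic realignment drives global ordering without simulating the motors explicitly.

```python
import math, random

def collide(theta1, theta2, p_align=0.5, rng=random):
    """Implicit-motor collision rule (illustrative parameters): with
    probability p_align the two rods snap to their common bisector,
    mimicking motor-mediated realignment; otherwise nothing happens."""
    if rng.random() < p_align:
        # bisector of the two orientations, handling angle wrap-around
        mean = math.atan2(math.sin(theta1) + math.sin(theta2),
                          math.cos(theta1) + math.cos(theta2))
        return mean, mean
    return theta1, theta2

def order_parameter(angles):
    """Polar order |<exp(i*theta)>|: ~0 for isotropy, 1 for full alignment."""
    n = len(angles)
    return math.hypot(sum(math.cos(t) for t in angles) / n,
                      sum(math.sin(t) for t in angles) / n)

random.seed(1)
rods = [random.uniform(-math.pi, math.pi) for _ in range(200)]
before = order_parameter(rods)
for _ in range(5000):                    # random binary "collisions"
    i, j = random.sample(range(200), 2)
    rods[i], rods[j] = collide(rods[i], rods[j])
after = order_parameter(rods)            # ordering emerges from local rules
```

    Tracking the order parameter before and after the collision loop shows the initially isotropic rod ensemble developing net polar order, the Monte Carlo analogue of the bundling observed in vitro.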

  10. Parallel Simulation of Unsteady Turbulent Flames

    NASA Technical Reports Server (NTRS)

    Menon, Suresh

    1996-01-01

    Time-accurate simulation of turbulent flames in high Reynolds number flows is a challenging task since both fluid dynamics and combustion must be modeled accurately. To numerically simulate this phenomenon, very large computer resources (both time and memory) are required. Although current vector supercomputers are capable of providing adequate resources for simulations of this nature, the high cost and their limited availability make practical use of such machines less than satisfactory. At the same time, the explicit time integration algorithms used in unsteady flow simulations often possess a very high degree of parallelism, making them very amenable to efficient implementation on large-scale parallel computers. Under these circumstances, distributed memory parallel computers offer an excellent near-term solution for greatly increased computational speed and memory, at a cost that may render unsteady simulations of the type discussed above more feasible and affordable. This paper discusses the study of unsteady turbulent flames using a simulation algorithm that is capable of retaining high parallel efficiency on distributed memory parallel architectures. Numerical studies are carried out using large-eddy simulation (LES). In LES, the scales larger than the grid are computed using a time- and space-accurate scheme, while the unresolved small scales are modeled using eddy viscosity based subgrid models. This is acceptable for the moment/energy closure since the small scales primarily provide a dissipative mechanism for the energy transferred from the large scales. However, for combustion to occur, the species must first undergo mixing at the small scales and then come into molecular contact. Therefore, global models cannot be used.
    Recently, a new model for turbulent combustion was developed, in which the combustion is modeled within the subgrid (small scales) using a methodology that simulates the mixing, the molecular transport, and the chemical kinetics within each LES grid cell. Finite-rate kinetics can be included without any closure, and this approach actually provides a means to predict the turbulent rates and the turbulent flame speed. The subgrid combustion model requires resolution of the local time scales associated with small-scale mixing, molecular diffusion and chemical kinetics; therefore, within each grid cell, a significant amount of computation must be carried out before the large-scale (LES-resolved) effects are incorporated. This makes the approach uniquely suited for parallel processing, and it has been implemented on various systems such as the Intel Paragon, IBM SP-2, Cray T3D and SGI Power Challenge (PC) using the system-independent Message Passing Interface (MPI) library. In this paper, timing data on these machines is reported along with some characteristic results.

  11. Radar and microphysical characteristics of convective storms simulated from a numerical model using a new microphysical parameterization

    NASA Technical Reports Server (NTRS)

    Ferrier, Brad S.; Tao, Wei-Kuo; Simpson, Joanne

    1991-01-01

    The basic features of a new and improved bulk microphysical parameterization capable of simulating the hydrometeor structure of convective systems in all types of large-scale environments (with minimal adjustment of coefficients) are studied. Reflectivities simulated from the model are compared with radar observations of an intense midlatitude convective system. Simulated reflectivities at 105 min using the new four-class ice scheme with the parameterized rain distribution are illustrated. Preliminary results indicate that this new ice scheme is effective in simulating midlatitude continental storms.

  12. Evaluation of integration methods for hybrid simulation of complex structural systems through collapse

    NASA Astrophysics Data System (ADS)

    Del Carpio R., Maikol; Hashemi, M. Javad; Mosqueda, Gilberto

    2017-10-01

    This study examines the performance of integration methods for hybrid simulation of large and complex structural systems in the context of structural collapse due to seismic excitation. The target application is not necessarily real-time testing, but rather models that involve large-scale physical substructures and highly nonlinear numerical models. Four case studies are presented and discussed. In the first case study, the accuracy of integration schemes, including two widely used methods, namely a modified version of the implicit Newmark method with a fixed number of iterations (iterative) and the operator-splitting method (non-iterative), is examined through pure numerical simulations. The second case study presents the results of 10 hybrid simulations repeated with the two aforementioned integration methods, considering various time steps and fixed numbers of iterations for the iterative method. The physical substructure in these tests consists of a single-degree-of-freedom (SDOF) cantilever column with replaceable steel coupons that provides repeatable, highly nonlinear behavior including fracture-type strength and stiffness degradation. In case study three, the implicit Newmark method with a fixed number of iterations is applied to hybrid simulations of a 1:2 scale steel moment frame that includes a relatively complex nonlinear numerical substructure. Lastly, a more complex numerical substructure is considered by coupling a nonlinear computational model of a moment frame to a hybrid model of a 1:2 scale steel gravity frame. The last two case studies are conducted on the same prototype structure, and the selection of time steps and fixed numbers of iterations is closely examined in pre-test simulations. The generated unbalanced forces are used as an index to track the equilibrium error and to predict the accuracy and stability of the simulations.
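    The iterative scheme described above, implicit Newmark with a fixed number of iterations, can be sketched for an SDOF system. This is a generic sketch under standard Newmark average-acceleration assumptions, not the paper's implementation: the `restoring`/`tangent` callables stand in for the feedback that would come from the physical substructure, and the unbalanced residual `r_unb` is exactly the equilibrium-error quantity the abstract uses as an accuracy index.

```python
import math

def newmark_fixed_iter(m, c, restoring, tangent, f_ext, u0, v0,
                       dt, n_steps, n_iter=3, beta=0.25, gamma=0.5):
    """Implicit Newmark (average acceleration) with a FIXED number of Newton
    iterations per step, as in iterative hybrid simulation. `restoring(u)`
    returns the resisting force and `tangent(u)` its stiffness."""
    u, v = u0, v0
    a = (f_ext(0.0) - c * v - restoring(u)) / m
    for n in range(1, n_steps + 1):
        t = n * dt
        u_new = u + dt * v + dt * dt * (0.5 - beta) * a   # predictor
        for _ in range(n_iter):                           # fixed iteration count
            a_new = (u_new - u - dt * v
                     - dt * dt * (0.5 - beta) * a) / (beta * dt * dt)
            v_new = v + dt * ((1.0 - gamma) * a + gamma * a_new)
            # unbalanced (equilibrium-error) force at the current iterate
            r_unb = f_ext(t) - m * a_new - c * v_new - restoring(u_new)
            k_eff = tangent(u_new) + gamma * c / (beta * dt) + m / (beta * dt * dt)
            u_new += r_unb / k_eff                        # Newton correction
        u, v, a = u_new, v_new, a_new
    return u, v

# Sanity check: linear undamped oscillator over one natural period;
# average-acceleration Newmark should return u very close to u0.
w = 2.0 * math.pi   # natural frequency, period = 1 s
u_end, _ = newmark_fixed_iter(1.0, 0.0, lambda u: w * w * u, lambda u: w * w,
                              lambda t: 0.0, 1.0, 0.0, 0.01, 100)
```

    For a linear system the Newton loop converges in one iteration, so the fixed count is harmless; for degrading substructures the residual left after the last iteration is what accumulates as equilibrium error.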

  13. Improved turbulence models based on large eddy simulation of homogeneous, incompressible turbulent flows

    NASA Technical Reports Server (NTRS)

    Bardina, J.; Ferziger, J. H.; Reynolds, W. C.

    1983-01-01

    The physical bases of large eddy simulation and subgrid modeling are studied. A subgrid scale similarity model is developed that can account for system rotation. Large eddy simulations of homogeneous shear flows with system rotation were carried out. Apparently contradictory experimental results were explained. The main effect of rotation is to increase the transverse length scales in the rotation direction, and thereby decrease the rates of dissipation. Experimental results are shown to be affected by conditions at the turbulence producing grid, which make the initial states a function of the rotation rate. A two equation model is proposed that accounts for effects of rotation and shows good agreement with experimental results. In addition, a Reynolds stress model is developed that represents the turbulence structure of homogeneous shear flows very well and can account also for the effects of system rotation.

  14. Canopy BRF simulation of forests with different crown shapes and heights at large scale based on the Radiosity method

    NASA Astrophysics Data System (ADS)

    Song, Jinling; Qu, Yonghua; Wang, Jindi; Wan, Huawei; Liu, Xiaoqing

    2007-06-01

    The Radiosity method is based on computer simulation of the real 3D structure of vegetation, such as leaves, branches and stems, which are composed of many facets. Using this method, we can simulate the canopy reflectance and its bidirectional distribution in the visible and NIR regions. But as vegetation scenes become more complex, more facets are needed to compose them, so large amounts of memory and long times to calculate the view factors are required; these are the bottlenecks of using the Radiosity method to calculate the canopy BRF of large-scale vegetation scenes. We derived a new method to solve this problem. The main idea is to abstract the vegetation crown shapes and to simplify their structures, which reduces the number of facets. The facets are assigned optical properties according to the reflectance, transmission and absorption of the real-structure canopy. Based on this work, we can simulate the canopy BRF of mixed scenes with different vegetation species at large scale. In this study, taking broadleaf trees as an example and based on their structural characteristics, we abstracted their crowns as ellipsoid shells and simulated the visible and NIR canopy BRF of a large-scale scene containing ellipsoids with different crown shapes and heights. From this study we conclude that LAI, LAD, the gap probability, and the sunlit and shaded surfaces are the most important parameters in simulating the simplified vegetation canopy BRF, and that the Radiosity method can supply canopy BRF data under a wide range of conditions for our research.
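    The core of the Radiosity method is a linear system relating each facet's radiosity B to its emission E, reflectance rho, and the view factors F between facets: B = E + diag(rho) F B. The cost of assembling the dense n-by-n view-factor matrix is exactly the O(n^2) bottleneck that motivates the facet-reduction idea above. A toy 3-facet scene with made-up numbers:

```python
import numpy as np

# Hypothetical 3-facet scene: F[i, j] is the view factor from facet i to j
# (rows sum to <= 1), rho holds facet reflectances, E the emitted exitance.
F = np.array([[0.0, 0.3, 0.2],
              [0.3, 0.0, 0.4],
              [0.2, 0.4, 0.0]])
rho = np.array([0.6, 0.5, 0.7])
E = np.array([1.0, 0.0, 0.0])     # facet 0 is the only source

# Radiosity equation B = E + diag(rho) F B  ->  (I - diag(rho) F) B = E
B = np.linalg.solve(np.eye(3) - rho[:, None] * F, E)
```

    The solve is cheap; for a realistic canopy the expense is computing F, which is why abstracting crowns as ellipsoid shells (fewer facets) shrinks the problem quadratically.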

  15. Multibody dynamic simulation of knee contact mechanics

    PubMed Central

    Bei, Yanhong; Fregly, Benjamin J.

    2006-01-01

    Multibody dynamic musculoskeletal models capable of predicting muscle forces and joint contact pressures simultaneously would be valuable for studying clinical issues related to knee joint degeneration and restoration. Current three-dimensional multi-body knee models are either quasi-static with deformable contact or dynamic with rigid contact. This study proposes a computationally efficient methodology for combining multibody dynamic simulation methods with a deformable contact knee model. The methodology requires preparation of the articular surface geometry, development of efficient methods to calculate distances between contact surfaces, implementation of an efficient contact solver that accounts for the unique characteristics of human joints, and specification of an application programming interface for integration with any multibody dynamic simulation environment. The current implementation accommodates natural or artificial tibiofemoral joint models, small or large strain contact models, and linear or nonlinear material models. Applications are presented for static analysis (via dynamic simulation) of a natural knee model created from MRI and CT data and dynamic simulation of an artificial knee model produced from manufacturer’s CAD data. Small and large strain natural knee static analyses required 1 min of CPU time and predicted similar contact conditions except for peak pressure, which was higher for the large strain model. Linear and nonlinear artificial knee dynamic simulations required 10 min of CPU time and predicted similar contact force and torque but different contact pressures, which were lower for the nonlinear model due to increased contact area. This methodology provides an important step toward the realization of dynamic musculoskeletal models that can predict in vivo knee joint motion and loading simultaneously. PMID:15564115

  16. Large-Angle Scattering of Multi-GeV Muons on Thin Lead Targets

    NASA Astrophysics Data System (ADS)

    Longhin, A.; Paoloni, A.; Pupilli, F.

    2015-10-01

    The probability of large-angle scattering of multi-GeV muons in lead targets with a thickness of O(10^-1) radiation lengths is studied. The new estimates presented here are based both on simulation programs (GEANT4 libraries) and on theoretical calculations. In order to validate the results provided by the simulation, a comparison is drawn with experimental data from the literature. This study is particularly relevant for muons originating from νμ CC interactions of CNGS beam neutrinos. In that circumstance the process under study represents the dominant background for the νμ → ντ search in the τ → μ channel of the OPERA experiment at LNGS. Finally, we also investigate, in the CNGS context, possible contributions from the muon photo-nuclear process, which might in principle also produce a large-angle muon scattering signature in the detector.
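    For scale, the Gaussian core of multiple Coulomb scattering is given by the standard PDG Highland parameterization; the study above is concerned precisely with the rare events beyond this core, so the formula sets the baseline width, not the tail probability. The 5 GeV/c momentum and 0.1 X0 thickness below are illustrative values in the regime the abstract describes.

```python
import math

def highland_theta0(p_mev, x_over_X0, beta=1.0, z=1):
    """RMS plane-projected multiple-scattering angle (radians) from the
    PDG Highland formula:
        theta0 = (13.6 MeV / (beta*c*p)) * z * sqrt(x/X0) * (1 + 0.038 ln(x/X0)).
    Describes only the Gaussian core, not the large-angle (Rutherford) tail."""
    return (13.6 / (beta * p_mev)) * z * math.sqrt(x_over_X0) \
           * (1.0 + 0.038 * math.log(x_over_X0))

theta0 = highland_theta0(5000.0, 0.1)   # 5 GeV/c muon, 0.1 radiation lengths
print(theta0)                           # ≈ 0.8 mrad
```

    A multi-GeV muon thus has a sub-milliradian core deflection in such a target; anything at tens of milliradians is a tail or photo-nuclear event, which is why it can fake a τ → μ kink.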

  17. A parallel algorithm for switch-level timing simulation on a hypercube multiprocessor

    NASA Technical Reports Server (NTRS)

    Rao, Hariprasad Nannapaneni

    1989-01-01

    The parallel approach to speeding up simulation is studied, specifically the simulation of digital LSI MOS circuitry on the Intel iPSC/2 hypercube. The simulation algorithm is based on RSIM, an event-driven switch-level simulator that incorporates a linear transistor model for simulating digital MOS circuits. Parallel processing techniques based on the concepts of Virtual Time and rollback are utilized so that portions of the circuit may be simulated on separate processors in parallel, for as large an increase in speed as possible. A partitioning algorithm is also developed in order to subdivide the circuit for parallel processing.
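    The Virtual Time/rollback idea (Jefferson's Time Warp) is that each logical process executes events optimistically and, on receiving a "straggler" event with a timestamp in its past, restores a saved state and replays. The sketch below is a minimal single-process illustration with a toy state transition; it omits antimessages, GVT computation, and inter-processor messaging, and is not RSIM's actual code.

```python
class LogicalProcess:
    """Minimal Time Warp sketch: optimistic execution with state saving
    and rollback on straggler events. Events are (virtual_time, delta)."""

    def __init__(self):
        self.lvt = 0.0                     # local virtual time
        self.state = 0
        self.checkpoints = [(0.0, 0)]      # (virtual time, state) history
        self.done = []                     # events already executed

    def _execute(self, event):
        t, delta = event
        self.state += delta                # toy state transition
        self.lvt = t
        self.done.append(event)
        self.checkpoints.append((t, self.state))

    def receive(self, event):
        t, _ = event
        if t < self.lvt:                   # straggler -> roll back and replay
            while self.checkpoints[-1][0] >= t:
                self.checkpoints.pop()
            self.lvt, self.state = self.checkpoints[-1]
            redo = sorted([e for e in self.done if e[0] >= t] + [event])
            self.done = [e for e in self.done if e[0] < t]
            for e in redo:
                self._execute(e)
        else:
            self._execute(event)

lp = LogicalProcess()
lp.receive((1.0, +5))
lp.receive((3.0, +2))
lp.receive((2.0, -4))   # straggler: rolls back past t=3, replays in order
```

    After the straggler arrives, the process ends in the same state as if events had been processed in timestamp order, which is the correctness guarantee rollback provides.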

  18. APEX Model Simulation for Row Crop Watersheds with Agroforestry and Grass Buffers

    USDA-ARS?s Scientific Manuscript database

    Watershed model simulation has become an important tool in studying ways and means to reduce the transport of agricultural pollutants. Conducting field experiments to assess buffer influences on water quality is constrained by the large-scale nature of watersheds, high experimental costs, private owner...

  19. Thermodynamic sensitivities in observed and simulated extreme-rain-producing mesoscale convective systems

    NASA Astrophysics Data System (ADS)

    Schumacher, R. S.; Peters, J. M.

    2015-12-01

    Mesoscale convective systems (MCSs) are responsible for a large fraction of warm-season extreme rainfall events over the continental United States, as well as other midlatitude regions globally. The rainfall production in these MCSs is determined by numerous factors, including the large-scale forcing for ascent, the organization of the convection, cloud microphysical processes, and the surrounding thermodynamic and kinematic environment. Furthermore, heavy-rain-producing MCSs are most common at night, which means that well-studied mechanisms for MCS maintenance and organization such as cold pools (gravity currents) are not always at work. In this study, we use numerical model simulations and recent field observations to investigate the sensitivity of low-level MCS structures, and their influences on rainfall, to the details of the thermodynamic environment. In particular, small alterations to the initial conditions in idealized and semi-idealized simulations result in comparatively large precipitation changes, both in terms of intensity and spatial distribution. The uncertainties in the thermodynamic environments in the model simulations will be compared with high-resolution observations from the Plains Elevated Convection At Night (PECAN) field experiment in 2015. The results have implications for the paradigms of "surface-based" versus "elevated" convection, as well as for the predictability of warm-season convective rainfall.

  20. Numerical Estimation of the Outer Bank Resistance Characteristics in AN Evolving Meandering River

    NASA Astrophysics Data System (ADS)

    Wang, D.; Konsoer, K. M.; Rhoads, B. L.; Garcia, M. H.; Best, J.

    2017-12-01

    Few studies have examined the three-dimensional flow structure and its interaction with bed morphology within elongate loops of large meandering rivers. The present study uses a numerical model to simulate the flow pattern and sediment transport, especially the flow close to the outer bank, at two elongate meander loops of the Wabash River, USA. The numerical grid for the model is based on a combination of airborne LIDAR data on the floodplains and multibeam data within the river channel. A finite element method (FEM) is used to solve the non-hydrostatic RANS equations with a k-epsilon turbulence closure scheme. The high-resolution topographic data allow detailed numerical simulation of flow patterns along the outer bank, and model calibration involves comparing simulated velocities to ADCP measurements at 41 cross sections near this bank. Results indicate that flow along the outer bank is strongly influenced by large resistance elements, including woody debris, large erosional scallops within the bank face, and outcropping bedrock. In general, patterns of bank migration conform to zones of high near-bank velocity and shear stress. Using the existing model, different virtual events can be simulated to explore the impacts of different resistance characteristics on patterns of flow, sediment transport, and bank erosion.

  1. A Large-eddy Simulation Study of Vertical Axis Wind Turbine Wakes in the Atmospheric Boundary Layer

    NASA Astrophysics Data System (ADS)

    Shamsoddin, Sina; Porté-Agel, Fernando

    2016-04-01

    Vertical axis wind turbines (VAWTs) offer some advantages over their horizontal axis counterparts and are being considered as a viable alternative to conventional horizontal axis wind turbines (HAWTs). Nevertheless, a relative shortage of scientific, academic and technical investigations of VAWTs, compared with HAWTs, is observed in the wind energy community. With this in mind, in this work we aim to study the wake of a single VAWT placed in the atmospheric boundary layer using large-eddy simulation (LES) coupled with an actuator line model (ALM). It is noteworthy that this is the first time such a study has been performed. To do this, for a typical 1 MW VAWT design, the variation of the power coefficient with both the chord length of the blades and the tip-speed ratio is first analyzed using LES-ALM, and an optimum combination of chord length and tip-speed ratio is obtained. Subsequently, the wake of a VAWT with these optimum specifications is thoroughly examined by showing different relevant mean and turbulent wake flow statistics. Keywords: vertical axis wind turbine (VAWT); VAWT wake; atmospheric boundary layer (ABL); large eddy simulation (LES); actuator line model (ALM); turbulence.

  2. Large eddy simulation of a wing-body junction flow

    NASA Astrophysics Data System (ADS)

    Ryu, Sungmin; Emory, Michael; Campos, Alejandro; Duraisamy, Karthik; Iaccarino, Gianluca

    2014-11-01

    We present numerical simulations of the wing-body junction flow experimentally investigated by Devenport & Simpson (1990). Wall-junction flows are common in engineering applications, but the relevant flow physics close to the corner region is not well understood. Moreover, the performance of turbulence models for the body-junction case is not well characterized. Motivated by these gaps, we have numerically investigated the case with Reynolds-averaged Navier-Stokes (RANS) and large eddy simulation (LES) approaches. The Vreman model used for the LES and the SST k-ω model used for the RANS simulation are validated with a focus on the ability to predict turbulence statistics near the junction region. A sensitivity study of the form of the Vreman model will also be presented. This work is funded under NASA Cooperative Agreement NNX11AI41A (Technical Monitor Dr. Stephen Woodruff).

  3. Soapy: an adaptive optics simulation written purely in Python for rapid concept development

    NASA Astrophysics Data System (ADS)

    Reeves, Andrew

    2016-07-01

    Soapy is a newly developed adaptive optics (AO) simulation which aims to be a flexible and fast-to-use toolkit for many applications in the field of AO. It is written purely in the Python language, adding to and taking advantage of the already rich ecosystem of scientific libraries and programs. The simulation has been designed to be extremely modular, such that each component can be used stand-alone for projects which do not require a full end-to-end simulation. Ease of use, modularity and code clarity have been prioritised at the expense of computational performance. Though this means the code is not yet suitable for large studies of Extremely Large Telescope AO systems, it is well suited to education, exploration of new AO concepts, and investigations of current-generation telescopes.

  4. Large Eddy Simulation of complex sidearms subject to solar radiation and surface cooling.

    PubMed

    Dittko, Karl A; Kirkpatrick, Michael P; Armfield, Steven W

    2013-09-15

    Large Eddy Simulation (LES) is used to model two lake sidearms subject to heating from solar radiation and cooling from a surface flux. The sidearms are part of Lake Audrey, NJ, USA and Lake Alexandrina, SA, Australia. The simulation domains are created using bathymetry data and the boundary is modelled with an Immersed Boundary Method. We investigate the cooling and heating phases with separate quasi-steady state simulations. Differential heating occurs in the cavity due to the changing depth. The resulting temperature gradients drive lateral flows. These flows are the dominant transport process in the absence of wind. Study in this area is important in water quality management as the lateral circulation can carry particles and various pollutants, transporting them to and mixing them with the main lake body. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. An RSM Study of the Effects of Simulation Work and Metamodel Specification on the Statistical Quality of Metamodel Estimates

    DTIC Science & Technology

    1994-03-01

    optimize, and perform "what-if" analysis on a complicated simulation model of the greenhouse effect. Regression metamodels were applied to several modules of...the large integrated assessment model of the greenhouse effect. In this study, the metamodels gave "acceptable forecast errors" and were shown to

  6. The Graphical Display of Simulation Results, with Applications to the Comparison of Robust IRT Estimators of Ability.

    ERIC Educational Resources Information Center

    Thissen, David; Wainer, Howard

    Simulation studies of the performance of (potentially) robust statistical estimation produce large quantities of numbers in the form of performance indices of the various estimators under various conditions. This report presents a multivariate graphical display used to aid in the digestion of the plentiful results in a current study of Item…

  7. Assessing the Accuracy of Classwide Direct Observation Methods: Two Analyses Using Simulated and Naturalistic Data

    ERIC Educational Resources Information Center

    Dart, Evan H.; Radley, Keith C.; Briesch, Amy M.; Furlow, Christopher M.; Cavell, Hannah J.

    2016-01-01

    Two studies investigated the accuracy of eight different interval-based group observation methods that are commonly used to assess the effects of classwide interventions. In Study 1, a Microsoft Visual Basic program was created to simulate a large set of observational data. Binary data were randomly generated at the student level to represent…

  8. WRF nested large-eddy simulations of deep convection during SEAC4RS

    NASA Astrophysics Data System (ADS)

    Heath, Nicholas K.; Fuelberg, Henry E.; Tanelli, Simone; Turk, F. Joseph; Lawson, R. Paul; Woods, Sarah; Freeman, Sean

    2017-04-01

    Large-eddy simulations (LES) and observations are often combined to increase our understanding and improve the simulation of deep convection. This study evaluates a nested LES method that uses the Weather Research and Forecasting (WRF) model and, specifically, tests whether the nested LES approach is useful for studying deep convection during a real-world case. The method was applied on 2 September 2013, a day of continental convection that occurred during the Studies of Emissions and Atmospheric Composition, Clouds and Climate Coupling by Regional Surveys (SEAC4RS) campaign. Mesoscale WRF output (1.35 km grid length) was used to drive a nested LES with 450 m grid spacing, which then drove a 150 m domain. Results reveal that the 450 m nested LES reasonably simulates observed reflectivity distributions and aircraft-observed in-cloud vertical velocities during the study period. However, when examining convective updrafts, reducing the grid spacing to 150 m worsened results. We find that the simulated updrafts in the 150 m run become too diluted by entrainment, thereby generating updrafts that are weaker than observed. Lastly, the 450 m simulation is combined with observations to study the processes forcing strong midlevel cloud/updraft edge downdrafts that were observed on 2 September. Results suggest that these strong downdrafts are forced by evaporative cooling due to mixing and by perturbation pressure forces acting to restore mass continuity around neighboring updrafts. We conclude that the WRF nested LES approach, with further development and evaluation, could potentially provide an effective method for studying deep convection in real-world cases.

  9. Large-scale derived flood frequency analysis based on continuous simulation

    NASA Astrophysics Data System (ADS)

    Dung Nguyen, Viet; Hundecha, Yeshewatesfa; Guse, Björn; Vorogushyn, Sergiy; Merz, Bruno

    2016-04-01

    There is an increasing need for spatially consistent flood risk assessments at the regional scale (several 100,000 km2), in particular in the insurance industry and for national risk reduction strategies. However, most large-scale flood risk assessments are composed of smaller-scale assessments and show spatial inconsistencies. To overcome this deficit, a large-scale flood model composed of a weather generator and catchment models was developed, reflecting the spatially inherent heterogeneity. The weather generator is a multisite and multivariate stochastic model capable of generating synthetic meteorological fields (precipitation, temperature, etc.) at daily resolution for the regional scale. These fields respect the observed autocorrelation, spatial correlation and covariance between the variables. They are used as input to the catchment models. A long-term simulation of this combined system enables the derivation of very long discharge series at many catchment locations, serving as a basis for spatially consistent flood risk estimates at the regional scale. This combined model was set up and validated for major river catchments in Germany. The weather generator was trained on 53 years of observation data at 528 stations covering not only all of Germany but also parts of France, Switzerland, the Czech Republic and Austria, with an aggregate spatial coverage of 443,931 km2. 10,000 years of daily meteorological fields for the study area were generated. Likewise, rainfall-runoff simulations with SWIM were performed for the entire Elbe, Rhine, Weser, Donau and Ems catchments. The validation results illustrate a good performance of the combined system, as the simulated flood magnitudes and frequencies agree well with the observed flood data. Based on continuous simulation, this model chain is then used to estimate flood quantiles for the whole of Germany, including upstream headwater catchments in neighbouring countries.
This continuous large scale approach overcomes the several drawbacks reported in traditional approaches for the derived flood frequency analysis and therefore is recommended for large scale flood risk case studies.
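The continuous-simulation approach described above, deriving design floods directly from a very long synthetic discharge series rather than fitting an extreme-value distribution, can be sketched in a few lines. The gamma-distributed series and the `flood_quantile` helper below are hypothetical illustrations, not data or code from the study:

```python
import numpy as np

def flood_quantile(daily_q, days_per_year=365, return_period=100.0):
    """Estimate the T-year flood from a long synthetic daily discharge
    series by taking empirical quantiles of the annual maxima
    (continuous-simulation approach, no distribution fitting)."""
    n_years = len(daily_q) // days_per_year
    annual_max = daily_q[:n_years * days_per_year] \
        .reshape(n_years, days_per_year).max(axis=1)
    # Non-exceedance probability of the T-year event is 1 - 1/T
    return np.quantile(annual_max, 1.0 - 1.0 / return_period)

rng = np.random.default_rng(0)
# Hypothetical 10,000-year synthetic discharge series (m^3/s)
q = rng.gamma(shape=2.0, scale=50.0, size=10_000 * 365)
q100 = flood_quantile(q, return_period=100.0)
```

With 10,000 simulated years, even the 100- or 1,000-year quantile is estimated from many actual sample maxima, which is the key advantage over fitting a distribution to a few decades of observations.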

  10. Simulator study of flight characteristics of a large twin-fuselage cargo transport airplane during approach and landing

    NASA Technical Reports Server (NTRS)

    Grantham, W. D.; Deal, P. L.; Keyser, G. L., Jr.; Smith, P. M.

    1983-01-01

A six-degree-of-freedom, ground-based simulator study was conducted to evaluate the low-speed flight characteristics of a twin-fuselage cargo transport airplane and to compare these characteristics with those of a large, single-fuselage (reference) transport configuration which was similar to the Lockheed C-5A airplane. The primary piloting task was approach and landing. The results indicated that in order to achieve 'acceptable' low-speed handling qualities on the twin-fuselage concept, considerable stability and control augmentation was required, and although the augmented airplane could be landed safely under adverse conditions, the roll performance of the aircraft had to be improved appreciably before the handling qualities were rated 'satisfactory.' These ground-based simulation results indicated that a value of t_(phi=30) (the time required to bank 30 deg) of less than 6 sec should result in 'acceptable' roll response characteristics, and when t_(phi=30) is less than 3.8 sec, 'satisfactory' roll response should be attainable on such large and unusually configured aircraft as the subject twin-fuselage cargo transport concept.

  11. RACORO continental boundary layer cloud investigations. Part I: Case study development and ensemble large-scale forcings

    DOE PAGES

    Vogelmann, Andrew M.; Fridlind, Ann M.; Toto, Tami; ...

    2015-06-19

Observation-based modeling case studies of continental boundary layer clouds have been developed to study cloudy boundary layers, aerosol influences upon them, and their representation in cloud- and global-scale models. Three 60-hour case study periods span the temporal evolution of cumulus, stratiform, and drizzling boundary layer cloud systems, representing mixed and transitional states rather than idealized or canonical cases. Based on in-situ measurements from the RACORO field campaign and remote-sensing observations, the cases are designed with a modular configuration to simplify use in large-eddy simulations (LES) and single-column models. Aircraft measurements of aerosol number size distribution are fit to lognormal functions for concise representation in models. Values of the aerosol hygroscopicity parameter, κ, are derived from observations to be ~0.10, which are lower than the 0.3 typical over continents and suggestive of a large aerosol organic fraction. Ensemble large-scale forcing datasets are derived from the ARM variational analysis, ECMWF forecasts, and a multi-scale data assimilation system. The forcings are assessed through comparison of measured bulk atmospheric and cloud properties to those computed in 'trial' large-eddy simulations, where more efficient run times are enabled through modest reductions in grid resolution and domain size compared to the full-sized LES grid. Simulations capture many of the general features observed, but the state-of-the-art forcings were limited in representing details of cloud onset, and tight gradients and high-resolution transients of importance. Methods for improving the initial conditions and forcings are discussed. The cases developed are available to the general modeling community for studying continental boundary clouds.

  12. RACORO Continental Boundary Layer Cloud Investigations: 1. Case Study Development and Ensemble Large-Scale Forcings

    NASA Technical Reports Server (NTRS)

    Vogelmann, Andrew M.; Fridlind, Ann M.; Toto, Tami; Endo, Satoshi; Lin, Wuyin; Wang, Jian; Feng, Sha; Zhang, Yunyan; Turner, David D.; Liu, Yangang; ...

    2015-01-01

Observation-based modeling case studies of continental boundary layer clouds have been developed to study cloudy boundary layers, aerosol influences upon them, and their representation in cloud- and global-scale models. Three 60 h case study periods span the temporal evolution of cumulus, stratiform, and drizzling boundary layer cloud systems, representing mixed and transitional states rather than idealized or canonical cases. Based on in situ measurements from the Routine AAF (Atmospheric Radiation Measurement (ARM) Aerial Facility) CLOWD (Clouds with Low Optical Water Depth) Optical Radiative Observations (RACORO) field campaign and remote sensing observations, the cases are designed with a modular configuration to simplify use in large-eddy simulations (LES) and single-column models. Aircraft measurements of aerosol number size distribution are fit to lognormal functions for concise representation in models. Values of the aerosol hygroscopicity parameter, kappa, are derived from observations to be approximately 0.10, which are lower than the 0.3 typical over continents and suggestive of a large aerosol organic fraction. Ensemble large-scale forcing data sets are derived from the ARM variational analysis, European Centre for Medium-Range Weather Forecasts, and a multiscale data assimilation system. The forcings are assessed through comparison of measured bulk atmospheric and cloud properties to those computed in "trial" large-eddy simulations, where more efficient run times are enabled through modest reductions in grid resolution and domain size compared to the full-sized LES grid. Simulations capture many of the general features observed, but the state-of-the-art forcings were limited in representing details of cloud onset, and tight gradients and high-resolution transients of importance. Methods for improving the initial conditions and forcings are discussed. 
The cases developed are available to the general modeling community for studying continental boundary clouds.

  13. RACORO continental boundary layer cloud investigations: 1. Case study development and ensemble large-scale forcings

    NASA Astrophysics Data System (ADS)

    Vogelmann, Andrew M.; Fridlind, Ann M.; Toto, Tami; Endo, Satoshi; Lin, Wuyin; Wang, Jian; Feng, Sha; Zhang, Yunyan; Turner, David D.; Liu, Yangang; Li, Zhijin; Xie, Shaocheng; Ackerman, Andrew S.; Zhang, Minghua; Khairoutdinov, Marat

    2015-06-01

Observation-based modeling case studies of continental boundary layer clouds have been developed to study cloudy boundary layers, aerosol influences upon them, and their representation in cloud- and global-scale models. Three 60 h case study periods span the temporal evolution of cumulus, stratiform, and drizzling boundary layer cloud systems, representing mixed and transitional states rather than idealized or canonical cases. Based on in situ measurements from the Routine AAF (Atmospheric Radiation Measurement (ARM) Aerial Facility) CLOWD (Clouds with Low Optical Water Depth) Optical Radiative Observations (RACORO) field campaign and remote sensing observations, the cases are designed with a modular configuration to simplify use in large-eddy simulations (LES) and single-column models. Aircraft measurements of aerosol number size distribution are fit to lognormal functions for concise representation in models. Values of the aerosol hygroscopicity parameter, κ, are derived from observations to be 0.10, which are lower than the 0.3 typical over continents and suggestive of a large aerosol organic fraction. Ensemble large-scale forcing data sets are derived from the ARM variational analysis, European Centre for Medium-Range Weather Forecasts, and a multiscale data assimilation system. The forcings are assessed through comparison of measured bulk atmospheric and cloud properties to those computed in "trial" large-eddy simulations, where more efficient run times are enabled through modest reductions in grid resolution and domain size compared to the full-sized LES grid. Simulations capture many of the general features observed, but the state-of-the-art forcings were limited in representing details of cloud onset, and tight gradients and high-resolution transients of importance. Methods for improving the initial conditions and forcings are discussed. The cases developed are available to the general modeling community for studying continental boundary clouds.
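The lognormal representation of aerosol size distributions mentioned in this abstract can be illustrated with a simple moment-matching fit in log space. The sampled diameters, the mode parameters (Dg = 100 nm, sigma_g = 1.6), and the `fit_lognormal_mode` helper below are hypothetical illustrations, not values or code from the campaign:

```python
import numpy as np

def fit_lognormal_mode(diameters):
    """Fit a single lognormal mode to sampled particle diameters by
    matching moments in log space: the geometric mean diameter Dg and
    geometric standard deviation sigma_g fully specify the mode."""
    log_d = np.log(diameters)
    Dg = np.exp(log_d.mean())       # geometric mean diameter
    sigma_g = np.exp(log_d.std())   # geometric standard deviation
    return Dg, sigma_g

rng = np.random.default_rng(1)
# Hypothetical accumulation-mode sample: Dg = 100 nm, sigma_g = 1.6
d = rng.lognormal(mean=np.log(100.0), sigma=np.log(1.6), size=50_000)
Dg, sg = fit_lognormal_mode(d)
```

Two numbers per mode (plus a total number concentration) then suffice to hand the measured distribution to a model, which is the "concise representation" the abstract refers to.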

  14. The formation of disc galaxies in high-resolution moving-mesh cosmological simulations

    NASA Astrophysics Data System (ADS)

    Marinacci, Federico; Pakmor, Rüdiger; Springel, Volker

    2014-01-01

We present cosmological hydrodynamical simulations of eight Milky Way-sized haloes that have been previously studied with dark matter only in the Aquarius project. For the first time, we employ the moving-mesh code AREPO in zoom simulations combined with a comprehensive model for galaxy formation physics designed for large cosmological simulations. In most of the eight haloes, our simulations form strongly disc-dominated systems with realistic rotation curves, close to exponential surface density profiles, a stellar mass to halo mass ratio that matches expectations from abundance matching techniques, and galaxy sizes and ages consistent with expectations from large galaxy surveys in the local Universe. There is no evidence for any dark matter core formation in our simulations, even though they include repeated baryonic outflows by supernova-driven winds and black hole quasar feedback. For one of our haloes, the object studied in the recent `Aquila' code comparison project, we carried out a resolution study with our techniques, covering a dynamic range of 64 in mass resolution. Without any change in our feedback parameters, the final galaxy properties are reassuringly similar, in contrast to other modelling techniques used in the field that are inherently resolution dependent. This success in producing realistic disc galaxies is achieved, in the context of our interstellar medium treatment, without resorting to a high density threshold for star formation, a low star formation efficiency, or early stellar feedback, factors deemed crucial for disc formation by other recent numerical studies.

  15. Multi-model analysis of terrestrial carbon cycles in Japan: limitations and implications of model calibration using eddy flux observations

    NASA Astrophysics Data System (ADS)

    Ichii, K.; Suzuki, T.; Kato, T.; Ito, A.; Hajima, T.; Ueyama, M.; Sasai, T.; Hirata, R.; Saigusa, N.; Ohtani, Y.; Takagi, K.

    2010-07-01

Terrestrial biosphere models show large differences when simulating carbon and water cycles, and reducing these differences is a priority for developing more accurate estimates of the condition of terrestrial ecosystems and future climate change. To reduce uncertainties and improve the understanding of their carbon budgets, we investigated the utility of eddy flux datasets for improving model simulations and reducing the variability among multi-model outputs of terrestrial biosphere models in Japan. Using nine terrestrial biosphere models (support vector machine-based regressions, TOPS, CASA, VISIT, Biome-BGC, DAYCENT, SEIB, LPJ, and TRIFFID), we conducted two simulations: (1) point simulations at four eddy flux sites in Japan and (2) spatial simulations for Japan with a default model (based on original settings) and a modified model (based on model parameter tuning using eddy flux data). Generally, models using default settings showed large deviations from observations, with large model-by-model variability. However, after we calibrated the model parameters using eddy flux data (GPP, RE and NEP), most models successfully simulated seasonal variations in the carbon cycle, with less variability among models. We also found that interannual variations in the carbon cycle are mostly consistent among models and observations. The spatial analysis also showed a large reduction in the variability among model outputs. This study demonstrates that careful validation and calibration of models with available eddy flux data reduces model-by-model differences. Nevertheless, site history, analysis of model structural differences, and a more objective model calibration procedure should be included in further analysis.

  16. Software for Brain Network Simulations: A Comparative Study

    PubMed Central

    Tikidji-Hamburyan, Ruben A.; Narayana, Vikram; Bozkus, Zeki; El-Ghazawi, Tarek A.

    2017-01-01

Numerical simulations of brain networks are a critical part of our efforts in understanding brain functions under pathological and normal conditions. For several decades, the community has developed many software packages and simulators to accelerate research in computational neuroscience. In this article, we select the three most popular simulators, as determined by the number of models in the ModelDB database, namely NEURON, GENESIS, and BRIAN, and perform an independent evaluation of these simulators. In addition, we study NEST, one of the leading simulators of the Human Brain Project. First, we study them based on one of the most important characteristics, the range of supported models. Our investigation reveals that brain network simulators may be biased toward supporting a specific set of models. However, all simulators tend to expand the supported range of models by providing a universal environment for the computational study of individual neurons and brain networks. Next, our investigations on the characteristics of computational architecture and efficiency indicate that all simulators compile the most computationally intensive procedures into binary code, with the aim of maximizing their computational performance. However, not all simulators provide the simplest method for module development and/or guarantee efficient binary code. Third, a study of their amenability for high-performance computing reveals that NEST can almost transparently map an existing model on a cluster or multicore computer, while NEURON requires code modification if a model developed for a single computer has to be mapped on a computational cluster. Interestingly, parallelization is the weakest characteristic of BRIAN, which provides no support for cluster computations and limited support for multicore computers. Fourth, we identify the level of user support and frequency of usage for all simulators. 
Finally, we carry out an evaluation using two case studies: a large network with simplified neural and synaptic models and a small network with detailed models. These two case studies allow us to avoid any bias toward a particular software package. The results indicate that BRIAN provides the most concise language for both cases considered. Furthermore, as expected, NEST mostly favors large network models, while NEURON is better suited for detailed models. Overall, the case studies reinforce our general observation that simulators exhibit a bias in computational performance toward specific types of brain network models. PMID:28775687

  17. Tritium trick

    NASA Technical Reports Server (NTRS)

    Green, W. V.; Zukas, E. G.; Eash, D. T.

    1971-01-01

Large controlled amounts of helium in uniform concentration in thick samples can be obtained through the radioactive decay of dissolved tritium gas to He-3. The term 'tritium trick' applies to the case when helium added by this method is used to simulate (n,alpha) production of helium in simulated hard-flux radiation damage studies.
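The helium loading achieved by the tritium trick follows directly from radioactive decay: after time t, a fraction 1 - exp(-lambda*t) of the dissolved tritium has become He-3, where lambda = ln(2)/t_half and the tritium half-life is about 12.32 years. A minimal sketch (the function name is illustrative, not from the record):

```python
import math

T_HALF_TRITIUM_YR = 12.32  # tritium half-life in years

def helium3_fraction(t_years):
    """Fraction of the initially dissolved tritium that has decayed
    to He-3 after t_years ('tritium trick' helium loading)."""
    lam = math.log(2) / T_HALF_TRITIUM_YR
    return 1.0 - math.exp(-lam * t_years)

# Charging a sample for one year converts roughly 5.5% of the tritium
f = helium3_fraction(1.0)
```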

  18. Software Partitioning Schemes for Advanced Simulation Computer Systems. Final Report.

    ERIC Educational Resources Information Center

    Clymer, S. J.

    Conducted to design software partitioning techniques for use by the Air Force to partition a large flight simulator program for optimal execution on alternative configurations, this study resulted in a mathematical model which defines characteristics for an optimal partition, and a manually demonstrated partitioning algorithm design which…

  19. Predicting agricultural impacts of large-scale drought: 2012 and the case for better modeling

    USDA-ARS?s Scientific Manuscript database

    We present an example of a simulation-based forecast for the 2012 U.S. maize growing season produced as part of a high-resolution, multi-scale, predictive mechanistic modeling study designed for decision support, risk management, and counterfactual analysis. The simulations undertaken for this analy...

  20. A simulation study of particle energization observed by THEMIS spacecraft during a substorm

    NASA Astrophysics Data System (ADS)

    Ashour-Abdalla, Maha; Bosqued, Jean-Michel; El-Alaoui, Mostafa; Peroomian, Vahe; Zhou, Meng; Richard, Robert; Walker, Raymond; Runov, Andrei; Angelopoulos, Vassilis

    2009-09-01

    Energetic ions with hundreds of keV energy are frequently observed in the near-Earth tail during magnetospheric substorms. We examined the sources and acceleration of ions during a magnetospheric substorm on 1 March 2008 by using Time History of Events and Macroscale Interactions during Substorms (THEMIS) and Cluster observations and numerical simulations. Four of the THEMIS spacecraft were aligned at yGSM = 6 RE during a very large substorm (AE = 1200) while the Cluster spacecraft were located about 5 RE above the auroral ionosphere. For 2 h before the substorm, Cluster observed ionospheric oxygen flowing out into the magnetosphere. After substorm onset the THEMIS P3 and P4 spacecraft located in the near-Earth tail (xGSM = -9 RE and -8 RE, respectively) observed large fluxes of energetic ions up to 500 keV. We used calculations of millions of ions of solar wind and ionospheric origin in the time-dependent electric and magnetic fields from a global magnetohydrodynamic simulation of this event to study the source of these ions and their acceleration. The simulation did a good job of reproducing the particle observations. Both solar wind protons and ionospheric oxygen were accelerated by nonadiabatic motion across large (>˜5 mV/m) total electric fields (both potential and induced). The acceleration occurred in the "wall" region of the near-Earth tail where nonadiabatic motion dominates over convection and the particles move rapidly across the tail. The acceleration occurred mostly in regions with large electric fields and nonadiabatic motion. There was relatively little acceleration in regions with large electric fields and adiabatic motion or small electric fields and nonadiabatic motion. Prior to substorm onset, ionospheric ions were a significant contributor to the cross-tail current, but after onset, solar wind ions become more dominant.

  1. Earthquakes and aseismic creep associated with growing fault-related folds

    NASA Astrophysics Data System (ADS)

    Burke, C. C.; Johnson, K. M.

    2017-12-01

Blind thrust faults overlain by growing anticlinal folds pose a seismic risk to many urban centers in the world. A large body of research has focused on using fold and growth strata geometry to infer the rate of slip on the causative fault and the distribution of off-fault deformation. However, because few large earthquakes have been recorded on blind faults underlying folds, it remains unclear how much of the folding occurs during large earthquakes versus during the interseismic period, accommodated by aseismic creep. Numerous kinematic and mechanical models as well as field observations demonstrate that flexural slip between sedimentary layers is an important mechanism of fault-related folding. In this study, we run boundary element models of flexural-slip fault-related folding to examine the extent to which energy is released seismically or aseismically throughout the evolution of the fold and fault. We assume a fault embedded in viscoelastic mechanical layering under frictional contact. We assign depth-dependent frictional properties and adopt a rate-state friction formulation to simulate slip over time. We find that in many cases, a large percentage (greater than 50%) of fold growth is accomplished by aseismic creep at bedding and fault contacts. The largest earthquakes tend to occur on the fault, but a significant portion of the seismicity is distributed across bedding contacts throughout the fold. We are currently working to quantify these results using a large number of simulations with various fold and fault geometries. Outputs include the location, duration, and magnitude of events. As more simulations are completed, these results from different fold and fault geometries will provide insight into how much folding occurs during these slip events. Generalizations from these simulations can be compared with observations of active fault-related folds and used in the future to inform seismic hazard studies.

  2. Remote sensing data with the conditional latin hypercube sampling and geostatistical approach to delineate landscape changes induced by large chronological physical disturbances.

    PubMed

    Lin, Yu-Pin; Chu, Hone-Jay; Wang, Cheng-Long; Yu, Hsiao-Hsuan; Wang, Yung-Chieh

    2009-01-01

This study applies variogram analyses of normalized difference vegetation index (NDVI) images derived from SPOT HRV images obtained before and after the Chi-Chi earthquake in the Chenyulan watershed, Taiwan, as well as images after four large typhoons, to delineate the spatial patterns, spatial structures and spatial variability of landscapes caused by these large disturbances. The conditional Latin hypercube sampling approach was applied to select samples from multiple NDVI images. Kriging and sequential Gaussian simulation with sufficient samples were then used to generate maps of the NDVI images. The variography of the NDVI image results demonstrates that spatial patterns of disturbed landscapes were successfully delineated by variogram analysis in the study areas. The high-magnitude Chi-Chi earthquake created spatial landscape variations in the study area. After the earthquake, the cumulative impacts of typhoons on landscape patterns depended on the magnitudes and paths of the typhoons, but were not always evident in the spatiotemporal variability of landscapes in the study area. The statistics and spatial structures of multiple NDVI images were captured by 3,000 samples from 62,500 grid cells in the NDVI images. Kriging and sequential Gaussian simulation with the 3,000 samples effectively reproduced the spatial patterns of the NDVI images. Moreover, the proposed approach, which integrates conditional Latin hypercube sampling, variograms, kriging and sequential Gaussian simulation of remotely sensed images, efficiently monitors, samples and maps the effects of large chronological disturbances on the spatial characteristics of landscape changes, including spatial variability and heterogeneity.
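The variogram analysis at the heart of this record rests on the empirical semivariogram, gamma(h) = mean of 0.5*(z_i - z_j)^2 over point pairs separated by roughly lag h. A minimal sketch on synthetic data (the sampling pattern and white-noise field are hypothetical stand-ins, not the study's NDVI data):

```python
import numpy as np

def empirical_variogram(z, coords, lags, tol):
    """Empirical semivariogram: for each lag h, average
    0.5 * (z_i - z_j)^2 over pairs whose separation distance
    falls within h +/- tol."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (z[:, None] - z[None, :]) ** 2
    gamma = []
    for h in lags:
        mask = (d > h - tol) & (d <= h + tol) & (d > 0)
        gamma.append(sq[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)

rng = np.random.default_rng(2)
coords = rng.uniform(0, 100, size=(400, 2))
z = rng.normal(size=400)   # spatially uncorrelated field: flat variogram
g = empirical_variogram(z, coords, lags=np.array([10.0, 20.0, 30.0]), tol=5.0)
```

A flat variogram (pure nugget) indicates no spatial structure; for the disturbed landscapes in the study, the rise of gamma with lag up to a sill is what carries the information about spatial variability.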

  3. RANS Simulations using OpenFOAM Software

    DTIC Science & Technology

    2016-01-01

    The purpose of this report is to illustrate and test the use of the steady-state Reynolds-Averaged Navier-Stokes (RANS) simulations ... described and illustrated by applying the simpleFoam solver to two case studies; two-dimensional flow ... to run in parallel over large processor arrays ... Group in the Maritime Platforms Division he has been simulating fluid flow around ships and submarines using finite element codes, Lagrangian vortex

  4. Quantifying Turbulent Kinetic Energy in an Aortic Coarctation with Large Eddy Simulation and Magnetic Resonance Imaging

    NASA Astrophysics Data System (ADS)

    Lantz, Jonas; Ebbers, Tino; Karlsson, Matts

    2012-11-01

In this study, turbulent kinetic energy (TKE) in an aortic coarctation was studied using both a numerical technique (large eddy simulation, LES) and in vivo measurements using magnetic resonance imaging (MRI). High levels of TKE are undesirable, as kinetic energy is extracted from the mean flow to feed the turbulent fluctuations. The patient underwent surgery to widen the coarctation, and the flow before and after surgery was computed and compared to MRI measurements. The resolution of the MRI was about 7 × 7 voxels in the axial cross-section, while 50 × 50 mesh cells with increased resolution near the walls were used in the LES simulation. In general, the numerical simulations and MRI measurements showed that the aortic arch had no or very low levels of TKE, while elevated values were found downstream of the coarctation. It was also found that TKE levels after surgery were lowered, indicating that the diameter of the constriction was increased enough to reduce turbulence effects. In conclusion, both the numerical simulation and the MRI measurements gave very similar results, thereby validating the simulations and suggesting that MRI-measured TKE can be used as an initial estimation in clinical practice, while LES results can be used for detailed quantification and further research on aortic flows.
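Turbulent kinetic energy as used here is the standard Reynolds-decomposition quantity k = 0.5*(u'^2 + v'^2 + w'^2), with primes denoting fluctuations about the mean. A minimal sketch on synthetic velocity series (the flow values are hypothetical, not from the patient data):

```python
import numpy as np

def turbulent_kinetic_energy(u, v, w):
    """TKE per unit mass, k = 0.5*(u'^2 + v'^2 + w'^2), from velocity
    time series via Reynolds decomposition (mean removed per component)."""
    k = 0.0
    for comp in (u, v, w):
        fluct = comp - comp.mean()
        k += 0.5 * np.mean(fluct ** 2)
    return k

rng = np.random.default_rng(3)
u = 1.0 + 0.1 * rng.standard_normal(100_000)  # mean flow + fluctuations
v = 0.1 * rng.standard_normal(100_000)
w = 0.1 * rng.standard_normal(100_000)
k = turbulent_kinetic_energy(u, v, w)         # ~ 0.5 * 3 * 0.01 = 0.015
```

Note that k depends only on the fluctuating part, which is why a widened coarctation that weakens the post-stenotic jet lowers TKE even if the mean flow rate is unchanged.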

  5. The Reactivation of Motion influences Size Categorization in a Visuo-Haptic Illusion.

    PubMed

    Rey, Amandine E; Dabic, Stephanie; Versace, Remy; Navarro, Jordan

    2016-09-01

People simulate themselves moving when they view a picture, read a sentence, or simulate a situation that involves motion. The simulation of motion has often been studied in conceptual tasks such as language comprehension. However, most of these studies investigated the direct influence of motion simulation on tasks inducing motion. This article investigates whether a motion induced by the reactivation of a dynamic picture can influence a task that does not require motion processing. In a first phase, a dynamic picture and a static picture were systematically presented with a vibrotactile stimulus (high or low frequency). The second phase of the experiment used a priming paradigm in which a vibrotactile stimulus was presented alone and followed by pictures of objects. Participants had to categorize objects as large or small relative to their typical size (simulated size). Results showed that when the target object was preceded by the vibrotactile stimulus previously associated with the dynamic picture, participants perceived all the objects as larger and categorized them more quickly when the objects were typically "large" and more slowly when the objects were typically "small." In light of embodied cognition theories, this bias in participants' perception is assumed to be caused by an induced forward motion, generated by the reactivated dynamic picture, which affects simulation of the size of the objects.

  6. A study on large-scale nudging effects in regional climate model simulation

    NASA Astrophysics Data System (ADS)

    Yhang, Yoo-Bin; Hong, Song-You

    2011-05-01

The large-scale nudging effects on the East Asian summer monsoon (EASM) are examined using the National Centers for Environmental Prediction (NCEP) Regional Spectral Model (RSM). The NCEP/DOE reanalysis data are used to provide large-scale forcings for the RSM simulations, configured on an approximately 50-km grid over East Asia, centered on the Korean peninsula. The RSM with a variant of spectral nudging, the scale-selective bias correction (SSBC), is forced by perfect boundary conditions during the summers (June-July-August) from 1979 to 2004. The two summers of 2000 and 2004 are investigated to demonstrate the impact of SSBC on precipitation in detail. It is found that the effect of SSBC on the simulated seasonal precipitation is in general neutral, without a discernible advantage. Although errors in the large-scale circulation for both 2000 and 2004 are reduced by the SSBC method, its impact on simulated precipitation is negative in the summer of 2000 and positive in 2004. One possible reason for the differing effects is that precipitation in the summer of 2004 was characterized by strong baroclinicity, while precipitation in 2000 was caused by thermodynamic instability. The reduction of convective rainfall over the oceans by the application of the SSBC method seems to play an important role in the modeled atmosphere.
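Spectral nudging of the kind SSBC implements can be illustrated in one dimension: only the largest scales of the regional model field are relaxed toward the driving (reanalysis) field, leaving the smaller scales free to develop. The sketch below is a generic illustration of that idea under assumed names (`spectral_nudge`, `n_keep`, `alpha`), not the RSM/SSBC implementation:

```python
import numpy as np

def spectral_nudge(model_field, driving_field, n_keep, alpha=1.0):
    """Blend only the wavenumbers |k| <= n_keep of a periodic 1-D model
    field toward the driving field; higher wavenumbers are untouched."""
    fm = np.fft.rfft(model_field)
    fd = np.fft.rfft(driving_field)
    low = np.arange(fm.size) <= n_keep
    fm[low] = (1 - alpha) * fm[low] + alpha * fd[low]
    return np.fft.irfft(fm, n=model_field.size)

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
driving = np.sin(x)                           # large-scale driving signal
model = 0.5 * np.sin(x) + 0.3 * np.sin(20 * x)  # drifted large scale + detail
nudged = spectral_nudge(model, driving, n_keep=5)
```

After nudging, the wavenumber-1 component is corrected to the driving amplitude while the wavenumber-20 detail survives intact, which is exactly the "constrain the large scales, free the small scales" behavior the record evaluates.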

  7. Large-eddy simulation of plume dispersion within regular arrays of cubic buildings

    NASA Astrophysics Data System (ADS)

    Nakayama, H.; Jurcakova, K.; Nagai, H.

    2011-04-01

Hazardous and flammable materials may be accidentally or intentionally released within populated urban areas. For the assessment of human health hazards from toxic substances, the existence of high concentration peaks in a plume should be considered. For the safety analysis of flammable gas, certain critical threshold levels should be evaluated. In such situations, therefore, not only average levels but also instantaneous magnitudes of concentration should be accurately predicted. In this study, we perform large-eddy simulation (LES) of plume dispersion within regular arrays of cubic buildings with large obstacle densities and investigate the influence of the building arrangement on the characteristics of mean and fluctuating concentrations.

  8. Simulation framework for intelligent transportation systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ewing, T.; Doss, E.; Hanebutte, U.

    1996-10-01

A simulation framework has been developed for a large-scale, comprehensive, scaleable simulation of an Intelligent Transportation System (ITS). The simulator is designed for running on parallel computers and distributed (networked) computer systems, but can run on standalone workstations for smaller simulations. The simulator currently models instrumented smart vehicles with in-vehicle navigation units capable of optimal route planning and Traffic Management Centers (TMC). The TMC has probe vehicle tracking capabilities (displaying the position and attributes of instrumented vehicles), and can provide two-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces to support human-factors studies. Realistic modeling of variations of the posted driving speed is based on human-factors studies that take into consideration weather, road conditions, driver personality and behavior, and vehicle type. The prototype has been developed on a distributed system of networked UNIX computers but is designed to run on parallel computers, such as ANL's IBM SP-2, for large-scale problems. A novel feature of the approach is that vehicles are represented by autonomous computer processes which exchange messages with other processes. The vehicles have a behavior model which governs route selection and driving behavior, and can react to external traffic events much like real vehicles. With this approach, the simulation is scaleable to take advantage of emerging massively parallel processor (MPP) systems.

  9. Observation and Simulations of the Backsplash Effects in High-Energy Gamma-Ray Telescopes Containing a Massive Calorimeter

    NASA Technical Reports Server (NTRS)

    Moiseev, Alexander A.; Ormes, Jonathan F.; Hartman, Robert C.; Johnson, Thomas E.; Mitchell, John W.; Thompson, David J.

    1999-01-01

    Beam test and simulation results are presented for a study of the backsplash effects produced in a high-energy gamma-ray detector containing a massive calorimeter. An empirical formula is developed to estimate the probability (per unit area) of backsplash for different calorimeter materials and thicknesses, different incident particle energies, and at different distances from the calorimeter. The results obtained are applied to the design of Anti-Coincidence Detector (ACD) for the Large Area Telescope (LAT) on the Gamma-ray Large Area Space Telescope (GLAST).

  10. Freak waves in random oceanic sea states.

    PubMed

    Onorato, M; Osborne, A R; Serio, M; Bertone, S

    2001-06-18

    Freak waves are very large, rare events in a random ocean wave train. Here we study their generation in a random sea state characterized by the Joint North Sea Wave Project spectrum. We assume, to cubic order in nonlinearity, that the wave dynamics are governed by the nonlinear Schrödinger (NLS) equation. We show from extensive numerical simulations of the NLS equation how freak waves in a random sea state are more likely to occur for large values of the Phillips parameter alpha and the enhancement coefficient gamma. Comparison with linear simulations is also reported.
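The simulations described here integrate the NLS equation numerically; a standard way to do so is the split-step Fourier method, alternating an exact linear step in spectral space with a pointwise nonlinear phase rotation. The sketch below uses the nondimensional focusing form i u_t + u_xx + 2|u|^2 u = 0, a common normalization that is not necessarily the exact scaling used in the paper:

```python
import numpy as np

def nls_split_step(u0, L, dt, n_steps):
    """Integrate the focusing NLS  i u_t + u_xx + 2|u|^2 u = 0  on a
    periodic domain of length L with the split-step Fourier method."""
    n = u0.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    u = u0.astype(complex)
    lin = np.exp(-1j * k**2 * dt)   # exact linear step in Fourier space
    for _ in range(n_steps):
        u = np.fft.ifft(lin * np.fft.fft(u))
        u = u * np.exp(2j * np.abs(u)**2 * dt)  # nonlinear phase rotation
    return u

n, L = 256, 40.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
u0 = 1.0 / np.cosh(x)   # exact soliton of this normalization: |u| stays sech(x)
u = nls_split_step(u0, L, dt=1e-3, n_steps=1000)
```

Both substeps conserve the wave energy sum(|u|^2) by construction, a useful sanity check; freak-wave studies like this one run such integrations from random JONSWAP-spectrum initial conditions rather than a soliton.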

  11. Durham extremely large telescope adaptive optics simulation platform.

    PubMed

    Basden, Alastair; Butterley, Timothy; Myers, Richard; Wilson, Richard

    2007-03-01

    Adaptive optics systems are essential on all large telescopes for which image quality is important. These are complex systems with many design parameters requiring optimization before good performance can be achieved. The simulation of adaptive optics systems is therefore necessary to categorize the expected performance. We describe an adaptive optics simulation platform, developed at Durham University, which can be used to simulate adaptive optics systems on the largest proposed future extremely large telescopes as well as on current systems. This platform is modular, object oriented, and has the benefit of hardware application acceleration that can be used to improve the simulation performance, essential for ensuring that the run time of a given simulation is acceptable. The simulation platform described here can be highly parallelized using parallelization techniques suited for adaptive optics simulation, while still offering the user complete control while the simulation is running. The results from the simulation of a ground layer adaptive optics system are provided as an example to demonstrate the flexibility of this simulation platform.

  12. Exact-Differential Large-Scale Traffic Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanai, Masatoshi; Suzumura, Toyotaro; Theodoropoulos, Georgios

    2015-01-01

    Analyzing large-scale traffic by simulation requires repeated execution with various patterns of scenarios or parameters. Such repetition introduces substantial redundancy, because the change from one scenario to the next is usually minor (for example, blocking a single road or changing the speed limit on a few roads). In this paper, we propose a new redundancy-reduction technique, called exact-differential simulation, which simulates only the changed portions of later scenarios while producing exactly the same results as a full simulation. The paper consists of two main efforts: (i) the key idea and algorithm of exact-differential simulation, and (ii) a method to build large-scale traffic simulation on top of it. In experiments with a Tokyo traffic simulation, exact-differential simulation improves elapsed time over whole simulation by a factor of 7.26 on average, and by a factor of 2.26 even in the worst case.
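The exact-differential idea, reusing a base run and recomputing only what a scenario change affects while guaranteeing results identical to a full re-run, can be caricatured with a toy "simulation": cumulative travel time along a chain of road segments. This stand-in is ours, not the paper's traffic model.

```python
import numpy as np

# Toy exact-differential re-simulation: a change to segment `idx` leaves
# everything upstream untouched, so only downstream values are updated,
# and the result must equal a full re-run bit for bit.

def full_run(times):
    return np.cumsum(times)          # "simulation": arrival time per segment

def differential_run(base_times, base_result, idx, new_time):
    times = base_times.copy()
    times[idx] = new_time
    out = base_result.copy()         # reuse the unaffected prefix as-is
    out[idx:] += new_time - base_times[idx]   # patch only downstream values
    return times, out

base = np.array([3.0, 5.0, 2.0, 4.0, 6.0])
base_result = full_run(base)
changed, diff_result = differential_run(base, base_result, 2, 9.0)
print(np.array_equal(diff_result, full_run(changed)))  # exact agreement
```

The point of the exercise is the equality check at the end: the differential run does less work but is not an approximation.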

  13. Large-scale coherent structures of suspended dust concentration in the neutral atmospheric surface layer: A large-eddy simulation study

    NASA Astrophysics Data System (ADS)

    Zhang, Yangyue; Hu, Ruifeng; Zheng, Xiaojing

    2018-04-01

    Dust particles can remain suspended in the atmospheric boundary layer, where their motion is primarily determined by turbulent diffusion and gravitational settling. Little is known about the spatial organization of suspended dust concentration and how turbulent coherent motions contribute to the vertical transport of dust particles. Numerous studies in recent years have revealed that the large- and very-large-scale motions found in the logarithmic region of laboratory-scale turbulent boundary layers also exist in the high-Reynolds-number atmospheric boundary layer, but their influence on dust transport is still unclear. In this study, numerical simulations of dust transport in a neutral atmospheric boundary layer, based on an Eulerian modeling approach and the large-eddy simulation technique, are performed to investigate the coherent structures of dust concentration. The instantaneous fields confirm the existence of very long meandering streaks of dust concentration, with alternating high- and low-concentration regions. A strong negative correlation between the streamwise velocity and concentration and a mild positive correlation between the vertical velocity and concentration are observed. The spatial length scales and inclination angles of the concentration structures are determined and compared with their flow counterparts. The conditionally averaged fields vividly depict high- and low-concentration events accompanied by a pair of counter-rotating quasi-streamwise vortices, with a downwash inside the low-concentration region and an upwash inside the high-concentration region. Quadrant analysis indicates that the vertical dust transport is closely related to the large-scale roll modes, and that ejections in high-concentration regions are the major mechanism for the upward motion of dust particles.
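The quadrant analysis named above partitions the instantaneous vertical flux w'c' by the signs of the fluctuations, so that ejection- and sweep-like events can be credited separately. The sketch below applies it to synthetic surrogate series with the mild positive w-c correlation the study reports; these are not LES data.

```python
import numpy as np

# Quadrant analysis of the vertical turbulent dust flux w'c'.

rng = np.random.default_rng(0)
n = 100_000
w = rng.standard_normal(n)               # vertical velocity fluctuations w'
c = 0.4 * w + rng.standard_normal(n)     # concentration, mildly correlated

wp, cp = w - w.mean(), c - c.mean()
flux = wp * cp                           # instantaneous vertical flux

masks = {
    "Q1": (wp > 0) & (cp > 0),   # upward motion, high concentration
    "Q2": (wp < 0) & (cp > 0),
    "Q3": (wp < 0) & (cp < 0),   # downwash in low-concentration air
    "Q4": (wp > 0) & (cp < 0),
}
contrib = {q: flux[m].sum() / flux.sum() for q, m in masks.items()}
for q in masks:
    print(q, f"{contrib[q]:+.2f}")       # fractional contribution to w'c'
```

With a positive w-c correlation, Q1 and Q3 dominate the net upward flux while Q2 and Q4 oppose it, mirroring the role of ejections described in the abstract.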

  14. Stochastic Earthquake Rupture Modeling Using Nonparametric Co-Regionalization

    NASA Astrophysics Data System (ADS)

    Lee, Kyungbook; Song, Seok Goo

    2017-09-01

    Accurate predictions of the intensity and variability of ground motions are essential in simulation-based seismic hazard assessment. Advanced simulation-based ground motion prediction methods have been proposed to complement the empirical approach, which suffers from the lack of observed ground motion data, especially in the near-source region for large events. It is important to quantify the variability of the earthquake rupture process for future events and to produce a number of rupture scenario models to capture the variability in simulation-based ground motion predictions. In this study, we improved the previously developed stochastic earthquake rupture modeling method by applying nonparametric co-regionalization, proposed in geostatistics, to the correlation models estimated from dynamically derived earthquake rupture models. The nonparametric approach adopted in this study is computationally efficient and therefore enables us to simulate numerous rupture scenarios, including large events (M > 7.0). It also gives us an opportunity to check the shape of the true input correlation models after they are deformed for permissibility in stochastic modeling. We expect that this type of modeling will improve our ability to simulate a wide range of rupture scenario models and thereby predict ground motions and perform seismic hazard assessment more accurately.

  15. A large-eddy simulation study of wake propagation and power production in an array of tidal-current turbines.

    PubMed

    Churchfield, Matthew J; Li, Ye; Moriarty, Patrick J

    2013-02-28

    This paper presents our initial work in performing large-eddy simulations of tidal turbine array flows. First, a horizontally periodic precursor simulation is performed to create turbulent flow data. Then those data are used as inflow into a tidal turbine array two rows deep and infinitely wide. The turbines are modelled using rotating actuator lines, and the finite-volume method is used to solve the governing equations. In studying the wakes created by the turbines, we observed that the vertical shear of the inflow combined with wake rotation causes lateral wake asymmetry. Also, various turbine configurations are simulated, and the total power production relative to isolated turbines is examined. We found that staggering consecutive rows of turbines in the simulated configurations allows the greatest efficiency using the least downstream row spacing. Counter-rotating consecutive downstream turbines in a non-staggered array shows a small benefit. This work has identified areas for improvement. For example, using a larger precursor domain would better capture elongated turbulent structures, and including salinity and temperature equations would account for density stratification and its effect on turbulence. Additionally, the wall shear stress modelling could be improved, and more array configurations could be examined.
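The staggered-versus-aligned power finding can be given a back-of-envelope counterpart with the classic Jensen (top-hat) wake model, a textbook engineering model and not the actuator-line LES of the paper. All numbers below (diameter, thrust coefficient, spacing, wake-decay constant) are illustrative assumptions, and the staggered row is assumed to sit entirely outside the upstream wakes.

```python
import numpy as np

# Jensen wake model: fractional velocity deficit a distance x downstream
# of a rotor of diameter D with thrust coefficient Ct and decay constant k.

def jensen_deficit(ct, diameter, x, k=0.05):
    return (1 - np.sqrt(1 - ct)) * (diameter / (diameter + 2 * k * x))**2

D, Ct = 20.0, 0.8          # rotor diameter (m), thrust coefficient
x = 5 * D                  # second row five diameters downstream
deficit = jensen_deficit(Ct, D, x)

power_aligned = (1 - deficit)**3   # power scales with the cube of speed
power_staggered = 1.0              # offset row outside the wake (assumed)

print(f"aligned second row:   {power_aligned:.2f} of freestream power")
print(f"staggered second row: {power_staggered:.2f}")
```

Even this crude model reproduces the qualitative conclusion: an aligned second row loses a substantial fraction of its power to the upstream wakes, while staggering recovers it.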

  16. NUMERICAL SIMULATION OF NANOINDENTATION AND PATCH CLAMP EXPERIMENTS ON MECHANOSENSITIVE CHANNELS OF LARGE CONDUCTANCE IN ESCHERICHIA COLI

    PubMed Central

    Tang, Yuye; Chen, Xi; Yoo, Jejoong; Yethiraj, Arun; Cui, Qiang

    2010-01-01

    A hierarchical simulation framework that integrates information from all-atom simulations into a finite element model at the continuum level is established to study the mechanical response of the mechanosensitive channel of large conductance (MscL) of the bacterium Escherichia coli (E. coli) embedded in a vesicle formed by a dipalmitoylphosphatidylcholine (DPPC) lipid bilayer. Sufficient structural details of the protein are built into the continuum model, with key parameters and material properties derived from molecular mechanics simulations. The multi-scale framework is used to analyze the gating of MscL when the lipid vesicle is subjected to nanoindentation and patch clamp experiments, and the detailed structural transitions of the protein are obtained explicitly as a function of external load; it is currently impossible to derive such information from all-atom simulations alone. The gating pathways of E. coli MscL qualitatively agree with results from previous patch clamp experiments. The gating mechanisms under complex indentation-induced deformation are also predicted. This versatile hierarchical multi-scale framework may be further extended to study the mechanical behaviors of cells and biomolecules, as well as to guide and stimulate biomechanics experiments. PMID:21874098

  17. Computer Science Techniques Applied to Parallel Atomistic Simulation

    NASA Astrophysics Data System (ADS)

    Nakano, Aiichiro

    1998-03-01

    Recent developments in parallel processing technology and multiresolution numerical algorithms have established large-scale molecular dynamics (MD) simulations as a new research mode for studying materials phenomena such as fracture. However, this requires large system sizes and long simulated times. We have developed: i) Space-time multiresolution schemes; ii) fuzzy-clustering approach to hierarchical dynamics; iii) wavelet-based adaptive curvilinear-coordinate load balancing; iv) multilevel preconditioned conjugate gradient method; and v) spacefilling-curve-based data compression for parallel I/O. Using these techniques, million-atom parallel MD simulations are performed for the oxidation dynamics of nanocrystalline Al. The simulations take into account the effect of dynamic charge transfer between Al and O using the electronegativity equalization scheme. The resulting long-range Coulomb interaction is calculated efficiently with the fast multipole method. Results for temperature and charge distributions, residual stresses, bond lengths and bond angles, and diffusivities of Al and O will be presented. The oxidation of nanocrystalline Al is elucidated through immersive visualization in virtual environments. A unique dual-degree education program at Louisiana State University will also be discussed in which students can obtain a Ph.D. in Physics & Astronomy and a M.S. from the Department of Computer Science in five years. This program fosters interdisciplinary research activities for interfacing High Performance Computing and Communications with large-scale atomistic simulations of advanced materials. This work was supported by NSF (CAREER Program), ARO, PRF, and Louisiana LEQSF.

  18. Large-scale three-dimensional phase-field simulations for phase coarsening at ultrahigh volume fraction on high-performance architectures

    NASA Astrophysics Data System (ADS)

    Yan, Hui; Wang, K. G.; Jones, Jim E.

    2016-06-01

    A parallel algorithm for large-scale three-dimensional phase-field simulations of phase coarsening is developed and implemented on high-performance architectures. From the large-scale simulations, a new coarsening kinetics is found in the region of ultrahigh volume fraction. The parallel implementation is capable of harnessing the greater computing power available from high-performance architectures. The parallelized code enables an increase in the three-dimensional simulation system size up to a 512^3 grid cube. With the parallelized code, practical runtimes can be achieved for three-dimensional large-scale simulations, and the statistical significance of the results from these high-resolution parallel simulations is greatly improved over that obtainable from serial simulations. A detailed performance analysis of speed-up and scalability is presented, showing good scalability that improves with increasing problem size. In addition, a model for predicting runtime is developed, which shows good agreement with actual run times from numerical tests.
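The abstract mentions a runtime-prediction model fitted against measured runs. One common minimal form is an Amdahl-style fit T(p) = a + b/p to timings at several core counts; the "measured" timings below are synthetic stand-ins, not data from the paper.

```python
import numpy as np

# Fit T(p) = a + b/p (serial overhead + perfectly parallel work) by
# linear least squares, then use the fit to predict a larger run.

procs = np.array([8, 16, 32, 64, 128])
times = 5.0 + 2000.0 / procs + np.random.default_rng(1).normal(0, 1.0, 5)

A = np.column_stack([np.ones(len(procs)), 1.0 / procs])
(a, b), *_ = np.linalg.lstsq(A, times, rcond=None)

predict = lambda p: a + b / p
print(f"serial part ~ {a:.1f}s, parallel work ~ {b:.0f}s")
print(f"predicted time on 256 cores: {predict(256):.1f}s")
```

A fit like this also exposes the scalability limit: as p grows, T(p) flattens toward the serial term a, matching the "scalability improves with problem size" observation (larger problems raise b relative to a).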

  19. A numerical study on dust devils with implications to global dust budget estimates

    USDA-ARS?s Scientific Manuscript database

    The estimates of the contribution of dust devils (DDs) to the global dust budget have large uncertainties because the dust emission mechanisms in DDs are not yet well understood. In this study, a large-eddy simulation model coupled with a dust scheme is used to investigate DD dust entrainment. DDs a...

  20. Anticipation of the landing shock phenomenon in flight simulation

    NASA Technical Reports Server (NTRS)

    Mcfarland, Richard E.

    1987-01-01

    An aircraft landing may be described as a controlled crash because a runway surface is intercepted. In a simulation model, the transition from aerodynamic flight to weight on wheels involves a single computational cycle during which stiff differential equations are activated; with significant probability, the resulting initial conditions are unrealistic. This occurs because of the finite cycle time, during which large restorative forces accompany unrealistic initial oleo compressions. This problem was recognized a few years ago at Ames Research Center during simulation studies of a supersonic transport. The mathematical model of this vehicle severely taxed computational resources and required a large cycle time. The ground strike problem was solved by a technique, described here, called anticipation equations. This extensively used technique has not been previously reported. The technique of anticipating a significant event is a useful tool in the general field of discrete flight simulation. For the differential equations representing a landing gear model, stiffness, rate of interception, and cycle time may combine to produce an unrealistic simulation of the continuum.
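The anticipation idea can be sketched in a few lines: before each fixed cycle, predict whether the gear will cross the runway during the coming cycle and, if so, place the state exactly on the surface at the interpolated contact time instead of starting the stiff oleo equations from a deep, unphysical penetration. Names and numbers here are illustrative, not from the report.

```python
# Discrete-time descent with touchdown anticipation.

def step_with_anticipation(h, v, dt):
    """Advance gear height h (m) at descent rate v (m/s, negative down)."""
    h_next = h + v * dt
    if h > 0.0 >= h_next:        # runway strike anticipated this cycle
        t_contact = h / -v       # exact intercept time within the cycle
        return 0.0, t_contact    # start contact at zero oleo compression
    return h_next, None

h, v, dt = 0.5, -3.0, 0.05       # 50 ms cycle, 3 m/s sink rate
t = 0.0
while True:
    h, t_contact = step_with_anticipation(h, v, dt)
    if t_contact is not None:
        break
    t += dt
print(f"touchdown at t = {t + t_contact:.4f} s, oleo compression = {h} m")
```

Without the anticipation branch, the same loop would hand the oleo model an initial compression of 0.10 m (one cycle's worth of penetration), producing exactly the spurious restorative forces the abstract describes.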

  1. Feasibility and concept study to convert the NASA/AMES vertical motion simulator to a helicopter simulator

    NASA Technical Reports Server (NTRS)

    Belsterling, C. A.; Chou, R. C.; Davies, E. G.; Tsui, K. C.

    1978-01-01

    The conceptual design for converting the vertical motion simulator (VMS) to a multi-purpose aircraft and helicopter simulator is presented. A unique, high performance four degrees of freedom (DOF) motion system was developed to permanently replace the present six DOF synergistic system. The new four DOF system has the following outstanding features: (1) will integrate with the two large VMS translational modes and their associated subsystems; (2) can be converted from helicopter to fixed-wing aircraft simulation through software changes only; (3) interfaces with an advanced cab/visual display system of large dimensions; (4) makes maximum use of proven techniques, convenient materials and off-the-shelf components; (5) will operate within the existing building envelope without modifications; (6) can be built within the specified weight limit and avoid compromising VMS performance; (7) provides maximum performance with a minimum of power consumption; (8) simple design minimizes coupling between motions and maximizes reliability; and (9) can be built within existing budgetary figures.

  2. The void spectrum in two-dimensional numerical simulations of gravitational clustering

    NASA Technical Reports Server (NTRS)

    Kauffmann, Guinevere; Melott, Adrian L.

    1992-01-01

    An algorithm for deriving a spectrum of void sizes from two-dimensional high-resolution numerical simulations of gravitational clustering is tested, and it is verified that it produces the correct results where those results can be anticipated. The method is used to study the growth of voids as clustering proceeds. It is found that the most stable indicator of the characteristic void 'size' in the simulations is the mean fractional area covered by voids of diameter d, in a density field smoothed at its correlation length. Very accurate scaling behavior is found in power-law numerical models as they evolve. Eventually, this scaling breaks down as the nonlinearity reaches larger scales. It is shown that this breakdown is a manifestation of the undesirable effect of boundary conditions on simulations, even with the very large dynamic range possible here. A simple criterion is suggested for deciding when simulations with modest large-scale power may systematically underestimate the frequency of larger voids.
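The void-spectrum measurement described above (smooth the density field, mark underdense regions, tabulate the area covered by voids of each size) can be caricatured in one dimension, where voids are just runs of consecutive underdense cells. The field, smoothing scale, and threshold below are illustrative; the paper works with 2-D simulation outputs smoothed at the correlation length.

```python
import numpy as np

# 1-D cartoon of a void-size spectrum via run-length encoding.

rng = np.random.default_rng(2)
n = 4096
# white noise smoothed by a Gaussian low-pass filter in Fourier space
field = np.real(np.fft.ifft(np.fft.fft(rng.standard_normal(n)) *
                            np.exp(-np.fft.fftfreq(n)**2 / (2 * 0.005**2))))

under = field < field.mean()          # underdense cells define voids
edges = np.flatnonzero(np.diff(under.astype(int)))
runs = np.diff(np.concatenate(([-1], edges, [n - 1])))
sizes = runs[::2] if under[0] else runs[1::2]   # lengths of underdense runs

sizes_unique, counts = np.unique(sizes, return_counts=True)
frac_area = sizes_unique * counts / n  # fractional length per void size
print("number of voids:", len(sizes))
print("total void fraction:", round(float(frac_area.sum()), 3))
```

The quantity the paper finds most stable, the mean fractional area covered by voids of a given diameter, corresponds to `frac_area` here, with the run length standing in for the diameter.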

  3. Numerical study of wind over breaking waves and generation of spume droplets

    NASA Astrophysics Data System (ADS)

    Yang, Zixuan; Tang, Shuai; Dong, Yu-Hong; Shen, Lian

    2017-11-01

    We present direct numerical simulation (DNS) results on wind over breaking waves. The air and water are simulated as a coherent system. The air-water interface is captured using a coupled level-set and volume-of-fluid method. The initial condition for the simulation is fully-developed wind turbulence over strongly-forced steep waves. Because wave breaking is an unsteady process, we use ensemble averaging of a large number of runs to obtain turbulence statistics. The generation and transport of spume droplets during wave breaking is also simulated. The trajectories of sea spray droplets are tracked using a Lagrangian particle tracking method. The generation of droplets is captured using a kinematic criterion based on the relative velocity of fluid particles of water with respect to the wave phase speed. From the simulation, we observe that the wave plunging generates a large vortex in air, which makes an important contribution to the suspension of sea spray droplets.

  4. Large-eddy and unsteady RANS simulations of a shock-accelerated heavy gas cylinder

    DOE PAGES

    Morgan, B. E.; Greenough, J. A.

    2015-04-08

    Two-dimensional numerical simulations of the Richtmyer–Meshkov unstable “shock-jet” problem are conducted using both large-eddy simulation (LES) and unsteady Reynolds-averaged Navier–Stokes (URANS) approaches in an arbitrary Lagrangian–Eulerian hydrodynamics code. Turbulence statistics are extracted from LES by running an ensemble of simulations with multimode perturbations to the initial conditions. Detailed grid convergence studies are conducted, and LES results are found to agree well with both experiment and the high-order simulations conducted by Shankar et al. (Phys Fluids 23, 024102, 2011). URANS results using a k–L approach are found to be highly sensitive to the initialization of the turbulence length scale L and to the time at which L becomes resolved on the computational mesh. As a result, it is observed that a gradient diffusion closure for the turbulent species flux is a poor approximation at early times, and a new closure based on the mass-flux velocity is proposed for low-Reynolds-number mixing.

  5. Hybrid Reynolds-Averaged/Large Eddy Simulation of the Flow in a Model SCRamjet Cavity Flameholder

    NASA Technical Reports Server (NTRS)

    Baurle, R. A.

    2016-01-01

    Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. Experimental data available for this configuration include velocity statistics obtained from particle image velocimetry. Several turbulence models were used for the steady-state Reynolds-averaged simulations, including both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged/large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer; hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken not only to assess the performance of the hybrid Reynolds-averaged/large eddy simulation modeling approach in a flowfield of interest to the scramjet research community, but also to begin to understand how this capability can best be used to augment standard Reynolds-averaged simulations. The numerical errors were quantified for the steady-state simulations, and at least qualitatively assessed for the scale-resolving simulations, prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results displayed a high degree of variability when comparing the flameholder fuel distributions obtained from each turbulence model. This prompted the consideration of applying the higher-fidelity scale-resolving simulations as a surrogate "truth" model to calibrate the Reynolds-averaged closures in a non-reacting setting prior to their use for the combusting simulations. In general, the Reynolds-averaged velocity profile predictions at the lowest fueling level matched the particle imaging measurements almost as well as was observed for the non-reacting condition. However, the velocity field predictions proved to be more sensitive to the flameholder fueling rate than was indicated in the measurements.

  6. Using the Large Fire Simulator System to map wildland fire potential for the conterminous United States

    Treesearch

    LaWen Hollingsworth; James Menakis

    2010-01-01

    This project mapped wildland fire potential (WFP) for the conterminous United States using the large fire simulation system developed for the Fire Program Analysis (FPA) System. The large fire simulation system, referred to here as LFSim, consists of modules for weather generation, fire occurrence, fire suppression, and fire growth modeling. Weather was generated with...

  7. Numerical studies of various Néel-VBS transitions in SU(N) anti-ferromagnets

    NASA Astrophysics Data System (ADS)

    Kaul, Ribhu K.; Block, Matthew S.

    2015-09-01

    In this manuscript we review recent developments in the numerical simulation of bipartite SU(N) spin models by quantum Monte Carlo (QMC) methods. We provide an account of a large family of newly discovered sign-problem-free spin models which can be simulated in their ground states on large lattices, containing O(10^5) spins, using the stochastic series expansion method with efficient loop algorithms. One of the most important applications of these Hamiltonians so far is to unbiased studies of quantum criticality between Néel and valence bond phases in two dimensions; a summary of this body of work is provided. The article concludes with an overview of the current status of and outlook for future studies of the “designer” Hamiltonians.

  8. Impact characteristics for high-pressure large-flow water-based emulsion pilot operated check valve reverse opening

    NASA Astrophysics Data System (ADS)

    Liu, Lei; Huang, Chuanhui; Yu, Ping; Zhang, Lei

    2017-10-01

    To improve the dynamic and cavitation characteristics of a large-flow pilot operated check valve, the pilot poppet is taken as the research object: its working principle is analyzed and three different pilot poppet designs are considered. Their vibration and impact characteristics are analyzed. A simulation model is established in flow-field simulation software, and the cavitation characteristics of the large-flow pilot operated check valve are studied and discussed. On this basis, a high-pressure large-flow impact experimental system is used for impact experiments, and the cavitation index is discussed to identify the optimal structure. Simulation results indicate that increasing the pilot poppet half-cone angle effectively reduces the cavitation area, reducing the generation of cavitation. Experimental results show that the pressure impact does not decrease with increasing pilot poppet half-cone angle during unloading, but the unloading capacity and response speed are positively correlated with the half-cone angle. The 60° pilot poppet shows favorable impact characteristics and a smaller cavitation index, indicating that it is the optimal structure; theory and experiment are in basic agreement.

  9. Large Eddy Simulations (LES) and Direct Numerical Simulations (DNS) for the computational analyses of high speed reacting flows

    NASA Technical Reports Server (NTRS)

    Givi, Peyman; Madnia, Cyrus K.; Steinberger, C. J.; Frankel, S. H.

    1992-01-01

    The principal objective is to extend the boundaries within which large eddy simulations (LES) and direct numerical simulations (DNS) can be applied in computational analyses of high speed reacting flows. A summary of work accomplished during the last six months is presented.

  10. A Large number of fast cosmological simulations

    NASA Astrophysics Data System (ADS)

    Koda, Jun; Kazin, E.; Blake, C.

    2014-01-01

    Mock galaxy catalogs are essential tools for analyzing large-scale structure data. Many independent realizations of mock catalogs are necessary to evaluate the uncertainties in the measurements. We perform 3600 cosmological simulations for the WiggleZ Dark Energy Survey to obtain new, improved Baryon Acoustic Oscillation (BAO) cosmic distance measurements using the density-field "reconstruction" technique. We use 1296^3 particles in a periodic box of 600/h Mpc on a side, which is the minimum requirement set by the survey volume and observed galaxies. In order to perform such a large number of simulations, we developed a parallel code using the COmoving Lagrangian Acceleration (COLA) method, which can simulate cosmological large-scale structure reasonably well with only 10 time steps. Our simulation is more than 100 times faster than conventional N-body simulations; one COLA simulation takes only 15 minutes on 216 computing cores. We have completed the 3600 simulations with a reasonable computation time of 200k core hours. We also present the results of the revised WiggleZ BAO distance measurement, which are significantly improved by the reconstruction technique.
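The quoted computing budget is easy to sanity-check from the figures in the abstract: 3600 runs at about 15 minutes each on 216 cores.

```python
# Core-hour budget implied by the abstract's numbers.
runs, minutes_per_run, cores = 3600, 15, 216
core_hours = runs * (minutes_per_run / 60) * cores
print(f"{core_hours:,.0f} core hours")   # consistent with the ~200k quoted
```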

  11. Simulating Forest Carbon Dynamics in Response to Large-scale Fuel Reduction Treatments Under Projected Climate-fire Interactions in the Sierra Nevada Mountains, USA

    NASA Astrophysics Data System (ADS)

    Liang, S.; Hurteau, M. D.

    2016-12-01

    The interaction of warmer, drier climate and increasing large wildfires, coupled with increasing fire severity resulting from fire exclusion, is anticipated to undermine forest carbon (C) stock stability and C sink strength in Sierra Nevada forests. Treatments to reduce biomass and restore forest structure, including thinning and prescribed burning, have proven effective at reducing fire severity and lessening C loss when treated stands are burned by wildfire. However, the current pace and scale of treatment implementation are limited, especially given recent increases in area burned by wildfire. In this study, we used a forest landscape model (LANDIS-II) to evaluate how the implementation timing of large-scale fuel reduction treatments influences the forest C stock and fluxes of Sierra Nevada forests under projected climate and larger wildfires. We ran 90-year simulations using climate and wildfire projections from three general circulation models driven by the A2 emission scenario. We simulated two treatment implementation scenarios: a "distributed" scenario (treatments implemented throughout the simulation) and an "accelerated" scenario (treatments implemented during the first half century). We found that across the study area, accelerated implementation had 0.6-10.4 Mg ha-1 higher late-century aboveground biomass (AGB) and 1.0-2.2 g C m-2 yr-1 higher mean C sink strength than the distributed scenario, depending on the specific climate-wildfire projection. Cumulative wildfire emissions over the simulation period were 0.7-3.9 Mg C ha-1 higher for distributed implementation relative to accelerated implementation. However, simulations with either implementation practice had considerably higher AGB and C sink strength, as well as lower wildfire emissions, than simulations without fuel reduction treatments. The results demonstrate the potential for large-scale fuel reduction treatments to enhance forest C stock stability and C sink strength under projected climate-wildfire interactions. Given that climate and wildfire are projected to become more stressful after mid-century, earlier management action would yield greater C benefits.

  12. Simulator for concurrent processing data flow architectures

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.; Stoughton, John W.; Mielke, Roland R.

    1992-01-01

    A software simulator capable of simulating the execution of an algorithm graph on a given system under the Algorithm to Architecture Mapping Model (ATAMM) rules is presented. ATAMM is capable of modeling the execution of large-grained algorithms on distributed data flow architectures. Investigating the behavior and determining the performance of an ATAMM-based system requires the aid of software tools. The ATAMM Simulator presented here can determine the performance of a system without a hardware prototype having to be built. Case studies are performed on four algorithms to demonstrate the capabilities of the ATAMM Simulator. Simulated results are shown to be comparable to the experimental results of the Advanced Development Model System.

  13. Optimization of submerged depth of surface aerators for a carrousel oxidation ditch based on large eddy simulation with Smagorinsky model.

    PubMed

    Wei, Wenli; Bai, Yu; Liu, Yuling

    2016-01-01

    This paper is concerned with the simulation and experimental study of hydraulic characteristics in a pilot Carrousel oxidation ditch for optimizing the submerged depth ratio of surface aerators. The simulation was based on large eddy simulation with the Smagorinsky model, and velocities were monitored in the ditches with an acoustic Doppler velocimeter. Comparison of the simulated and experimental velocities shows good agreement, confirming the accuracy of the simulation. The best submerged depth ratio for surface aerators, 2/3, was obtained from analysis of the flow field structure, the ratio of gas and liquid in the bottom layer of the ditch, the average velocity of the mixture, and the extent of the flow region with velocities low enough to cause sludge deposition, under four operating conditions with submerged depth ratios of 1/3, 1/2, 2/3 and 3/4 for the surface aerators. The results can serve as a reference for the design of Carrousel oxidation ditches.
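The Smagorinsky closure named above computes a local eddy viscosity from the resolved strain rate, nu_t = (Cs Delta)^2 |S| with |S| = sqrt(2 S_ij S_ij). A minimal 2-D sketch on a toy velocity field follows; the constants and the flow are illustrative, and the paper applies this closure inside a full LES solver.

```python
import numpy as np

# Smagorinsky eddy viscosity on a 2-D grid via finite differences.

def smagorinsky_nut(u, v, dx, cs=0.17):
    dudx, dudy = np.gradient(u, dx, dx)
    dvdx, dvdy = np.gradient(v, dx, dx)
    s11, s22 = dudx, dvdy
    s12 = 0.5 * (dudy + dvdx)            # symmetric strain-rate tensor
    s_mag = np.sqrt(2 * (s11**2 + s22**2 + 2 * s12**2))
    return (cs * dx)**2 * s_mag          # filter width taken as dx

n, dx = 64, 0.1
x = np.arange(n) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.sin(X) * np.cos(Y)                # divergence-free cellular flow
v = -np.cos(X) * np.sin(Y)

nut = smagorinsky_nut(u, v, dx)
print("eddy viscosity range:", float(nut.min()), float(nut.max()))
```

The eddy viscosity is largest where the resolved strain is strongest, which is how the model drains energy from the smallest resolved scales.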

  14. Large Eddy Simulation study of the development of finite-channel lock-release currents at high Grashof numbers

    NASA Astrophysics Data System (ADS)

    Ooi, Seng-Keat

    2005-11-01

    Lock-exchange gravity current flows produced by the instantaneous release of a heavy fluid are investigated using well-resolved 3-D large eddy simulations at Grashof numbers up to 8*10^9. The 3-D simulations correctly predict a constant front velocity over the initial slumping phase and a front speed decreasing proportionally to t^(-1/3) (with the time t measured from the release) over the inviscid phase, in agreement with theory. The evolution of the current in the simulations is similar to that observed experimentally by Hacker et al. (1996). The effect of the dynamic LES model on the solutions is discussed. The energy budget of the current is discussed and the contribution of the turbulent dissipation to the total dissipation is analyzed. The limitations of less expensive 2-D simulations are discussed, in particular their failure to correctly predict the spatio-temporal distribution of the bed shear stress, which is important in determining the amount of sediment the gravity current can entrain when it advances over a loose bed.
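The two-phase front law summarized above, constant speed during slumping followed by a t^(-1/3) decay in the inviscid phase, can be written as a simple piecewise model. The initial speed u0 and the transition time t_s below are illustrative values, not fitted to the simulations.

```python
import numpy as np

# Piecewise front-speed law for a lock-release gravity current,
# continuous at the slumping-to-inviscid transition t = t_s.

def front_speed(t, u0=1.0, t_s=5.0):
    t = np.asarray(t, dtype=float)
    return np.where(t < t_s, u0, u0 * (t / t_s) ** (-1.0 / 3.0))

print("speed at t = t_s:  ", float(front_speed(5.0)))   # continuous: u0
print("speed at t = 8*t_s:", float(front_speed(40.0)))  # 8^(-1/3) = 1/2
```

A handy property of the t^(-1/3) law is visible in the second line: every eightfold increase in time past the transition halves the front speed.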

  15. Software Engineering for Scientific Computer Simulations

    NASA Astrophysics Data System (ADS)

    Post, Douglass E.; Henderson, Dale B.; Kendall, Richard P.; Whitney, Earl M.

    2004-11-01

    Computer simulation is becoming a very powerful tool for analyzing and predicting the performance of fusion experiments. Simulation efforts are evolving from including only a few effects to many effects, from small teams with a few people to large teams, and from workstations and small processor count parallel computers to massively parallel platforms. Successfully making this transition requires attention to software engineering issues. We report on the conclusions drawn from a number of case studies of large scale scientific computing projects within DOE, academia and the DoD. The major lessons learned include attention to sound project management including setting reasonable and achievable requirements, building a good code team, enforcing customer focus, carrying out verification and validation and selecting the optimum computational mathematics approaches.

  16. Morphological changes in polycrystalline Fe after compression and release

    NASA Astrophysics Data System (ADS)

    Gunkelmann, Nina; Tramontina, Diego R.; Bringa, Eduardo M.; Urbassek, Herbert M.

    2015-02-01

    Despite a number of large-scale molecular dynamics simulations of shock compressed iron, the morphological properties of simulated recovered samples are still unexplored. Key questions remain open in this area, including the role of dislocation motion and deformation twinning in shear stress release. In this study, we present simulations of homogeneous uniaxial compression and recovery of large polycrystalline iron samples. Our results reveal significant recovery of the body-centered cubic grains with some deformation twinning driven by shear stress, in agreement with experimental results by Wang et al. [Sci. Rep. 3, 1086 (2013)]. The twin fraction agrees reasonably well with a semi-analytical model which assumes a critical shear stress for twinning. On reloading, twins disappear and the material reaches a very low strength value.

  17. Large eddy simulation of hydrodynamic cavitation

    NASA Astrophysics Data System (ADS)

    Bhatt, Mrugank; Mahesh, Krishnan

    2017-11-01

    Large eddy simulation is used to study sheet-to-cloud cavitation over a wedge. The mixture of water and water vapor is represented using a homogeneous mixture model. The compressible Navier-Stokes equations for the mixture quantities, along with a transport equation for the vapor mass fraction employing finite-rate mass transfer between the two phases, are solved using the numerical method of Gnanaskandan and Mahesh. The method is implemented on unstructured grids with parallel MPI capabilities. Flow over a wedge is simulated at Re = 200,000, and the performance of the homogeneous mixture model is analyzed in predicting the different regimes of sheet-to-cloud cavitation, namely incipient, transitory and periodic, as observed in the experimental investigation of Harish et al. This work is supported by the Office of Naval Research.
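    A minimal sketch of the homogeneous-mixture closure commonly used in such solvers (the density values below are illustrative round numbers, not values from this paper): both phases are assumed to share pressure and velocity, so the mixture density follows from the vapor mass fraction Y via 1/rho_m = Y/rho_v + (1 - Y)/rho_l.

    ```python
    def mixture_density(Y, rho_v=0.017, rho_l=998.0):
        """Mixture density [kg/m^3] for vapor mass fraction Y in [0, 1]."""
        return 1.0 / (Y / rho_v + (1.0 - Y) / rho_l)

    def void_fraction(Y, rho_v=0.017, rho_l=998.0):
        """Vapor volume fraction alpha = Y * rho_m / rho_v."""
        return Y * mixture_density(Y, rho_v, rho_l) / rho_v

    # Even a tiny vapor mass fraction occupies most of the volume,
    # which is why cavitating mixtures are so compressible:
    print(round(void_fraction(1e-3), 3))  # -> 0.983
    ```

    The extreme density ratio between liquid and vapor is what makes the mixture speed of sound drop sharply in cavitating regions.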

  18. The morphodynamics and sedimentology of large river confluences

    NASA Astrophysics Data System (ADS)

    Nicholas, Andrew; Sambrook Smith, Greg; Best, James; Bull, Jon; Dixon, Simon; Goodbred, Steven; Sarker, Mamin; Vardy, Mark

    2017-04-01

    Confluences are key locations within large river networks, yet surprisingly little is known about how they migrate and evolve through time. Moreover, because confluence sites are associated with scour pools that are typically several times the mean channel depth, the deposits associated with such scours should have a high potential for preservation within the rock record. However, paradoxically, such scours are rarely observed, and the sedimentological characteristics of such deposits are poorly understood. This study reports results from a physically-based morphodynamic model, which is applied to simulate the evolution and resulting alluvial architecture associated with large river junctions. Boundary conditions within the model simulation are defined to approximate the junction of the Ganges and Jamuna rivers, in Bangladesh. Model results are supplemented by geophysical datasets collected during boat-based surveys at this junction. Simulated deposit characteristics and geophysical datasets are compared with three existing and contrasting conceptual models that have been proposed to represent the sedimentary architecture of confluence scours. Results illustrate that existing conceptual models may be overly simplistic, although elements of each of the three conceptual models are evident in the deposits generated by the numerical simulation. The latter are characterised by several distinct styles of sedimentary fill, which can be linked to particular morphodynamic behaviours. However, the preserved characteristics of simulated confluence deposits vary substantially according to the degree of reworking by channel migration. This may go some way towards explaining the confluence scour paradox; while abundant large scours might be expected in the rock record, they are rarely reported.

  19. Optimal control of energy extraction in LES of large wind farms

    NASA Astrophysics Data System (ADS)

    Meyers, Johan; Goit, Jay; Munters, Wim

    2014-11-01

    We investigate the use of optimal control combined with Large-Eddy Simulations (LES) of wind-farm boundary layer interaction to increase the total energy extraction in very large ``infinite'' wind farms and in finite farms. We consider the individual wind turbines as flow actuators, whose energy extraction can be dynamically regulated in time so as to optimally influence the turbulent flow field, maximizing the wind-farm power. For the simulation of wind-farm boundary layers we use large-eddy simulations in combination with an actuator-disk representation of the wind turbines. Simulations are performed in our in-house pseudo-spectral code SP-Wind. For the optimal control study, we consider the dynamic control of turbine-thrust coefficients in the actuator-disk model. They represent the effect of turbine blades that can actively pitch in time, changing the lift and drag coefficients of the turbine blades. In a first infinite wind-farm case, we find that farm power increases by approximately 16% over one hour of operation. This comes at the cost of a deceleration of the outer layer of the boundary layer. A detailed analysis of energy balances is presented, and a comparison is made between the infinite and finite farm cases, for which boundary layer entrainment plays an important role. The authors acknowledge support from the European Research Council (FP7-Ideas, Grant No. 306471). Simulations were performed on the computing infrastructure of the VSC Flemish Supercomputer Center, funded by the Hercules Foundation and the Flemish Government.

  20. Simulations of the formation of large-scale structure

    NASA Astrophysics Data System (ADS)

    White, S. D. M.

    Numerical studies related to the simulation of structure growth are examined. The linear development of fluctuations in the early universe is studied. The research of Aarseth, Gott, and Turner (1979) based on N-body integrators that obtained particle accelerations by direct summation of the forces due to other objects is discussed. Consideration is given to the 'pancake theory' of Zel'dovich (1970) for the evolution from adiabatic initial fluctuation, the neutrino-dominated universe models of White, Frenk, and Davis (1983), and the simulations of Davis et al. (1985).

  1. Use of High-Resolution Satellite Observations to Evaluate Cloud and Precipitation Statistics from Cloud-Resolving Model Simulations

    NASA Astrophysics Data System (ADS)

    Zhou, Y.; Tao, W.; Hou, A. Y.; Zeng, X.; Shie, C.

    2007-12-01

    The cloud and precipitation statistics simulated by the 3D Goddard Cumulus Ensemble (GCE) model for different environmental conditions, i.e., the South China Sea Monsoon Experiment (SCSMEX), CRYSTAL-FACE, and KWAJEX, are compared with Tropical Rainfall Measuring Mission (TRMM) TMI and PR rainfall measurements, as well as cloud observations from the Clouds and the Earth's Radiant Energy System (CERES) and the Moderate Resolution Imaging Spectroradiometer (MODIS) instruments. It is found that the GCE is capable of simulating major convective system development and reproducing the total surface rainfall amount as compared with rainfall estimated from the soundings. The model shows large discrepancies in the rain spectrum and in vertical hydrometeor profiles. The discrepancy in the precipitation field is also consistent with the cloud and radiation observations. The study focuses on the contributions of large-scale forcing and microphysics to the simulated model-observation discrepancies.

  2. Large Eddy Simulations of Severe Convection Induced Turbulence

    NASA Technical Reports Server (NTRS)

    Ahmad, Nash'at; Proctor, Fred

    2011-01-01

    Convective storms can pose a serious risk to aviation operations since they are often accompanied by turbulence, heavy rain, hail, icing, lightning, strong winds, and poor visibility. They can cause major delays in air traffic due to the re-routing of flights, and by disrupting operations at the airports in the vicinity of the storm system. In this study, the Terminal Area Simulation System is used to simulate five different convective events ranging from a mesoscale convective complex to isolated storms. The occurrence of convection induced turbulence is analyzed from these simulations. The validation of model results with the radar data and other observations is reported and an aircraft-centric turbulence hazard metric calculated for each case is discussed. The turbulence analysis showed that large pockets of significant turbulence hazard can be found in regions of low radar reflectivity. Moderate and severe turbulence was often found in building cumulus turrets and overshooting tops.

  3. Density Weighted FDF Equations for Simulations of Turbulent Reacting Flows

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2011-01-01

    In this report, we briefly revisit the formulation of the density weighted filtered density function (DW-FDF) for large eddy simulation (LES) of turbulent reacting flows, which was proposed by Jaberi et al. (Jaberi, F.A., Colucci, P.J., James, S., Givi, P. and Pope, S.B., Filtered mass density function for large-eddy simulation of turbulent reacting flows, J. Fluid Mech., vol. 401, pp. 85-121, 1999). First, we proceed with the traditional derivation of the DW-FDF equations using the fine-grained probability density function (FG-PDF); then we explore another way of constructing the DW-FDF equations by starting directly from the compressible Navier-Stokes equations. We observe that the terms which are unclosed in the traditional DW-FDF equations are closed in the newly constructed DW-FDF equations. This significant difference and its practical impact on computational simulations may deserve further study.

  4. Expanding Regional Airport Usage to Accommodate Increased Air Traffic Demand

    NASA Technical Reports Server (NTRS)

    Russell, Carl R.

    2009-01-01

    Small regional airports present an underutilized source of capacity in the national air transportation system. This study sought to determine whether a 50 percent increase in national operations could be achieved by limiting demand growth at large hub airports and instead growing traffic levels at the surrounding regional airports. This demand scenario for future air traffic in the United States was generated and used as input to a 24-hour simulation of the national airspace system. Results of the demand generation process and metrics predicting the simulation results are presented, in addition to the actual simulation results. The demand generation process showed that sufficient runway capacity exists at regional airports to offload a significant portion of traffic from hub airports. Predictive metrics forecast a large reduction of delays at most major airports when demand is shifted. The simulation results then show that offloading hub traffic can significantly reduce nationwide delays.

  5. Reconstructing a Large-Scale Population for Social Simulation

    NASA Astrophysics Data System (ADS)

    Fan, Zongchen; Meng, Rongqing; Ge, Yuanzheng; Qiu, Xiaogang

    The advent of social simulation has provided an opportunity to conduct research on social systems, and more and more researchers describe the components of social systems at a detailed level. Any simulation needs population data to initialize and run the simulation system. However, it is impossible to obtain data providing full information about individuals and households. We propose a two-step method to reconstruct a large-scale population for a Chinese city in accordance with Chinese culture. First, a baseline population is generated by gathering individuals into households one by one; second, social relationships such as friendships are assigned to the baseline population. In a case study, a population of 3,112,559 individuals in 1,133,835 households is reconstructed for Urumqi city, and the results show that the generated data match the real data quite well. The generated data can be applied to support the modeling of social phenomena.
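    The two-step idea can be sketched as follows. This is a deliberately simplified illustration: the sizes, counts, and uniform random draws are hypothetical, whereas the actual method would gather households according to census-derived distributions.

    ```python
    import random

    random.seed(42)

    def generate_population(n_households, max_size=5):
        """Step 1: baseline population as a list of households (lists of person ids)."""
        households, pid = [], 0
        for _ in range(n_households):
            size = random.randint(1, max_size)  # hypothetical household-size draw
            households.append(list(range(pid, pid + size)))
            pid += size
        return households

    def assign_friendships(households, n_friendships):
        """Step 2: overlay a social relationship (random friendship edges) on the population."""
        people = [p for h in households for p in h]
        edges = set()
        while len(edges) < n_friendships:
            a, b = random.sample(people, 2)
            edges.add((min(a, b), max(a, b)))  # undirected, no duplicates
        return edges

    households = generate_population(1000)
    friends = assign_friendships(households, 2000)
    print(len(households), len(friends))  # -> 1000 2000
    ```

    Separating household formation from relationship assignment keeps each step independently testable against its own marginal statistics.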

  6. Automated Knowledge Discovery from Simulators

    NASA Technical Reports Server (NTRS)

    Burl, Michael C.; DeCoste, D.; Enke, B. L.; Mazzoni, D.; Merline, W. J.; Scharenbroich, L.

    2006-01-01

    In this paper, we explore one aspect of knowledge discovery from simulators, the landscape characterization problem, where the aim is to identify regions in the input/ parameter/model space that lead to a particular output behavior. Large-scale numerical simulators are in widespread use by scientists and engineers across a range of government agencies, academia, and industry; in many cases, simulators provide the only means to examine processes that are infeasible or impossible to study otherwise. However, the cost of simulation studies can be quite high, both in terms of the time and computational resources required to conduct the trials and the manpower needed to sift through the resulting output. Thus, there is strong motivation to develop automated methods that enable more efficient knowledge extraction.

  7. Large-eddy simulation of flow past a circular cylinder

    NASA Technical Reports Server (NTRS)

    Mittal, R.

    1995-01-01

    Some of the most challenging applications of large-eddy simulation are those in complex geometries where spectral methods are of limited use. For such applications more conventional methods such as finite difference or finite element have to be used. However, it has become clear in recent years that dissipative numerical schemes which are routinely used in viscous flow simulations are not good candidates for use in LES of turbulent flows. Except in cases where the flow is extremely well resolved, it has been found that upwind schemes tend to damp out a significant portion of the small scales that can be resolved on the grid. Furthermore, it has been found that even specially designed higher-order upwind schemes that have been used successfully in the direct numerical simulation of turbulent flows produce too much dissipation when used in conjunction with large-eddy simulation. The objective of the current study is to perform an LES of incompressible flow past a circular cylinder at a Reynolds number of 3900 using a solver which employs an energy-conservative second-order central difference scheme for spatial discretization, and to compare the results obtained with those of Beaudan & Moin (1994) and with the experiments in order to assess the performance of the central scheme for this relatively complex geometry.

  8. Estimation of pollutant loads considering dam operation in Han River Basin by BASINS/Hydrological Simulation Program-FORTRAN.

    PubMed

    Jung, Kwang-Wook; Yoon, Choon-G; Jang, Jae-Ho; Kong, Dong-Soo

    2008-01-01

    Effective watershed management often demands qualitative and quantitative predictions of the effects of future management activities as arguments for policy makers and administrators. The BASINS geographic information system was developed to compute total maximum daily loads and is helpful for establishing hydrological-process and water-quality modeling systems. In this paper, the BASINS toolkit HSPF model is applied to the large (20,271 km²) watershed of the Han River Basin to assess the applicability of HSPF and of BMP scenarios. For proper evaluation of watershed and stream water quality, comprehensive estimation methods are necessary to assess the large amounts of point-source and nonpoint-source (NPS) pollution over the total watershed area. In this study, the Hydrological Simulation Program-FORTRAN (HSPF) was used to simulate watershed pollutant loads accounting for dam operation, and BMP scenarios were applied to control NPS pollution. Eight-day monitoring data (about three years) were used in the calibration and verification processes. Model performance was in the range of "very good" to "good" based on percent difference. The water-quality simulation results were encouraging for this sizable watershed with dam operation practice and mixed land uses; HSPF proved adequate, and its application is recommended for simulating watershed processes and evaluating BMPs. IWA Publishing 2008.

  9. Projected Future Vegetation Changes for the Northwest United States and Southwest Canada at a Fine Spatial Resolution Using a Dynamic Global Vegetation Model.

    PubMed

    Shafer, Sarah L; Bartlein, Patrick J; Gray, Elizabeth M; Pelltier, Richard T

    2015-01-01

    Future climate change may significantly alter the distributions of many plant taxa. The effects of climate change may be particularly large in mountainous regions where climate can vary significantly with elevation. Understanding potential future vegetation changes in these regions requires methods that can resolve vegetation responses to climate change at fine spatial resolutions. We used LPJ, a dynamic global vegetation model, to assess potential future vegetation changes for a large topographically complex area of the northwest United States and southwest Canada (38.0-58.0°N latitude by 136.6-103.0°W longitude). LPJ is a process-based vegetation model that mechanistically simulates the effect of changing climate and atmospheric CO2 concentrations on vegetation. It was developed and has been mostly applied at spatial resolutions of 10-minutes or coarser. In this study, we used LPJ at a 30-second (~1-km) spatial resolution to simulate potential vegetation changes for 2070-2099. LPJ was run using downscaled future climate simulations from five coupled atmosphere-ocean general circulation models (CCSM3, CGCM3.1(T47), GISS-ER, MIROC3.2(medres), UKMO-HadCM3) produced using the A2 greenhouse gases emissions scenario. Under projected future climate and atmospheric CO2 concentrations, the simulated vegetation changes result in the contraction of alpine, shrub-steppe, and xeric shrub vegetation across the study area and the expansion of woodland and forest vegetation. Large areas of maritime cool forest and cold forest are simulated to persist under projected future conditions. The fine spatial-scale vegetation simulations resolve patterns of vegetation change that are not visible at coarser resolutions and these fine-scale patterns are particularly important for understanding potential future vegetation changes in topographically complex areas.

  10. Large-eddy simulation of dust-uplift by a haboob density current

    NASA Astrophysics Data System (ADS)

    Huang, Qian; Marsham, John H.; Tian, Wenshou; Parker, Douglas J.; Garcia-Carreras, Luis

    2018-04-01

    Cold pool outflows have been shown, from both observations and convection-permitting models, to be a dominant source of dust emissions ("haboobs") in the summertime Sahel and Sahara, and to cause dust uplift over deserts across the world. In this paper, Met Office Large Eddy Model (LEM) simulations, which resolve the turbulence within the cold pools much better than previous studies of haboobs with convection-permitting models, are used to investigate the winds that uplift dust in cold pools and the resultant dust transport. In order to simulate the cold pool outflow, an idealized cooling is added in the model during the first 2 h of the 5.7 h run time. Given the short duration of the runs, dust is treated as a passive tracer. Dust uplift largely occurs in the "head" of the density current, consistent with the few existing observations. In the modeled density current, dust is largely restricted to the lowest, coldest and well-mixed layers of the cold pool outflow (below around 400 m), except above the "head" of the cold pool, where some dust reaches 2.5 km. This rapid transport to above 2 km will contribute to long atmospheric lifetimes of large dust particles from haboobs. Decreasing the model horizontal grid spacing from 1.0 km to 100 m resolves more turbulence, locally increasing winds, increasing mixing and reducing the propagation speed of the density current. Total accumulated dust uplift is approximately twice as large in the 1.0 km runs as in the 100 m runs, suggesting that the representation of turbulence and mixing is significant for studying haboobs in convection-permitting runs. Simulations with surface sensible heat fluxes representative of those from a desert region during daytime show that increasing the surface fluxes slows the density current, due to increased mixing, but increases dust uplift rates, due to increased downward transport of momentum to the surface.

  11. Systematic simulations of modified gravity: chameleon models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brax, Philippe; Davis, Anne-Christine; Li, Baojiu

    2013-04-01

    In this work we systematically study the linear and nonlinear structure formation in chameleon theories of modified gravity, using a generic parameterisation which describes a large class of models using only 4 parameters. For this we have modified the N-body simulation code ecosmog to perform a total of 65 simulations for different models and parameter values, including the default ΛCDM. These simulations enable us to explore a significant portion of the parameter space. We have studied the effects of modified gravity on the matter power spectrum and mass function, and found a rich and interesting phenomenology where the difference with the ΛCDM paradigm cannot be reproduced by a linear analysis even on scales as large as k ∼ 0.05 h Mpc^(-1), since the latter incorrectly assumes that the modification of gravity depends only on the background matter density. Our results show that the chameleon screening mechanism is significantly more efficient than other mechanisms such as the dilaton and symmetron, especially in high-density regions and at early times, and can serve as guidance to determine the parts of the chameleon parameter space which are cosmologically interesting and thus merit further study in the future.

  12. Simulation Of Seawater Intrusion With 2D And 3D Models: Nauru Island Case Study

    NASA Astrophysics Data System (ADS)

    Ghassemi, F.; Jakeman, A. J.; Jacobson, G.; Howard, K. W. F.

    1996-03-01

    With the advent of large computing capacities during the past few decades, sophisticated models have been developed for the simulation of seawater intrusion in coastal and island aquifers. Currently, several models are commercially available for the simulation of this problem. This paper describes the mathematical basis and application of the SUTRA and HST3D models to simulate seawater intrusion in Nauru Island, in the central Pacific Ocean. A comparison of the performance and limitations of these two models in simulating a real problem indicates that three-dimensional simulation of seawater intrusion with the HST3D model has the major advantage of being able to specify natural boundary conditions as well as pumping stresses. However, HST3D requires a small grid size and short time steps in order to maintain numerical stability and accuracy. These requirements lead to solution of a large set of linear equations that requires the availability of powerful computing facilities in terms of memory and computing speed. Combined results of the two simulation models indicate a safe pumping rate of 400 m3/d for the aquifer on Nauru Island, where additional fresh water is presently needed for the rehabilitation of mined-out land.

  13. Scale Dependence of Land Atmosphere Interactions in Wet and Dry Regions as Simulated with NU-WRF over the Southwestern and Southeast US

    NASA Technical Reports Server (NTRS)

    Zhou, Yaping; Wu, Di; Lau, K.- M.; Tao, Wei-Kuo

    2016-01-01

    Large-scale forcing and land-atmosphere interactions on precipitation are investigated with NASA-Unified WRF (NU-WRF) simulations during fast transitions of ENSO phases from spring to early summer of 2010 and 2011. The model captures the major precipitation episodes in the 3-month simulations without resorting to nudging. However, the mean intensity of the simulated precipitation is underestimated by 46% and 57% compared with observations in the dry and wet regions of the southwestern and south-central United States, respectively. Sensitivity studies show that large-scale atmospheric forcing plays a major role in producing regional precipitation. A methodology to account for moisture contributions to individual precipitation events, as well as to total precipitation, is presented within the same moisture budget framework. The analysis shows that the relative contributions of local evaporation and large-scale moisture convergence depend on the dry/wet region and are a function of temporal and spatial scales. While the ratio of local to large-scale moisture contributions varies with domain size and weather system, evaporation provides a major moisture source in the dry region and during light rain events, which leads to greater sensitivity to soil moisture under those conditions. The feedback of land surface processes on the large-scale forcing is well simulated, as indicated by changes in atmospheric circulation and moisture convergence. Overall, the results reveal an asymmetrical response of precipitation events to soil moisture, with higher sensitivity under dry than wet conditions. Drier soil tends to further suppress existing below-normal precipitation conditions via a positive soil moisture-land surface flux feedback that could worsen drought conditions in the southwestern United States.
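    The moisture-budget bookkeeping described above can be sketched with illustrative numbers (not values from the study): over an event, precipitation is supplied by local evaporation E plus large-scale moisture convergence C, so the local contribution can be summarized as E / (E + C).

    ```python
    def local_moisture_fraction(evaporation, convergence):
        """Fraction of the moisture supply provided by local evaporation.

        Assumes storage change is negligible over the event, so the
        supply is evaporation + large-scale moisture convergence.
        """
        total = evaporation + convergence
        if total <= 0:
            raise ValueError("no net moisture supply")
        return evaporation / total

    # In a dry region or a light-rain event, evaporation may dominate
    # (units are arbitrary as only the ratio matters):
    print(round(local_moisture_fraction(evaporation=2.0, convergence=0.5), 2))  # -> 0.8
    ```

    Because only the ratio matters, this diagnostic can be evaluated per event or per domain size, which is how the scale dependence above is exposed.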

  14. Computer simulations of ions in radio-frequency traps

    NASA Technical Reports Server (NTRS)

    Williams, A.; Prestage, J. D.; Maleki, L.; Djomehri, J.; Harabetian, E.

    1990-01-01

    The motion of ions in a trapped-ion frequency standard affects the stability of the standard. In order to study the motion and structures of large ion clouds in a radio-frequency (RF) trap, a computer simulation of the system that incorporates the effect of thermal excitation of the ions was developed. Results are presented from the simulation for cloud sizes up to 512 ions, emphasizing cloud structures in the low-temperature regime.

  15. Detached-Eddy Simulations of Attached and Detached Boundary Layers

    NASA Astrophysics Data System (ADS)

    Caruelle, B.; Ducros, F.

    2003-12-01

    This article presents Detached-Eddy Simulations (DES) of attached and detached turbulent boundary layers. This hybrid Reynolds-Averaged Navier-Stokes (RANS) / Large Eddy Simulation (LES) model transitions continuously from RANS to LES according to the mesh definition. We propose a parametric study of the model on two "academic" configurations, in order to assess the influence of the mesh on the correct treatment of complex flows with attached and detached boundary layers.

  16. Cannibalism, Kuru, and Mad Cows: Prion Disease As a "Choose-Your-Own-Experiment" Case Study to Simulate Scientific Inquiry in Large Lectures.

    PubMed

    Serrano, Antonio; Liebner, Jeffrey; Hines, Justin K

    2016-01-01

    Despite significant efforts to reform undergraduate science education, students often perform worse on assessments of perceptions of science after introductory courses, demonstrating a need for new educational interventions to reverse this trend. To address this need, we created An Inexplicable Disease, an engaging, active-learning case study that is unusual because it aims to simulate scientific inquiry by allowing students to iteratively investigate the Kuru epidemic of 1957 in a choose-your-own-experiment format in large lectures. The case emphasizes the importance of specialization and communication in science and is broadly applicable to courses of any size and sub-discipline of the life sciences.

  17. Wave climate and trends along the eastern Chukchi Arctic Alaska coast

    USGS Publications Warehouse

    Erikson, L.H.; Storlazzi, C.D.; Jensen, R.E.

    2011-01-01

    Due in large part to the difficulty of obtaining measurements in the Arctic, little is known about the wave climate along the coast of Arctic Alaska. In this study, numerical model simulations encompassing 40 years of wave hind-casts were used to assess mean and extreme wave conditions. Results indicate that the wave climate was strongly modulated by large-scale atmospheric circulation patterns and that mean and extreme wave heights and periods exhibited increasing trends in both the sea and swell frequency bands over the time period studied (1954-2004). Model simulations also indicate that the upward trend was not due to a decrease in the minimum icepack extent. © 2011 ASCE.

  18. Contribution of large scale coherence to wind turbine power: A large eddy simulation study in periodic wind farms

    NASA Astrophysics Data System (ADS)

    Chatterjee, Tanmoy; Peet, Yulia T.

    2018-03-01

    Length scales of eddies involved in the power generation of infinite wind farms are studied by analyzing the spectra of the turbulent flux of mean kinetic energy (MKE) from large eddy simulations (LES). Large-scale structures an order of magnitude bigger than the turbine rotor diameter (D) are shown to make a substantial contribution to wind power. Varying dynamics in the intermediate scales (D to 10D) are also observed in a parametric study involving inter-turbine distances and the hub height of the turbines. Further insight into the eddies responsible for the power generation is provided by a scaling analysis of the two-dimensional premultiplied spectra of the MKE flux. The LES code is developed in a high-Reynolds-number near-wall modeling framework, using the open-source spectral element code Nek5000, and the wind turbines have been modelled using a state-of-the-art actuator line model. The LES of infinite wind farms have been validated against statistical results from the previous literature. The study is expected to improve our understanding of the complex multiscale dynamics in large wind farms and to identify the length scales that contribute to the power. This information can be useful for the design of wind farm layouts and turbine placements that take advantage of the large-scale structures contributing to wind turbine power.

  19. Process and Learning Outcomes from Remotely-Operated, Simulated, and Hands-on Student Laboratories

    ERIC Educational Resources Information Center

    Corter, James E.; Esche, Sven K.; Chassapis, Constantin; Ma, Jing; Nickerson, Jeffrey V.

    2011-01-01

    A large-scale, multi-year, randomized study compared learning activities and outcomes for hands-on, remotely-operated, and simulation-based educational laboratories in an undergraduate engineering course. Students (N = 458) worked in small-group lab teams to perform two experiments involving stress on a cantilever beam. Each team conducted the…

  20. Spatial application of WEPS for estimating wind erosion in the Pacific Northwest

    USDA-ARS's Scientific Manuscript database

    The Wind Erosion Prediction System (WEPS) is used to simulate soil erosion on cropland and was originally designed to run simulations on a field-scale size. This study extended WEPS to run on multiple fields (grids) independently to cover a large region and to conduct an initial investigation to ass...

  1. The Use of Constructive Modeling and Virtual Simulation in Large-Scale Team Training: A Military Case Study.

    ERIC Educational Resources Information Center

    Andrews, Dee H.; Dineen, Toni; Bell, Herbert H.

    1999-01-01

    Discusses the use of constructive modeling and virtual simulation in team training; describes a military application of constructive modeling, including technology issues and communication protocols; considers possible improvements; and discusses applications in team-learning environments other than military, including industry and education. (LRW)

  2. It's a Girl! Random Numbers, Simulations, and the Law of Large Numbers

    ERIC Educational Resources Information Center

    Goodwin, Chris; Ortiz, Enrique

    2015-01-01

    Modeling using mathematics and making inferences about mathematical situations are becoming more prevalent in most fields of study. Descriptive statistics cannot be used to generalize about a population or make predictions of what can occur. Instead, inference must be used. Simulation and sampling are essential in building a foundation for…
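    The classroom simulation this article builds on can be sketched directly: simulate many births with equal probability of a girl, and watch the sample proportion settle toward 0.5, as the law of large numbers predicts. The function name and sample sizes below are illustrative, not from the article.

    ```python
    import random

    def proportion_of_girls(n_births, seed=None):
        """Simulate n_births with P(girl) = 0.5 and return the observed proportion."""
        rng = random.Random(seed)
        girls = sum(rng.random() < 0.5 for _ in range(n_births))
        return girls / n_births

    # Law of large numbers: the sample proportion approaches 0.5 as n grows.
    for n in (10, 1000, 100000):
        print(n, proportion_of_girls(n, seed=42))
    ```

    Small samples fluctuate widely; only at large n does the proportion reliably hug 0.5, which is exactly the inferential point the article makes about descriptive statistics versus simulation.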

  3. Evaluation of the Community Multiscale Air Quality Model for Simulating Winter Ozone Formation in the Uinta Basin.

    EPA Science Inventory

    The Weather Research and Forecasting (WRF) and Community Multiscale Air Quality (CMAQ) models were used to simulate a 10-day high-ozone episode observed during the 2013 Uinta Basin Winter Ozone Study (UBWOS). The baseline model had a large negative bias when compared to ozo...

  4. Exploring the Ability of a Coarse-grained Potential to Describe the Stress-strain Response of Glassy Polystyrene

    DTIC Science & Technology

    2012-10-01

    using the open-source code Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) (http://lammps.sandia.gov) (23). The commercial...parameters are proprietary and cannot be ported to the LAMMPS simulation code. In our molecular dynamics simulations at the atomistic resolution, we...IBI iterative Boltzmann inversion LAMMPS Large-scale Atomic/Molecular Massively Parallel Simulator MAPS Materials Processes and Simulations MS

  5. Hybrid stochastic simulations of intracellular reaction-diffusion systems.

    PubMed

    Kalantzis, Georgios

    2009-06-01

    With the observation that stochasticity is important in biological systems, chemical kinetics has begun to receive wider interest. While Monte Carlo discrete-event simulations most accurately capture the variability of molecular species, they become computationally costly for complex reaction-diffusion systems with large populations of molecules. Continuous-time models, on the other hand, are computationally efficient but fail to capture any variability in the molecular species. In this study a hybrid stochastic approach is introduced for simulating reaction-diffusion systems. We developed an adaptive partitioning strategy in which high-frequency processes are simulated with deterministic rate-based equations, and low-frequency processes with the exact stochastic algorithm of Gillespie. The stochastic behavior of cellular pathways is therefore preserved while the method remains applicable to large populations of molecules. We describe our method and demonstrate its accuracy and efficiency compared with the Gillespie algorithm for two different systems: first, a model of intracellular viral kinetics with two steady states, and second, a compartmental model of the postsynaptic spine head for studying the dynamics of Ca2+ and NMDA receptors.
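    The exact stochastic algorithm of Gillespie, which the hybrid method reserves for low-frequency processes, can be sketched for a toy birth-death system; the reactions and rate constants below are illustrative, not taken from the paper.

    ```python
    import math
    import random

    def gillespie_birth_death(k_birth, k_death, x0, t_end, seed=None):
        """Exact SSA for the birth-death system: 0 -> X at rate k_birth,
        X -> 0 at rate k_death * X. Returns the (times, counts) trajectory."""
        rng = random.Random(seed)
        t, x = 0.0, x0
        times, counts = [t], [x]
        while t < t_end:
            a1 = k_birth            # propensity of the birth reaction
            a2 = k_death * x        # propensity of the death reaction
            a0 = a1 + a2
            if a0 == 0.0:
                break
            t += -math.log(rng.random()) / a0      # exponential waiting time
            x += 1 if rng.random() * a0 < a1 else -1   # pick reaction by propensity
            times.append(t)
            counts.append(x)
        return times, counts

    times, counts = gillespie_birth_death(k_birth=10.0, k_death=0.1, x0=0,
                                          t_end=100.0, seed=1)
    # The trajectory fluctuates around the deterministic steady state
    # k_birth / k_death = 100, illustrating the variability that
    # rate-based equations alone would miss.
    ```

    The cost of the SSA grows with the total propensity, which is why the hybrid scheme hands high-frequency reactions over to deterministic rate equations.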

  6. Simulation and optimization study of a solar seasonal storage district heating system: the Fox River Valley case study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michaels, A.I.; Sillman, S.; Baylin, F.

    1983-05-01

    A central solar-heating plant with seasonal heat storage in a deep underground aquifer is designed by means of a solar-seasonal-storage-system simulation code based on the Solar Energy Research Institute (SERI) code for Solar Annual Storage Simulation (SASS). This Solar Seasonal Storage Plant is designed to supply close to 100% of the annual heating and domestic-hot-water (DHW) load of a hypothetical new community, the Fox River Valley Project, for a location in Madison, Wisconsin. Some analyses are also carried out for Boston, Massachusetts and Copenhagen, Denmark, as an indication of weather and insolation effects. Analyses are conducted for five different types of solar collectors, and for an alternate system utilizing seasonal storage in a large water tank. Predicted seasonal performance and system and storage costs are calculated. To provide some validation of the SASS results, a simulation of the solar system with seasonal storage in a large water tank is also carried out with a modified version of the Swedish Solar Seasonal Storage Code MINSUN.

  7. Quantitative computational infrared imaging of buoyant diffusion flames

    NASA Astrophysics Data System (ADS)

    Newale, Ashish S.

    Studies of infrared radiation from turbulent buoyant diffusion flames impinging on structural elements have applications to the development of fire models. A numerical and experimental study of radiation from buoyant diffusion flames with and without impingement on a flat plate is reported. Quantitative images of the radiation intensity from the flames are acquired using a high-speed infrared camera. Large eddy simulations are performed using the Fire Dynamics Simulator (FDS version 6). The species concentrations and temperatures from the simulations are used in conjunction with a narrow-band radiation model (RADCAL) to solve the radiative transfer equation. The computed infrared radiation intensities are rendered in the form of images and compared with the measurements. The measured and computed radiation intensities reveal necking and bulging with a characteristic frequency of 7.1 Hz, in agreement with previous empirical correlations. The results demonstrate the effects of the stagnation-point boundary layer on the upstream buoyant shear layer. The coupling between these two shear layers presents a model problem for the sub-grid scale modeling necessary for future large eddy simulations.

  8. Simulation Studies as Designed Experiments: The Comparison of Penalized Regression Models in the “Large p, Small n” Setting

    PubMed Central

    Chaibub Neto, Elias; Bare, J. Christopher; Margolin, Adam A.

    2014-01-01

    New algorithms are continuously proposed in computational biology, and performance evaluation of novel methods is important in practice. Nonetheless, the field lacks a rigorous methodology for systematically and objectively evaluating competing approaches. Simulation studies are frequently used to show that a particular method outperforms another. Oftentimes, however, simulation studies are not well designed, and it is hard to characterize the particular conditions under which different methods perform better. In this paper we propose adopting well-established techniques from the design of computer and physical experiments to develop effective simulation studies. By following best practices in the planning of experiments, we are better able to understand the strengths and weaknesses of competing algorithms, leading to more informed decisions about which method to use for a particular task. We illustrate the application of our proposed simulation framework with a detailed comparison of the ridge-regression, lasso, and elastic-net algorithms in a large-scale study investigating the effects on predictive performance of sample size, number of features, true model sparsity, signal-to-noise ratio, and feature correlation, in situations where the number of covariates is much larger than the sample size. Analysis of data sets containing tens of thousands of features but only a few hundred samples is nowadays routine in computational biology, where “omics” features such as gene expression, copy number variation, and sequence data are frequently used in the predictive modeling of complex phenotypes such as anticancer drug response. The penalized regression approaches investigated in this study are popular choices in this setting, and our simulations corroborate well-established results concerning the conditions under which each of these methods is expected to perform best, while providing several novel insights. PMID:25289666
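    A minimal version of such a "large p, small n" simulation can be sketched with scikit-learn (an assumed implementation; the paper's own design framework is far more systematic). The dimensions, sparsity, and regularization strengths below are illustrative. Under a sparse true model, the lasso is expected to outperform ridge:

    ```python
    import numpy as np
    from sklearn.linear_model import Ridge, Lasso, ElasticNet
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(0)
    n, p, k = 100, 1000, 10           # "large p, small n": 1000 features, 100 samples
    beta = np.zeros(p)
    beta[:k] = 1.0                     # sparse true model: only 10 active features
    X = rng.standard_normal((n, p))
    y = X @ beta + rng.standard_normal(n)       # unit-variance noise
    X_test = rng.standard_normal((200, p))
    y_test = X_test @ beta                      # noiseless test signal

    for name, model in [("ridge", Ridge(alpha=1.0)),
                        ("lasso", Lasso(alpha=0.1)),
                        ("enet", ElasticNet(alpha=0.1, l1_ratio=0.5))]:
        model.fit(X, y)
        print(name, round(r2_score(y_test, model.predict(X_test)), 3))
    ```

    A full factorial study would sweep n, p, sparsity, signal-to-noise ratio, and feature correlation over a designed grid rather than fixing one configuration, which is precisely the paper's point.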

  9. Visualization of the Eastern Renewable Generation Integration Study: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gruchalla, Kenny; Novacheck, Joshua; Bloom, Aaron

    The Eastern Renewable Generation Integration Study (ERGIS) explores the operational impacts of the widespread adoption of wind and solar photovoltaic (PV) resources in the U.S. Eastern Interconnection and Quebec Interconnection (collectively, EI). In order to understand some of the economic and reliability challenges of managing hundreds of gigawatts of wind and PV generation, we developed state-of-the-art tools, data, and models for simulating power system operations using hourly unit commitment and 5-minute economic dispatch over an entire year. Using NREL's high-performance computing capabilities and new methodologies to model operations, we found that the EI, as simulated with evolutionary change in 2026, could balance the variability and uncertainty of wind and PV at a 5-minute level under a variety of conditions. A large-scale display and a combination of multiple coordinated views and small multiples were used to visually analyze the four large, highly multivariate scenarios at high spatial and temporal resolution.

  10. Evolution of Precipitation Structure During the November DYNAMO MJO Event: Cloud-Resolving Model Intercomparison and Cross Validation Using Radar Observations

    NASA Astrophysics Data System (ADS)

    Li, Xiaowen; Janiga, Matthew A.; Wang, Shuguang; Tao, Wei-Kuo; Rowe, Angela; Xu, Weixin; Liu, Chuntao; Matsui, Toshihisa; Zhang, Chidong

    2018-04-01

    The evolution of precipitation structures is simulated and compared with radar observations for the November Madden-Julian Oscillation (MJO) event during the DYNAmics of the MJO (DYNAMO) field campaign. Three precipitation radars (ground-based, ship-borne, and spaceborne) and three cloud-resolving models (CRMs) driven by observed large-scale forcing are used to study precipitation structures at different locations over the central equatorial Indian Ocean. Convective strength is represented by 0-dBZ echo-top heights, and convective organization by contiguous 17-dBZ areas. The multi-radar and multi-model framework allows for more stringent model validation. The emphasis is on testing the models' ability to simulate subtle differences observed at different radar sites as the MJO event passed through. The results show that CRMs forced by site-specific large-scale forcing can reproduce not only common features in cloud populations but also subtle variations observed by different radars. The comparisons also reveal common deficiencies in the CRM simulations, which underestimate radar echo-top heights for the strongest convection within large, organized precipitation features. Cross validation with multiple radars and models also enables quantitative comparisons in CRM sensitivity studies using different large-scale forcings, microphysical schemes and parameters, resolutions, and domain sizes. In terms of radar echo-top height temporal variations, many model sensitivity tests correlate better with each other than the radar/model comparisons do, indicating robust model performance in this respect. It is further shown that well-validated model simulations could be used to constrain uncertainties in observed echo-top heights when the low-resolution surveillance scanning strategy is used.

  11. Bridging the scales in atmospheric composition simulations using a nudging technique

    NASA Astrophysics Data System (ADS)

    D'Isidoro, Massimo; Maurizi, Alberto; Russo, Felicita; Tampieri, Francesco

    2010-05-01

    Studying the interaction between climate and anthropogenic activities, specifically those concentrated in megacities/hot spots, requires the description of processes over a very wide range of scales, from the local scale, where anthropogenic emissions are concentrated, to the global scale, where the impact of these sources is of interest. Describing all processes at all scales within the same numerical implementation is not feasible with limited computer resources. Therefore, different phenomena are studied by means of different numerical models covering different ranges of scales. The exchange of information from small to large scales is highly non-trivial yet of high interest. In fact, uncertainties in large-scale simulations are expected to receive large contributions from the most polluted areas, where the highly inhomogeneous distribution of sources, combined with the intrinsic non-linearity of the processes involved, can generate non-negligible departures between coarse- and fine-scale simulations. In this work a new method is proposed and investigated in a case study (August 2009) using the BOLCHEM model. Monthly simulations at coarse (0.5°, European domain, run A) and fine (0.1°, Central Mediterranean domain, run B) horizontal resolution are performed, with the coarse resolution providing the boundary conditions for the fine one. Another coarse-resolution run (run C) is then performed, in which the high-resolution fields remapped onto the coarse grid are used to nudge the concentrations over the Po Valley area. The nudging is applied to all gas and aerosol species of BOLCHEM. Averaged concentrations and variances of O3 and PM over the Po Valley and other selected areas are computed. Although the variance of run B is markedly larger than that of run A, the variance of run C is smaller, because the remapping procedure removes a large portion of the variance from the run B fields.
Mean concentrations show some differences depending on species: in general, the mean values of run C lie between those of runs A and B. A propagation of the signal outside the nudging region is observed, and is evaluated in terms of differences between the coarse-resolution runs (with and without nudging) and the fine-resolution simulation.
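    The concentration nudging described above is, at its core, Newtonian relaxation of the coarse field toward the remapped fine-scale field over a limited region. A minimal sketch, with an illustrative one-dimensional field, an assumed relaxation time, and a mask standing in for the Po Valley area:

    ```python
    import numpy as np

    def nudge_step(c_coarse, c_target, mask, dt, tau):
        """One explicit step of Newtonian relaxation: inside the masked region,
        c_coarse is pulled toward c_target with relaxation time scale tau."""
        tendency = np.where(mask, (c_target - c_coarse) / tau, 0.0)
        return c_coarse + dt * tendency

    c = np.zeros(10)                 # coarse-grid concentration field
    target = np.full(10, 5.0)        # fine-grid field remapped onto the coarse grid
    mask = np.arange(10) < 4         # nudging applied only over a subregion
    for _ in range(1000):
        c = nudge_step(c, target, mask, dt=60.0, tau=3600.0)
    # Masked cells relax toward the target; cells outside the mask are untouched
    # (any spread beyond the mask must come from the model's own dynamics,
    # which is the signal propagation the study evaluates).
    ```

    In the real model the same tendency term would be added for every gas and aerosol species alongside the dynamical and chemical tendencies.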

  12. Computer-generated forces in distributed interactive simulation

    NASA Astrophysics Data System (ADS)

    Petty, Mikel D.

    1995-04-01

    Distributed Interactive Simulation (DIS) is an architecture for building large-scale simulation models from a set of independent simulator nodes communicating via a common network protocol. DIS is most often used to create a simulated battlefield for military training. Computer Generated Forces (CGF) systems control large numbers of autonomous battlefield entities in a DIS simulation using computer equipment and software rather than humans in simulators. CGF entities serve as both enemy forces and supplemental friendly forces in a DIS exercise. Research into various aspects of CGF systems is ongoing. Several CGF systems have been implemented.

  13. Numerical study of dynamo action at low magnetic Prandtl numbers.

    PubMed

    Ponty, Y; Mininni, P D; Montgomery, D C; Pinton, J-F; Politano, H; Pouquet, A

    2005-04-29

    We present a three-pronged numerical approach to the dynamo problem at low magnetic Prandtl numbers P_M. The difficulty of resolving a large range of scales is circumvented by combining direct numerical simulations, a Lagrangian-averaged model, and large-eddy simulations. The flow is generated by the Taylor-Green forcing; it combines a well-defined structure at large scales and turbulent fluctuations at small scales. Our main findings are: (i) dynamos are observed from P_M = 1 down to P_M = 10^-2; (ii) the critical magnetic Reynolds number increases sharply with P_M^-1 as turbulence sets in and then saturates; and (iii) in the linear growth phase, unstable magnetic modes move to smaller scales as P_M is decreased. The dynamo then grows at large scales and modifies the turbulent velocity fluctuations.

  14. Comparison of Large eddy dynamo simulation using dynamic sub-grid scale (SGS) model with a fully resolved direct simulation in a rotating spherical shell

    NASA Astrophysics Data System (ADS)

    Matsui, H.; Buffett, B. A.

    2017-12-01

    The flow in the Earth's outer core is expected to span a vast range of length scales, from the geometry of the outer core down to the thickness of the boundary layers. Because of the limited spatial resolution of numerical simulations, sub-grid scale (SGS) modeling is required to represent the effects of the unresolved fields on the large-scale fields. We model the effects of the sub-grid scale flow and magnetic field using a dynamic scale-similarity model. Four terms are introduced, for the momentum flux, heat flux, Lorentz force, and magnetic induction. The model was previously used for convection-driven dynamos in a rotating plane layer and spherical shell using finite element methods. In the present study, we perform large eddy simulations (LES) using the dynamic scale-similarity model. The scale-similarity model is implemented in Calypso, a numerical dynamo model based on a spherical harmonics expansion. To obtain the SGS terms, spatial filtering in the horizontal directions is done by taking the convolution of a Gaussian filter expressed in terms of a spherical harmonic expansion, following Jekeli (1981). A Gaussian filter is also applied in the radial direction. To verify the present model, we perform a fully resolved direct numerical simulation (DNS) with a spherical harmonics truncation of L = 255 as a reference, along with unresolved DNS and LES with the SGS model at coarser resolutions (L = 127, 84, and 63) using the same control parameters as the resolved DNS. We will discuss the verification by comparison among these simulations, and the role of the small-scale fields in the large-scale dynamics through the SGS terms in the LES.
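    The spatial filtering step that underlies the SGS terms can be illustrated in one periodic dimension (the actual model applies the filter in spherical-harmonic space following Jekeli, 1981; the filter width and test field below are illustrative assumptions):

    ```python
    import numpy as np

    def gaussian_filter_1d(u, dx, delta):
        """Convolve a periodic 1-D field with a normalized Gaussian kernel of
        filter width delta, via the circular convolution theorem."""
        n = len(u)
        x = (np.arange(n) - n // 2) * dx
        kernel = np.exp(-6.0 * x**2 / delta**2)   # common LES filter convention
        kernel /= kernel.sum()                    # preserve the field mean
        return np.real(np.fft.ifft(np.fft.fft(u) *
                                   np.fft.fft(np.fft.ifftshift(kernel))))

    # A field with a large-scale (k=1) and a small-scale (k=40) component:
    n, dx = 256, 1.0 / 256
    x = np.arange(n) * dx
    u = np.sin(2 * np.pi * x) + 0.2 * np.sin(2 * np.pi * 40 * x)
    u_bar = gaussian_filter_1d(u, dx, delta=8 * dx)
    # The filtered field retains the k=1 mode nearly unchanged and strongly
    # damps the k=40 mode; u - u_bar is the sub-filter (small-scale) part
    # from which scale-similarity SGS terms are built.
    ```

    In the scale-similarity model, products of filtered fields are filtered again and compared with the filtered products, so having an explicit, mean-preserving filter like this is the essential building block.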

  15. Stochastic locality and master-field simulations of very large lattices

    NASA Astrophysics Data System (ADS)

    Lüscher, Martin

    2018-03-01

    In lattice QCD and other field theories with a mass gap, the field variables in distant regions of a physically large lattice are only weakly correlated. Accurate stochastic estimates of the expectation values of local observables may therefore be obtained from a single representative field. Such master-field simulations potentially allow very large lattices to be simulated, but require various conceptual and technical issues to be addressed. In this talk, an introduction to the subject is provided and some encouraging results of master-field simulations of the SU(3) gauge theory are reported.

  16. Molecular Dynamics Simulations of the Temperature Induced Unfolding of Crambin Follow the Arrhenius Equation.

    PubMed

    Dalby, Andrew; Shamsir, Mohd Shahir

    2015-01-01

    Molecular dynamics simulations have been used extensively to model the folding and unfolding of proteins. The rates of folding and unfolding should follow the Arrhenius equation over a limited range of temperatures. This study shows that molecular dynamics simulations of the unfolding of crambin between 500 K and 560 K do follow the Arrhenius equation. They also show that, while there is a large amount of variation between the simulations, the average values for the rate show a very high degree of correlation.

  17. Molecular Dynamics Simulations of the Temperature Induced Unfolding of Crambin Follow the Arrhenius Equation.

    PubMed Central

    Dalby, Andrew; Shamsir, Mohd Shahir

    2015-01-01

    Molecular dynamics simulations have been used extensively to model the folding and unfolding of proteins. The rates of folding and unfolding should follow the Arrhenius equation over a limited range of temperatures. This study shows that molecular dynamics simulations of the unfolding of crambin between 500 K and 560 K do follow the Arrhenius equation. They also show that, while there is a large amount of variation between the simulations, the average values for the rate show a very high degree of correlation. PMID:26539292
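    Checking Arrhenius behavior amounts to verifying that ln k is linear in 1/T, with slope -Ea/R. A sketch with synthetic rate constants over the study's 500-560 K range (the pre-exponential factor A and activation energy Ea are assumed values for illustration, not the crambin results):

    ```python
    import math

    R = 8.314                 # gas constant, J/(mol K)
    A, Ea = 1.0e12, 1.0e5     # assumed pre-exponential factor and activation energy

    # Synthetic unfolding rates generated from the Arrhenius equation
    # k(T) = A * exp(-Ea / (R T)) at temperatures in the study's range.
    temps = [500.0, 520.0, 540.0, 560.0]
    rates = [A * math.exp(-Ea / (R * T)) for T in temps]

    # Least-squares fit of ln k against 1/T: the slope is -Ea/R.
    xs = [1.0 / T for T in temps]
    ys = [math.log(k) for k in rates]
    m = len(xs)
    xbar, ybar = sum(xs) / m, sum(ys) / m
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    print("recovered Ea (J/mol):", -slope * R)
    ```

    With real simulation data the points scatter about the line, and the quality of the linear fit is exactly the "high degree of correlation" the abstract refers to.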

  18. Nudging and predictability in regional climate modelling: investigation in a nested quasi-geostrophic model

    NASA Astrophysics Data System (ADS)

    Omrani, Hiba; Drobinski, Philippe; Dubos, Thomas

    2010-05-01

    In this work, we consider the effect of indiscriminate and spectral nudging on the large and small scales of an idealized model simulation. The model is a two-layer quasi-geostrophic model on the beta-plane, driven at its boundaries by the "global" version with periodic boundary conditions. This setup mimics the configuration used for regional climate modelling. The effect of large-scale nudging is studied using the "perfect model" approach. Two sets of experiments are performed: (1) the effect of nudging is investigated with a "global" high-resolution two-layer quasi-geostrophic model driven by a low-resolution two-layer quasi-geostrophic model; (2) similar simulations are conducted with the two-layer quasi-geostrophic Limited Area Model (LAM), where the size of the LAM domain comes into play in addition to the factors in the first set of simulations. The study shows that the indiscriminate nudging time that minimizes the error at both the large and small scales is close to the predictability time. For spectral nudging, the optimum nudging time should tend to zero, since the best large-scale dynamics is supposed to be given by the driving fields. However, because the driving large-scale fields are generally supplied at a much lower frequency than the model time step (e.g., 6-hourly analyses), with a basic interpolation between the fields, the optimum nudging time differs from zero while remaining smaller than the predictability time.

  19. Large-scale dynamo growth rates from numerical simulations and implications for mean-field theories

    NASA Astrophysics Data System (ADS)

    Park, Kiwan; Blackman, Eric G.; Subramanian, Kandaswamy

    2013-05-01

    Understanding large-scale magnetic field growth in turbulent plasmas in the magnetohydrodynamic limit is a goal of magnetic dynamo theory. In particular, assessing how well large-scale helical field growth and saturation in simulations match those predicted by existing theories is important for progress. Using numerical simulations of isotropically forced turbulence without large-scale shear, we focus on several additional aspects of this comparison: (1) Leading mean-field dynamo theories which break the field into large and small scales predict that large-scale helical field growth rates are determined by the difference between kinetic helicity and current helicity, with no dependence on the nonhelical energy in small-scale magnetic fields. Our simulations show that the growth rate of the large-scale field from fully helical forcing is indeed unaffected by the presence or absence of small-scale magnetic fields amplified in a precursor nonhelical dynamo. However, because the precursor nonhelical dynamo in our simulations produced fields that were strongly subequipartition with respect to the kinetic energy, we cannot yet rule out the potential influence of stronger nonhelical small-scale fields. (2) We have identified two features in our simulations which cannot be explained by the most minimalist versions of two-scale mean-field theory: (i) fully helical small-scale forcing produces significant nonhelical large-scale magnetic energy, and (ii) the saturation of the large-scale field growth is time-delayed with respect to what minimalist theory predicts. We comment on desirable generalizations to the theory in this context and future desired work.

  20. Large-scale dynamo growth rates from numerical simulations and implications for mean-field theories.

    PubMed

    Park, Kiwan; Blackman, Eric G; Subramanian, Kandaswamy

    2013-05-01

    Understanding large-scale magnetic field growth in turbulent plasmas in the magnetohydrodynamic limit is a goal of magnetic dynamo theory. In particular, assessing how well large-scale helical field growth and saturation in simulations match those predicted by existing theories is important for progress. Using numerical simulations of isotropically forced turbulence without large-scale shear, we focus on several additional aspects of this comparison: (1) Leading mean-field dynamo theories which break the field into large and small scales predict that large-scale helical field growth rates are determined by the difference between kinetic helicity and current helicity, with no dependence on the nonhelical energy in small-scale magnetic fields. Our simulations show that the growth rate of the large-scale field from fully helical forcing is indeed unaffected by the presence or absence of small-scale magnetic fields amplified in a precursor nonhelical dynamo. However, because the precursor nonhelical dynamo in our simulations produced fields that were strongly subequipartition with respect to the kinetic energy, we cannot yet rule out the potential influence of stronger nonhelical small-scale fields. (2) We have identified two features in our simulations which cannot be explained by the most minimalist versions of two-scale mean-field theory: (i) fully helical small-scale forcing produces significant nonhelical large-scale magnetic energy, and (ii) the saturation of the large-scale field growth is time-delayed with respect to what minimalist theory predicts. We comment on desirable generalizations to the theory in this context and future desired work.

  1. Error-growth dynamics and predictability of surface thermally induced atmospheric flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeng, X.; Pielke, R.A.

    1993-09-01

    Using the CSU Regional Atmospheric Modeling System (RAMS) in its nonhydrostatic and compressible configuration, over 200 two-dimensional simulations with Δx = 2 km and Δx = 100 m are performed to study in detail the initial adjustment process and the error-growth dynamics of surface thermally induced circulation, including the sensitivity to initial conditions, boundary conditions, and model parameters, and to study the predictability as a function of the size of surface heat patches under a calm mean wind. It is found that the error growth is not sensitive to the characteristics of the initial perturbations. The numerical smoothing has a strong impact on the initial adjustment process and on the error-growth dynamics. Regarding the predictability and flow structures, it is found that the vertical velocity field is strongly affected by the mean wind, and the flow structures are quite sensitive to the initial soil water content. The transition from organized flow to the situation in which fluxes are dominated by noncoherent turbulent eddies under a calm mean wind is quantitatively evaluated, and this transition differs between variables. The relationship between the predictability of a realization and of an ensemble average is discussed. The predictability and the coherent circulations modulated by the surface inhomogeneities are also studied by computing autocorrelations and power spectra. Three-dimensional mesoscale and large-eddy simulations are performed to verify the above results. It is found that the two-dimensional mesoscale (or fine-resolution) simulation yields very similar results regarding predictability to those from the three-dimensional mesoscale (or large-eddy) simulation. The horizontally averaged quantities based on two-dimensional fine-resolution simulations are insensitive to initial perturbations and agree with those based on three-dimensional large-eddy simulations. 87 refs., 25 figs.

  2. Conceptual study of the damping of large space structures using large-stroke adaptive stiffness cables

    NASA Technical Reports Server (NTRS)

    Thorwald, Gregory; Mikulas, Martin M., Jr.

    1992-01-01

    The concept of a large-stroke, adaptive-stiffness cable device for damping control of large-mass space structures is introduced. The cable is used to provide damping in several examples, and its performance is shown through numerical simulation results. Displacement and velocity information about the structure's motion is used to determine when to modify the cable's stiffness in order to provide a damping force.

  3. Large Eddy Simulations using oodlesDST

    DTIC Science & Technology

    2016-01-01

    Research Agency DST-Group-TR-3205 ABSTRACT The oodlesDST code is based on OpenFOAM software and performs Large Eddy Simulations of... maritime platforms using a variety of simulation techniques. He is currently using OpenFOAM software to perform both Reynolds Averaged Navier-Stokes...

  4. Study on the temperature field of large-sized sapphire single crystal furnace

    NASA Astrophysics Data System (ADS)

    Zhai, J. P.; Jiang, J. W.; Liu, K. G.; Peng, X. B.; Jian, D. L.; Li, I. L.

    2018-01-01

    In this paper, the temperature field of large-sized (120 kg, 200 kg, and 300 kg grade) sapphire single crystal furnaces was simulated. Keeping the crucible diameter ratio and the insulation system unchanged, the power consumption, axial and radial temperature gradients, solid-liquid interface shape, stress distribution, and melt flow were studied. The simulation results showed that as the furnace size increases, the power consumption increases, the insulating effect of the temperature field worsens, the growth stress increases, and stress concentration occurs. To address these problems, the middle and bottom insulation systems should be enhanced when designing a large-sized sapphire single crystal furnace. Appropriate radial and axial temperature gradients are favorable for reducing the crystal stress and preventing cracking, and expanding the interface between the seed and the crystal helps avoid stress accumulation.

  5. Analysis of orbital perturbations acting on objects in orbits near geosynchronous earth orbit

    NASA Technical Reports Server (NTRS)

    Friesen, Larry J.; Jackson, Albert A., IV; Zook, Herbert A.; Kessler, Donald J.

    1992-01-01

    The paper presents a numerical investigation of orbital evolution for objects started in GEO or in orbits near GEO, in order to study potential orbital debris problems in this region. Perturbations simulated include nonspherical terms in the Earth's geopotential field, lunar and solar gravity, and solar radiation pressure. Objects simulated include large satellites, for which solar radiation pressure is insignificant, and small particles, for which solar radiation pressure is an important force. Results for large satellites are largely in agreement with previous GEO studies that used classical perturbation techniques. The orbit planes of GEO satellites placed in a stable-plane orbit inclined approximately 7.3 deg to the equator experience very little precession, always remaining within 1.2 percent of their initial orientation. Solar radiation pressure generates two major effects on small particles: an orbital eccentricity oscillation anticipated from previous research, and an oscillation in orbital inclination.

  6. Quantum Fragment Based ab Initio Molecular Dynamics for Proteins.

    PubMed

    Liu, Jinfeng; Zhu, Tong; Wang, Xianwei; He, Xiao; Zhang, John Z H

    2015-12-08

    Developing ab initio molecular dynamics (AIMD) methods for practical application in protein dynamics is of significant interest. Due to the large size of biomolecules, applying standard quantum chemical methods to compute energies for dynamic simulation is computationally prohibitive. In this work, a fragment-based ab initio molecular dynamics approach is presented for practical application in protein dynamics studies. In this approach, the energy and forces of the protein are calculated by a recently developed electrostatically embedded generalized molecular fractionation with conjugate caps (EE-GMFCC) method. For simulation in explicit solvent, mechanical embedding is introduced to treat protein interaction with explicit water molecules. This AIMD approach has been applied to MD simulations of the small benchmark protein Trp-cage (with 20 residues and 304 atoms) in both the gas phase and in solution. Comparison to the simulation result using the AMBER force field shows that AIMD gives a more stable protein structure in the simulation, indicating that the quantum chemical energy is more reliable. Importantly, the present fragment-based AIMD simulation captures quantum effects, including electrostatic polarization and charge transfer, that are missing in standard classical MD simulations. The current approach is linear-scaling, trivially parallel, and applicable to AIMD simulations of large proteins.
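
The fragment-based energy assembly underlying GMFCC-style methods can be sketched at the bookkeeping level. The function name and all energy values below are hypothetical stand-ins; a real EE-GMFCC calculation obtains each term from a quantum chemistry package and additionally includes electrostatic embedding and two-body interaction corrections.

```python
# Sketch of the fragmentation bookkeeping behind GMFCC-style methods.
# All names and numbers are illustrative, not from the paper.
def gmfcc_total_energy(capped_fragment_energies, conjugate_cap_energies):
    """Combine capped-fragment energies: the conjugate-cap pair energies
    are subtracted because the cap atoms are counted twice when every
    fragment is capped."""
    return sum(capped_fragment_energies) - sum(conjugate_cap_energies)

# Toy numbers (hartree, hypothetical): three capped fragments, two cap pairs.
e_total = gmfcc_total_energy([-5.0, -6.0, -4.5], [-1.0, -1.2])
```

Because each term is an independent small calculation, the scheme is linear-scaling and trivially parallel, as the abstract notes.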

  7. The Role of Fluid Compression in Particle Energization during Magnetic Reconnection

    NASA Astrophysics Data System (ADS)

    Li, X.; Guo, F.; Li, H.; Li, S.

    2017-12-01

    Theories of particle transport and acceleration have shown that fluid compression is the leading mechanism for particle energization. However, the role of compression in particle energization during magnetic reconnection is unclear. We present a series of studies to clarify the effect of fluid compression in accelerating particles to high energies during magnetic reconnection. Using fully kinetic reconnection simulations, we show that fluid compression is the leading mechanism for high-energy particle energization. We find that compressional energization is more important in a low-beta plasma or in a reconnection layer with a weak guide field (the magnetic field component perpendicular to the reconnecting magnetic field), conditions relevant to solar flares. Our analysis of 3D kinetic simulations shows that self-generated turbulence scatters particles and enhances particle diffusion in the acceleration regions. Based on these results, we then study large-scale reconnection acceleration by solving the particle transport equation in a large-scale reconnection layer evolved with MHD simulations. Due to the compressional effect, particles are accelerated to high energies and develop power-law energy distributions. This study clarifies the nature of particle acceleration in the reconnection layer and is important for understanding particle energization in large-scale events such as solar flares.

  8. Fast Generation of Ensembles of Cosmological N-Body Simulations via Mode-Resampling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneider, M D; Cole, S; Frenk, C S

    2011-02-14

    We present an algorithm for quickly generating multiple realizations of N-body simulations to be used, for example, for cosmological parameter estimation from surveys of large-scale structure. Our algorithm uses a new method to resample the large-scale (Gaussian-distributed) Fourier modes in a periodic N-body simulation box in a manner that properly accounts for the nonlinear mode-coupling between large and small scales. We find that our method for adding new large-scale mode realizations recovers the nonlinear power spectrum to sub-percent accuracy on scales larger than about half the Nyquist frequency of the simulation box. Using 20 N-body simulations, we obtain a power spectrum covariance matrix estimate that matches the estimator from Takahashi et al. (from 5000 simulations) with < 20% errors in all matrix elements. Comparing the rates of convergence, we determine that our algorithm requires about 8 times fewer simulations to achieve a given error tolerance in estimates of the power spectrum covariance matrix. The degree of success of our algorithm indicates that we understand the main physical processes that give rise to the correlations in the matter power spectrum. Namely, the large-scale Fourier modes modulate both the degree of structure growth, through the variation in the effective local matter density, and the spatial frequency of small-scale perturbations, through large-scale displacements. We expect our algorithm to be useful for noise modeling when constraining cosmological parameters from weak lensing (cosmic shear) and galaxy surveys, rescaling summary statistics of N-body simulations for new cosmological parameter values, and any applications where the influence of Fourier modes larger than the simulation size must be accounted for.
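
The mode-resampling idea can be illustrated in one dimension with a toy field: select the Fourier modes below a cutoff wavenumber and redraw them while leaving the small-scale modes intact. This sketch randomizes only the phases of the large-scale modes (keeping their amplitudes), whereas the actual method redraws Gaussian mode amplitudes and propagates the nonlinear coupling to small scales, which a toy example cannot capture; all names here are illustrative.

```python
import numpy as np

def resample_large_modes(field, k_cut, rng):
    """Replace Fourier modes with 0 < k < k_cut by draws with new random
    phases, leaving small-scale modes untouched (toy 1-D periodic box)."""
    n = field.size
    fk = np.fft.rfft(field)
    k = np.fft.rfftfreq(n)            # frequencies in cycles per sample
    large = (k > 0) & (k < k_cut)     # large-scale modes (DC kept fixed)
    amp = np.abs(fk[large])           # keep each mode's amplitude
    phase = rng.uniform(0.0, 2.0 * np.pi, large.sum())
    fk[large] = amp * np.exp(1j * phase)
    return np.fft.irfft(fk, n)

rng = np.random.default_rng(0)
x = rng.standard_normal(256)          # toy "simulation" field
y = resample_large_modes(x, k_cut=0.05, rng=rng)
```

Generating many such resampled realizations from a single box is what makes the covariance estimate far cheaper than running independent simulations.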

  9. The Effect of Yaw Coupling in Turning Maneuvers of Large Transport Aircraft

    NASA Technical Reports Server (NTRS)

    McNeill, Walter E.; Innis, Robert C.

    1965-01-01

    A study has been made, using a piloted moving simulator, of the effects of the yaw-coupling parameters N(sub p) and N(sub delta(sub a)) on the lateral-directional handling qualities of a large transport airplane at landing-approach airspeed. It is shown that the desirable combinations of these parameters tend to be more proverse when compared with values typical of current aircraft. Results of flight tests in a large variable-stability jet transport showed trends which were similar to those of the simulator data. Areas of minor disagreement, which were traced to differences in airplane geometry, indicate that pilot consciousness of side acceleration forces can be an important factor in handling qualities of future long-nosed transport aircraft.

  10. Handling Qualities of Large Rotorcraft in Hover and Low Speed

    NASA Technical Reports Server (NTRS)

    Malpica, Carlos; Theodore, Colin R.; Lawrence, Ben; Blanken, Chris L.

    2015-01-01

    According to a number of system studies, large capacity advanced rotorcraft with a capability of high cruise speeds (approx.350 mph) as well as vertical and/or short take-off and landing (V/STOL) flight could alleviate anticipated air transportation capacity issues by making use of non-primary runways, taxiways, and aprons. These advanced aircraft pose a number of design challenges, as well as unknown issues in the flight control and handling qualities domains. A series of piloted simulation experiments have been conducted on the NASA Ames Research Center Vertical Motion Simulator (VMS) in recent years to systematically investigate the fundamental flight control and handling qualities issues associated with the characteristics of large rotorcraft, including tiltrotors, in hover and low-speed maneuvering.

  11. Thermo-Mechanical Analyses of Dynamically Loaded Rubber Cylinders

    NASA Technical Reports Server (NTRS)

    Johnson, Arthur R.; Chen, Tzi-Kang

    2002-01-01

    Thick rubber components are employed by the Army to carry large loads. In tanks, rubber covers road wheels and track systems to protect roadways. It is difficult for design engineers to simulate the details of the hysteretic heating for large strain viscoelastic deformations. In this study, an approximation to the viscoelastic energy dissipated per unit time is investigated for use in estimating mechanically induced viscoelastic heating. Coupled thermo-mechanical simulations of large cyclic deformations of rubber cylinders are presented. The cylinders are first compressed axially and then cyclically loaded about the compressed state. Details of the algorithm and some computational issues are discussed. The coupled analyses are conducted for tall and short rubber cylinders both with and without imbedded metal disks.

  12. An analytical method to simulate the H I 21-cm visibility signal for intensity mapping experiments

    NASA Astrophysics Data System (ADS)

    Sarkar, Anjan Kumar; Bharadwaj, Somnath; Marthi, Visweshwar Ram

    2018-01-01

    Simulations play a vital role in testing and validating H I 21-cm power spectrum estimation techniques. Conventional methods use techniques like N-body simulations to simulate the sky signal which is then passed through a model of the instrument. This makes it necessary to simulate the H I distribution in a large cosmological volume, and incorporate both the light-cone effect and the telescope's chromatic response. The computational requirements may be particularly large if one wishes to simulate many realizations of the signal. In this paper, we present an analytical method to simulate the H I visibility signal. This is particularly efficient if one wishes to simulate a large number of realizations of the signal. Our method is based on theoretical predictions of the visibility correlation which incorporate both the light-cone effect and the telescope's chromatic response. We have demonstrated this method by applying it to simulate the H I visibility signal for the upcoming Ooty Wide Field Array Phase I.

  13. Hybrid Solution-Adaptive Unstructured Cartesian Method for Large-Eddy Simulation of Detonation in Multi-Phase Turbulent Reactive Mixtures

    DTIC Science & Technology

    2012-03-27

    CCL Report TR-2012-03-03; grant number FA9550-… Indexed keyword fragments: pulse-detonation engines (PDE), stage separation, supersonic cavity oscillations, hypersonic aerodynamics, detonation-induced structural…

  14. The Parallel System for Integrating Impact Models and Sectors (pSIMS)

    NASA Technical Reports Server (NTRS)

    Elliott, Joshua; Kelly, David; Chryssanthacopoulos, James; Glotter, Michael; Jhunjhnuwala, Kanika; Best, Neil; Wilde, Michael; Foster, Ian

    2014-01-01

    We present a framework for massively parallel climate impact simulations: the parallel System for Integrating Impact Models and Sectors (pSIMS). This framework comprises a) tools for ingesting and converting large amounts of data to a versatile datatype based on a common geospatial grid; b) tools for translating this datatype into custom formats for site-based models; c) a scalable parallel framework for performing large ensemble simulations, using any one of a number of different impacts models, on clusters, supercomputers, distributed grids, or clouds; d) tools and data standards for reformatting outputs to common datatypes for analysis and visualization; and e) methodologies for aggregating these datatypes to arbitrary spatial scales such as administrative and environmental demarcations. By automating many time-consuming and error-prone aspects of large-scale climate impacts studies, pSIMS accelerates computational research, encourages model intercomparison, and enhances reproducibility of simulation results. We present the pSIMS design and use example assessments to demonstrate its multi-model, multi-scale, and multi-sector versatility.

  15. Hazard assessment of long-period ground motions for the Nankai Trough earthquakes

    NASA Astrophysics Data System (ADS)

    Maeda, T.; Morikawa, N.; Aoi, S.; Fujiwara, H.

    2013-12-01

    We evaluate the seismic hazard from long-period ground motions associated with Nankai Trough earthquakes (M8~9) in southwest Japan. Large interplate earthquakes occurring around the Nankai Trough have caused serious damage through strong ground motions and tsunami; the most recent events were in 1944 and 1946. Such large interplate earthquakes can also damage high-rise and large-scale structures through long-period ground motions (e.g., the 1985 Michoacan earthquake in Mexico and the 2003 Tokachi-oki earthquake in Japan), which are amplified particularly on basins. Because major cities along the Nankai Trough have developed on alluvial plains, it is important to evaluate long-period ground motions as well as strong motions and tsunami for the anticipated Nankai Trough earthquakes. The long-period ground motions are evaluated by the finite difference method (FDM) using 'characterized source models' and a 3-D underground structure model. A 'characterized source model' is a source model that includes the source parameters necessary for reproducing the strong ground motions; the parameters are determined based on a 'recipe' for predicting strong ground motion (Earthquake Research Committee (ERC), 2009). We construct various source models (~100 scenarios) covering a range of source parameters such as the source region, asperity configuration, and hypocenter location. Each source region is determined by 'the long-term evaluation of earthquakes in the Nankai Trough' published by the ERC. The asperity configuration and hypocenter location control the rupture directivity effects; these parameters are important because our preliminary simulations are strongly affected by rupture directivity. We simulate the seismic wave propagation with the GMS (Ground Motion Simulator) system, which is based on a 3-D FDM scheme using discontinuous grids (Aoi and Fujiwara, 1999).
The grid spacing in the shallow region is 200 m horizontally and 100 m vertically; in the deep region it is three times coarser. The total number of grid points is about three billion. The 3-D underground structure model used in the FD simulation is the Japan integrated velocity structure model (ERC, 2012). Given the lowest S-wave velocity and the grid spacing, our simulation is valid for periods longer than two seconds. However, because the characterized source models may not adequately represent short-period components, the reliable period range of the simulation must be interpreted with caution; we therefore consider periods longer than five seconds in the further analysis. We evaluate the long-period ground motions using velocity response spectra for the period range between five and 20 seconds. The preliminary simulation shows a large variation of the response spectra at a given site, which implies that the ground motion is very sensitive to the scenario; this variation must be studied to understand the seismic hazard. Our further study will obtain hazard curves for the Nankai Trough earthquakes (M8~9) by applying probabilistic seismic hazard analysis to the simulation results.
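
The velocity response spectrum used in the analysis can be sketched for a single ground-motion record: each spectral ordinate is the peak relative velocity of a damped single-degree-of-freedom oscillator driven by the record. Below is a minimal Newmark (average-acceleration) implementation with a synthetic input motion; the 5% damping ratio and the toy record are assumptions, since the study's actual damping and waveforms are not given here.

```python
import numpy as np

def velocity_response_spectrum(acc, dt, periods, zeta=0.05):
    """Peak relative velocity of damped SDOF oscillators driven by a
    ground-acceleration record (Newmark average-acceleration scheme)."""
    sv = np.empty(len(periods))
    for jj, T in enumerate(periods):
        w = 2.0 * np.pi / T
        u = v = 0.0
        a = -acc[0]                   # equilibrium at the first sample
        vmax = 0.0
        denom = 1.0 + zeta * w * dt + (w * dt) ** 2 / 4.0
        for ag in acc[1:]:
            rhs = (-ag - 2.0 * zeta * w * (v + 0.5 * dt * a)
                   - w * w * (u + dt * v + 0.25 * dt * dt * a))
            a1 = rhs / denom          # implicit solve for new acceleration
            u += dt * v + 0.25 * dt * dt * (a + a1)
            v += 0.5 * dt * (a + a1)
            a = a1
            vmax = max(vmax, abs(v))
        sv[jj] = vmax
    return sv

# Toy input motion: a decaying 10 s sine sampled at 0.05 s for 60 s.
dt = 0.05
t = np.arange(0.0, 60.0, dt)
acc = np.sin(2.0 * np.pi * t / 10.0) * np.exp(-t / 20.0)
periods = np.arange(5.0, 20.0 + 1e-9, 1.0)   # the study's 5-20 s band
spec = velocity_response_spectrum(acc, dt, periods)
```

As expected for a resonant input, the spectrum peaks near the 10 s forcing period; scenario-to-scenario variation of such spectra is what the hazard analysis quantifies.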

  16. Incorporation of Carrier Phase Global Positioning System Measurements into the Navigation Reference System for Improved Performance

    DTIC Science & Technology

    1993-12-01

    Indexed table-of-contents fragments: 5.6.1 Large Cycle Slip Simulation; 5.6.2 Small Cycle Slip Simulation; Appendix J, Small Cycle Slip Simulation Results. Abstract snippet: …when subjected to large and small cycle slips. Results of the simulations indicate that the PNRS can provide an improved navigation solution over…

  17. Simulation of long-term landscape-level fuel treatment effects on large wildfires

    Treesearch

    Mark A. Finney; Rob C. Seli; Charles W. McHugh; Alan A. Ager; Bernhard Bahro; James K. Agee

    2008-01-01

    A simulation system was developed to explore how fuel treatments placed in topologically random and optimal spatial patterns affect the growth and behaviour of large fires when implemented at different rates over the course of five decades. The system consisted of a forest and fuel dynamics simulation module (Forest Vegetation Simulator, FVS), logic for deriving fuel...

  18. A Study of Airline Passenger Susceptibility to Atmospheric Turbulence Hazard

    NASA Technical Reports Server (NTRS)

    Stewart, Eric C.

    2000-01-01

    A simple, generic simulation math model of a commercial airliner has been developed to study the susceptibility of unrestrained passengers to large, discrete gust encounters. The math model simulates the longitudinal response to vertical gusts and includes (1) the motion of an unrestrained passenger in the rear cabin, (2) fuselage flexibility, (3) the lag in the downwash from the wing to the tail, and (4) unsteady lift effects. Airplane and passenger response contours are calculated for a matrix of gust amplitudes and gust lengths of a simulated mountain rotor. A comparison of the model-predicted responses to data from three accidents indicates that the accelerations in actual accidents are sometimes much larger than in the simulated gust encounters.

  19. Exploring Binding Properties of Agonists Interacting with a δ-Opioid Receptor

    PubMed Central

    Collu, Francesca; Ceccarelli, Matteo; Ruggerone, Paolo

    2012-01-01

    Ligand-receptor interactions are at the basis of the mediation of our physiological responses to a large variety of ligands, such as hormones, neurotransmitters and environmental stimulants, and their tuning represents the goal of a large variety of therapies. Several molecular details of these interactions are still largely unknown. In an effort to shed some light on this important issue, we performed a computational study on the interaction of two related compounds differing by a single methyl group (clozapine and desmethylclozapine) with a δ-opioid receptor. According to experiments, desmethylclozapine is more active than clozapine, providing a system well suited for a comparative study. We investigated stable configurations of the two drugs inside the receptor by simulating their escape routes with molecular dynamics simulations. Our results point out that the action of the compounds might be related to the spatial and temporal distribution of the affinity sites they visit during their residence in the receptor. Moreover, no particularly pronounced structural perturbations of the receptor were detected during the simulations, reinforcing the idea of a strongly dynamical character of the interaction process, with the solvent also playing an important role. PMID:23300729

  20. Challenges in first-principles NPT molecular dynamics of soft porous crystals: A case study on MIL-53(Ga)

    NASA Astrophysics Data System (ADS)

    Haigis, Volker; Belkhodja, Yacine; Coudert, François-Xavier; Vuilleumier, Rodolphe; Boutin, Anne

    2014-08-01

    Soft porous crystals present a challenge to molecular dynamics simulations with flexible size and shape of the simulation cell (i.e., in the NPT ensemble), since their framework responds very sensitively to small external stimuli. Hence, all interactions have to be described very accurately in order to obtain correct equilibrium structures. Here, we report a methodological study on the nanoporous metal-organic framework MIL-53(Ga), which undergoes a large-amplitude transition between a narrow- and a large-pore phase upon a change in temperature. Since this system has not been investigated by density functional theory (DFT)-based NPT simulations so far, we carefully check the convergence of the stress tensor with respect to computational parameters. Furthermore, we demonstrate the importance of dispersion interactions and test two different ways of incorporating them into the DFT framework. As a result, we propose two computational schemes which describe accurately the narrow- and the large-pore phase of the material, respectively. These schemes can be used in future work on the delicate interplay between adsorption in the nanopores and structural flexibility of the host material.

  1. Density-functional theory simulation of large quantum dots

    NASA Astrophysics Data System (ADS)

    Jiang, Hong; Baranger, Harold U.; Yang, Weitao

    2003-10-01

    Kohn-Sham spin-density functional theory provides an efficient and accurate model to study electron-electron interaction effects in quantum dots, but its application to large systems is a challenge. Here an efficient method for the simulation of quantum dots using density-functional theory is developed; it includes the particle-in-the-box representation of the Kohn-Sham orbitals, an efficient conjugate-gradient method to directly minimize the total energy, a Fourier convolution approach for the calculation of the Hartree potential, and a simplified multigrid technique to accelerate the convergence. We test the methodology in a two-dimensional model system and show that numerical studies of large quantum dots with several hundred electrons become computationally affordable. In the noninteracting limit, the classical dynamics of the system we study can be continuously varied from integrable to fully chaotic. The qualitative difference in the noninteracting classical dynamics has an effect on the quantum properties of the interacting system: integrable classical dynamics leads to higher-spin states and a broader distribution of spacing between Coulomb blockade peaks.
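
The Fourier-convolution evaluation of the Hartree potential can be sketched in two dimensions: the potential is the convolution of the charge density with a Coulomb kernel, which an FFT turns into a pointwise product. This is a minimal periodic-grid version with a softened 1/r kernel; the softening choice and grid are assumptions, and production codes treat the Coulomb singularity and periodic images more carefully.

```python
import numpy as np

def hartree_potential_2d(density, dx, soft=None):
    """Hartree potential of a 2-D charge density by FFT convolution with
    a softened 1/r Coulomb kernel on a periodic grid (sketch)."""
    if soft is None:
        soft = 0.5 * dx               # softening on the order of the grid step
    n = density.shape[0]
    x = (np.arange(n) - n // 2) * dx
    X, Y = np.meshgrid(x, x, indexing="ij")
    kernel = 1.0 / np.sqrt(X**2 + Y**2 + soft**2)
    kernel = np.fft.ifftshift(kernel)  # place r = 0 at index (0, 0)
    vh = np.fft.ifft2(np.fft.fft2(density) * np.fft.fft2(kernel)).real
    return vh * dx * dx                # quadrature area element

# Toy Gaussian density on a 64x64 grid.
n = 64
dx = 0.25
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
rho = np.exp(-(X**2 + Y**2))
vh = hartree_potential_2d(rho, dx)
```

The FFT route costs O(N log N) per evaluation instead of the O(N^2) of a direct double sum, which is what makes dots with hundreds of electrons affordable.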

  2. Near Stall Flow Analysis in the Transonic Fan of the RTA Propulsion System

    NASA Technical Reports Server (NTRS)

    Hah, Chunill

    2010-01-01

    Turbine-based propulsion systems for access to space have been investigated at NASA Glenn Research Center. A ground demonstrator engine for validation testing has been developed as a part of the program. The demonstrator, the Revolutionary Turbine Accelerator (RTA-1), is a variable cycle turbofan ramjet designed to transition from an augmented turbofan to a ramjet that produces the thrust required to accelerate the vehicle to Mach 4. The RTA-1 is designed to accommodate a large variation in bypass ratio from sea level static to the Mach 4 flight condition. A key component of this engine is a new fan stage that accommodates these large variations in bypass ratio and flow range. In the present study, unsteady flow behavior in the fan of the RTA-1 is studied in detail with large eddy simulation (LES) and the numerical results are compared with measured data. During the experimental study of the fan stage, a humming sound was detected at 100% speed near stall operation. The main purpose of the study is to investigate details of the unsteady flow behavior at near-stall operation and to identify a possible cause of the hum. The large eddy simulation reproduces the main features of the measured flow very well. The LES indicates that a non-synchronous flow instability develops as the fan operates toward the stall limit. FFT analysis of the calculated wall pressure shows that the rotating flow instability has a characteristic frequency that is about 50% of the blade passing frequency.

  3. Estimation of Graded Response Model Parameters Using MULTILOG.

    ERIC Educational Resources Information Center

    Baker, Frank B.

    1997-01-01

    Describes an idiosyncracy of the MULTILOG (D. Thissen, 1991) parameter estimation process discovered during a simulation study involving the graded response model. A misordering reflected in boundary function location parameter estimates resulted in a large negative contribution to the true score followed by a large positive contribution. These…

  4. Simulated Response of Mercury and Nitrogen to Land Management and Land Use Change in a Large River Basin

    EPA Science Inventory

    Increases in nitrogen cascading from headwater systems to coastal waterways and bioaccumulation of mercury in aquatic ecosystems have become primary environmental concerns in recent decades. Studies assessing the effects of land use or climate change on water quality in large ri...

  5. Applications of large-eddy simulation: Synthesis of neutral boundary layer models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ohmstede, W.D.

    The object of this report is to describe progress made toward the application of large-eddy simulation (LES), in particular to the study of the neutral boundary layer (NBL). The broad purpose of the study is to provide support to the LES project currently underway at LLNL. The specific purpose of this study is to lay the groundwork for the simulation of the SBL through the establishment and implementation of model criteria for the simulation of the NBL. The idealized NBL is never observed in the atmosphere and therefore has little practical significance; however, it is of considerable theoretical interest for several reasons. The report discusses the concept of Rossby-number similarity theory as it applies to the NBL, and a particular implementation of the concept is described. The results from prior simulations of the NBL are then summarized. Model design criteria for two versions of the Brost LES (BLES) model are discussed. The general guideline for the development of Version 1 of the Brost model (BV1) was to implement the model with a minimum of modifications that would alter the design criteria as established by Brost. Two major modifications of BLES incorporated into BV1 pertain to the initialization/parameterization of the model and the generalization of the boundary conditions at the air/earth interface. 18 refs., 4 figs.

  6. Evolution of Precipitation Extremes in Three Large Ensembles of Climate Simulations - Impact of Spatial and Temporal Resolutions

    NASA Astrophysics Data System (ADS)

    Martel, J. L.; Brissette, F.; Mailhot, A.; Wood, R. R.; Ludwig, R.; Frigon, A.; Leduc, M.; Turcotte, R.

    2017-12-01

    Recent studies indicate that the frequency and intensity of extreme precipitation will increase in future climate due to global warming. In this study, we compare annual maxima precipitation series from three large ensembles of climate simulations at various spatial and temporal resolutions. The first two are at the global scale: the Canadian Earth System Model (CanESM2) 50-member large ensemble (CanESM2-LE) at a 2.8° resolution and the Community Earth System Model (CESM1) 40-member large ensemble (CESM1-LE) at a 1° resolution. The third ensemble is at the regional scale over both Eastern North America and Europe: the Canadian Regional Climate Model (CRCM5) 50-member large ensemble (CRCM5-LE) at a 0.11° resolution, driven at its boundaries by the CanESM2-LE. The CRCM5-LE is a new ensemble issued from the ClimEx project (http://www.climex-project.org), a Québec-Bavaria collaboration. Using these three large ensembles, changes in extreme precipitation between the historical (1980-2010) and future (2070-2100) periods are investigated. This results in 1500 (30 years x 50 members for CanESM2-LE and CRCM5-LE) and 1200 (30 years x 40 members for CESM1-LE) simulated years over each of the historical and future periods. Using these large datasets, the empirical daily (and sub-daily for CRCM5-LE) extreme precipitation quantiles for large return periods ranging from 2 to 100 years are computed. Results indicate that daily extreme precipitation will generally increase over most land grid points of both domains according to all three large ensembles. In the CRCM5-LE, the increase in sub-daily extreme precipitation is even larger than that in daily extreme precipitation. Considering that many public infrastructures have lifespans exceeding 75 years, the increase in extremes has important implications for the service levels of water infrastructure and for public safety.
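
The empirical return-level computation can be sketched as follows: pooling members and years gives 1500 annual maxima, and the T-year return level is the (1 - 1/T) empirical quantile of that pooled sample. The Gumbel-distributed toy data below are illustrative only, not the ensembles' actual values.

```python
import numpy as np

def empirical_return_level(annual_maxima, return_period):
    """Empirical T-year return level: the (1 - 1/T) quantile of a pooled
    sample of annual maxima."""
    p = 1.0 - 1.0 / return_period
    return np.quantile(np.asarray(annual_maxima), p)

# 1500 pooled simulated years (e.g., 30 years x 50 members), toy values.
rng = np.random.default_rng(2)
maxima = rng.gumbel(loc=50.0, scale=10.0, size=1500)   # mm/day, hypothetical
r100 = empirical_return_level(maxima, 100.0)           # 100-yr return level
```

With 1500 pooled years, even the 100-year level sits well inside the sample, which is why large ensembles allow empirical (rather than fitted) estimates at long return periods.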

  7. Determining the Influence of Granule Size on Simulation Parameters and Residual Shear Stress Distribution in Tablets by Combining the Finite Element Method into the Design of Experiments.

    PubMed

    Hayashi, Yoshihiro; Kosugi, Atsushi; Miura, Takahiro; Takayama, Kozo; Onuki, Yoshinori

    2018-01-01

    The influence of granule size on simulation parameters and residual shear stress in tablets was determined by combining the finite element method (FEM) with the design of experiments (DoE). Lactose granules were prepared using a wet granulation method with a high-shear mixer and sorted into small and large granules using sieves. To simulate the tableting process using the FEM, the parameters representing each granule were optimized using a DoE and a response surface method (RSM). The compaction behavior of each granule simulated by FEM was in reasonable agreement with the experimental findings. Higher coefficients of friction between powder and die/punch (μ) and lower internal friction angles (α_y) were obtained for small granules. RSM revealed that the die wall force was affected by α_y, whereas the pressure transmissibility rate between punches was affected not only by α_y but also by μ. The FEM revealed that the residual shear stress was greater for small granules than for large granules. These results suggest that the inner structure of a tablet comprising small granules was less homogeneous than that comprising large granules. To evaluate the contribution of the simulation parameters to residual stress, these parameters were assigned to a fractional factorial design and an ANOVA was applied. The result indicated that μ was the critical factor influencing residual shear stress. This study demonstrates the importance of combining simulation and statistical analysis to gain a deeper understanding of the tableting process.

  8. Large-scale lattice-Boltzmann simulations over lambda networks

    NASA Astrophysics Data System (ADS)

    Saksena, R.; Coveney, P. V.; Pinning, R.; Booth, S.

    Amphiphilic molecules are of immense industrial importance, mainly due to their tendency to align at interfaces in a solution of immiscible species, e.g., oil and water, thereby reducing surface tension. Depending on the concentration of amphiphiles in the solution, they may assemble into a variety of morphologies, such as lamellae, micelles, sponge and cubic bicontinuous structures exhibiting non-trivial rheological properties. The main objective of this work is to study the rheological properties of very large, defect-containing gyroidal systems (of up to 1024^3 lattice sites) using the lattice-Boltzmann method. Memory requirements for the simulation of such large lattices exceed that available to us on most supercomputers, and so we use MPICH-G2/MPIg to investigate geographically distributed domain decomposition simulations across HPCx in the UK and TeraGrid in the US. Use of MPICH-G2/MPIg requires the port-forwarder to work with the grid middleware on HPCx. Data from the simulations is streamed to a high-performance visualisation resource at UCL (London) for rendering and visualisation. Presented at "Lighting the Blue Touchpaper for UK e-Science", the closing conference of the ESLEA Project, March 26-28 2007, The George Hotel, Edinburgh, UK.

  9. Global Simulations of Dynamo and Magnetorotational Instability in Madison Plasma Experiments and Astrophysical Disks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ebrahimi, Fatima

    2014-07-31

    Large-scale magnetic fields have been observed in widely different types of astrophysical objects. These magnetic fields are believed to be caused by the so-called dynamo effect. Could a large-scale magnetic field grow out of turbulence (i.e., the alpha dynamo effect)? How could the topological properties and complexity of the magnetic field as a global quantity, the so-called magnetic helicity, be important in the dynamo effect? In addition to understanding the dynamo mechanism in astrophysical accretion disks, anomalous angular momentum transport has also been a longstanding problem in accretion disks and laboratory plasmas. To investigate both dynamo and momentum transport, we have performed both numerical modeling of laboratory experiments that are intended to simulate nature and modeling of configurations with direct relevance to astrophysical disks. Our simulations use fluid approximations (the magnetohydrodynamics - MHD - model), where plasma is treated as a single fluid, or two fluids, in the presence of electromagnetic forces. Our major physics objective is to study the possibility of magnetic field generation (the so-called MRI small-scale and large-scale dynamos) and its role in magnetorotational instability (MRI) saturation through nonlinear simulations in both MHD and Hall regimes.

  10. Gyrokinetic GDC turbulence simulations: confirming a new instability regime in LAPD plasmas

    NASA Astrophysics Data System (ADS)

    Pueschel, M. J.; Rossi, G.; Told, D.; Terry, P. W.; Jenko, F.; Carter, T. A.

    2016-10-01

    Recent high-beta experiments at the LArge Plasma Device have found significant parallel magnetic fluctuations in the region of large pressure gradients. Linear gyrokinetic simulations show the dominant instability at these radii to be the gradient-driven drift coupling (GDC) mode, a non-textbook mode driven by pressure gradients and destabilized by the coupling of E×B and grad-B∥ drifts. Unlike in previous studies, the large parallel extent of the device allows for finite-kz versions of this instability in addition to kz = 0 . The locations of maximum linear growth match very well with experimentally observed peaks of B∥ fluctuations. Local nonlinear simulations reproduce many features of the observations fairly well, with the exception of B⊥ fluctuations, for which experimental profiles suggest a source unrelated to pressure gradients. In toto, the results presented here show that turbulence and transport in these experiments are driven by the GDC instability, that important characteristics of the linear instability carry over to nonlinear simulations, and - in the context of validation - that the gyrokinetic framework performs surprisingly well far outside its typical area of application, increasing confidence in its predictive abilities. Supported by U.S. DOE.

  11. Impacts of spatial resolution and representation of flow connectivity on large-scale simulation of floods

    NASA Astrophysics Data System (ADS)

    Mateo, Cherry May R.; Yamazaki, Dai; Kim, Hyungjun; Champathong, Adisorn; Vaze, Jai; Oki, Taikan

    2017-10-01

    Global-scale river models (GRMs) are core tools for providing consistent estimates of global flood hazard, especially in data-scarce regions. Due to former limitations in computational power and input datasets, most GRMs have been developed to use simplified representations of flow physics and run at coarse spatial resolutions. With increasing computational power and improved datasets, the application of GRMs to finer resolutions is becoming a reality. To support development in this direction, the suitability of GRMs for application to finer resolutions needs to be assessed. This study investigates the impacts of spatial resolution and flow connectivity representation on the predictive capability of a GRM, CaMa-Flood, in simulating the 2011 extreme flood in Thailand. Analyses show that when single downstream connectivity (SDC) is assumed, simulation results deteriorate with finer spatial resolution; Nash-Sutcliffe efficiency coefficients decreased by more than 50 % between simulation results at 10 km resolution and 1 km resolution. When multiple downstream connectivity (MDC) is represented, simulation results slightly improve with finer spatial resolution. The SDC simulations result in excessive backflows on very flat floodplains due to the restrictive flow directions at finer resolutions. MDC channels attenuated these effects by maintaining flow connectivity and flow capacity between floodplains in varying spatial resolutions. While a regional-scale flood was chosen as a test case, these findings should be universal and may have significant impacts on large- to global-scale simulations, especially in regions where mega deltas exist. These results demonstrate that a GRM can be used for higher resolution simulations of large-scale floods, provided that MDC in rivers and floodplains is adequately represented in the model structure.
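
    The Nash-Sutcliffe efficiency coefficient quoted above has a standard definition; a minimal sketch (the `nash_sutcliffe` helper is a hypothetical illustration, not code from CaMa-Flood):

```python
def nash_sutcliffe(sim, obs):
    """Nash-Sutcliffe efficiency: 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    1.0 is a perfect fit; 0.0 means the simulation is no better than
    always predicting the observed mean."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

obs = [2.0, 4.0, 6.0, 8.0]
assert nash_sutcliffe(obs, obs) == 1.0        # perfect simulation
assert nash_sutcliffe([5.0] * 4, obs) == 0.0  # constant at the observed mean
```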

  12. Frictional behavior of large displacement experimental faults

    USGS Publications Warehouse

    Beeler, N.M.; Tullis, T.E.; Blanpied, M.L.; Weeks, J.D.

    1996-01-01

    The coefficient of friction and velocity dependence of friction of initially bare surfaces and 1-mm-thick simulated fault gouges were measured over large displacements (up to 400 mm) at 25°C and 25 MPa normal stress. Steady state negative friction velocity dependence and a steady state fault zone microstructure are achieved after ≈18 mm displacement, and an approximately constant strength is reached after a few tens of millimeters of sliding on initially bare surfaces. Simulated fault gouges show a large but systematic variation of friction, velocity dependence of friction, dilatancy, and degree of localization with displacement. At short displacement (<10 mm), simulated gouge is strong, velocity strengthening and changes in sliding velocity are accompanied by relatively large changes in dilatancy rate. With continued displacement, simulated gouges become progressively weaker and less velocity strengthening, the velocity dependence of dilatancy rate decreases, and deformation becomes localized into a narrow basal shear which at its most localized is observed to be velocity weakening. With subsequent displacement, the fault restrengthens, returns to velocity strengthening, or to velocity neutral, the velocity dependence of dilatancy rate becomes larger, and deformation becomes distributed. Correlation of friction, velocity dependence of friction and of dilatancy rate, and degree of localization at all displacements in simulated gouge suggest that all quantities are interrelated. The observations do not distinguish the independent variables but suggest that the degree of localization is controlled by the fault strength, not by the friction velocity dependence. The friction velocity dependence and velocity dependence of dilatancy rate can be used as qualitative measures of the degree of localization in simulated gouge, in agreement with previous studies. 
Theory equating the friction velocity dependence of simulated gouge to the sum of the friction velocity dependence of bare surfaces and the velocity dependence of dilatancy rate of simulated gouge fails to quantitatively account for the experimental observations.

  13. A numerical study of the effects of a large sandbar upon sea breeze development

    NASA Technical Reports Server (NTRS)

    Kessler, R. C.; Pielke, R. A.; Mcqueen, J.; Eppel, D.

    1985-01-01

    Two-dimensional numerical simulations of sea breeze development over a large sandbar on the North Sea coast of Germany are reported. The numerical model used in these experiments contains a detailed treatment of soil moisture, which allows evaluation of the effects of differential surface characteristics on the airflow pattern. Results of the simulations indicate that the contrast between the moist sandbar and adjacent dry land, the tidal inundation of the sandbar, and the westward penetration of the Baltic sea breeze play important roles in the development of mesoscale airflow patterns in the sandbar region.

  14. Multiblock High Order Large Eddy Simulation of Powered Fontan Hemodynamics: Towards Computational Surgery

    PubMed Central

    Delorme, Yann T.; Rodefeld, Mark D.; Frankel, Steven H.

    2016-01-01

    Children born with only one functional ventricle must typically undergo a series of three surgeries to obtain the so-called Fontan circulation in which the blood coming from the body passively flows from the Vena Cavae (VCs) to the Pulmonary Arteries (PAs) through the Total Cavopulmonary Connection (TCPC). The circulation is inherently inefficient due to the lack of a subpulmonary ventricle. Survivors face the risk of circulatory sequelae and eventual failure for the duration of their lives. Current efforts are focused on improving the outcomes of Fontan palliation, either passively by optimizing the TCPC, or actively by using mechanical support. We are working on a chronic implant that would be placed at the junction of the TCPC, and would provide the necessary pressure augmentation to re-establish a circulation that recapitulates a normal two-ventricle circulation. This implant is based on the Von Karman viscous pump and consists of a vaned impeller that rotates inside the TCPC. To evaluate the performance of such a device, and to study the flow features induced by the presence of the pump, Computational Fluid Dynamics (CFD) is used. CFD has become an important tool to understand hemodynamics owing to the possibility of quickly simulating a large number of designs and flow conditions without any harm to patients. The transitional and unsteady nature of the flow can make accurate simulations challenging. We developed an in-house high-order Large Eddy Simulation (LES) solver coupled to a recent Immersed Boundary Method (IBM) to handle complex geometries. Multiblock capability is added to the solver to allow for efficient simulations of complex patient specific geometries. Blood simulations are performed in a complex patient specific TCPC geometry. In this study, simulations without mechanical assist are performed, as well as after virtual implantation of the temporary and chronic implants being developed. 
Instantaneous flow structures, hepatic factor distribution, and statistical data are presented for all three cases. PMID:28649147

  15. Large Eddy Simulation of Transitional Flow in an Idealized Stenotic Blood Vessel: Evaluation of Subgrid Scale Models

    PubMed Central

    Pal, Abhro; Anupindi, Kameswararao; Delorme, Yann; Ghaisas, Niranjan; Shetty, Dinesh A.; Frankel, Steven H.

    2014-01-01

    In the present study, we performed large eddy simulation (LES) of axisymmetric and eccentric 75% stenosed arterial models with steady inflow conditions at a Reynolds number of 1000. The results obtained are compared with the direct numerical simulation (DNS) data (Varghese et al., 2007, “Direct Numerical Simulation of Stenotic Flows. Part 1. Steady Flow,” J. Fluid Mech., 582, pp. 253–280). An in-house code (WenoHemo) employing high-order numerical methods for spatial and temporal terms, along with a second-order accurate ghost point immersed boundary method (IBM) (Mark, and Vanwachem, 2008, “Derivation and Validation of a Novel Implicit Second-Order Accurate Immersed Boundary Method,” J. Comput. Phys., 227(13), pp. 6660–6680) for enforcing boundary conditions on curved geometries is used for simulations. Three subgrid scale (SGS) models, namely, the classical Smagorinsky model (Smagorinsky, 1963, “General Circulation Experiments With the Primitive Equations,” Mon. Weather Rev., 91(10), pp. 99–164), recently developed Vreman model (Vreman, 2004, “An Eddy-Viscosity Subgrid-Scale Model for Turbulent Shear Flow: Algebraic Theory and Applications,” Phys. Fluids, 16(10), pp. 3670–3681), and the Sigma model (Nicoud et al., 2011, “Using Singular Values to Build a Subgrid-Scale Model for Large Eddy Simulations,” Phys. Fluids, 23(8), 085106) are evaluated in the present study. Evaluation of SGS models suggests that the classical constant coefficient Smagorinsky model gives best agreement with the DNS data, whereas the Vreman and Sigma models predict an early transition to turbulence in the poststenotic region. Supplementary simulations are performed using Open source field operation and manipulation (OpenFOAM) (“OpenFOAM,” http://www.openfoam.org/) solver and the results are in line with those obtained with WenoHemo. PMID:24801556
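
    The classical Smagorinsky model evaluated above computes an eddy viscosity from the resolved strain rate, commonly written ν_t = (C_s Δ)² |S| with |S| = √(2 S_ij S_ij). A hedged sketch of that formula (the helper and the value C_s = 0.17 are illustrative assumptions, not taken from WenoHemo):

```python
import math

def smagorinsky_nu_t(strain, delta, cs=0.17):
    """Smagorinsky eddy viscosity nu_t = (cs * delta)**2 * |S|, where
    `strain` is a 3x3 resolved strain-rate tensor and |S| = sqrt(2 S_ij S_ij).
    cs = 0.17 is a commonly quoted constant; studies tune it per flow."""
    s_mag = math.sqrt(2.0 * sum(strain[i][j] ** 2
                                for i in range(3) for j in range(3)))
    return (cs * delta) ** 2 * s_mag

# Pure shear with S_12 = S_21 = 0.5 (i.e. du/dy = 1): |S| = sqrt(2 * 0.5) = 1
shear = [[0.0, 0.5, 0.0], [0.5, 0.0, 0.0], [0.0, 0.0, 0.0]]
nu_t = smagorinsky_nu_t(shear, delta=0.01)  # = (0.17 * 0.01)**2
```

    Dynamic variants replace the constant cs with a locally computed coefficient, which is one source of the model-to-model differences the study reports.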

  16. Numerical simulation of seismic wave propagation from land-excited large volume air-gun source

    NASA Astrophysics Data System (ADS)

    Cao, W.; Zhang, W.

    2017-12-01

    The land-excited large volume air-gun source can be used to study regional underground structures and to detect temporal velocity changes. The air-gun source is characterized by rich low frequency energy (from bubble oscillation, 2-8 Hz) and high repeatability. It can be excited in rivers, reservoirs or man-made pools. Numerical simulation of the seismic wave propagation from the air-gun source helps to understand the energy partitioning and characteristics of the waveform records at stations. However, the effective energy recorded at a distant station is from the process of bubble oscillation, which cannot be approximated by a single point source. We propose a method to simulate the seismic wave propagation from the land-excited large volume air-gun source by the finite difference method. The process can be divided into three parts: bubble oscillation and source coupling, solid-fluid coupling and the propagation in the solid medium. For the first part, the wavelet of the bubble oscillation can be simulated by a bubble model. We use a wave injection method combining the bubble wavelet with the elastic wave equation to achieve the source coupling. Then, the solid-fluid boundary condition is implemented along the water bottom. And the last part is the seismic wave propagation in the solid medium, which can be readily implemented by the finite difference method. Our method can produce accurate waveforms for the land-excited large volume air-gun source. Based on the above forward modeling technology, we analyze the effect of the excited P wave and the energy of the converted S wave due to different water body shapes. We study two land-excited large volume air-gun fields, one is Binchuan in Yunnan, and the other is Hutubi in Xinjiang. The station in Binchuan, Yunnan is located in a large irregular reservoir, and the waveform records have a clear S wave. Nevertheless, the station in Hutubi, Xinjiang is located in a small man-made pool, and the waveform records have a very weak S wave. 
A better understanding of the characteristics of the land-excited large volume air-gun can help make better use of the air-gun source.
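
    The propagation step the abstract describes rests on standard finite differences for the wave equation. A toy one-dimensional leapfrog sketch with a source injected at the middle grid point (all names hypothetical; the authors' code is 3-D, elastic, and includes solid-fluid coupling):

```python
def fd_wave_1d(nx, nt, c, dx, dt, src):
    """Leapfrog finite differences for the 1-D scalar wave equation
    u_tt = c^2 u_xx on a fixed-boundary grid, with a source time
    function injected at the middle point (a toy stand-in for
    coupling a bubble wavelet into the medium)."""
    r2 = (c * dt / dx) ** 2  # squared Courant number; must be <= 1 for stability
    u_prev = [0.0] * nx
    u = [0.0] * nx
    for it in range(nt):
        u_next = [0.0] * nx
        for i in range(1, nx - 1):
            u_next[i] = (2 * u[i] - u_prev[i]
                         + r2 * (u[i + 1] - 2 * u[i] + u[i - 1]))
        u_next[nx // 2] += src(it * dt) * dt ** 2  # source injection
        u_prev, u = u, u_next
    return u

# Stable run with Courant number c*dt/dx = 0.5 and a short boxcar source
wave = fd_wave_1d(nx=201, nt=200, c=1.0, dx=1.0, dt=0.5,
                  src=lambda t: 1.0 if t < 5.0 else 0.0)
```

    With a centered source and symmetric grid the field stays mirror-symmetric about the middle point, a quick sanity check on the stencil.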

  17. Modelling hydrologic and hydrodynamic processes in basins with large semi-arid wetlands

    NASA Astrophysics Data System (ADS)

    Fleischmann, Ayan; Siqueira, Vinícius; Paris, Adrien; Collischonn, Walter; Paiva, Rodrigo; Pontes, Paulo; Crétaux, Jean-François; Bergé-Nguyen, Muriel; Biancamaria, Sylvain; Gosset, Marielle; Calmant, Stephane; Tanimoun, Bachir

    2018-06-01

    Hydrological and hydrodynamic models are core tools for simulation of large basins and complex river systems associated with wetlands. Recent studies have pointed towards the importance of online coupling strategies, representing feedbacks between floodplain inundation and vertical hydrology. Especially across semi-arid regions, soil-floodplain interactions can be strong. In this study, we included a two-way coupling scheme in a large scale hydrological-hydrodynamic model (MGB) and tested different model structures, in order to assess which processes are important to be simulated in large semi-arid wetlands and how these processes interact with water budget components. To demonstrate the benefits of this coupling in a validation case, the model was applied to the Upper Niger River basin encompassing the Niger Inner Delta, a vast semi-arid wetland in the Sahel Desert. Simulation was carried out from 1999 to 2014 with daily TMPA 3B42 precipitation as forcing, using both in-situ and remotely sensed data for calibration and validation. Model outputs were in good agreement with discharge and water levels at stations both upstream and downstream of the Inner Delta (Nash-Sutcliffe Efficiency (NSE) >0.6 for most gauges), as well as for flooded areas within the Delta region (NSE = 0.6; r = 0.85). Model estimates of annual water losses across the Delta varied between 20.1 and 30.6 km3/yr, while annual evapotranspiration ranged between 760 mm/yr and 1130 mm/yr. Evaluation of model structure indicated that representation of both floodplain channels hydrodynamics (storage, bifurcations, lateral connections) and vertical hydrological processes (floodplain water infiltration into soil column; evapotranspiration from soil and vegetation and evaporation of open water) are necessary to correctly simulate flood wave attenuation and evapotranspiration along the basin. Two-way coupled models are necessary to better understand processes in large semi-arid wetlands. 
Finally, such coupled hydrologic and hydrodynamic modelling proves to be an important tool for integrated evaluation of hydrological processes in such poorly gauged, large scale basins. We hope that this model application provides new ways forward for large scale model development in such systems, involving semi-arid regions and complex floodplains.

  18. Numerical dissipation vs. subgrid-scale modelling for large eddy simulation

    NASA Astrophysics Data System (ADS)

    Dairay, Thibault; Lamballais, Eric; Laizet, Sylvain; Vassilicos, John Christos

    2017-05-01

    This study presents an alternative way to perform large eddy simulation based on a targeted numerical dissipation introduced by the discretization of the viscous term. It is shown that this regularisation technique is equivalent to the use of spectral vanishing viscosity. The flexibility of the method ensures high-order accuracy while controlling the level and spectral features of this purely numerical viscosity. A Pao-like spectral closure based on physical arguments is used to scale this numerical viscosity a priori. It is shown that this way of approaching large eddy simulation is more efficient and accurate than the use of the very popular Smagorinsky model in its standard as well as its dynamic version. The main strength of being able to correctly calibrate numerical dissipation is the possibility to regularise the solution at the mesh scale. Thanks to this property, it is shown that the solution can be seen as numerically converged. Conversely, the two versions of the Smagorinsky model are found unable to ensure regularisation while showing a strong sensitivity to numerical errors. The originality of the present approach is that it can be viewed as implicit large eddy simulation, in the sense that the numerical error is the source of artificial dissipation, but also as explicit subgrid-scale modelling, because of the equivalence with spectral viscosity prescribed on a physical basis.

  19. Self-Pinched Transport Theory for the SABRE Ion Diode

    NASA Astrophysics Data System (ADS)

    Welch, Dale R.; Olson, Craig L.; Hanson, David L.

    1997-05-01

    In anticipation of a 90 kA 4 MV SABRE ion diode experiment, we have been examining self-pinch transport of ions for application to ion-driven inertial confinement fusion. The Li³⁺ beam will exit the diode with a 30-40 mrad divergence and a shallow focusing angle of 75 mrad. The beam is annular with a 4.6-cm inner radius and a 6.8-cm outer radius. Self-pinch theory and simulation predict that large residual currents are possible in 2-20 mtorr argon gas. The simulations suggest that ≈ 50 kA of Li particle current is necessary to contain the beam's transverse momentum. Some non-ideal effects include large beam divergence, large focusing angle and beam annularity. To address these problems, we have been studying the benefits of beam conditioning in the focus region between the diode and the self-pinch region after the beam has reached a small radius. We have found some benefit from including a passive conical structure and a low-pressure gas. A significant lens effect can be attained using only the beam fields in vacuum or a low pressure gas. In this configuration, a large focusing force, that keeps the ions off an inner cone and outer wall as the beam converges, has been calculated using the numerical simulation code IPROP. Results from integrated simulation of the conditioning cell and self-pinch region look encouraging.

  20. Idealized modeling of convective organization with changing sea surface temperatures using multiple equilibria in weak temperature gradient simulations

    NASA Astrophysics Data System (ADS)

    Sentić, Stipo; Sessions, Sharon L.

    2017-06-01

    The weak temperature gradient (WTG) approximation is a method of parameterizing the influences of the large scale on local convection in limited domain simulations. WTG simulations exhibit multiple equilibria in precipitation; depending on the initial moisture content, simulations can precipitate or remain dry for otherwise identical boundary conditions. We use a hypothesized analogy between multiple equilibria in precipitation in WTG simulations, and dry and moist regions of organized convection to study tropical convective organization. We find that the range of wind speeds that support multiple equilibria depends on sea surface temperature (SST). Compared to the present SST, low SSTs support a narrower range of multiple equilibria at higher wind speeds. In contrast, high SSTs exhibit a narrower range of multiple equilibria at low wind speeds. This suggests that at high SSTs, organized convection might occur with lower surface forcing. To characterize convection at different SSTs, we analyze the change in relationships between precipitation rate, atmospheric stability, moisture content, and the large-scale transport of moist entropy and moisture with increasing SSTs. We find an increase in large-scale export of moisture and moist entropy from dry simulations with increasing SST, which is consistent with a strengthening of the up-gradient transport of moisture from dry regions to moist regions in organized convection. Furthermore, the changes in diagnostic relationships with SST are consistent with more intense convection in precipitating regions of organized convection for higher SSTs.

  1. Escorting commercial aircraft to reduce the MANPAD threat

    NASA Astrophysics Data System (ADS)

    Hock, Nicholas; Richardson, M. A.; Butters, B.; Walmsley, R.; Ayling, R.; Taylor, B.

    2005-11-01

    This paper studies the Man-Portable Air Defence System (MANPADS) threat against large commercial aircraft using flight profile analysis, engagement modelling and simulation. Non-countermeasure equipped commercial aircraft are at risk during approach and departure due to the large areas around airports that would need to be secured to prevent the use of highly portable and concealable MANPADs. A software model (CounterSim) has been developed and was used to simulate an SA-7b and large commercial aircraft engagement. The results of this simulation have found that the threat was lessened when an escort fighter aircraft is flown in the 'Centreline Low' position, or 25 m rearward from the large aircraft and 15 m lower, similar to the Air-to-Air refuelling position. In the model a large aircraft on approach had a 50% chance of being hit or having a near miss (within 20 m) whereas escorted by a countermeasure equipped F-16 in the 'Centreline Low' position, this was reduced to only 14%. Departure is a particularly vulnerable time for large aircraft due to slow climb rates and the inability to fly evasive manoeuvres. The 'Centreline Low' escorted departure greatly reduced the threat to 16% hit or near miss from 62% for an unescorted heavy aircraft. Overall the CounterSim modelling has shown that escorting a civilian aircraft on approach and departure can reduce the MANPAD threat by 3 to 4 times.

  2. Flow-induced vibration analysis of a helical coil steam generator experiment using large eddy simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yuan, Haomin; Solberg, Jerome; Merzari, Elia

    This paper describes a numerical study of flow-induced vibration in a helical coil steam generator experiment conducted at Argonne National Laboratory in the 1980s. In the experiment, a half-scale sector model of a steam generator helical coil tube bank was subjected to still and flowing air and water, and the vibrational characteristics were recorded. The research detailed in this document utilizes the multi-physics simulation toolkit SHARP developed at Argonne National Laboratory, in cooperation with Lawrence Livermore National Laboratory, to simulate the experiment. SHARP uses the spectral element code Nek5000 for fluid dynamics analysis and the finite element code DIABLO for structural analysis. The flow around the coil tubes is modeled in Nek5000 by using a large eddy simulation turbulence model. Transient pressure data on the tube surfaces is sampled and transferred to DIABLO for the structural simulation. The structural response is simulated in DIABLO via an implicit time-marching algorithm and a combination of continuum elements and structural shells. Tube vibration data (acceleration and frequency) are sampled and compared with the experimental data. Currently, only one-way coupling is used, which means that pressure loads from the fluid simulation are transferred to the structural simulation but the resulting structural displacements are not fed back to the fluid simulation.

  3. Effect of Current Electricity Simulation Supported Learning on the Conceptual Understanding of Elementary and Secondary Teachers

    ERIC Educational Resources Information Center

    Kumar, David Devraj; Thomas, P. V.; Morris, John D.; Tobias, Karen M.; Baker, Mary; Jermanovich, Trudy

    2011-01-01

    This study examined the impact of computer simulation supported science learning on teachers' understanding and conceptual knowledge of current electricity. Pre/Post tests were used to measure the teachers' concept attainment. Overall, there was a significant and large knowledge difference effect from Pre to Post test. Two interesting…

  4. Important parameters for smoke plume rise simulation with Daysmoke

    Treesearch

    L. Liu; G.L. Achtemeier; S.L. Goodrick; W. Jackson

    2010-01-01

    Daysmoke is a local smoke transport model and has been used to provide smoke plume rise information. It includes a large number of parameters describing the dynamic and stochastic processes of particle upward movement, fallout, fluctuation, and burn emissions. This study identifies the important parameters for Daysmoke simulations of plume rise and seeks to understand...

  5. IQ and the Death Penalty: Verifying Mental Retardation.

    ERIC Educational Resources Information Center

    Keyes, Denis William

    Whether or not subjects can simulate mental retardation, a consideration that has implications in criminal cases, was studied using 21 adult Caucasian males between 20 and 30 years of age, largely comprised of students and staff employees of the University of New Mexico. Subjects were asked to give genuine and simulated responses to two major test…

  6. 4P: fast computing of population genetics statistics from large DNA polymorphism panels

    PubMed Central

    Benazzo, Andrea; Panziera, Alex; Bertorelle, Giorgio

    2015-01-01

    Massive DNA sequencing has significantly increased the amount of data available for population genetics and molecular ecology studies. However, the parallel computation of simple statistics within and between populations from large panels of polymorphic sites is not yet available, making the exploratory analyses of a set or subset of data a very laborious task. Here, we present 4P (parallel processing of polymorphism panels), a stand-alone software program for the rapid computation of genetic variation statistics (including the joint frequency spectrum) from millions of DNA variants in multiple individuals and multiple populations. It handles a standard input file format commonly used to store DNA variation from empirical or simulation experiments. The computational performance of 4P was evaluated using large SNP (single nucleotide polymorphism) datasets from human genomes or obtained by simulations. 4P was faster or much faster than other comparable programs, and the impact of parallel computing using multicore computers or servers was evident. 4P is a useful tool for biologists who need a simple and rapid computer program to run exploratory population genetics analyses in large panels of genomic data. It is also particularly suitable to analyze multiple data sets produced in simulation studies. Unix, Windows, and macOS versions are provided, as well as the source code for easier pipeline implementations. PMID:25628874
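
    The frequency spectrum mentioned above (4P's joint frequency spectrum generalizes it to multiple populations) can be sketched for a single population as follows (an illustrative helper, not 4P's own implementation):

```python
from collections import Counter

def site_frequency_spectrum(genotypes):
    """Derived-allele site frequency spectrum from a list of sites,
    each site given as a list of 0/1 alleles across sampled chromosomes.
    Entry k counts the sites where the derived allele appears k times."""
    n = len(genotypes[0])
    counts = Counter(sum(site) for site in genotypes)
    return [counts.get(k, 0) for k in range(n + 1)]

# Four sites typed in four chromosomes
sites = [[0, 0, 1, 1], [0, 1, 1, 1], [0, 0, 0, 1], [0, 0, 1, 1]]
sfs = site_frequency_spectrum(sites)
assert sfs == [0, 1, 2, 1, 0]  # one singleton, two doubletons, one tripleton
```

    Parallelizing this, as 4P does, amounts to splitting the sites across workers and merging the per-worker counters.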

  7. Hybrid Reynolds-Averaged/Large-Eddy Simulations of a Coaxial Supersonic Free-Jet Experiment

    NASA Technical Reports Server (NTRS)

    Baurle, Robert A.; Edwards, Jack R.

    2010-01-01

    Reynolds-averaged and hybrid Reynolds-averaged/large-eddy simulations have been applied to a supersonic coaxial jet flow experiment. The experiment was designed to study compressible mixing flow phenomenon under conditions that are representative of those encountered in scramjet combustors. The experiment utilized either helium or argon as the inner jet nozzle fluid, and the outer jet nozzle fluid consisted of laboratory air. The inner and outer nozzles were designed and operated to produce nearly pressure-matched Mach 1.8 flow conditions at the jet exit. The purpose of the computational effort was to assess the state-of-the-art for each modeling approach, and to use the hybrid Reynolds-averaged/large-eddy simulations to gather insight into the deficiencies of the Reynolds-averaged closure models. The Reynolds-averaged simulations displayed a strong sensitivity to choice of turbulent Schmidt number. The initial value chosen for this parameter resulted in an over-prediction of the mixing layer spreading rate for the helium case, but the opposite trend was observed when argon was used as the injectant. A larger turbulent Schmidt number greatly improved the comparison of the results with measurements for the helium simulations, but variations in the Schmidt number did not improve the argon comparisons. The hybrid Reynolds-averaged/large-eddy simulations also over-predicted the mixing layer spreading rate for the helium case, while under-predicting the rate of mixing when argon was used as the injectant. The primary reason conjectured for the discrepancy between the hybrid simulation results and the measurements centered around issues related to the transition from a Reynolds-averaged state to one with resolved turbulent content. Improvements to the inflow conditions were suggested as a remedy to this dilemma. 
Second-order turbulence statistics were also compared to their modeled Reynolds-averaged counterparts to evaluate the effectiveness of common turbulence closure assumptions.

  8. Implementing Parquet equations using HPX

    NASA Astrophysics Data System (ADS)

    Kellar, Samuel; Wagle, Bibek; Yang, Shuxiang; Tam, Ka-Ming; Kaiser, Hartmut; Moreno, Juana; Jarrell, Mark

    A new C++ runtime system (HPX) enables simulations of complex systems to run more efficiently on parallel and heterogeneous systems. This increased efficiency allows for solutions to larger simulations of the parquet approximation for a system with impurities. The relevancy of the parquet equations depends upon the ability to solve systems which require long runs and large amounts of memory. These limitations, in addition to numerical complications arising from stability of the solutions, necessitate running on large distributed systems. As computational resources trend towards the exascale and these limitations vanish, the efficiency of large scale simulations becomes a focus. HPX facilitates efficient simulations through intelligent overlapping of computation and communication. Simulations such as the parquet equations which require the transfer of large amounts of data should benefit from HPX implementations. Supported by the NSF EPSCoR Cooperative Agreement No. EPS-1003897 with additional support from the Louisiana Board of Regents.

  9. Mean-state acceleration of cloud-resolving models and large eddy simulations

    DOE PAGES

    Jones, C. R.; Bretherton, C. S.; Pritchard, M. S.

    2015-10-29

    In this study, large eddy simulations and cloud-resolving models (CRMs) are routinely used to simulate boundary layer and deep convective cloud processes, aid in the development of moist physical parameterization for global models, study cloud-climate feedbacks and cloud-aerosol interaction, and as the heart of superparameterized climate models. These models are computationally demanding, placing practical constraints on their use in these applications, especially for long, climate-relevant simulations. In many situations, the horizontal-mean atmospheric structure evolves slowly compared to the turnover time of the most energetic turbulent eddies. We develop a simple scheme to reduce this time scale separation to accelerate the evolution of the mean state. Using this approach we are able to accelerate the model evolution by a factor of 2–16 or more in idealized stratocumulus, shallow and deep cumulus convection without substantial loss of accuracy in simulating mean cloud statistics and their sensitivity to climate change perturbations. As a culminating test, we apply this technique to accelerate the embedded CRMs in the Superparameterized Community Atmosphere Model by a factor of 2, thereby showing that the method is robust and stable to realistic perturbations across spatial and temporal scales typical in a GCM.
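
    The core of the mean-state acceleration idea is to multiply the slowly evolving horizontal-mean tendency by a constant factor while the eddies evolve normally. A minimal single-step sketch (function name, profile values and factor are illustrative assumptions; the actual scheme operates on model-level profiles inside the CRM time loop):

```python
def accelerated_mean_step(profile, tendency, dt, factor):
    """Advance a horizontal-mean profile one time step with its tendency
    multiplied by an acceleration factor, so the mean state evolves
    `factor` times faster than the resolved eddies."""
    return [p + factor * t * dt for p, t in zip(profile, tendency)]

theta = [300.0, 305.0, 310.0]   # mean potential temperature per level, K
dtheta = [0.001, 0.0005, 0.0]   # slow mean tendency, K/s
theta_fast = accelerated_mean_step(theta, dtheta, dt=10.0, factor=8.0)
# Level 0 warms 8x faster than unaccelerated: 300.0 + 8 * 0.001 * 10 = 300.08 K
```

    The approach is valid only while the time-scale separation holds; when the mean state starts evolving as fast as the eddies, the factor must be reduced.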

  10. Precursor Wave Emission Enhanced by Weibel Instability in Relativistic Shocks

    NASA Astrophysics Data System (ADS)

    Iwamoto, Masanori; Amano, Takanobu; Hoshino, Masahiro; Matsumoto, Yosuke

    2018-05-01

    We investigated the precursor wave emission efficiency in magnetized, purely perpendicular relativistic shocks in pair plasmas. We extended our previous study to include the dependence on the upstream magnetic field orientation. We performed two-dimensional particle-in-cell simulations and focused on two magnetic field orientations: the magnetic field in the simulation plane (i.e., the in-plane configuration) and perpendicular to the simulation plane (i.e., the out-of-plane configuration). Our simulations in the in-plane configuration demonstrated that not only extraordinary but also ordinary mode waves are excited. We quantified the emission efficiency as a function of the magnetization parameter σe and found that large-amplitude precursor waves are emitted for a wide range of σe. We found that, especially at low σe, the magnetic field generated by the Weibel instability amplifies the ordinary mode wave power. The amplitude is large enough to perturb the upstream plasma, and transverse density filaments are generated as in the case of the out-of-plane configuration investigated in the previous study. We confirmed that our previous conclusion holds regardless of the upstream magnetic field orientation with respect to the two-dimensional simulation plane. We discuss the precursor wave emission in three dimensions and the feasibility of wakefield acceleration in relativistic shocks based on our results.

  11. Hysteroscopic sterilization using a virtual reality simulator: assessment of learning curve.

    PubMed

    Janse, Juliënne A; Goedegebuure, Ruben S A; Veersema, Sebastiaan; Broekmans, Frank J M; Schreuder, Henk W R

    2013-01-01

    To assess the learning curve using a virtual reality simulator for hysteroscopic sterilization with the Essure method. Prospective multicenter study (Canadian Task Force classification II-2). University and teaching hospital in the Netherlands. Thirty novices (medical students) and five experts (gynecologists who had performed >150 Essure sterilization procedures). All participants performed nine repetitions of bilateral Essure placement on the simulator. Novices returned after 2 weeks and performed a second series of five repetitions to assess retention of skills. Structured observations on performance using the Global Rating Scale and parameters derived from the simulator provided measurements for analysis. The learning curve is represented by improvement per procedure. Two-way repeated-measures analysis of variance was used to analyze learning curves. Effect size (ES) was calculated to express the practical significance of the results (ES ≥ 0.50 indicates a large learning effect). For all parameters, significant improvements were found in novice performance within nine repetitions. Large learning effects were established for six of eight parameters (p < .001; ES, 0.50-0.96). Novices approached expert level within 9 to 14 repetitions. The learning curve established in this study endorses future implementation of the simulator in curricula on hysteroscopic skill acquisition for clinicians who are interested in learning this sterilization technique. Copyright © 2013 AAGL. Published by Elsevier Inc. All rights reserved.
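    The effect-size criterion used in the study (ES ≥ 0.50 read as a large learning effect) can be illustrated with a paired Cohen's d computation; the abstract does not specify the exact ES formula, so this stand-in is an assumption:

    ```python
    import math

    def cohens_d(before, after):
        """Effect size for paired performance scores (illustrative sketch;
        Cohen's d for paired data is used here as a stand-in for the study's
        unspecified ES formula). ES >= 0.50 is read as a large effect.
        """
        diffs = [a - b for b, a in zip(before, after)]
        mean = sum(diffs) / len(diffs)
        # sample variance of the paired differences
        var = sum((d - mean) ** 2 for d in diffs) / (len(diffs) - 1)
        return mean / math.sqrt(var)
    ```

    Applied to repeated simulator scores per trainee, values of 0.50-0.96 as reported would indicate consistently large per-repetition improvement.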

  12. Large-Scale Covariability Between Aerosol and Precipitation Over the 7-SEAS Region: Observations and Simulations

    NASA Technical Reports Server (NTRS)

    Huang, Jingfeng; Hsu, N. Christina; Tsay, Si-Chee; Zhang, Chidong; Jeong, Myeong Jae; Gautam, Ritesh; Bettenhausen, Corey; Sayer, Andrew M.; Hansell, Richard A.; Liu, Xiaohong; hide

    2012-01-01

    One of the seven scientific areas of interest of the 7-SEAS field campaign is to evaluate the impact of aerosol on cloud and precipitation (http://7-seas.gsfc.nasa.gov). However, large-scale covariability between aerosol, cloud and precipitation is complicated not only by the ambient environment and a variety of aerosol effects, but also by effects from rain washout and climate factors. This study characterizes large-scale aerosol-cloud-precipitation covariability through a synergy of long-term multi-sensor satellite observations with model simulations over the 7-SEAS region [10S-30N, 95E-130E]. Results show that climate factors such as ENSO significantly modulate aerosol and precipitation over the region simultaneously. After removal of climate factor effects, aerosol and precipitation are significantly anti-correlated over the southern part of the region, where high aerosol loading is associated with overall reduced total precipitation with intensified rain rates and decreased rain frequency, decreased tropospheric latent heating, suppressed cloud top height and increased outgoing longwave radiation, and enhanced clear-sky shortwave TOA flux but reduced all-sky shortwave TOA flux in deep convective regimes; such covariability becomes less notable over the northern part of the region, where low-level stratus are found. Using CO as a proxy of biomass burning aerosols to minimize the washout effect, large-scale covariability between CO and precipitation was also investigated, and similar covariability was observed. Model simulations with NCAR CAM5 were found to show spatio-temporal patterns similar to the observations. Results from both observations and simulations are valuable for improving our understanding of this region's meteorological system and the roles of aerosol within it. Key words: aerosol; precipitation; large-scale covariability; aerosol effects; washout; climate factors; 7-SEAS; CO; CAM5
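    The step of removing climate-factor effects before correlating aerosol and precipitation can be sketched as a partial correlation: regress both series on an ENSO index, then correlate the residuals (an illustrative stand-in, not the study's actual procedure; all series are hypothetical):

    ```python
    import numpy as np

    def residual_correlation(aerosol, precip, enso):
        """Correlate aerosol and precipitation after regressing out an ENSO index
        (illustrative sketch; inputs are hypothetical monthly time series)."""
        def residual(y, x):
            # least-squares fit y = a*x + b; return what the index cannot explain
            a, b = np.polyfit(x, y, 1)
            return y - (a * x + b)
        return np.corrcoef(residual(aerosol, enso), residual(precip, enso))[0, 1]
    ```

    A strongly negative residual correlation corresponds to the anti-correlation the study reports over the southern part of the region once the shared ENSO modulation is removed.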

  13. Relative solvation free energies calculated using an ab initio QM/MM-based free energy perturbation method: dependence of results on simulation length.

    PubMed

    Reddy, M Rami; Erion, Mark D

    2009-12-01

    Molecular dynamics (MD) simulations in conjunction with the thermodynamic perturbation approach were used to calculate relative solvation free energies of five pairs of small molecules, namely: (1) methanol to ethane, (2) acetone to acetamide, (3) phenol to benzene, (4) 1,1,1-trichloroethane to ethane, and (5) phenylalanine to isoleucine. Two studies were performed to evaluate the dependence of the convergence of these calculations on MD simulation length and starting configuration. In the first study, each transformation started from the same well-equilibrated configuration and the simulation length was varied from 230 to 2,540 ps. The results indicated that for transformations involving small structural changes, a simulation length of 860 ps is sufficient to obtain satisfactory convergence. In contrast, transformations involving relatively large structural changes, such as phenylalanine to isoleucine, require a significantly longer simulation length (>2,540 ps) to obtain satisfactory convergence. In the second study, the transformation was completed starting from three different configurations, using in each case 860 ps of MD simulation. The results from this study suggest that performing one long simulation may be better than averaging the results of three shorter simulations started from different configurations.
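    The free energy perturbation estimator underlying such calculations is the Zwanzig exponential average, ΔA = -kT ln⟨exp(-ΔU/kT)⟩_A; a minimal sketch (illustrative only, the paper's ab initio QM/MM implementation is far more involved):

    ```python
    import numpy as np

    def fep_free_energy(delta_u, kT=0.596):
        """Zwanzig free energy perturbation estimator (sketch).

        delta_u: samples of the potential-energy difference U_B - U_A
                 (e.g. kcal/mol) collected on the A ensemble.
        kT: thermal energy in the same units (0.596 kcal/mol ~ 300 K).
        Returns the estimate of Delta A = -kT ln <exp(-dU/kT)>_A.
        """
        delta_u = np.asarray(delta_u, dtype=float)
        return -kT * np.log(np.mean(np.exp(-delta_u / kT)))
    ```

    Because the exponential average is dominated by rare low-ΔU configurations, large structural changes converge slowly, which is consistent with the long simulation lengths the study finds necessary for phenylalanine to isoleucine.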

  14. Visualizing staggered fields and analyzing electromagnetic data with PerceptEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shasharina, Svetlana

    This project resulted in VSimSP: software for simulating large photonic devices on high-performance computers. It includes: a GUI for photonics simulations; a high-performance meshing algorithm; a 2nd-order multi-material algorithm; a mode solver for waveguides; a 2nd-order material dispersion algorithm; S-parameter calculation; a high-performance workflow at NERSC; and simulation setups for large photonic devices. We believe we became the only company in the world that can simulate large photonic devices in 3D on modern supercomputers without the need to split them into subparts or resort to low-fidelity modeling. We started a commercial engagement with a manufacturing company.

  15. Preparing for Large-Force Exercises with Distributed Simulation: A Panel Presentation

    DTIC Science & Technology

    2010-07-01

    Preparing for Large Force Exercises with Distributed Simulation: A Panel Presentation Peter Crane, Winston Bennett, Michael France Air Force...used distributed simulation training to complement live-fly exercises to prepare for LFEs. In this panel presentation, the speakers will describe... presentations on how detailed analysis of training needs is necessary to structure simulator scenarios and how future training exercises could be made more

  16. Implementation of Shifted Periodic Boundary Conditions in the Large-Scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) Software

    DTIC Science & Technology

    2015-08-01

    Atomic/Molecular Massively Parallel Simulator (LAMMPS) Software by N Scott Weingarten and James P Larentzos Approved for...Massively Parallel Simulator (LAMMPS) Software by N Scott Weingarten Weapons and Materials Research Directorate, ARL James P Larentzos Engility...Shifted Periodic Boundary Conditions in the Large-Scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) Software 5a. CONTRACT NUMBER 5b

  17. Study of 3-D Dynamic Roughness Effects on Flow Over a NACA 0012 Airfoil Using Large Eddy Simulations at Low Reynolds Numbers

    NASA Astrophysics Data System (ADS)

    Guda, Venkata Subba Sai Satish

    There have been several advancements in the aerospace industry in areas such as aerodynamics, design, controls and propulsion, all aimed at one common goal: increasing efficiency, i.e., greater range and scope of operation with lower fuel consumption. Several methods of flow control have been tried. Some were successful, some failed, and many were deemed impractical. The low Reynolds number regime of 10^4-10^5 is a very interesting range. Flow physics in this range are quite different from those at higher Reynolds numbers. Mid- and high-altitude UAVs, MAVs, sailplanes, jet engine fan blades, inboard helicopter rotor blades and wind turbine rotors are some of the aerodynamic applications that fall in this range. The current study deals with using dynamic roughness as a means of flow control over a NACA 0012 airfoil at low Reynolds numbers. Dynamic 3-D surface roughness elements placed near the leading edge of an airfoil aim at increasing efficiency by suppressing the effects of leading edge separation, such as leading edge stall, by delaying or totally eliminating flow separation. A numerical study of this method has been carried out by means of large eddy simulation, a mathematical model for turbulence in computational fluid dynamics, owing to the highly unsteady nature of the flow. A user-defined function has been developed for the 3-D dynamic roughness element motion. Results from simulations have been compared to experimental PIV data. The large eddy simulations captured the leading edge stall relatively well. For the clean cases, i.e., with the dynamic roughness not actuated, the LES was able to reproduce experimental results reasonably well. However, the dynamic roughness simulations fail to reattach the flow and suppress flow separation, in contrast to experiments. Several novel techniques of grid design and hump creation are introduced through this study.

  18. Modeling crystal growth from solution with molecular dynamics simulations: approaches to transition rate constants.

    PubMed

    Reilly, Anthony M; Briesen, Heiko

    2012-01-21

    The feasibility of using the molecular dynamics (MD) simulation technique to study crystal growth from solution quantitatively, as well as to obtain transition rate constants, has been studied. The dynamics of an interface between a solution of Lennard-Jones particles and the (100) face of an fcc lattice composed of solute particles have been studied using MD simulations, showing that MD is, in principle, capable of following growth behavior over large supersaturation and temperature ranges. Using transition state theory and a nearest-neighbor approximation, growth and dissolution rate constants have been extracted from equilibrium MD simulations at a variety of temperatures. The temperature dependence of the rates agrees well with the expected transition state theory behavior. © 2012 American Institute of Physics.
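    The reported check of temperature dependence against transition state theory amounts to an Arrhenius-type fit of the extracted rate constants; a sketch (the prefactor form and the units chosen below are assumptions, not from the paper):

    ```python
    import numpy as np

    def arrhenius_fit(temperatures, rates):
        """Fit ln k = ln A - Ea/(kB*T) to rate constants from equilibrium runs.

        Illustrative sketch of the temperature-dependence check: a linear fit
        of ln k against 1/T. Returns (Ea, A) with kB in kJ/mol/K below.
        """
        kB = 8.314e-3  # kJ/mol/K
        x = 1.0 / np.asarray(temperatures, dtype=float)
        y = np.log(np.asarray(rates, dtype=float))
        slope, intercept = np.polyfit(x, y, 1)
        return -slope * kB, np.exp(intercept)
    ```

    A straight line in these coordinates, i.e. a well-defined activation energy, is what "agrees well with the expected transition state theory behavior" implies.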

  19. Sharp Interface Algorithm for Large Density Ratio Incompressible Multiphase Magnetohydrodynamic Flows

    DTIC Science & Technology

    2013-01-01

    experiments on liquid metal jets. The FronTier-MHD code has been used for simulations of liquid mercury targets for the proposed muon collider...validated through the comparison with experiments on liquid metal jets...FronTier-MHD code have been performed using experimental and theoretical studies of liquid mercury jets in magnetic fields. Experimental studies of a

  20. Finite-time and finite-size scalings in the evaluation of large-deviation functions: Numerical approach in continuous time.

    PubMed

    Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien

    2017-06-01

    Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to selection rules that favor the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can render their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limit of these estimators, which, as shown for the contact process, provides a significant improvement over the standard large deviation function estimators.
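    The estimator at the heart of such algorithms can be sketched for the simplest possible observable, the number of 'up' moves in a Bernoulli trajectory, where the scaled cumulant generating function (SCGF) reduces to an average of exponential weights (a minimal illustration, not the paper's contact-process setup):

    ```python
    import numpy as np

    def scgf_estimate(s, n_clones=1000, n_steps=200, p=0.5, seed=0):
        """Monte Carlo estimate of the SCGF psi(s) for the number of 'up' moves
        in a Bernoulli(p) trajectory (minimal sketch).

        Each of n_clones copies takes one step and receives weight exp(-s*step);
        psi(s) is the average log population-growth factor per step. (A full
        population-dynamics algorithm also resamples clones by their weights,
        which matters once the dynamics carries memory between steps.)
        """
        rng = np.random.default_rng(seed)
        log_growth = 0.0
        for _ in range(n_steps):
            steps = (rng.random(n_clones) < p).astype(float)
            log_growth += np.log(np.mean(np.exp(-s * steps)))
        return log_growth / n_steps
    ```

    For this toy case the exact answer is psi(s) = ln((1-p) + p e^{-s}); the finite n_clones and n_steps biases of the estimate are precisely the finite-size and finite-time effects whose scalings the paper exploits.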

  1. The Strata-1 Regolith Dynamics Experiment: Class 1E Science on ISS

    NASA Technical Reports Server (NTRS)

    Fries, Marc; Graham, Lee; John, Kristen

    2016-01-01

    The Strata-1 experiment studies the evolution of small body regolith through long-duration exposure of simulant materials to the microgravity environment on the International Space Station (ISS). This study will record segregation and mechanical dynamics of regolith simulants in a microgravity and vibration environment similar to that experienced by regolith on small Solar System bodies. Strata-1 will help us understand regolith dynamics and will inform design and procedures for landing and setting anchors, safely sampling and moving material on asteroidal surfaces, processing large volumes of material for in situ resource utilization (ISRU) purposes, and, in general, predicting the behavior of large and small particles on disturbed asteroid surfaces. This experiment is providing new insights into small body surface evolution.

  2. Logistics and quality control for DNA sampling in large multicenter studies.

    PubMed

    Nederhand, R J; Droog, S; Kluft, C; Simoons, M L; de Maat, M P M

    2003-05-01

    To study associations between genetic variation and disease, large biobanks need to be created in multicenter studies. We therefore studied the effects of storage time and temperature on DNA quality and quantity in a simulation experiment with storage for up to 28 days frozen, at 4 degrees C, and at room temperature. In the simulation experiment, none of the conditions degraded the amount or quality of DNA to an unsatisfactory level. However, the amount of extracted DNA was decreased in frozen samples and in samples stored for >7 days at room temperature. In a sample of patients from 24 countries of the EUROPA trial, obtained by mail with transport times up to 1 month, DNA yield and quality were adequate. From these results we conclude that transport of non-frozen blood by ordinary mail is usable and practical for DNA isolation for polymerase chain reaction in clinical and epidemiological studies.

  3. High mobility of large mass movements: a study by means of FEM/DEM simulations

    NASA Astrophysics Data System (ADS)

    Manzella, I.; Lisjak, A.; Grasselli, G.

    2013-12-01

    Large mass movements, such as rock avalanches and large volcanic debris avalanches, are characterized by extremely long propagation, which cannot be modelled using a normal sliding friction law. For this reason, several theories derived from field observations, physical principles and laboratory experiments exist that try to explain their high mobility. In order to investigate in more depth some of the processes invoked by these theories, simulations have been run with a new numerical tool called Y-GUI, based on the combined finite element-discrete element method (FEM/DEM). The FEM/DEM method is a numerical technique developed by Munjiza et al. (1995) where discrete element method (DEM) algorithms are used to model the interaction between different solids, while finite element method (FEM) principles are used to analyze their deformability, with the ability to explicitly simulate a sudden loss of cohesion in the material (i.e. brittle failure). In particular, numerical tests have been run, inspired by the small-scale experiments of Manzella and Labiouse (2013). They consist of rectangular blocks released on a slope; each block is a rectangular discrete element made of a mesh of finite elements enabled to fragment. These simulations have highlighted the influence on the propagation of block packing, i.e. whether the elements are piled into a geometrically ordered structure before failure or chaotically disposed like a loose material, and of the topography, i.e. whether the slope break is smooth and regular or not. In addition, the effect of fracturing, i.e. fragmentation, on the total runout has been studied and highlighted.

  4. Large Eddy Simulations of a Bottom Boundary Layer Under a Shallow Geostrophic Front

    NASA Astrophysics Data System (ADS)

    Bateman, S. P.; Simeonov, J.; Calantoni, J.

    2017-12-01

    The unstratified surf zone and the stratified shelf waters are often separated by dynamic fronts that can strongly impact the character of the Ekman bottom boundary layer. Here, we use large eddy simulations to study the turbulent bottom boundary layer associated with a geostrophic current on a stratified shelf of uniform depth. The simulations are initialized with a spatially uniform vertical shear that is in geostrophic balance with a pressure gradient due to a linear horizontal temperature variation. Superposed on the temperature front is a stable vertical temperature gradient. As turbulence develops near the bottom, the turbulence-induced mixing gradually erodes the initial uniform temperature stratification and a well-mixed layer grows in height until the turbulence becomes fully developed. The simulations provide the spatial distribution of the turbulent dissipation and the Reynolds stresses in the fully developed boundary layer. We vary the initial linear stratification and investigate its effect on the height of the bottom boundary layer and the turbulence statistics. The results are compared to previous models and simulations of stratified bottom Ekman layers.

  5. A simulation-based efficiency comparison of AC and DC power distribution networks in commercial buildings

    DOE PAGES

    Gerber, Daniel L.; Vossos, Vagelis; Feng, Wei; ...

    2017-06-12

    Direct current (DC) power distribution has recently gained traction in buildings research due to the proliferation of on-site electricity generation and battery storage, and an increasing prevalence of internal DC loads. The research discussed in this paper uses Modelica-based simulation to compare the efficiency of DC building power distribution with an equivalent alternating current (AC) distribution. The buildings are all modeled with solar generation, battery storage, and loads that are representative of the most efficient building technology. A variety of parametric simulations determine how and when DC distribution proves advantageous. These simulations also validate previous studies that use simpler approaches and arithmetic efficiency models. This work shows that using DC distribution can be considerably more efficient: a medium-sized office building using DC distribution has an expected baseline of 12% savings, but may save up to 18%. In these results, the baseline simulation parameters are for a zero net energy (ZNE) building that can island as a microgrid. DC is most advantageous in buildings with large solar capacity, large battery capacity, and high-voltage distribution.
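    The kind of arithmetic efficiency model these simulations validate can be sketched as a simple conversion-loss comparison (the component efficiencies and two-conversion topology below are assumptions for illustration, not taken from the paper):

    ```python
    def distribution_savings(load_kwh, pv_fraction, eta_inv=0.95,
                             eta_rect=0.94, eta_dcdc=0.97):
        """Toy arithmetic efficiency model comparing AC and DC distribution.

        pv_fraction: fraction of the load served by on-site DC generation.
        Assumed AC path: PV -> inverter -> AC bus -> rectifier -> DC load.
        Assumed DC path: PV -> DC/DC -> DC bus -> DC load (grid rectified once).
        Returns the fractional energy savings of DC over AC.
        """
        grid_fraction = 1.0 - pv_fraction
        # energy drawn per kWh delivered = 1 / (product of path efficiencies)
        ac_input = load_kwh * (pv_fraction / (eta_inv * eta_rect)
                               + grid_fraction / eta_rect)
        dc_input = load_kwh * (pv_fraction / eta_dcdc
                               + grid_fraction / eta_rect)
        return (ac_input - dc_input) / ac_input
    ```

    The model reproduces the qualitative finding: savings grow with the fraction of load served by on-site DC generation, because the AC path pays for two conversions where the DC path pays for one.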

  6. A hybrid parallel framework for the cellular Potts model simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Yi; He, Kejing; Dong, Shoubin

    2009-01-01

    The cellular Potts model (CPM) has been widely used for biological simulations. However, most current implementations are either sequential or approximate, and cannot be used for large-scale, complex 3D simulations. In this paper we present a hybrid parallel framework for CPM simulations. The time-consuming PDE solving, cell division, and cell reaction operations are distributed to clusters using the Message Passing Interface (MPI). The Monte Carlo lattice update is parallelized on shared-memory SMP systems using OpenMP. Because the Monte Carlo lattice update is much faster than the PDE solving and SMP systems are increasingly common, this hybrid approach achieves good performance and high accuracy at the same time. Based on the parallel cellular Potts model, we studied avascular tumor growth using a multiscale model. The application and performance analysis show that the hybrid parallel framework is quite efficient. The hybrid parallel CPM can be used for large-scale simulations (~10^8 sites) of the complex collective behavior of numerous cells (~10^6).
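    The Monte Carlo lattice update that the framework parallelizes with OpenMP can be sketched serially as a Metropolis-style copy attempt on a cell-ID lattice (a minimal CPM-flavored illustration; the real model's Hamiltonian also includes volume-constraint and differential-adhesion terms):

    ```python
    import numpy as np

    def cpm_sweep(lattice, temperature=1.0, seed=0):
        """One Monte Carlo sweep of a minimal cellular-Potts-style update
        (serial sketch; this is the loop the hybrid framework runs under
        OpenMP while MPI ranks handle the PDE solves).

        lattice: 2-D integer array of cell IDs; energy counts mismatched
        neighbors on a periodic grid.
        """
        rng = np.random.default_rng(seed)
        n, m = lattice.shape

        def local_energy(i, j, spin):
            e = 0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                e += spin != lattice[(i + di) % n, (j + dj) % m]
            return e

        for _ in range(n * m):
            i, j = rng.integers(n), rng.integers(m)
            # copy attempt: try to adopt the ID of a random neighbor
            di, dj = ((1, 0), (-1, 0), (0, 1), (0, -1))[rng.integers(4)]
            new = lattice[(i + di) % n, (j + dj) % m]
            dE = local_energy(i, j, new) - local_energy(i, j, lattice[i, j])
            if dE <= 0 or rng.random() < np.exp(-dE / temperature):
                lattice[i, j] = new
        return lattice
    ```

    Because each copy attempt touches only a small neighborhood, independent lattice regions can be updated concurrently, which is what makes the shared-memory OpenMP parallelization of this step effective.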

  7. A simulation-based efficiency comparison of AC and DC power distribution networks in commercial buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerber, Daniel L.; Vossos, Vagelis; Feng, Wei

    Direct current (DC) power distribution has recently gained traction in buildings research due to the proliferation of on-site electricity generation and battery storage, and an increasing prevalence of internal DC loads. The research discussed in this paper uses Modelica-based simulation to compare the efficiency of DC building power distribution with an equivalent alternating current (AC) distribution. The buildings are all modeled with solar generation, battery storage, and loads that are representative of the most efficient building technology. A variety of parametric simulations determine how and when DC distribution proves advantageous. These simulations also validate previous studies that use simpler approaches and arithmetic efficiency models. This work shows that using DC distribution can be considerably more efficient: a medium-sized office building using DC distribution has an expected baseline of 12% savings, but may save up to 18%. In these results, the baseline simulation parameters are for a zero net energy (ZNE) building that can island as a microgrid. DC is most advantageous in buildings with large solar capacity, large battery capacity, and high-voltage distribution.

  8. Evaluation on Asian Dust Aerosol and Simulated Processes in CanAM4.2 Using Satellite Measurements and Station Data

    NASA Astrophysics Data System (ADS)

    Yiran, P.; Li, J.; von Salzen, K.; Dai, T.; Liu, D.

    2014-12-01

    Mineral dust is a significant contributor to the global and Asian aerosol burden. Currently, large uncertainties still exist in the simulated aerosol processes in global climate models (GCMs), which lead to a diversity in the dust mass loading and spatial distribution of GCM projections. In this study, satellite measurements from CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization) and observed aerosol data from Asian stations are compared with modelled aerosol in the Canadian Atmospheric Global Climate Model (CanAM4.2). Both seasonal and annual variations in the Asian dust distribution are investigated. Vertical profiles of simulated aerosol in the troposphere are evaluated with CALIOP Level 3 products and locally observed extinction for dust and total aerosols. Physical processes in the GCM such as horizontal advection, vertical mixing, and dry and wet removal are analyzed according to the model simulation and available aerosol measurements. This work aims to improve the current understanding of Asian dust transport and vertical exchange on a large scale, which may help to increase the accuracy of GCM simulations of aerosols.

  9. Large-Scale Dynamics of the Magnetospheric Boundary: Comparisons between Global MHD Simulation Results and ISTP Observations

    NASA Technical Reports Server (NTRS)

    Berchem, J.; Raeder, J.; Ashour-Abdalla, M.; Frank, L. A.; Paterson, W. R.; Ackerson, K. L.; Kokubun, S.; Yamamoto, T.; Lepping, R. P.

    1998-01-01

    Understanding the large-scale dynamics of the magnetospheric boundary is an important step towards achieving the ISTP mission's broad objective of assessing the global transport of plasma and energy through the geospace environment. Our approach is based on three-dimensional global magnetohydrodynamic (MHD) simulations of the solar wind-magnetosphere-ionosphere system, and consists of using interplanetary magnetic field (IMF) and plasma parameters measured by solar wind monitors upstream of the bow shock as input to the simulations for predicting the large-scale dynamics of the magnetospheric boundary. The validity of these predictions is tested by comparing local data streams with time series measured by downstream spacecraft crossing the magnetospheric boundary. In this paper, we review results from several case studies which confirm that our MHD model reproduces very well the large-scale motion of the magnetospheric boundary. The first case illustrates the complexity of the magnetic field topology that can occur at the dayside magnetospheric boundary for periods of northward IMF with strong Bx and By components. The second comparison reviewed combines dynamic and topological aspects in an investigation of the evolution of the distant tail at 200 R_E from the Earth.

  10. Initial Study of an Effective Fast-Time Simulation Platform for Unmanned Aircraft System Traffic Management

    NASA Technical Reports Server (NTRS)

    Xue, Min; Rios, Joseph

    2017-01-01

    Small unmanned aerial vehicles (sUAVs), typically 55 lbs and below, are envisioned to play a major role in surveilling critical assets, collecting important information, and delivering goods. Large-scale small UAV operations are expected to happen in low-altitude airspace in the near future. Many static and dynamic constraints exist in low-altitude airspace because of manned aircraft or helicopter activities, various wind conditions, restricted airspace, terrain and man-made buildings, and conflict avoidance among sUAVs. High sensitivity and high maneuverability are unique characteristics of sUAVs that bring challenges to effective system evaluations and demand a simulation platform different from existing simulations built for the manned air traffic system and large unmanned fixed-wing aircraft. NASA's Unmanned aircraft system Traffic Management (UTM) research initiative focuses on enabling safe and efficient sUAV operations in the future. In order to help define requirements and policies for a safe and efficient UTM system that can accommodate a large number of sUAV operations, it is necessary to develop a fast-time simulation platform that can effectively evaluate requirements, policies, and concepts in a close-to-reality environment. This work analyzed the impacts of some key factors, including the aforementioned sUAV characteristics, and demonstrated the importance of these factors in a successful UTM fast-time simulation platform.

  11. Initial Study of An Effective Fast-Time Simulation Platform for Unmanned Aircraft System Traffic Management

    NASA Technical Reports Server (NTRS)

    Xue, Min; Rios, Joseph

    2017-01-01

    Small unmanned aerial vehicles (sUAVs), typically 55 lbs and below, are envisioned to play a major role in surveilling critical assets, collecting important information, and delivering goods. Large-scale small UAV operations are expected to happen in low-altitude airspace in the near future. Many static and dynamic constraints exist in low-altitude airspace because of manned aircraft or helicopter activities, various wind conditions, restricted airspace, terrain and man-made buildings, and conflict avoidance among sUAVs. High sensitivity and high maneuverability are unique characteristics of sUAVs that bring challenges to effective system evaluations and demand a simulation platform different from existing simulations built for the manned air traffic system and large unmanned fixed-wing aircraft. NASA's Unmanned aircraft system Traffic Management (UTM) research initiative focuses on enabling safe and efficient sUAV operations in the future. In order to help define requirements and policies for a safe and efficient UTM system that can accommodate a large number of sUAV operations, it is necessary to develop a fast-time simulation platform that can effectively evaluate requirements, policies, and concepts in a close-to-reality environment. This work analyzed the impacts of some key factors, including the aforementioned sUAV characteristics, and demonstrated the importance of these factors in a successful UTM fast-time simulation platform.

  12. TEMPEST simulations of the plasma transport in a single-null tokamak geometry

    NASA Astrophysics Data System (ADS)

    Xu, X. Q.; Bodi, K.; Cohen, R. H.; Krasheninnikov, S.; Rognlien, T. D.

    2010-06-01

    We present edge kinetic ion transport simulations of tokamak plasmas in magnetic divertor geometry using the fully nonlinear (full-f) continuum code TEMPEST. Besides neoclassical transport, a term for the divergence of an anomalous kinetic radial flux is added to mock up the effect of turbulent transport. To study the relative roles of neoclassical and anomalous transport, TEMPEST simulations were carried out for plasma transport and flow dynamics in a single-null tokamak geometry, including the pedestal region that extends across the separatrix into the scrape-off layer and private flux region. A series of TEMPEST simulations were conducted to investigate the transition of midplane pedestal heat flux and flow from the neoclassical to the turbulent limit and the transition of divertor heat flux and flow from the kinetic to the fluid regime via an anomalous transport scan and a density scan. The TEMPEST simulation results demonstrate that turbulent transport (as modelled by large diffusion) plays a similar role to collisional decorrelation of particle orbits and that large turbulent transport (large diffusion) leads to an apparent Maxwellianization of the particle distribution. We also show the transition of parallel heat flux and flow at the entrance to the divertor plates from the fluid to the kinetic regime. For an absorbing divertor plate boundary condition, a non-half-Maxwellian distribution is found, due to the balance between upstream radial anomalous transport and energetic ion end loss.

  13. Cross-flow turbines: physical and numerical model studies towards improved array simulations

    NASA Astrophysics Data System (ADS)

    Wosnik, M.; Bachant, P.

    2015-12-01

    Cross-flow, or vertical-axis, turbines show potential in marine hydrokinetic (MHK) and wind energy applications. As turbine designs mature, the research focus is shifting from individual devices towards improving turbine array layouts for maximizing overall power output, i.e., minimizing wake interference for axial-flow turbines, or taking advantage of constructive wake interaction for cross-flow turbines. Numerical simulations are generally better suited to explore the turbine array design parameter space, as physical model studies of large arrays at large model scale would be expensive. However, since the computing power available today is not sufficient to conduct simulations of the flow in and around large arrays of turbines with fully resolved turbine geometries, the turbines' interaction with the energy resource needs to be parameterized, or modeled. Most models in use today, e.g. the actuator disk, are not able to predict the unique wake structure generated by cross-flow turbines. Experiments were carried out using a high-resolution turbine test bed in a large cross-section tow tank, designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. To improve parameterization in array simulations, an actuator line model (ALM) was developed to provide a computationally feasible method for simulating full turbine arrays inside Navier-Stokes models. The ALM predicts turbine loading with the blade element method combined with sub-models for dynamic stall and flow curvature. The open-source software is written as an extension library for the OpenFOAM CFD package, which allows the ALM body force to be applied to its standard RANS and LES solvers. Turbine forcing is also applied to volume of fluid (VOF) models, e.g., for predicting free surface effects on submerged MHK devices. 
An additional sub-model is considered for injecting turbulence model scalar quantities based on actuator line element loading. Results are presented for the simulation of performance and wake dynamics of axial- and cross-flow turbines and compared with experiments and body-fitted mesh, blade-resolving CFD. Supported by NSF-CBET grant 1150797.
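
    The blade-element loading at the heart of an ALM reduces, per actuator-line element, to evaluating lift and drag from the local flow at that element. The sketch below is illustrative only: it uses the thin-airfoil lift slope and an invented parabolic drag polar, not the coefficient tables, dynamic-stall, or flow-curvature sub-models of the OpenFOAM extension library described above.

```python
import math

def blade_element_force(rho, chord, v_rel, alpha_deg, cl_slope=2 * math.pi):
    """Per-unit-span lift and drag of one actuator line element.

    rho: fluid density, chord: local chord length, v_rel: local relative
    velocity magnitude, alpha_deg: local angle of attack in degrees.
    """
    alpha = math.radians(alpha_deg)
    cl = cl_slope * alpha               # thin-airfoil lift coefficient, pre-stall
    cd = 0.01 + 0.05 * alpha ** 2       # invented parabolic drag polar
    q_c = 0.5 * rho * chord * v_rel ** 2  # dynamic pressure times chord
    return q_c * cl, q_c * cd
```

    In an ALM these element forces are projected back onto the flow solver's grid as a smeared body force, which is what makes full-array Navier-Stokes simulations computationally feasible.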

  14. An extended algebraic variational multiscale-multigrid-multifractal method (XAVM4) for large-eddy simulation of turbulent two-phase flow

    NASA Astrophysics Data System (ADS)

    Rasthofer, U.; Wall, W. A.; Gravemeier, V.

    2018-04-01

    A novel and comprehensive computational method, referred to as the eXtended Algebraic Variational Multiscale-Multigrid-Multifractal Method (XAVM4), is proposed for large-eddy simulation of the particularly challenging problem of turbulent two-phase flow. The XAVM4 involves multifractal subgrid-scale modeling as well as a Nitsche-type extended finite element method as an approach for two-phase flow. The application of an advanced structural subgrid-scale modeling approach, in conjunction with a sharp representation of the discontinuities at the interface between two bulk fluids, promises high-fidelity large-eddy simulation of turbulent two-phase flow. The high potential of the XAVM4 is demonstrated for large-eddy simulation of turbulent two-phase bubbly channel flow, that is, turbulent channel flow carrying a single large bubble of the size of the channel half-width in this particular application.

  15. Dose Titration Algorithm Tuning (DTAT) should supersede 'the' Maximum Tolerated Dose (MTD) in oncology dose-finding trials.

    PubMed

    Norris, David C

    2017-01-01

    Background. Absent adaptive, individualized dose-finding in early-phase oncology trials, subsequent 'confirmatory' Phase III trials risk suboptimal dosing, with resulting loss of statistical power and reduced probability of technical success for the investigational therapy. While progress has been made toward explicitly adaptive dose-finding and quantitative modeling of dose-response relationships, most such work continues to be organized around a concept of 'the' maximum tolerated dose (MTD). The purpose of this paper is to demonstrate concretely how the aim of early-phase trials might be conceived, not as 'dose-finding', but as dose titration algorithm (DTA)-finding. Methods. A Phase I dosing study is simulated, for a notional cytotoxic chemotherapy drug, with neutropenia constituting the critical dose-limiting toxicity. The drug's population pharmacokinetics and myelosuppression dynamics are simulated using published parameter estimates for docetaxel. The amenability of this model to linearization is explored empirically. The properties of a simple DTA targeting a neutrophil nadir of 500 cells/mm³ using a Newton-Raphson heuristic are explored through simulation in 25 simulated study subjects. Results. Individual-level myelosuppression dynamics in the simulation model approximately linearize under simple transformations of neutrophil concentration and drug dose. The simulated dose titration exhibits largely satisfactory convergence, with great variance in individualized optimal dosing. Some titration courses exhibit overshooting. Conclusions. The large inter-individual variability in simulated optimal dosing underscores the need to replace 'the' MTD with an individualized concept of MTD_i. To illustrate this principle, the simplest possible DTA capable of realizing such a concept is demonstrated. Qualitative phenomena observed in this demonstration support discussion of the notion of tuning such algorithms. 
Although here illustrated specifically in relation to cytotoxic chemotherapy, the DTAT principle appears similarly applicable to Phase I studies of cancer immunotherapy and molecularly targeted agents.
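
    The titration logic described above can be sketched in a few lines. The sketch below is illustrative only: the log-linear dose-response model, its parameters, the starting dose, and the slope prior are hypothetical stand-ins for the published docetaxel pharmacokinetic/myelosuppression model used in the paper; only the target nadir of 500 cells/mm³ comes from the abstract.

```python
import math

TARGET_NADIR = 500.0  # target neutrophil nadir, cells/mm^3

def nadir(dose, a=8.5, b=0.015):
    # Hypothetical patient: log(nadir) falls linearly with dose.
    return math.exp(a - b * dose)

def titrate(dose=50.0, cycles=6):
    """Titrate the dose over successive cycles toward TARGET_NADIR."""
    history = [(dose, nadir(dose))]
    slope = -0.01  # prior guess for d log(nadir) / d dose
    for _ in range(cycles - 1):
        d_prev, n_prev = history[-1]
        # Newton-Raphson-style step on the log-linearized response
        d_new = d_prev + (math.log(TARGET_NADIR) - math.log(n_prev)) / slope
        n_new = nadir(d_new)
        if abs(d_new - d_prev) > 1e-9:  # refine slope from the last two cycles
            slope = (math.log(n_new) - math.log(n_prev)) / (d_new - d_prev)
        history.append((d_new, n_new))
    return history
```

    Even on this idealized patient the first step overshoots the individualized optimum before converging, mirroring the overshooting titration courses noted in the abstract.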

  16. THE MARK I BUSINESS SYSTEM SIMULATION MODEL

    DTIC Science & Technology

    The program developed a large-scale business simulation model as a vehicle for doing research in management controls. The major results of the program were the development of the Mark I business simulation model and the Simulation Package (SIMPAC). SIMPAC is a method and set of programs facilitating the construction of large simulation models. The object of this document is to describe the Mark I Corporation model, state why parts of the business were modeled as they were, and indicate the research applications of the model. (Author)

  17. Large eddy simulation of dust-uplift by haboob density currents

    NASA Astrophysics Data System (ADS)

    Huang, Q.

    2017-12-01

    Cold pool outflows have been shown, from both observations and convection-permitting models, to be a dominant source of dust uplift ("haboobs") in the summertime Sahel and Sahara, and to cause dust uplift over deserts across the world. In this paper large eddy model (LEM) simulations, which resolve the turbulence within the cold pools much better than previous studies of haboobs that used convection-permitting models, are used to investigate the winds that cause dust uplift in cold pools, and the resultant dust uplift and transport. Dust uplift largely occurs in the head of the density current, consistent with the few existing observations. In the modeled density current, dust is largely restricted to the lowest, coldest, and well-mixed layer of the cold pool outflow (below around 400 m), except above the head of the cold pool, where some dust reaches 2.5 km. This rapid transport to high altitude will contribute to long atmospheric lifetimes of large dust particles from haboobs. Decreasing the model horizontal grid-spacing from 1.0 km to 100 m resolves more turbulence, locally increasing winds, increasing mixing, and reducing the propagation speed of the density current. Total accumulated dust uplift is approximately twice as large in the 1.0 km runs as in the 100 m runs, suggesting that the representation of turbulence and mixing is significant when studying haboobs in convection-permitting runs. Simulations with surface sensible heat fluxes representative of a desert region in daytime show that increasing the surface fluxes slows the density current, due to increased mixing, but increases dust uplift rates, due to increased downward transport of momentum to the surface.

  18. High-resolution Hydrodynamic Simulation of Tidal Detonation of a Helium White Dwarf by an Intermediate Mass Black Hole

    NASA Astrophysics Data System (ADS)

    Tanikawa, Ataru

    2018-05-01

    We demonstrate tidal detonation during a tidal disruption event (TDE) of a helium (He) white dwarf (WD) with 0.45 M⊙ by an intermediate mass black hole using extremely high-resolution simulations. Tanikawa et al. have shown that the tidal detonation reported in previous studies resulted from unphysical heating due to low-resolution simulations, and that such unphysical heating occurs in three-dimensional (3D) smoothed particle hydrodynamics (SPH) simulations even with 10 million SPH particles. In order to avoid such unphysical heating, we perform 3D SPH simulations with up to 300 million SPH particles, and 1D mesh simulations that use the flow structure of the 3D SPH simulations as their initial conditions. The 1D mesh simulations have higher resolutions than the 3D SPH simulations. We show that tidal detonation occurs and confirm that this result is converged with respect to spatial resolution in both the 3D SPH and 1D mesh simulations. We find that detonation waves independently arise in the leading parts of the WD and yield large amounts of 56Ni. Although detonation waves are not generated in the trailing parts of the WD, the trailing parts would receive detonation waves generated in the leading parts and would leave large amounts of Si-group elements. Eventually, this He WD TDE would synthesize 0.30 M⊙ of 56Ni and 0.08 M⊙ of Si-group elements, and could be observed as a luminous thermonuclear transient comparable to SNe Ia.

  19. Impacts of using an ensemble Kalman filter on air quality simulations along the California-Mexico border region during Cal-Mex 2010 field campaign.

    PubMed

    Bei, Naifang; Li, Guohui; Meng, Zhiyong; Weng, Yonghui; Zavala, Miguel; Molina, L T

    2014-11-15

    The purpose of this study is to investigate the impact of using an ensemble Kalman filter (EnKF) on air quality simulations in the California-Mexico border region on two days (May 30 and June 04, 2010) during Cal-Mex 2010. The uncertainties in ozone (O3) and aerosol simulations in the border area due to the meteorological initial uncertainties were examined through ensemble simulations. The ensemble spread of surface O3 averaged over the coastal region was less than 10 ppb. The spreads in the nitrate and ammonium aerosols were substantial on both days, caused mostly by the large uncertainties in the surface temperature and humidity simulations. In general, the forecast initialized with the EnKF analysis (EnKF) improved the simulation of meteorological fields to some degree in the border region compared to the reference forecast initialized with NCEP analysis data (FCST) and the simulation with observation nudging (FDDA), which in turn led to more reasonable air quality simulations. The simulated surface O3 distributions from EnKF were consistently better than those from FCST and FDDA on both days. EnKF usually produced more reasonable simulations of nitrate and ammonium aerosols compared to the observations, but still had difficulty improving the simulations of organic and sulfate aerosols. However, discrepancies between the EnKF simulations and the measurements remained considerable, particularly for sulfate and organic aerosols, indicating that there is still ample room for improvement in the present data assimilation and/or modeling systems.
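
    For readers unfamiliar with the EnKF analysis step, a minimal scalar sketch is given below. It is a generic textbook update in the perturbed-observation form, not the WRF-based assimilation system used in the study, and all numbers are illustrative.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_err, rng):
    """Perturbed-observation EnKF analysis step for one directly observed scalar."""
    var = np.var(ensemble, ddof=1)      # forecast error variance from ensemble spread
    gain = var / (var + obs_err ** 2)   # Kalman gain for a direct observation
    perturbed = obs + obs_err * rng.standard_normal(len(ensemble))
    return ensemble + gain * (perturbed - ensemble)

# A forecast ensemble biased away from the observation is pulled toward it,
# and its spread (the analysis uncertainty) shrinks.
rng = np.random.default_rng(1)
forecast = 10.0 + 2.0 * rng.standard_normal(200)
analysis = enkf_update(forecast, 0.0, 0.5, rng)
```

    The same gain-weighted compromise between forecast spread and observation error is what lets an EnKF correct the meteorological initial conditions that drive the simulated O3 and aerosol fields.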

  20. Communication: Polymer entanglement dynamics: Role of attractive interactions

    DOE PAGES

    Grest, Gary S.

    2016-10-10

    The coupled dynamics of entangled polymers, which span broad time and length scales, govern their unique viscoelastic properties. To follow chain mobility by numerical simulations from the intermediate Rouse and reptation regimes to the late time diffusive regime, highly coarse grained models with purely repulsive interactions between monomers are widely used since they are computationally the most efficient. In this paper, using large scale molecular dynamics simulations, the effect of including the attractive interaction between monomers on the dynamics of entangled polymer melts is explored for the first time over a wide temperature range. Attractive interactions have little effect on the local packing for all temperatures T and on the chain mobility for T higher than about twice the glass transition Tg. Finally, these results, across a broad range of molecular weight, show that to study the dynamics of entangled polymer melts, the interactions can be treated as pure repulsive, confirming a posteriori the validity of previous studies and opening the way to new large scale numerical simulations.

  1. A wind energy benchmark for ABL modelling of a diurnal cycle with a nocturnal low-level jet: GABLS3 revisited

    DOE PAGES

    Rodrigo, J. Sanz; Churchfield, M.; Kosović, B.

    2016-10-03

    The third GEWEX Atmospheric Boundary Layer Studies (GABLS3) model intercomparison study, around the Cabauw met tower in the Netherlands, is revisited as a benchmark for wind energy atmospheric boundary layer (ABL) models. The case was originally developed by the boundary layer meteorology community, interested in analysing the performance of single-column and large-eddy simulation atmospheric models dealing with a diurnal cycle leading to the development of a nocturnal low-level jet. The case addresses fundamental questions related to the definition of the large-scale forcing, the interaction of the ABL with the surface and the evaluation of model results with observations. The characterization of mesoscale forcing for asynchronous microscale modelling of the ABL is discussed based on momentum budget analysis of WRF simulations. Then a single-column model is used to demonstrate the added value of incorporating different forcing mechanisms in microscale models. The simulations are evaluated in terms of wind energy quantities of interest.

  2. Large-eddy simulation of oxygen transport and depletion in waterbodies

    NASA Astrophysics Data System (ADS)

    Scalo, Carlo; Piomelli, Ugo; Boegman, Leon

    2010-11-01

    Dissolved oxygen (DO) in water plays an important role in lake and marine ecosystems. Agricultural runoff may spur excessive plant growth on the water surface; when the plants die they sink to the bottom of the water bodies and decompose, consuming oxygen. Significant environmental (and economic) damage may result from the loss of aquatic life caused by the oxygen depletion. The study of DO transport and depletion dynamics in water bodies has, therefore, become increasingly important. We study this phenomenon by large-eddy simulations performed at laboratory scale. The equations governing the transport of momentum and of a scalar (the DO) in the fluid are coupled to a biochemical model for DO depletion in the permeable sediment bed [Higashino et al., Water Res. 38(1), 2004], and to an equation for the fluid transpiration in the porous medium. The simulations are in good agreement with previous calculations and experiments. We show that the results are sensitive to the biochemical and fluid dynamical properties of the sediment, which are very difficult to determine experimentally.

  3. LES of Temporally Evolving Mixing Layers by an Eighth-Order Filter Scheme

    NASA Technical Reports Server (NTRS)

    Hadjadj, A; Yee, H. C.; Sjogreen, B.

    2011-01-01

    An eighth-order filter method for a wide range of compressible flow speeds (H.C. Yee and B. Sjogreen, Proceedings of ICOSAHOM09, June 22-26, 2009, Trondheim, Norway) is employed for large eddy simulations (LES) of temporally evolving mixing layers (TML) at different convective Mach numbers (Mc) and Reynolds numbers. The high order filter method is designed for accurate and efficient simulations of shock-free compressible turbulence, turbulence with shocklets, and turbulence with strong shocks, with minimum tuning of scheme parameters. The values of Mc considered for the TML range from the quasi-incompressible regime to the highly compressible supersonic regime. The three main characteristics of compressible TML (the self-similarity property, compressibility effects, and the presence of large-scale structures with shocklets at high Mc) are considered in the LES study. The LES results, using the same scheme parameters for all studied cases, agree well with the experimental results of Barone et al. (2006) and with the published direct numerical simulation (DNS) work of Rogers & Moser (1994) and Pantano & Sarkar (2002).

  4. Accuracy of Range Restriction Correction with Multiple Imputation in Small and Moderate Samples: A Simulation Study

    ERIC Educational Resources Information Center

    Pfaffel, Andreas; Spiel, Christiane

    2016-01-01

    Approaches to correcting correlation coefficients for range restriction have been developed under the framework of large sample theory. The accuracy of missing data techniques for correcting correlation coefficients for range restriction has thus far only been investigated with relatively large samples. However, researchers and evaluators are…

  5. Sensitivity of local air quality to the interplay between small- and large-scale circulations: a large-eddy simulation study

    NASA Astrophysics Data System (ADS)

    Wolf-Grosse, Tobias; Esau, Igor; Reuder, Joachim

    2017-06-01

    Street-level urban air pollution is a challenging concern for modern urban societies. Pollution dispersion models assume that the concentrations decrease monotonically with rising wind speed. This convenient assumption breaks down when applied to flows with local recirculations such as those found in topographically complex coastal areas. This study looks at a practically important and sufficiently common case of air pollution in a coastal valley city. Here, the observed concentrations are determined by the interaction between large-scale topographically forced and local-scale breeze-like recirculations. Analysis of a long observational dataset in Bergen, Norway, revealed that the most extreme cases of recurring wintertime air pollution episodes were accompanied by increased large-scale wind speeds above the valley. Contrary to the theoretical assumption and intuitive expectations, the maximum NO2 concentrations were not found for the lowest 10 m ERA-Interim wind speeds but in situations with wind speeds of 3 m s⁻¹. To explain this phenomenon, we investigated empirical relationships between the large-scale forcing and the local wind and air quality parameters. We conducted 16 large-eddy simulation (LES) experiments with the Parallelised Large-Eddy Simulation Model (PALM) for atmospheric and oceanic flows. The LES accounted for the realistic relief and coastal configuration as well as for the large-scale forcing and local surface condition heterogeneity in Bergen. They revealed that emerging local breeze-like circulations strongly enhance the urban ventilation and dispersion of the air pollutants in situations with weak large-scale winds. Slightly stronger large-scale winds, however, can counteract these local recirculations, leading to enhanced surface air stagnation. Furthermore, this study looks at the concrete impact of the relative configuration of warmer water bodies in the city and the major transport corridor. 
We found that a relatively small local water body acted as a barrier for the horizontal transport of air pollutants from the largest street in the valley and along the valley bottom, transporting them vertically instead and hence diluting them. We found that the stable stratification accumulates the street-level pollution from the transport corridor in shallow air pockets near the surface. The polluted air pockets are transported by the local recirculations to other less polluted areas with only slow dilution. This combination of relatively long distance and complex transport paths together with weak dispersion is not sufficiently resolved in classical air pollution models. The findings have important implications for the air quality predictions over urban areas. Any prediction not resolving these, or similar local dynamic features, might not be able to correctly simulate the dispersion of pollutants in cities.

  6. A study of self organized criticality in ion temperature gradient mode driven gyrokinetic turbulence

    NASA Astrophysics Data System (ADS)

    Mavridis, M.; Isliker, H.; Vlahos, L.; Görler, T.; Jenko, F.; Told, D.

    2014-10-01

    An investigation on the characteristics of self organized criticality (Soc) in ITG mode driven turbulence is made, with the use of various statistical tools (histograms, power spectra, Hurst exponents estimated with the rescaled range analysis, and the structure function method). For this purpose, local non-linear gyrokinetic simulations of the cyclone base case scenario are performed with the GENE software package. Although most authors concentrate on global simulations, which seem to be a better choice for such an investigation, we use local simulations in an attempt to study the locally underlying mechanisms of Soc. We also study the structural properties of radially extended structures, with several tools (fractal dimension estimate, cluster analysis, and two dimensional autocorrelation function), in order to explore whether they can be characterized as avalanches. We find that, for large enough driving temperature gradients, the local simulations exhibit most of the features of Soc, with the exception of the probability distribution of observables, which show a tail, yet they are not of power-law form. The radial structures have the same radial extent at all temperature gradients examined; radial motion (transport) though appears only at large temperature gradients, in which case the radial structures can be interpreted as avalanches.
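
    Of the statistical tools listed, the rescaled-range estimate of the Hurst exponent is compact enough to sketch. The implementation below is a generic textbook version of R/S analysis, not the one applied to the GENE output, and the chunk sizes are illustrative.

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Estimate the Hurst exponent of series x by rescaled-range (R/S) analysis."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes, rs_vals = [], []
    size = min_chunk
    while size <= n // 2:
        rs = []
        for start in range(0, n - size + 1, size):
            chunk = x[start:start + size]
            z = np.cumsum(chunk - chunk.mean())  # cumulative deviation from the mean
            r = z.max() - z.min()                # range of the cumulative sum
            s = chunk.std()
            if s > 0:
                rs.append(r / s)
        if rs:
            sizes.append(size)
            rs_vals.append(np.mean(rs))
        size *= 2
    # The slope of log(R/S) versus log(chunk size) is the Hurst exponent.
    h, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return h
```

    H near 0.5 indicates an uncorrelated signal, while H approaching 1 indicates the long-range persistence associated with SOC-like avalanching.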

  7. A study of self organized criticality in ion temperature gradient mode driven gyrokinetic turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mavridis, M.; Isliker, H.; Vlahos, L.

    2014-10-15

    An investigation on the characteristics of self organized criticality (Soc) in ITG mode driven turbulence is made, with the use of various statistical tools (histograms, power spectra, Hurst exponents estimated with the rescaled range analysis, and the structure function method). For this purpose, local non-linear gyrokinetic simulations of the cyclone base case scenario are performed with the GENE software package. Although most authors concentrate on global simulations, which seem to be a better choice for such an investigation, we use local simulations in an attempt to study the locally underlying mechanisms of Soc. We also study the structural properties of radially extended structures, with several tools (fractal dimension estimate, cluster analysis, and two dimensional autocorrelation function), in order to explore whether they can be characterized as avalanches. We find that, for large enough driving temperature gradients, the local simulations exhibit most of the features of Soc, with the exception of the probability distribution of observables, which show a tail, yet they are not of power-law form. The radial structures have the same radial extent at all temperature gradients examined; radial motion (transport) though appears only at large temperature gradients, in which case the radial structures can be interpreted as avalanches.

  8. A Pipeline for Large Data Processing Using Regular Sampling for Unstructured Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berres, Anne Sabine; Adhinarayanan, Vignesh; Turton, Terece

    2017-05-12

    Large simulation data requires a lot of time and computational resources to compute, store, analyze, visualize, and run user studies on. Today, the largest cost of a supercomputer is not hardware but maintenance, in particular energy consumption. Our goal is to balance energy consumption and the cognitive value of visualizations of the resulting data. This requires us to go through the entire processing pipeline, from simulation to user studies. To reduce the amount of resources, data can be sampled or compressed. While this adds more computation time, the computational overhead is negligible compared to the simulation time. We built a processing pipeline using the example of regular sampling. The reasons for this choice are two-fold: using a simple example reduces unnecessary complexity, as we know what to expect from the results. Furthermore, it provides a good baseline for future, more elaborate sampling methods. We measured time and energy for each test we did, and we conducted user studies on Amazon Mechanical Turk (AMT) for a range of different results we produced through sampling.
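
    As a toy illustration of the regular-sampling stage of such a pipeline, scattered samples from an unstructured grid can be binned onto a regular grid. The function name, grid shape, and unit-square domain below are invented for the example; the study's actual pipeline and data formats are not reproduced here.

```python
import numpy as np

def regular_sample(points, values, shape=(32, 32)):
    """Average unstructured 2-D samples into the cells of a regular grid.

    points: (N, 2) coordinates in [0, 1)^2; values: (N,) field samples.
    Returns the per-cell means (NaN where a cell received no sample).
    """
    nx, ny = shape
    ix = np.clip((points[:, 0] * nx).astype(int), 0, nx - 1)
    iy = np.clip((points[:, 1] * ny).astype(int), 0, ny - 1)
    flat = ix * ny + iy                       # flattened cell index per sample
    sums = np.bincount(flat, weights=values, minlength=nx * ny)
    counts = np.bincount(flat, minlength=nx * ny)
    with np.errstate(invalid="ignore"):       # empty cells become NaN (0/0)
        grid = sums / counts
    return grid.reshape(nx, ny)
```

    The coarse grid is what gets stored and visualized; the trade-off studied above is how much such down-sampling saves in time and energy versus how much cognitive value the resulting images lose.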

  9. Effects of Internal Waves on Sound Propagation in the Shallow Waters of the Continental Shelves

    DTIC Science & Technology

    2016-09-01

    Internal waves in the experiment area were largely generated by tidal forcing. Compared to simulations without internal waves, simulations accounting for the effects of internal waves…

  10. Using Large Signal Code TESLA for Wide Band Klystron Simulations

    DTIC Science & Technology

    2006-04-01

    TESLA accurately simulates the actual eigenmodes of high power klystron structures as part of its tuning procedure [3]. Wide band klystrons very often … band klystrons with two-gap two-mode resonators. The decomposition of the simulation region into an external … results of TESLA simulations for NRL S …

  11. Multiscale Hy3S: hybrid stochastic simulation for supercomputers.

    PubMed

    Salis, Howard; Sotiropoulos, Vassilios; Kaznessis, Yiannis N

    2006-02-24

    Stochastic simulation has become a useful tool to both study natural biological systems and design new synthetic ones. By capturing the intrinsic molecular fluctuations of "small" systems, these simulations produce a more accurate picture of single cell dynamics, including interesting phenomena missed by deterministic methods, such as noise-induced oscillations and transitions between stable states. However, the computational cost of the original stochastic simulation algorithm can be high, motivating the use of hybrid stochastic methods. Hybrid stochastic methods partition the system into multiple subsets and describe each subset as a different representation, such as a jump Markov, Poisson, continuous Markov, or deterministic process. By applying valid approximations and self-consistently merging disparate descriptions, a method can be considerably faster, while retaining accuracy. In this paper, we describe Hy3S, a collection of multiscale simulation programs. Building on our previous work on developing novel hybrid stochastic algorithms, we have created the Hy3S software package to enable scientists and engineers to both study and design extremely large well-mixed biological systems with many thousands of reactions and chemical species. We have added adaptive stochastic numerical integrators to permit the robust simulation of dynamically stiff biological systems. In addition, Hy3S has many useful features, including embarrassingly parallelized simulations with MPI; special discrete events, such as transcriptional and translation elongation and cell division; mid-simulation perturbations in both the number of molecules of species and reaction kinetic parameters; combinatorial variation of both initial conditions and kinetic parameters to enable sensitivity analysis; use of NetCDF optimized binary format to quickly read and write large datasets; and a simple graphical user interface, written in Matlab, to help users create biological systems and analyze data. 
We demonstrate the accuracy and efficiency of Hy3S with examples, including a large-scale system benchmark and a complex bistable biochemical network with positive feedback. The software itself is open-sourced under the GPL license and is modular, allowing users to modify it for their own purposes. Hy3S is a powerful suite of simulation programs for simulating the stochastic dynamics of networks of biochemical reactions. Its first public version enables computational biologists to more efficiently investigate the dynamics of realistic biological systems.
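
    The jump-Markov part of such a hybrid scheme is the classic Gillespie direct method. A minimal single-reaction sketch is shown below; the rate constant, population, and horizon are illustrative, and this is not Hy3S code.

```python
import math
import random

def gillespie_decay(n0=100, k=0.1, t_end=50.0, seed=42):
    """Direct-method SSA for the single reaction A -> 0 with propensity k*n."""
    rng = random.Random(seed)
    t, n = 0.0, n0
    trajectory = [(t, n)]
    while n > 0:
        a = k * n                                # total propensity
        tau = -math.log(1.0 - rng.random()) / a  # exponential waiting time
        if t + tau > t_end:
            break
        t += tau
        n -= 1                                   # fire the decay reaction
        trajectory.append((t, n))
    return trajectory
```

    Hybrid methods keep this exact discrete treatment for rare events while switching fast reaction subsets to Poisson, Langevin, or deterministic descriptions, which is where the speedup over the original SSA comes from.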

  12. Reduced order models for assessing CO 2 impacts in shallow unconfined aquifers

    DOE PAGES

    Keating, Elizabeth H.; Harp, Dylan H.; Dai, Zhenxue; ...

    2016-01-28

    Risk assessment studies of potential CO 2 sequestration projects consider many factors, including the possibility of brine and/or CO 2 leakage from the storage reservoir. Detailed multiphase reactive transport simulations have been developed to predict the impact of such leaks on shallow groundwater quality; however, these simulations are computationally expensive and thus difficult to directly embed in a probabilistic risk assessment analysis. Here we present a process for developing computationally fast reduced-order models which emulate key features of the more detailed reactive transport simulations. A large ensemble of simulations that take into account uncertainty in aquifer characteristics and CO 2/brine leakage scenarios were performed. Twelve simulation outputs of interest were used to develop response surfaces (RSs) using a MARS (multivariate adaptive regression splines) algorithm (Milborrow, 2015). A key part of this study is to compare different measures of ROM accuracy. We then show that for some computed outputs, MARS performs very well in matching the simulation data. The capability of the RS to predict simulation outputs for parameter combinations not used in RS development was tested using cross-validation. Again, for some outputs, these results were quite good. For other outputs, however, the method performs relatively poorly. Performance was best for predicting the volume of depressed-pH-plumes, and was relatively poor for predicting organic and trace metal plume volumes. We believe several factors, including the non-linearity of the problem, complexity of the geochemistry, and granularity in the simulation results, contribute to this varied performance. The reduced order models were developed principally to be used in probabilistic performance analysis where a large range of scenarios are considered and ensemble performance is calculated. We demonstrate that they effectively predict the ensemble behavior. 
    However, the performance of the RSs is much less accurate when used to predict time-varying outputs from a single simulation. If an analysis requires only a small number of scenarios to be investigated, computationally expensive physics-based simulations would likely provide more reliable results. Finally, if the aggregate behavior of a large number of realizations is the focus, as will be the case in probabilistic quantitative risk assessment, the methodology presented here is relatively robust.
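
    The hold-out validation pattern used to test the response surfaces can be sketched generically. The snippet below substitutes a simple 1-D polynomial surrogate for MARS (which requires a dedicated library) purely to show the k-fold cross-validation loop; function names and parameters are invented for the example.

```python
import numpy as np

def kfold_rmse(x, y, degree=3, k=5, seed=0):
    """k-fold cross-validated RMSE of a polynomial response surface."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    folds = np.array_split(idx, k)
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        coeffs = np.polyfit(x[train], y[train], degree)   # fit on k-1 folds
        pred = np.polyval(coeffs, x[test])                # score on the held-out fold
        errs.append(np.sqrt(np.mean((pred - y[test]) ** 2)))
    return float(np.mean(errs))
```

    A surrogate flexible enough for the underlying response scores near zero, while an under-parameterized one shows large held-out error, which is the same diagnostic used to judge the MARS response surfaces output by output.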

  13. Large-scale large eddy simulation of nuclear reactor flows: Issues and perspectives

    DOE Office of Scientific and Technical Information (OSTI.GOV); DOE PAGES

    Merzari, Elia; Obabko, Aleks; Fischer, Paul

    2016-11-03

    Numerical simulation has been an intrinsic part of nuclear engineering research since its inception. In recent years a transition is occurring toward predictive, first-principle-based tools such as computational fluid dynamics. Even with the advent of petascale computing, however, such tools still have significant limitations. In the present work some of these issues, and in particular the presence of massive multiscale separation, are discussed, as well as some of the research conducted to mitigate them. Petascale simulations at high fidelity (large eddy simulation/direct numerical simulation) were conducted with the massively parallel spectral element code Nek5000 on a series of representative problems. These simulations shed light on the requirements of several types of simulation: (1) axial flow around fuel rods, with particular attention to wall effects; (2) natural convection in the primary vessel; and (3) flow in a rod bundle in the presence of spacing devices. Finally, the focus of the work presented here is on the lessons learned and the requirements to perform these simulations at exascale. Additional physical insight gained from these simulations is also emphasized.

  15. Regional Simulations of Stratospheric Lofting of Smoke Plumes

    NASA Astrophysics Data System (ADS)

    Stenchikov, G. L.; Fromm, M.; Robock, A.

    2006-12-01

    The lifetime and spatial distribution of sooty aerosols from multiple fires that would cause major climate impact were debated in studies of the climatic and environmental consequences of a nuclear war in the 1980s. The Kuwait oil fires in 1991 did not show a cumulative effect of multiple smoke plumes on large-scale circulation systems, and smoke was mainly dispersed in the middle troposphere. However, recent observations show that smoke from large forest fires can be injected directly into the lower stratosphere by strong pyro-convective storms. Smoke plumes in the upper troposphere can be partially mixed into the lower stratosphere because of the same heating and lofting effect that was simulated in large-scale nuclear winter simulations with interactive aerosols. However, nuclear winter simulations were conducted using climate models with grid spacing of more than 100 km, which do not account for fine-scale dynamic processes. Therefore, in this study we conduct fine-scale regional simulations of the aerosol plume using the Regional Atmospheric Modeling System (RAMS) mesoscale model, modified to account for radiatively interactive tracers. To resolve fine-scale dynamic processes we use a horizontal grid spacing of 25 km and 60 vertical layers, and initialize the simulations with the NCEP reanalysis fields. We find that dense aerosol layers could be lofted from 1 to a few km per day, but this depends critically on the optical depth of the aerosol layer, the single-scattering albedo, and how fast the plume is diluted. Kuwaiti plumes from different small-area fires reached only 5-6 km altitude and were probably diffused and diluted in the lower and middle troposphere. A plume of 100 km spatial scale initially developed in the upper troposphere tends to penetrate into the stratosphere. Short-term cloud-resolving simulations of such a plume show that aerosol heating intensifies small-scale motions that tend to mix smoke-polluted air into the lower stratosphere. Regional simulations allow us to estimate more accurately the rate of lifting and spreading of aerosol clouds, but they do not reveal any dynamic processes that could prevent heating and lofting of absorbing aerosols.

  16. Distribution and radiative forcing of Asian dust and anthropogenic aerosols from East Asia simulated by SPRINTARS

    NASA Astrophysics Data System (ADS)

    Takemura, T.; Nakajima, T.; Uno, I.

    2002-12-01

    A three-dimensional aerosol transport-radiation model, SPRINTARS (Spectral Radiation-Transport Model for Aerosol Species), has been developed based on an atmospheric general circulation model of the Center for Climate System Research, University of Tokyo/National Institute for Environmental Studies, Japan, to study the effects of aerosols on the climate system and atmospheric environment. SPRINTARS successfully simulates the long-range transport of the large-scale Asian dust storms from East Asia across the North Pacific Ocean to North America in springtime 2001 and 2002. It is found from the calculated dust optical thickness that 10 to 20% of Asian dust around Japan reached North America. The simulation also reveals the importance of anthropogenic aerosols, namely carbonaceous and sulfate aerosols emitted from the industrialized areas of the East Asian continent, to air turbidity during the large-scale Asian dust storms. The simulated results are compared with a large volume of observational data on aerosol characteristics over East Asia in the spring of 2001, acquired by the intensive observation campaigns of ACE-Asia (Asian Pacific Regional Aerosol Characterization Experiment) and APEX (Asian Atmospheric Particulate Environmental Change Studies). The comparisons cover not only aerosol concentrations but also aerosol optical properties such as optical thickness, the Ångström exponent (a size index given by the log-slope of the optical thickness between two wavelengths), and single-scattering albedo. The consistency of the Ångström exponent between the simulation and observations indicates that the ratio of anthropogenic aerosols to Asian dust is reasonably simulated, which supports the simulation's suggestion that anthropogenic aerosols contribute importantly to air turbidity during the large-scale Asian dust storms. SPRINTARS simultaneously calculates the aerosol direct and indirect radiative forcings. The direct radiative forcing of Asian dust at the tropopause is negative over ocean but positive over deserts, snow, and sea ice under clear-sky conditions. The simulation also shows that it depends not only on aerosol mass concentrations but also on the vertical profiles of aerosols and cloud water.
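    The Ångström exponent mentioned above follows directly from the optical thickness at two wavelengths, α = −ln(τ₁/τ₂)/ln(λ₁/λ₂). A short sketch with illustrative values (the wavelengths and optical thicknesses below are hypothetical, not SPRINTARS output):

    ```python
    import math

    def angstrom_exponent(tau1, lam1, tau2, lam2):
        """Log-slope of optical thickness between two wavelengths.
        Values near 1-2 indicate fine (e.g. anthropogenic) particles;
        values near 0 indicate coarse particles such as dust."""
        return -math.log(tau1 / tau2) / math.log(lam1 / lam2)

    # Illustrative values: fine pollution haze vs. coarse Asian dust,
    # measured at 440 nm and 870 nm
    alpha_fine = angstrom_exponent(0.50, 440e-9, 0.25, 870e-9)
    alpha_dust = angstrom_exponent(0.30, 440e-9, 0.28, 870e-9)
    ```

    Because α separates fine from coarse particles, matching its observed value is a check that the simulated anthropogenic-aerosol-to-dust ratio is reasonable, as the abstract argues.
    
    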

  17. Impact of air-sea drag coefficient for latent heat flux on large scale climate in coupled and atmosphere stand-alone simulations

    NASA Astrophysics Data System (ADS)

    Torres, Olivier; Braconnot, Pascale; Marti, Olivier; Gential, Luc

    2018-05-01

    The turbulent fluxes across the ocean/atmosphere interface represent one of the principal driving forces of the global atmospheric and oceanic circulation. Despite decades of effort and improvements, representation of these fluxes still presents a challenge due to the small scales of the acting turbulent processes compared to the resolved scales of the models. Beyond this subgrid parameterization issue, a comprehensive understanding of the impact of air-sea interactions on the climate system is still lacking. In this paper we investigate the large-scale impacts of the transfer coefficient used to compute turbulent heat fluxes in the IPSL-CM4 climate model, in which the surface bulk formula is modified. Analyzing both atmosphere-only and coupled ocean-atmosphere general circulation model (AGCM, OAGCM) simulations allows us to study the direct effect of this modification and the mechanisms of adjustment to it. We focus on the representation of latent heat flux in the tropics. We show that, for a given parameterization, the heat transfer coefficients are highly similar between AGCM and OAGCM simulations. Although the same areas are affected in both kinds of simulations, the differences in surface heat fluxes are substantial. A regional modification of the heat transfer coefficient has more impact than a uniform modification in AGCM simulations, while in OAGCM simulations the opposite is observed. By studying the global energetics and the atmospheric circulation response to the modification, we highlight the role of the ocean in damping a large part of the disturbance. Modifying the heat exchange coefficient changes the way the coupled system works because of the link between atmospheric circulation and SST and the different feedbacks between ocean and atmosphere. The adjustment that takes place implies a balance of net incoming solar radiation that is the same in all simulations. As there is no change in model physics other than the drag coefficient, we obtain similar latent heat fluxes between coupled simulations with different atmospheric circulations. Finally, we analyze the impact of model tuning and show that it can offset part of the feedbacks.
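    The bulk formula whose transfer coefficient is varied above has the standard aerodynamic form LH = ρₐ Lᵥ C_E |U| (q_s − q_a). The sketch below uses the common Tetens/Bolton saturation vapor pressure approximation; the coefficient value and meteorological inputs are illustrative, not those of IPSL-CM4.

    ```python
    import math

    def sat_specific_humidity(t_celsius, p_pa):
        """Saturation specific humidity over water (Tetens/Bolton form)."""
        e_s = 611.2 * math.exp(17.67 * t_celsius / (t_celsius + 243.5))  # Pa
        return 0.622 * e_s / (p_pa - 0.378 * e_s)

    def latent_heat_flux(sst_c, q_air, wind, c_e,
                         rho_air=1.2, l_v=2.45e6, p=101325.0):
        """Aerodynamic bulk formula for latent heat flux (W m^-2).
        The 0.98 factor is the usual salinity reduction of saturation
        vapor pressure at the sea surface."""
        q_s = 0.98 * sat_specific_humidity(sst_c, p)
        return rho_air * l_v * c_e * wind * (q_s - q_air)

    # Illustrative tropical conditions: 28 degC SST, 7 m/s wind, C_E ~ 1.15e-3
    lh = latent_heat_flux(sst_c=28.0, q_air=0.016, wind=7.0, c_e=1.15e-3)
    ```

    The flux scales linearly with C_E, which is why a regional versus uniform change in the coefficient maps directly onto regional versus uniform flux perturbations before the coupled system adjusts.
    
    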

  18. The ATLAS Simulation Infrastructure

    DOE PAGES

    Aad, G.; Abbott, B.; Abdallah, J.; ...

    2010-09-25

    The simulation software for the ATLAS Experiment at the Large Hadron Collider is being used for large-scale production of events on the LHC Computing Grid. This simulation requires many components, from the generators that simulate particle collisions through the packages simulating the response of the various detectors and triggers. All of these components come together under the ATLAS simulation infrastructure. In this paper, that infrastructure is discussed, including the parts supporting the detector description, interfacing the event generation, and combining the GEANT4 simulation of the response of the individual detectors. Also described are the tools allowing the software validation, performance testing, and the validation of the simulated output against known physics processes.

  19. Interactions between the Somali Current eddies during the summer monsoon: insights from a numerical study

    NASA Astrophysics Data System (ADS)

    Barnier, B.; Akuetevi, C. Q.; Verron, J. A.; Molines, J. M.; Lecointre, A.

    2016-02-01

    During the summer monsoon, the ocean circulation of the northwestern Indian Ocean is characterized by large anticyclonic circulation features that are part of the Somali Current system. In the vicinity of the equator is the Southern Gyre (SG), a large retroflection loop of the East African Coastal Current, generated after this current (pushed by the southwesterly winds) has crossed the equator. North of it is the Great Whirl (GW), a large anticyclone which exhibits intense swirling currents. Eddy-resolving hindcast simulations of the global ocean circulation are used to study the fast interactions between these large anticyclonic eddies. The present investigation identifies the origin and subsequent development of the cyclones flanked upon the GW, previously identified in satellite observations, and establishes that similar cyclones are also flanked upon the SG. These cyclones are identified as major actors in mixing water masses within the large eddies and off the coast of Somalia. All simulations show that during the period when the Southwest Monsoon is well established, the SG moves northward along the Somali coast and encounters the GW. The interaction between the SG and the GW is a collision without merging, during which the GW is pushed to the east of Socotra Island, sheds several smaller patches of anticyclonic vorticity, and often reforms into the Socotra Eddy, thus suggesting a formation mechanism for the Socotra Eddy. During this process, the GW gives up its place to the SG, which in turn becomes a new Great Whirl. This process is robust across the three simulations.

  20. HI Fluctuations at Large Redshifts: III - Simulating the Signal Expected at GMRT

    NASA Astrophysics Data System (ADS)

    Bharadwaj, Somnath; Srikant, P. S.

    2004-03-01

    We simulate the distribution of neutral hydrogen (HI) at redshifts z = 1.3 and 3.4 using a cosmological N-body simulation along with a prescription for assigning HI masses to the particles. The HI is distributed in clouds whose properties are consistent with those of the damped Lyman-α absorption systems (DLAs) seen in quasar spectra. The clustering properties of these clouds are identical to those of the dark matter. We use this to simulate the redshifted HI emission expected at 610 MHz and 325 MHz, two of the observing bands at the GMRT. These are used to predict the correlations expected between the complex visibilities measured at different baselines and frequencies in radio-interferometric observations with the GMRT. The visibility correlations directly probe the power spectrum of HI fluctuations at the epoch when the HI emission originated, and this holds the possibility of using HI observations to study large-scale structures at high z.
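    The relation between a fluctuation field and the power spectrum it is said to probe can be sketched with a direct DFT periodogram on a toy 1-D density field. This is purely illustrative; real GMRT analyses work with 2-D visibilities rather than a gridded field.

    ```python
    import cmath
    import math
    import random

    def power_spectrum(field):
        """Direct-DFT periodogram P(k) = |delta_k|^2 / N of a 1-D field."""
        n = len(field)
        spec = []
        for k in range(n):
            dk = sum(field[j] * cmath.exp(-2j * math.pi * k * j / n)
                     for j in range(n))
            spec.append(abs(dk) ** 2 / n)
        return spec

    random.seed(42)
    delta = [random.gauss(0.0, 1.0) for _ in range(32)]  # toy overdensity field
    p_k = power_spectrum(delta)
    ```

    By Parseval's theorem the periodogram integrates to the field's variance, which is the sense in which visibility correlations at different baselines sample the underlying HI power spectrum.
    
    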

  1. Global kinetic simulations of neoclassical toroidal viscosity in low-collisional perturbed tokamak plasmas

    NASA Astrophysics Data System (ADS)

    Matsuoka, Seikichi; Idomura, Yasuhiro; Satake, Shinsuke

    2017-10-01

    The neoclassical toroidal viscosity (NTV) caused by a non-axisymmetric magnetic field perturbation is numerically studied using two global kinetic simulations with different numerical approaches. Both simulations reproduce similar collisionality (νb*) dependencies over wide νb* ranges. It is demonstrated that resonant structures in velocity space predicted by the conventional superbanana-plateau theory exist in the small-banana-width limit, while the resonances diminish when the banana width becomes large. It is also found that fine-scale structures are generated in velocity space as νb* decreases in the large-banana-width simulations, leading to the νb* dependency of the NTV. From analyses of the particle orbits, it is found that a finite k∥ mode structure along the bounce motion appears owing to the finite orbit width, and it suffers from bounce phase mixing, suggesting that the fine-scale structures are generated by a mechanism similar to the parallel phase mixing of passing particles.

  2. Direct numerical simulation of broadband trailing edge noise from a NACA 0012 airfoil

    NASA Astrophysics Data System (ADS)

    Mehrabadi, Mohammad; Bodony, Daniel

    2016-11-01

    Commercial jet-powered aircraft produce unwanted noise at takeoff and landing when they are close to near-airport communities. Modern high-bypass-ratio turbofan engines have reduced jet exhaust noise sufficiently that noise from the main fan is now significant. In preparation for a large-eddy simulation of the NASA/GE Source Diagnostic Test Fan, we study the broadband noise due to the turbulent flow on a NACA 0012 airfoil at zero angle of attack, a chord-based Reynolds number of 408,000, and a Mach number of 0.115 using direct numerical simulation (DNS) and wall-modeled large-eddy simulation (WMLES). The flow conditions correspond to existing experimental data. We investigate the roughness-induced transition to turbulence and sound generation from a DNS perspective, and examine how these two features are captured by a wall model. Comparisons between the DNS- and WMLES-predicted noise are made and provide guidance on the use of WMLES for broadband fan noise prediction. AeroAcoustics Research Consortium.

  3. A new class of finite element variational multiscale turbulence models for incompressible magnetohydrodynamics

    DOE PAGES

    Sondak, D.; Shadid, J. N.; Oberai, A. A.; ...

    2015-04-29

    New large eddy simulation (LES) turbulence models for incompressible magnetohydrodynamics (MHD) derived from the variational multiscale (VMS) formulation for finite element simulations are introduced. The new models include the variational multiscale formulation, a residual-based eddy viscosity model, and a mixed model that combines both of these component models. Each model contains terms that are proportional to the residual of the incompressible MHD equations and is therefore numerically consistent. Moreover, each model is also dynamic, in that its effect vanishes when this residual is small. The new models are tested on the decaying MHD Taylor-Green vortex at low and high Reynolds numbers. The evaluation of the models is based on comparisons with available data from direct numerical simulations (DNS) of the time evolution of energies as well as energy spectra at various discrete times. Finally, a numerical study on a sequence of meshes demonstrates that the large eddy simulation approaches the DNS solution for these quantities with spatial mesh refinement.

  4. The island coalescence problem: Scaling of reconnection in extended fluid models including higher-order moments

    DOE PAGES

    Ng, Jonathan; Huang, Yi -Min; Hakim, Ammar; ...

    2015-11-05

    As modeling of collisionless magnetic reconnection in most space plasmas with realistic parameters is beyond the capability of today's simulations, due to the separation between global and kinetic length scales, it is important to establish scaling relations in model problems so as to extrapolate to realistic scales. Furthermore, large-scale particle-in-cell simulations of island coalescence have shown that the time-averaged reconnection rate decreases with system size, while fluid systems at such large scales in the Hall regime have not been studied. Here, we perform the complementary resistive magnetohydrodynamic (MHD), Hall MHD, and two-fluid simulations using a ten-moment model with the same geometry. In contrast to the standard Harris sheet reconnection problem, Hall MHD is insufficient to capture the physics of the reconnection region. Additionally, motivated by the results of a recent set of hybrid simulations which show the importance of ion kinetics in this geometry, we evaluate the efficacy of the ten-moment model in reproducing such results.

  5. Simulations of cloud-radiation interaction using large-scale forcing derived from the CINDY/DYNAMO northern sounding array

    DOE PAGES

    Wang, Shuguang; Sobel, Adam H.; Fridlind, Ann; ...

    2015-09-25

    The recently completed CINDY/DYNAMO field campaign observed two Madden-Julian oscillation (MJO) events in the equatorial Indian Ocean from October to December 2011. Prior work has indicated that the moist static energy anomalies in these events grew and were sustained to a significant extent by radiative feedbacks. We present here a study of radiative fluxes and clouds in a set of cloud-resolving simulations of these MJO events. The simulations are driven by the large-scale forcing dataset derived from the DYNAMO northern sounding array observations and carried out in a doubly periodic domain using the Weather Research and Forecasting (WRF) model. Simulated cloud properties and radiative fluxes are compared to those derived from the S-PolKa radar and satellite observations. Furthermore, to accommodate the uncertainty in simulated cloud microphysics, a number of single-moment (1M) and double-moment (2M) microphysical schemes in the WRF model are tested.

  6. An Agent-Based Epidemic Simulation of Social Behaviors Affecting HIV Transmission among Taiwanese Homosexuals

    PubMed Central

    2015-01-01

    Computational simulations are currently used to identify epidemic dynamics, to test potential prevention and intervention strategies, and to study the effects of social behaviors on HIV transmission. The author describes an agent-based epidemic simulation model of a network of individuals who participate in high-risk sexual practices, using number of partners, condom usage, and relationship length to distinguish between high- and low-risk populations. Two new concepts—free links and fixed links—are used to indicate tendencies among individuals who either have large numbers of short-term partners or stay in long-term monogamous relationships. An attempt was made to reproduce epidemic curves of reported HIV cases among male homosexuals in Taiwan prior to using the agent-based model to determine the effects of various policies on epidemic dynamics. Results suggest that when suitable adjustments are made based on available social survey statistics, the model accurately simulates real-world behaviors on a large scale. PMID:25815047
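    The free-link/fixed-link distinction can be sketched as a minimal agent-based SI model in which a high-turnover group re-pairs randomly each step while monogamous pairs keep the same partner. Every parameter below is hypothetical and far simpler than the paper's calibrated model.

    ```python
    import random

    def step(status, fixed_pairs, free_agents, p_transmit, rng):
        """One time step: fixed pairs contact each other; free agents
        contact a random free partner. Infection is one-way S -> I."""
        contacts = list(fixed_pairs)
        free = free_agents[:]
        rng.shuffle(free)
        contacts += list(zip(free[::2], free[1::2]))  # random casual pairings
        for a, b in contacts:
            if status[a] != status[b] and rng.random() < p_transmit:
                status[a] = status[b] = 'I'

    rng = random.Random(7)
    n = 200
    status = ['I' if i < 5 else 'S' for i in range(n)]       # 5 index cases
    free_agents = list(range(100))                           # high-turnover group
    fixed_pairs = [(i, i + 1) for i in range(100, n - 1, 2)] # monogamous pairs
    for _ in range(20):
        step(status, fixed_pairs, free_agents, p_transmit=0.1, rng=rng)
    infected = status.count('I')
    ```

    Because the index cases sit in the free-link group and fixed pairs never re-pair, the epidemic is confined to the free-link population: the toy version of the observation that long-term monogamous relationships limit spread.
    
    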

  7. Study of unsteady flow simulation of backward impeller with non-uniform casing

    NASA Astrophysics Data System (ADS)

    Swe, War War Min; Morimatsu, Hiroya; Hayashi, Hidechito; Okumura, Tetsuya; Oda, Ippei

    2017-06-01

    The flow characteristics of centrifugal fans with different blade outlet angles are discussed based on steady and unsteady simulations of a rectangular-casing fan. The blade outlet angles of the impellers are 35° and 25°, respectively. The unsteady flow behavior in the passage of the 35° impeller is quite different from the steady flow behavior: large flow separation occurs in both the steady and unsteady flow fields, the flow distribution in the circumferential direction varies remarkably, and flow separation on the blade occurs only at the back region of the fan. In the 25° impeller, by contrast, the steady flow behavior is almost consistent with the unsteady flow behavior: the circumferential flow distribution does not vary much and flow separation on the blade hardly occurs. When the circumferential variation of the flow in the impeller is large, the steady flow simulation does not agree with the unsteady flow simulation.

  8. Reversible Parallel Discrete-Event Execution of Large-scale Epidemic Outbreak Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S; Seal, Sudip K

    2010-01-01

    The spatial scale, runtime speed, and behavioral detail of epidemic outbreak simulations together require the use of large-scale parallel processing. In this paper, an optimistic parallel discrete event execution of a reaction-diffusion simulation model of epidemic outbreaks is presented, with an implementation over the μsik simulator. Rollback support is achieved with the development of a novel reversible model that combines reverse computation with a small amount of incremental state saving. Parallel speedup and other runtime performance metrics of the simulation are tested on a small (8,192-core) Blue Gene/P system, while scalability is demonstrated on 65,536 cores of a large Cray XT5 system. Scenarios representing large population sizes (up to several hundred million individuals in the largest case) are exercised.
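    The combination of reverse computation with incremental state saving can be sketched as follows: arithmetic state updates are undone by running them backwards, while the one quantity that cannot be recomputed (a random draw) is saved. This is a minimal illustration of the idea, not the μsik implementation.

    ```python
    import random

    class Region:
        """Epidemic state plus a small stack of saved values for the
        few quantities that cannot be recomputed in reverse."""
        def __init__(self, susceptible, infected):
            self.s = susceptible
            self.i = infected
            self.saved = []   # incremental state saving for RNG draws

    def infect_forward(region, rng):
        k = min(rng.randint(1, 3), region.s)  # random number of new infections
        region.saved.append(k)                # save only what reverse needs
        region.s -= k                         # these updates are undone by
        region.i += k                         # reverse computation (arithmetic)

    def infect_reverse(region):
        k = region.saved.pop()                # restore the saved draw...
        region.s += k                         # ...and invert the arithmetic
        region.i -= k

    rng = random.Random(1)
    region = Region(susceptible=1000, infected=10)
    for _ in range(5):
        infect_forward(region, rng)
    for _ in range(5):                        # rollback: exact restoration
        infect_reverse(region)
    ```

    In an optimistic parallel run, a straggler event arriving out of timestamp order would trigger exactly this reverse sequence, which is far cheaper in memory than checkpointing the full state of every region.
    
    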

  9. Improvement of CFD Methods for Modeling Full Scale Circulating Fluidized Bed Combustion Systems

    NASA Astrophysics Data System (ADS)

    Shah, Srujal; Klajny, Marcin; Myöhänen, Kari; Hyppänen, Timo

    With the currently available methods of computational fluid dynamics (CFD), simulating full-scale circulating fluidized bed combustors is very challenging. To capture the complex fluidization process, the calculation cells must be small and the calculation must be transient with a small time step. For full-scale systems, these requirements lead to very large meshes and very long calculation times, making such simulations difficult in practice. This study investigates the cell size and time step size required for accurate simulations, and the filtering effects caused by a coarser mesh and a longer time step. A modeling study of a full-scale CFB furnace is presented and the model results are compared with experimental data.
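    The scale problem described above can be made concrete with back-of-the-envelope arithmetic: an explicit transient solve is limited by the CFL condition Δt ≤ C·Δx/u, while the cell count of a uniform mesh grows as the cube of the resolution. The furnace dimensions, velocity, and Courant number below are illustrative, not from the study.

    ```python
    def cfl_time_step(dx, u, courant=0.5):
        """Largest stable explicit time step for cell size dx and velocity u."""
        return courant * dx / u

    def cell_count(volume, dx):
        """Uniform-mesh cell count for a given cell size."""
        return volume / dx ** 3

    # Hypothetical full-scale CFB furnace: 10 m x 10 m x 40 m, gas at ~5 m/s
    volume = 10.0 * 10.0 * 40.0
    coarse = (cell_count(volume, 0.10), cfl_time_step(0.10, 5.0))
    fine = (cell_count(volume, 0.02), cfl_time_step(0.02, 5.0))
    ```

    Refining from 10 cm to 2 cm cells multiplies the mesh by 125 while cutting the stable time step by 5, so the total work grows by roughly three orders of magnitude; this is why coarse-mesh filtering effects matter in practice.
    
    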

  10. Nitrogen-Related Constraints of Carbon Uptake by Large-Scale Forest Expansion: Simulation Study for Climate Change and Management Scenarios

    NASA Astrophysics Data System (ADS)

    Kracher, Daniela

    2017-11-01

    Increasing forest area has the potential to increase the terrestrial carbon (C) sink. However, the efficiency of C sequestration depends on the availability of nutrients such as nitrogen (N), which is affected by climatic conditions and management practices. In this study, I analyze how N limitation affects C sequestration by afforestation and how it is influenced by individual climate variables, increased harvest, and fertilizer application. To this end, JSBACH, the land component of the Earth system model of the Max Planck Institute for Meteorology, is applied in idealized simulation experiments. In those simulations, large-scale afforestation increases the terrestrial C sink in the 21st century by around 100 Pg C compared to a business-as-usual land-use scenario; N limitation reduces C sequestration by roughly the same amount. The simulations make obvious the relevance of compensating effects between the uptake and release of carbon dioxide by plant productivity and soil decomposition, respectively; N limitation of the two fluxes compensates particularly in the tropics. Increased mineralization under global warming promotes forest expansion, which is otherwise restricted by N availability. Because higher plant productivity and higher soil respiration compensate, however, the global net effect of warming on C sequestration is rather small. Fertilizer application and increased harvest enhance C sequestration as well as boreal forest expansion. The additional C sequestration achieved by fertilizer application is offset in large part by additional emissions of nitrous oxide.

  11. Variability of North Atlantic Hurricane Frequency in a Large Ensemble of High-Resolution Climate Simulations

    NASA Astrophysics Data System (ADS)

    Mei, W.; Kamae, Y.; Xie, S. P.

    2017-12-01

    Forced and internal variability of North Atlantic hurricane frequency during 1951-2010 is studied using a large ensemble of climate simulations by a 60-km atmospheric general circulation model forced by observed sea surface temperatures (SSTs). The simulations capture well the interannual-to-decadal variability of hurricane frequency in best track data, and further suggest a possible underestimate of hurricane counts in the current best track data prior to 1966, when satellite measurements were unavailable. A genesis potential index (GPI) averaged over the Main Development Region (MDR) accounts for more than 80% of the forced variations in hurricane frequency, with potential intensity and vertical wind shear being the dominant factors. In line with previous studies, the difference between MDR SST and tropical mean SST is a simple but useful predictor; a one-degree increase in this SST difference produces 7.1±1.4 more hurricanes. The hurricane frequency also exhibits internal variability that is comparable in magnitude to the interannual variability. The 100-member ensemble allows us to address the following important questions: (1) Are the observations equivalent to one realization of such a large ensemble? (2) How many ensemble members are needed to reproduce the variability in observations and in the forced component of the simulations? The sources of the internal variability in hurricane frequency will be identified and discussed. The results provide an explanation for the relatively weak correlation (~0.6) between MDR GPI and hurricane frequency on interannual timescales in observations.
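    The stated sensitivity (7.1±1.4 hurricanes per degree of SST difference) is the slope of a least-squares regression of hurricane counts on relative SST. A sketch with synthetic, noise-free data (real counts scatter about the line, which is where the ±1.4 uncertainty comes from):

    ```python
    def ols_slope(x, y):
        """Ordinary least-squares slope of y regressed on x."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        var = sum((a - mx) ** 2 for a in x)
        return cov / var

    # Relative SST: MDR SST minus tropical-mean SST (degC), illustrative values
    rel_sst = [-0.6, -0.4, -0.2, 0.0, 0.2, 0.4, 0.6]
    counts = [7.1 * d + 6.0 for d in rel_sst]  # idealized 7.1 hurricanes/degC
    slope = ols_slope(rel_sst, counts)
    ```

    With the internal variability the abstract describes added to `counts`, repeated estimates of `slope` would spread around 7.1, which is exactly the large-ensemble question of how many members are needed to pin down the forced signal.
    
    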

  12. Suppressing correlations in massively parallel simulations of lattice models

    NASA Astrophysics Data System (ADS)

    Kelling, Jeffrey; Ódor, Géza; Gemming, Sibylle

    2017-11-01

    For lattice Monte Carlo simulations, parallelization is crucial to make studies of large systems and long simulation times feasible, while sequential simulations remain the gold standard for correlation-free dynamics. Here, various domain decomposition schemes are compared, concluding with one that delivers virtually correlation-free simulations on GPUs. Extensive simulations of the octahedron model for 2+1 dimensional Kardar-Parisi-Zhang surface growth, which is very sensitive to correlations in the site-selection dynamics, were performed to show self-consistency of the parallel runs and agreement with the sequential algorithm. We present a GPU implementation providing a speedup of about 30× over a parallel CPU implementation on a single socket and at least 180× with respect to the sequential reference.
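    A standard family of domain decomposition schemes for lattice Monte Carlo is the checkerboard (two-color) decomposition: sites of one parity are never nearest neighbors, so they can all be updated concurrently without conflicts. The sketch below is a serial illustration of that independence with a toy local rule, not the paper's octahedron-model GPU scheme.

    ```python
    import random

    def checkerboard_sweep(lattice, parity, rng):
        """Update every site whose (i+j) parity matches. On a GPU all
        these updates could run concurrently: same-color sites are never
        nearest neighbors, so no update reads another's output."""
        n = len(lattice)
        for i in range(n):
            for j in range(n):
                if (i + j) % 2 == parity:
                    # toy local rule: relax toward the mean of the four
                    # (periodic) neighbors, plus a small stochastic kick
                    nb = (lattice[(i - 1) % n][j] + lattice[(i + 1) % n][j]
                          + lattice[i][(j - 1) % n] + lattice[i][(j + 1) % n]) / 4.0
                    lattice[i][j] = nb + rng.gauss(0.0, 0.01)

    rng = random.Random(3)
    n = 8
    lattice = [[rng.random() for _ in range(n)] for _ in range(n)]
    for _ in range(10):
        checkerboard_sweep(lattice, parity=0, rng=rng)  # all "black" sites
        checkerboard_sweep(lattice, parity=1, rng=rng)  # then all "white" sites
    ```

    The catch, which the paper addresses, is that a fixed decomposition imposes a deterministic site-visit pattern that can introduce correlations absent from the sequential random-site algorithm.
    
    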

  13. The effects of seed dispersal on the simulation of long-term forest landscape change

    Treesearch

    Hong S. He; David J. Mladenoff

    1999-01-01

    The study of forest landscape change requires an understanding of the complex interactions of both spatial and temporal factors. Traditionally, forest gap models have been used to simulate change on small and independent plots. While gap models are useful in examining forest ecological dynamics across temporal scales, large, spatial processes, such as seed dispersal,...

  14. The FireBGCv2 landscape fire and succession model: a research simulation platform for exploring fire and vegetation dynamics

    Treesearch

    Robert E. Keane; Rachel A. Loehman; Lisa M. Holsinger

    2011-01-01

    Fire management faces important emergent issues in the coming years such as climate change, fire exclusion impacts, and wildland-urban development, so new, innovative means are needed to address these challenges. Field studies, while preferable and reliable, will be problematic because of the large time and space scales involved. Therefore, landscape simulation...

  15. Use of cloud radar Doppler spectra to evaluate stratocumulus drizzle size distributions in large-eddy simulations with size-resolved microphysics

    DOE PAGES

    Remillard, J.; Fridlind, Ann M.; Ackerman, A. S.; ...

    2017-09-20

    Here, a case study of persistent stratocumulus over the Azores is simulated using two independent large-eddy simulation (LES) models with bin microphysics, and forward-simulated cloud radar Doppler moments and spectra are compared with observations. Neither model is able to reproduce the monotonic increase of downward mean Doppler velocity with increasing reflectivity that is observed under a variety of conditions, but for differing reasons. To a varying degree, both models also exhibit a tendency to produce too many of the largest droplets, leading to excessive skewness in Doppler velocity distributions, especially below cloud base. Excessive skewness appears to be associated with an insufficiently sharp reduction in droplet number concentration at diameters larger than ~200 μm, where a pronounced shoulder is found for in situ observations and a sharp reduction in reflectivity size distribution is associated with relatively narrow observed Doppler spectra. Effectively using LES with bin microphysics to study drizzle formation and evolution in cloud Doppler radar data evidently requires reducing numerical diffusivity in the treatment of the stochastic collection equation; if that is accomplished sufficiently to reproduce typical spectra, progress toward understanding drizzle processes is likely.
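    The link between a drop size distribution and Doppler-spectrum skewness can be sketched by computing reflectivity-weighted velocity moments, since each size bin contributes to the radar spectrum in proportion to N(D)·D⁶. The exponential spectrum and the power-law fall speed v ≈ 3.78·D^0.67 (D in mm, v in m/s) are common approximations used here only for illustration.

    ```python
    import math

    def doppler_moments(diameters_mm, number_conc):
        """Reflectivity-weighted mean, width, and skewness of fall velocity.
        Reflectivity weight per bin is N(D) * D^6; fall speed is an
        approximate power law for raindrops/drizzle."""
        v = [3.78 * d ** 0.67 for d in diameters_mm]           # m/s
        w = [n * d ** 6 for n, d in zip(number_conc, diameters_mm)]
        wsum = sum(w)
        mean = sum(wi * vi for wi, vi in zip(w, v)) / wsum
        var = sum(wi * (vi - mean) ** 2 for wi, vi in zip(w, v)) / wsum
        skew = (sum(wi * (vi - mean) ** 3 for wi, vi in zip(w, v))
                / wsum / var ** 1.5)
        return mean, math.sqrt(var), skew

    # Exponential (Marshall-Palmer-like) spectrum, 0.05 to 3.0 mm bins
    d = [0.05 * (i + 1) for i in range(60)]
    n_exp = [math.exp(-2.0 * di) for di in d]
    mean_v, width, skew = doppler_moments(d, n_exp)
    ```

    Because the D⁶ weighting amplifies the largest bins, even a modest surplus of big drops in a simulated distribution inflates the skewness of the forward-simulated spectrum, which is the diagnostic the abstract uses.
    
    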

  16. Use of cloud radar Doppler spectra to evaluate stratocumulus drizzle size distributions in large-eddy simulations with size-resolved microphysics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Remillard, J.; Fridlind, Ann M.; Ackerman, A. S.

Here, a case study of persistent stratocumulus over the Azores is simulated using two independent large-eddy simulation (LES) models with bin microphysics, and forward-simulated cloud radar Doppler moments and spectra are compared with observations. Neither model is able to reproduce the monotonic increase of downward mean Doppler velocity with increasing reflectivity that is observed under a variety of conditions, but for differing reasons. To a varying degree, both models also exhibit a tendency to produce too many of the largest droplets, leading to excessive skewness in Doppler velocity distributions, especially below cloud base. Excessive skewness appears to be associated with an insufficiently sharp reduction in droplet number concentration at diameters larger than ~200 μm, where a pronounced shoulder is found for in situ observations and a sharp reduction in reflectivity size distribution is associated with relatively narrow observed Doppler spectra. Effectively using LES with bin microphysics to study drizzle formation and evolution in cloud Doppler radar data evidently requires reducing numerical diffusivity in the treatment of the stochastic collection equation; if that is accomplished sufficiently to reproduce typical spectra, progress toward understanding drizzle processes is likely.

  17. Technical Note: A minimally invasive experimental system for pCO2 manipulation in plankton cultures using passive gas exchange (atmospheric carbon control simulator)

    NASA Astrophysics Data System (ADS)

    Love, Brooke A.; Olson, M. Brady; Wuori, Tristen

    2017-05-01

    As research into the biotic effects of ocean acidification has increased, the methods for simulating these environmental changes in the laboratory have multiplied. Here we describe the atmospheric carbon control simulator (ACCS) for the maintenance of plankton under controlled pCO2 conditions, designed for species sensitive to the physical disturbance introduced by the bubbling of cultures and for studies involving trophic interaction. The system consists of gas mixing and equilibration components coupled with large-volume atmospheric simulation chambers. These chambers allow gas exchange to counteract the changes in carbonate chemistry induced by the metabolic activity of the organisms. The system is relatively low cost, very flexible, and when used in conjunction with semi-continuous culture methods, it increases the density of organisms kept under realistic conditions, increases the allowable time interval between dilutions, and/or decreases the metabolically driven change in carbonate chemistry during these intervals. It accommodates a large number of culture vessels, which facilitate multi-trophic level studies and allow the tracking of variable responses within and across plankton populations to ocean acidification. It also includes components that increase the reliability of gas mixing systems using mass flow controllers.

  18. A Systematic Review of Virtual Reality Simulators for Robot-assisted Surgery.

    PubMed

    Moglia, Andrea; Ferrari, Vincenzo; Morelli, Luca; Ferrari, Mauro; Mosca, Franco; Cuschieri, Alfred

    2016-06-01

    No single large published randomized controlled trial (RCT) has confirmed the efficacy of virtual simulators in the acquisition of skills to the standard required for safe clinical robotic surgery. This remains the main obstacle for the adoption of these virtual simulators in surgical residency curricula. To evaluate the level of evidence in published studies on the efficacy of training on virtual simulators for robotic surgery. In April 2015 a literature search was conducted on PubMed, Web of Science, Scopus, Cochrane Library, the Clinical Trials Database (US) and the Meta Register of Controlled Trials. All publications were scrutinized for relevance to the review and for assessment of the levels of evidence provided using the classification developed by the Oxford Centre for Evidence-Based Medicine. The publications included in the review consisted of one RCT and 28 cohort studies on validity, and seven RCTs and two cohort studies on skills transfer from virtual simulators to robot-assisted surgery. Simulators were rated good for realism (face validity) and for usefulness as a training tool (content validity). However, the studies included used various simulation training methodologies, limiting the assessment of construct validity. The review confirms the absence of any consensus on which tasks and metrics are the most effective for the da Vinci Skills Simulator and dV-Trainer, the most widely investigated systems. Although there is consensus for the RoSS simulator, this is based on only two studies on construct validity involving four exercises. One study on initial evaluation of an augmented reality module for partial nephrectomy using the dV-Trainer reported high correlation (r=0.8) between in vivo porcine nephrectomy and a virtual renorrhaphy task according to the overall Global Evaluation Assessment of Robotic Surgery (GEARS) score. 
In one RCT on skills transfer, the experimental group outperformed the control group, with a significant difference in overall GEARS score (p=0.012) during performance of urethrovesical anastomosis on an inanimate model. Only one study included assessment of a surgical procedure on real patients: subjects trained on a virtual simulator outperformed the control group following traditional training. However, besides the small numbers, this study was not randomized. There is an urgent need for a large, well-designed, preferably multicenter RCT to study the efficacy of virtual simulation for the acquisition of competence in, and safe execution of, clinical robot-assisted surgery. We reviewed the literature on virtual simulators for robot-assisted surgery. Validity studies used various simulation training methodologies. It is not clear which exercises and metrics are the most effective in distinguishing different levels of experience on the da Vinci robot. There is no reported evidence of skills transfer from simulation to clinical surgery on real patients. Copyright © 2015 European Association of Urology. Published by Elsevier B.V. All rights reserved.

  19. Transient Analysis Generator /TAG/ simulates behavior of large class of electrical networks

    NASA Technical Reports Server (NTRS)

    Thomas, W. J.

    1967-01-01

    Transient Analysis Generator program simulates both transient and dc steady-state behavior of a large class of electrical networks. It generates a special analysis program for each circuit described in an easily understood and manipulated programming language. A generator or preprocessor and a simulation system make up the TAG system.

  20. Mapping Dark Matter in Simulated Galaxy Clusters

    NASA Astrophysics Data System (ADS)

    Bowyer, Rachel

    2018-01-01

    Galaxy clusters are the most massive bound objects in the Universe with most of their mass being dark matter. Cosmological simulations of structure formation show that clusters are embedded in a cosmic web of dark matter filaments and large scale structure. It is thought that these filaments are found preferentially close to the long axes of clusters. We extract galaxy clusters from the simulations "cosmo-OWLS" in order to study their properties directly and also to infer their properties from weak gravitational lensing signatures. We investigate various stacking procedures to enhance the signal of the filaments and large scale structure surrounding the clusters to better understand how the filaments of the cosmic web connect with galaxy clusters. This project was supported in part by the NSF REU grant AST-1358980 and by the Nantucket Maria Mitchell Association.

  1. Structure and modeling of turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Novikov, E.A.

The "vortex strings" scale l_s ≈ L·Re^(-3/10) (L: external scale, Re: Reynolds number) is suggested as a grid scale for the large-eddy simulation. Various aspects of the structure of turbulence and subgrid modeling are described in terms of conditional averaging, Markov processes with dependent increments and infinitely divisible distributions. The major request from the energy, naval, aerospace and environmental engineering communities to the theory of turbulence is to reduce the enormous number of degrees of freedom in turbulent flows to a level manageable by computer simulations. The vast majority of these degrees of freedom is in the small-scale motion. The study of the structure of turbulence provides a basis for subgrid-scale (SGS) models, which are necessary for the large-eddy simulations (LES).

  2. Real-time simulation of large-scale floods

    NASA Astrophysics Data System (ADS)

    Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.

    2016-08-01

Given the complexity of real-time water conditions, the real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional, shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance numerical stability, and an adaptive method is proposed to improve running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.
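The Godunov-type finite volume approach described in the abstract can be illustrated with a minimal one-dimensional sketch. The toy solver below is not the authors' model: it updates depth and discharge with a Rusanov (local Lax-Friedrichs) flux on a periodic dam-break problem, and every parameter value is illustrative.

```python
import numpy as np

def shallow_water_1d(n=200, t_end=0.1, g=9.81):
    """Toy 1D shallow water solver: Godunov-type finite volume update with a
    Rusanov (local Lax-Friedrichs) interface flux on a periodic domain."""
    dx = 1.0 / n
    x = (np.arange(n) + 0.5) * dx
    h = np.where(x < 0.5, 2.0, 1.0)      # dam-break initial depth
    hu = np.zeros(n)                     # discharge h*u
    t = 0.0
    while t < t_end:
        u = hu / h
        c = np.abs(u) + np.sqrt(g * h)   # max wave speed per cell
        dt = min(0.4 * dx / c.max(), t_end - t)   # CFL-limited time step
        # physical flux F(U) = (hu, h u^2 + g h^2 / 2) for U = (h, hu)
        F = np.array([hu, hu * u + 0.5 * g * h * h])
        U = np.array([h, hu])
        # Rusanov flux at interface i+1/2; periodic wrap via np.roll
        Ur, Fr = np.roll(U, -1, axis=1), np.roll(F, -1, axis=1)
        a = np.maximum(c, np.roll(c, -1))
        Fhat = 0.5 * (F + Fr) - 0.5 * a * (Ur - U)
        # conservative update: flux difference across each cell
        U = U - dt / dx * (Fhat - np.roll(Fhat, 1, axis=1))
        h, hu = U[0], U[1]
        t += dt
    return h, hu
```

Because the update is written in conservation form on a periodic domain, total water mass is preserved to machine precision, which is a useful sanity check for any such scheme.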

  3. Genomic prediction in animals and plants: simulation of data, validation, reporting, and benchmarking.

    PubMed

    Daetwyler, Hans D; Calus, Mario P L; Pong-Wong, Ricardo; de Los Campos, Gustavo; Hickey, John M

    2013-02-01

    The genomic prediction of phenotypes and breeding values in animals and plants has developed rapidly into its own research field. Results of genomic prediction studies are often difficult to compare because data simulation varies, real or simulated data are not fully described, and not all relevant results are reported. In addition, some new methods have been compared only in limited genetic architectures, leading to potentially misleading conclusions. In this article we review simulation procedures, discuss validation and reporting of results, and apply benchmark procedures for a variety of genomic prediction methods in simulated and real example data. Plant and animal breeding programs are being transformed by the use of genomic data, which are becoming widely available and cost-effective to predict genetic merit. A large number of genomic prediction studies have been published using both simulated and real data. The relative novelty of this area of research has made the development of scientific conventions difficult with regard to description of the real data, simulation of genomes, validation and reporting of results, and forward in time methods. In this review article we discuss the generation of simulated genotype and phenotype data, using approaches such as the coalescent and forward in time simulation. We outline ways to validate simulated data and genomic prediction results, including cross-validation. The accuracy and bias of genomic prediction are highlighted as performance indicators that should be reported. We suggest that a measure of relatedness between the reference and validation individuals be reported, as its impact on the accuracy of genomic prediction is substantial. A large number of methods were compared in example simulated and real (pine and wheat) data sets, all of which are publicly available. 
In our limited simulations, most methods performed similarly in traits with a large number of quantitative trait loci (QTL), whereas in traits with fewer QTL variable selection did have some advantages. In the real data sets examined here all methods had very similar accuracies. We conclude that no single method can serve as a benchmark for genomic prediction. We recommend comparing accuracy and bias of new methods to results from genomic best linear unbiased prediction and a variable selection approach (e.g., BayesB), because, together, these methods are appropriate for a range of genetic architectures. An accompanying article in this issue provides a comprehensive review of genomic prediction methods and discusses a selection of topics related to application of genomic prediction in plants and animals.
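The validation workflow the review recommends (simulate genotypes and an additive phenotype, then report cross-validated accuracy as the correlation between predictions and true breeding values) can be sketched as follows. Ridge regression stands in for GBLUP here, and every parameter (marker count, QTL number, heritability) is an illustrative assumption, not taken from the article.

```python
import numpy as np

def simulate_and_cross_validate(n=300, p=500, n_qtl=20, h2=0.5, folds=5, seed=0):
    """Simulate SNP genotypes and an additive phenotype, then estimate
    prediction accuracy (correlation of predictions with true breeding
    values) by k-fold cross-validation, using ridge regression."""
    rng = np.random.default_rng(seed)
    X = rng.binomial(2, 0.5, size=(n, p)).astype(float)   # allele counts 0/1/2
    X -= X.mean(axis=0)                                   # centre markers
    beta = np.zeros(p)
    qtl = rng.choice(p, n_qtl, replace=False)             # sparse QTL effects
    beta[qtl] = rng.normal(size=n_qtl)
    g = X @ beta                                          # true breeding values
    g *= np.sqrt(h2 / g.var())                            # scale to heritability
    y = g + rng.normal(scale=np.sqrt(1 - h2), size=n)     # phenotype
    lam = p * (1 - h2) / h2                               # ridge penalty
    idx = rng.permutation(n)
    accs = []
    for f in range(folds):
        test = idx[f::folds]
        train = np.setdiff1d(idx, test)
        Xt, yt = X[train], y[train]
        b = np.linalg.solve(Xt.T @ Xt + lam * np.eye(p), Xt.T @ yt)
        accs.append(np.corrcoef(X[test] @ b, g[test])[0, 1])
    return float(np.mean(accs))
```

Reporting accuracy against the simulated breeding values, rather than the noisy phenotypes, follows the convention the review describes for simulated data.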

  4. Genomic Prediction in Animals and Plants: Simulation of Data, Validation, Reporting, and Benchmarking

    PubMed Central

    Daetwyler, Hans D.; Calus, Mario P. L.; Pong-Wong, Ricardo; de los Campos, Gustavo; Hickey, John M.

    2013-01-01

    The genomic prediction of phenotypes and breeding values in animals and plants has developed rapidly into its own research field. Results of genomic prediction studies are often difficult to compare because data simulation varies, real or simulated data are not fully described, and not all relevant results are reported. In addition, some new methods have been compared only in limited genetic architectures, leading to potentially misleading conclusions. In this article we review simulation procedures, discuss validation and reporting of results, and apply benchmark procedures for a variety of genomic prediction methods in simulated and real example data. Plant and animal breeding programs are being transformed by the use of genomic data, which are becoming widely available and cost-effective to predict genetic merit. A large number of genomic prediction studies have been published using both simulated and real data. The relative novelty of this area of research has made the development of scientific conventions difficult with regard to description of the real data, simulation of genomes, validation and reporting of results, and forward in time methods. In this review article we discuss the generation of simulated genotype and phenotype data, using approaches such as the coalescent and forward in time simulation. We outline ways to validate simulated data and genomic prediction results, including cross-validation. The accuracy and bias of genomic prediction are highlighted as performance indicators that should be reported. We suggest that a measure of relatedness between the reference and validation individuals be reported, as its impact on the accuracy of genomic prediction is substantial. A large number of methods were compared in example simulated and real (pine and wheat) data sets, all of which are publicly available. 
In our limited simulations, most methods performed similarly in traits with a large number of quantitative trait loci (QTL), whereas in traits with fewer QTL variable selection did have some advantages. In the real data sets examined here all methods had very similar accuracies. We conclude that no single method can serve as a benchmark for genomic prediction. We recommend comparing accuracy and bias of new methods to results from genomic best linear unbiased prediction and a variable selection approach (e.g., BayesB), because, together, these methods are appropriate for a range of genetic architectures. An accompanying article in this issue provides a comprehensive review of genomic prediction methods and discusses a selection of topics related to application of genomic prediction in plants and animals. PMID:23222650

  5. Modeling stochastic noise in gene regulatory systems

    PubMed Central

    Meister, Arwen; Du, Chao; Li, Ye Henry; Wong, Wing Hung

    2014-01-01

    The Master equation is considered the gold standard for modeling the stochastic mechanisms of gene regulation in molecular detail, but it is too complex to solve exactly in most cases, so approximation and simulation methods are essential. However, there is still a lack of consensus about the best way to carry these out. To help clarify the situation, we review Master equation models of gene regulation, theoretical approximations based on an expansion method due to N.G. van Kampen and R. Kubo, and simulation algorithms due to D.T. Gillespie and P. Langevin. Expansion of the Master equation shows that for systems with a single stable steady-state, the stochastic model reduces to a deterministic model in a first-order approximation. Additional theory, also due to van Kampen, describes the asymptotic behavior of multistable systems. To support and illustrate the theory and provide further insight into the complex behavior of multistable systems, we perform a detailed simulation study comparing the various approximation and simulation methods applied to synthetic gene regulatory systems with various qualitative characteristics. The simulation studies show that for large stochastic systems with a single steady-state, deterministic models are quite accurate, since the probability distribution of the solution has a single peak tracking the deterministic trajectory whose variance is inversely proportional to the system size. In multistable stochastic systems, large fluctuations can cause individual trajectories to escape from the domain of attraction of one steady-state and be attracted to another, so the system eventually reaches a multimodal probability distribution in which all stable steady-states are represented proportional to their relative stability. However, since the escape time scales exponentially with system size, this process can take a very long time in large systems. PMID:25632368
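The simulation algorithm due to Gillespie that the abstract mentions can be sketched for the simplest gene-expression model: a birth-death process with constant production and first-order degradation. The rate constants below are illustrative, not taken from the article.

```python
import random

def gillespie_birth_death(k_on=10.0, k_off=1.0, t_end=50.0, seed=1):
    """Minimal Gillespie stochastic simulation of a birth-death process
    (a stand-in for constitutive gene expression): molecules are produced
    at rate k_on and degraded at rate k_off per molecule.  Returns the
    copy number at the end of the run."""
    rng = random.Random(seed)
    t, n = 0.0, 0
    while t < t_end:
        a_birth = k_on                   # propensity of production
        a_death = k_off * n              # propensity of degradation
        a_total = a_birth + a_death
        t += rng.expovariate(a_total)    # exponential waiting time
        if rng.random() * a_total < a_birth:
            n += 1                       # production event
        else:
            n -= 1                       # degradation event
    return n
```

For this single-steady-state system the stationary distribution is Poisson with mean k_on/k_off, so an ensemble of runs should fluctuate around 10, consistent with the abstract's point that deterministic models track such systems well while the relative fluctuations shrink with system size.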

  6. Numerical comparisons of ground motion predictions with kinematic rupture modeling

    NASA Astrophysics Data System (ADS)

    Yuan, Y. O.; Zurek, B.; Liu, F.; deMartin, B.; Lacasse, M. D.

    2017-12-01

    Recent advances in large-scale wave simulators allow for the computation of seismograms at unprecedented levels of detail and for areas sufficiently large to be relevant to small regional studies. In some instances, detailed information of the mechanical properties of the subsurface has been obtained from seismic exploration surveys, well data, and core analysis. Using kinematic rupture modeling, this information can be used with a wave propagation simulator to predict the ground motion that would result from an assumed fault rupture. The purpose of this work is to explore the limits of wave propagation simulators for modeling ground motion in different settings, and in particular, to explore the numerical accuracy of different methods in the presence of features that are challenging to simulate such as topography, low-velocity surface layers, and shallow sources. In the main part of this work, we use a variety of synthetic three-dimensional models and compare the relative costs and benefits of different numerical discretization methods in computing the seismograms of realistic-size models. The finite-difference method, the discontinuous-Galerkin method, and the spectral-element method are compared for a range of synthetic models having different levels of complexity such as topography, large subsurface features, low-velocity surface layers, and the location and characteristics of fault ruptures represented as an array of seismic sources. While some previous studies have already demonstrated that unstructured-mesh methods can sometimes tackle complex problems (Moczo et al.), we investigate the trade-off between unstructured-mesh methods and regular-grid methods for a broad range of models and source configurations. Finally, for comparison, our direct simulation results are briefly contrasted with those predicted by a few phenomenological ground-motion prediction equations, and a workflow for accurately predicting ground motion is proposed.

  7. Large Eddy Simulation of wind turbine wakes: detailed comparisons of two codes focusing on effects of numerics and subgrid modeling

    NASA Astrophysics Data System (ADS)

    Martínez-Tossas, Luis A.; Churchfield, Matthew J.; Meneveau, Charles

    2015-06-01

In this work we report on results from a detailed comparative numerical study from two Large Eddy Simulation (LES) codes using the Actuator Line Model (ALM). The study focuses on prediction of wind turbine wakes and their breakdown when subject to uniform inflow. Previous studies have shown relative insensitivity to subgrid modeling in the context of a finite-volume code. The present study uses the low-dissipation pseudo-spectral LES code from Johns Hopkins University (LESGO) and the second-order, finite-volume OpenFOAM code (SOWFA) from the National Renewable Energy Laboratory. When subject to uniform inflow, the loads on the blades are found to be unaffected by subgrid models or numerics, as expected. The turbulence in the wake and the location of transition to a turbulent state are affected by the subgrid-scale model and the numerics.

  8. Large Eddy Simulation of Wind Turbine Wakes. Detailed Comparisons of Two Codes Focusing on Effects of Numerics and Subgrid Modeling

    DOE PAGES

    Martinez-Tossas, Luis A.; Churchfield, Matthew J.; Meneveau, Charles

    2015-06-18

In this work we report on results from a detailed comparative numerical study from two Large Eddy Simulation (LES) codes using the Actuator Line Model (ALM). The study focuses on prediction of wind turbine wakes and their breakdown when subject to uniform inflow. Previous studies have shown relative insensitivity to subgrid modeling in the context of a finite-volume code. The present study uses the low-dissipation pseudo-spectral LES code from Johns Hopkins University (LESGO) and the second-order, finite-volume OpenFOAM code (SOWFA) from the National Renewable Energy Laboratory. When subject to uniform inflow, the loads on the blades are found to be unaffected by subgrid models or numerics, as expected. The turbulence in the wake and the location of transition to a turbulent state are affected by the subgrid-scale model and the numerics.

  9. Simulation and Analysis of the AFLC Bulk Data Network Using Abstract Data Types.

    DTIC Science & Technology

    1981-12-01

...performs. Simulation is more expensive than queueing, but it is often the only way to study complex functional relationships in a large system. ... The relationship between throughput, response and cost is shown in Figure 2. At a given cost level, additional throughput can be obtained at the expense... improved by adding resources, but this increases the total cost of the system. Network models are used to study the relationship between cost...

  10. Effects of forcing time scale on the simulated turbulent flows and turbulent collision statistics of inertial particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosa, B., E-mail: bogdan.rosa@imgw.pl; Parishani, H.; Department of Earth System Science, University of California, Irvine, California 92697-3100

    2015-01-15

In this paper, we study systematically the effects of forcing time scale in the large-scale stochastic forcing scheme of Eswaran and Pope [“An examination of forcing in direct numerical simulations of turbulence,” Comput. Fluids 16, 257 (1988)] on the simulated flow structures and statistics of forced turbulence. Using direct numerical simulations, we find that the forcing time scale affects the flow dissipation rate and flow Reynolds number. Other flow statistics can be predicted using the altered flow dissipation rate and flow Reynolds number, except when the forcing time scale is made unrealistically large to yield a Taylor microscale flow Reynolds number of 30 and less. We then study the effects of forcing time scale on the kinematic collision statistics of inertial particles. We show that the radial distribution function and the radial relative velocity may depend on the forcing time scale when it becomes comparable to the eddy turnover time. This dependence, however, can be largely explained in terms of altered flow Reynolds number and the changing range of flow length scales present in the turbulent flow. We argue that removing this dependence is important when studying the Reynolds number dependence of the turbulent collision statistics. The results are also compared to those based on a deterministic forcing scheme to better understand the role of large-scale forcing, relative to that of the small-scale turbulence, on turbulent collision of inertial particles. To further elucidate the correlation between the altered flow structures and dynamics of inertial particles, a conditional analysis has been performed, showing that the regions of higher collision rate of inertial particles are well correlated with the regions of lower vorticity. Regions of higher concentration of pairs at contact are found to be highly correlated with the region of high energy dissipation rate.

  11. Steps Towards Understanding Large-scale Deformation of Gas Hydrate-bearing Sediments

    NASA Astrophysics Data System (ADS)

    Gupta, S.; Deusner, C.; Haeckel, M.; Kossel, E.

    2016-12-01

    Marine sediments bearing gas hydrates are typically characterized by heterogeneity in the gas hydrate distribution and anisotropy in the sediment-gas hydrate fabric properties. Gas hydrates also contribute to the strength and stiffness of the marine sediment, and any disturbance in the thermodynamic stability of the gas hydrates is likely to affect the geomechanical stability of the sediment. Understanding mechanisms and triggers of large-strain deformation and failure of marine gas hydrate-bearing sediments is an area of extensive research, particularly in the context of marine slope-stability and industrial gas production. The ultimate objective is to predict severe deformation events such as regional-scale slope failure or excessive sand production by using numerical simulation tools. The development of such tools essentially requires a careful analysis of thermo-hydro-chemo-mechanical behavior of gas hydrate-bearing sediments at lab-scale, and its stepwise integration into reservoir-scale simulators through definition of effective variables, use of suitable constitutive relations, and application of scaling laws. One of the focus areas of our research is to understand the bulk coupled behavior of marine gas hydrate systems with contributions from micro-scale characteristics, transport-reaction dynamics, and structural heterogeneity through experimental flow-through studies using high-pressure triaxial test systems and advanced tomographical tools (CT, ERT, MRI). We combine these studies to develop mathematical model and numerical simulation tools which could be used to predict the coupled hydro-geomechanical behavior of marine gas hydrate reservoirs in a large-strain framework. Here we will present some of our recent results from closely co-ordinated experimental and numerical simulation studies with an objective to capture the large-deformation behavior relevant to different gas production scenarios. 
We will also report on a variety of mechanically relevant test scenarios focusing on effects of dynamic changes in gas hydrate saturation, highly uneven gas hydrate distributions, focused fluid migration and gas hydrate production through depressurization and CO2 injection.

  12. High Order Numerical Simulation of Waves Using Regular Grids and Non-conforming Interfaces

    DTIC Science & Technology

    2013-10-06

We study the propagation of waves over large regions of space with smooth, but not necessarily constant, material characteristics, separated into sub-domains by interfaces of arbitrary shape.

  13. Numerical study of cold filling and tube deformation in the molten salt receiver

    NASA Astrophysics Data System (ADS)

    Xu, Tingting; Zhang, Gongchen; Peniguel, Christophe; Liao, Zhirong; Li, Xin; Lu, Jiahui; Wang, Zhifeng

    2017-06-01

Molten salt tube cold filling is one way to accelerate the startup of a molten salt Concentrated Solar Power (CSP) plant. This practical operation may induce salt solidification and large thermal stress due to the tube's large temperature difference. This paper presents a quantitative simulation study of cold filling and the induced thermal stress. Physical mechanisms and safe working criteria are identified under certain conditions.

  14. Numerical Study Comparing RANS and LES Approaches on a Circulation Control Airfoil

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Nishino, Takafumi

    2011-01-01

    A numerical study over a nominally two-dimensional circulation control airfoil is performed using a large-eddy simulation code and two Reynolds-averaged Navier-Stokes codes. Different Coanda jet blowing conditions are investigated. In addition to investigating the influence of grid density, a comparison is made between incompressible and compressible flow solvers. The incompressible equations are found to yield negligible differences from the compressible equations up to at least a jet exit Mach number of 0.64. The effects of different turbulence models are also studied. Models that do not account for streamline curvature effects tend to predict jet separation from the Coanda surface too late, and can produce non-physical solutions at high blowing rates. Three different turbulence models that account for streamline curvature are compared with each other and with large eddy simulation solutions. All three models are found to predict the Coanda jet separation location reasonably well, but one of the models predicts specific flow field details near the Coanda surface prior to separation much better than the other two. All Reynolds-averaged Navier-Stokes computations produce higher circulation than large eddy simulation computations, with different stagnation point location and greater flow acceleration around the nose onto the upper surface. The precise reasons for the higher circulation are not clear, although it is not solely a function of predicting the jet separation location correctly.

  15. Formation and Evolution of Galaxies: The Role of Their Environment

    NASA Astrophysics Data System (ADS)

    Boselli, Alessandro

    2016-08-01

The new panoramic detectors on large telescopes, together with the most capable space missions, have allowed us to complete large surveys of the Universe at different wavelengths and thus to study the relationships between the different galaxy components at various epochs. At the same time, increasing computing power has allowed us to simulate the evolution of galaxies since their formation at an angular resolution never reached before. In this article I briefly describe how the comparison between the most recent observations and the predictions of models and simulations has changed our view of the process of galaxy formation and evolution.

  16. Analysis and modeling of subgrid scalar mixing using numerical data

    NASA Technical Reports Server (NTRS)

    Girimaji, Sharath S.; Zhou, YE

    1995-01-01

Direct numerical simulations (DNS) of passive scalar mixing in isotropic turbulence are used to study, analyze and, subsequently, model the role of small (subgrid) scales in the mixing process. In particular, we attempt to model the dissipation of the large scale (supergrid) scalar fluctuations caused by the subgrid scales by decomposing it into two parts: (1) the effect due to the interaction among the subgrid scales; and (2) the effect due to interaction between the supergrid and the subgrid scales. Model comparisons with DNS data show good agreement. This model is expected to be useful in the large eddy simulations of scalar mixing and reaction.

  17. Measuring discharge with ADCPs: Inferences from synthetic velocity profiles

    USGS Publications Warehouse

    Rehmann, C.R.; Mueller, D.S.; Oberg, K.A.

    2009-01-01

    Synthetic velocity profiles are used to determine guidelines for sampling discharge with acoustic Doppler current profilers (ADCPs). The analysis allows the effects of instrument characteristics, sampling parameters, and properties of the flow to be studied systematically. For mid-section measurements, the averaging time required for a single profile measurement always exceeded the 40 s usually recommended for velocity measurements, and it increased with increasing sample interval and increasing time scale of the large eddies. Similarly, simulations of transect measurements show that discharge error decreases as the number of large eddies sampled increases. The simulations allow sampling criteria that account for the physics of the flow to be developed. © 2009 ASCE.
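    The dependence of averaging time on the time scale of the large eddies follows from standard turbulence statistics: for an averaging period T much longer than the integral time scale T_L, the variance of the time-averaged velocity is approximately 2 σ_u² T_L / T. A minimal sketch of that relation (the function name and symbols are illustrative, not from the paper):

```python
def required_averaging_time(sigma_u, t_l, target_se):
    """Averaging time T needed for the standard error of the mean velocity
    to fall to target_se, using var(mean) ~ 2 * sigma_u**2 * t_l / T
    (valid only for T >> t_l)."""
    return 2.0 * sigma_u**2 * t_l / target_se**2

# Example: 0.1 m/s velocity fluctuations, 10 s eddy time scale,
# 0.02 m/s target standard error of the mean
t_req = required_averaging_time(0.1, 10.0, 0.02)  # 500.0 s
```

Consistent with the abstract, the required time grows linearly with the eddy time scale, so slow, large-eddy-dominated flows demand longer sampling.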

  18. Technology-enhanced simulation and pediatric education: a meta-analysis.

    PubMed

    Cheng, Adam; Lang, Tara R; Starr, Stephanie R; Pusic, Martin; Cook, David A

    2014-05-01

    Pediatrics has embraced technology-enhanced simulation (TES) as an educational modality, but its effectiveness for pediatric education remains unclear. The objective of this study was to describe the characteristics and evaluate the effectiveness of TES for pediatric education. This review adhered to PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) standards. A systematic search of Medline, Embase, CINAHL, ERIC, Web of Science, Scopus, key journals, and previous review bibliographies through May 2011 and an updated Medline search through October 2013 were conducted. Original research articles in any language evaluating the use of TES for educating health care providers at any stage, where the content solely focuses on patients 18 years or younger, were selected. Reviewers working in duplicate abstracted information on learners, clinical topic, instructional design, study quality, and outcomes. We coded skills (simulated setting) separately for time and nontime measures and similarly classified patient care behaviors and patient effects. We identified 57 studies (3666 learners) using TES to teach pediatrics. Effect sizes (ESs) were pooled by using a random-effects model. Among studies comparing TES with no intervention, pooled ESs were large for outcomes of knowledge, nontime skills (eg, performance in simulated setting), behaviors with patients, and time to task completion (ES = 0.80-1.91). Studies comparing the use of high versus low physical realism simulators showed small to moderate effects favoring high physical realism (ES = 0.31-0.70). TES for pediatric education is associated with large ESs in comparison with no intervention. Future research should include comparative studies that identify optimal instructional methods and incorporate pediatric-specific issues into educational interventions. Copyright © 2014 by the American Academy of Pediatrics.

  19. NASA's Information Power Grid: Large Scale Distributed Computing and Data Management

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Vaziri, Arsi; Hinke, Tom; Tanner, Leigh Ann; Feiereisen, William J.; Thigpen, William; Tang, Harry (Technical Monitor)

    2001-01-01

    Large-scale science and engineering are done through the interaction of people, heterogeneous computing resources, information systems, and instruments, all of which are geographically and organizationally dispersed. The overall motivation for Grids is to facilitate the routine interactions of these resources in order to support large-scale science and engineering. Multi-disciplinary simulations provide a good example of a class of applications that are very likely to require aggregation of widely distributed computing, data, and intellectual resources. Such simulations - e.g. whole system aircraft simulation and whole system living cell simulation - require integrating applications and data that are developed by different teams of researchers, frequently in different locations. These research teams are the only ones with the expertise to maintain and improve the simulation code and/or the body of experimental data that drives the simulations. This results in an inherently distributed computing and data management environment.

  20. Predicting supramolecular self-assembly on reconstructed metal surfaces

    NASA Astrophysics Data System (ADS)

    Roussel, Thomas J.; Barrena, Esther; Ocal, Carmen; Faraudo, Jordi

    2014-06-01

    The prediction of supramolecular self-assembly onto solid surfaces is still challenging in many situations of interest for nanoscience. In particular, no previous simulation approach has been capable of simulating large self-assembly patterns of organic molecules over reconstructed surfaces (which have periodicities over large distances) due to the large number of surface atoms and adsorbing molecules involved. Using a novel simulation technique, we report here large scale simulations of the self-assembly patterns of an organic molecule (DIP) over different reconstructions of the Au(111) surface. We show that on particular reconstructions, the molecule-molecule interactions are enhanced in a way that long-range order is promoted. Also, the presence of a distortion in a reconstructed surface pattern not only induces the presence of long-range order but also is able to drive the organization of DIP into two coexisting homochiral domains, in quantitative agreement with STM experiments. On the other hand, only short range order is obtained in other reconstructions of the Au(111) surface. The simulation strategy opens interesting perspectives to tune the supramolecular structure by simulation design and surface engineering if choosing the right molecular building blocks and stabilising the chosen reconstruction pattern.

  1. Evaluation of the scale dependent dynamic SGS model in the open source code caffa3d.MBRi in wall-bounded flows

    NASA Astrophysics Data System (ADS)

    Draper, Martin; Usera, Gabriel

    2015-04-01

    The Scale Dependent Dynamic Model (SDDM) has been widely validated in large-eddy simulations using pseudo-spectral codes [1][2][3]. The scale dependence, in particular its power-law form, has also been demonstrated in a priori studies [4][5]. To the authors' knowledge there have been only a few attempts to use the SDDM in finite difference (FD) and finite volume (FV) codes [6][7]; these found some improvement with the dynamic procedures (scale-independent or scale-dependent approach) but did not report the behavior of the scale-dependence parameter when using the SDDM. The aim of the present paper is to evaluate the SDDM in the open source code caffa3d.MBRi, an updated version of the code presented in [8]. caffa3d.MBRi is a second-order accurate FV code, parallelized with MPI, in which the domain is divided into unstructured blocks of structured grids. To this end, two cases are considered: flow between flat plates and flow over a rough surface with the presence of a model wind turbine, the latter using the experimental data presented in [9]. In both cases the standard Smagorinsky Model (SM), the Scale Independent Dynamic Model (SIDM) and the SDDM are tested. As in [6][7], slight improvements are obtained with the SDDM. Nevertheless, the behavior of the scale-dependence parameter supports the generalization of the dynamic procedure proposed in the SDDM, particularly taking into account that no explicit filter is used (the implicit filter is unknown). [1] F. Porté-Agel, C. Meneveau, M.B. Parlange. "A scale-dependent dynamic model for large-eddy simulation: application to a neutral atmospheric boundary layer". Journal of Fluid Mechanics, 2000, 415, 261-284. [2] E. Bou-Zeid, C. Meneveau, M. Parlange. "A scale-dependent Lagrangian dynamic model for large eddy simulation of complex turbulent flows". Physics of Fluids, 2005, 17, 025105 (18 p). [3] R. Stoll, F. Porté-Agel. "Dynamic subgrid-scale models for momentum and scalar fluxes in large-eddy simulations of neutrally stratified atmospheric boundary layers over heterogeneous terrain". Water Resources Research, 2006, 42, W01409 (18 p). [4] J. Kleissl, M. Parlange, C. Meneveau. "Field experimental study of dynamic Smagorinsky models in the atmospheric surface layer". Journal of the Atmospheric Sciences, 2004, 61, 2296-2307. [5] E. Bou-Zeid, N. Vercauteren, M.B. Parlange, C. Meneveau. "Scale dependence of subgrid-scale model coefficients: An a priori study". Physics of Fluids, 2008, 20, 115106. [6] G. Kirkil, J. Mirocha, E. Bou-Zeid, F.K. Chow, B. Kosovic. "Implementation and evaluation of dynamic subfilter-scale stress models for large-eddy simulation using WRF". Monthly Weather Review, 2012, 140, 266-284. [7] S. Radhakrishnan, U. Piomelli. "Large-eddy simulation of oscillating boundary layers: model comparison and validation". Journal of Geophysical Research, 2008, 113, C02022. [8] G. Usera, A. Vernet, J.A. Ferré. "A parallel block-structured finite volume method for flows in complex geometry with sliding interfaces". Flow, Turbulence and Combustion, 2008, 81, 471-495. [9] Y.-T. Wu, F. Porté-Agel. "Large-eddy simulation of wind-turbine wakes: evaluation of turbine parametrisations". Boundary-Layer Meteorology, 2011, 138, 345-366.

  2. Self-similarity and flow characteristics of vertical-axis wind turbine wakes: an LES study

    NASA Astrophysics Data System (ADS)

    Abkar, Mahdi; Dabiri, John O.

    2017-04-01

    Large eddy simulation (LES) is coupled with a turbine model to study the structure of the wake behind a vertical-axis wind turbine (VAWT). In the simulations, a tuning-free anisotropic minimum dissipation model is used to parameterise the subfilter stress tensor, while the turbine-induced forces are modelled with an actuator line technique. The LES framework is first validated in the simulation of the wake behind a model straight-bladed VAWT placed in the water channel and then used to study the wake structure downwind of a full-scale VAWT sited in the atmospheric boundary layer. In particular, the self-similarity of the wake is examined, and it is found that the wake velocity deficit can be well characterised by a two-dimensional multivariate Gaussian distribution. By assuming a self-similar Gaussian distribution of the velocity deficit, and applying mass and momentum conservation, an analytical model is developed and tested to predict the maximum velocity deficit downwind of the turbine. Also, a simple parameterisation of VAWTs for LES with very coarse grid resolutions is proposed, in which the turbine is modelled as a rectangular porous plate with the same thrust coefficient. The simulation results show that, after some downwind distance (x/D ≈ 6), both actuator line and rectangular porous plate models have similar predictions for the mean velocity deficit. These results are of particular importance in simulations of large wind farms where, due to the coarse spatial resolution, the flow around individual VAWTs is not resolved.
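    The analytical step described above (a self-similar Gaussian deficit plus mass and momentum conservation) can be sketched in closed form. For a 2-D Gaussian deficit of widths σ_y, σ_z and a rectangular VAWT rotor of diameter D and height H, momentum conservation yields a quadratic for the maximum deficit. The linear wake-growth rates and initial widths below are placeholder values, not the paper's calibrated constants:

```python
import math

def max_velocity_deficit(x, ct, d, h, k_y=0.03, k_z=0.03,
                         sigma0_y=0.3, sigma0_z=0.3):
    """Maximum normalized velocity deficit at downwind distance x for a
    VAWT of diameter d and height h with thrust coefficient ct, assuming
    a self-similar 2-D Gaussian deficit whose widths grow linearly:
    sigma = (sigma0 + k * x / L) * L  (k, sigma0 are illustrative)."""
    sigma_y = (sigma0_y + k_y * x / d) * d
    sigma_z = (sigma0_z + k_z * x / h) * h
    # Momentum balance: C*(2*pi*sy*sz) - C^2*(pi*sy*sz) = ct*d*h/2
    arg = 1.0 - ct * d * h / (2.0 * math.pi * sigma_y * sigma_z)
    if arg < 0.0:
        return float("nan")  # too close to the rotor for the far-wake form
    return 1.0 - math.sqrt(arg)
```

As expected from the abstract, the predicted deficit decays monotonically with downwind distance as the Gaussian wake widens.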

  3. Towards data warehousing and mining of protein unfolding simulation data.

    PubMed

    Berrar, Daniel; Stahl, Frederic; Silva, Candida; Rodrigues, J Rui; Brito, Rui M M; Dubitzky, Werner

    2005-10-01

    The prediction of protein structure and the precise understanding of protein folding and unfolding processes remain among the greatest challenges in structural biology and bioinformatics. Computer simulations based on molecular dynamics (MD) are at the forefront of the effort to gain a deeper understanding of these complex processes. Currently, these MD simulations are usually on the order of tens of nanoseconds, generate a large amount of conformational data and are computationally expensive. More and more groups run such simulations and generate a myriad of data, which raises new challenges in managing and analyzing these data. Because of the vast range of proteins researchers want to study and simulate, the computational effort needed to generate data, the large data volumes involved, and the different types of analyses scientists need to perform, it is desirable to provide a public repository allowing researchers to pool and share protein unfolding data. To adequately organize, manage, and analyze the data generated by unfolding simulation studies, we designed a data warehouse system that is embedded in a grid environment to facilitate the seamless sharing of available computer resources and thus enable many groups to share complex molecular dynamics simulations on a more regular basis. To gain insight into the conformational fluctuations and stability of the monomeric forms of the amyloidogenic protein transthyretin (TTR), molecular dynamics unfolding simulations of the monomer of human TTR have been conducted. Trajectory data and meta-data of the wild-type (WT) protein and the highly amyloidogenic variant L55P-TTR represent the test case for the data warehouse. Web and grid services, especially pre-defined data mining services that can run on or 'near' the data repository of the data warehouse, are likely to play a pivotal role in the analysis of molecular dynamics unfolding data.

  4. A Single-column Model Ensemble Approach Applied to the TWP-ICE Experiment

    NASA Technical Reports Server (NTRS)

    Davies, L.; Jakob, C.; Cheung, K.; DelGenio, A.; Hill, A.; Hume, T.; Keane, R. J.; Komori, T.; Larson, V. E.; Lin, Y.; hide

    2013-01-01

    Single-column models (SCM) are useful test beds for investigating the parameterization schemes of numerical weather prediction and climate models. The usefulness of SCM simulations is limited, however, by the accuracy of the prescribed best-estimate large-scale observations. Errors in estimating the observations will result in uncertainty in the modeled simulations. One method to address this uncertainty is to simulate an ensemble whose members span the observational uncertainty. This study first derives an ensemble of large-scale data for the Tropical Warm Pool International Cloud Experiment (TWP-ICE) based on an estimate of a possible source of error in the best estimate product. These data are then used to carry out simulations with 11 SCM and two cloud-resolving models (CRM). Best estimate simulations are also performed. All models show that moisture-related variables are close to observations and there are limited differences between the best estimate and ensemble mean values. The models, however, show different sensitivities to changes in the forcing, particularly when weakly forced. The ensemble simulations highlight important differences in the surface evaporation term of the moisture budget between the SCM and CRM. Differences are also apparent between the models in the ensemble mean vertical structure of cloud variables, while for each model, cloud properties are relatively insensitive to forcing. The ensemble is further used to investigate cloud variables and precipitation and identifies differences between CRM and SCM, particularly for relationships involving ice. This study highlights the additional analysis that can be performed using ensemble simulations and hence enables a more complete model investigation compared to using the more traditional single best estimate simulation only.

  5. Numerical simulations of Hurricane Katrina (2005) in the turbulent gray zone

    NASA Astrophysics Data System (ADS)

    Green, Benjamin W.; Zhang, Fuqing

    2015-03-01

    Current numerical simulations of tropical cyclones (TCs) use a horizontal grid spacing as small as Δx = 10³ m, with all boundary layer (BL) turbulence parameterized. Eventually, TC simulations can be conducted at Large Eddy Simulation (LES) resolution, which requires Δx to fall in the inertial subrange (often < 10² m) to adequately resolve the large, energy-containing eddies. Between the two lies the so-called "terra incognita" because some of the assumptions used by mesoscale models and LES to treat BL turbulence are invalid. This study performs several 4-6 h simulations of Hurricane Katrina (2005) without a BL parameterization at extremely fine Δx [333, 200, and 111 m, hereafter "Large Eddy Permitting (LEP) runs"] and compares with mesoscale simulations with BL parameterizations (Δx = 3 km, 1 km, and 333 m, hereafter "PBL runs"). There are profound differences in the hurricane BL structure between the PBL and LEP runs: the former have a deeper inflow layer and secondary eyewall formation, whereas the latter have a shallow inflow layer without a secondary eyewall. Among the LEP runs, decreased Δx yields weaker subgrid-scale vertical momentum fluxes, but the sum of subgrid-scale and "grid-scale" fluxes remains similar. There is also evidence that the size of the prevalent BL eddies depends upon Δx, suggesting that convergence to true LES has not yet been reached. Nevertheless, the similarities in the storm-scale BL structure among the LEP runs indicate that the net effect of the BL on the rest of the hurricane may be somewhat independent of Δx.

  6. An Agent-Based Modeling Template for a Cohort of Veterans with Diabetic Retinopathy.

    PubMed

    Day, Theodore Eugene; Ravi, Nathan; Xian, Hong; Brugh, Ann

    2013-01-01

    Agent-based models are valuable for examining systems where large numbers of discrete individuals interact with each other, or with some environment. Diabetic Veterans seeking eye care at a Veterans Administration hospital represent one such cohort. The objective of this study was to develop an agent-based template to be used as a model for a patient with diabetic retinopathy (DR). This template may be replicated arbitrarily many times in order to generate a large cohort which is representative of a real-world population, upon which in-silico experimentation may be conducted. Agent-based template development was performed in the Java-based simulation suite AnyLogic Professional 6.6. The model was informed by medical data abstracted from 535 patient records representing a retrospective cohort of current patients of the VA St. Louis Healthcare System Eye clinic. Logistic regression was performed to determine the predictors associated with advancing stages of DR. Predicted probabilities obtained from logistic regression were used to generate the stage of DR in the simulated cohort. The simulated cohort of DR patients exhibited no significant deviation from the test population of real-world patients in proportion of stage of DR, duration of diabetes mellitus (DM), or the other abstracted predictors. Simulated patients after 10 years were significantly more likely to exhibit proliferative DR (P<0.001). Agent-based modeling is an emerging platform, capable of simulating large cohorts of individuals based on manageable data abstraction efforts. The modeling method described may be useful in simulating many different conditions where course of disease is described in categorical stages.
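    The stage-assignment step described above (predicted probabilities from logistic regression drive the simulated agents' states) can be sketched as follows. The coefficients and covariates here are hypothetical placeholders for illustration, not values abstracted from the VA records:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical logistic coefficients: intercept, DM duration (years), HbA1c (%)
beta = np.array([-4.0, 0.08, 0.35])

def p_advanced_dr(duration_dm, hba1c):
    """Predicted probability of advanced DR from the (illustrative)
    fitted logistic model: p = 1 / (1 + exp(-X @ beta))."""
    z = beta[0] + beta[1] * duration_dm + beta[2] * hba1c
    return 1.0 / (1.0 + np.exp(-z))

# Replicate the template 535 times (mirroring the abstracted cohort size)
# and draw each agent's state from its predicted probability.
duration = rng.uniform(1.0, 30.0, size=535)
hba1c = rng.normal(8.0, 1.5, size=535)
p = p_advanced_dr(duration, hba1c)
advanced = rng.random(535) < p  # Boolean state per simulated agent
```

Sampling states from fitted probabilities, rather than assigning the most likely stage, is what keeps the simulated cohort's stage proportions statistically consistent with the source population.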

  7. Mixing model with multi-particle interactions for Lagrangian simulations of turbulent mixing

    NASA Astrophysics Data System (ADS)

    Watanabe, T.; Nagata, K.

    2016-08-01

    We report on the numerical study of the mixing volume model (MVM) for molecular diffusion in Lagrangian simulations of turbulent mixing problems. The MVM is based on the multi-particle interaction in a finite volume (mixing volume). An a priori test of the MVM, based on direct numerical simulations of planar jets, is conducted in the turbulent region and the interfacial layer between the turbulent and non-turbulent fluids. The results show that the MVM predicts well the mean effects of the molecular diffusion under various numerical and flow parameters. The number of mixing particles should be large for the predicted molecular diffusion term to be positively correlated with the exact value. The size of the mixing volume relative to the Kolmogorov scale η is important to the performance of the MVM. The scalar transfer across the turbulent/non-turbulent interface is well captured by the MVM, especially with a small mixing volume. Furthermore, the MVM with multiple mixing particles is tested in the hybrid implicit large-eddy-simulation/Lagrangian-particle-simulation (LES-LPS) of the planar jet with a characteristic mixing-volume length of O(100η). Despite the large mixing volume, the MVM works well and decays the scalar variance at a rate close to the reference LES. The statistics in the LPS are very robust to the number of particles used in the simulations and to the computational grid size of the LES. Both in the turbulent core region and in the intermittent region, the LPS predicts a scalar field well correlated with the LES.
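    The multi-particle interaction at the heart of the MVM can be illustrated with a minimal sketch: all particles sharing a mixing volume relax toward their common mean, which conserves the mean scalar while decaying its variance. The relaxation form and mixing time scale below are a generic IEM-style stand-in for illustration, not the authors' exact closure:

```python
import numpy as np

def mvm_step(phi, dt, t_mix):
    """One mixing step for the particles sharing a mixing volume:
    each particle's scalar relaxes toward the volume mean at rate 1/t_mix.
    This is mean-preserving and variance-decaying by construction."""
    return phi + (dt / t_mix) * (phi.mean() - phi)

# Four particles in one mixing volume, one small time step
phi = np.array([0.0, 0.2, 0.8, 1.0])
out = mvm_step(phi, dt=0.1, t_mix=1.0)
```

Because each particle moves toward the shared mean, the update redistributes scalar among particles without creating or destroying it, which is the physical requirement for a molecular-mixing closure.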

  8. Mixing model with multi-particle interactions for Lagrangian simulations of turbulent mixing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watanabe, T., E-mail: watanabe.tomoaki@c.nagoya-u.jp; Nagata, K.

    We report on the numerical study of the mixing volume model (MVM) for molecular diffusion in Lagrangian simulations of turbulent mixing problems. The MVM is based on the multi-particle interaction in a finite volume (mixing volume). An a priori test of the MVM, based on direct numerical simulations of planar jets, is conducted in the turbulent region and the interfacial layer between the turbulent and non-turbulent fluids. The results show that the MVM predicts well the mean effects of the molecular diffusion under various numerical and flow parameters. The number of mixing particles should be large for the predicted molecular diffusion term to be positively correlated with the exact value. The size of the mixing volume relative to the Kolmogorov scale η is important to the performance of the MVM. The scalar transfer across the turbulent/non-turbulent interface is well captured by the MVM, especially with a small mixing volume. Furthermore, the MVM with multiple mixing particles is tested in the hybrid implicit large-eddy-simulation/Lagrangian-particle-simulation (LES-LPS) of the planar jet with a characteristic mixing-volume length of O(100η). Despite the large mixing volume, the MVM works well and decays the scalar variance at a rate close to the reference LES. The statistics in the LPS are very robust to the number of particles used in the simulations and to the computational grid size of the LES. Both in the turbulent core region and in the intermittent region, the LPS predicts a scalar field well correlated with the LES.

  9. Evaluation of Subgrid-Scale Models for Large Eddy Simulation of Compressible Flows

    NASA Technical Reports Server (NTRS)

    Blaisdell, Gregory A.

    1996-01-01

    The objective of this project was to evaluate and develop subgrid-scale (SGS) turbulence models for large eddy simulations (LES) of compressible flows. During the first phase of the project results from LES using the dynamic SGS model were compared to those of direct numerical simulations (DNS) of compressible homogeneous turbulence. The second phase of the project involved implementing the dynamic SGS model in a NASA code for simulating supersonic flow over a flat-plate. The model has been successfully coded and a series of simulations has been completed. One of the major findings of the work is that numerical errors associated with the finite differencing scheme used in the code can overwhelm the SGS model and adversely affect the LES results. Attached to this overview are three submitted papers: 'Evaluation of the Dynamic Model for Simulations of Compressible Decaying Isotropic Turbulence'; 'The effect of the formulation of nonlinear terms on aliasing errors in spectral methods'; and 'Large-Eddy Simulation of a Spatially Evolving Compressible Boundary Layer Flow'.

  10. A Multi-agent Simulation Tool for Micro-scale Contagion Spread Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koch, Daniel B

    2016-01-01

    Within the disaster preparedness and emergency response community, there is interest in how contagions spread person-to-person at large gatherings and if mitigation strategies can be employed to reduce new infections. A contagion spread simulation module was developed for the Incident Management Preparedness and Coordination Toolkit that allows a user to see how a geographically accurate layout of the gathering space helps or hinders the spread of a contagion. The results can inform mitigation strategies based on changing the physical layout of an event space. A case study was conducted for a particular event to calibrate the underlying simulation model. This paper presents implementation details of the simulation code that incorporates agent movement and disease propagation. Elements of the case study are presented to show how the tool can be used.

  11. A fully coupled method for massively parallel simulation of hydraulically driven fractures in 3-dimensions: FULLY COUPLED PARALLEL SIMULATION OF HYDRAULIC FRACTURES IN 3-D

    DOE PAGES

    Settgast, Randolph R.; Fu, Pengcheng; Walsh, Stuart D. C.; ...

    2016-09-18

    This study describes a fully coupled finite element/finite volume approach for simulating field-scale hydraulically driven fractures in three dimensions, using massively parallel computing platforms. The proposed method is capable of capturing realistic representations of local heterogeneities, layering and natural fracture networks in a reservoir. A detailed description of the numerical implementation is provided, along with numerical studies comparing the model with both analytical solutions and experimental results. The results demonstrate the effectiveness of the proposed method for modeling large-scale problems involving hydraulically driven fractures in three dimensions.

  12. Determination of the spectral behaviour of atmospheric soot using different particle models

    NASA Astrophysics Data System (ADS)

    Skorupski, Krzysztof

    2017-08-01

    In the atmosphere, black carbon aggregates interact with both organic and inorganic matter. In many studies they are modeled using different, less complex geometries. However, some common simplifications can lead to significant inaccuracies in subsequent light-scattering simulations. The goal of this study was to compare the spectral behavior of different, commonly used soot particle models. For the light-scattering simulations in the visible spectrum, the ADDA algorithm was used. The results show that the relative extinction error δCext can, in some cases, be unexpectedly large. Therefore, before starting extensive simulations, it is important to know what error might occur.
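    One natural definition of the error measure named above is the relative deviation of a simplified model's extinction cross-section from that of the full aggregate geometry; a one-line sketch (the exact normalization used in the paper may differ):

```python
def relative_extinction_error(c_ext_model, c_ext_ref):
    """Relative extinction error between a simplified soot-particle model
    and the reference aggregate: |C_ext,model - C_ext,ref| / C_ext,ref."""
    return abs(c_ext_model - c_ext_ref) / c_ext_ref
```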

  13. A fully coupled method for massively parallel simulation of hydraulically driven fractures in 3-dimensions: FULLY COUPLED PARALLEL SIMULATION OF HYDRAULIC FRACTURES IN 3-D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Settgast, Randolph R.; Fu, Pengcheng; Walsh, Stuart D. C.

    This study describes a fully coupled finite element/finite volume approach for simulating field-scale hydraulically driven fractures in three dimensions, using massively parallel computing platforms. The proposed method is capable of capturing realistic representations of local heterogeneities, layering and natural fracture networks in a reservoir. A detailed description of the numerical implementation is provided, along with numerical studies comparing the model with both analytical solutions and experimental results. The results demonstrate the effectiveness of the proposed method for modeling large-scale problems involving hydraulically driven fractures in three dimensions.

  14. Methodological Considerations in Estimation of Phenotype Heritability Using Genome-Wide SNP Data, Illustrated by an Analysis of the Heritability of Height in a Large Sample of African Ancestry Adults

    PubMed Central

    Chen, Fang; He, Jing; Zhang, Jianqi; Chen, Gary K.; Thomas, Venetta; Ambrosone, Christine B.; Bandera, Elisa V.; Berndt, Sonja I.; Bernstein, Leslie; Blot, William J.; Cai, Qiuyin; Carpten, John; Casey, Graham; Chanock, Stephen J.; Cheng, Iona; Chu, Lisa; Deming, Sandra L.; Driver, W. Ryan; Goodman, Phyllis; Hayes, Richard B.; Hennis, Anselm J. M.; Hsing, Ann W.; Hu, Jennifer J.; Ingles, Sue A.; John, Esther M.; Kittles, Rick A.; Kolb, Suzanne; Leske, M. Cristina; Monroe, Kristine R.; Murphy, Adam; Nemesure, Barbara; Neslund-Dudas, Christine; Nyante, Sarah; Ostrander, Elaine A; Press, Michael F.; Rodriguez-Gil, Jorge L.; Rybicki, Ben A.; Schumacher, Fredrick; Stanford, Janet L.; Signorello, Lisa B.; Strom, Sara S.; Stevens, Victoria; Van Den Berg, David; Wang, Zhaoming; Witte, John S.; Wu, Suh-Yuh; Yamamura, Yuko; Zheng, Wei; Ziegler, Regina G.; Stram, Alexander H.; Kolonel, Laurence N.; Marchand, Loïc Le; Henderson, Brian E.; Haiman, Christopher A.; Stram, Daniel O.

    2015-01-01

    Height has an extremely polygenic pattern of inheritance. Genome-wide association studies (GWAS) have revealed hundreds of common variants that are associated with human height at genome-wide levels of significance. However, only a small fraction of phenotypic variation can be explained by the aggregate of these common variants. In a large study of African-American men and women (n = 14,419), we genotyped and analyzed 966,578 autosomal SNPs across the entire genome using a linear mixed model variance components approach implemented in the program GCTA (Yang et al Nat Genet 2010), and estimated an additive heritability of 44.7% (se: 3.7%) for this phenotype in a sample of evidently unrelated individuals. While this estimated value is similar to that given by Yang et al in their analyses, we remain concerned about two related issues: (1) whether in the complete absence of hidden relatedness, variance components methods have adequate power to estimate heritability when a very large number of SNPs are used in the analysis; and (2) whether estimation of heritability may be biased, in real studies, by low levels of residual hidden relatedness. We addressed the first question in a semi-analytic fashion by directly simulating the distribution of the score statistic for a test of zero heritability with and without low levels of relatedness. The second question was addressed by a very careful comparison of the behavior of estimated heritability for both observed (self-reported) height and simulated phenotypes compared to imputation R² as a function of the number of SNPs used in the analysis. 
These simulations help to address the important question about whether today's GWAS SNPs will remain useful for imputing causal variants that are discovered using very large sample sizes in future studies of height, or whether the causal variants themselves will need to be genotyped de novo in order to build a prediction model that ultimately captures a large fraction of the variability of height, and by implication other complex phenotypes. Our overall conclusions are that when study sizes are quite large (5,000 or so) the additive heritability estimate for height is not apparently biased upwards using the linear mixed model; however there is evidence in our simulation that a very large number of causal variants (many thousands) each with very small effect on phenotypic variance will need to be discovered to fill the gap between the heritability explained by known versus unknown causal variants. We conclude that today's GWAS data will remain useful in the future for causal variant prediction, but that finding the causal variants that need to be predicted may be extremely laborious. PMID:26125186

  15. Methodological Considerations in Estimation of Phenotype Heritability Using Genome-Wide SNP Data, Illustrated by an Analysis of the Heritability of Height in a Large Sample of African Ancestry Adults.

    PubMed

    Chen, Fang; He, Jing; Zhang, Jianqi; Chen, Gary K; Thomas, Venetta; Ambrosone, Christine B; Bandera, Elisa V; Berndt, Sonja I; Bernstein, Leslie; Blot, William J; Cai, Qiuyin; Carpten, John; Casey, Graham; Chanock, Stephen J; Cheng, Iona; Chu, Lisa; Deming, Sandra L; Driver, W Ryan; Goodman, Phyllis; Hayes, Richard B; Hennis, Anselm J M; Hsing, Ann W; Hu, Jennifer J; Ingles, Sue A; John, Esther M; Kittles, Rick A; Kolb, Suzanne; Leske, M Cristina; Millikan, Robert C; Monroe, Kristine R; Murphy, Adam; Nemesure, Barbara; Neslund-Dudas, Christine; Nyante, Sarah; Ostrander, Elaine A; Press, Michael F; Rodriguez-Gil, Jorge L; Rybicki, Ben A; Schumacher, Fredrick; Stanford, Janet L; Signorello, Lisa B; Strom, Sara S; Stevens, Victoria; Van Den Berg, David; Wang, Zhaoming; Witte, John S; Wu, Suh-Yuh; Yamamura, Yuko; Zheng, Wei; Ziegler, Regina G; Stram, Alexander H; Kolonel, Laurence N; Le Marchand, Loïc; Henderson, Brian E; Haiman, Christopher A; Stram, Daniel O

    2015-01-01

    Height has an extremely polygenic pattern of inheritance. Genome-wide association studies (GWAS) have revealed hundreds of common variants that are associated with human height at genome-wide levels of significance. However, only a small fraction of phenotypic variation can be explained by the aggregate of these common variants. In a large study of African-American men and women (n = 14,419), we genotyped and analyzed 966,578 autosomal SNPs across the entire genome using a linear mixed model variance components approach implemented in the program GCTA (Yang et al Nat Genet 2010), and estimated an additive heritability of 44.7% (se: 3.7%) for this phenotype in a sample of evidently unrelated individuals. While this estimated value is similar to that given by Yang et al in their analyses, we remain concerned about two related issues: (1) whether in the complete absence of hidden relatedness, variance components methods have adequate power to estimate heritability when a very large number of SNPs are used in the analysis; and (2) whether estimation of heritability may be biased, in real studies, by low levels of residual hidden relatedness. We addressed the first question in a semi-analytic fashion by directly simulating the distribution of the score statistic for a test of zero heritability with and without low levels of relatedness. The second question was addressed by a very careful comparison of the behavior of estimated heritability for both observed (self-reported) height and simulated phenotypes compared to imputation R2 as a function of the number of SNPs used in the analysis. 
These simulations help to address the important question about whether today's GWAS SNPs will remain useful for imputing causal variants that are discovered using very large sample sizes in future studies of height, or whether the causal variants themselves will need to be genotyped de novo in order to build a prediction model that ultimately captures a large fraction of the variability of height, and by implication other complex phenotypes. Our overall conclusions are that when study sizes are quite large (5,000 or so) the additive heritability estimate for height is not apparently biased upwards using the linear mixed model; however there is evidence in our simulation that a very large number of causal variants (many thousands) each with very small effect on phenotypic variance will need to be discovered to fill the gap between the heritability explained by known versus unknown causal variants. We conclude that today's GWAS data will remain useful in the future for causal variant prediction, but that finding the causal variants that need to be predicted may be extremely laborious.
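The GCTA-style analysis described above estimates additive heritability from genome-wide SNPs via a variance-components model built on a genetic relationship matrix (GRM). As a rough illustration of the ingredients (not the GCTA REML implementation itself), the sketch below builds a GRM from standardized genotypes and applies a simple Haseman-Elston regression, a related moment-based estimator; all genotypes, sample sizes, and the fully genetic phenotype are simulated stand-ins.

```python
import random

def grm(genos):
    """Genetic relationship matrix A[j][k] = (1/m) * sum_i z[i][j] * z[i][k],
    where z is each SNP's genotype standardized by its sample mean and SD."""
    n, m = len(genos), len(genos[0])
    z = []
    for i in range(m):
        col = [g[i] for g in genos]
        mu = sum(col) / n
        var = sum((x - mu) ** 2 for x in col) / n
        sd = var ** 0.5 if var > 0 else 1.0
        z.append([(x - mu) / sd for x in col])
    return [[sum(z[i][j] * z[i][k] for i in range(m)) / m for k in range(n)]
            for j in range(n)]

def haseman_elston_h2(genos, pheno):
    """Regress phenotype cross-products y_j * y_k on off-diagonal relatedness
    A[j][k]; the slope is a moment-based estimate of additive heritability."""
    n = len(pheno)
    mu = sum(pheno) / n
    sd = (sum((v - mu) ** 2 for v in pheno) / n) ** 0.5
    y = [(v - mu) / sd for v in pheno]
    A = grm(genos)
    xs, ys = [], []
    for j in range(n):
        for k in range(j + 1, n):
            xs.append(A[j][k])
            ys.append(y[j] * y[k])
    xbar, ybar = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - xbar) * (v - ybar) for x, v in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den

# Demo with invented data: 120 individuals, 60 SNPs, phenotype fully genetic.
rng = random.Random(2)
n_ind, n_snp = 120, 60
freqs = [rng.uniform(0.2, 0.8) for _ in range(n_snp)]
genos = [[int(rng.random() < f) + int(rng.random() < f) for f in freqs]
         for _ in range(n_ind)]
A = grm(genos)
pheno = [sum(g) for g in genos]   # allele-count sum: a purely genetic phenotype
h2_hat = haseman_elston_h2(genos, pheno)
```

A useful sanity check on the GRM built this way is that the mean of its diagonal is exactly 1 when genotypes are standardized by their in-sample variance.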

  16. Cherry-picking functionally relevant substates from long MD trajectories using a stratified sampling approach.

    PubMed

    Chandramouli, Balasubramanian; Mancini, Giordano

    2016-01-01

    Classical molecular dynamics (MD) simulations can provide insights into protein dynamics at the nanoscopic scale. Currently, simulations of large proteins and complexes can be routinely carried out in the ns-μs time regime. Clustering of MD trajectories is often performed to identify selective conformations and to compare simulation and experimental data coming from different sources on closely related systems. However, clustering techniques are usually applied without careful validation of the results, and benchmark studies that apply different algorithms to MD data often deal with relatively small peptides rather than average or large proteins; finally, clustering is often applied both as a means to analyze refined data and as a way to simplify further analysis of trajectories. Herein, we propose a strategy to classify MD data while carefully benchmarking the performance of clustering algorithms and of internal validation criteria for such methods. We demonstrate the method on two showcase systems with different features, and compare the classification of trajectories in real and PCA space. We posit that the prototype procedure adopted here could be highly fruitful in clustering large trajectories of multiple systems, or those resulting from enhanced sampling techniques such as replica exchange simulations.
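A minimal illustration of the kind of benchmarking this record describes: cluster toy "trajectory frames" with a small k-means (deterministic farthest-point seeding) and score the partition with the mean silhouette, one common internal validation criterion. The data, seeding strategy, and parameters are invented for the sketch; a real study would use actual MD coordinates or their PCA projections.

```python
import random
from math import dist  # Euclidean distance between coordinate tuples (3.8+)

def farthest_point_init(points, k):
    """Deterministic seeding: start from the first frame, then repeatedly
    add the frame farthest from the centroids chosen so far."""
    centroids = [points[0]]
    while len(centroids) < k:
        centroids.append(max(points, key=lambda p: min(dist(p, c) for c in centroids)))
    return centroids

def kmeans(points, k, iters=25):
    """Plain Lloyd's algorithm on a list of coordinate tuples."""
    centroids = farthest_point_init(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: dist(p, centroids[c])) for p in points]
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = tuple(sum(x) / len(members) for x in zip(*members))
    return labels

def mean_silhouette(points, labels):
    """Mean silhouette s = (b - a) / max(a, b): an internal validation
    criterion; values near 1 indicate compact, well-separated clusters."""
    def avg(p, idxs):
        return sum(dist(p, points[j]) for j in idxs) / len(idxs) if idxs else 0.0
    out = []
    for i, p in enumerate(points):
        same = [j for j, l in enumerate(labels) if l == labels[i] and j != i]
        a = avg(p, same)
        b = min(avg(p, [j for j, l in enumerate(labels) if l == o])
                for o in set(labels) if o != labels[i])
        out.append((b - a) / max(a, b) if max(a, b) > 0 else 0.0)
    return sum(out) / len(out)

# Demo: two artificial "conformational substates" as 3-D point clouds.
rng = random.Random(0)
frames = ([tuple(rng.gauss(0.0, 0.5) for _ in range(3)) for _ in range(40)] +
          [tuple(5.0 + rng.gauss(0.0, 0.5) for _ in range(3)) for _ in range(40)])
labels = kmeans(frames, 2)
score = mean_silhouette(frames, labels)
```

With well-separated substates the silhouette is high; on real trajectories, comparing this score across algorithms and cluster counts is exactly the validation step the record argues is usually skipped.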

  17. Evaluating best educational practices, student satisfaction, and self-confidence in simulation: A descriptive study.

    PubMed

    Zapko, Karen A; Ferranto, Mary Lou Gemma; Blasiman, Rachael; Shelestak, Debra

    2018-01-01

    The National League for Nursing (NLN) has endorsed simulation as a necessary teaching approach to prepare students for the demanding role of professional nursing. Questions arise about the suitability of simulation experiences to educate students. Empirical support for the effect of simulation on patient outcomes is sparse. Most studies on simulation report only anecdotal results rather than data obtained using evaluative tools. The aim of this study was to examine student perception of best educational practices in simulation and to evaluate their satisfaction and self-confidence in simulation. This was a descriptive study designed to explore students' perceptions of the simulation experience over a two-year period. Using the Jeffries framework, a Simulation Day was designed consisting of serial patient simulations using high- and medium-fidelity simulators and live patient actors. The setting for the study was a regional campus of a large Midwestern Research 2 university. The convenience sample consisted of 199 participants and included sophomore, junior, and senior nursing students enrolled in the baccalaureate nursing program. Participants rotated through four scenarios that corresponded to their level in the nursing program. Data were collected in two consecutive years. Participants completed both the Educational Practices Questionnaire (Student Version) and the Student Satisfaction and Self-Confidence in Learning Scale. Results provide strong support for using serial simulation as a learning tool. Students were satisfied with the experience, felt confident in their performance, and felt the simulations were based on sound educational practices and were important for learning. Serial simulations, and having students experience simulations more than once in consecutive years, constitute a valuable method of clinical instruction. When conducted well, simulations can lead to increased student satisfaction and self-confidence.

  18. How far does the CO2 travel beyond a leaky point?

    NASA Astrophysics Data System (ADS)

    Kong, X.; Delshad, M.; Wheeler, M.

    2012-12-01

    Numerous research studies have been carried out to investigate the long-term feasibility of safely storing large volumes of CO2 in subsurface saline aquifers. The injected CO2 undergoes complex petrophysical and geochemical processes. During these processes, part of the CO2 is trapped while some remains as a mobile phase, posing a leakage risk. Comprehensive and accurate characterization of the trapping and leakage mechanisms is critical for assessing the safety of sequestration, and remains a challenge in this research area. We have studied different leakage scenarios using realistic aquifer properties, including heterogeneity, and put forward a comprehensive trapping model for CO2 in deep saline aquifers. The reservoir models include several geological layers and caprocks up to the near surface. Leakage scenarios such as fractures, high-permeability pathways, and abandoned wells are studied. In order to model the fractures accurately, very fine grids are needed near the fracture. Because the aquifer usually has a large volume and the reservoir model therefore needs a large number of grid blocks, simulation would be computationally expensive. To deal with this challenge, we carried out the simulations using our in-house parallel reservoir simulator. Our study shows the significance of capillary pressure and permeability-porosity variations on CO2 trapping and leakage. The improved understanding of trapping and leakage will provide confidence in future implementation of sequestration projects.

  19. Large eddy simulation of forest canopy flow for wildland fire modeling

    Treesearch

    Eric Mueller; William Mell; Albert Simeoni

    2014-01-01

    Large eddy simulation (LES) based computational fluid dynamics (CFD) simulators have obtained increasing attention in the wildland fire research community, as these tools allow the inclusion of important driving physics. However, due to the complexity of the models, individual aspects must be isolated and tested rigorously to ensure meaningful results. As wind is a...

  20. Projected future vegetation changes for the northwest United States and southwest Canada at a fine spatial resolution using a dynamic global vegetation model.

    USGS Publications Warehouse

    Shafer, Sarah; Bartlein, Patrick J.; Gray, Elizabeth M.; Pelltier, Richard T.

    2015-01-01

    Future climate change may significantly alter the distributions of many plant taxa. The effects of climate change may be particularly large in mountainous regions where climate can vary significantly with elevation. Understanding potential future vegetation changes in these regions requires methods that can resolve vegetation responses to climate change at fine spatial resolutions. We used LPJ, a dynamic global vegetation model, to assess potential future vegetation changes for a large topographically complex area of the northwest United States and southwest Canada (38.0–58.0°N latitude by 136.6–103.0°W longitude). LPJ is a process-based vegetation model that mechanistically simulates the effect of changing climate and atmospheric CO2 concentrations on vegetation. It was developed and has been mostly applied at spatial resolutions of 10-minutes or coarser. In this study, we used LPJ at a 30-second (~1-km) spatial resolution to simulate potential vegetation changes for 2070–2099. LPJ was run using downscaled future climate simulations from five coupled atmosphere-ocean general circulation models (CCSM3, CGCM3.1(T47), GISS-ER, MIROC3.2(medres), UKMO-HadCM3) produced using the A2 greenhouse gases emissions scenario. Under projected future climate and atmospheric CO2 concentrations, the simulated vegetation changes result in the contraction of alpine, shrub-steppe, and xeric shrub vegetation across the study area and the expansion of woodland and forest vegetation. Large areas of maritime cool forest and cold forest are simulated to persist under projected future conditions. The fine spatial-scale vegetation simulations resolve patterns of vegetation change that are not visible at coarser resolutions and these fine-scale patterns are particularly important for understanding potential future vegetation changes in topographically complex areas.

  1. Are Current Atomistic Force Fields Accurate Enough to Study Proteins in Crowded Environments?

    PubMed Central

    Petrov, Drazen; Zagrovic, Bojan

    2014-01-01

    The high concentration of macromolecules in the crowded cellular interior influences different thermodynamic and kinetic properties of proteins, including their structural stabilities, intermolecular binding affinities and enzymatic rates. Moreover, various structural biology methods, such as NMR or different spectroscopies, typically involve samples with relatively high protein concentration. Due to large sampling requirements, however, the accuracy of classical molecular dynamics (MD) simulations in capturing protein behavior at high concentration still remains largely untested. Here, we use explicit-solvent MD simulations and a total of 6.4 µs of simulated time to study wild-type (folded) and oxidatively damaged (unfolded) forms of villin headpiece at 6 mM and 9.2 mM protein concentration. We first perform an exhaustive set of simulations with multiple protein molecules in the simulation box using GROMOS 45a3 and 54a7 force fields together with different types of electrostatics treatment and solution ionic strengths. Surprisingly, the two villin headpiece variants exhibit similar aggregation behavior, despite the fact that their estimated aggregation propensities markedly differ. Importantly, regardless of the simulation protocol applied, wild-type villin headpiece consistently aggregates even under conditions at which it is experimentally known to be soluble. We demonstrate that aggregation is accompanied by a large decrease in the total potential energy, with not only hydrophobic, but also polar residues and backbone contributing substantially. The same effect is directly observed for two other major atomistic force fields (AMBER99SB-ILDN and CHARMM22-CMAP) as well as indirectly shown for additional two (AMBER94, OPLS-AAL), and is possibly due to a general overestimation of the potential energy of protein-protein interactions at the expense of water-water and water-protein interactions. 
Overall, our results suggest that current MD force fields may distort the picture of protein behavior in biologically relevant crowded environments. PMID:24854339

  2. An analysis of the number of parking bays and checkout counters for a supermarket using SAS simulation studio

    NASA Astrophysics Data System (ADS)

    Kar, Leow Soo

    2014-07-01

    Two important factors that influence customer satisfaction in large supermarkets or hypermarkets are adequate parking facilities and short waiting times at the checkout counters. This paper describes the simulation analysis of a large supermarket to determine the optimal levels of these two factors. SAS Simulation Studio is used to model a large supermarket in a shopping mall with a car park facility. In order to make the simulation model more realistic, a number of complexities are introduced into the model. For example, arrival patterns of customers vary with the time of day (morning, afternoon, and evening) and with the day of the week (weekday or weekend), as do the transport mode of arriving customers (by car or other means), the mode of payment (cash or credit card), customer shopping pattern (leisurely, normal, exact), and choice of checkout counter (normal or express). In this study, we focus on two important components of the simulation model, namely the parking area and the normal and express checkout counters. The parking area is modeled using a Resource Pool block where one resource unit represents one parking bay. A customer arriving by car seizes a unit of the resource from the Pool block (parks the car) and only releases it when he exits the system. Cars arriving when the Resource Pool is empty (no more parking bays) leave without entering the system. The normal and express checkouts are represented by Server blocks with appropriate service time distributions. As a case study, a supermarket in a shopping mall in Bangsar with a limited number of parking bays was chosen for this research. Empirical data on arrival patterns, arrival modes, payment modes, shopping patterns, and service times of the checkout counters were collected and analyzed to validate the model. Sensitivity analysis was also performed with different simulation scenarios to identify the optimal number of parking bays and checkout counters.
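The Resource Pool and Server logic described above can be sketched as a small discrete-event simulation: a car seizes a parking bay on arrival (or balks if none is free), shops, queues for a checkout counter, and releases the bay when service finishes. All rates and capacities below are invented placeholders, not the paper's calibrated values, and the single checkout class stands in for the normal/express split.

```python
import heapq
import random

def simulate(sim_time=240.0, bays=8, counters=2, seed=3,
             mean_interarrival=1.0, mean_shop=20.0, mean_service=1.5):
    """Discrete-event sketch of the car park + checkout model (toy rates,
    exponential times). Returns arrival/served/balked counts."""
    rng = random.Random(seed)
    events = [(rng.expovariate(1.0 / mean_interarrival), 0, "arrive")]
    free_bays, free_counters = bays, counters
    waiting = arrived = served = balked = 0
    seq = 0
    while events:
        t, _, kind = heapq.heappop(events)
        seq += 1
        if kind == "arrive":
            if t > sim_time:
                continue             # stop new arrivals; drain remaining events
            arrived += 1
            heapq.heappush(events,
                           (t + rng.expovariate(1.0 / mean_interarrival), seq, "arrive"))
            if free_bays == 0:
                balked += 1          # car park full: customer leaves the system
            else:
                free_bays -= 1       # seize one unit of the bay "resource pool"
                heapq.heappush(events,
                               (t + rng.expovariate(1.0 / mean_shop), seq, "done_shop"))
        elif kind == "done_shop":
            if free_counters > 0:    # a counter is free: start service now
                free_counters -= 1
                heapq.heappush(events,
                               (t + rng.expovariate(1.0 / mean_service), seq, "done_service"))
            else:
                waiting += 1         # otherwise join the checkout queue
        else:                        # done_service: release counter, then bay
            served += 1
            free_bays += 1
            if waiting > 0:          # next queued customer starts service
                waiting -= 1
                heapq.heappush(events,
                               (t + rng.expovariate(1.0 / mean_service), seq, "done_service"))
            else:
                free_counters += 1
    return {"arrived": arrived, "served": served, "balked": balked,
            "bays_free_at_end": free_bays}

stats = simulate()
```

Sweeping `bays` and `counters` over a grid and comparing the balking counts and queue delays is the pure-Python analogue of the sensitivity analysis done in SAS Simulation Studio.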

  3. Effect of real-time boundary wind conditions on the air flow and pollutant dispersion in an urban street canyon—Large eddy simulations

    NASA Astrophysics Data System (ADS)

    Zhang, Yun-Wei; Gu, Zhao-Lin; Cheng, Yan; Lee, Shun-Cheng

    2011-07-01

    Air flow and pollutant dispersion characteristics in an urban street canyon are studied under real-time boundary conditions. A new scheme for realizing real-time boundary conditions in simulations is proposed, to keep the upper boundary wind conditions consistent with the measured time series of wind data. The air flow structure and its evolution under real-time boundary wind conditions are simulated using this new scheme, and the induced effect of the time series of ambient wind conditions on the flow structures inside and above the street canyon is investigated. The flow shows an obvious intermittent feature in the street canyon, and the flapping of the shear layer forms near the roof layer under real-time wind conditions, resulting in the expansion or compression of the air mass in the canyon. The simulations of pollutant dispersion show that the pollutants inside and above the street canyon are transported by different dispersion mechanisms, depending on the time series of air flow structures. Large-scale air movements during the expansion or compression of the air mass in the canyon exhibit obvious effects on pollutant dispersion. The simulations also show that the transport of pollutants from the canyon to the upper air flow is dominated by the shear-layer turbulence near the roof level and by the expansion or compression of the air mass in the street canyon under real-time boundary wind conditions. In particular, the expansion of the air mass, which features large-scale air movement, contributes more to pollutant dispersion in this study. Comparisons of simulated results under different boundary wind conditions indicate that real-time boundary wind conditions produce better conditions for pollutant dispersion than artificially designed steady boundary wind conditions.

  4. Projected Future Vegetation Changes for the Northwest United States and Southwest Canada at a Fine Spatial Resolution Using a Dynamic Global Vegetation Model

    PubMed Central

    Shafer, Sarah L.; Bartlein, Patrick J.; Gray, Elizabeth M.; Pelltier, Richard T.

    2015-01-01

    Future climate change may significantly alter the distributions of many plant taxa. The effects of climate change may be particularly large in mountainous regions where climate can vary significantly with elevation. Understanding potential future vegetation changes in these regions requires methods that can resolve vegetation responses to climate change at fine spatial resolutions. We used LPJ, a dynamic global vegetation model, to assess potential future vegetation changes for a large topographically complex area of the northwest United States and southwest Canada (38.0–58.0°N latitude by 136.6–103.0°W longitude). LPJ is a process-based vegetation model that mechanistically simulates the effect of changing climate and atmospheric CO2 concentrations on vegetation. It was developed and has been mostly applied at spatial resolutions of 10-minutes or coarser. In this study, we used LPJ at a 30-second (~1-km) spatial resolution to simulate potential vegetation changes for 2070–2099. LPJ was run using downscaled future climate simulations from five coupled atmosphere-ocean general circulation models (CCSM3, CGCM3.1(T47), GISS-ER, MIROC3.2(medres), UKMO-HadCM3) produced using the A2 greenhouse gases emissions scenario. Under projected future climate and atmospheric CO2 concentrations, the simulated vegetation changes result in the contraction of alpine, shrub-steppe, and xeric shrub vegetation across the study area and the expansion of woodland and forest vegetation. Large areas of maritime cool forest and cold forest are simulated to persist under projected future conditions. The fine spatial-scale vegetation simulations resolve patterns of vegetation change that are not visible at coarser resolutions and these fine-scale patterns are particularly important for understanding potential future vegetation changes in topographically complex areas. PMID:26488750

  5. An Arctic source for the Great Salinity Anomaly - A simulation of the Arctic ice-ocean system for 1955-1975

    NASA Technical Reports Server (NTRS)

    Hakkinen, Sirpa

    1993-01-01

    The paper employs a fully prognostic Arctic ice-ocean model to study the interannual variability of sea ice during the period 1955-1975 and to explain the large variability of the ice extent in the Greenland and Iceland seas during the late 1960s. The model is used to test the contention of Aagaard and Carmack (1989) that the Great Salinity Anomaly (GSA) was a consequence of the anomalously large ice export in 1968. The high-latitude ice-ocean circulation changes due to wind field changes are explored. The ice export event of 1968 was the largest in the simulation, being about twice as large as the average and corresponding to 1600 cu km of excess fresh water. The simulations suggest that, besides the above average ice export to the Greenland Sea, there was also fresh water export to support the larger than average ice cover. The model results show the origin of the GSA to be in the Arctic, and support the view that the Arctic may play an active role in climate change.

  6. Power-law versus log-law in wall-bounded turbulence: A large-eddy simulation perspective

    NASA Astrophysics Data System (ADS)

    Cheng, W.; Samtaney, R.

    2014-01-01

    The debate over whether the mean streamwise velocity in wall-bounded turbulent flows obeys a log-law or a power-law scaling originated over two decades ago, and continues to ferment in recent years. As experiments and direct numerical simulation cannot provide sufficient clues, in this study we present an insight into this debate from a large-eddy simulation (LES) viewpoint. The LES organically combines state-of-the-art models (the stretched-vortex model and inflow rescaling method) with a virtual-wall model derived under different scaling-law assumptions (the log law, or the power law of George and Castillo ["Zero-pressure-gradient turbulent boundary layer," Appl. Mech. Rev. 50, 689 (1997)]). Comparisons of LES results for Reθ ranging from 10^5 to 10^11 for zero-pressure-gradient turbulent boundary layer flows are carried out for the mean streamwise velocity, its gradient, and its scaled gradient. Our results provide strong evidence that for both sets of modeling assumptions (log law or power law), the turbulence gravitates naturally towards the log-law scaling at extremely large Reynolds numbers.
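For concreteness, the two competing scalings can be compared numerically: the log law u+ = (1/κ) ln(y+) + B against a simple power law u+ = C (y+)^γ. The constants below are conventional textbook values chosen for illustration, not the calibrated George-Castillo constants; over a moderate range of y+ the two curves differ by only a few percent, which is part of why the debate is hard to settle from data alone.

```python
import math

KAPPA, B = 0.41, 5.0     # conventional log-law constants (illustrative values)

def u_log(y_plus):
    """Log-law mean velocity: u+ = (1/kappa) * ln(y+) + B."""
    return math.log(y_plus) / KAPPA + B

def u_power(y_plus, C=8.3, gamma=1.0 / 7.0):
    """A classic 1/7th power-law fit: u+ = C * (y+)^gamma (illustrative)."""
    return C * y_plus ** gamma

# Relative difference between the two laws across a nominal overlap region.
y_vals = [30.0 * 10 ** (k / 10.0) for k in range(21)]   # y+ from 30 to 3000
rel_diff = [abs(u_log(y) - u_power(y)) / u_log(y) for y in y_vals]
```

Both profiles are monotonically increasing and stay within roughly 10% of each other here; discriminating them requires either very high Reynolds numbers or the gradient-based diagnostics used in the study.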

  7. Ssalmon - The Solar Simulations For The Atacama Large Millimeter Observatory Network

    NASA Astrophysics Data System (ADS)

    Wedemeyer, Sven; Ssalmon Group

    2016-07-01

    The Atacama Large Millimeter/submillimeter Array (ALMA) provides a new powerful tool for observing the solar chromosphere at high spatial, temporal, and spectral resolution, which will allow for addressing a wide range of scientific topics in solar physics. Numerical simulations of the solar atmosphere and modeling of instrumental effects are valuable tools for constraining, preparing and optimizing future observations with ALMA and for interpreting the results. In order to co-ordinate related activities, the Solar Simulations for the Atacama Large Millimeter Observatory Network (SSALMON) was initiated on September 1st, 2014, in connection with the NA- and EU-led solar ALMA development studies. As of April, 2015, SSALMON has grown to 83 members from 18 countries (plus ESO and ESA). Another important goal of SSALMON is to promote the scientific potential of solar science with ALMA, which has resulted in two major publications so far. During 2015, the SSALMON Expert Teams produced a White Paper with potential science cases for Cycle 4, which will be the first time regular solar observations will be carried out. Registration and more information at http://www.ssalmon.uio.no.

  8. Large Scale Traffic Simulations

    DOT National Transportation Integrated Search

    1997-01-01

    Large scale microscopic (i.e. vehicle-based) traffic simulations pose high demands on computation speed in at least two application areas: (i) real-time traffic forecasting, and (ii) long-term planning applications (where repeated "looping" between t...

  9. Turbulence and pollutant transport in urban street canyons under stable stratification: a large-eddy simulation

    NASA Astrophysics Data System (ADS)

    Li, X.

    2014-12-01

    Thermal stratification of the atmospheric surface layer has a strong impact on the land-atmosphere exchange of turbulent momentum, heat, and pollutant fluxes. Few studies have been carried out on the interaction of a weakly to moderately stably stratified atmosphere with the urban canopy. This study performs a large-eddy simulation of a modeled street canyon within a weakly to moderately stable atmospheric boundary layer. To better resolve the smaller eddy sizes resulting from the stable stratification, a higher spatial and temporal resolution is used. The detailed flow structure and turbulence inside the street canyon are analyzed. The relationship between pollutant dispersion and the Richardson number of the atmosphere is investigated. Differences between these characteristics and those under neutral and unstable atmospheric boundary layers are emphasized.
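The stability parameter mentioned above, the gradient Richardson number, can be computed directly from mean profiles; a common rule of thumb places weak stability below a critical value of about 0.25. The profile gradients below are illustrative values, not output from the LES in the record.

```python
def gradient_richardson(dtheta_dz, du_dz, theta0=300.0, g=9.81):
    """Gradient Richardson number: Ri = (g / theta0) * (dtheta/dz) / (du/dz)^2.
    dtheta_dz: potential-temperature gradient [K/m]; du_dz: shear [1/s]."""
    return (g / theta0) * dtheta_dz / du_dz ** 2

# Illustrative profiles: weak stability vs. strong stability.
ri_weak = gradient_richardson(0.01, 0.05)    # modest gradient, strong shear
ri_strong = gradient_richardson(0.05, 0.03)  # strong gradient, weak shear
```

The first case falls below the nominal critical value of 0.25 (turbulence sustained), while the second exceeds it (turbulence suppressed), the regime distinction the study probes.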

  10. State-space reduction and equivalence class sampling for a molecular self-assembly model.

    PubMed

    Packwood, Daniel M; Han, Patrick; Hitosugi, Taro

    2016-07-01

    Direct simulation of a model with a large state space will generate enormous volumes of data, much of which is not relevant to the questions under study. In this paper, we consider a molecular self-assembly model as a typical example of a large state-space model, and present a method for selectively retrieving 'target information' from this model. This method partitions the state space into equivalence classes, as identified by an appropriate equivalence relation. The set of equivalence classes H, which serves as a reduced state space, contains none of the superfluous information of the original model. After construction and characterization of a Markov chain with state space H, the target information is efficiently retrieved via Markov chain Monte Carlo sampling. This approach represents a new breed of simulation techniques which are highly optimized for studying molecular self-assembly and, moreover, serves as a valuable guideline for analysis of other large state-space models.
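A toy version of the reduction this record describes: partition a small lattice-gas state space into equivalence classes keyed by an invariant that determines the target quantity (here occupancy and energy), then run Metropolis Monte Carlo over the reduced space H with each class weighted by its size times a Boltzmann factor. The model and all parameters are invented for illustration and are far simpler than a molecular self-assembly model.

```python
import math
import random
from collections import Counter

def energy(bits):
    """Ring lattice-gas energy: -1 for each adjacent pair of occupied sites."""
    n = len(bits)
    return -sum(bits[i] and bits[(i + 1) % n] for i in range(n))

def build_classes(n):
    """Partition the 2^n configurations into equivalence classes keyed by
    (occupancy, energy); class members are interchangeable for our target."""
    sizes = Counter()
    for s in range(2 ** n):
        bits = [(s >> i) & 1 for i in range(n)]
        sizes[(sum(bits), energy(bits))] += 1
    return sizes

def mcmc_mean_energy(sizes, beta=0.5, steps=40000, seed=7):
    """Metropolis sampling over the reduced space H of classes.
    Target weight of class c: size(c) * exp(-beta * E(c))."""
    rng = random.Random(seed)
    classes = list(sizes)
    def logw(c):
        return math.log(sizes[c]) - beta * c[1]
    cur = classes[0]
    total = 0.0
    for _ in range(steps):
        prop = rng.choice(classes)               # uniform proposal over H
        if rng.random() < math.exp(min(0.0, logw(prop) - logw(cur))):
            cur = prop
        total += cur[1]
    return total / steps

# Demo: 8-site ring (256 states collapse to a handful of classes).
sizes = build_classes(8)
beta = 0.5
Z = sum(s * math.exp(-beta * e) for (_, e), s in sizes.items())
exact = sum(s * math.exp(-beta * e) * e for (_, e), s in sizes.items()) / Z
estimate = mcmc_mean_energy(sizes, beta=beta)
```

Because H is tiny compared with the full state space, the reduced chain mixes quickly and its mean-energy estimate agrees closely with exact enumeration.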

  11. Effects of numerical dissipation and unphysical excursions on scalar-mixing estimates in large-eddy simulations

    NASA Astrophysics Data System (ADS)

    Sharan, Nek; Matheou, Georgios; Dimotakis, Paul

    2017-11-01

    Artificial numerical dissipation decreases dispersive oscillations and can play a key role in mitigating unphysical scalar excursions in large eddy simulations (LES). Its influence on scalar mixing can be assessed through the resolved-scale scalar, Z , its probability density function (PDF), variance, spectra, and the budget of the horizontally averaged equation for Z2. LES of incompressible temporally evolving shear flow enabled us to study the influence of numerical dissipation on unphysical scalar excursions and mixing estimates. Flows with different mixing behavior, with both marching and non-marching scalar PDFs, are studied. Scalar fields for each flow are compared for different grid resolutions and numerical scalar-convection term schemes. As expected, increasing numerical dissipation enhances scalar mixing in the development stage of shear flow characterized by organized large-scale pairings with a non-marching PDF, but has little influence in the self-similar stage of flows with marching PDFs. Flow parameters and regimes sensitive to numerical dissipation help identify approaches to mitigate unphysical excursions while minimizing dissipation.
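The effect described above can be reproduced in a few lines with 1-D linear advection of a top-hat scalar: a dissipative first-order upwind scheme keeps the resolved scalar bounded, while a non-dissipative second-order scheme (Lax-Wendroff) produces dispersive oscillations and unphysical excursions outside the initial [0, 1] range. This is a generic finite-difference illustration, not the LES discretization used in the study.

```python
def advect(z0, c, steps, scheme):
    """Advance periodic 1-D advection dz/dt + u dz/dx = 0 at CFL number c."""
    z = list(z0)
    n = len(z)
    for _ in range(steps):
        zn = z[:]
        for i in range(n):
            zm, zp = z[i - 1], z[(i + 1) % n]
            if scheme == "upwind":
                # first-order upwind: dissipative but monotone (no new extrema)
                zn[i] = z[i] - c * (z[i] - zm)
            else:
                # Lax-Wendroff: second-order, non-dissipative, dispersive
                zn[i] = z[i] - 0.5 * c * (zp - zm) + 0.5 * c * c * (zp - 2 * z[i] + zm)
        z = zn
    return z

# Top-hat scalar in [0, 1] advected 40 cells to the right (80 steps at c = 0.5).
n = 100
top_hat = [1.0 if 20 <= i < 40 else 0.0 for i in range(n)]
z_upwind = advect(top_hat, 0.5, 80, "upwind")
z_lw = advect(top_hat, 0.5, 80, "lax-wendroff")
```

Both schemes conserve the scalar integral, but only the dissipative one preserves boundedness, the same trade-off between mitigating excursions and over-smoothing mixing estimates discussed in the abstract.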

  12. Large-scale parentage inference with SNPs: an efficient algorithm for statistical confidence of parent pair allocations.

    PubMed

    Anderson, Eric C

    2012-11-08

    Advances in genotyping that allow tens of thousands of individuals to be genotyped at a moderate number of single nucleotide polymorphisms (SNPs) permit parentage inference to be pursued on a very large scale. The intergenerational tagging this capacity allows is revolutionizing the management of cultured organisms (cows, salmon, etc.) and is poised to do the same for scientific studies of natural populations. Currently, however, there are no likelihood-based methods of parentage inference which are implemented in a manner that allows them to quickly handle a very large number of potential parents or parent pairs. Here we introduce an efficient likelihood-based method applicable to the specialized case of cultured organisms in which both parents can be reliably sampled. We develop a Markov chain representation for the cumulative number of Mendelian incompatibilities between an offspring and its putative parents and we exploit it to develop a fast algorithm for simulation-based estimates of statistical confidence in SNP-based assignments of offspring to pairs of parents. The method is implemented in the freely available software SNPPIT. We describe the method in detail, then assess its performance in a large simulation study using known allele frequencies at 96 SNPs from ten hatchery salmon populations. The simulations verify that the method is fast and accurate and that 96 well-chosen SNPs can provide sufficient power to identify the correct pair of parents from amongst millions of candidate pairs.
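The core counting step behind the method, tallying Mendelian incompatibilities between an offspring and a putative parent pair at biallelic SNPs coded 0/1/2, can be sketched directly (this is the generic genetics logic, not the SNPPIT implementation):

```python
def mendel_incompatibilities(offspring, mother, father):
    """Count loci where the offspring genotype (0/1/2 copies of the reference
    allele) could not have been produced by the putative parent pair."""
    def gametes(g):
        # alleles a parent with genotype g can transmit
        return {0: {0}, 1: {0, 1}, 2: {1}}[g]
    bad = 0
    for o, m, f in zip(offspring, mother, father):
        possible = {a + b for a in gametes(m) for b in gametes(f)}
        if o not in possible:
            bad += 1
    return bad

# Demo at 5 hypothetical SNPs: the true pair is fully compatible,
# while an obviously wrong pair (all-homozygous 2/2) mismatches everywhere.
true_mother = [0, 1, 2, 1, 0]
true_father = [2, 1, 0, 1, 2]
child       = [1, 1, 1, 0, 1]
```

In a likelihood-based assignment the cumulative incompatibility count (accumulated locus by locus, as in the paper's Markov chain representation) would be combined with genotyping-error rates; here only the counting step is shown.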

  13. Large-watershed flood simulation and forecasting based on different-resolution distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Li, J.

    2017-12-01

    Large-watershed flood simulation and forecasting is very important in the application of distributed hydrological models, and poses several challenges, including the effect of the model's spatial resolution on performance and accuracy. To address the resolution effect, the distributed hydrological model (the Liuxihe model) was built at resolutions of 1000 m × 1000 m, 600 m × 600 m, 500 m × 500 m, 400 m × 400 m, and 200 m × 200 m, with the aim of finding the best resolution for large-watershed flood simulation and forecasting. This study sets up a physically based distributed hydrological model for flood forecasting of the Liujiang River basin in south China. The terrain data (digital elevation model (DEM), soil type, and land use type) are freely downloaded from the web. The model parameters are optimized by using an improved Particle Swarm Optimization (PSO) algorithm; parameter optimization reduces the uncertainty that exists when model parameters are derived physically. The results show that the best spatial resolution for flood simulation and forecasting is 200 m × 200 m, and that model performance and accuracy worsen as the spatial resolution is reduced. At a resolution of 1000 m × 1000 m, the flood simulation and forecasting results are the worst, and the river channel delineated at this resolution differs from the actual one. To keep the model at an acceptable performance, a minimum spatial resolution is needed. The suggested threshold spatial resolution for modeling floods in the Liujiang River basin is a 500 m × 500 m grid cell, but a 200 m × 200 m grid cell is recommended in this study to keep the model at its best performance.
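Model performance comparisons of the kind reported above are commonly scored with the Nash-Sutcliffe efficiency (NSE) of simulated versus observed discharge. The abstract does not name its criterion, so NSE is an assumed, generic choice here, and the hydrographs below are made up for illustration.

```python
def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance-about-mean of the
    observations. 1.0 is a perfect fit; values <= 0 mean the model does
    no better than predicting the observed mean discharge."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    denom = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / denom

# Made-up flood hydrograph (discharge at 6 time steps) and two candidate
# simulations, standing in for runs at fine and coarse grid resolutions.
obs  = [1.0, 3.0, 5.0, 7.0, 5.0, 3.0]
good = [1.2, 2.8, 5.1, 6.9, 5.2, 2.9]   # close fit, e.g. a fine-grid run
poor = [4.0] * 6                        # flat prediction at the observed mean
```

Ranking runs at different grid resolutions by NSE is one standard way to quantify the "performance worsens as resolution coarsens" conclusion.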

  14. Comparisons of some large scientific computers

    NASA Technical Reports Server (NTRS)

    Credeur, K. R.

    1981-01-01

In 1975, the National Aeronautics and Space Administration (NASA) began studies to assess the technical and economic feasibility of developing a computer with a sustained computational speed of one billion floating point operations per second and a working memory of at least 240 million words. Such a powerful computer would allow computational aerodynamics to play a major role in aeronautical design and advanced fluid dynamics research. Based on favorable results from these studies, NASA proceeded with development plans. The computer was named the Numerical Aerodynamic Simulator (NAS). To help ensure that the estimated cost, schedule, and technical scope were realistic, a brief study was made of past large scientific computers. Large discrepancies between inception and operation in scope, cost, or schedule were studied so that they could be minimized in NASA's proposed new computer. The main computers studied were the ILLIAC IV, STAR 100, Parallel Element Processor Ensemble (PEPE), and Shuttle Mission Simulator (SMS) computer. Comparison data on memory and speed were also obtained for the IBM 650, 704, 7090, 360-50, 360-67, 360-91, and 370-195; the CDC 6400, 6600, 7600, CYBER 203, and CYBER 205; the CRAY 1; and the Advanced Scientific Computer (ASC). The report concludes with a few lessons learned.

  15. Sand waves in environmental flows: Insights gained by coupling large-eddy simulation with morphodynamics

    NASA Astrophysics Data System (ADS)

    Sotiropoulos, Fotis; Khosronejad, Ali

    2016-02-01

Sand waves arise in subaqueous and aeolian environments as the result of the complex interaction between turbulent flows and mobile sand beds. They occur across a wide range of spatial scales, evolve at temporal scales much slower than the integral scale of the transporting turbulent flow, dominate river morphodynamics, undermine streambank stability and infrastructure during flooding, and sculpt terrestrial and extraterrestrial landscapes. In this paper, we present the vision behind our work over the last ten years, which has sought to develop computational tools capable of simulating the coupled interactions of sand waves with turbulence across the broad range of relevant scales: from small-scale ripples in laboratory flumes to mega-dunes in large rivers. We review the computational advances that have enabled us to simulate the genesis and long-term evolution of arbitrarily large and complex sand dunes in turbulent flows using large-eddy simulation, and we summarize numerous novel physical insights derived from our simulations. Our findings explain the role of turbulent sweeps in the near-bed region as the primary mechanism destabilizing the sand bed, show that the seeds of the emergent structure in dune fields lie in the heterogeneity of the turbulence and bed shear stress fluctuations over the initially flat bed, and elucidate how large dunes at equilibrium give rise to energetic coherent structures and modify the spectra of turbulence. We also discuss future challenges and our vision for advancing a data-driven, simulation-based engineering science approach to site-specific simulations of river flooding.
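
    The morphodynamic half of the coupling described above is commonly expressed through the Exner equation for bed evolution. Below is a minimal 1-D, periodic, explicit sketch of that bed update; the authors' solver is fully coupled to 3-D large-eddy simulation, which this toy does not attempt to reproduce, and the flux field and parameter values are illustrative assumptions:

    ```python
    import math

    def exner_update(bed, q_s, dx, dt, porosity=0.4):
        """One explicit step of the 1-D Exner equation,
        dz/dt = -(1 / (1 - porosity)) * dq_s/dx,
        with periodic boundaries and a central difference for the
        sediment-flux gradient: bed erodes where flux diverges and
        aggrades where it converges."""
        n = len(bed)
        out = []
        for i in range(n):
            dq = (q_s[(i + 1) % n] - q_s[i - 1]) / (2.0 * dx)
            out.append(bed[i] - dt * dq / (1.0 - porosity))
        return out

    # A sinusoidal sediment-flux perturbation over an initially flat bed:
    bed0 = [0.0] * 16
    q_s = [math.sin(2.0 * math.pi * i / 16) for i in range(16)]
    bed1 = exner_update(bed0, q_s, dx=1.0, dt=0.1)
    ```

    With periodic boundaries the central-difference flux gradients telescope to zero in sum, so total sediment mass is conserved by construction; that conservation property is what makes schemes of this family usable for long-term dune evolution.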

  16. WRF nested large-eddy simulations of deep convection during SEAC4RS

    NASA Astrophysics Data System (ADS)

    Heath, Nicholas Kyle

Deep convection is an important component of atmospheric circulations that affects many aspects of weather and climate. Therefore, improved understanding and realistic simulation of deep convection are critical to both operational and climate forecasts. Large-eddy simulations (LESs) are often used together with observations to enhance understanding of convective processes. This study develops and evaluates a nested-LES method using the Weather Research and Forecasting (WRF) model. Our goal is to evaluate the extent to which the WRF nested-LES approach is useful for studying deep convection in a real-world case. The method was applied on 2 September 2013, a day of continental convection with a robust set of ground-based and airborne data available for evaluation. A three-domain mesoscale WRF simulation is run first. Then, the finest mesoscale output (1.35 km grid length) is used to separately drive nested-LES domains with grid lengths of 450 and 150 m. Results reveal that the nested-LES approach reasonably simulates a broad spectrum of observations, from reflectivity distributions to vertical velocity profiles, during the study period. However, reducing the grid spacing does not necessarily improve results for our case, with the 450 m simulation outperforming the 150 m version. We find that simulated updrafts in the 150 m simulation are too narrow to overcome the negative effects of entrainment, thereby generating convection that is weaker than observed. Increasing the sub-grid mixing length in the 150 m simulation leads to deeper, more realistic convection, but at the expense of delaying the onset of convection. Overall, results show that both the 450 m and 150 m simulations are influenced considerably by the choice of sub-grid mixing length used in the LES turbulence closure. Finally, the simulations and observations are used to study the processes forcing the strong midlevel cloud-edge downdrafts observed on 2 September. Results suggest that these downdrafts are forced by evaporative cooling due to mixing near the cloud edge and by vertical perturbation pressure gradient forces acting to restore mass continuity around neighboring updrafts. We conclude that the WRF nested-LES approach provides an effective method for studying deep convection in our real-world case. The method can be used to provide insight into physical processes that are important to understanding observations, and it could be adapted for other case studies in which high-resolution observations are available for validation.

  17. Remote visualization and scale analysis of large turbulence datasets

    NASA Astrophysics Data System (ADS)

    Livescu, D.; Pulido, J.; Burns, R.; Canada, C.; Ahrens, J.; Hamann, B.

    2015-12-01

Accurate simulations of turbulent flows require resolving all the dynamically relevant scales of motion. This technique, called Direct Numerical Simulation, has been successfully applied to a variety of simple flows; however, the large-scale flows encountered in Geophysical Fluid Dynamics (GFD) would require meshes beyond the range of the most powerful supercomputers for the foreseeable future. Nevertheless, the current generation of petascale computers has enabled unprecedented simulations of many types of turbulent flows that address various GFD aspects, from the idealized configurations extensively studied in the past to more complex flows closer to practical applications. The pace at which such simulations are performed continues to increase; however, the simulations themselves are restricted to a small number of groups with access to large computational platforms. Yet the petabytes of turbulence data offer almost limitless information on many different aspects of the flow, from the hierarchy of turbulence moments, spectra, and correlations to structure functions, geometrical properties, and more. The ability to share such datasets with other groups can significantly reduce the time needed to analyze the data, help the creative process, and increase the pace of discovery. Using the largest DOE supercomputing platforms, we have performed some of the biggest turbulence simulations to date, in various configurations, addressing specific aspects of turbulence production and mixing mechanisms. Until recently, the visualization and analysis of such datasets were restricted by access to large supercomputers. The public Johns Hopkins Turbulence Database simplifies access to multi-terabyte turbulence datasets and facilitates turbulence analysis through the use of commodity hardware. We first describe one of our datasets, which is part of the database, and then highlight a framework that adds high-speed visualization and wavelet support for multi-resolution analysis of turbulence. The addition of wavelet support reduces the latency and bandwidth requirements for visualization, allowing for many concurrent users, and enables new types of analyses, including scale decomposition and coherent feature extraction.
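
    The scale-decomposition idea behind the wavelet support can be sketched with a single level of the Haar transform, which splits a signal into a half-length coarse approximation plus detail coefficients. This is a minimal illustration of the principle, not the database framework's actual wavelet machinery:

    ```python
    def haar_decompose(signal):
        """One level of a Haar wavelet transform: pairwise averages form the
        coarse approximation, pairwise half-differences form the details."""
        assert len(signal) % 2 == 0
        half = len(signal) // 2
        approx = [(signal[2 * i] + signal[2 * i + 1]) / 2.0 for i in range(half)]
        detail = [(signal[2 * i] - signal[2 * i + 1]) / 2.0 for i in range(half)]
        return approx, detail

    def haar_reconstruct(approx, detail):
        """Invert haar_decompose exactly: each (a, d) pair yields (a+d, a-d)."""
        out = []
        for a, d in zip(approx, detail):
            out.extend([a + d, a - d])
        return out

    # A client needing only the coarse view downloads half the data per level;
    # detail coefficients are streamed later only if finer scales are requested.
    field = [4.0, 2.0, 5.0, 7.0, 1.0, 3.0, 6.0, 2.0]
    coarse, details = haar_decompose(field)
    ```

    Applying the decomposition recursively to the coarse part yields a full multi-resolution hierarchy, which is what enables scale-selective visualization at reduced bandwidth.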

  18. Large-eddy simulations of turbulent flow for grid-to-rod fretting in nuclear reactors

    DOE PAGES

    Bakosi, J.; Christon, M. A.; Lowrie, R. B.; ...

    2013-07-12

The grid-to-rod fretting (GTRF) problem in pressurized water reactors is a flow-induced vibration problem that results in wear and failure of the fuel rods in nuclear assemblies. In order to understand the fluid dynamics of GTRF and to build an archival database of turbulence statistics for various configurations, implicit large-eddy simulations of time-dependent single-phase turbulent flow have been performed in 3 × 3 and 5 × 5 rod bundles with a single grid spacer. To assess the computational mesh and resolution requirements, a method for quantitative assessment of unstructured meshes with no-slip walls is described. The calculations have been carried out using Hydra-TH, a thermal-hydraulics code developed at Los Alamos for the Consortium for Advanced Simulation of Light Water Reactors, a United States Department of Energy Innovation Hub. Hydra-TH uses a second-order implicit incremental projection method to solve the single-phase incompressible Navier-Stokes equations. The simulations explicitly resolve the large-scale motions of the turbulent flow field from first principles and rely on a monotonicity-preserving numerical technique to represent the unresolved scales. Each series of simulations for the 3 × 3 and 5 × 5 rod-bundle geometries comprises an analysis of the flow-field statistics combined with a mesh-refinement study and validation against available experimental data. Our primary focus is the time history and statistics of the forces loading the fuel rods. These hydrodynamic forces are believed to be the key driver of rod vibration and GTRF wear, one of the leading causes of leaking nuclear fuel, which costs power utilities millions of dollars in preventive measures. As a result, we demonstrate that implicit large-eddy simulation of rod-bundle flows is a viable way to calculate the excitation forces for the GTRF problem.

  19. Optimal Run Strategies in Monte Carlo Iterated Fission Source Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romano, Paul K.; Lund, Amanda L.; Siegel, Andrew R.

    2017-06-19

The method of successive generations used in Monte Carlo simulations of nuclear reactor models is known to suffer from intergenerational correlation between the spatial locations of fission sites. One consequence of this spatial correlation is that the variance of the mean of a tally converges more slowly than O(1/N). In this work, we consider how the true variance can be minimized, for a given total amount of work, as a function of the number of source particles per generation, the number of active/discarded generations, and the number of independent simulations. We demonstrate through both analysis and simulation that under certain conditions the solution time for highly correlated reactor problems may be significantly reduced either by running an ensemble of multiple independent simulations or simply by increasing the generation size to the extent that is practical. However, if too many simulations or too large a generation size is used, the large fraction of source particles discarded can result in an increase in variance. We also show that there is a strong incentive to reduce the number of discarded generations through a source-convergence acceleration technique. Furthermore, we discuss the efficient execution of large simulations on a parallel computer; we argue that several practical considerations favor using an ensemble of independent simulations over a single simulation with a very large generation size.
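
    The core effect described above, correlation between successive generations inflating the variance of a tally's mean, can be illustrated with a toy AR(1) model of generation tallies. This is a statistical analogy only; it makes no attempt to model the fission-source physics, and the correlation coefficient and sample sizes are arbitrary choices:

    ```python
    import random

    def ar1_mean(n_gen, rho, rng):
        """Mean tally over n_gen generations whose values follow a stationary
        AR(1) process with lag-one correlation rho and unit marginal variance,
        a stand-in for intergenerationally correlated Monte Carlo tallies."""
        x = rng.gauss(0.0, 1.0)
        total = x
        innovation_sd = (1.0 - rho * rho) ** 0.5
        for _ in range(n_gen - 1):
            x = rho * x + innovation_sd * rng.gauss(0.0, 1.0)
            total += x
        return total / n_gen

    def variance_of_mean(n_gen, rho, n_rep=4000, seed=1):
        """Empirical variance of the generation-averaged tally over n_rep
        independent replicate simulations."""
        rng = random.Random(seed)
        means = [ar1_mean(n_gen, rho, rng) for _ in range(n_rep)]
        mu = sum(means) / n_rep
        return sum((m - mu) ** 2 for m in means) / (n_rep - 1)

    # At the same number of generations, correlated tallies (rho = 0.8) give a
    # much larger variance of the mean than independent ones (rho = 0.0),
    # roughly by the factor (1 + rho) / (1 - rho).
    v_corr = variance_of_mean(n_gen=50, rho=0.8)
    v_indep = variance_of_mean(n_gen=50, rho=0.0)
    ```

    This inflation factor is why an ensemble of independent simulations, each paying its own source-convergence cost but contributing uncorrelated means, can beat one long correlated run at equal total work.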

  20. An FPGA-Based Massively Parallel Neuromorphic Cortex Simulator

    PubMed Central

    Wang, Runchun M.; Thakur, Chetan S.; van Schaik, André

    2018-01-01

This paper presents a massively parallel and scalable neuromorphic cortex simulator designed for simulating large and structurally connected spiking neural networks, such as complex models of various areas of the cortex. The main novelty of this work is the abstraction of a neuromorphic architecture into clusters represented by minicolumns and hypercolumns, analogous to the fundamental structural units observed in neurobiology. Without this approach, simulating large-scale fully connected networks would require prohibitively large memory to store look-up tables for point-to-point connections. Instead, we use a novel architecture, based on the structural connectivity of the neocortex, such that all the required parameters and connections can be stored in on-chip memory. The cortex simulator can be easily reconfigured to simulate different neural networks without any change in hardware structure, simply by reprogramming the memory. A hierarchical communication scheme allows one neuron to have a fan-out of up to 200,000 neurons. As a proof of concept, an implementation on one Altera Stratix V FPGA was able to simulate 20 million to 2.6 billion leaky integrate-and-fire (LIF) neurons in real time. We verified the system by emulating a simplified auditory cortex (with 100 million neurons). This cortex simulator achieved a low power dissipation of 1.62 μW per neuron. With the advent of commercially available FPGA boards, our system offers an accessible and scalable tool for the design, real-time simulation, and analysis of large-scale spiking neural networks. PMID:29692702
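
    The leaky integrate-and-fire dynamics that each simulated neuron evaluates can be sketched in a few lines. The explicit Euler discretization and all parameter values below are illustrative assumptions, not the fixed-point arithmetic or constants of the FPGA implementation:

    ```python
    def lif_step(v, i_in, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
        """One Euler update of a leaky integrate-and-fire neuron: the membrane
        potential decays toward v_rest with time constant tau while being driven
        by input current i_in; crossing v_thresh emits a spike and resets."""
        v = v + (dt / tau) * (v_rest - v + i_in)
        if v >= v_thresh:
            return v_reset, True   # spike, then reset
        return v, False

    # Drive a single neuron with constant suprathreshold input for 200 steps
    # and count the regular spikes it produces.
    v, spikes = 0.0, 0
    for _ in range(200):
        v, fired = lif_step(v, i_in=1.5)
        spikes += fired
    ```

    The appeal of the LIF model for hardware is visible here: the per-neuron state is a single value and the update is one multiply-accumulate plus a comparison, which is what makes billions of neurons per chip feasible.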
