Sample records for large volume simulations

  1. Numerical simulation of seismic wave propagation from land-excited large volume air-gun source

    NASA Astrophysics Data System (ADS)

    Cao, W.; Zhang, W.

    2017-12-01

    The land-excited large volume air-gun source can be used to study regional underground structures and to detect temporal velocity changes. The air-gun source is characterized by rich low-frequency energy (from bubble oscillation, 2-8 Hz) and high repeatability. It can be excited in rivers, reservoirs, or man-made pools. Numerical simulation of the seismic wave propagation from the air-gun source helps to understand the energy partitioning and the characteristics of the waveform records at stations. However, the effective energy recorded at a distant station comes from the process of bubble oscillation, which cannot be approximated by a single point source. We propose a method to simulate the seismic wave propagation from the land-excited large volume air-gun source by the finite difference method. The process can be divided into three parts: bubble oscillation and source coupling, solid-fluid coupling, and propagation in the solid medium. For the first part, the wavelet of the bubble oscillation can be simulated by a bubble model. We use a wave-injection method that combines the bubble wavelet with the elastic wave equation to achieve the source coupling. Then the solid-fluid boundary condition is implemented along the water bottom. The last part is the seismic wave propagation in the solid medium, which can be readily implemented by the finite difference method. Our method can obtain accurate waveforms for the land-excited large volume air-gun source. Based on this forward modeling technology, we analyze the excited P wave and the energy of the converted S wave for different water-body shapes. We study two land-excited large volume air-gun fields, one at Binchuan in Yunnan and the other at Hutubi in Xinjiang. The station in Binchuan, Yunnan is located in a large irregular reservoir, and the waveform records show a clear S wave; the station in Hutubi, Xinjiang is located in a small man-made pool, and the waveform records show a very weak S wave. 
Better understanding of
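
A heavily simplified sketch of the propagation stage alone (not the authors' coupled three-part scheme) is a 1D acoustic finite-difference run with an injected low-frequency wavelet; all grid and medium parameters below are hypothetical:

```python
import numpy as np

# Minimal 1D acoustic finite-difference propagation of a low-frequency
# source wavelet (a Ricker pulse standing in for the bubble wavelet).
# Illustrative parameters only; this sketches just the propagation part.
nx, nt = 400, 800
dx, c = 10.0, 1500.0            # grid spacing (m), water sound speed (m/s)
dt = 0.5 * dx / c               # time step satisfying the CFL condition
t = np.arange(nt) * dt
f0, t0 = 5.0, 0.2               # dominant frequency (2-8 Hz band) and delay
arg = (np.pi * f0 * (t - t0)) ** 2
src = (1.0 - 2.0 * arg) * np.exp(-arg)       # Ricker wavelet

p_old, p = np.zeros(nx), np.zeros(nx)        # pressure at steps n-1 and n
rec = np.zeros(nt)                           # trace recorded at node 300
for n in range(nt):
    p_new = np.zeros(nx)
    lap = p[2:] - 2.0 * p[1:-1] + p[:-2]
    p_new[1:-1] = 2.0 * p[1:-1] - p_old[1:-1] + (c * dt / dx) ** 2 * lap
    p_new[50] += src[n] * dt ** 2            # inject wavelet at source node
    rec[n] = p_new[300]
    p_old, p = p, p_new
print(abs(rec).max() > 0.0)  # wavelet energy reaches the distant receiver
```

In the paper's actual scheme the wavelet comes from a bubble model and the fluid couples to an elastic solid across the water bottom; the leapfrog update above stands in for the final solid-medium stage.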

  2. A Parallel, Finite-Volume Algorithm for Large-Eddy Simulation of Turbulent Flows

    NASA Technical Reports Server (NTRS)

    Bui, Trong T.

    1999-01-01

    A parallel, finite-volume algorithm has been developed for large-eddy simulation (LES) of compressible turbulent flows. This algorithm includes piecewise linear least-square reconstruction, trilinear finite-element interpolation, Roe flux-difference splitting, and second-order MacCormack time marching. Parallel implementation is done using the message-passing programming model. In this paper, the numerical algorithm is described. To validate the numerical method for turbulence simulation, LES of fully developed turbulent flow in a square duct is performed for a Reynolds number of 320 based on the average friction velocity and the hydraulic diameter of the duct. Direct numerical simulation (DNS) results are available for this test case, and the accuracy of this algorithm for turbulence simulations can be ascertained by comparing the LES solutions with the DNS results. The effects of grid resolution, upwind numerical dissipation, and subgrid-scale dissipation on the accuracy of the LES are examined. Comparison with DNS results shows that the standard Roe flux-difference splitting dissipation adversely affects the accuracy of the turbulence simulation. For accurate turbulence simulations, only 3-5 percent of the standard Roe flux-difference splitting dissipation is needed.
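
The role of the dissipation scaling can be seen in a one-line sketch of a Roe-type flux for scalar linear advection; the factor k below is shorthand for the fraction of standard Roe dissipation retained (k ≈ 0.03-0.05 per the abstract), not notation from the paper:

```python
def roe_flux(uL, uR, a=1.0, k=1.0):
    """Roe-type numerical flux for u_t + a*u_x = 0: central average of the
    left/right fluxes plus an upwind dissipation term scaled by k."""
    return 0.5 * a * (uL + uR) - 0.5 * k * abs(a) * (uR - uL)

# across a unit jump (uL=1, uR=0) the dissipative part scales linearly with k
print(roe_flux(1.0, 0.0, k=1.0))   # 1.0   (standard Roe flux)
print(roe_flux(1.0, 0.0, k=0.05))  # 0.525 (5% of standard dissipation)
```

With k = 1 the full upwind dissipation damps resolved turbulent fluctuations; shrinking k toward the 3-5% range keeps just enough dissipation for stability, which is the trade-off the LES study quantifies.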

  3. Large-scale three-dimensional phase-field simulations for phase coarsening at ultrahigh volume fraction on high-performance architectures

    NASA Astrophysics Data System (ADS)

    Yan, Hui; Wang, K. G.; Jones, Jim E.

    2016-06-01

    A parallel algorithm for large-scale three-dimensional phase-field simulations of phase coarsening is developed and implemented on high-performance architectures. From the large-scale simulations, a new kinetics of phase coarsening in the region of ultrahigh volume fraction is found. The parallel implementation is capable of harnessing the greater computing power available from high-performance architectures. The parallelized code enables an increase in three-dimensional simulation system size up to a 512³ grid cube. Through the parallelized code, practical runtimes can be achieved for three-dimensional large-scale simulations, and the statistical significance of the results from these high-resolution parallel simulations is greatly improved over that obtainable from serial simulations. A detailed performance analysis of speed-up and scalability is presented, showing good scalability that improves with increasing problem size. In addition, a model for predicting runtime is developed, which shows good agreement with actual run times from numerical tests.

  4. Simulation of hydrodynamics using large eddy simulation-second-order moment model in circulating fluidized beds

    NASA Astrophysics Data System (ADS)

    Juhui, Chen; Yanjia, Tang; Dan, Li; Pengfei, Xu; Huilin, Lu

    2013-07-01

    The flow behavior of gas and particles in circulating fluidized beds (CFBs) is predicted by a large eddy simulation for the gas phase coupled with a second-order moment model for the solid phase (LES-SOM model). This study shows that the solid volume fractions along the bed height simulated with a two-dimensional model are in agreement with experiments. The velocity, volume fraction, and second-order moments of particles are computed, and the second-order moments of clusters are calculated. The solid volume fraction, velocity, and second-order moments are compared for three different model constants.

  5. Radiation from Large Gas Volumes and Heat Exchange in Steam Boiler Furnaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makarov, A. N., E-mail: tgtu-kafedra-ese@mail.ru

    2015-09-15

    Radiation from large cylindrical gas volumes is studied as a means of simulating the flare in steam boiler furnaces. Calculations of heat exchange in a furnace by the zonal method and by simulation of the flare with cylindrical gas volumes are described. The latter method is more accurate and yields more reliable information on heat transfer processes taking place in furnaces.

  6. Real-time simulation of large-scale floods

    NASA Astrophysics Data System (ADS)

    Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.

    2016-08-01

    Given the complexity of real-time water conditions, the real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance numerical stability, and an adaptive method is proposed to improve running efficiency. The proposed model is applied to large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.
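
A heavily simplified 1D analog of a Godunov-type finite-volume shallow water step with a wet/dry threshold can be sketched as follows (the paper's model is two-dimensional, unstructured, and adaptive; all numbers and the HLL flux choice here are illustrative):

```python
import numpy as np

# 1D shallow-water finite-volume step (HLL flux) with a simple wet/dry
# depth threshold, demonstrated on a dam break onto a dry bed.
g, HDRY = 9.81, 1e-3            # gravity, dry-cell depth threshold (m)

def hll_flux(hL, huL, hR, huR):
    uL = huL / hL if hL > HDRY else 0.0
    uR = huR / hR if hR > HDRY else 0.0
    cL, cR = np.sqrt(g * hL), np.sqrt(g * hR)
    sL = min(uL - cL, uR - cR)          # left/right wave speed estimates
    sR = max(uL + cL, uR + cR)
    fL = np.array([huL, huL * uL + 0.5 * g * hL * hL])
    fR = np.array([huR, huR * uR + 0.5 * g * hR * hR])
    if sL >= 0.0: return fL
    if sR <= 0.0: return fR
    qL, qR = np.array([hL, huL]), np.array([hR, huR])
    return (sR * fL - sL * fR + sL * sR * (qR - qL)) / (sR - sL)

n, dx, dt = 100, 1.0, 0.02
h = np.where(np.arange(n) < n // 2, 1.0, 0.0)   # water only in left half
hu = np.zeros(n)
for _ in range(50):
    F = np.array([hll_flux(h[i], hu[i], h[i + 1], hu[i + 1])
                  for i in range(n - 1)])
    h[1:-1] -= dt / dx * (F[1:, 0] - F[:-1, 0])
    hu[1:-1] -= dt / dx * (F[1:, 1] - F[:-1, 1])
    h[h < HDRY] = 0.0                   # wet/dry handling: clip tiny depths
    hu[h < HDRY] = 0.0
print(h.min() >= 0.0, h[n // 2 + 2] > 0.0)      # depths stay non-negative;
                                                # the front advanced onto dry bed
```

The depth clipping is the crudest possible wet/dry treatment; robust production schemes reconstruct the front more carefully, which is exactly the robustness issue the abstract highlights.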

  7. A Large number of fast cosmological simulations

    NASA Astrophysics Data System (ADS)

    Koda, Jun; Kazin, E.; Blake, C.

    2014-01-01

    Mock galaxy catalogs are essential tools for analyzing large-scale structure data. Many independent realizations of mock catalogs are necessary to evaluate the uncertainties in the measurements. We perform 3600 cosmological simulations for the WiggleZ Dark Energy Survey to obtain new, improved Baryon Acoustic Oscillation (BAO) cosmic distance measurements using the density-field "reconstruction" technique. We use 1296^3 particles in a periodic box of 600/h Mpc on a side, which is the minimum requirement set by the survey volume and observed galaxies. In order to perform such a large number of simulations, we developed a parallel code using the COmoving Lagrangian Acceleration (COLA) method, which can simulate cosmological large-scale structure reasonably well with only 10 time steps. Our simulation is more than 100 times faster than conventional N-body simulations; one COLA simulation takes only 15 minutes on 216 computing cores. We have completed the 3600 simulations in a reasonable computation time of 200k core-hours. We also present the results of the revised WiggleZ BAO distance measurement, which are significantly improved by the reconstruction technique.
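
The quoted cost figures are internally consistent, as a quick check shows (assuming the 15-minute, 216-core figure applies uniformly to all runs):

```python
# Back-of-envelope check of the quoted computational cost.
runs, minutes_per_run, cores = 3600, 15, 216
core_hours = runs * (minutes_per_run / 60) * cores
print(core_hours)  # 194400.0, i.e. roughly the quoted ~200k core-hours
```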

  8. Melt Electrospinning Writing of Highly Ordered Large Volume Scaffold Architectures.

    PubMed

    Wunner, Felix M; Wille, Marie-Luise; Noonan, Thomas G; Bas, Onur; Dalton, Paul D; De-Juan-Pardo, Elena M; Hutmacher, Dietmar W

    2018-05-01

    The additive manufacturing of highly ordered, micrometer-scale scaffolds is at the forefront of tissue engineering and regenerative medicine research. The fabrication of scaffolds for the regeneration of larger tissue volumes, in particular, remains a major challenge. A technology at the convergence of additive manufacturing and electrospinning, melt electrospinning writing (MEW), is likewise limited in thickness/volume because excess charge accumulating in the deposited material repels incoming fibers and hence distorts scaffold architectures. The underlying physical principles that constrain MEW of thick, large-volume scaffolds are studied. Through computational modeling, numerical values for variable working distances are established that maintain the electrostatic force at a constant level during the printing process. Based on the computational simulations, three voltage profiles are applied to determine the maximum height (exceeding 7 mm) of a highly ordered large-volume scaffold. These thick MEW scaffolds have fully interconnected pores and allow cells to migrate and proliferate. To the best of the authors' knowledge, this is the first study to report that z-axis adjustment and increasing the voltage during the MEW process allow the fabrication of high-volume scaffolds with uniform morphologies and fiber diameters. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
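
The idea of ramping the voltage to hold the electrostatic force constant as the scaffold grows can be sketched with a plate-capacitor approximation E = V/d; the numbers below are hypothetical, not the paper's simulated profiles:

```python
# Constant-field sketch: if the nozzle-collector field is approximated as
# E = V / d, a scaffold of height z increases the working distance, so the
# voltage must be ramped to keep E (and the electrostatic force) constant.
def voltage_for_constant_field(V0, d0, z):
    E = V0 / d0                  # target field from the initial setup
    return E * (d0 + z)          # voltage needed at working distance d0 + z

V0, d0 = 7000.0, 0.004           # hypothetical: 7 kV at 4 mm working distance
print(voltage_for_constant_field(V0, d0, 0.007))  # volts at +7 mm build height
```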

  9. Strategies for Interactive Visualization of Large Scale Climate Simulations

    NASA Astrophysics Data System (ADS)

    Xie, J.; Chen, C.; Ma, K.; Parvis

    2011-12-01

    single or a pair of variables. It is desirable to create a succinct volume classification that summarizes the connection among all correlation volumes with respect to various reference locations. Since a reference location must correspond to a voxel position, the number of correlation volumes equals the total number of voxels. A brute-force solution takes all correlation volumes as input and classifies their corresponding voxels according to the distance between their correlation volumes. For large-scale, time-varying multivariate data, calculating all these correlation volumes on the fly and analyzing the relationships among them is not feasible. We have developed a sampling-based approach to volume classification that reduces the cost of computing the correlation volumes. Users are able to employ their domain knowledge in selecting important samples. The result is a static view that captures the essence of the correlation relationships; i.e., for all voxels in the same cluster, the corresponding correlation volumes are similar. This sampling-based approach enables us to obtain an approximation of the correlation relations in a cost-effective manner, leading to a scalable solution for investigating large-scale data sets. These techniques empower climate scientists to study large data from their simulations.
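
A toy version of the sampling-based idea, on synthetic data with a hypothetical choice of eight reference voxels, can be sketched as:

```python
import numpy as np

# Toy sketch: correlate each voxel's time series against a few sampled
# reference voxels, then compare the resulting correlation "signatures".
# Synthetic data with two temporal patterns; all names are illustrative.
rng = np.random.default_rng(0)
T, N = 60, 500                          # time steps, voxels
base = rng.standard_normal((2, T))      # two underlying temporal patterns
labels_true = rng.integers(0, 2, N)     # which pattern each voxel follows
data = base[labels_true] + 0.1 * rng.standard_normal((N, T))

refs = rng.choice(N, size=8, replace=False)      # sampled reference voxels
z = (data - data.mean(1, keepdims=True)) / data.std(1, keepdims=True)
signature = z @ z[refs].T / T                    # N x 8 Pearson correlations

# voxels following the same pattern have nearly identical signatures, so
# clustering the cheap signatures stands in for comparing full volumes
sig0 = signature[labels_true == 0]
sig1 = signature[labels_true == 1]
within = np.linalg.norm(sig0 - sig0.mean(0), axis=1).mean()
across = np.linalg.norm(sig0.mean(0) - sig1.mean(0))
print(within < across)  # True: same-cluster correlation volumes are similar
```

Here the N x 8 signature matrix replaces the infeasible N x N set of full correlation volumes, which is the cost reduction the abstract describes.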

  10. Large-volume flux closure during plasmoid-mediated reconnection in coaxial helicity injection

    DOE Data Explorer

    Ebrahimi, Fatima [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States)] (ORCID:0000000331095367); Raman, Roger [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States)] (ORCID:0000000220273271)

    2016-01-01

    A large-volume flux closure during transient coaxial helicity injection (CHI) in NSTX-U is demonstrated through resistive magnetohydrodynamics (MHD) simulations. Several major improvements, including the improved positioning of the divertor poloidal field coils, are projected to improve the CHI start-up phase in NSTX-U. Simulations in the NSTX-U configuration with constant-in-time coil currents show that, with strong flux shaping, the injected open field lines (injector flux) rapidly reconnect and form a large volume of closed flux surfaces. This is achieved by driving parallel current in the injector flux coil and oppositely directed currents in the flux shaping coils to form a narrow injector flux footprint and push the injector flux into the vessel. As the helicity and plasma are injected into the device, the oppositely directed field lines in the injector region are forced to reconnect through a local Sweet–Parker-type reconnection, or to spontaneously reconnect when the elongated current sheet becomes MHD unstable and forms plasmoids. In these simulations, it is found for the first time that the closed flux is over 70% of the initial injector flux used to initiate the discharge. These results are promising for the application of transient CHI in devices that employ superconducting coils to generate and sustain the plasma equilibrium.

  11. Large-volume flux closure during plasmoid-mediated reconnection in coaxial helicity injection

    DOE Data Explorer

    Ebrahimi, F. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Raman, R. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States)

    2016-04-01

    A large-volume flux closure during transient coaxial helicity injection (CHI) in NSTX-U is demonstrated through resistive magnetohydrodynamics (MHD) simulations. Several major improvements, including the improved positioning of the divertor poloidal field coils, are projected to improve the CHI start-up phase in NSTX-U. Simulations in the NSTX-U configuration with constant-in-time coil currents show that, with strong flux shaping, the injected open field lines (injector flux) rapidly reconnect and form a large volume of closed flux surfaces. This is achieved by driving parallel current in the injector flux coil and oppositely directed currents in the flux shaping coils to form a narrow injector flux footprint and push the injector flux into the vessel. As the helicity and plasma are injected into the device, the oppositely directed field lines in the injector region are forced to reconnect through a local Sweet–Parker-type reconnection, or to spontaneously reconnect when the elongated current sheet becomes MHD unstable and forms plasmoids. In these simulations, it is found for the first time that the closed flux is over 70% of the initial injector flux used to initiate the discharge. These results are promising for the application of transient CHI in devices that employ superconducting coils to generate and sustain the plasma equilibrium.

  12. The Simulation of a Jumbo Jet Transport Aircraft. Volume 2: Modeling Data

    NASA Technical Reports Server (NTRS)

    Hanke, C. R.; Nordwall, D. R.

    1970-01-01

    The manned simulation of a large transport aircraft is described. Aircraft and systems data necessary to implement the mathematical model described in Volume I are presented, along with a discussion of how these data are used in the model. The results of the real-time computations in the NASA Ames Research Center Flight Simulator for Advanced Aircraft are shown and compared to flight test data and to results obtained in a training simulator known to be satisfactory.

  13. Large-volume flux closure during plasmoid-mediated reconnection in coaxial helicity injection

    DOE PAGES

    Ebrahimi, F.; Raman, R.

    2016-03-23

    A large-volume flux closure during transient coaxial helicity injection (CHI) in NSTX-U is demonstrated through resistive magnetohydrodynamics (MHD) simulations. Several major improvements, including the improved positioning of the divertor poloidal field coils, are projected to improve the CHI start-up phase in NSTX-U. Simulations in the NSTX-U configuration with constant-in-time coil currents show that, with strong flux shaping, the injected open field lines (injector flux) rapidly reconnect and form a narrow injector flux footprint and push the injector flux into the vessel, producing a large volume of closed flux surfaces. This is achieved by driving parallel current in the injector flux coil and oppositely directed currents in the flux shaping coils. As the helicity and plasma are injected into the device, the oppositely directed field lines in the injector region are forced to reconnect through a local Sweet-Parker-type reconnection, or to spontaneously reconnect when the elongated current sheet becomes MHD unstable and forms plasmoids. In these simulations, it is found for the first time that the closed flux is over 70% of the initial injector flux used to initiate the discharge. Furthermore, these results are promising for the application of transient CHI in devices that employ superconducting coils to generate and sustain the plasma equilibrium.

  14. Computer simulation of preflight blood volume reduction as a countermeasure to fluid shifts in space flight

    NASA Technical Reports Server (NTRS)

    Simanonok, K. E.; Srinivasan, R.; Charles, J. B.

    1992-01-01

    Fluid shifts in weightlessness may cause a central volume expansion, activating reflexes to reduce the blood volume. Computer simulation was used to test the hypothesis that preadaptation of the blood volume prior to exposure to weightlessness could counteract the central volume expansion due to fluid shifts and thereby attenuate the circulatory and renal responses that result in large losses of fluid from body water compartments. The Guyton Model of Fluid, Electrolyte, and Circulatory Regulation was modified to simulate the six-degree head-down tilt that is frequently used as an experimental analog of weightlessness in bedrest studies. Simulation results show that preadaptation of the blood volume by a procedure resembling a blood donation immediately before head-down bedrest is beneficial in damping the physiologic responses to fluid shifts and reducing body fluid losses. After ten hours of head-down tilt, blood volume after preadaptation is higher than control for 20 to 30 days of bedrest. Preadaptation also produces potentially beneficial higher extracellular volume and total body water for 20 to 30 days of bedrest.

  15. Cardiovascular simulator improvement: pressure versus volume loop assessment.

    PubMed

    Fonseca, Jeison; Andrade, Aron; Nicolosi, Denys E C; Biscegli, José F; Leme, Juliana; Legendre, Daniel; Bock, Eduardo; Lucchi, Julio Cesar

    2011-05-01

    This article presents improvements to a physical cardiovascular simulator (PCS) system. The intraventricular pressure versus intraventricular volume (PxV) loop was obtained to evaluate the performance of a pulsatile chamber mimicking the human left ventricle. The PxV loop shows heart contractility and is normally used to evaluate heart performance. In many heart diseases, the stroke volume decreases because of low heart contractility. This pathological situation must be simulated by the PCS in order to evaluate the assistance provided by a ventricular assist device (VAD). The PCS system is automatically controlled by a computer and is an auxiliary tool for developing VAD control strategies. The PCS is based on a Windkessel model in which lumped parameters are used for cardiovascular system analysis; peripheral resistance, arterial compliance, and fluid inertance are simulated. The simulator has an actuator with a roller screw and a brushless direct-current motor, and the stroke volume is regulated by the actuator displacement. Internal pressure and volume measurements are monitored to obtain the PxV loop. Left-chamber internal pressure is obtained directly by a pressure transducer; internal volume is obtained indirectly by a linear variable differential transformer, which senses the diaphragm displacement. Correlations between the internal volume and diaphragm position are made. LabVIEW integrates these signals and shows the pressure versus internal volume loop. The results obtained from the PCS system show PxV loops at different ventricular elastances, making possible the simulation of pathological situations. A preliminary test with a pulsatile VAD attached to the PCS system was made. © 2011, Copyright the Authors. Artificial Organs © 2011, International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
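
The quantities a PxV loop yields can be illustrated on a toy loop; the rectangular corner values below are invented, and stroke work is taken as the enclosed loop area via the shoelace formula:

```python
import numpy as np

# Stroke volume and stroke work from a sampled pressure-volume loop.
# Idealized rectangular loop: 60-120 mL volume, 10-120 mmHg pressure.
V = np.array([120.0, 120.0, 60.0, 60.0])   # mL at the loop corners
P = np.array([10.0, 120.0, 120.0, 10.0])   # mmHg at the loop corners

stroke_volume = V.max() - V.min()          # ejected volume per beat (mL)
area_mmHg_mL = 0.5 * abs(np.dot(P, np.roll(V, -1)) - np.dot(V, np.roll(P, -1)))
stroke_work_J = area_mmHg_mL * 133.322 * 1e-6   # mmHg·mL -> joules
print(stroke_volume, round(stroke_work_J, 3))   # 60.0 mL, ~0.88 J
```

In a real loop the corners are replaced by many sampled (V, P) points per beat, but the same shoelace area gives the stroke work.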

  16. Marvel-ous Dwarfs: Results from Four Heroically Large Simulated Volumes of Dwarf Galaxies

    NASA Astrophysics Data System (ADS)

    Munshi, Ferah; Brooks, Alyson; Weisz, Daniel; Bellovary, Jillian; Christensen, Charlotte

    2018-01-01

    We present results from high resolution, fully cosmological simulations of cosmic sheets that contain many dwarf galaxies. Together, they create the largest collection of simulated dwarf galaxies to date, with z=0 stellar masses comparable to the LMC or smaller. In total, we have simulated almost 100 luminous dwarf galaxies, forming a sample of simulated dwarfs which span a wide range of physical (stellar and halo mass) and evolutionary properties (merger history). We show how they can be calibrated against a wealth of observations of nearby galaxies including star formation histories, HI masses and kinematics, as well as stellar metallicities. We present preliminary results answering the following key questions: What is the slope of the stellar mass function at extremely low masses? Do halos with HI and no stars exist? What is the scatter in the stellar to halo mass relationship as a function of dwarf mass? What drives the scatter? With this large suite, we are beginning to statistically characterize dwarf galaxies and identify the types and numbers of outliers to expect.

  17. Large-eddy simulation of propeller noise

    NASA Astrophysics Data System (ADS)

    Keller, Jacob; Mahesh, Krishnan

    2016-11-01

    We will discuss our ongoing work towards developing the capability to predict far field sound from the large-eddy simulation of propellers. A porous surface Ffowcs-Williams and Hawkings (FW-H) acoustic analogy, with a dynamic endcapping method (Nitzkorski and Mahesh, 2014) is developed for unstructured grids in a rotating frame of reference. The FW-H surface is generated automatically using Delaunay triangulation and is representative of the underlying volume mesh. The approach is validated for tonal trailing edge sound from a NACA 0012 airfoil. LES of flow around a propeller at design advance ratio is compared to experiment and good agreement is obtained. Results for the emitted far field sound will be discussed. This work is supported by ONR.

  18. Feasibility of large volume tumor ablation using multiple-mode strategy with fast scanning method: A numerical study

    NASA Astrophysics Data System (ADS)

    Wu, Hao; Shen, Guofeng; Qiao, Shan; Chen, Yazhu

    2017-03-01

    Sonication with a fast scanning method can generate homogeneous lesions without complex planning. But when the target region is large, switching the focus too fast reduces heat accumulation, and the margin of the region may not be ablated. Furthermore, a high blood perfusion rate reduces the maximum volume that can be ablated. Therefore, the fast scanning method may not be applicable to large-volume tumors. To expand the therapy scope, this study combines the fast scanning method with a multiple-mode strategy. Through simulation and experiment, the feasibility of this new strategy is evaluated and analyzed.

  19. Determination of component volumes of lipid bilayers from simulations.

    PubMed Central

    Petrache, H I; Feller, S E; Nagle, J F

    1997-01-01

    An efficient method for extracting volumetric data from simulations is developed. The method is illustrated using a recent atomic-level molecular dynamics simulation of an L-alpha phase 1,2-dipalmitoyl-sn-glycero-3-phosphocholine bilayer. Results from this simulation are obtained for the volumes of water (VW), lipid (VL), chain methylenes (V2), chain terminal methyls (V3), and lipid headgroups (VH), including separate volumes for the carboxyl (Vcoo), glyceryl (Vgl), phosphoryl (VPO4), and choline (Vchol) groups. The method assumes only that each group has the same average volume regardless of its location in the bilayer, and this assumption is then tested with the current simulation. The volumes obtained agree well with the values VW and VL that have been obtained directly from experiment, as well as with the volumes VH, V2, and V3 that require certain assumptions in addition to the experimental data. This method should help to support and refine some assumptions that are necessary when interpreting experimental data. PMID:9129826
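
The core assumption (one average volume per group, wherever it sits in the bilayer) turns volume extraction into a linear least-squares problem: slab-wise group counts times unknown group volumes must fill each slab. A sketch with synthetic numbers (not the DPPC data) is:

```python
import numpy as np

# If n[g, s] counts group g in slab s and v[g] is the (assumed constant)
# volume per group, then n.T @ v = Vslab for every slab, which is an
# overdetermined linear system solved for v by least squares.
rng = np.random.default_rng(1)
true_v = np.array([30.0, 28.0, 54.0, 320.0])   # illustrative volumes (A^3)
n = rng.uniform(0, 5, size=(4, 40))            # 4 groups, 40 slabs
Vslab = n.T @ true_v                           # each slab exactly filled

v, *_ = np.linalg.lstsq(n.T, Vslab, rcond=None)
print(np.allclose(v, true_v))  # True: component volumes are recovered
```

With real simulation data the system is only approximately consistent, and the size of the least-squares residual is what tests the constant-volume assumption.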

  20. Dry Volume Fracturing Simulation of Shale Gas Reservoir

    NASA Astrophysics Data System (ADS)

    Xu, Guixi; Wang, Shuzhong; Luo, Xiangrong; Jing, Zefeng

    2017-11-01

    The application of CO2 dry fracturing technology to shale gas reservoir development in China has the advantages of no water consumption, little reservoir damage, and promotion of CH4 desorption. This paper uses Meyer simulation to study complex fracture network extension and distribution characteristics of shale gas reservoirs in the CO2 dry volume fracturing process. The simulation results prove the validity of the modified CO2 dry fracturing fluid used in shale volume fracturing and provide a theoretical basis for a following study on interval optimization of shale reservoir dry volume fracturing.

  1. Replicable Interprofessional Competency Outcomes from High-Volume, Inter-Institutional, Interprofessional Simulation

    PubMed Central

    Bambini, Deborah; Emery, Matthew; de Voest, Margaret; Meny, Lisa; Shoemaker, Michael J.

    2016-01-01

    There are significant limitations among the few prior studies that have examined the development and implementation of interprofessional education (IPE) experiences to accommodate a high volume of students from several disciplines and from different institutions. The present study addressed these gaps by seeking to determine the extent to which a single, large, inter-institutional, and IPE simulation event improves student perceptions of the importance and relevance of IPE and simulation as a learning modality, whether there is a difference in students’ perceptions among disciplines, and whether the results are reproducible. A total of 290 medical, nursing, pharmacy, and physical therapy students participated in one of two large, inter-institutional, IPE simulation events. Measurements included student perceptions about their simulation experience using the Attitude Towards Teamwork in Training Undergoing Designed Educational Simulation (ATTITUDES) Questionnaire and open-ended questions related to teamwork and communication. Results demonstrated a statistically significant improvement across all ATTITUDES subscales, while time management, role confusion, collaboration, and mutual support emerged as significant themes. Results of the present study indicate that a single IPE simulation event can reproducibly result in significant and educationally meaningful improvements in student perceptions towards teamwork, IPE, and simulation as a learning modality. PMID:28970407

  2. Technologies for imaging neural activity in large volumes

    PubMed Central

    Ji, Na; Freeman, Jeremy; Smith, Spencer L.

    2017-01-01

    Neural circuitry has evolved to form distributed networks that act dynamically across large volumes. Collecting data from individual planes, conventional microscopy cannot sample circuitry across large volumes at the temporal resolution relevant to neural circuit function and behaviors. Here, we review emerging technologies for rapid volume imaging of neural circuitry. We focus on two critical challenges: the inertia of optical systems, which limits image speed, and aberrations, which restrict the image volume. Optical sampling time must be long enough to ensure high-fidelity measurements, but optimized sampling strategies and point spread function engineering can facilitate rapid volume imaging of neural activity within this constraint. We also discuss new computational strategies for the processing and analysis of volume imaging data of increasing size and complexity. Together, optical and computational advances are providing a broader view of neural circuit dynamics, and help elucidate how brain regions work in concert to support behavior. PMID:27571194

  3. Large-Eddy Simulation of Internal Flow through Human Vocal Folds

    NASA Astrophysics Data System (ADS)

    Lasota, Martin; Šidlof, Petr

    2018-06-01

    The phonatory process occurs when air is expelled from the lungs through the glottis and the pressure drop causes flow-induced oscillations of the vocal folds. The flow fields created in phonation are highly unsteady, and coherent vortex structures are generated. For accuracy, it is essential to compute on a humanlike computational domain with an appropriate mathematical model. This work deals with numerical simulation of air flow within the space between the plicae vocales and plicae vestibulares. In addition to the dynamic width of the rima glottidis, where the sound is generated, the lateral ventriculus laryngis and sacculus laryngis are included in the computational domain. The paper presents results from OpenFOAM obtained with large-eddy simulation using a second-order finite volume discretization of the incompressible Navier-Stokes equations. Large-eddy simulations with different subgrid-scale models are executed on a structured mesh. Only subgrid-scale models that represent turbulence through a turbulent viscosity and the Boussinesq approximation are used in the subglottal and supraglottal areas of the larynx.
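
The turbulent-viscosity closure these subgrid-scale models share can be illustrated with the classic Smagorinsky form nu_t = (Cs·Delta)² |S|; the values and the constant below are illustrative, and OpenFOAM's actual model implementations differ in detail:

```python
import numpy as np

# Smagorinsky eddy viscosity from a resolved velocity-gradient tensor:
# nu_t = (Cs * Delta)^2 * |S|, with |S| = sqrt(2 S_ij S_ij).
def smagorinsky_nu_t(grad_u, delta, cs=0.17):
    S = 0.5 * (grad_u + grad_u.T)           # resolved strain-rate tensor
    S_mag = np.sqrt(2.0 * np.sum(S * S))    # strain-rate magnitude
    return (cs * delta) ** 2 * S_mag

grad_u = np.array([[0.0, 100.0, 0.0],       # simple shear du/dy = 100 1/s
                   [0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])
print(smagorinsky_nu_t(grad_u, delta=1e-3)) # m^2/s for a 1 mm filter width
```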

  4. The persistence of the large volumes in black holes

    NASA Astrophysics Data System (ADS)

    Ong, Yen Chin

    2015-08-01

    Classically, black holes admit maximal interior volumes that grow asymptotically linearly in time. We show that such volumes remain large when Hawking evaporation is taken into account. Even if a charged black hole approaches the extremal limit during this evolution, its volume continues to grow; although an exactly extremal black hole does not have a "large interior". We clarify this point and discuss the implications of our results to the information loss and firewall paradoxes.

  5. Large volume continuous counterflow dialyzer has high efficiency

    NASA Technical Reports Server (NTRS)

    Mandeles, S.; Woods, E. C.

    1967-01-01

    Dialyzer separates macromolecules from small molecules in large volumes of solution. It takes advantage of the high area/volume ratio in commercially available 1/4-inch dialysis tubing and maintains a high concentration gradient at the dialyzing surface by counterflow.

  6. Stroke volume variation as a guide for fluid resuscitation in patients undergoing large-volume liposuction.

    PubMed

    Jain, Anil Kumar; Khan, Asma M

    2012-09-01

    The potential for fluid overload in large-volume liposuction is a source of serious concern. Fluid management in these patients is controversial and governed by various formulas that have been advanced by many authors; basically, it is the ratio of what goes into the patient to what comes out. Central venous pressure has been used to monitor fluid therapy. Dynamic parameters, such as stroke volume and pulse pressure variation, are better predictors of volume responsiveness and are superior to static indicators, such as central venous pressure and pulmonary capillary wedge pressure. In this study, stroke volume variation was used to guide fluid resuscitation and compared with resuscitation guided by an intraoperative fluid ratio of 1.2 (i.e., the Rohrich formula). Stroke volume variation was used as a guide for intraoperative fluid administration in 15 patients undergoing large-volume liposuction; in another 15 patients, fluid resuscitation was guided by an intraoperative fluid ratio of 1.2. The amounts of intravenous fluid administered in the two groups were compared. The mean amount of fluid infused was 561 ± 181 ml in the stroke volume variation group and 2383 ± 1208 ml in the intraoperative fluid ratio group. The intraoperative fluid ratio calculated for the stroke volume variation group was 0.936 ± 0.084. All patients maintained hemodynamic parameters (heart rate and systolic, diastolic, and mean blood pressure), and renal and metabolic indices remained within normal limits. Stroke volume variation-guided fluid administration could result in an appropriate amount of intravenous fluid use in patients undergoing large-volume liposuction. Clinical question/level of evidence: Therapeutic, II.
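
For reference, the stroke volume variation index itself is simple to compute; the beat-to-beat stroke volumes below are invented for illustration, and responsiveness thresholds vary by monitor:

```python
# Stroke volume variation over one respiratory cycle:
# SVV = (SVmax - SVmin) / SVmean; values above roughly 10-13% are commonly
# taken to indicate volume responsiveness.
def svv(stroke_volumes):
    sv_max, sv_min = max(stroke_volumes), min(stroke_volumes)
    sv_mean = sum(stroke_volumes) / len(stroke_volumes)
    return (sv_max - sv_min) / sv_mean

beats = [72.0, 68.0, 61.0, 65.0, 70.0]     # illustrative stroke volumes (mL)
print(round(100 * svv(beats), 1))          # SVV as a percentage: 16.4
```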

  7. Large discharge-volume, silent discharge spark plug

    DOEpatents

    Kang, Michael

    1995-01-01

    A large discharge-volume spark plug for providing self-limiting microdischarges. The apparatus includes a generally spark plug-shaped arrangement of a pair of electrodes, where either of the two coaxial electrodes is substantially shielded by a dielectric barrier from a direct discharge from the other electrode, the unshielded electrode and the dielectric barrier forming an annular volume in which self-terminating microdischarges occur when alternating high voltage is applied to the center electrode. The large area over which the discharges occur, and the large number of possible discharges within the period of an engine cycle, make the present silent discharge plasma spark plug suitable for use as an ignition source for engines. In situations where a single discharge is effective in igniting the combustible gases, a conventional single-polarity, single-pulse spark plug voltage supply may be used.

  8. Large eddy simulation of soot evolution in an aircraft combustor

    NASA Astrophysics Data System (ADS)

    Mueller, Michael E.; Pitsch, Heinz

    2013-11-01

    An integrated kinetics-based Large Eddy Simulation (LES) approach for soot evolution in turbulent reacting flows is applied to the simulation of a Pratt & Whitney aircraft gas turbine combustor, and the results are analyzed to provide insights into the complex interactions of the hydrodynamics, mixing, chemistry, and soot. The integrated approach includes detailed models for soot, combustion, and the unresolved interactions between soot, chemistry, and turbulence. The soot model is based on the Hybrid Method of Moments and detailed descriptions of soot aggregates and the various physical and chemical processes governing their evolution. The detailed kinetics of jet fuel oxidation and soot precursor formation is described with the Radiation Flamelet/Progress Variable model, which has been modified to account for the removal of soot precursors from the gas-phase. The unclosed filtered quantities in the soot and combustion models, such as source terms, are closed with a novel presumed subfilter PDF approach that accounts for the high subfilter spatial intermittency of soot. For the combustor simulation, the integrated approach is combined with a Lagrangian parcel method for the liquid spray and state-of-the-art unstructured LES technology for complex geometries. Two overall fuel-to-air ratios are simulated to evaluate the ability of the model to make not only absolute predictions but also quantitative predictions of trends. The Pratt & Whitney combustor is a Rich-Quench-Lean combustor in which combustion first occurs in a fuel-rich primary zone characterized by a large recirculation zone. Dilution air is then added downstream of the recirculation zone, and combustion continues in a fuel-lean secondary zone. The simulations show that large quantities of soot are formed in the fuel-rich recirculation zone, and, furthermore, the overall fuel-to-air ratio dictates both the dominant soot growth process and the location of maximum soot volume fraction. At the higher fuel

  9. Design and Analysis of A Beacon-Less Routing Protocol for Large Volume Content Dissemination in Vehicular Ad Hoc Networks.

    PubMed

    Hu, Miao; Zhong, Zhangdui; Ni, Minming; Baiocchi, Andrea

    2016-11-01

    Large volume content dissemination is pursued by a growing number of high-quality applications for Vehicular Ad hoc NETworks (VANETs), e.g., live road surveillance services and video-based overtaking assistance services. For the highly dynamic vehicular network topology, beacon-less routing protocols have proven efficient in balancing system performance against control overhead. However, to the authors' best knowledge, routing design for large volume content has not been well considered in previous work, and it introduces new challenges, e.g., an enhanced connectivity requirement for a radio link. In this paper, a link Lifetime-aware Beacon-less Routing Protocol (LBRP) is designed for large volume content delivery in VANETs. Each vehicle makes its forwarding decision based on the message header information and its current state, including its speed and position. A semi-Markov process analytical model is proposed to evaluate the expected delay in constructing one routing path for LBRP. Simulations show that the proposed LBRP scheme outperforms traditional dissemination protocols in providing a low end-to-end delay. The analytical model's delay estimates also match Monte Carlo simulations well.
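    As a rough illustration of the link-lifetime idea behind a protocol like LBRP, the sketch below estimates how long two vehicles on a straight road remain within radio range from their current positions and speeds. This is a hypothetical simplification for intuition only; the paper's actual forwarding rule and semi-Markov analysis are more involved.

```python
def link_lifetime(x1, v1, x2, v2, radio_range):
    """Estimate the remaining lifetime (s) of a radio link between two
    vehicles on a straight road, given positions x (m) and speeds v (m/s).
    If the gap between them is not growing, the link is treated as stable
    and the lifetime is unbounded. Illustrative model, not LBRP itself."""
    gap = abs(x2 - x1)
    # Rate at which the gap grows: positive when the lead vehicle pulls away.
    growth = (v2 - v1) if x2 > x1 else (v1 - v2)
    if growth <= 0:
        return float('inf')  # converging or holding distance
    return (radio_range - gap) / growth

# Vehicle 2 is 100 m ahead and 10 m/s faster; with 300 m radio range the
# link survives until the gap reaches 300 m.
t = link_lifetime(0.0, 20.0, 100.0, 30.0, 300.0)
```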

  10. Design and Analysis of A Beacon-Less Routing Protocol for Large Volume Content Dissemination in Vehicular Ad Hoc Networks

    PubMed Central

    Hu, Miao; Zhong, Zhangdui; Ni, Minming; Baiocchi, Andrea

    2016-01-01

    Large volume content dissemination is pursued by a growing number of high-quality applications for Vehicular Ad hoc NETworks (VANETs), e.g., live road surveillance services and video-based overtaking assistance services. For the highly dynamic vehicular network topology, beacon-less routing protocols have proven efficient in balancing system performance against control overhead. However, to the authors' best knowledge, routing design for large volume content has not been well considered in previous work, and it introduces new challenges, e.g., an enhanced connectivity requirement for a radio link. In this paper, a link Lifetime-aware Beacon-less Routing Protocol (LBRP) is designed for large volume content delivery in VANETs. Each vehicle makes its forwarding decision based on the message header information and its current state, including its speed and position. A semi-Markov process analytical model is proposed to evaluate the expected delay in constructing one routing path for LBRP. Simulations show that the proposed LBRP scheme outperforms traditional dissemination protocols in providing a low end-to-end delay. The analytical model's delay estimates also match Monte Carlo simulations well. PMID:27809285

  11. Distributed shared memory for roaming large volumes.

    PubMed

    Castanié, Laurent; Mion, Christophe; Cavin, Xavier; Lévy, Bruno

    2006-01-01

    We present a cluster-based volume rendering system for roaming very large volumes. The system makes it possible to move a gigabyte-sized probe inside a total volume of several tens or hundreds of gigabytes in real time. While the size of the probe is limited by the total amount of texture memory on the cluster, the size of the total data set has no theoretical limit. The cluster is used as a distributed graphics processing unit that aggregates both graphics power and graphics memory. A hardware-accelerated volume renderer runs in parallel on the cluster nodes, and the final image compositing is implemented using a pipelined sort-last rendering algorithm. Meanwhile, volume bricking and volume paging allow efficient data caching. On each rendering node, a distributed hierarchical cache system implements a global software-based distributed shared memory on the cluster. On a cache miss, this system first checks page residency on the other cluster nodes instead of directly accessing local disks. Using two Gigabit Ethernet network interfaces per node, we accelerate data fetching by a factor of 4 compared with directly accessing local disks. The system also implements asynchronous disk access and texture loading, which makes it possible to overlap data loading, volume slicing, and rendering for optimal volume roaming.
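    The three-level lookup order described in the abstract (local cache first, then page residency on peer nodes, then local disk as a last resort) can be sketched as follows. The class and method names are illustrative, not the paper's API.

```python
class DistributedBrickCache:
    """Minimal sketch of a distributed-shared-memory brick lookup:
    serve from the local cache if present, otherwise fetch from a peer
    node holding the brick (network fetch, which the paper measures as
    ~4x faster than disk), and only then read from local disk."""

    def __init__(self, peers, disk):
        self.local = {}      # brick_id -> data, this node's cache
        self.peers = peers   # other nodes' caches, modeled as dicts
        self.disk = disk     # brick_id -> data, local disk fallback

    def get(self, brick_id):
        if brick_id in self.local:          # 1. local cache hit
            return self.local[brick_id]
        for peer in self.peers:             # 2. check residency on peers
            if brick_id in peer:
                data = peer[brick_id]
                self.local[brick_id] = data
                return data
        data = self.disk[brick_id]          # 3. local disk, slowest path
        self.local[brick_id] = data
        return data

cache = DistributedBrickCache(peers=[{"b1": b"peer"}], disk={"b2": b"disk"})
```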

  12. Large Volume, Behaviorally-relevant Illumination for Optogenetics in Non-human Primates.

    PubMed

    Acker, Leah C; Pino, Erica N; Boyden, Edward S; Desimone, Robert

    2017-10-03

    This protocol describes a large-volume illuminator developed for optogenetic manipulations in the non-human primate brain. The illuminator is a modified plastic optical fiber with an etched tip, such that the light-emitting surface area is more than 100 times that of a conventional fiber. In addition to describing the construction of the large-volume illuminator, this protocol details the quality-control calibration used to ensure even light distribution, as well as techniques for inserting and removing the illuminator. Both superficial and deep structures may be illuminated. The illuminator does not need to be physically coupled to an electrode, and because it is made of plastic rather than glass, it will simply bend in circumstances where traditional optical fibers would shatter. Because this illuminator delivers light over behaviorally relevant tissue volumes (≈10 mm³) with no greater penetration damage than a conventional optical fiber, it facilitates behavioral studies using optogenetics in non-human primates.

  13. Hydrothermal fluid flow and deformation in large calderas: Inferences from numerical simulations

    USGS Publications Warehouse

    Hurwitz, S.; Christiansen, L.B.; Hsieh, P.A.

    2007-01-01

    Inflation and deflation of large calderas is traditionally interpreted as being induced by volume change of a discrete source embedded in an elastic or viscoelastic half-space, though it has also been suggested that hydrothermal fluids may play a role. To test the latter hypothesis, we carry out numerical simulations of hydrothermal fluid flow and poroelastic deformation in calderas by coupling two numerical codes: (1) TOUGH2 [Pruess et al., 1999], which simulates flow in porous or fractured media, and (2) BIOT2 [Hsieh, 1996], which simulates fluid flow and deformation in a linearly elastic porous medium. In the simulations, high-temperature water (350 °C) is injected at variable rates into a cylinder (radius 50 km, height 3-5 km). A sensitivity analysis indicates that small differences in the values of permeability and its anisotropy, the depth and rate of hydrothermal injection, and the values of the shear modulus may lead to significant variations in the magnitude, rate, and geometry of ground surface displacement, or uplift. Some of the simulated uplift rates are similar to observed uplift rates in large calderas, suggesting that the injection of aqueous fluids into the shallow crust may explain some of the deformation observed in calderas.

  14. Large volume flow-through scintillating detector

    DOEpatents

    Gritzo, Russ E.; Fowler, Malcolm M.

    1995-01-01

    A large volume flow through radiation detector for use in large air flow situations such as incinerator stacks or building air systems comprises a plurality of flat plates made of a scintillating material arranged parallel to the air flow. Each scintillating plate has a light guide attached which transfers light generated inside the scintillating plate to an associated photomultiplier tube. The output of the photomultiplier tubes are connected to electronics which can record any radiation and provide an alarm if appropriate for the application.

  15. Optimizing for Large Planar Fractures in Multistage Horizontal Wells in Enhanced Geothermal Systems Using a Coupled Fluid and Geomechanics Simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Xiexiaomen; Tutuncu, Azra; Eustes, Alfred

    Enhanced Geothermal Systems (EGS) could potentially use technological advancements in the coupled implementation of horizontal drilling and multistage hydraulic fracturing techniques from tight oil and shale gas reservoirs, along with improvements in reservoir simulation techniques, to design and create EGS reservoirs. In this study, a commercial hydraulic fracture simulation package, Mangrove by Schlumberger, was used in an EGS model with widely distributed pre-existing natural fractures to model fracture propagation during the creation of a complex fracture network. The main goal of this study is to investigate optimum treatment parameters for creating multiple large, planar fractures to hydraulically connect a horizontal injection well and a horizontal production well that are 10,000 ft. deep and spaced 500 ft. apart. A matrix of simulations was carried out to determine the influence of reservoir and treatment parameters on preventing (or aiding) the creation of large planar fractures. The reservoir parameters investigated include the in-situ stress state and properties of the natural fracture set, such as the primary and secondary fracture orientation, average fracture length, and average fracture spacing. The treatment parameters investigated were fluid viscosity, proppant concentration, pump rate, and pump volume. A final simulation with optimized design parameters was performed. The optimized design simulation indicated that high fluid viscosity, high proppant concentration, and large pump volume and pump rate tend to minimize the complexity of the created fracture network. Additionally, a reservoir with 'friendly' formation characteristics, such as large stress anisotropy, natural fractures set parallel to the maximum horizontal principal stress (SHmax), and large natural fracture spacing, also promotes the creation of large planar fractures while minimizing fracture complexity.

  16. REXOR 2 rotorcraft simulation model. Volume 1: Engineering documentation

    NASA Technical Reports Server (NTRS)

    Reaser, J. S.; Kretsinger, P. H.

    1978-01-01

    A rotorcraft nonlinear simulation called REXOR II, divided into three volumes, is described. The first volume is a development of rotorcraft mechanics and aerodynamics. The second is a development and explanation of the computer code required to implement the equations of motion. The third volume is a user's manual, and contains a description of code input/output as well as operating instructions.

  17. Large-Eddy Simulation of Wind-Plant Aerodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Churchfield, M. J.; Lee, S.; Moriarty, P. J.

    In this work, we present results of a large-eddy simulation of the 48 multi-megawatt turbines composing the Lillgrund wind plant. Turbulent inflow wind is created by performing an atmospheric boundary layer precursor simulation, and turbines are modeled using a rotating, variable-speed actuator line representation. The motivation for this work is that few others have done large-eddy simulations of wind plants with a substantial number of turbines, and the methods for carrying out the simulations are varied. We wish to draw upon the strengths of the existing simulations and our growing atmospheric large-eddy simulation capability to create a sound methodology for performing this type of simulation. We used the OpenFOAM CFD toolbox to create our solver. The simulated time-averaged power production of the turbines in the plant agrees well with field observations, except for the sixth turbine and beyond in each wind-aligned column; the power produced by each of those turbines is overpredicted by 25-40%. A direct comparison between simulated and field data is difficult because we simulate one wind direction with a speed and turbulence intensity characteristic of Lillgrund, whereas the field observations were taken over a year of varying conditions. The simulation shows the significant 60-70% decrease in the performance of the turbines behind the front row in this plant, which has a spacing of 4.3 rotor diameters in this direction. The overall plant efficiency is well predicted. This work shows the importance of using local grid refinement to simultaneously capture the meter-scale details of the turbine wake and the kilometer-scale turbulent atmospheric structures. Although this work illustrates the power of large-eddy simulation in producing a time-accurate solution, it required about one million processor-hours, showing the significant cost of large-eddy simulation.

  18. Technical design and commissioning of the KATRIN large-volume air coil system

    NASA Astrophysics Data System (ADS)

    Erhard, M.; Behrens, J.; Bauer, S.; Beglarian, A.; Berendes, R.; Drexlin, G.; Glück, F.; Gumbsheimer, R.; Hergenhan, J.; Leiber, B.; Mertens, S.; Osipowicz, A.; Plischke, P.; Reich, J.; Thümmler, T.; Wandkowsky, N.; Weinheimer, C.; Wüstling, S.

    2018-02-01

    The KATRIN experiment is a next-generation direct neutrino mass experiment with a sensitivity of 0.2 eV (90% C.L.) to the effective mass of the electron neutrino. It measures the tritium β-decay spectrum close to its endpoint with a spectrometer based on the MAC-E filter technique. The β-decay electrons are guided by a magnetic field that operates in the mT range in the central spectrometer volume; it is fine-tuned by a large-volume air coil system surrounding the spectrometer vessel. The purpose of the system is to provide optimal transmission properties for signal electrons and to achieve efficient magnetic shielding against background. In this paper we describe the technical design of the air coil system, including its mechanical and electrical properties. We outline the importance of its versatile operation modes in background investigation and suppression techniques. We compare magnetic field measurements in the inner spectrometer volume during system commissioning with corresponding simulations, which allows us to verify the system's functionality in fine-tuning the magnetic field configuration. This is of major importance for a successful neutrino mass measurement at KATRIN.

  19. Large Eddy Simulation of a Turbulent Jet

    NASA Technical Reports Server (NTRS)

    Webb, A. T.; Mansour, Nagi N.

    2001-01-01

    Here we present the results of a Large Eddy Simulation of a non-buoyant jet issuing from a circular orifice in a wall, and developing in neutral surroundings. The effects of the subgrid scales on the large eddies have been modeled with the dynamic large eddy simulation model applied to the fully 3D domain in spherical coordinates. The simulation captures the unsteady motions of the large-scales within the jet as well as the laminar motions in the entrainment region surrounding the jet. The computed time-averaged statistics (mean velocity, concentration, and turbulence parameters) compare well with laboratory data without invoking an empirical entrainment coefficient as employed by line integral models. The use of the large eddy simulation technique allows examination of unsteady and inhomogeneous features such as the evolution of eddies and the details of the entrainment process.

  20. Exact-Differential Large-Scale Traffic Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanai, Masatoshi; Suzumura, Toyotaro; Theodoropoulos, Georgios

    2015-01-01

    Analyzing large-scale traffic by simulation requires executing the simulation many times with varying scenarios or parameters. Such repeated execution introduces substantial redundancy, because the change from one scenario to the next is usually minor, for example, blocking a single road or changing the speed limit on a few roads. In this paper, we propose a new redundancy reduction technique, called exact-differential simulation, which simulates only the changed portions of a scenario in later executions while producing exactly the same results as a full simulation. The paper consists of two main efforts: (i) the key idea and algorithm of exact-differential simulation, and (ii) a method to build large-scale traffic simulation on top of exact-differential simulation. In experiments on a Tokyo traffic simulation, exact-differential simulation improves elapsed time over whole simulation by a factor of 7.26 on average, and by a factor of 2.26 even in the worst case.

  1. Evaluating lossy data compression on climate simulation data within a large ensemble

    DOE PAGES

    Baker, Allison H.; Hammerling, Dorit M.; Mickelson, Sheri A.; ...

    2016-12-07

    High-resolution Earth system model simulations generate enormous data volumes, and retaining the data from these simulations often strains institutional storage resources. Further, these exceedingly large storage requirements negatively impact science objectives, for example, by forcing reductions in data output frequency, simulation length, or ensemble size. To lessen data volumes from the Community Earth System Model (CESM), we advocate the use of lossy data compression techniques. While lossy data compression does not exactly preserve the original data (as lossless compression does), lossy techniques have an advantage in terms of smaller storage requirements. To preserve the integrity of the scientific simulation data, the effects of lossy data compression on the original data should, at a minimum, not be statistically distinguishable from the natural variability of the climate system, and previous preliminary work with data from CESM has shown this goal to be attainable. However, to ultimately convince climate scientists that it is acceptable to use lossy data compression, we provide climate scientists with access to publicly available climate data that have undergone lossy data compression. In particular, we report on the results of a lossy data compression experiment with output from the CESM Large Ensemble (CESM-LE) Community Project, in which we challenge climate scientists to examine features of the data relevant to their interests, and attempt to identify which of the ensemble members have been compressed and reconstructed. We find that while detecting distinguishing features is certainly possible, the compression effects noticeable in these features are often unimportant or disappear in post-processing analyses. In addition, we perform several analyses that directly compare the original data to the reconstructed data to investigate the preservation, or lack thereof, of specific features critical to climate science. Overall, we conclude that
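    The minimum requirement stated above, that compression effects should not be statistically distinguishable from the natural variability of the climate system, can be illustrated with a crude z-score check against the ensemble spread. This is a sketch of the idea only, not the paper's actual battery of diagnostics, and the numbers are invented for illustration.

```python
import statistics

def within_natural_variability(ensemble_means, candidate_mean, z_max=2.0):
    """Crude indistinguishability check: is a reconstructed field's
    statistic within z_max ensemble standard deviations of the ensemble
    mean? Real evaluations use far richer feature-based diagnostics."""
    mu = statistics.fmean(ensemble_means)
    sigma = statistics.stdev(ensemble_means)
    z = abs(candidate_mean - mu) / sigma
    return z <= z_max

# Hypothetical global-mean temperatures (K) from five ensemble members,
# and a reconstructed (decompressed) member's value to test.
members = [288.1, 288.3, 287.9, 288.2, 288.0]
ok = within_natural_variability(members, 288.15)
```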

  2. Evaluating lossy data compression on climate simulation data within a large ensemble

    NASA Astrophysics Data System (ADS)

    Baker, Allison H.; Hammerling, Dorit M.; Mickelson, Sheri A.; Xu, Haiying; Stolpe, Martin B.; Naveau, Phillipe; Sanderson, Ben; Ebert-Uphoff, Imme; Samarasinghe, Savini; De Simone, Francesco; Carbone, Francesco; Gencarelli, Christian N.; Dennis, John M.; Kay, Jennifer E.; Lindstrom, Peter

    2016-12-01

    High-resolution Earth system model simulations generate enormous data volumes, and retaining the data from these simulations often strains institutional storage resources. Further, these exceedingly large storage requirements negatively impact science objectives, for example, by forcing reductions in data output frequency, simulation length, or ensemble size. To lessen data volumes from the Community Earth System Model (CESM), we advocate the use of lossy data compression techniques. While lossy data compression does not exactly preserve the original data (as lossless compression does), lossy techniques have an advantage in terms of smaller storage requirements. To preserve the integrity of the scientific simulation data, the effects of lossy data compression on the original data should, at a minimum, not be statistically distinguishable from the natural variability of the climate system, and previous preliminary work with data from CESM has shown this goal to be attainable. However, to ultimately convince climate scientists that it is acceptable to use lossy data compression, we provide climate scientists with access to publicly available climate data that have undergone lossy data compression. In particular, we report on the results of a lossy data compression experiment with output from the CESM Large Ensemble (CESM-LE) Community Project, in which we challenge climate scientists to examine features of the data relevant to their interests, and attempt to identify which of the ensemble members have been compressed and reconstructed. We find that while detecting distinguishing features is certainly possible, the compression effects noticeable in these features are often unimportant or disappear in post-processing analyses. In addition, we perform several analyses that directly compare the original data to the reconstructed data to investigate the preservation, or lack thereof, of specific features critical to climate science. Overall, we conclude that applying

  3. Evaluating lossy data compression on climate simulation data within a large ensemble

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Allison H.; Hammerling, Dorit M.; Mickelson, Sheri A.

    High-resolution Earth system model simulations generate enormous data volumes, and retaining the data from these simulations often strains institutional storage resources. Further, these exceedingly large storage requirements negatively impact science objectives, for example, by forcing reductions in data output frequency, simulation length, or ensemble size. To lessen data volumes from the Community Earth System Model (CESM), we advocate the use of lossy data compression techniques. While lossy data compression does not exactly preserve the original data (as lossless compression does), lossy techniques have an advantage in terms of smaller storage requirements. To preserve the integrity of the scientific simulation data, the effects of lossy data compression on the original data should, at a minimum, not be statistically distinguishable from the natural variability of the climate system, and previous preliminary work with data from CESM has shown this goal to be attainable. However, to ultimately convince climate scientists that it is acceptable to use lossy data compression, we provide climate scientists with access to publicly available climate data that have undergone lossy data compression. In particular, we report on the results of a lossy data compression experiment with output from the CESM Large Ensemble (CESM-LE) Community Project, in which we challenge climate scientists to examine features of the data relevant to their interests, and attempt to identify which of the ensemble members have been compressed and reconstructed. We find that while detecting distinguishing features is certainly possible, the compression effects noticeable in these features are often unimportant or disappear in post-processing analyses. In addition, we perform several analyses that directly compare the original data to the reconstructed data to investigate the preservation, or lack thereof, of specific features critical to climate science. Overall, we conclude that

  4. GENASIS Mathematics : Object-oriented manifolds, operations, and solvers for large-scale physics simulations

    NASA Astrophysics Data System (ADS)

    Cardall, Christian Y.; Budiardja, Reuben D.

    2018-01-01

    The large-scale computer simulation of a system of physical fields governed by partial differential equations requires some means of approximating the mathematical limit of continuity. For example, conservation laws are often treated with a 'finite-volume' approach in which space is partitioned into a large number of small 'cells,' with fluxes through cell faces providing an intuitive discretization modeled on the mathematical definition of the divergence operator. Here we describe and make available Fortran 2003 classes furnishing extensible object-oriented implementations of simple meshes and the evolution of generic conserved currents thereon, along with individual 'unit test' programs and larger example problems demonstrating their use. These classes inaugurate the Mathematics division of our developing astrophysics simulation code GenASiS (General Astrophysical Simulation System), which will be expanded over time to include additional meshing options, mathematical operations, solver types, and solver variations appropriate for many multiphysics applications.
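    The finite-volume discretization described above, in which fluxes through cell faces discretize the divergence operator, can be sketched in one dimension for linear advection. This is a hypothetical Python illustration of the general technique, not GenASiS code (which is Fortran 2003).

```python
def finite_volume_step(u, velocity, dx, dt):
    """One explicit finite-volume update for 1-D linear advection with
    periodic boundaries:
        u_i^{n+1} = u_i^n - (dt/dx) * (F_{i+1/2} - F_{i-1/2}),
    using first-order upwind fluxes (velocity > 0 assumed)."""
    n = len(u)
    # Upwind flux through the left face of cell i is velocity * u[i-1];
    # through the right face it is velocity * u[i].
    flux_left = [velocity * u[(i - 1) % n] for i in range(n)]
    flux_right = [velocity * u[i] for i in range(n)]
    return [u[i] - dt / dx * (flux_right[i] - flux_left[i]) for i in range(n)]

u0 = [0.0, 1.0, 0.0, 0.0]
u1 = finite_volume_step(u0, velocity=1.0, dx=1.0, dt=0.5)
```

Because each face flux is subtracted from one cell and added to its neighbor, the total "mass" sum(u) is conserved exactly with periodic boundaries; stability requires the usual CFL restriction dt ≤ dx/velocity.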

  5. Large Scale Traffic Simulations

    DOT National Transportation Integrated Search

    1997-01-01

    Large scale microscopic (i.e. vehicle-based) traffic simulations pose high demands on computation speed in at least two application areas: (i) real-time traffic forecasting, and (ii) long-term planning applications (where repeated "looping" between t...

  6. Large-Eddy Simulation of Wind-Plant Aerodynamics: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Churchfield, M. J.; Lee, S.; Moriarty, P. J.

    In this work, we present results of a large-eddy simulation of the 48 multi-megawatt turbines composing the Lillgrund wind plant. Turbulent inflow wind is created by performing an atmospheric boundary layer precursor simulation, and turbines are modeled using a rotating, variable-speed actuator line representation. The motivation for this work is that few others have done wind plant large-eddy simulations with a substantial number of turbines, and the methods for carrying out the simulations are varied. We wish to draw upon the strengths of the existing simulations and our growing atmospheric large-eddy simulation capability to create a sound methodology for performing this type of simulation. We have used the OpenFOAM CFD toolbox to create our solver.

  7. Evaluation of Large Volume SrI2(Eu) Scintillator Detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sturm, B W; Cherepy, N J; Drury, O B

    2010-11-18

    There is an ever-increasing demand for gamma-ray detectors which can achieve good energy resolution, high detection efficiency, and room-temperature operation. We are working to address each of these requirements through the development of large-volume SrI₂(Eu) scintillator detectors. In this work, we have evaluated a variety of SrI₂ crystals with volumes >10 cm³. The goal of this research was to examine the causes of energy resolution degradation for larger detectors and to determine what can be done to mitigate these effects. Testing both packaged and unpackaged detectors, we have consistently achieved better resolution with the packaged detectors. Using a collimated gamma-ray source, it was determined that the better energy resolution of the packaged detectors is correlated with better light collection uniformity. A number of packaged detectors were fabricated and tested, and the best spectroscopic performance was achieved for a 3% Eu-doped crystal with an energy resolution of 2.93% FWHM at 662 keV. Simulations of SrI₂(Eu) crystals were also performed to better understand the light transport physics in scintillators and are reported. This study has important implications for the development of SrI₂(Eu) detectors for national security purposes.
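    The quoted resolution figure follows from the standard Gaussian-photopeak definition of percent FWHM. A minimal sketch; the sigma value below is back-calculated for illustration and is not reported in the work.

```python
import math

def resolution_percent_fwhm(sigma_kev, energy_kev):
    """Energy resolution as percent FWHM: for a Gaussian photopeak,
    FWHM = 2*sqrt(2*ln 2) * sigma ≈ 2.355 * sigma."""
    fwhm = 2.0 * math.sqrt(2.0 * math.log(2.0)) * sigma_kev
    return 100.0 * fwhm / energy_kev

# A 662 keV Cs-137 photopeak with an assumed sigma of ~8.24 keV gives
# roughly the 2.93% FWHM quoted for the best detector above.
res = resolution_percent_fwhm(8.24, 662.0)
```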

  8. Large Eddy Simulation of Air Escape through a Hospital Isolation Room Single Hinged Doorway—Validation by Using Tracer Gases and Simulated Smoke Videos

    PubMed Central

    Saarinen, Pekka E.; Kalliomäki, Petri; Tang, Julian W.; Koskela, Hannu

    2015-01-01

    The use of hospital isolation rooms has increased considerably in recent years due to the worldwide outbreaks of various emerging infectious diseases. However, the passage of staff through isolation room doors is suspected to be a cause of containment failure, especially in case of hinged doors. It is therefore important to minimize inadvertent contaminant airflow leakage across the doorway during such movements. To this end, it is essential to investigate the behavior of such airflows, especially the overall volume of air that can potentially leak across the doorway during door-opening and human passage. Experimental measurements using full-scale mock-ups are expensive and labour intensive. A useful alternative approach is the application of Computational Fluid Dynamics (CFD) modelling using a time-resolved Large Eddy Simulation (LES) method. In this study simulated air flow patterns are qualitatively compared with experimental ones, and the simulated total volume of air that escapes is compared with the experimentally measured volume. It is shown that the LES method is able to reproduce, at room scale, the complex transient airflows generated during door-opening/closing motions and the passage of a human figure through the doorway between two rooms. This was a basic test case that was performed in an isothermal environment without ventilation. However, the advantage of the CFD approach is that the addition of ventilation airflows and a temperature difference between the rooms is, in principle, a relatively simple task. A standard method to observe flow structures is dosing smoke into the flow. In this paper we introduce graphical methods to simulate smoke experiments by LES, making it very easy to compare the CFD simulation to the experiments. The results demonstrate that the transient CFD simulation is a promising tool to compare different isolation room scenarios without the need to construct full-scale experimental models. The CFD model is able to reproduce

  9. Accessibility and Analysis to NASA's New Large Volume Missions

    NASA Astrophysics Data System (ADS)

    Hausman, J.; Gangl, M.; McAuley, J.; Toaz, R., Jr.

    2016-12-01

    Each new satellite mission continues to measure larger volumes of data than the last. This is especially true of the new NASA satellite missions NISAR and SWOT, launching in 2020 and 2021, which will produce petabytes of data a year. A major concern is how users will be able to analyze such volumes. This presentation will show how cloud storage and analysis can help overcome this challenge and accommodate multiple users' needs. While users may only need gigabytes of data for their research, the data center will need to leverage the processing power of the cloud to provide search and subsetting capabilities over the large volume of data. There is also a vast array of user types that require different tools and services to access and analyze the data. Some users need global data to run climate models, while others require small, dynamic regions with many analyses and transformations. There will also be a need to generate data with different inputs or correction algorithms that the project may not be able to provide, as those will be highly specialized for specific regions or will evolve more quickly than the project can reprocess. By having the data and tools side by side, users will be able to access the data they require and analyze it all in one place. By placing data in the cloud, users can analyze the data there, shifting the current "download and analyze" paradigm to "log-in and analyze". The cloud will provide the processing power needed to analyze large volumes of data, subset small regions over large volumes of data, and regenerate/reformat data to the specificity each user requires.

  10. Scalar excursions in large-eddy simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matheou, Georgios; Dimotakis, Paul E.

    Here, the range of values of scalar fields in turbulent flows is bounded by their boundary values, for passive scalars, and by a combination of boundary values, reaction rates, phase changes, etc., for active scalars. The current investigation focuses on the local conservation of passive scalar concentration fields and the ability of the large-eddy simulation (LES) method to observe the boundedness of passive scalar concentrations. In practice, as a result of numerical artifacts, this fundamental constraint is often violated, with scalars exhibiting unphysical excursions. The present study characterizes passive-scalar excursions in LES of a shear flow and examines methods for diagnosis and assessment of the problem. The analysis of scalar-excursion statistics supports the main hypothesis of the current study: unphysical scalar excursions in LES result from dispersive errors of the convection-term discretization when the subgrid-scale (SGS) model provides insufficient dissipation to produce a sufficiently smooth scalar field. In the LES runs three parameters are varied: the discretization of the convection terms, the SGS model, and grid resolution. Unphysical scalar excursions decrease as the order of accuracy of non-dissipative schemes is increased, but the improvement rate decreases with increasing order of accuracy. Two SGS models are examined, the stretched-vortex and a constant-coefficient Smagorinsky. Scalar excursions strongly depend on the SGS model. The excursions are significantly reduced when the characteristic SGS scale is set to double the grid spacing in runs with the stretched-vortex model. The maximum excursion and the volume fraction of excursions outside boundary values show opposite trends with respect to resolution. The maximum unphysical excursions increase as resolution increases, whereas the volume fraction decreases. The reason for the increase in the maximum excursion is statistical and traceable to the number of grid
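The excursion diagnostics used in this study (maximum excursion beyond the physical bounds and the volume fraction of out-of-bounds grid points) are simple to compute on a gridded scalar field. A minimal sketch, assuming a passive scalar physically bounded by [0, 1]; the function and synthetic field here are illustrative, not the authors' code:

```python
import numpy as np

def excursion_stats(scalar, lo=0.0, hi=1.0):
    """Diagnose unphysical excursions of a bounded passive scalar field.

    Returns the maximum excursion beyond the physical bounds and the
    volume fraction of grid points lying outside [lo, hi].
    """
    below = np.maximum(lo - scalar, 0.0)   # excursion magnitude under the lower bound
    above = np.maximum(scalar - hi, 0.0)   # excursion magnitude over the upper bound
    max_excursion = max(below.max(), above.max())
    volume_fraction = np.mean((scalar < lo) | (scalar > hi))
    return max_excursion, volume_fraction

# Example: a synthetic field with mild over/undershoots, as produced by a
# dispersive convection scheme with insufficient SGS dissipation
rng = np.random.default_rng(0)
field = rng.uniform(0.0, 1.0, size=(32, 32, 32))
field[0, 0, 0] = 1.05   # overshoot
field[0, 0, 1] = -0.02  # undershoot
m, f = excursion_stats(field)
```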

  11. Scalar excursions in large-eddy simulations

    DOE PAGES

    Matheou, Georgios; Dimotakis, Paul E.

    2016-08-31

    Here, the range of values of scalar fields in turbulent flows is bounded by their boundary values, for passive scalars, and by a combination of boundary values, reaction rates, phase changes, etc., for active scalars. The current investigation focuses on the local conservation of passive scalar concentration fields and the ability of the large-eddy simulation (LES) method to observe the boundedness of passive scalar concentrations. In practice, as a result of numerical artifacts, this fundamental constraint is often violated, with scalars exhibiting unphysical excursions. The present study characterizes passive-scalar excursions in LES of a shear flow and examines methods for diagnosis and assessment of the problem. The analysis of scalar-excursion statistics supports the main hypothesis of the current study: unphysical scalar excursions in LES result from dispersive errors of the convection-term discretization when the subgrid-scale (SGS) model provides insufficient dissipation to produce a sufficiently smooth scalar field. In the LES runs three parameters are varied: the discretization of the convection terms, the SGS model, and grid resolution. Unphysical scalar excursions decrease as the order of accuracy of non-dissipative schemes is increased, but the improvement rate decreases with increasing order of accuracy. Two SGS models are examined, the stretched-vortex and a constant-coefficient Smagorinsky. Scalar excursions strongly depend on the SGS model. The excursions are significantly reduced when the characteristic SGS scale is set to double the grid spacing in runs with the stretched-vortex model. The maximum excursion and the volume fraction of excursions outside boundary values show opposite trends with respect to resolution. The maximum unphysical excursions increase as resolution increases, whereas the volume fraction decreases. The reason for the increase in the maximum excursion is statistical and traceable to the number of grid

  12. Spatial considerations during cryopreservation of a large volume sample.

    PubMed

    Kilbride, Peter; Lamb, Stephen; Milne, Stuart; Gibbons, Stephanie; Erro, Eloy; Bundy, James; Selden, Clare; Fuller, Barry; Morris, John

    2016-08-01

    There have been relatively few studies on the implications of the physical conditions experienced by cells during large volume (litres) cryopreservation - most studies have focused on the cryopreservation of smaller volumes, typically up to 2 ml. This study explores the effects of ice growth by progressive solidification, generally seen during larger scale cryopreservation, on encapsulated liver hepatocyte spheroids, and it develops a method to reliably sample different regions across the frozen cores of samples experiencing progressive solidification. These issues are examined in the context of a Bioartificial Liver Device, which requires cryopreservation of a 2 L volume in a strict cylindrical geometry for optimal clinical delivery. Progressive solidification cannot be avoided in this arrangement. In such a system optimal cryoprotectant concentrations and cooling rates are known. However, applying these parameters to a large volume is challenging due to the thermal mass and subsequent thermal lag, and the specific impact of this on the cryopreservation outcome needs to be established. Under conditions of progressive solidification, the spatial location of Encapsulated Liver Spheroids had a strong impact on post-thaw recovery. Cells in areas first and last to solidify demonstrated significantly impaired post-thaw function, whereas areas solidifying through the majority of the process exhibited better post-thaw outcomes. It was also found that samples in which the ice thawed more rapidly had greater viability 24 h post-thaw (75.7 ± 3.9% vs 62.0 ± 7.2%). These findings have implications for the cryopreservation of large volumes with a rigid shape and for the cryopreservation of a Bioartificial Liver Device. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  13. Large Eddy Simulations using oodlesDST

    DTIC Science & Technology

    2016-01-01

    Research Agency DST-Group-TR-3205 ABSTRACT: The oodlesDST code is based on OpenFOAM software and performs Large Eddy Simulations of … maritime platforms using a variety of simulation techniques. … He is currently using OpenFOAM software to perform both Reynolds Averaged Navier-Stokes

  14. Mechanistic simulation of normal-tissue damage in radiotherapy—implications for dose-volume analyses

    NASA Astrophysics Data System (ADS)

    Rutkowska, Eva; Baker, Colin; Nahum, Alan

    2010-04-01

    A radiobiologically based 3D model of normal tissue has been developed in which complications are generated when the tissue is 'irradiated'. The aim is to provide insight into the connection between dose-distribution characteristics, different organ architectures and complication rates beyond that obtainable with simple DVH-based analytical NTCP models. In this model the organ consists of a large number of functional subunits (FSUs), populated by stem cells which are killed according to the LQ model. A complication is triggered if the density of FSUs in any 'critical functioning volume' (CFV) falls below some threshold. The (fractional) CFV determines the organ architecture and can be varied continuously from small (series-like behaviour) to large (parallel-like). A key feature of the model is its ability to account for the spatial dependence of dose distributions. Simulations were carried out to investigate correlations between dose-volume parameters and the incidence of 'complications' using different pseudo-clinical dose distributions. Correlations between dose-volume parameters and outcome depended on characteristics of the dose distributions and on organ architecture. As anticipated, the mean dose and V20 correlated most strongly with outcome for a parallel organ, and the maximum dose for a serial organ. Interestingly, better correlation was obtained between the 3D computer model and the LKB model with dose distributions typical of serial organs than with those typical of parallel organs. This work links the results of dose-volume analyses to dataset characteristics typical of serial and parallel organs, and it may help investigators interpret the results of clinical studies.
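The tissue-architecture model described above can be sketched in a few lines. All parameter values below (LQ coefficients, stem-cell count, fractionation, CFV fraction, threshold) are hypothetical placeholders, and the CFV is approximated non-spatially by the worst fraction of FSUs rather than by a spatial subvolume:

```python
import numpy as np

rng = np.random.default_rng(1)

ALPHA, BETA = 0.15, 0.05      # LQ parameters (Gy^-1, Gy^-2), assumed values
CELLS_PER_FSU = 100           # stem cells per functional subunit, assumed
N_FRACTIONS = 30

def fsu_survives(dose_per_fraction):
    """An FSU survives if at least one stem cell survives (LQ cell kill)."""
    d = dose_per_fraction
    p_cell = np.exp(-N_FRACTIONS * (ALPHA * d + BETA * d * d))  # per-cell survival
    n_surviving = rng.binomial(CELLS_PER_FSU, p_cell)
    return n_surviving > 0

def complication(dose_map, cfv_fraction=0.25, threshold=0.5):
    """Trigger a complication if the surviving-FSU density in the worst
    'critical functioning volume' falls below the threshold. Here the CFV
    is simplified to the least-functional fraction of FSUs (non-spatial)."""
    alive = np.array([fsu_survives(d) for d in dose_map.ravel()], dtype=float)
    alive.sort()                           # dead FSUs first
    n_cfv = max(1, int(cfv_fraction * alive.size))
    return alive[:n_cfv].mean() < threshold

# Uniform 2 Gy/fraction to a 20x20 grid of FSUs: sterilizes essentially
# every FSU with these assumed parameters, so a complication is triggered
doses = np.full((20, 20), 2.0)
hit = complication(doses)
```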

  15. Tri-FAST Hardware-in-the-Loop Simulation. Volume I. Tri-FAST Hardware-in-the-Loop Simulation at the Advanced Simulation Center

    DTIC Science & Technology

    1979-03-28

    TECHNICAL REPORT T-79-43 TRI-FAST HARDWARE-IN-THE-LOOP SIMULATION. Volume 1: Tri-FAST Hardware-in-the-Loop Simulation at the Advanced Simulation Center. Keywords: Tri-FAST, hardware-in-the-loop, ACSL, Advanced Simulation Center, RF target models. The purpose of this report is to document the Tri-FAST missile simulation development and the seeker hardware-in-the

  16. Study of Hydrokinetic Turbine Arrays with Large Eddy Simulation

    NASA Astrophysics Data System (ADS)

    Sale, Danny; Aliseda, Alberto

    2014-11-01

    Marine renewable energy is advancing towards commercialization, including electrical power generation from ocean, river, and tidal currents. The focus of this work is to develop numerical simulations capable of predicting the power generation potential of hydrokinetic turbine arrays. This includes analysis of unsteady and averaged flow fields, turbulence statistics, and unsteady loadings on turbine rotors and support structures due to interaction with rotor wakes and ambient turbulence. The governing equations of large-eddy-simulation (LES) are solved using a finite-volume method, and the presence of turbine blades is approximated by the actuator-line method, in which hydrodynamic forces are projected onto the flow field as a body force. The actuator-line approach captures helical wake formation, including vortex shedding from individual blades, and the effects of drag and vorticity generation from the rough seabed surface are accounted for by wall-models. This LES framework was used to replicate a previous flume experiment consisting of three hydrokinetic turbines tested under various operating conditions and array layouts. Predictions of the power generation, velocity deficit and turbulence statistics in the wakes are compared between the LES and experimental datasets.
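The actuator-line body-force projection mentioned above is commonly implemented with a Gaussian regularization kernel. A minimal 1-D sketch; the kernel width eps is an assumed free parameter, not a value from this study:

```python
import numpy as np

def project_force(x_grid, x_actuator, force, eps):
    """Spread a point force from an actuator-line element onto grid nodes
    using a normalized 1-D Gaussian regularization kernel of width eps."""
    kernel = np.exp(-((x_grid - x_actuator) / eps) ** 2) / (eps * np.sqrt(np.pi))
    return force * kernel              # body force per unit length on the grid

x = np.linspace(0.0, 10.0, 201)        # grid coordinates (illustrative)
f = project_force(x, x_actuator=5.0, force=2.0, eps=0.5)

# The integrated body force should recover the point force
dx = x[1] - x[0]
total = np.sum(f) * dx
```

Because the kernel is normalized, the momentum added to the flow matches the blade force regardless of grid resolution, provided eps spans a few cells.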

  17. Geophysics Under Pressure: Large-Volume Presses Versus the Diamond-Anvil Cell

    NASA Astrophysics Data System (ADS)

    Hazen, R. M.

    2002-05-01

    Prior to 1970, the legacy of Harvard physicist Percy Bridgman dominated high-pressure geophysics. Massive presses with large-volume devices, including piston-cylinder, opposed-anvil, and multi-anvil configurations, were widely used in both science and industry to achieve a range of crustal and upper mantle temperatures and pressures. George Kennedy of UCLA was a particularly influential advocate of large-volume apparatus for geophysical research prior to his death in 1980. The high-pressure scene began to change in 1959 with the invention of the diamond-anvil cell, which was designed simultaneously and independently by John Jamieson at the University of Chicago and Alvin Van Valkenburg at the National Bureau of Standards in Washington, DC. The compact, inexpensive diamond cell achieved record static pressures and had the advantage of optical access to the high-pressure environment. Nevertheless, members of the geophysical community, who favored the substantial sample volumes, geothermally relevant temperature range, and satisfying bulk of large-volume presses, initially viewed the diamond cell with indifference or even contempt. Several factors led to a gradual shift in emphasis from large-volume presses to diamond-anvil cells in geophysical research during the 1960s and 1970s. These factors include (1) their relatively low cost at a time of fiscal restraint, (2) Alvin Van Valkenburg's new position as a Program Director at the National Science Foundation in 1964 (when George Kennedy's proposal for a National High-Pressure Laboratory was rejected), (3) the development of lasers and micro-analytical spectroscopic techniques suitable for analyzing samples in a diamond cell, and (4) the attainment of record pressures (e.g., 100 GPa in 1975 by Mao and Bell at the Geophysical Laboratory). Today, a more balanced collaborative approach has been adopted by the geophysics and mineral physics community. Many high-pressure laboratories operate a new generation of less expensive

  18. Endoclips vs large or small-volume epinephrine in peptic ulcer recurrent bleeding

    PubMed Central

    Ljubicic, Neven; Budimir, Ivan; Biscanin, Alen; Nikolic, Marko; Supanc, Vladimir; Hrabar, Davor; Pavic, Tajana

    2012-01-01

    AIM: To compare recurrent bleeding after endoscopic injection of different epinephrine volumes versus hemoclips in patients with bleeding peptic ulcer. METHODS: Between January 2005 and December 2009, 150 patients with a gastric or duodenal bleeding ulcer with major stigmata of hemorrhage and a nonbleeding visible vessel in the ulcer bed (Forrest IIa) were included in the study. Patients were randomized to a small-volume epinephrine group (15 to 25 mL injection; Group 1, n = 50), a large-volume epinephrine group (30 to 40 mL injection; Group 2, n = 50) or a hemoclip group (Group 3, n = 50). The rate of recurrent bleeding, as the primary outcome, was compared between the groups of patients included in the study. Secondary outcomes compared between the groups were primary hemostasis rate, permanent hemostasis, need for emergency surgery, 30 d mortality, bleeding-related deaths, length of hospital stay and transfusion requirements. RESULTS: Initial hemostasis was obtained in all patients. The rate of early recurrent bleeding was 30% (15/50) in the small-volume epinephrine group (Group 1) and 16% (8/50) in the large-volume epinephrine group (Group 2) (P = 0.09). The rate of recurrent bleeding was 4% (2/50) in the hemoclip group (Group 3); the difference was statistically significant with regard to patients treated with either small-volume or large-volume epinephrine solution (P = 0.0005 and P = 0.045, respectively). Duration of hospital stay was significantly shorter among patients treated with hemoclips than among patients treated with epinephrine, whereas there were no differences in transfusion requirements or 30 d mortality between the groups. CONCLUSION: The endoclip is superior to both small- and large-volume injection of epinephrine in the prevention of recurrent bleeding in patients with peptic ulcer. PMID:22611315
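The recurrence-rate comparisons reported above are standard two-proportion problems; a stdlib-only Fisher exact test illustrates the kind of calculation involved. This is an illustrative recomputation on the published counts, not the authors' statistical method (the paper does not state which test was used):

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test for a 2x2 table [[a, b], [c, d]]:
    sums the hypergeometric probabilities of all tables at most as
    likely as the observed one."""
    n, row1, col1 = a + b + c + d, a + b, a + c

    def p_table(x):  # probability of the table with cell (0,0) = x
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = p_table(a)
    lo = max(0, row1 + col1 - n)
    hi = min(row1, col1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs + 1e-12)

# Hemoclip group: 2/50 rebled; small-volume epinephrine group: 15/50 rebled
p = fisher_exact_two_sided(2, 48, 15, 35)
```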

  19. Large eddy simulation modeling of particle-laden flows in complex terrain

    NASA Astrophysics Data System (ADS)

    Salesky, S.; Giometto, M. G.; Chamecki, M.; Lehning, M.; Parlange, M. B.

    2017-12-01

    The transport, deposition, and erosion of heavy particles over complex terrain in the atmospheric boundary layer is an important process for hydrology, air quality forecasting, biology, and geomorphology. However, in situ observations can be challenging in complex terrain due to spatial heterogeneity. Furthermore, there is a need to develop numerical tools that can accurately represent the physics of these multiphase flows over complex surfaces. We present a new numerical approach to accurately model the transport and deposition of heavy particles in complex terrain using large eddy simulation (LES). Particle transport is represented through solution of the advection-diffusion equation including terms that represent gravitational settling and inertia. The particle conservation equation is discretized in a cut-cell finite volume framework in order to accurately enforce mass conservation. Simulation results will be validated with experimental data, and numerical considerations required to enforce boundary conditions at the surface will be discussed. Applications will be presented in the context of snow deposition and transport, as well as urban dispersion.
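The particle conservation equation with gravitational settling described above can be sketched with a conservative first-order upwind finite-volume discretization in 1-D. This is a simplification of the cut-cell 3-D framework in the abstract; all values and the zero-flux boundary treatment are illustrative:

```python
import numpy as np

def step_concentration(c, u, ws, kappa, dz, dt):
    """One explicit step of the 1-D particle conservation equation
    dc/dt + d((u - ws) c)/dz = d(kappa dc/dz)/dz, with a conservative
    first-order upwind scheme. Zero-flux boundaries at bottom and top
    (a simplification; a deposition flux would normally be applied)."""
    w = u - ws                                       # net vertical velocity (up positive)
    # Fluxes at interior faces (face i lies between cells i and i+1)
    adv = np.where(w > 0.0, w * c[:-1], w * c[1:])   # upwind advective flux
    dif = -kappa * (c[1:] - c[:-1]) / dz             # central diffusive flux
    flux = adv + dif
    dcdt = np.empty_like(c)
    dcdt[0] = -flux[0] / dz                          # zero flux through the bottom
    dcdt[1:-1] = -(flux[1:] - flux[:-1]) / dz
    dcdt[-1] = flux[-1] / dz                         # zero flux through the top
    return c + dt * dcdt

# Example: a particle layer aloft settles with ws = 0.5 m/s in calm air
N, dz, dt = 50, 1.0, 0.2
z = (np.arange(N) + 0.5) * dz
c = np.exp(-0.5 * ((z - 35.0) / 3.0) ** 2)
mass0 = c.sum() * dz
for _ in range(100):
    c = step_concentration(c, u=0.0, ws=0.5, kappa=0.1, dz=dz, dt=dt)
mass1 = c.sum() * dz
```

The conservative (flux-form) update guarantees that total particle mass is preserved to machine precision, which is the property the cut-cell finite-volume framework is designed to enforce on complex terrain.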

  20. Development of large volume double ring penning plasma discharge source for efficient light emissions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prakash, Ram; Vyas, Gheesa Lal; Jain, Jalaj

    In this paper, the development of large volume double ring Penning plasma discharge source for efficient light emissions is reported. The developed Penning discharge source consists of two cylindrical end cathodes of stainless steel having radius 6 cm and a gap 5.5 cm between them, which are fitted in the top and bottom flanges of the vacuum chamber. Two stainless steel anode rings with thickness 0.4 cm and inner diameters 6.45 cm having separation 2 cm are kept at the discharge centre. Neodymium (Nd₂Fe₁₄B) permanent magnets are physically inserted behind the cathodes for producing nearly uniform magnetic field of ~0.1 T at the center. Experiments and simulations have been performed for single and double anode ring configurations using helium gas discharge, which infer that double ring configuration gives better light emissions in the large volume Penning plasma discharge arrangement. The optical emission spectroscopy measurements are used to complement the observations. The spectral line-ratio technique is utilized to determine the electron plasma density. The estimated electron plasma density in double ring plasma configuration is ~2 × 10¹¹ cm⁻³, which is around one order of magnitude larger than that of single ring arrangement.

  1. Development of large volume double ring penning plasma discharge source for efficient light emissions.

    PubMed

    Prakash, Ram; Vyas, Gheesa Lal; Jain, Jalaj; Prajapati, Jitendra; Pal, Udit Narayan; Chowdhuri, Malay Bikas; Manchanda, Ranjana

    2012-12-01

    In this paper, the development of large volume double ring Penning plasma discharge source for efficient light emissions is reported. The developed Penning discharge source consists of two cylindrical end cathodes of stainless steel having radius 6 cm and a gap 5.5 cm between them, which are fitted in the top and bottom flanges of the vacuum chamber. Two stainless steel anode rings with thickness 0.4 cm and inner diameters 6.45 cm having separation 2 cm are kept at the discharge centre. Neodymium (Nd₂Fe₁₄B) permanent magnets are physically inserted behind the cathodes for producing nearly uniform magnetic field of ~0.1 T at the center. Experiments and simulations have been performed for single and double anode ring configurations using helium gas discharge, which infer that double ring configuration gives better light emissions in the large volume Penning plasma discharge arrangement. The optical emission spectroscopy measurements are used to complement the observations. The spectral line-ratio technique is utilized to determine the electron plasma density. The estimated electron plasma density in double ring plasma configuration is ~2 × 10¹¹ cm⁻³, which is around one order of magnitude larger than that of single ring arrangement.
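The spectral line-ratio technique mentioned above infers electron density by comparing a measured intensity ratio against a modeled ratio-versus-density curve from a collisional-radiative model. A sketch with purely hypothetical calibration values (not data from this study):

```python
import numpy as np

# Hypothetical collisional-radiative calibration: modeled helium
# line-intensity ratio as a function of electron density.
ne_grid = np.array([1e10, 3e10, 1e11, 3e11, 1e12])     # n_e (cm^-3)
ratio_grid = np.array([0.12, 0.19, 0.31, 0.45, 0.60])  # modeled line ratio

def density_from_ratio(measured_ratio):
    """Invert the monotonic ratio(n_e) curve by log-linear interpolation."""
    return 10 ** np.interp(measured_ratio, ratio_grid, np.log10(ne_grid))

ne = density_from_ratio(0.38)   # a measured ratio between calibration points
```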

  2. Large volume multiple-path nuclear pumped laser

    NASA Technical Reports Server (NTRS)

    Hohl, F.; Deyoung, R. J. (Inventor)

    1981-01-01

    Large volumes of gas are excited by using internal high reflectance mirrors that are arranged so that the optical path crosses back and forth through the excited gaseous medium. By adjusting the external dielectric mirrors of the laser, the number of paths through the laser cavity can be varied. Output powers were obtained that are substantially higher than the output powers of previous nuclear laser systems.

  3. Parallel Rendering of Large Time-Varying Volume Data

    NASA Technical Reports Server (NTRS)

    Garbutt, Alexander E.

    2005-01-01

    Interactive visualization of large time-varying 3D volume datasets has been and still is a great challenge to the modern computational world. It stretches the limits of the memory capacity, the disk space, the network bandwidth and the CPU speed of a conventional computer. In this SURF project, we propose to develop a parallel volume rendering program on SGI's Prism, a cluster computer equipped with state-of-the-art graphics hardware. The proposed program combines both parallel computing and hardware rendering in order to achieve an interactive rendering rate. We use 3D texture mapping and a hardware shader to implement 3D volume rendering on each workstation. We use SGI's VisServer to enable remote rendering using Prism's graphics hardware. Finally, we will integrate this new program with ParVox, a parallel distributed visualization system developed at JPL. At the end of the project, we will demonstrate remote interactive visualization using this new hardware volume renderer on JPL's Prism system using a time-varying dataset from selected JPL applications.

  4. Response function and linearity for high energy γ-rays in large volume LaBr3:Ce detectors

    NASA Astrophysics Data System (ADS)

    Gosta, G.; Blasi, N.; Camera, F.; Million, B.; Giaz, A.; Wieland, O.; Rossi, F. M.; Utsunomiya, H.; Ari-izumi, T.; Takenaka, D.; Filipescu, D.; Gheorghe, I.

    2018-01-01

    The response function to high energy γ-rays of two large volume LaBr3:Ce crystals (3.5″ × 8″) and the linearity of the coupled PMTs were investigated at the NewSUBARU facility, where γ-rays in the energy range 6-38 MeV were produced and sent into the detectors. Monte Carlo simulations were performed to reproduce the experimental spectra. The photopeak and interaction efficiencies were also evaluated for both a collimated beam and an isotropic source.

  5. Optimal simulations of ultrasonic fields produced by large thermal therapy arrays using the angular spectrum approach

    PubMed Central

    Zeng, Xiaozheng; McGough, Robert J.

    2009-01-01

    The angular spectrum approach is evaluated for the simulation of focused ultrasound fields produced by large thermal therapy arrays. For an input pressure or normal particle velocity distribution in a plane, the angular spectrum approach rapidly computes the output pressure field in a three dimensional volume. To determine the optimal combination of simulation parameters for angular spectrum calculations, the effect of the size, location, and the numerical accuracy of the input plane on the computed output pressure is evaluated. Simulation results demonstrate that angular spectrum calculations performed with an input pressure plane are more accurate than calculations with an input velocity plane. Results also indicate that when the input pressure plane is slightly larger than the array aperture and is located approximately one wavelength from the array, angular spectrum simulations have very small numerical errors for two dimensional planar arrays. Furthermore, the root mean squared error from angular spectrum simulations asymptotically approaches a nonzero lower limit as the error in the input plane decreases. Overall, the angular spectrum approach is an accurate and robust method for thermal therapy simulations of large ultrasound phased arrays when the input pressure plane is computed with the fast nearfield method and an optimal combination of input parameters. PMID:19425640
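The core of the angular spectrum approach is an FFT of the input plane, multiplication by the plane-wave propagator exp(i·kz·z), and an inverse FFT. A minimal sketch; the grid and source parameters are illustrative, and the fast nearfield method used in the paper to compute the input plane is not reproduced here:

```python
import numpy as np

def angular_spectrum_propagate(p0, dx, wavelength, z):
    """Propagate a 2-D complex pressure plane p0 a distance z with the
    angular spectrum approach. Components with kx^2 + ky^2 > k^2 get an
    imaginary kz and decay exponentially (evanescent waves)."""
    k = 2 * np.pi / wavelength
    ny, nx = p0.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    kz = np.sqrt((k**2 - KX**2 - KY**2).astype(complex))
    spectrum = np.fft.fft2(p0)
    return np.fft.ifft2(spectrum * np.exp(1j * kz * z))

# Propagate a Gaussian-apodized source plane one wavelength forward
# (roughly 1 MHz in water: wavelength ~1.5 mm; values are illustrative)
wl, dx = 1.5e-3, 1.5e-4
x = (np.arange(128) - 64) * dx
X, Y = np.meshgrid(x, x)
p0 = np.exp(-(X**2 + Y**2) / (4e-3) ** 2).astype(complex)
p1 = angular_spectrum_propagate(p0, dx, wl, z=wl)
```

Since the propagator has unit magnitude for propagating components and decays for evanescent ones, the total power in the plane cannot increase with distance, a useful sanity check on any implementation.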

  6. Improvement of mathematical models for simulation of vehicle handling : volume 7 : technical manual for the general simulation

    DOT National Transportation Integrated Search

    1980-03-01

    This volume is the technical manual for the general simulation. Mathematical modelling of the vehicle and of the human driver is presented in detail, as are differences between the APL simulation and the current one. Information on model validation a...

  7. Parameter studies on the energy balance closure problem using large-eddy simulation

    NASA Astrophysics Data System (ADS)

    De Roo, Frederik; Banerjee, Tirtha; Mauder, Matthias

    2017-04-01

    The imbalance of the surface energy budget in eddy-covariance measurements is still an open problem. A possible cause is the presence of land surface heterogeneity. Heterogeneities of the boundary layer scale or larger are most effective in influencing the boundary layer turbulence, and large-eddy simulations have shown that secondary circulations within the boundary layer can affect the surface energy budget. However, the precise influence of the surface characteristics on the energy imbalance and its partitioning is still unknown. To investigate the influence of surface variables on all the components of the flux budget under convective conditions, we set up a systematic parameter study by means of large-eddy simulation. For the study we use a virtual control volume approach, and we focus on idealized heterogeneity by considering spatially variable surface fluxes. The surface fluxes vary locally in intensity, and these patches have different length scales. The main focus lies on heterogeneities at the kilometer scale and one decade smaller. For each simulation, virtual measurement towers are positioned at functionally different positions. We compare the locally homogeneous towers, located within land-use patches, with the more heterogeneous towers and find, among other things, that the flux divergence and the advection are strongly linearly related within each class. Furthermore, we seek correlators for the energy balance ratio and the energy residual in the simulations. Besides the expected correlation with measurable atmospheric quantities such as the friction velocity, boundary-layer depth, and temperature and moisture gradients, we have also found an unexpected correlation with the difference between sonic temperature and surface temperature. In additional simulations with a large number of virtual towers, we investigate higher order correlations, which can be linked to secondary circulations. In a companion
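The energy balance ratio and residual referred to above are simple diagnostics of eddy-covariance flux data. A sketch with illustrative midday values, not numbers from the cited study:

```python
def energy_balance(rn, g, h, le):
    """Energy balance ratio EBR = (H + LE) / (Rn - G) and residual
    Res = Rn - G - H - LE for eddy-covariance fluxes (all in W m^-2).
    EBR = 1 (Res = 0) corresponds to a closed surface energy budget."""
    available = rn - g          # available energy at the surface
    ebr = (h + le) / available
    residual = available - (h + le)
    return ebr, residual

# Illustrative convective-daytime values (W m^-2)
ebr, res = energy_balance(rn=500.0, g=50.0, h=150.0, le=210.0)
```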

  8. Characteristics of the mixing volume model with the interactions among spatially distributed particles for Lagrangian simulations of turbulent mixing

    NASA Astrophysics Data System (ADS)

    Watanabe, Tomoaki; Nagata, Koji

    2016-11-01

    The mixing volume model (MVM), a mixing model for molecular diffusion in Lagrangian simulations of turbulent mixing problems, is proposed based on the interactions among spatially distributed particles in a finite volume. The mixing timescale in the MVM is derived by comparison between the model and the subgrid-scale scalar variance equation. An a priori test of the MVM is conducted based on direct numerical simulations of planar jets. The MVM is shown to predict well the mean effects of molecular diffusion under various conditions. However, the predicted value of the molecular diffusion term is positively correlated with the exact value in the DNS only when the number of mixing particles is larger than two. Furthermore, the MVM is tested in a hybrid implicit large-eddy-simulation/Lagrangian-particle-simulation (ILES/LPS). The ILES/LPS with the present mixing model predicts well the decay of the scalar variance in planar jets. This work was supported by JSPS KAKENHI Nos. 25289030 and 16K18013. The numerical simulations presented in this manuscript were carried out on the high performance computing system (NEC SX-ACE) of the Japan Agency for Marine-Earth Science and Technology.
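The mixing-volume idea, each particle's scalar relaxing toward the mean over the particles sharing its mixing volume on a timescale τm, can be sketched as follows. This is a simplified illustration of the concept, not the exact published formulation or its derived mixing timescale:

```python
import numpy as np

def mvm_step(phi, positions, volume_radius, tau_m, dt):
    """One Lagrangian mixing step: each particle's scalar relaxes toward
    the mean over all particles within its mixing volume, with timescale
    tau_m. Simplified sketch of the mixing-volume concept."""
    phi_new = phi.copy()
    for i, x in enumerate(positions):
        dist = np.linalg.norm(positions - x, axis=1)
        members = dist <= volume_radius        # particles sharing the volume
        local_mean = phi[members].mean()
        phi_new[i] = phi[i] - dt / tau_m * (phi[i] - local_mean)
    return phi_new

rng = np.random.default_rng(2)
pos = rng.uniform(0.0, 1.0, size=(200, 3))     # particle positions
phi = rng.uniform(0.0, 1.0, size=200)          # initial scalar values
var0 = phi.var()
for _ in range(50):
    phi = mvm_step(phi, pos, volume_radius=0.3, tau_m=1.0, dt=0.1)
var1 = phi.var()
```

Because each update is a convex combination of existing particle values, the scalar stays within its initial bounds while the variance decays, mimicking molecular mixing.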

  9. Coupling of RF antennas to large volume helicon plasma

    NASA Astrophysics Data System (ADS)

    Chang, Lei; Hu, Xinyue; Gao, Lei; Chen, Wei; Wu, Xianming; Sun, Xinfeng; Hu, Ning; Huang, Chongxiang

    2018-04-01

    Large volume helicon plasma sources are of particular interest for large-scale semiconductor processing, high power plasma propulsion and, recently, plasma-material interaction under fusion conditions. This work is devoted to studying the coupling of four typical RF antennas to a helicon plasma of infinite length and 0.5 m diameter, and to exploring its frequency dependence in the range 13.56-70 MHz for coupling optimization. It is found that the loop antenna is more efficient than the half helix, Boswell and Nagoya III antennas for power absorption; a radially parabolic density profile outperforms a Gaussian density profile in terms of antenna coupling for low-density plasma, but the superiority reverses for high-density plasma. Increasing the driving frequency shifts power absorption toward the plasma edge, but the overall power absorption increases with frequency. Perpendicular stream plots of the wave magnetic field, wave electric field and perturbed current are also presented. This work can serve as an important reference for the experimental design of large volume helicon plasma sources with high RF power.

  10. Durham extremely large telescope adaptive optics simulation platform.

    PubMed

    Basden, Alastair; Butterley, Timothy; Myers, Richard; Wilson, Richard

    2007-03-01

    Adaptive optics systems are essential on all large telescopes for which image quality is important. These are complex systems with many design parameters requiring optimization before good performance can be achieved. The simulation of adaptive optics systems is therefore necessary to categorize the expected performance. We describe an adaptive optics simulation platform, developed at Durham University, which can be used to simulate adaptive optics systems on the largest proposed future extremely large telescopes as well as on current systems. This platform is modular, object oriented, and has the benefit of hardware application acceleration that can be used to improve the simulation performance, essential for ensuring that the run time of a given simulation is acceptable. The simulation platform described here can be highly parallelized using parallelization techniques suited for adaptive optics simulation, while still offering the user complete control while the simulation is running. The results from the simulation of a ground layer adaptive optics system are provided as an example to demonstrate the flexibility of this simulation platform.

  11. Novel regenerative large-volume immobilized enzyme reactor: preparation, characterization and application.

    PubMed

    Ruan, Guihua; Wei, Meiping; Chen, Zhengyi; Su, Rihui; Du, Fuyou; Zheng, Yanjie

    2014-09-15

    A novel large-volume immobilized enzyme reactor (IMER) on a small column was prepared with organic-inorganic hybrid silica particles and applied for fast (10 min) and oriented digestion of protein. First, a thin enzyme-support layer was formed at the bottom of the small column by polymerization of α-methacrylic acid and dimethacrylate. Amino-SiO2 particles were then prepared by the sol-gel method with tetraethoxysilane and 3-aminopropyltriethoxysilane. Subsequently, the amino-SiO2 particles were activated by glutaraldehyde (GA) for covalent immobilization of trypsin. The digestive capability of the large-volume IMER for proteins was investigated using bovine serum albumin (BSA) and cytochrome c (Cyt-c) as model proteins. Results showed that although the sequence coverage of BSA (20%) and Cyt-c (19%) was low, the large-volume IMER could produce peptides with stable specific sequences at 101-105, 156-160, 205-209, 212-218, 229-232, 257-263 and 473-451 of the amino acid sequence of BSA when digesting 1 mg/mL BSA. Eight common peptides were observed in each of the ten runs of the large-volume IMER. Besides, the IMER could be easily regenerated by reactivating with GA and cross-linking with trypsin after breaking the -C=N- bond with 0.01 M HCl. The sequence coverage of BSA from the regenerated IMER increased to 25%, compared with 17% for the non-regenerated IMER. Fourteen common peptides, accounting for 87.5% of the first use of the IMER, were produced by both the IMER and the regenerated IMER. When the IMER was applied to ginkgo albumin digestion, the sequence coverage of the two main proteins of ginkgo, ginnacin and legumin, was 56% and 55%, respectively. Above all, the fast and selective digestion properties of the large-volume IMER indicate that the regenerative IMER could tentatively be used for the production of potential bioactive peptides and the study of oriented protein digestion. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. Nesting large-eddy simulations within mesoscale simulations for wind energy applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lundquist, J K; Mirocha, J D; Chow, F K

    2008-09-08

    With increasing demand for more accurate atmospheric simulations for wind turbine micrositing, for operational wind power forecasting, and for more reliable turbine design, simulations of atmospheric flow with resolution of tens of meters or higher are required. These time-dependent large-eddy simulations (LES), which resolve individual atmospheric eddies on length scales smaller than turbine blades and account for complex terrain, are possible with a range of commercial and open-source software, including the Weather Research and Forecasting (WRF) model. In addition to 'local' sources of turbulence within an LES domain, changing weather conditions outside the domain can also affect the flow, suggesting that a mesoscale model should provide boundary conditions to the large-eddy simulations. Nesting a large-eddy simulation within a mesoscale model requires nuanced representations of turbulence. Our group has improved WRF's LES capability by implementing the Nonlinear Backscatter and Anisotropy (NBA) subfilter stress model following Kosovic (1997) and an explicit filtering and reconstruction technique to compute the Resolvable Subfilter-Scale (RSFS) stresses (following Chow et al., 2005). We have also implemented an immersed boundary method (IBM) in WRF to accommodate complex terrain. These new models improve WRF's LES capabilities over complex terrain and in stable atmospheric conditions. We demonstrate approaches to nesting LES within a mesoscale simulation for farms of wind turbines in hilly regions. Results are sensitive to the nesting method, indicating that care must be taken to provide appropriate boundary conditions, and to allow adequate spin-up of turbulence in the LES domain.

  13. Predicting viscous-range velocity gradient dynamics in large-eddy simulations of turbulence

    NASA Astrophysics Data System (ADS)

    Johnson, Perry; Meneveau, Charles

    2017-11-01

    The details of small-scale turbulence are not directly accessible in large-eddy simulations (LES), posing a modeling challenge because many important micro-physical processes depend strongly on the dynamics of turbulence in the viscous range. Here, we introduce a method for coupling existing stochastic models for the Lagrangian evolution of the velocity gradient tensor with LES to simulate unresolved dynamics. The proposed approach is implemented in LES of turbulent channel flow and detailed comparisons with DNS are carried out. An application to modeling the fate of deformable, small (sub-Kolmogorov) droplets at negligible Stokes number and low volume fraction with one-way coupling is carried out. These results illustrate the ability of the proposed model to predict the influence of small scale turbulence on droplet micro-physics in the context of LES. This research was made possible by a graduate Fellowship from the National Science Foundation and by a Grant from The Gulf of Mexico Research Initiative.

  14. Geometric measures of large biomolecules: surface, volume, and pockets.

    PubMed

    Mach, Paul; Koehl, Patrice

    2011-11-15

    Geometry plays a major role in our attempts to understand the activity of large molecules. For example, surface area and volume are used to quantify the interactions between these molecules and the water surrounding them in implicit solvent models. In addition, the detection of pockets serves as a starting point for predictive studies of biomolecule-ligand interactions. The alpha shape theory provides an exact and robust method for computing these geometric measures. Several implementations of this theory are currently available. We show, however, that these implementations fail on very large macromolecular systems. We show that these difficulties are not theoretical; rather, they are related to the architecture of current computers, which rely on the use of cache memory to speed up calculation. By rewriting the algorithms that implement the different steps of the alpha shape theory such that we enforce locality, we show that we can remedy these cache problems; the corresponding code, UnionBall, has an apparent O(n) behavior over a large range of values of n (up to tens of millions), where n is the number of atoms. As an example, it takes 136 seconds with UnionBall to compute the contribution of each atom to the surface area and volume of a viral capsid with more than five million atoms on a commodity PC. UnionBall includes functions for computing analytically the surface area and volume of the intersection of two, three and four spheres that are fully detailed in an appendix. UnionBall is available as open-source software. Copyright © 2011 Wiley Periodicals, Inc.
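    The analytic two-sphere case mentioned in the appendix reference reduces to standard spherical-cap geometry. A minimal sketch of that formula (an illustrative implementation, not the actual UnionBall code; the function name is an assumption):

    ```python
    import math

    def union_area_two_spheres(r1, r2, d):
        """Analytic surface area of the union of two spheres of radii r1, r2
        whose centres are a distance d apart (spherical-cap geometry)."""
        if d >= r1 + r2:                      # disjoint: both full areas count
            return 4.0 * math.pi * (r1 * r1 + r2 * r2)
        if d + min(r1, r2) <= max(r1, r2):    # one sphere fully inside the other
            return 4.0 * math.pi * max(r1, r2) ** 2
        # Distance from sphere-1 centre to the plane of the intersection circle.
        a1 = (d * d + r1 * r1 - r2 * r2) / (2.0 * d)
        a2 = d - a1
        # Each sphere loses a spherical cap of height h = r - a, area 2*pi*r*h.
        cap1 = 2.0 * math.pi * r1 * (r1 - a1)
        cap2 = 2.0 * math.pi * r2 * (r2 - a2)
        return 4.0 * math.pi * (r1 * r1 + r2 * r2) - cap1 - cap2

    # Two unit spheres with centres 1.0 apart: each loses a cap of area pi,
    # so the union area is 6*pi.
    print(union_area_two_spheres(1.0, 1.0, 1.0))  # ~18.8496
    ```

    The three- and four-sphere intersections handled by UnionBall follow the same inclusion-exclusion idea but require substantially more case analysis.
    
    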

  15. Geometric Measures of Large Biomolecules: Surface, Volume and Pockets

    PubMed Central

    Mach, Paul; Koehl, Patrice

    2011-01-01

    Geometry plays a major role in our attempts to understand the activity of large molecules. For example, surface area and volume are used to quantify the interactions between these molecules and the water surrounding them in implicit solvent models. In addition, the detection of pockets serves as a starting point for predictive studies of biomolecule-ligand interactions. The alpha shape theory provides an exact and robust method for computing these geometric measures. Several implementations of this theory are currently available. We show, however, that these implementations fail on very large macromolecular systems. We show that these difficulties are not theoretical; rather, they are related to the architecture of current computers, which rely on the use of cache memory to speed up calculation. By rewriting the algorithms that implement the different steps of the alpha shape theory such that we enforce locality, we show that we can remedy these cache problems; the corresponding code, UnionBall, has an apparent O(n) behavior over a large range of values of n (up to tens of millions), where n is the number of atoms. As an example, it takes 136 seconds with UnionBall to compute the contribution of each atom to the surface area and volume of a viral capsid with more than five million atoms on a commodity PC. UnionBall includes functions for computing the surface area and volume of the intersection of two, three and four spheres that are fully detailed in an appendix. UnionBall is available as open-source software. PMID:21823134

  16. Implementation of a roughness element to trip transition in large-eddy simulation

    NASA Astrophysics Data System (ADS)

    Boudet, J.; Monier, J.-F.; Gao, F.

    2015-02-01

    In aerodynamics, the laminar or turbulent regime of a boundary layer has a strong influence on friction or heat transfer. In practical applications, it is sometimes necessary to trip the transition to turbulent, and a common way is by use of a roughness element (e.g. a step) on the wall. The present paper is concerned with the numerical implementation of such a trip in large-eddy simulations. The study is carried out on a flat-plate boundary layer configuration, with Reynolds number Re_x = 1.3×10^6. First, this work brings the opportunity to introduce a practical methodology to assess convergence in large-eddy simulations. Second, concerning the trip implementation, a volume source term is proposed and is shown to yield a smoother and faster transition than a grid step. Moreover, it is easier to implement and more adaptable. Finally, two subgrid-scale models are tested: the WALE model of Nicoud and Ducros (Flow Turbul. Combust., vol. 62, 1999) and the shear-improved Smagorinsky model of Lévêque et al. (J. Fluid Mech., vol. 570, 2007). Both models allow transition, but the former appears to yield a faster transition and a better prediction of friction in the turbulent regime.

  17. Ray Casting of Large Multi-Resolution Volume Datasets

    NASA Astrophysics Data System (ADS)

    Lux, C.; Fröhlich, B.

    2009-04-01

    High quality volume visualization through ray casting on graphics processing units (GPU) has become an important approach for many application domains. We present a GPU-based, multi-resolution ray casting technique for the interactive visualization of massive volume data sets commonly found in the oil and gas industry. Large volume data sets are represented as a multi-resolution hierarchy based on an octree data structure. The original volume data is decomposed into small bricks of a fixed size acting as the leaf nodes of the octree. These nodes are the highest resolution of the volume. Coarser resolutions are represented through inner nodes of the hierarchy which are generated by down sampling eight neighboring nodes on a finer level. Due to limited memory resources of current desktop workstations and graphics hardware only a limited working set of bricks can be locally maintained for a frame to be displayed. This working set is chosen to represent the whole volume at different local resolution levels depending on the current viewer position, transfer function and distinct areas of interest. During runtime the working set of bricks is maintained in CPU- and GPU memory and is adaptively updated by asynchronously fetching data from external sources like hard drives or a network. The CPU memory hereby acts as a secondary level cache for these sources from which the GPU representation is updated. Our volume ray casting algorithm is based on a 3D texture-atlas in GPU memory. This texture-atlas contains the complete working set of bricks of the current multi-resolution representation of the volume. This enables the volume ray casting algorithm to access the whole working set of bricks through only a single 3D texture. For traversing rays through the volume, information about the locations and resolution levels of visited bricks are required for correct compositing computations. 
We encode this information into a small 3D index texture which represents the current octree.
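    The viewer-dependent choice of brick resolution described above can be sketched as a simple distance-based heuristic. The doubling rule and all names here are assumptions for illustration, not the paper's exact selection criterion:

    ```python
    import math

    def select_level(brick_center, viewer, max_level, base_dist=1.0):
        """Choose a resolution level for one octree brick: bricks within
        base_dist of the viewer get the finest level (0 = leaf node);
        each doubling of distance coarsens the brick by one level."""
        d = math.dist(brick_center, viewer)
        if d <= base_dist:
            return 0
        level = int(math.log2(d / base_dist)) + 1
        return min(level, max_level)  # never coarser than the octree root

    # Bricks near the viewer stay at leaf resolution; distant bricks coarsen.
    print(select_level((0.5, 0, 0), (0, 0, 0), max_level=4))  # 0
    print(select_level((8.0, 0, 0), (0, 0, 0), max_level=4))  # 4
    ```

    In a real renderer this decision would also weigh the transfer function and areas of interest, as the abstract notes, and the resulting working set would be streamed into the 3D texture atlas asynchronously.
    
    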

  18. Time simulation of flutter with large stiffness changes

    NASA Technical Reports Server (NTRS)

    Karpel, M.; Wieseman, C. D.

    1992-01-01

    Time simulation of flutter, involving large local structural changes, is formulated with a state-space model that is based on a relatively small number of generalized coordinates. Free-free vibration modes are first calculated for a nominal finite-element model with relatively large fictitious masses located at the area of structural changes. A low-frequency subset of these modes is then transformed into a set of structural modal coordinates with which the entire simulation is performed. These generalized coordinates and the associated oscillatory aerodynamic force coefficient matrices are used to construct an efficient time-domain, state-space model for a basic aeroelastic case. The time simulation can then be performed by simply changing the mass, stiffness, and damping coupling terms when structural changes occur. It is shown that the size of the aeroelastic model required for time simulation with large structural changes at a few a priori known locations is similar to that required for direct analysis of a single structural case. The method is applied to the simulation of an aeroelastic wind-tunnel model. The diverging oscillations are followed by the activation of a tip-ballast decoupling mechanism that stabilizes the system but may cause significant transient overshoots.

  19. Time simulation of flutter with large stiffness changes

    NASA Technical Reports Server (NTRS)

    Karpel, Mordechay; Wieseman, Carol D.

    1992-01-01

    Time simulation of flutter, involving large local structural changes, is formulated with a state-space model that is based on a relatively small number of generalized coordinates. Free-free vibration modes are first calculated for a nominal finite-element model with relatively large fictitious masses located at the area of structural changes. A low-frequency subset of these modes is then transformed into a set of structural modal coordinates with which the entire simulation is performed. These generalized coordinates and the associated oscillatory aerodynamic force coefficient matrices are used to construct an efficient time-domain, state-space model for a basic aeroelastic case. The time simulation can then be performed by simply changing the mass, stiffness, and damping coupling terms when structural changes occur. It is shown that the size of the aeroelastic model required for time simulation with large structural changes at a few apriori known locations is similar to that required for direct analysis of a single structural case. The method is applied to the simulation of an aeroelastic wind-tunnel model. The diverging oscillations are followed by the activation of a tip-ballast decoupling mechanism that stabilizes the system but may cause significant transient overshoots.
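    The core idea of the two records above, keeping the model fixed and changing only the stiffness coupling terms when the structural change occurs, can be illustrated on a one-degree-of-freedom toy problem. This sketch is not the papers' state-space aeroelastic model; it only demonstrates the switch-the-coefficient mechanism:

    ```python
    def simulate(k_schedule, m=1.0, c=0.0, dt=1e-3, steps=20000):
        """Integrate m*x'' + c*x' + k(t)*x = 0 with semi-implicit Euler.
        k_schedule maps the step index to the current stiffness, so a
        'structural change' is just a new coefficient mid-simulation."""
        x, v = 1.0, 0.0
        history = []
        for n in range(steps):
            k = k_schedule(n)
            v += dt * (-(c * v + k * x) / m)  # update velocity with old x
            x += dt * v                        # update position with new v
            history.append(x)
        return history

    def period_steps(x):
        """Steps between the first two downward zero crossings."""
        crossings = [i for i in range(1, len(x)) if x[i - 1] > 0 >= x[i]]
        return crossings[1] - crossings[0]

    # Quadrupling k halfway through roughly halves the oscillation period.
    x = simulate(lambda n: 1.0 if n < 10000 else 4.0)
    print(period_steps(x[:10000]), period_steps(x[10000:]))  # ratio ~ 2
    ```

    The papers' point is that the same trick works for a large generalized-coordinate model: only the coupling terms change, so the expensive modal and aerodynamic setup is done once.
    
    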

  20. The cavitation erosion of ultrasonic sonotrode during large-scale metallic casting: Experiment and simulation.

    PubMed

    Tian, Yang; Liu, Zhilin; Li, Xiaoqian; Zhang, Lihua; Li, Ruiqing; Jiang, Ripeng; Dong, Fang

    2018-05-01

    Ultrasonic sonotrodes play an essential role in transmitting power ultrasound into large-scale metallic castings. However, cavitation erosion considerably impairs the in-service performance of ultrasonic sonotrodes, leading to marginal microstructural refinement. In this work, the cavitation erosion behaviour of ultrasonic sonotrodes in large-scale castings was explored using industry-level experiments on Al alloy cylindrical ingots (i.e. 630 mm in diameter and 6000 mm in length). When introducing power ultrasound, severe cavitation erosion was found to reproducibly occur at some specific positions on the ultrasonic sonotrodes. However, no cavitation erosion was present on ultrasonic sonotrodes that were not driven by the electric generator. Vibratory examination showed that cavitation erosion depended on the vibration state of the ultrasonic sonotrodes. Moreover, a finite element (FE) model was developed to simulate the evolution and distribution of acoustic pressure in the 3-D solidification volume. FE simulation results confirmed that significant dynamic interaction between sonotrodes and melts only happened at some specific positions corresponding to severe cavitation erosion. This work will allow for developing more advanced ultrasonic sonotrodes with better cavitation erosion resistance, in particular for large-scale castings, from the perspectives of ultrasonic physics and mechanical design. Copyright © 2018 Elsevier B.V. All rights reserved.

  1. Large-eddy simulation of human-induced contaminant transport in room compartments.

    PubMed

    Choi, J-I; Edwards, J R

    2012-02-01

    A large-eddy simulation is used to investigate contaminant transport owing to complex human and door motions and vent-system activity in room compartments where a contaminated and clean room are connected by a vestibule. Human and door motions are simulated with an immersed boundary procedure. We demonstrate the details of contaminant transport owing to human- and door-motion-induced wake development during a short-duration event involving the movement of a person (or persons) from a contaminated room, through a vestibule, into a clean room. Parametric studies that capture the effects of human walking pattern, door operation, over-pressure level, and vestibule size are systematically conducted. A faster walking speed results in less mass transport from the contaminated room into the clean room. The net effect of increasing the volume of the vestibule is to reduce the contaminant transport. The results show that swinging-door motion is the dominant transport mechanism and that human-induced wake motion enhances compartment-to-compartment transport. The effect of human activity on contaminant transport may be important in design and operation of clean or isolation rooms in chemical or pharmaceutical industries and intensive care units for airborne infectious disease control in a hospital. The present simulations demonstrate details of contaminant transport in such indoor environments during human motion events and show that simulation-based sensitivity analysis can be utilized for the diagnosis of contaminant infiltration and for better environmental protection. © 2011 John Wiley & Sons A/S.

  2. GPU Accelerated DG-FDF Large Eddy Simulator

    NASA Astrophysics Data System (ADS)

    Inkarbekov, Medet; Aitzhan, Aidyn; Sammak, Shervin; Givi, Peyman; Kaltayev, Aidarkhan

    2017-11-01

    A GPU-accelerated simulator is developed and implemented for large eddy simulation (LES) of turbulent flows. The filtered density function (FDF) is utilized for modeling of the subgrid-scale quantities. The filtered transport equations are solved via a discontinuous Galerkin (DG) method and the FDF is simulated via a particle-based Lagrangian Monte Carlo (MC) method. It is demonstrated that the GPU simulations are of the order of 100 times faster than the CPU-based calculations. This brings LES of turbulent flows to a new level, facilitating efficient simulation of more complex problems. The work at Al-Faraby Kazakh National University is sponsored by MoES of RK under Grant 3298/GF-4.

  3. Simulation and optimization of volume holographic imaging systems in Zemax.

    PubMed

    Wissmann, Patrick; Oh, Se Baek; Barbastathis, George

    2008-05-12

    We present a new methodology for ray-tracing analysis of volume holographic imaging (VHI) systems. Using the k-sphere formulation, we apply geometrical relationships to describe the volumetric diffraction effects imposed on rays passing through a volume hologram. We explain the k-sphere formulation in conjunction with ray tracing process and describe its implementation in a Zemax UDS (User Defined Surface). We conclude with examples of simulation and optimization results and show proof of consistency and usefulness of the proposed model.

  4. High Fidelity Simulations of Large-Scale Wireless Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Onunkwo, Uzoma; Benz, Zachary

    The worldwide proliferation of wirelessly connected devices continues to accelerate. There are tens of billions of wireless links across the planet, with an additional explosion of new wireless usage anticipated as the Internet of Things develops. Wireless technologies not only provide convenience for mobile applications, but are also extremely cost-effective to deploy. Thus, this trend towards wireless connectivity will only continue, and Sandia must develop the necessary simulation technology to proactively analyze the associated emerging vulnerabilities. Wireless networks are marked by mobility and proximity-based connectivity. The de facto standard for exploratory studies of wireless networks is discrete event simulation (DES). However, the simulation of large-scale wireless networks is extremely difficult due to prohibitively large turnaround times. A path forward is to expedite simulations with parallel discrete event simulation (PDES) techniques. The mobility and distance-based connectivity associated with wireless simulations, however, typically cause PDES to fail to scale (e.g., in the OPNET and ns-3 simulators). We propose a PDES-based tool aimed at reducing the communication overhead between processors. The proposed solution will use light-weight processes to dynamically distribute computation workload while mitigating communication overhead associated with synchronizations. This work is vital to the analytics and validation capabilities of simulation and emulation at Sandia. We have years of experience in Sandia's simulation and emulation projects (e.g., MINIMEGA and FIREWHEEL). Sandia's current highly-regarded capabilities in large-scale emulations have focused on wired networks, where two assumptions prevent scalable wireless studies: (a) the connections between objects are mostly static and (b) the nodes have fixed locations.

  5. A large high vacuum, high pumping speed space simulation chamber for electric propulsion

    NASA Technical Reports Server (NTRS)

    Grisnik, Stanley P.; Parkes, James E.

    1994-01-01

    Testing high power electric propulsion devices poses unique requirements on space simulation facilities. Very high pumping speeds are required to maintain high vacuum levels while handling large volumes of exhaust products. These pumping speeds are significantly higher than those available in most existing vacuum facilities. There is also a requirement for relatively large vacuum chamber dimensions to minimize facility wall/thruster plume interactions and to accommodate far field plume diagnostic measurements. A 4.57 m (15 ft) diameter by 19.2 m (63 ft) long vacuum chamber at NASA Lewis Research Center is described. The chamber utilizes oil diffusion pumps in combination with cryopanels to achieve high vacuum pumping speeds at high vacuum levels. The facility is computer controlled for all phases of operation from start-up, through testing, to shutdown. The computer control system increases the utilization of the facility and reduces the manpower requirements needed for facility operations.
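    The link between exhaust gas load and the required pumping speed in such facilities follows the standard steady-state throughput balance p = Q/S. A minimal sketch with illustrative numbers, not figures from the facility description:

    ```python
    def steady_pressure(gas_load_pa_m3_s, pumping_speed_m3_s):
        """Steady-state chamber pressure from the throughput balance p = Q / S:
        the pressure settles where the pumps remove gas as fast as the
        thruster supplies it. Units: Pa*m^3/s load, m^3/s speed, Pa result."""
        return gas_load_pa_m3_s / pumping_speed_m3_s

    # An assumed thruster exhausting Q = 0.1 Pa*m^3/s into an assumed
    # effective pumping speed of S = 1000 m^3/s.
    print(steady_pressure(0.1, 1000.0))  # ~1e-4 Pa
    ```

    This is why high-power thrusters, with their large exhaust throughput, demand the unusually high pumping speeds the abstract describes.
    
    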

  6. Molecular dynamics simulations of large macromolecular complexes.

    PubMed

    Perilla, Juan R; Goh, Boon Chong; Cassidy, C Keith; Liu, Bo; Bernardi, Rafael C; Rudack, Till; Yu, Hang; Wu, Zhe; Schulten, Klaus

    2015-04-01

    Connecting dynamics to structural data from diverse experimental sources, molecular dynamics simulations permit the exploration of biological phenomena in unparalleled detail. Advances in simulations are moving the atomic resolution descriptions of biological systems into the million-to-billion atom regime, in which numerous cell functions reside. In this opinion, we review the progress, driven by large-scale molecular dynamics simulations, in the study of viruses, ribosomes, bioenergetic systems, and other diverse applications. These examples highlight the utility of molecular dynamics simulations in the critical task of relating atomic detail to the function of supramolecular complexes, a task that cannot be achieved by smaller-scale simulations or existing experimental approaches alone. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Indian LSSC (Large Space Simulation Chamber) facility

    NASA Technical Reports Server (NTRS)

    Brar, A. S.; Prasadarao, V. S.; Gambhir, R. D.; Chandramouli, M.

    1988-01-01

    The Indian Space Agency has undertaken a major project to acquire in-house capability for thermal and vacuum testing of large satellites. This Large Space Simulation Chamber (LSSC) facility will be located in Bangalore and is to be operational in 1989. The facility is capable of providing 4 meter diameter solar simulation with provision to expand to 4.5 meter diameter at a later date. With such provisions as controlled variations of shroud temperatures and availability of infrared equipment as alternative sources of thermal radiation, this facility will be amongst the finest anywhere. The major design concept and major aspects of the LSSC facility are presented here.

  8. Comparison of volume and surface area nonpolar solvation free energy terms for implicit solvent simulations.

    PubMed

    Lee, Michael S; Olson, Mark A

    2013-07-28

    Implicit solvent models for molecular dynamics simulations are often composed of polar and nonpolar terms. Typically, the nonpolar solvation free energy is approximated by the solvent-accessible-surface area times a constant factor. More sophisticated approaches incorporate an estimate of the attractive dispersion forces of the solvent and/or a solvent-accessible volume cavitation term. In this work, we confirm that a single volume-based nonpolar term most closely fits the dispersion and cavitation forces obtained from benchmark explicit solvent simulations of fixed protein conformations. Next, we incorporated the volume term into molecular dynamics simulations and find the term is not universally suitable for folding up small proteins. We surmise that while mean-field cavitation terms such as volume and SASA often tilt the energy landscape towards native-like folds, they also may sporadically introduce bottlenecks into the folding pathway that hinder the progression towards the native state.

  9. Comparison of volume and surface area nonpolar solvation free energy terms for implicit solvent simulations

    NASA Astrophysics Data System (ADS)

    Lee, Michael S.; Olson, Mark A.

    2013-07-01

    Implicit solvent models for molecular dynamics simulations are often composed of polar and nonpolar terms. Typically, the nonpolar solvation free energy is approximated by the solvent-accessible-surface area times a constant factor. More sophisticated approaches incorporate an estimate of the attractive dispersion forces of the solvent and/or a solvent-accessible volume cavitation term. In this work, we confirm that a single volume-based nonpolar term most closely fits the dispersion and cavitation forces obtained from benchmark explicit solvent simulations of fixed protein conformations. Next, we incorporated the volume term into molecular dynamics simulations and find the term is not universally suitable for folding up small proteins. We surmise that while mean-field cavitation terms such as volume and SASA often tilt the energy landscape towards native-like folds, they also may sporadically introduce bottlenecks into the folding pathway that hinder the progression towards the native state.
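    For a single spherical solute, the two nonpolar functional forms compared in the records above reduce to simple closed expressions. A minimal sketch, with illustrative coefficient values rather than the papers' fitted ones:

    ```python
    import math

    def nonpolar_energy(radius_nm, gamma=5.0, pressure=120.0, use_volume=True):
        """Mean-field nonpolar solvation term for a spherical solute.
        gamma (kJ/mol/nm^2) scales the SASA-only model; pressure
        (kJ/mol/nm^3) scales the volume-based model. Both constants are
        assumed illustrative values, not fitted parameters from the paper."""
        area = 4.0 * math.pi * radius_nm ** 2            # SASA of a sphere
        if not use_volume:
            return gamma * area                           # SASA * constant
        volume = (4.0 / 3.0) * math.pi * radius_nm ** 3
        return pressure * volume                          # volume cavitation term

    # The two models scale differently with solute size (r^2 vs r^3),
    # which is one reason a single fitted form cannot suit all systems.
    print(nonpolar_energy(1.0, use_volume=False), nonpolar_energy(1.0))
    ```

    The r^2 versus r^3 scaling difference hints at why the volume term fits fixed-conformation benchmark forces better yet can still misbehave along a folding pathway.
    
    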

  10. Exploring the large-scale structure of Taylor–Couette turbulence through Large-Eddy Simulations

    NASA Astrophysics Data System (ADS)

    Ostilla-Mónico, Rodolfo; Zhu, Xiaojue; Verzicco, Roberto

    2018-04-01

    Large eddy simulations (LES) of Taylor-Couette (TC) flow, the flow between two co-axial and independently rotating cylinders, are performed in an attempt to explore the large-scale axially-pinned structures seen in experiments and simulations. Both static and dynamic LES models are used. The Reynolds number is kept fixed at Re = 3.4×10^4, and the radius ratio η = r_i/r_o is set to η = 0.909, limiting the effects of curvature and resulting in frictional Reynolds numbers of around Re_τ ≈ 500. Four rotation ratios from Ro_t = -0.0909 to Ro_t = 0.3 are simulated. First, the LES of TC flow is benchmarked for different rotation ratios. Both the Smagorinsky model with a constant of c_s = 0.1 and the dynamic model are found to produce reasonable results for no mean rotation and cyclonic rotation, but deviations increase with increasing rotation. This is attributed to the increasingly anisotropic character of the fluctuations. Second, "over-damped" LES, i.e. LES with a large Smagorinsky constant, is performed and is shown to reproduce some features of the large-scale structures, even when the near-wall region is not adequately modeled. This shows the potential for using over-damped LES for fast explorations of the parameter space where large-scale structures are found.
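    The static Smagorinsky closure used above (c_s = 0.1) is a short formula: the eddy viscosity is ν_t = (c_s Δ)² |S| with |S| = sqrt(2 S_ij S_ij). A minimal sketch of that evaluation for one grid point (the function name and example values are illustrative):

    ```python
    import math

    def smagorinsky_nu_t(grad_u, delta, cs=0.1):
        """Smagorinsky eddy viscosity nu_t = (cs * delta)^2 * |S| at one point.
        grad_u is the 3x3 velocity-gradient tensor du_i/dx_j; S is its
        symmetric part (the strain-rate tensor), |S| = sqrt(2 S_ij S_ij)."""
        s = [[0.5 * (grad_u[i][j] + grad_u[j][i]) for j in range(3)]
             for i in range(3)]
        s_mag = math.sqrt(2.0 * sum(s[i][j] ** 2
                                    for i in range(3) for j in range(3)))
        return (cs * delta) ** 2 * s_mag

    # Pure shear du/dy = 2: S_12 = S_21 = 1, so |S| = sqrt(2*(1+1)) = 2.
    g = [[0.0, 2.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
    print(smagorinsky_nu_t(g, delta=0.5))  # ~0.005
    ```

    The "over-damped" LES in the abstract simply raises cs well above 0.1, increasing ν_t everywhere; the dynamic model instead computes cs locally from the resolved field.
    
    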

  11. Enhanced FIB-SEM systems for large-volume 3D imaging

    DOE PAGES

    Xu, C. Shan; Hayworth, Kenneth J.; Lu, Zhiyuan; ...

    2017-05-13

    Focused Ion Beam Scanning Electron Microscopy (FIB-SEM) can automatically generate 3D images with superior z-axis resolution, yielding data that needs minimal image registration and related post-processing. Obstacles blocking wider adoption of FIB-SEM include slow imaging speed and lack of long-term system stability, which caps the maximum possible acquisition volume. Here, we present techniques that accelerate image acquisition while greatly improving FIB-SEM reliability, allowing the system to operate for months and generate continuously imaged volumes > 10^6 µm^3. These volumes are large enough for connectomics, where the excellent z resolution can help in tracing of small neuronal processes and accelerate the tedious and time-consuming human proofreading effort. Even higher resolution can be achieved on smaller volumes. We present example data sets from mammalian neural tissue, Drosophila brain, and Chlamydomonas reinhardtii to illustrate the power of this novel high-resolution technique to address questions in both connectomics and cell biology.

  12. Enhanced FIB-SEM systems for large-volume 3D imaging.

    PubMed

    Xu, C Shan; Hayworth, Kenneth J; Lu, Zhiyuan; Grob, Patricia; Hassan, Ahmed M; García-Cerdán, José G; Niyogi, Krishna K; Nogales, Eva; Weinberg, Richard J; Hess, Harald F

    2017-05-13

    Focused Ion Beam Scanning Electron Microscopy (FIB-SEM) can automatically generate 3D images with superior z-axis resolution, yielding data that needs minimal image registration and related post-processing. Obstacles blocking wider adoption of FIB-SEM include slow imaging speed and lack of long-term system stability, which caps the maximum possible acquisition volume. Here, we present techniques that accelerate image acquisition while greatly improving FIB-SEM reliability, allowing the system to operate for months and generating continuously imaged volumes > 10^6 µm^3. These volumes are large enough for connectomics, where the excellent z resolution can help in tracing of small neuronal processes and accelerate the tedious and time-consuming human proofreading effort. Even higher resolution can be achieved on smaller volumes. We present example data sets from mammalian neural tissue, Drosophila brain, and Chlamydomonas reinhardtii to illustrate the power of this novel high-resolution technique to address questions in both connectomics and cell biology.
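    For scale, the quoted >10^6 µm^3 volumes translate into trillions of voxels. A back-of-envelope sketch (the 8 nm isotropic voxel size is an assumed value for illustration, not a figure from the paper):

    ```python
    def voxel_count(volume_um3, voxel_nm):
        """Number of isotropic voxels of edge voxel_nm (in nm) needed to
        cover an imaged volume given in cubic micrometers."""
        nm3 = volume_um3 * 1e9          # 1 um^3 = 1e9 nm^3
        return nm3 / voxel_nm ** 3

    # 1e6 um^3 at an assumed 8 nm voxel size: about 2 trillion voxels,
    # which is why months-long stable acquisition matters.
    print(f"{voxel_count(1e6, 8.0):.2e}")  # prints 1.95e+12
    ```

    Numbers of this magnitude explain why both imaging speed and long-term stability are the limiting obstacles the abstract identifies.
    
    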

  13. Low energy prompt gamma-ray tests of a large volume BGO detector.

    PubMed

    Naqvi, A A; Kalakada, Zameer; Al-Anezi, M S; Raashid, M; Khateeb-ur-Rehman; Maslehuddin, M; Garwan, M A

    2012-01-01

    Tests of a large volume Bismuth Germanate (BGO) detector were carried out to detect low energy prompt gamma-rays from boron- and cadmium-contaminated water samples using a portable neutron generator-based Prompt Gamma Neutron Activation Analysis (PGNAA) setup. In spite of strong interference between the sample- and detector-associated prompt gamma-rays, excellent agreement was observed between the experimental and calculated yields of the prompt gamma-rays, indicating successful application of the large volume BGO detector in PGNAA analysis of bulk samples using low energy prompt gamma-rays.
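The calculated yields compared against experiment above come from standard activation arithmetic. A minimal sketch of the first-order prompt gamma count-rate estimate; all numbers below are illustrative assumptions, not values from this study:

```python
# Hedged sketch: first-order estimate of a prompt gamma-ray count rate in a
# PGNAA measurement.  Illustrative numbers only, not values from the study.
def prompt_gamma_rate(flux, n_atoms, sigma_cm2, efficiency):
    """Count rate (1/s) = neutron flux (n/cm^2/s) * number of target atoms
    * capture cross-section (cm^2) * absolute detection efficiency."""
    return flux * n_atoms * sigma_cm2 * efficiency

# Assumed values: thermal flux 1e4 n/cm^2/s, 1e20 B-10 atoms in the sampled
# region, 3837 b (n,alpha+gamma) cross-section, 0.5% detection efficiency.
rate = prompt_gamma_rate(1e4, 1e20, 3837e-24, 5e-3)
```

With these assumed inputs the predicted rate is of order tens of counts per second, the regime where the detector-associated background interference discussed above matters.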

  14. Automation Applications in an Advanced Air Traffic Management System : Volume 5B. DELTA Simulation Model - Programmer's Guide.

    DOT National Transportation Integrated Search

    1974-08-01

    Volume 5 describes the DELTA Simulation Model. It includes all documentation of the DELTA (Determine Effective Levels of Task Automation) computer simulation developed by TRW for use in the Automation Applications Study. Volume 5A includes a user's m...

  15. Multi-hadron spectroscopy in a large physical volume

    NASA Astrophysics Data System (ADS)

    Bulava, John; Hörz, Ben; Morningstar, Colin

    2018-03-01

    We demonstrate the efficacy of the stochastic LapH method to treat all-to-all quark propagation on an Nf = 2 + 1 CLS ensemble with large linear spatial extent L = 5.5 fm, allowing us to obtain the benchmark elastic isovector p-wave pion-pion scattering amplitude to good precision already on a relatively small number of gauge configurations. These results hold promise for multi-hadron spectroscopy at close-to-physical pion mass with exponential finite-volume effects under control.

  16. Large Eddy Simulation of Cirrus Clouds

    NASA Technical Reports Server (NTRS)

    Wu, Ting; Cotton, William R.

    1999-01-01

    The Regional Atmospheric Modeling System (RAMS) with mesoscale interactive nested grids and a Large-Eddy Simulation (LES) version of RAMS, coupled to two-moment microphysics and a new two-stream radiative code, were used to investigate the dynamic, microphysical, and radiative aspects of the November 26, 1991 cirrus event. Wu (1998) describes the results of that research in full detail; the report is enclosed as Appendix 1. The mesoscale nested-grid simulation successfully reproduced the large-scale circulation as compared to the Mesoscale Analysis and Prediction System's (MAPS) analyses and other observations. Three cloud bands that match the three cloud lines identified in an observational study (Mace et al., 1995) are predicted on Grid #2 of the nested grids, even though the mesoscale simulation predicts a larger west-east cloud width than was observed. Large-eddy simulations were performed to study the dynamical, microphysical, and radiative processes in the 26 November 1991 FIRE II cirrus event. The LES model is based on RAMS version 3b, developed at Colorado State University. It includes a new radiation scheme developed by Harrington (1997) and a new subgrid-scale model developed by Kosovic (1996). The LES model simulated a single cloud layer for Case 1 and a two-layer cloud structure for Case 2. The simulations demonstrated that latent heat release can play a significant role in the formation and development of cirrus clouds. For the thin cirrus in Case 1, the latent heat release was insufficient for the cirrus clouds to become positively buoyant. However, in some special cases such as Case 2, positively buoyant cells can be embedded within the cirrus layers. These cells were so active that the rising updraft induced its own pressure perturbations that affected the cloud evolution.
Vertical profiles of the total radiative and latent heating rates indicated that for well developed, deep, and active cirrus clouds, radiative cooling and latent

  17. Experimental Simulations of Large-Scale Collisions

    NASA Technical Reports Server (NTRS)

    Housen, Kevin R.

    2002-01-01

    This report summarizes research on the effects of target porosity on the mechanics of impact cratering. Impact experiments conducted on a centrifuge provide direct simulations of large-scale cratering on porous asteroids. The experiments show that large craters in porous materials form mostly by compaction, with essentially no deposition of material into the ejecta blanket that is a signature of cratering in less-porous materials. The ratio of ejecta mass to crater mass is shown to decrease with increasing crater size or target porosity. These results are consistent with the observation that large closely-packed craters on asteroid Mathilde appear to have formed without degradation to earlier craters.

  18. Micro Blowing Simulations Using a Coupled Finite-Volume Lattice-Boltzmann LES Approach

    NASA Technical Reports Server (NTRS)

    Menon, S.; Feiz, H.

    1990-01-01

    Three-dimensional large-eddy simulations (LES) of single and multiple jet-in-cross-flow (JICF) configurations are conducted using the 19-bit Lattice Boltzmann Equation (LBE) method coupled with a conventional finite-volume (FV) scheme. In this coupled LBE-FV approach, the LBE-LES is employed to simulate the flow inside the jet nozzles while the FV-LES is used to simulate the crossflow. The key application of this technique is the study of the micro-blowing technique (MBT) for drag control, similar to recent experiments at NASA/GRC. It is necessary to resolve the flow inside the micro-blowing and suction holes with high resolution without being restricted by the FV time-step restriction. The coupled LBE-FV-LES approach achieves this objective in a computationally efficient manner. A single jet in crossflow is used for validation purposes, and the results are compared with experimental data and a full LBE-LES simulation. Good agreement with the data is obtained. Subsequently, MBT over a flat plate with a porosity of 25% is simulated using 9 jets in a compressible crossflow at a Mach number of 0.4. It is shown that MBT suppresses the near-wall vortices and reduces the skin friction by up to 50 percent, in good agreement with experimental data.
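The lattice Boltzmann update used inside the nozzles follows the usual stream-and-collide pattern. A minimal 1D sketch of that pattern (a two-velocity diffusion lattice, far simpler than the paper's 19-velocity D3Q19 model; relaxation parameter and grid are illustrative):

```python
import numpy as np

# Minimal stream-and-collide lattice Boltzmann sketch (D1Q2 diffusion
# lattice).  The paper's 19-bit (D3Q19) model follows the same two-phase
# update in 3D with a hydrodynamic equilibrium; parameters here are
# illustrative only.
nx, omega, steps = 100, 1.2, 200
rho0 = np.zeros(nx)
rho0[nx // 2] = 1.0                      # initial point pulse of density
f = np.stack([rho0 / 2, rho0 / 2])       # f[0]: velocity +1, f[1]: velocity -1

for _ in range(steps):
    rho = f.sum(axis=0)                  # macroscopic density (moment of f)
    feq = np.stack([rho / 2, rho / 2])   # equilibrium distribution
    f += omega * (feq - f)               # BGK collision (relaxation to feq)
    f[0] = np.roll(f[0], +1)             # streaming along +1
    f[1] = np.roll(f[1], -1)             # streaming along -1 (periodic)

rho = f.sum(axis=0)                      # the pulse has diffused outward
```

Mass is conserved exactly by both phases, which is why the method couples cleanly to a conservative finite-volume scheme at the interface.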

  19. Training Community Modeling and Simulation Business Plan, 2007 Edition. Volume 1: Review of Training Capabilities

    DTIC Science & Technology

    2009-02-01

    Training Community Modeling and Simulation Business Plan, 2007 Edition. Volume I: Review of Training Capabilities. J.D. Fletcher, IDA; Frederick E. Hartman, IDA; Robert Halayko, Addx Corp. ... Steering Committee for the training community led by the Office of the Under Secretary of Defense (Personnel and Readiness), OUSD(P&R). The task was

  20. Enhanced FIB-SEM systems for large-volume 3D imaging

    PubMed Central

    Xu, C Shan; Hayworth, Kenneth J; Lu, Zhiyuan; Grob, Patricia; Hassan, Ahmed M; García-Cerdán, José G; Niyogi, Krishna K; Nogales, Eva; Weinberg, Richard J; Hess, Harald F

    2017-01-01

    Focused Ion Beam Scanning Electron Microscopy (FIB-SEM) can automatically generate 3D images with superior z-axis resolution, yielding data that needs minimal image registration and related post-processing. Obstacles blocking wider adoption of FIB-SEM include slow imaging speed and lack of long-term system stability, which caps the maximum possible acquisition volume. Here, we present techniques that accelerate image acquisition while greatly improving FIB-SEM reliability, allowing the system to operate for months and generating continuously imaged volumes > 10⁶ µm³. These volumes are large enough for connectomics, where the excellent z resolution can help in tracing of small neuronal processes and accelerate the tedious and time-consuming human proofreading effort. Even higher resolution can be achieved on smaller volumes. We present example data sets from mammalian neural tissue, Drosophila brain, and Chlamydomonas reinhardtii to illustrate the power of this novel high-resolution technique to address questions in both connectomics and cell biology. DOI: http://dx.doi.org/10.7554/eLife.25916.001 PMID:28500755
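As a rough plausibility check on why a 10⁶ µm³ volume implies month-scale runs, one can divide the voxel count by a sustained acquisition rate. The voxel size and rate below are assumed for illustration, not taken from the paper:

```python
# Back-of-envelope check (assumed numbers): acquisition time for a
# 10^6 um^3 FIB-SEM volume at a given voxel size and sustained rate.
voxel_nm = 8.0                                  # assumed isotropic voxel edge, nm
volume_um3 = 1.0e6
voxels = volume_um3 * 1.0e9 / voxel_nm ** 3     # 1 um^3 = 1e9 nm^3
rate_vox_per_s = 2.0e5                          # assumed sustained voxel rate
days = voxels / rate_vox_per_s / 86400.0        # ~months of continuous imaging
```

Under these assumptions the run lasts on the order of a hundred days, consistent with the months-long stability requirement the abstract emphasizes.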

  1. Airport Landside. Volume II. The Airport Landside Simulation Model (ALSIM) Description and Users Guide.

    DOT National Transportation Integrated Search

    1982-06-01

    This volume provides a general description of the Airport Landside Simulation Model. A summary of simulated passenger and vehicular processing through the landside is presented. Program operating characteristics and assumptions are documented and a c...

  2. A self-sampling method to obtain large volumes of undiluted cervicovaginal secretions.

    PubMed

    Boskey, Elizabeth R; Moench, Thomas R; Hees, Paul S; Cone, Richard A

    2003-02-01

    Studies of vaginal physiology and pathophysiology sometimes require larger volumes of undiluted cervicovaginal secretions than can be obtained by current methods. A convenient method for self-sampling these secretions outside a clinical setting can facilitate such studies of reproductive health. The goal was to develop a vaginal self-sampling method for collecting large volumes of undiluted cervicovaginal secretions. A menstrual collection device (the Instead cup) was inserted briefly into the vagina to collect secretions that were then retrieved from the cup by centrifugation in a 50-ml conical tube. All 16 women asked to perform this procedure found it feasible and acceptable. Among 27 samples, an average of 0.5 g of secretions (range, 0.1-1.5 g) was collected. This is a rapid and convenient self-sampling method for obtaining relatively large volumes of undiluted cervicovaginal secretions. It should prove suitable for a wide range of assays, including those involving sexually transmitted diseases, microbicides, vaginal physiology, immunology, and pathophysiology.

  3. Geant4-DNA track-structure simulations for gold nanoparticles: The importance of electron discrete models in nanometer volumes.

    PubMed

    Sakata, Dousatsu; Kyriakou, Ioanna; Okada, Shogo; Tran, Hoang N; Lampe, Nathanael; Guatelli, Susanna; Bordage, Marie-Claude; Ivanchenko, Vladimir; Murakami, Koichi; Sasaki, Takashi; Emfietzoglou, Dimitris; Incerti, Sebastien

    2018-05-01

    Gold nanoparticles (GNPs) are known to enhance the absorbed dose in their vicinity following photon-based irradiation. To investigate the therapeutic effectiveness of GNPs, previous Monte Carlo simulation studies have explored GNP dose enhancement using mostly condensed-history models. However, in general, such models are suitable for macroscopic volumes and for electron energies above a few hundred electron volts. We have recently developed, for the Geant4-DNA extension of the Geant4 Monte Carlo simulation toolkit, discrete physics models for electron transport in gold which include the description of the full atomic de-excitation cascade. These models allow event-by-event simulation of electron tracks in gold down to 10 eV. The present work describes how such specialized physics models impact simulation-based studies on GNP radioenhancement in the context of x-ray radiotherapy. The new discrete physics models are compared to the Geant4 Penelope and Livermore condensed-history models, which are widely used for simulation-based NP radioenhancement studies. An ad hoc Geant4 simulation application has been developed to calculate the absorbed dose in liquid water around a GNP and its radioenhancement, caused by secondary particles emitted from the GNP itself, when irradiated with a monoenergetic electron beam. The effect of the new physics models is also quantified in the calculation of secondary particle spectra, when originating in the GNP and when exiting from it. The new physics models show backscattering coefficients similar to the existing Geant4 Livermore and Penelope models in large volumes for 100 keV incident electrons. However, in submicron sized volumes, only the discrete models describe the high backscattering that should still be present around GNPs at these length scales. Sizeable differences (mostly above a factor of 2) are also found in the radial distribution of absorbed dose and secondary particles between the new and the existing Geant4
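The core of event-by-event ("discrete") transport is that each interaction is sampled individually from the total cross-section, rather than aggregated as in condensed-history models. A minimal sketch of that sampling step; the gold number density is a standard value, but the low-energy total cross-section below is an illustrative assumption, not Geant4-DNA data:

```python
import math, random

# Sketch of the discrete transport step: sample the distance to the next
# interaction from an exponential with mean free path 1/(n*sigma).
# The cross-section value is an illustrative assumption.
def sample_free_path(n_density_cm3, sigma_total_cm2, rng=random.random):
    """Distance to next interaction, s = -ln(U) / (n * sigma), in cm."""
    mean_free_path = 1.0 / (n_density_cm3 * sigma_total_cm2)
    return -math.log(rng()) * mean_free_path

random.seed(0)
# Gold: n ~ 5.9e22 atoms/cm^3; assume sigma_total ~ 1e-16 cm^2 at low energy,
# giving a nanometer-scale mean free path -- why discrete models matter in
# nanometer volumes.
paths = [sample_free_path(5.9e22, 1e-16) for _ in range(20000)]
mean_path = sum(paths) / len(paths)
```

With a nanometer-scale mean free path, a GNP of tens of nanometers spans many individual collisions, which condensed-history steps cannot resolve.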

  4. Large Scale Simulation Platform for NODES Validation Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sotorrio, P.; Qin, Y.; Min, L.

    2017-04-27

    This report summarizes the Large Scale (LS) simulation platform created for the Eaton NODES project. The simulation environment consists of both a wholesale market simulator and a distribution simulator, and includes the CAISO wholesale market model and a PG&E footprint of 25-75 feeders to validate scalability under a scenario of 33% RPS in California with an additional 17% of DERs coming from distribution and customers. The simulator can generate hourly unit commitment, 5-minute economic dispatch, and 4-second AGC regulation signals. The simulator is also capable of simulating greater than 10k individual controllable devices. Simulated DERs include water heaters, EVs, residential and light commercial HVAC/buildings, and residential-level battery storage. Feeder-level voltage regulators and capacitor banks are also simulated for feeder-level real and reactive power management and Volt/VAR control.
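The three control-signal cadences named above differ by orders of magnitude, which drives the platform's data volume. A quick consistency check of the update counts over one simulated day:

```python
# Update counts over one simulated day for the three signal cadences the
# report names: hourly unit commitment, 5-minute economic dispatch, and
# 4-second AGC regulation.
seconds_per_day = 24 * 3600
uc_updates = seconds_per_day // 3600        # hourly unit commitment
ed_updates = seconds_per_day // (5 * 60)    # 5-minute economic dispatch
agc_updates = seconds_per_day // 4          # 4-second AGC signal
```

The 4-second AGC signal alone contributes over twenty thousand updates per simulated day, multiplied across >10k controllable devices.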

  5. Patient-specific coronary artery blood flow simulation using myocardial volume partitioning

    NASA Astrophysics Data System (ADS)

    Kim, Kyung Hwan; Kang, Dongwoo; Kang, Nahyup; Kim, Ji-Yeon; Lee, Hyong-Euk; Kim, James D. K.

    2013-03-01

    Using computational simulation, we can analyze cardiovascular disease in non-invasive and quantitative manners. More specifically, computational modeling and simulation technology has enabled us to analyze functional aspects such as blood flow, as well as anatomical aspects such as stenosis, from medical images without invasive measurements. Note that the simplest way to perform blood flow simulation is to apply patient-specific coronary anatomy with other average-valued properties; in this case, however, such conditions cannot fully reflect the accurate physiological properties of patients. To resolve this limitation, we present a new patient-specific coronary blood flow simulation method based on myocardial volume partitioning that considers artery/myocardium structural correspondence. We exploit the fact that blood supply is closely related to the mass of the myocardial segment corresponding to each artery. We therefore set up simulation conditions so as to consider as many patient-specific features as possible from the medical image: first, we segmented the coronary arteries and myocardium separately from cardiac CT; then the myocardium was partitioned into multiple regions based on the coronary vasculature. The myocardial mass and required blood mass for each artery were estimated by converting the myocardial volume fraction. Finally, the required blood mass was used as the boundary condition for each artery outlet, with given average aortic blood flow rate and pressure. To show the effectiveness of the proposed method, fractional flow reserve (FFR) computed by simulation from CT images was compared with invasive FFR measurements on real patient data, and an accuracy of 77% was obtained.
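The partitioning idea reduces to proportional allocation: each coronary outlet receives a share of total flow proportional to the myocardial volume of the region it supplies. A minimal sketch; the region volumes and total coronary flow below are illustrative assumptions, not patient data:

```python
# Sketch of volume-partitioned outlet boundary conditions: flow at each
# artery outlet is proportional to the myocardial volume of the region it
# supplies.  All numbers are illustrative assumptions.
def outlet_flows(region_volumes_ml, total_flow_ml_per_min):
    """Split total coronary flow across outlets by regional volume fraction."""
    total_volume = sum(region_volumes_ml)
    return [total_flow_ml_per_min * v / total_volume for v in region_volumes_ml]

# Three assumed myocardial regions (mL) fed by three artery outlets,
# with an assumed total coronary flow of 250 mL/min.
flows = outlet_flows([60.0, 45.0, 15.0], 250.0)
```

The fractions sum to the prescribed total by construction, so the outlet conditions remain consistent with the given average aortic flow rate.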

  6. Simulation of large-scale rule-based models

    PubMed Central

    Colvin, Joshua; Monine, Michael I.; Faeder, James R.; Hlavacek, William S.; Von Hoff, Daniel D.; Posner, Richard G.

    2009-01-01

    Motivation: Interactions of molecules, such as signaling proteins, with multiple binding sites and/or multiple sites of post-translational covalent modification can be modeled using reaction rules. Rules comprehensively, but implicitly, define the individual chemical species and reactions that molecular interactions can potentially generate. Although rules can be automatically processed to define a biochemical reaction network, the network implied by a set of rules is often too large to generate completely or to simulate using conventional procedures. To address this problem, we present DYNSTOC, a general-purpose tool for simulating rule-based models. Results: DYNSTOC implements a null-event algorithm for simulating chemical reactions in a homogeneous reaction compartment. The simulation method does not require that a reaction network be specified explicitly in advance, but rather takes advantage of the availability of the reaction rules in a rule-based specification of a network to determine if a randomly selected set of molecular components participates in a reaction during a time step. DYNSTOC reads reaction rules written in the BioNetGen language which is useful for modeling protein–protein interactions involved in signal transduction. The method of DYNSTOC is closely related to that of StochSim. DYNSTOC differs from StochSim by allowing for model specification in terms of BNGL, which extends the range of protein complexes that can be considered in a model. DYNSTOC enables the simulation of rule-based models that cannot be simulated by conventional methods. We demonstrate the ability of DYNSTOC to simulate models accounting for multisite phosphorylation and multivalent binding processes that are characterized by large numbers of reactions. Availability: DYNSTOC is free for non-commercial use. The C source code, supporting documentation and example input files are available at http://public.tgen.org/dynstoc/. Contact: dynstoc@tgen.org Supplementary information
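The null-event idea described above can be sketched in a few lines: molecules are picked at random each step, and a step counts as a "null event" (time still advances) unless the picked molecules can react and a probability test succeeds. A minimal sketch for a single bimolecular rule A + B -> C, not DYNSTOC's actual implementation; counts and the acceptance probability are illustrative:

```python
import random

# Minimal null-event (StochSim-style) simulation of A + B -> C.  Each step
# picks molecules uniformly from the population; the step is a null event
# unless an A-B pair is drawn AND a Bernoulli trial with probability
# p_react succeeds.  Parameters are illustrative.
random.seed(1)
counts = {"A": 400, "B": 400, "C": 0}
p_react = 0.5

def pick(counts):
    """Draw one molecule uniformly from the current population."""
    r = random.randrange(sum(counts.values()))
    for species, c in counts.items():
        if r < c:
            return species
        r -= c

for _ in range(50000):
    if counts["A"] == 0:
        break
    # Sketch simplification: picks are with replacement (a real
    # implementation would track individual molecules).
    s1, s2 = pick(counts), pick(counts)
    if {s1, s2} == {"A", "B"} and random.random() < p_react:
        counts["A"] -= 1
        counts["B"] -= 1
        counts["C"] += 1
```

Because most steps are null events, the method never needs the full reaction network enumerated in advance, which is the point of the rule-based approach.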

  7. Simulating potential water grabbing from large-scale land acquisitions in Africa

    NASA Astrophysics Data System (ADS)

    Li Johansson, Emma; Fader, Marianela; Seaquist, Jonathan W.; Nicholas, Kimberly A.

    2017-04-01

    The potentially high level of water appropriation in Africa by foreign companies might pose serious socio-environmental challenges, including overconsumption of water and conflicts and tensions over water resource allocation. We will present a study published recently in the Proceedings of the National Academy of Sciences of the USA, in which we simulated green and blue water demand and crop yields of large-scale land acquisitions in several African countries. Green water refers to precipitation stored in soils and consumed by plants through evapotranspiration, while blue water is extracted from rivers, lakes, aquifers, and dams. We simulated seven irrigation scenarios and compared these data with two baseline scenarios of staple crops representing previous water demand. The results indicate that green and blue water use is 39% and 76-86% greater, respectively, for crops grown on acquired land compared with the baseline of common staple crops, showing that land acquisitions substantially increase water demands. We also found that most land acquisitions are planted with crops such as sugarcane, jatropha, and eucalyptus that demand volumes of water > 9,000 m³·ha⁻¹. Even if the most efficient irrigation systems were implemented, 18% of the land acquisitions, totaling 91,000 ha, would still require more than 50% of their water from blue water sources.
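The scale implied by the figures above is easy to bound: multiplying the 91,000 ha by the > 9,000 m³·ha⁻¹ crop demand gives a lower bound on annual water demand for just that subset of acquisitions. A quick check of the arithmetic (the per-hectare figure is the abstract's stated lower bound, used here as a flat rate for illustration):

```python
# Lower-bound arithmetic on annual water demand for the 91,000 ha of
# acquisitions planted with crops demanding > 9,000 m^3/ha (the abstract's
# stated threshold, applied as a flat rate for illustration).
area_ha = 91_000
demand_m3_per_ha = 9_000
total_m3 = area_ha * demand_m3_per_ha   # lower bound on annual demand
total_km3 = total_m3 / 1e9              # 1 km^3 = 1e9 m^3
```

That is roughly 0.8 km³ per year, more than half of which would come from blue water sources even under efficient irrigation.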

  8. Large eddy simulations of compressible magnetohydrodynamic turbulence

    NASA Astrophysics Data System (ADS)

    Grete, Philipp

    2017-02-01

    Supersonic, magnetohydrodynamic (MHD) turbulence is thought to play an important role in many processes - especially in astrophysics, where detailed three-dimensional observations are scarce. Simulations can partially fill this gap and help to understand these processes. However, direct simulations with realistic parameters are often not feasible. Consequently, large eddy simulations (LES) have emerged as a viable alternative. In LES the overall complexity is reduced by simulating only large and intermediate scales directly. The smallest scales, usually referred to as subgrid-scales (SGS), are introduced to the simulation by means of an SGS model. Thus, the overall quality of an LES with respect to properly accounting for small-scale physics crucially depends on the quality of the SGS model. While there has been a lot of successful research on SGS models in the hydrodynamic regime for decades, SGS modeling in MHD is a rather recent topic, in particular, in the compressible regime. In this thesis, we derive and validate a new nonlinear MHD SGS model that explicitly takes compressibility effects into account. A filter is used to separate the large and intermediate scales, and it is thought to mimic finite resolution effects. In the derivation, we use a deconvolution approach on the filter kernel. With this approach, we are able to derive nonlinear closures for all SGS terms in MHD: the turbulent Reynolds and Maxwell stresses, and the turbulent electromotive force (EMF). We validate the new closures both a priori and a posteriori. In the a priori tests, we use high-resolution reference data of stationary, homogeneous, isotropic MHD turbulence to compare exact SGS quantities against predictions by the closures. The comparison includes, for example, correlations of turbulent fluxes, the average dissipative behavior, and alignment of SGS vectors such as the EMF. In order to quantify the performance of the new nonlinear closure, this comparison is conducted from the
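The filter that separates large and subgrid scales, and the subgrid stress an SGS closure must model, can be made concrete in one dimension. A minimal sketch with a periodic box filter and a synthetic velocity field (both illustrative choices, not the thesis's configuration):

```python
import numpy as np

# Sketch of LES filtering: a periodic box filter G defines the resolved
# field, and the exact SGS stress tau = bar(uu) - bar(u)bar(u) is what an
# SGS closure must approximate.  Field and filter width are illustrative.
def box_filter(u, width=9):
    half = width // 2
    return sum(np.roll(u, s) for s in range(-half, half + 1)) / width

rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)
u = np.sin(x) + 0.3 * rng.standard_normal(512)   # resolved wave + small scales
ubar = box_filter(u)                             # filtered (resolved) field
tau = box_filter(u * u) - ubar * ubar            # exact SGS stress
mean_tau = float(tau.mean())
```

The mean of tau is positive here because the filter removes variance carried by the small scales; in MHD the analogous residuals include the turbulent Maxwell stress and the electromotive force mentioned above.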

  9. Large-scale large eddy simulation of nuclear reactor flows: Issues and perspectives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merzari, Elia; Obabko, Aleks; Fischer, Paul

    Numerical simulation has been an intrinsic part of nuclear engineering research since its inception. In recent years a transition is occurring toward predictive, first-principle-based tools such as computational fluid dynamics. Even with the advent of petascale computing, however, such tools still have significant limitations. In the present work some of these issues, and in particular the presence of massive multiscale separation, are discussed, as well as some of the research conducted to mitigate them. Petascale simulations at high fidelity (large eddy simulation/direct numerical simulation) were conducted with the massively parallel spectral element code Nek5000 on a series of representative problems. These simulations shed light on the requirements of several types of simulation: (1) axial flow around fuel rods, with particular attention to wall effects; (2) natural convection in the primary vessel; and (3) flow in a rod bundle in the presence of spacing devices. Finally, the focus of the work presented here is on the lessons learned and the requirements to perform these simulations at exascale. Additional physical insight gained from these simulations is also emphasized.

  10. Large-scale large eddy simulation of nuclear reactor flows: Issues and perspectives

    DOE PAGES

    Merzari, Elia; Obabko, Aleks; Fischer, Paul; ...

    2016-11-03

    Numerical simulation has been an intrinsic part of nuclear engineering research since its inception. In recent years a transition is occurring toward predictive, first-principle-based tools such as computational fluid dynamics. Even with the advent of petascale computing, however, such tools still have significant limitations. In the present work some of these issues, and in particular the presence of massive multiscale separation, are discussed, as well as some of the research conducted to mitigate them. Petascale simulations at high fidelity (large eddy simulation/direct numerical simulation) were conducted with the massively parallel spectral element code Nek5000 on a series of representative problems. These simulations shed light on the requirements of several types of simulation: (1) axial flow around fuel rods, with particular attention to wall effects; (2) natural convection in the primary vessel; and (3) flow in a rod bundle in the presence of spacing devices. Finally, the focus of the work presented here is on the lessons learned and the requirements to perform these simulations at exascale. Additional physical insight gained from these simulations is also emphasized.

  11. Large Eddy Simulation of Crashback in Marine Propulsors

    NASA Astrophysics Data System (ADS)

    Jang, Hyunchul

    Crashback is an operating condition to quickly stop a propelled vehicle, where the propeller is rotated in the reverse direction to yield negative thrust. The crashback condition is dominated by the interaction of the free stream flow with the strong reverse flow. This interaction forms a highly unsteady vortex ring, which is a very prominent feature of crashback. Crashback causes highly unsteady loads and flow separation on the blade surface. The unsteady loads can cause propulsor blade damage, and also affect vehicle maneuverability. Crashback is therefore well known as one of the most challenging propeller states to analyze. This dissertation uses Large-Eddy Simulation (LES) to predict the highly unsteady flow field in crashback. A non-dissipative and robust finite volume method developed by Mahesh et al. (2004) for unstructured grids is applied to flow around marine propulsors. The LES equations are written in a rotating frame of reference. The objectives of this dissertation are: (1) to understand the flow physics of crashback in marine propulsors with and without a duct, (2) to develop a finite volume method for highly skewed meshes which usually occur in complex propulsor geometries, and (3) to develop a sliding interface method for simulations of rotor-stator propulsor on parallel platforms. LES is performed for an open propulsor in crashback and validated against experiments performed by Jessup et al. (2004). The LES results show good agreement with experiments. Effective pressures for thrust and side-force are introduced to more clearly understand the physical sources of thrust and side-force. Both thrust and side-force are seen to be mainly generated from the leading edge of the suction side of the propeller. This implies that thrust and side-force have the same source---the highly unsteady leading edge separation. Conditional averaging is performed to obtain quantitative information about the complex flow physics of high- or low-amplitude events. The

  12. Random forest classification of large volume structures for visuo-haptic rendering in CT images

    NASA Astrophysics Data System (ADS)

    Mastmeyer, Andre; Fortmeier, Dirk; Handels, Heinz

    2016-03-01

    For patient-specific voxel-based visuo-haptic rendering of CT scans of the liver area, the fully automatic segmentation of large volume structures such as skin, soft tissue, lungs, and intestine (risk structures) is important. Using a machine learning based approach, several existing segmentations from 10 segmented gold-standard patients are learned by random decision forests, individually and collectively. The core of this paper is feature selection and the application of the learned classifiers to a new patient data set. In a leave-some-out cross-validation, the obtained full volume segmentations are compared to the gold-standard segmentations of the untrained patients. The proposed classifiers use a multi-dimensional feature space to estimate the hidden truth, instead of relying on clinical standard threshold- and connectivity-based methods. The results of our efficient whole-body section classification are multi-label maps of the considered tissues. For visuo-haptic simulation, other small volume structures would have to be segmented additionally; we also take a look at these structures (liver vessels). In an experimental leave-some-out study of 10 patients, the proposed method performs much more efficiently than state-of-the-art methods. In two variants of the leave-some-out experiments we obtain best mean DICE ratios of 0.79, 0.97, 0.63, and 0.83 for skin, soft tissue, hard bone, and risk structures. Liver structures are segmented with DICE 0.93 for the liver, 0.43 for blood vessels, and 0.39 for bile vessels.
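The DICE ratios reported above measure overlap between a predicted label volume and the gold standard. A minimal implementation on toy binary masks (the masks are illustrative, not the paper's data):

```python
import numpy as np

# Minimal Dice similarity coefficient, the metric reported in the abstract:
# Dice(A, B) = 2|A ∩ B| / (|A| + |B|).  Toy masks for illustration.
def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

pred = np.zeros((10, 10), dtype=bool)
pred[2:8, 2:8] = True   # 36 predicted voxels
gold = np.zeros((10, 10), dtype=bool)
gold[4:9, 4:9] = True   # 25 gold-standard voxels; overlap is 16 voxels
score = dice(pred, gold)
```

For multi-label maps like those above, the score is computed per label and averaged across the left-out patients.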

  13. High Fidelity Simulations of Large-Scale Wireless Networks (Plus-Up)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Onunkwo, Uzoma

    Sandia has built a strong reputation in scalable network simulation and emulation for cyber security studies to protect our nation's critical information infrastructures. Georgia Tech has a preeminent reputation in academia for excellence in scalable discrete event simulations, with a strong emphasis on simulating cyber networks. Many of the experts in this field, such as Dr. Richard Fujimoto, Dr. George Riley, and Dr. Chris Carothers, have strong affiliations with Georgia Tech. The collaborative relationship that we intend to immediately pursue is in high fidelity simulations of practical large-scale wireless networks using the ns-3 simulator via Dr. George Riley. This project will have mutual benefits in bolstering both institutions' expertise and reputation in the field of scalable simulation for cyber-security studies. This project promises to address high fidelity simulations of large-scale wireless networks. This proposed collaboration is directly in line with Georgia Tech's goals for developing and expanding the Communications Systems Center, the Georgia Tech Broadband Institute, and the Georgia Tech Information Security Center, along with its yearly Emerging Cyber Threats Report. At Sandia, this work benefits the defense systems and assessment area, with promise for large-scale assessment of the cyber security needs and vulnerabilities of our nation's critical cyber infrastructures exposed to wireless communications.

  14. Simulations and experiments of aperiodic and multiplexed gratings in volume holographic imaging systems

    PubMed Central

    Luo, Yuan; Castro, Jose; Barton, Jennifer K.; Kostuk, Raymond K.; Barbastathis, George

    2010-01-01

    A new methodology describing the effects of aperiodic and multiplexed gratings in volume holographic imaging systems (VHIS) is presented. The aperiodic gratings are treated as an ensemble of localized planar gratings using coupled wave methods in conjunction with sequential and non-sequential ray-tracing techniques to accurately predict volumetric diffraction effects in VHIS. Our approach can be applied to aperiodic, multiplexed gratings and used to theoretically predict the performance of multiplexed volume holographic gratings within a volume hologram for VHIS. We present simulation and experimental results for the aperiodic and multiplexed imaging gratings formed in PQ-PMMA at 488 nm and probed with a spherical wave at 633 nm. Simulation results based on our approach, which can be easily implemented in ray-tracing packages such as Zemax®, are confirmed with experiments and demonstrate the consistency and usefulness of the proposed models. PMID:20940823
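Recording at 488 nm and probing at 633 nm shifts the Bragg-matched angle of each localized planar grating, which is why the probe geometry must be recomputed. A sketch of that calculation for an unslanted transmission grating via lambda = 2 n Λ sin(theta); the refractive index and recording half-angle below are illustrative assumptions, not the paper's values:

```python
import math

# Bragg-matching sketch for an unslanted volume transmission grating:
# lambda = 2 * n * Lambda * sin(theta), angles measured inside the medium.
# Index and recording half-angle are illustrative assumptions.
n = 1.49                                            # assumed PMMA-like index
lam_rec, lam_probe = 488e-9, 633e-9                 # recording / probe wavelengths
theta_rec = math.radians(15.0)                      # assumed recording half-angle

period = lam_rec / (2.0 * n * math.sin(theta_rec))  # grating period Lambda
theta_probe = math.asin(lam_probe / (2.0 * n * period))
theta_probe_deg = math.degrees(theta_probe)         # Bragg angle at 633 nm
```

The longer probe wavelength Bragg-matches at a larger angle than the recording beam, and each localized grating in the aperiodic ensemble gets its own such correction.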

  15. Temporal Large-Eddy Simulation

    NASA Technical Reports Server (NTRS)

    Pruett, C. D.; Thomas, B. C.

    2004-01-01

    In 1999, Stolz and Adams unveiled a subgrid-scale model for LES based upon approximately inverting (defiltering) the spatial grid-filter operator, termed the approximate deconvolution model (ADM). Subsequently, the utility and accuracy of the ADM were demonstrated in a posteriori analyses of flows as diverse as incompressible plane-channel flow and supersonic compression-ramp flow. In a prelude to the current paper, a parameterized temporal ADM (TADM) was developed and demonstrated in both a priori and a posteriori analyses for forced, viscous Burgers flow. The development of a time-filtered variant of the ADM was motivated primarily by the desire for a unifying theoretical and computational context to encompass direct numerical simulation (DNS), large-eddy simulation (LES), and Reynolds-averaged Navier-Stokes (RANS) simulation. The resultant methodology was termed temporal LES (TLES). To permit exploration of the parameter space, however, previous analyses of the TADM were restricted to Burgers flow, and it has remained to demonstrate the TADM and TLES methodology for three-dimensional flow. For several reasons, plane-channel flow presents an ideal test case for the TADM. Among these reasons, channel flow is anisotropic, yet it lends itself to highly efficient and accurate spectral numerical methods. Moreover, channel flow has been investigated extensively by DNS, and a highly accurate database of Moser et al. exists. In the present paper, we develop a fully anisotropic TADM model and demonstrate its utility in simulating incompressible plane-channel flow at nominal values of Re(sub tau) = 180 and Re(sub tau) = 590 by the TLES method. The TADM model is shown to perform nearly as well as the ADM at equivalent resolution, thereby establishing TLES as a viable alternative to LES. Moreover, as the current model is suboptimal is some respects, there is considerable room to improve TLES.
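The defiltering idea behind the ADM can be demonstrated concretely: the filter G is approximately inverted with the truncated van Cittert series Q_N = sum_{k=0}^{N} (I − G)^k, so u* = Q_N(ū) recovers part of the attenuated scales. A minimal 1D sketch; the periodic box filter, N = 5, and the test wave are illustrative choices, not the paper's configuration:

```python
import numpy as np

# Sketch of approximate deconvolution (van Cittert series), the mechanism
# behind the ADM/TADM.  Filter, truncation order, and test signal are
# illustrative choices.
def box_filter(u, width=5):
    half = width // 2
    return sum(np.roll(u, s) for s in range(-half, half + 1)) / width

def approximate_deconvolution(ubar, n_terms=5, filt=box_filter):
    """u* = sum_{k=0}^{n_terms} (I - G)^k applied to the filtered field."""
    approx = np.zeros_like(ubar)
    residual = ubar.copy()
    for _ in range(n_terms + 1):
        approx += residual            # accumulate (I - G)^k ubar
        residual -= filt(residual)    # residual <- (I - G) residual
    return approx

x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
u = np.sin(20.0 * x)                  # a marginally resolved wave
ubar = box_filter(u)                  # the filter attenuates it noticeably
ustar = approximate_deconvolution(ubar)
err_filtered = np.abs(ubar - u).max()
err_deconv = np.abs(ustar - u).max()
```

The deconvolved field is much closer to the unfiltered one than the filtered field is; the temporal ADM applies the same construction to a causal time filter instead of a spatial one.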

  16. Large-Eddy Simulation of Propeller Crashback

    NASA Astrophysics Data System (ADS)

    Kumar, Praveen; Mahesh, Krishnan

    2013-11-01

    Crashback is an operating condition to quickly stop a propelled vehicle, where the propeller is rotated in the reverse direction to yield negative thrust. The crashback condition is dominated by the interaction of free stream flow with strong reverse flow. Crashback causes highly unsteady loads and flow separation on blade surface. This study uses Large-Eddy Simulation to predict the highly unsteady flow field in propeller crashback. Results are shown for a stand-alone open propeller, hull-attached open propeller and a ducted propeller. The simulations are compared to experiment, and used to discuss the essential physics behind the unsteady loads. This work is supported by the Office of Naval Research.

  17. A device for controlled jet injection of large volumes of liquid.

    PubMed

    Mckeage, James W; Ruddy, Bryan P; Nielsen, Poul M F; Taberner, Andrew J

    2016-08-01

    We present a needle-free jet injection device controllably actuated by a voice coil and capable of injecting up to 1.3 mL. This device is used to perform jet injections of ~900 μL into porcine tissue. This is the first time that delivery of such a large volume has been reported using an electronically controllable device. The controllability of this device is demonstrated with a series of ejections in which the desired volume is ejected to within 1% during an injection at a predetermined jet velocity.

  18. A survey of electric and hybrid vehicles simulation programs. Volume 2: Questionnaire responses

    NASA Technical Reports Server (NTRS)

    Bevan, J.; Heimburger, D. A.; Metcalfe, M. A.

    1978-01-01

    The data received in a survey conducted within the United States to determine the extent of development and capabilities of automotive performance simulation programs suitable for electric and hybrid vehicle studies are presented. The survey was conducted for the Department of Energy by NASA's Jet Propulsion Laboratory. Volume 1 of this report summarizes and discusses the results contained in Volume 2.

  19. Wall Modeled Large Eddy Simulation of Airfoil Trailing Edge Noise

    NASA Astrophysics Data System (ADS)

    Kocheemoolayil, Joseph; Lele, Sanjiva

    2014-11-01

    Large eddy simulation (LES) of airfoil trailing edge noise has largely been restricted to low Reynolds numbers due to prohibitive computational cost. Wall modeled LES (WMLES) is a computationally cheaper alternative that makes full-scale Reynolds numbers relevant to large wind turbines accessible. A systematic investigation of trailing edge noise prediction using WMLES is conducted. Detailed comparisons are made with experimental data. The stress boundary condition from a wall model does not constrain the fluctuating velocity to vanish at the wall. This limitation has profound implications for trailing edge noise prediction. The simulation over-predicts the intensity of fluctuating wall pressure and far-field noise. An improved wall model formulation that minimizes the over-prediction of fluctuating wall pressure is proposed and carefully validated. The flow configurations chosen for the study are from the workshop on benchmark problems for airframe noise computations. The large eddy simulation database is used to examine the adequacy of scaling laws that quantify the dependence of trailing edge noise on Mach number, Reynolds number and angle of attack. Simplifying assumptions invoked in engineering approaches towards predicting trailing edge noise are critically evaluated. We gratefully acknowledge financial support from GE Global Research and thank Cascade Technologies Inc. for providing access to their massively-parallel large eddy simulation framework.

  20. Maestro: an orchestration framework for large-scale WSN simulations.

    PubMed

    Riliskis, Laurynas; Osipov, Evgeny

    2014-03-18

    Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation.
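    The closing benchmarking step reduces to a simple selection problem once per-instance throughput and price have been measured. A minimal sketch of that idea; the instance names, numbers, and the performance-per-cost scoring are hypothetical, not Maestro's actual API or metrics:

    ```python
    def best_instance(benchmarks):
        # benchmarks: {instance_name: (events_per_second, dollars_per_hour)}
        # Rank candidate VM types by simulated-events-per-dollar and
        # return the one with the best performance/cost balance.
        return max(benchmarks, key=lambda k: benchmarks[k][0] / benchmarks[k][1])
    ```

    In practice the throughput figure would come from running a representative simulation workload on each candidate instance type.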

  1. Efficient Voronoi volume estimation for DEM simulations of granular materials under confined conditions

    PubMed Central

    Frenning, Göran

    2015-01-01

    When the discrete element method (DEM) is used to simulate confined compression of granular materials, the need arises to estimate the void space surrounding each particle with Voronoi polyhedra. This entails recurring Voronoi tessellation with small changes in the geometry, resulting in a considerable computational overhead. To overcome this limitation, we propose a method with the following features:
    • A local determination of the polyhedron volume is used, which considerably simplifies implementation of the method.
    • A linear approximation of the polyhedron volume is utilised, with intermittent exact volume calculations when needed.
    • The method allows highly accurate volume estimates to be obtained at a considerably reduced computational cost. PMID:26150975
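    The intermittent-exact-update idea can be sketched in a few lines: advance a per-particle volume linearly at its current rate of change, and fall back to an exact tessellation only once the estimate has drifted beyond a tolerance. Everything here is illustrative; exact_volume is a toy stand-in for a real local Voronoi computation, and the refresh criterion is an assumption, not the paper's.

    ```python
    def exact_volume(t):
        # Toy stand-in for an exact local Voronoi polyhedron volume at time t.
        return 1.0 + 0.05 * t + 0.01 * t * t

    def track_volume(t_end, dt, tol=0.05):
        t = 0.0
        v_ref = exact_volume(0.0)
        rate = (exact_volume(dt) - v_ref) / dt   # finite-difference volume rate
        v, n_exact = v_ref, 1
        steps = round(t_end / dt)
        for k in range(1, steps + 1):
            t = k * dt
            v += rate * dt                       # cheap linear update each step
            if abs(v - v_ref) > tol * v_ref:     # estimate drifted too far:
                v = exact_volume(t)              # intermittent exact calculation
                rate = (exact_volume(t + dt) - v) / dt
                v_ref = v
                n_exact += 1
        return v, n_exact
    ```

    The exact tessellation then runs only a fraction of the time steps while the tracked volume stays close to the true value, which is the claimed cost saving.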

  2. Simulations of Large-Area Electron Beam Diodes

    NASA Astrophysics Data System (ADS)

    Swanekamp, S. B.; Friedman, M.; Ludeking, L.; Smithe, D.; Obenschain, S. P.

    1999-11-01

    Large area electron beam diodes are typically used to pump the amplifiers of KrF lasers. Simulations of large-area electron beam diodes using the particle-in-cell code MAGIC3D have shown the electron flow in the diode to be unstable. Since this instability can potentially produce a non-uniform current and energy distribution in the hibachi structure and lasing medium, it can be detrimental to laser efficiency. These results are similar to simulations performed using the ISIS code (M.E. Jones and V.A. Thomas, Proceedings of the 8th International Conference on High-Power Particle Beams, 665 (1990)). We have identified the instability as the so-called "transit-time" instability (C.K. Birdsall and W.B. Bridges, Electrodynamics of Diode Regions (Academic Press, New York, 1966); T.M. Antonsen, W.H. Miner, E. Ott, and A.T. Drobot, Phys. Fluids 27, 1257 (1984)) and have investigated the role of the applied magnetic field and diode geometry. Experiments are underway to characterize the instability on the Nike KrF laser system and will be compared to simulation. Some possible ways to mitigate the instability will also be presented.

  3. Stochastic locality and master-field simulations of very large lattices

    NASA Astrophysics Data System (ADS)

    Lüscher, Martin

    2018-03-01

    In lattice QCD and other field theories with a mass gap, the field variables in distant regions of a physically large lattice are only weakly correlated. Accurate stochastic estimates of the expectation values of local observables may therefore be obtained from a single representative field. Such master-field simulations potentially allow very large lattices to be simulated, but require various conceptual and technical issues to be addressed. In this talk, an introduction to the subject is provided and some encouraging results of master-field simulations of the SU(3) gauge theory are reported.

  4. Visual Data-Analytics of Large-Scale Parallel Discrete-Event Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ross, Caitlin; Carothers, Christopher D.; Mubarak, Misbah

    Parallel discrete-event simulation (PDES) is an important tool in the codesign of extreme-scale systems because PDES provides a cost-effective way to evaluate designs of high-performance computing systems. Optimistic synchronization algorithms for PDES, such as Time Warp, allow events to be processed without global synchronization among the processing elements. A rollback mechanism is provided when events are processed out of timestamp order. Although optimistic synchronization protocols enable the scalability of large-scale PDES, the performance of the simulations must be tuned to reduce the number of rollbacks and provide an improved simulation runtime. To enable efficient large-scale optimistic simulations, one has to gain insight into the factors that affect the rollback behavior and simulation performance. We developed a tool for ROSS model developers that gives them detailed metrics on the performance of their large-scale optimistic simulations at varying levels of simulation granularity. Model developers can use this information for parameter tuning of optimistic simulations in order to achieve better runtime and fewer rollbacks. In this work, we instrument the ROSS optimistic PDES framework to gather detailed statistics about the simulation engine. We have also developed an interactive visualization interface that uses the data collected by the ROSS instrumentation to understand the underlying behavior of the simulation engine. The interface connects real time to virtual time in the simulation and provides the ability to view simulation data at different granularities. We demonstrate the usefulness of our framework by performing a visual analysis of the dragonfly network topology model provided by the CODES simulation framework built on top of ROSS. The instrumentation needs to minimize overhead in order to accurately collect data about the simulation performance. To ensure that the instrumentation does not introduce unnecessary overhead, we perform
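    The rollback mechanism whose frequency the instrumentation measures can be sketched with a toy Time Warp logical process. This is illustrative only, not ROSS code: a straggler event with a timestamp in the simulated past undoes the affected events from saved state, processes the straggler, re-executes the undone events, and increments a rollback counter (the kind of metric used for tuning).

    ```python
    class LogicalProcess:
        def __init__(self):
            self.state = 0
            self.clock = 0.0
            self.history = []      # (timestamp, value, state_before) for undo
            self.rollbacks = 0     # instrumentation: number of rollbacks

        def _apply(self, ts, value):
            self.history.append((ts, value, self.state))
            self.state += value    # trivial event handler: accumulate value
            self.clock = ts

        def handle(self, ts, value):
            if ts < self.clock:                       # straggler event
                self.rollbacks += 1
                redo = []
                while self.history and self.history[-1][0] > ts:
                    old_ts, old_val, before = self.history.pop()
                    redo.append((old_ts, old_val))
                    self.state = before               # restore saved state
                self.clock = self.history[-1][0] if self.history else 0.0
                self._apply(ts, value)                # process the straggler
                for old_ts, old_val in reversed(redo):
                    self._apply(old_ts, old_val)      # re-execute undone events
            else:
                self._apply(ts, value)
    ```

    A real optimistic engine would also exchange anti-messages and compute global virtual time; the sketch keeps only the state-restoration core.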

  5. Choice of the replacement fluid during large volume plasma-exchange.

    PubMed

    Nydegger, U E

    1983-01-01

    The replacement fluid used during therapeutic large volume plasma-exchange can be seen as an important factor influencing the result of such treatment. The choice includes fluids such as electrolyte solutions, gelatin, hydroxyethyl-starch, albumin and fresh frozen plasma. By evaluating the pathophysiology of the underlying disease, it is possible to choose between merely replacing the removed volume by non-protein fluids or rather to introduce plasma protein components into the patient's circulation by substituting with purified or enriched proteins such as albumin, clotting factors, antithrombin III or fresh frozen plasma. This paper analyzes the rationale for the choice of the appropriate replacement fluid taking into account pathophysiologic, pharmacologic and logistic criteria.

  6. Three-Dimensional Cell Printing of Large-Volume Tissues: Application to Ear Regeneration.

    PubMed

    Lee, Jung-Seob; Kim, Byoung Soo; Seo, Donghwan; Park, Jeong Hun; Cho, Dong-Woo

    2017-03-01

    The three-dimensional (3D) printing of large-volume cells, printed in a clinically relevant size, is one of the most important challenges in the field of tissue engineering. However, few studies have reported the fabrication of large-volume cell-printed constructs (LCCs). To create LCCs, appropriate fabrication conditions should be established: Factors involved include fabrication time, residence time, and temperature control of the cell-laden hydrogel in the syringe to ensure high cell viability and functionality. The prolonged time required for 3D printing of LCCs can reduce cell viability and result in insufficient functionality of the construct, because the cells are exposed to a harsh environment during the printing process. In this regard, we present an advanced 3D cell-printing system composed of a clean air workstation, a humidifier, and a Peltier system, which provides a suitable printing environment for the production of LCCs with high cell viability. We confirmed that the advanced 3D cell-printing system was capable of providing enhanced printability of hydrogels and fabricating an ear-shaped LCC with high cell viability. In vivo results for the ear-shaped LCC also showed that printed chondrocytes proliferated sufficiently and differentiated into cartilage tissue. Thus, we conclude that the advanced 3D cell-printing system is a versatile tool to create cell-printed constructs for the generation of large-volume tissues.

  7. Evaluation of Bacillus oleronius as a Biological Indicator for Terminal Sterilization of Large-Volume Parenterals.

    PubMed

    Izumi, Masamitsu; Fujifuru, Masato; Okada, Aki; Takai, Katsuya; Takahashi, Kazuhiro; Udagawa, Takeshi; Miyake, Makoto; Naruyama, Shintaro; Tokuda, Hiroshi; Nishioka, Goro; Yoden, Hikaru; Aoki, Mitsuo

    2016-01-01

    In the production of large-volume parenterals in Japan, equipment and devices such as tanks, pipework, and filters used in production processes are exhaustively cleaned and sterilized, and the cleanliness of water for injection, drug materials, packaging materials, and manufacturing areas is well controlled. In this environment, the bioburden is relatively low and less heat resistant than microorganisms frequently used as biological indicators, such as Geobacillus stearothermophilus (ATCC 7953) and Bacillus subtilis 5230 (ATCC 35021). Consequently, the majority of large-volume parenteral solutions in Japan are manufactured under low-heat sterilization conditions of F0 <2 min, so that loss of clarity of solutions and formation of degradation products of constituents are minimized. Bacillus oleronius (ATCC 700005) is listed as a biological indicator in "Guidance on the Manufacture of Sterile Pharmaceutical Products Produced by Terminal Sterilization" (guidance in Japan, issued in 2012). In this study, we investigated whether B. oleronius is an appropriate biological indicator of the efficacy of low-heat, moist-heat sterilization of large-volume parenterals. Specifically, we investigated the spore-forming ability of this microorganism in various cultivation media and measured the D-values and z-values as parameters of heat resistance. The D-values and z-values changed depending on the constituents of large-volume parenteral products. Also, the spores from B. oleronius showed a moist-heat resistance that was similar to or greater than that of many of the spore-forming organisms isolated from Japanese parenteral manufacturing processes. Taken together, these results indicate that B. oleronius is suitable as a biological indicator for sterility assurance of large-volume parenteral solutions subjected to low-heat, moist-heat terminal sterilization. © PDA, Inc. 2016.
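    The heat-resistance parameters used here have standard textbook definitions: D is the time for a tenfold reduction in viable count at constant temperature, z is the temperature rise that lowers D tenfold, and F0 accumulates lethality relative to 121.1 °C with z = 10 °C. A minimal sketch of these formulas (the survivor data in the usage are invented for illustration):

    ```python
    import math

    def d_value(times, log10_counts):
        # Least-squares slope of log10(N) versus time; D is its negative reciprocal.
        n = len(times)
        mt = sum(times) / n
        mc = sum(log10_counts) / n
        slope = (sum((t - mt) * (c - mc) for t, c in zip(times, log10_counts))
                 / sum((t - mt) ** 2 for t in times))
        return -1.0 / slope

    def z_value(temp1, d1, temp2, d2):
        # Temperature increase that reduces the D-value tenfold.
        return (temp2 - temp1) / (math.log10(d1) - math.log10(d2))

    def f0(temps_c, dt_min, z=10.0, t_ref=121.1):
        # Equivalent minutes at 121.1 degrees C accumulated over a temperature profile.
        return sum(dt_min * 10 ** ((t - t_ref) / z) for t in temps_c)
    ```

    For example, survivor counts dropping one log per minute give D = 1 min, and D-values of 10 min at 110 °C and 1 min at 120 °C give z = 10 °C.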

  8. Production of large resonant plasma volumes in microwave electron cyclotron resonance ion sources

    DOEpatents

    Alton, G.D.

    1998-11-24

    Microwave injection methods are disclosed for enhancing the performance of existing electron cyclotron resonance (ECR) ion sources. The methods are based on the use of high-power diverse frequency microwaves, including variable-frequency, multiple-discrete-frequency, and broadband microwaves. The methods effect large resonant "volume" ECR regions in the ion sources. The creation of these large ECR plasma volumes permits coupling of more microwave power into the plasma, resulting in the heating of a much larger electron population to higher energies, the effect of which is to produce higher charge state distributions and much higher intensities within a particular charge state than possible in present ECR ion sources. 5 figs.

  9. Production of large resonant plasma volumes in microwave electron cyclotron resonance ion sources

    DOEpatents

    Alton, Gerald D.

    1998-01-01

    Microwave injection methods for enhancing the performance of existing electron cyclotron resonance (ECR) ion sources. The methods are based on the use of high-power diverse frequency microwaves, including variable-frequency, multiple-discrete-frequency, and broadband microwaves. The methods effect large resonant "volume" ECR regions in the ion sources. The creation of these large ECR plasma volumes permits coupling of more microwave power into the plasma, resulting in the heating of a much larger electron population to higher energies, the effect of which is to produce higher charge state distributions and much higher intensities within a particular charge state than possible in present ECR ion sources.

  10. Simulation requirements for the Large Deployable Reflector (LDR)

    NASA Technical Reports Server (NTRS)

    Soosaar, K.

    1984-01-01

    Simulation tools for the large deployable reflector (LDR) are discussed. These tools are often of the transfer-function variety. However, transfer functions are inadequate to represent time-varying systems with multiple control systems of overlapping bandwidths and multi-input, multi-output features. Frequency-domain approaches are useful design tools, but a full-up simulation is needed. Because a dedicated computer would be required for the high-frequency, multi-degree-of-freedom components encountered, non-real-time simulation is preferred. Large numerical analysis software programs are useful only to receive inputs and provide outputs to the next block, and should be kept out of the direct loop of the simulation. The following blocks make up the simulation. The thermal model block is a classical, non-steady-state heat transfer program. The quasistatic block deals with problems associated with rigid body control of reflector segments. The steady state block assembles data into equations of motion and dynamics. A differential raytrace is obtained to establish the change in wave aberrations. The observation scene is described. The focal plane module converts the photon intensity impinging on it into electron streams or into permanent film records.

  11. Large Eddy Simulation of Turbulent Combustion

    DTIC Science & Technology

    2006-03-15

    described accurately by the skeletal mechanism, usually the major reactants and products, NO and NO2 if we are interested in NOx formation, and any... LARGE EDDY SIMULATION OF TURBULENT COMBUSTION Principal Investigator: Heinz Pitsch Flow Physics and Computation Department of Mechanical Engineering ...are identified. These detailed mechanisms are reduced independently for various conditions and accuracy requirements. The skeletal mechanisms form

  12. Plasmoid formation in a laboratory and large-volume flux closure during simulations of Coaxial Helicity Injection in NSTX-U

    NASA Astrophysics Data System (ADS)

    Ebrahimi, Fatima

    2016-10-01

    In NSTX-U, transient Coaxial Helicity Injection (CHI) is the primary method for current generation without reliance on the solenoid. A CHI discharge is generated by driving current along open field lines (the injector flux) that connect the inner and outer divertor plates on NSTX/NSTX-U, and has generated over 200 kA of toroidal current on closed flux surfaces in NSTX. Extrapolation of the concept to larger devices requires an improved understanding of the physics of flux closure and the governing parameters that maximize the fraction of injected flux that is converted to useful closed flux. Here, through comprehensive resistive MHD NIMROD simulations conducted for the NSTX and NSTX-U geometries, two new major findings will be reported. First, formation of an elongated Sweet-Parker current sheet and a transition to plasmoid instability has for the first time been demonstrated by realistic global simulations. This is the first observation of plasmoid instability in a laboratory device configuration predicted by realistic MHD simulations and then supported by experimental camera images from NSTX. Second, simulations have now, for the first time, been able to show conversion of a large fraction of the injected open flux to closed flux in the NSTX-U geometry. Consistent with the experiment, simulations also show that reconnection could occur at every stage of the helicity injection phase. The influence of 3D effects, and the parameter range that supports these important new findings, is now being studied to understand the impact of the toroidal magnetic field and the electron temperature, both of which are projected to increase in larger ST devices. Work supported by DOE DE-SC0010565.

  13. Very Large Area/Volume Microwave ECR Plasma and Ion Source

    NASA Technical Reports Server (NTRS)

    Foster, John E. (Inventor); Patterson, Michael J. (Inventor)

    2009-01-01

    The present invention is an apparatus and method for producing very large area and large volume plasmas. The invention utilizes electron cyclotron resonances in conjunction with permanent magnets to produce dense, uniform plasmas for long life ion thruster applications or for plasma processing applications such as etching, deposition, ion milling and ion implantation. The large area source is at least five times larger than the 12-inch wafers being processed to date. Its rectangular shape makes it easier to accommodate to materials processing than sources that are circular in shape. The source itself represents the largest ECR ion source built to date. It is electrodeless and does not utilize electromagnets to generate the ECR magnetic circuit, nor does it make use of windows.

  14. Maestro: An Orchestration Framework for Large-Scale WSN Simulations

    PubMed Central

    Riliskis, Laurynas; Osipov, Evgeny

    2014-01-01

    Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation. PMID:24647123

  15. Effect of grid resolution on large eddy simulation of wall-bounded turbulence

    NASA Astrophysics Data System (ADS)

    Rezaeiravesh, S.; Liefvendahl, M.

    2018-05-01

    The effect of grid resolution on a large eddy simulation (LES) of a wall-bounded turbulent flow is investigated. A channel flow simulation campaign involving a systematic variation of the streamwise (Δx) and spanwise (Δz) grid resolution is used for this purpose. The main friction-velocity-based Reynolds number investigated is 300. Near the walls, the grid cell size is determined by the frictional scaling, Δx+ and Δz+, and strongly anisotropic cells, with the first Δy+ ≈ 1, thus aiming for wall-resolving LES. Results are compared to direct numerical simulations, and several quality measures are investigated, including the error in the predicted mean friction velocity and the error in cross-channel profiles of flow statistics. To reduce the total number of channel flow simulations, techniques from the framework of uncertainty quantification are employed. In particular, a generalized polynomial chaos expansion (gPCE) is used to create metamodels for the errors over the allowed parameter ranges. The differing behavior of the different quality measures is demonstrated and analyzed. It is shown that friction velocity and profiles of the velocity and Reynolds stress tensor are most sensitive to Δz+, while the error in the turbulent kinetic energy is mostly influenced by Δx+. Recommendations for grid resolution requirements are given, together with the quantification of the resulting predictive accuracy. The sensitivity of the results to the subgrid-scale (SGS) model and varying Reynolds number is also investigated. All simulations are carried out with the second-order accurate finite-volume solver OpenFOAM. It is shown that the choice of numerical scheme for the convective term significantly influences the error portraits. It is emphasized that the proposed methodology, involving the gPCE, can be applied to other modeling approaches, i.e., other numerical methods and the choice of SGS model.
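    The metamodel step can be illustrated with an ordinary polynomial response surface for an error measure over the resolution parameters (Δx+, Δz+), fitted by least squares. The paper uses a gPCE, so this is a simplified stand-in; the sample points and the synthetic error function below are invented for illustration.

    ```python
    import numpy as np

    def fit_surrogate(dx, dz, err):
        # Design matrix for a total-degree-2 polynomial in (dx+, dz+).
        A = np.column_stack([np.ones_like(dx), dx, dz, dx**2, dx * dz, dz**2])
        coef, *_ = np.linalg.lstsq(A, err, rcond=None)
        return coef

    def eval_surrogate(coef, dx, dz):
        # Evaluate the fitted response surface at a new resolution pair.
        return (coef[0] + coef[1] * dx + coef[2] * dz
                + coef[3] * dx**2 + coef[4] * dx * dz + coef[5] * dz**2)
    ```

    Once fitted from a modest number of LES runs, such a surrogate can be evaluated everywhere in the (Δx+, Δz+) range, which is what makes the sensitivity maps affordable.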

  16. The big fat LARS - a LArge Reservoir Simulator for hydrate formation and gas production

    NASA Astrophysics Data System (ADS)

    Beeskow-Strauch, Bettina; Spangenberg, Erik; Schicks, Judith M.; Giese, Ronny; Luzi-Helbing, Manja; Priegnitz, Mike; Klump, Jens; Thaler, Jan; Abendroth, Sven

    2013-04-01

    Simulating natural scenarios on lab scale is a common technique to gain insight into geological processes with moderate effort and expense. Due to the remote occurrence of gas hydrates, their behavior in sedimentary deposits is largely investigated with experimental set-ups in the laboratory. In the framework of the submarine gas hydrate research project (SUGAR), a large reservoir simulator (LARS) with an internal volume of 425 liters has been designed, built and tested. To our knowledge this is presently a worldwide unique set-up. Because of its large volume it is suitable for pilot-plant-scale tests on hydrate behavior in sediments. That includes not only the option of systematic tests on gas hydrate formation in various sedimentary settings but also the possibility to mimic scenarios for hydrate decomposition and subsequent natural gas extraction. Based on these experimental results, various numerical simulations can be realized. Here, we present the design and the experimental set-up of LARS. The prerequisites for the simulation of a natural gas hydrate reservoir are porous sediments, methane, water, low temperature and high pressure. The reservoir is supplied with methane-saturated and pre-cooled water. For its preparation an external gas-water mixing stage is available. The methane-loaded water is continuously flushed into LARS as finely dispersed fluid via bottom- and top-located spargers. LARS is equipped with a mantle cooling system and can be kept at a chosen set temperature. The temperature distribution is monitored at 14 locations throughout the reservoir by Pt100 sensors. Pressure is maintained using syringe pump stands. A tomographic system, consisting of a 375-electrode configuration, is attached to the mantle for monitoring the hydrate distribution throughout the entire reservoir volume. Two sets of tubular polydimethylsiloxane membranes are applied to determine the gas-water ratio within the reservoir using the effect of permeability

  17. Large-eddy simulation of sand dune morphodynamics

    NASA Astrophysics Data System (ADS)

    Khosronejad, Ali; Sotiropoulos, Fotis; St. Anthony Falls Laboratory, University of Minnesota Team

    2015-11-01

    Sand dunes are natural features that form under complex interaction between turbulent flow and bed morphodynamics. We employ a fully-coupled 3D numerical model (Khosronejad and Sotiropoulos, 2014, Journal of Fluid Mechanics, 753:150-216) to perform high-resolution large-eddy simulations of turbulence and bed morphodynamics in a laboratory-scale mobile-bed channel to investigate initiation, evolution and quasi-equilibrium of sand dunes (Venditti and Church, 2005, J. Geophysical Research, 110:F01009). We employ a curvilinear immersed boundary method along with convection-diffusion and bed-morphodynamics modules to simulate the suspended sediment and bed-load transport, respectively. The coupled simulations were carried out on a grid with more than 100 million grid nodes and simulated about 3 hours of physical time of dune evolution. The simulations provide the first complete description of sand dune formation and long-term evolution. The geometric characteristics of the simulated dunes are shown to be in excellent agreement with observed data obtained across a broad range of scales. This work was supported by NSF Grant EAR-0120914 (as part of the National Center for Earth-Surface Dynamics). Computational resources were provided by the University of Minnesota Supercomputing Institute.

  18. A general method for assessing the effects of uncertainty in individual-tree volume model predictions on large-area volume estimates with a subtropical forest illustration

    Treesearch

    Ronald E. McRoberts; Paolo Moser; Laio Zimermann Oliveira; Alexander C. Vibrans

    2015-01-01

    Forest inventory estimates of tree volume for large areas are typically calculated by adding the model predictions of volumes for individual trees at the plot level, calculating the mean over plots, and expressing the result on a per unit area basis. The uncertainty in the model predictions is generally ignored, with the result that the precision of the large-area...
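    The estimation pipeline described above (sum tree-level model predictions within each plot, average over plots, scale to a per-unit-area basis) is short enough to write down. A minimal sketch, with hypothetical units of cubic meters and hectares:

    ```python
    def large_area_volume_per_ha(plots, plot_area_ha):
        # plots: list of plots, each a list of predicted per-tree volumes (m^3).
        # plot_area_ha: area of each inventory plot in hectares.
        plot_totals = [sum(tree_volumes) for tree_volumes in plots]
        mean_per_plot = sum(plot_totals) / len(plot_totals)
        return mean_per_plot / plot_area_ha  # m^3 per hectare
    ```

    The paper's point is that this estimator treats the tree-level predictions as error-free; propagating the model's prediction uncertainty would widen the confidence interval of the per-hectare estimate.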

  19. Large eddy simulation in a rotary blood pump: Viscous shear stress computation and comparison with unsteady Reynolds-averaged Navier-Stokes simulation.

    PubMed

    Torner, Benjamin; Konnigk, Lucas; Hallier, Sebastian; Kumar, Jitendra; Witte, Matthias; Wurm, Frank-Hendrik

    2018-06-01

    Numerical flow analysis (computational fluid dynamics) in combination with the prediction of blood damage is an important procedure to investigate the hemocompatibility of a blood pump, since blood trauma due to shear stresses remains a problem in these devices. Today, the numerical damage prediction is conducted using unsteady Reynolds-averaged Navier-Stokes simulations. Investigations with large eddy simulations are rarely performed for blood pumps. Hence, the aim of the study is to examine the viscous shear stresses of a large eddy simulation in a blood pump and compare the results with an unsteady Reynolds-averaged Navier-Stokes simulation. The simulations were carried out at two operation points of a blood pump. The flow was simulated on a 100M element mesh for the large eddy simulation and a 20M element mesh for the unsteady Reynolds-averaged Navier-Stokes simulation. As a first step, the large eddy simulation was verified by analyzing internal dissipative losses within the pump. Then, the pump characteristics and the mean and turbulent viscous shear stresses were compared between the two simulation methods. The verification showed that the large eddy simulation is able to reproduce the significant portion of dissipative losses, which is a global indication that the equivalent viscous shear stresses are adequately resolved. The comparison with the unsteady Reynolds-averaged Navier-Stokes simulation revealed that the hydraulic parameters were in agreement, but differences for the shear stresses were found. The results show the potential of the large eddy simulation as a high-quality comparative case to check the suitability of a chosen Reynolds-averaged Navier-Stokes setup and turbulence model. Furthermore, the results suggest that large eddy simulations are superior to unsteady Reynolds-averaged Navier-Stokes simulations when instantaneous stresses are applied for the blood damage prediction.
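    The viscous shear stresses compared in such studies derive from the strain-rate tensor S_ij = (1/2)(∂u_i/∂x_j + ∂u_j/∂x_i); one common scalar measure in blood-damage work is σ = μ√(2 S_ij S_ij). A minimal sketch of that formula (illustrative, not the authors' code; the viscosity and gradient values in the usage are invented):

    ```python
    def viscous_stress_magnitude(grad_u, mu):
        # grad_u: 3x3 velocity-gradient tensor du_i/dx_j (nested lists, 1/s).
        # mu: dynamic viscosity (Pa*s).
        S = [[0.5 * (grad_u[i][j] + grad_u[j][i]) for j in range(3)]
             for i in range(3)]                      # strain-rate tensor
        ss = sum(S[i][j] * S[i][j] for i in range(3) for j in range(3))
        return mu * (2.0 * ss) ** 0.5                # sigma = mu * sqrt(2 S:S)
    ```

    For a pure shear du/dy = g, the formula reduces to σ = μg, e.g. μ = 3.5 mPa·s and g = 1000 1/s give 3.5 Pa, a convenient sanity check.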

  20. Climate Simulations with an Isentropic Finite Volume Dynamical Core

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Chih-Chieh; Rasch, Philip J.

    2012-04-15

    This paper discusses the impact of changing the vertical coordinate from a hybrid pressure to a hybrid-isentropic coordinate within the finite volume dynamical core of the Community Atmosphere Model (CAM). Results from a 20-year climate simulation using the new model coordinate configuration are compared to control simulations produced by the Eulerian spectral and FV dynamical cores of CAM which both use a pressure-based ({sigma}-p) coordinate. The same physical parameterization package is employed in all three dynamical cores. The isentropic modeling framework significantly alters the simulated climatology and has several desirable features. The revised model produces a better representation of heat transport processes in the atmosphere leading to much improved atmospheric temperatures. We show that the isentropic model is very effective in reducing the long standing cold temperature bias in the upper troposphere and lower stratosphere, a deficiency shared among most climate models. The warmer upper troposphere and stratosphere seen in the isentropic model reduces the global coverage of high clouds which is in better agreement with observations. The isentropic model also shows improvements in the simulated wintertime mean sea-level pressure field in the northern hemisphere.

  1. Planning, Designing, Building, and Moving a Large Volume Maternity Service to a New Labor and Birth Unit.

    PubMed

    Thompson, Heather; Legorreta, Kimberly; Maher, Mary Ann; Lavin, Melanie M

    Our health system recognized the need to update facility space and associated technology for the labor and birth unit within our large volume perinatal service to improve the patient experience, and enhance safety, quality of care, and staff satisfaction. When an organization decides to invest $30 million in a construction project such as a new labor and birth unit, many factors and considerations are involved. Financial support, planning, design, and construction phases of building a new unit are complex and therefore require strong interdisciplinary collaboration, leadership, and project management. The new labor and birth unit required nearly 3 years of planning, designing, and construction. Patient and family preferences were elicited through consumer focus groups. Multiple meetings with the administrative and nursing leadership teams, staff nurses, nurse midwives, and physicians were held to generate ideas for improvement in the new space. Involving frontline clinicians and childbearing women in the process was critical to success. The labor and birth unit moved to a new patient tower in a space that doubled in square footage and now spans three separate floors. In the 6 months prior to the move, many efforts were made in our community to share our new space. The marketing strategy was very detailed and creative with ongoing input from the nursing leadership team. The nursing staff was involved in every step along the way. It was critical to have champions as workflow teams emerged. We hosted simulation drills and tested scenarios with new workflows. Move day was rehearsed with representatives of all members of the perinatal team participating. These efforts ultimately resulted in a move time of ~5 hours. Birth volumes increased 7% within the first 6 months. After 3 years in our new space, our birth volumes have risen nearly 15% and are still growing. Key processes and roles responsible for a successful build, efficient and safe move

  2. Low-Dissipation Advection Schemes Designed for Large Eddy Simulations of Hypersonic Propulsion Systems

    NASA Technical Reports Server (NTRS)

    White, Jeffrey A.; Baurle, Robert A.; Fisher, Travis C.; Quinlan, Jesse R.; Black, William S.

    2012-01-01

    The 2nd-order upwind inviscid flux scheme implemented in the multi-block, structured grid, cell centered, finite volume, high-speed reacting flow code VULCAN has been modified to reduce numerical dissipation. This modification was motivated by the desire to improve the code's ability to perform large eddy simulations. The reduction in dissipation was accomplished through a hybridization of non-dissipative and dissipative discontinuity-capturing advection schemes that reduces numerical dissipation while maintaining the ability to capture shocks. A methodology for constructing hybrid-advection schemes that blends nondissipative fluxes consisting of linear combinations of divergence and product rule forms discretized using 4th-order symmetric operators, with dissipative, 3rd or 4th-order reconstruction based upwind flux schemes was developed and implemented. A series of benchmark problems with increasing spatial and fluid dynamical complexity were utilized to examine the ability of the candidate schemes to resolve and propagate structures typical of turbulent flow, their discontinuity capturing capability and their robustness. A realistic geometry typical of a high-speed propulsion system flowpath was computed using the most promising of the examined schemes and was compared with available experimental data to demonstrate simulation fidelity.

  3. Large eddy simulation of hydrodynamic cavitation

    NASA Astrophysics Data System (ADS)

    Bhatt, Mrugank; Mahesh, Krishnan

    2017-11-01

    Large eddy simulation is used to study sheet to cloud cavitation over a wedge. The mixture of water and water vapor is represented using a homogeneous mixture model. The compressible Navier-Stokes equations for the mixture quantities, along with a transport equation for the vapor mass fraction employing finite-rate mass transfer between the two phases, are solved using the numerical method of Gnanaskandan and Mahesh. The method is implemented on unstructured grids with parallel MPI capabilities. Flow over a wedge is simulated at Re = 200,000 and the performance of the homogeneous mixture model is analyzed in predicting different regimes of sheet to cloud cavitation, namely incipient, transitory, and periodic, as observed in the experimental investigation of Harish et al. This work is supported by the Office of Naval Research.
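
    The homogeneous mixture closure mentioned above can be sketched in a few lines: mixture properties follow directly from the vapor mass fraction. The formulas below are the standard mass-weighted mixture relations, and the nominal property values are assumptions for illustration, not the values or the exact formulation used by Gnanaskandan and Mahesh.

    ```python
    # Sketch of a homogeneous-mixture closure for cavitation modeling:
    # mixture density follows from the vapor mass fraction Y via
    # mass-weighted specific volumes. Property values are nominal
    # (water near 20 C), not those of the paper.

    def mixture_density(Y, rho_l=998.0, rho_v=0.017):
        """Density of a homogeneous liquid-vapor mixture:
        1/rho_m = Y/rho_v + (1 - Y)/rho_l."""
        return 1.0 / (Y / rho_v + (1.0 - Y) / rho_l)

    def vapor_volume_fraction(Y, rho_l=998.0, rho_v=0.017):
        """Vapor volume fraction alpha implied by mass fraction Y."""
        rho_m = mixture_density(Y, rho_l, rho_v)
        return Y * rho_m / rho_v
    ```

    Because the vapor is so much lighter than the liquid, even a tiny vapor mass fraction occupies most of the volume, which is why cavitating regions so strongly alter the local density and compressibility.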

  4. Simulation studies of vestibular macular afferent-discharge patterns using a new, quasi-3-D finite volume method

    NASA Technical Reports Server (NTRS)

    Ross, M. D.; Linton, S. W.; Parnas, B. R.

    2000-01-01

    A quasi-three-dimensional finite-volume numerical simulator was developed to study passive voltage spread in vestibular macular afferents. The method, borrowed from computational fluid dynamics, discretizes events transpiring in small volumes over time. The afferent simulated had three calyces with processes. The number of processes and synapses, and direction and timing of synapse activation, were varied. Simultaneous synapse activation resulted in shortest latency, while directional activation (proximal to distal and distal to proximal) yielded most regular discharges. Color-coded visualizations showed that the simulator discretized events and demonstrated that discharge produced a distal spread of voltage from the spike initiator into the ending. The simulations indicate that directional input, morphology, and timing of synapse activation can affect discharge properties, as must also distal spread of voltage from the spike initiator. The finite volume method has generality and can be applied to more complex neurons to explore discrete synaptic effects in four dimensions.

  5. Use of cryopumps on large space simulation systems

    NASA Technical Reports Server (NTRS)

    Mccrary, L. E.

    1980-01-01

    The need for clean, oil free space simulation systems has demanded the development of large, clean pumping systems. The assurance of optically dense liquid nitrogen baffles over diffusion pumps prevents backstreaming to a large extent, but does not preclude contamination from accidents or a control failure. Turbomolecular pumps or ion pumps achieve oil free systems but are only practical for relatively small chambers. Large cryopumps that achieve clean pumping of very large chambers were developed and checked out. These pumps can be used as the original pumping system or can be retrofitted as a replacement for existing diffusion pumps.

  6. Absorption and scattering coefficient dependence of laser-Doppler flowmetry models for large tissue volumes.

    PubMed

    Binzoni, T; Leung, T S; Rüfenacht, D; Delpy, D T

    2006-01-21

    Based on quasi-elastic scattering theory (and random walk on a lattice approach), a model of laser-Doppler flowmetry (LDF) has been derived which can be applied to measurements in large tissue volumes (e.g. when the interoptode distance is >30 mm). The model holds for a semi-infinite medium and takes into account the transport-corrected scattering coefficient and the absorption coefficient of the tissue, and the scattering coefficient of the red blood cells. The model holds for anisotropic scattering and for multiple scattering of the photons by the moving scatterers of finite size. In particular, it has also been possible to take into account the simultaneous presence of both Brownian and pure translational movements. An analytical and simplified version of the model has also been derived and its validity investigated, for the case of measurements in human skeletal muscle tissue. It is shown that at large optode spacing it is possible to use the simplified model, taking into account only a 'mean' light pathlength, to predict the blood flow related parameters. It is also demonstrated that the 'classical' blood volume parameter, derived from LDF instruments, may not represent the actual blood volume variations when the investigated tissue volume is large. The simplified model does not need knowledge of the tissue optical parameters and thus should allow the development of very simple and cost-effective LDF hardware.

  7. Traffic analysis toolbox volume XIII : integrated corridor management analysis, modeling, and simulation guide

    DOT National Transportation Integrated Search

    2017-02-01

    As part of the Federal Highway Administration (FHWA) Traffic Analysis Toolbox (Volume XIII), this guide was designed to help corridor stakeholders implement the Integrated Corridor Management (ICM) Analysis, Modeling, and Simulation (AMS) methodology...

  8. Traffic analysis toolbox volume XIII : integrated corridor management analysis, modeling, and simulation guide.

    DOT National Transportation Integrated Search

    2017-02-01

    As part of the Federal Highway Administration (FHWA) Traffic Analysis Toolbox (Volume XIII), this guide was designed to help corridor stakeholders implement the Integrated Corridor Management (ICM) Analysis, Modeling, and Simulation (AMS) methodology...

  9. Comparing selected morphological models of hydrated Nafion using large scale molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Knox, Craig K.

    Experimental elucidation of the nanoscale structure of hydrated Nafion, the most popular polymer electrolyte or proton exchange membrane (PEM) to date, and its influence on macroscopic proton conductance is particularly challenging. While it is generally agreed that hydrated Nafion is organized into distinct hydrophilic domains or clusters within a hydrophobic matrix, the geometry and length scale of these domains continues to be debated. For example, at least half a dozen different domain shapes, ranging from spheres to cylinders, have been proposed based on experimental SAXS and SANS studies. Since the characteristic length scale of these domains is believed to be ~2 to 5 nm, very large molecular dynamics (MD) simulations are needed to accurately probe the structure and morphology of these domains, especially their connectivity and percolation phenomena at varying water content. Using classical, all-atom MD with explicit hydronium ions, simulations have been performed to study the first-ever hydrated Nafion systems that are large enough (~2 million atoms in a ~30 nm cell) to directly observe several hydrophilic domains at the molecular level. These systems consisted of six of the most significant and relevant morphological models of Nafion to date: (1) the cluster-channel model of Gierke, (2) the parallel cylinder model of Schmidt-Rohr, (3) the local-order model of Dreyfus, (4) the lamellar model of Litt, (5) the rod network model of Kreuer, and (6) a 'random' model, commonly used in previous simulations, that does not directly assume any particular geometry, distribution, or morphology. These simulations revealed fast intercluster bridge formation and network percolation in all of the models. Sulfonates were found inside these bridges and played a significant role in percolation. Sulfonates also strongly aggregated around and inside clusters. Cluster surfaces were analyzed to study the hydrophilic-hydrophobic interface. Interfacial area and cluster volume

  10. Toward large eddy simulation of turbulent flow over an airfoil

    NASA Technical Reports Server (NTRS)

    Choi, Haecheon

    1993-01-01

    The flow field over an airfoil contains several distinct flow characteristics, e.g. laminar, transitional, turbulent boundary layer flow, flow separation, unstable free shear layers, and a wake. This diversity of flow regimes taxes the presently available Reynolds averaged turbulence models. Such models are generally tuned to predict a particular flow regime, and adjustments are necessary for the prediction of a different flow regime. Similar difficulties are likely to emerge when the large eddy simulation technique is applied with the widely used Smagorinsky model. This model has not been successful in correctly representing different turbulent flow fields with a single universal constant and has an incorrect near-wall behavior. Germano et al. (1991) and Ghosal, Lund & Moin have developed a new subgrid-scale model, the dynamic model, which is very promising in alleviating many of the persistent inadequacies of the Smagorinsky model: the model coefficient is computed dynamically as the calculation progresses rather than input a priori. The model has been remarkably successful in prediction of several turbulent and transitional flows. We plan to simulate turbulent flow over a '2D' airfoil using the large eddy simulation technique. Our primary objective is to assess the performance of the newly developed dynamic subgrid-scale model for computation of complex flows about aircraft components and to compare the results with those obtained using the Reynolds average approach and experiments. The present computation represents the first application of large eddy simulation to a flow of aeronautical interest and a key demonstration of the capabilities of the large eddy simulation technique.

  11. Large-eddy simulation of a spatially-evolving turbulent mixing layer

    NASA Astrophysics Data System (ADS)

    Capuano, Francesco; Catalano, Pietro; Mastellone, Andrea

    2015-11-01

    Large-eddy simulations of a spatially-evolving turbulent mixing layer have been performed. The flow conditions correspond to those of a documented experimental campaign (Delville, Appl. Sci. Res. 1994). The flow evolves downstream of a splitter plate separating two fully turbulent boundary layers, with Reθ = 2900 on the high-speed side and Reθ = 1200 on the low-speed side. The computational domain starts at the trailing edge of the splitter plate, where experimental mean velocity profiles are prescribed; white-noise perturbations are superimposed to mimic turbulent fluctuations. The fully compressible Navier-Stokes equations are solved by means of a finite-volume method implemented into the in-house code SPARK-LES. The results are mainly checked in terms of the streamwise evolution of the vorticity thickness and averaged velocity profiles. The combined effects of inflow perturbations, numerical accuracy and subgrid-scale model are discussed. It is found that excessive levels of dissipation may damp inlet fluctuations and delay the virtual origin of the turbulent mixing layer. On the other hand, non-dissipative, high-resolution computations provide results that are in much better agreement with experimental data.

  12. Chloride Content of Fluids Used for Large-Volume Resuscitation Is Associated With Reduced Survival.

    PubMed

    Sen, Ayan; Keener, Christopher M; Sileanu, Florentina E; Foldes, Emily; Clermont, Gilles; Murugan, Raghavan; Kellum, John A

    2017-02-01

    We sought to investigate if the chloride content of fluids used in resuscitation was associated with short- and long-term outcomes. We identified patients who received large-volume fluid resuscitation, defined as greater than 60 mL/kg over a 24-hour period. Chloride load was determined for each patient based on the chloride ion concentration of the fluids they received during large-volume fluid resuscitation multiplied by the volume of fluids. We compared the development of hyperchloremic acidosis, acute kidney injury, and survival among those with higher and lower chloride loads. Setting: university medical center. Patients: admitted to ICUs from 2000 to 2008. Interventions: none. Among 4,710 patients receiving large-volume fluid resuscitation, hyperchloremic acidosis was documented in 523 (11%). Crude rates of hyperchloremic acidosis, acute kidney injury, and hospital mortality all increased significantly as chloride load increased (p < 0.001). However, chloride load was no longer associated with hyperchloremic acidosis or acute kidney injury after controlling for total fluids, age, and baseline severity. Conversely, each 100 mEq increase in chloride load was associated with a 5.5% increase in the hazard of death even after controlling for total fluid volume, age, and severity (p = 0.0015) over 1 year. Chloride load is associated with significant adverse effects on survival out to 1 year even after controlling for total fluid load, age, and baseline severity of illness. However, the relationship between chloride load and development of hyperchloremic acidosis or acute kidney injury is less clear, and further research is needed to elucidate the mechanisms underlying the adverse effects of chloride load on survival.
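
    The chloride-load definition used in this study (chloride concentration of each fluid multiplied by its volume, summed over all fluids given) can be sketched directly. The concentrations below are standard published values for common resuscitation fluids, not figures taken from the study itself.

    ```python
    # Sketch of the chloride-load calculation described above: total load is
    # the sum over administered fluids of chloride concentration (mEq/L)
    # times volume (L). Concentrations are standard reference values.

    CHLORIDE_MEQ_PER_L = {
        "0.9% saline": 154,
        "lactated Ringer's": 109,
        "Plasma-Lyte": 98,
    }

    def chloride_load_meq(fluids):
        """Total chloride load in mEq from (fluid_name, liters) pairs."""
        return sum(CHLORIDE_MEQ_PER_L[name] * liters for name, liters in fluids)

    # Example: 4 L of normal saline plus 2 L of lactated Ringer's
    load = chloride_load_meq([("0.9% saline", 4.0), ("lactated Ringer's", 2.0)])
    ```

    For the example above the load is 834 mEq; per the abstract, each additional 100 mEq of chloride load was associated with a 5.5% increase in the hazard of death.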

  13. Large-eddy simulation of turbulent cavitating flow in a micro channel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Egerer, Christian P., E-mail: christian.egerer@aer.mw.tum.de; Hickel, Stefan; Schmidt, Steffen J.

    2014-08-15

    Large-eddy simulations (LES) of cavitating flow of a Diesel-fuel-like fluid in a generic throttle geometry are presented. Two-phase regions are modeled by a parameter-free thermodynamic equilibrium mixture model, and compressibility of the liquid and the liquid-vapor mixture is taken into account. The Adaptive Local Deconvolution Method (ALDM), adapted for cavitating flows, is employed for discretizing the convective terms of the Navier-Stokes equations for the homogeneous mixture. ALDM is a finite-volume-based implicit LES approach that merges physically motivated turbulence modeling and numerical discretization. Validation of the numerical method is performed for a cavitating turbulent mixing layer. Comparisons with experimental data of the throttle flow at two different operating conditions are presented. The LES with the employed cavitation modeling predicts relevant flow and cavitation features accurately within the uncertainty range of the experiment. The turbulence structure of the flow is further analyzed with an emphasis on the interaction between cavitation and coherent motion, and on the statistically averaged-flow evolution.

  14. Idealised large-eddy-simulation of thermally driven flows over an isolated mountain range with multiple ridges

    NASA Astrophysics Data System (ADS)

    Lang, Moritz N.; Gohm, Alexander; Wagner, Johannes S.; Leukauf, Daniel; Posch, Christian

    2014-05-01

    Two-dimensional idealised large-eddy simulations are performed using the WRF model to investigate thermally driven flows during the daytime over complex terrain. Both the upslope flows and the temporal evolution of the boundary layer structure are studied with a constant surface heat flux forcing of 150 W m⁻². In order to distinguish between different heating processes, the flow is Reynolds-decomposed into its mean and turbulent parts. The heating processes associated with the mean flow are a cooling through cold-air advection along the slopes and subsidence warming within the valleys. The turbulent component causes bottom-up heating near the ground leading to a convective boundary layer (CBL) inside the valleys. Overshooting potentially colder thermals cool the stably stratified valley atmosphere above the CBL. Compared to recent investigations (Schmidli 2013, J. Atmos. Sci., Vol. 70, No. 12: pp. 4041-4066; Wagner et al. 2014, manuscript submitted to Mon. Wea. Rev.), which used an idealised topography with two parallel mountain crests separated by a straight valley, this project focuses on multiple, periodic ridges and valleys within an isolated mountain range. The impact of different numbers of ridges on the flow structure is compared with the sinusoidal envelope-topography. The present simulations show an interaction between the smaller-scale upslope winds within the different valleys and the large-scale flow of the superimposed mountain-plain wind circulation. Despite a smaller boundary layer air volume in the envelope case compared to the multiple ridges case, the volume-averaged heating rates are comparable. The reason is a stronger advection-induced cooling along the slopes and a weaker warming through subsidence at the envelope-topography compared to the mountain range with multiple ridges.
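
    The Reynolds decomposition used above to separate mean from turbulent heating can be illustrated with synthetic data: a field is split into its mean and a fluctuation, and the turbulent flux is the mean of the product of fluctuations. The numbers below are made up for illustration only.

    ```python
    import numpy as np

    # Minimal sketch of a Reynolds decomposition: split fields into mean and
    # fluctuating parts and form the turbulent heat flux <w' theta'>.
    # The samples are synthetic, constructed so that cov(w, theta) = 0.5.

    rng = np.random.default_rng(0)
    n = 100_000
    w = rng.normal(0.0, 1.0, n)                         # vertical velocity
    theta = 300.0 + 0.5 * w + rng.normal(0.0, 0.2, n)   # correlated temperature

    w_prime = w - w.mean()                  # fluctuating (turbulent) parts
    theta_prime = theta - theta.mean()

    turb_heat_flux = np.mean(w_prime * theta_prime)     # <w' theta'>
    ```

    By construction the fluctuations average to zero, so all of the correlated transport shows up in the `turb_heat_flux` term, which is the quantity the turbulent component of the heating budget is built from.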

  15. Large Eddy Simulations and Turbulence Modeling for Film Cooling

    NASA Technical Reports Server (NTRS)

    Acharya, Sumanta

    1999-01-01

    The objective of the research is to perform Direct Numerical Simulations (DNS) and Large Eddy Simulations (LES) for the film cooling process, and to evaluate and improve advanced forms of the two equation turbulence models for turbine blade surface flow analysis. The DNS/LES were used to resolve the large eddies within the flow field near the coolant jet location. The work involved code development and applications of the codes developed to the film cooling problems. Five different codes were developed and utilized to perform this research. This report presents a summary of the development of the codes and their applications to analyze the turbulence properties at locations near coolant injection holes.

  16. Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware

    PubMed Central

    Knight, James C.; Tully, Philip J.; Kaplan, Bernhard A.; Lansner, Anders; Furber, Steve B.

    2016-01-01

    SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 2.0 × 10⁴ neurons and 5.1 × 10⁷ plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, if it is to match the run-time of our SpiNNaker simulation, the supercomputer system uses approximately 45× more power. This suggests that cheaper, more power efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models. PMID:27092061
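
    The event-driven principle behind the implementation described above can be illustrated in isolation: rather than updating an exponentially decaying synaptic trace on every time-step, the closed-form solution is applied lazily, only when a spike event arrives. This sketch shows the generic idea; it is not the actual BCPNN state variables or the SpiNNaker code.

    ```python
    import math

    # Event-driven update of an exponentially decaying trace: between spikes
    # the trace obeys z(t) = z(t0) * exp(-(t - t0)/tau), so its value can be
    # computed analytically at any time instead of being stepped every tick.

    class EventDrivenTrace:
        def __init__(self, tau):
            self.tau = tau
            self.z = 0.0
            self.t_last = 0.0

        def value_at(self, t):
            """Trace value at time t, without mutating state."""
            return self.z * math.exp(-(t - self.t_last) / self.tau)

        def on_spike(self, t, increment=1.0):
            """Decay analytically up to t, then add the spike increment."""
            self.z = self.value_at(t) + increment
            self.t_last = t

    trace = EventDrivenTrace(tau=20.0)     # time constant in ms
    for t in [5.0, 10.0, 30.0]:            # spike arrival times
        trace.on_spike(t)
    z_end = trace.value_at(50.0)
    ```

    The saving is exactly the one the abstract describes: work scales with the number of spike events rather than the number of simulation time-steps, which matters when every synapse carries several such state variables.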

  17. Statistical Ensemble of Large Eddy Simulations

    NASA Technical Reports Server (NTRS)

    Carati, Daniele; Rogers, Michael M.; Wray, Alan A.; Mansour, Nagi N. (Technical Monitor)

    2001-01-01

    A statistical ensemble of large eddy simulations (LES) is run simultaneously for the same flow. The information provided by the different large scale velocity fields is used to propose an ensemble averaged version of the dynamic model. This produces local model parameters that only depend on the statistical properties of the flow. An important property of the ensemble averaged dynamic procedure is that it does not require any spatial averaging and can thus be used in fully inhomogeneous flows. Also, the ensemble of LES's provides statistics of the large scale velocity that can be used for building new models for the subgrid-scale stress tensor. The ensemble averaged dynamic procedure has been implemented with various models for three flows: decaying isotropic turbulence, forced isotropic turbulence, and the time developing plane wake. It is found that the results are almost independent of the number of LES's in the statistical ensemble provided that the ensemble contains at least 16 realizations.

  18. The Q continuum simulation: Harnessing the power of GPU accelerated supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heitmann, Katrin; Frontiere, Nicholas; Sewell, Chris

    2015-08-01

    Modeling large-scale sky survey observations is a key driver for the continuing development of high-resolution, large-volume, cosmological simulations. We report the first results from the "Q Continuum" cosmological N-body simulation run carried out on the GPU-accelerated supercomputer Titan. The simulation encompasses a volume of (1300 Mpc)³ and evolves more than half a trillion particles, leading to a particle mass resolution of m_p ≃ 1.5 × 10⁸ M_⊙. At this mass resolution, the Q Continuum run is currently the largest cosmology simulation available. It enables the construction of detailed synthetic sky catalogs, encompassing different modeling methodologies, including semi-analytic modeling and sub-halo abundance matching in a large, cosmological volume. Here we describe the simulation and outputs in detail and present first results for a range of cosmological statistics, such as mass power spectra, halo mass functions, and halo mass-concentration relations for different epochs. We also provide details on challenges connected to running a simulation on almost 90% of Titan, one of the fastest supercomputers in the world, including our usage of Titan's GPU accelerators.
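
    The quoted particle mass can be sanity-checked from the box volume and particle count via m_p = Ω_m ρ_crit V / N. The cosmological parameters and the particle count N = 8192³ (≈ 5.5 × 10¹¹, consistent with "more than half a trillion") are assumptions for this back-of-envelope check, not values stated in the abstract.

    ```python
    # Order-of-magnitude check of the quoted particle mass, m_p ~ 1.5e8 Msun:
    # m_p = Omega_m * rho_crit * V / N. Omega_m, h, and N = 8192^3 are
    # illustrative assumptions; the abstract only gives the volume and
    # "more than half a trillion particles".

    RHO_CRIT = 2.775e11        # critical density in h^2 Msun / Mpc^3
    omega_m = 0.31             # assumed matter density parameter
    h = 0.7                    # assumed dimensionless Hubble parameter

    volume_mpc3 = 1300.0 ** 3  # (1300 Mpc)^3 simulation volume
    n_particles = 8192 ** 3    # assumed particle count, ~5.5e11

    m_p = omega_m * RHO_CRIT * h**2 * volume_mpc3 / n_particles
    ```

    This comes out near 1.7 × 10⁸ M_⊙, the same order as the quoted ~1.5 × 10⁸ M_⊙; the residual difference depends on the exact cosmology and on whether the box length is quoted in Mpc or Mpc/h.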

  19. Mixing model with multi-particle interactions for Lagrangian simulations of turbulent mixing

    NASA Astrophysics Data System (ADS)

    Watanabe, T.; Nagata, K.

    2016-08-01

    We report on the numerical study of the mixing volume model (MVM) for molecular diffusion in Lagrangian simulations of turbulent mixing problems. The MVM is based on the multi-particle interaction in a finite volume (mixing volume). An a priori test of the MVM, based on the direct numerical simulations of planar jets, is conducted in the turbulent region and the interfacial layer between the turbulent and non-turbulent fluids. The results show that the MVM predicts well the mean effects of the molecular diffusion under various numerical and flow parameters. The number of mixing particles should be large enough for the predicted molecular diffusion term to correlate positively with the exact value. The size of the mixing volume relative to the Kolmogorov scale η is important in the performance of the MVM. The scalar transfer across the turbulent/non-turbulent interface is well captured by the MVM, especially with the small mixing volume. Furthermore, the MVM with multiple mixing particles is tested in the hybrid implicit large-eddy-simulation/Lagrangian-particle-simulation (LES-LPS) of the planar jet with the characteristic length of the mixing volume of O(100η). Despite the large mixing volume, the MVM works well and decays the scalar variance at a rate close to the reference LES. The statistics in the LPS are very robust to the number of particles used in the simulations and the computational grid size of the LES. Both in the turbulent core region and the intermittent region, the LPS predicts a scalar field well correlated to the LES.
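
    The core of a multi-particle mixing step can be sketched as follows: the particles sharing one mixing volume relax toward their common mean, which conserves the mean scalar while decaying its variance, mimicking molecular diffusion. This is an IEM-style relaxation written for illustration, not Watanabe and Nagata's exact MVM formulation.

    ```python
    import numpy as np

    # Sketch of one mixing step among the Np particles sharing a mixing
    # volume: scalars relax toward their common mean. The mean is conserved
    # exactly and the variance decays by exp(-2*dt/tau_m) per step.

    def mixing_step(phi, dt, tau_m):
        """Relax particle scalars `phi` toward the mixing-volume mean.

        dt    : time step
        tau_m : mixing time scale
        """
        decay = np.exp(-dt / tau_m)          # exact relaxation factor
        mean = phi.mean()
        return mean + (phi - mean) * decay   # mean kept, variance decays

    rng = np.random.default_rng(1)
    phi = rng.normal(0.0, 1.0, 8)            # 8 particles in one mixing volume
    phi_new = mixing_step(phi, dt=0.1, tau_m=0.5)
    ```

    The two properties checked in the paper's a priori test, conservation of the mean and a controlled variance decay rate, hold by construction here, which is what makes this family of models attractive for Lagrangian mixing.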

  20. Modeling the Lyα Forest in Collisionless Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sorini, Daniele; Oñorbe, José; Lukić, Zarija

    2016-08-11

    Cosmological hydrodynamic simulations can accurately predict the properties of the intergalactic medium (IGM), but only under the condition of retaining the high spatial resolution necessary to resolve density fluctuations in the IGM. This resolution constraint prohibits simulating large volumes, such as those probed by BOSS and future surveys, like DESI and 4MOST. To overcome this limitation, we present in this paper "Iteratively Matched Statistics" (IMS), a novel method to accurately model the Lyα forest with collisionless N-body simulations, where the relevant density fluctuations are unresolved. We use a small-box, high-resolution hydrodynamic simulation to obtain the probability distribution function (PDF) and the power spectrum of the real-space Lyα forest flux. These two statistics are iteratively mapped onto a pseudo-flux field of an N-body simulation, which we construct from the matter density. We demonstrate that our method can reproduce the PDF, line of sight and 3D power spectra of the Lyα forest with good accuracy (7%, 4%, and 7% respectively). We quantify the performance of the commonly used Gaussian smoothing technique and show that it has significantly lower accuracy (20%–80%), especially for N-body simulations with achievable mean inter-particle separations in large-volume simulations. Finally, we show that IMS produces reasonable and smooth spectra, making it a powerful tool for modeling the IGM in large cosmological volumes and for producing realistic "mock" skies for Lyα forest surveys.
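
    The PDF-matching half of an IMS-style procedure can be illustrated with a rank-order mapping: the pseudo-flux field is monotonically remapped so that its one-point PDF matches that of the high-resolution reference. The iterative alternation with power-spectrum matching, which is the other half of IMS, is omitted here; the fields below are synthetic stand-ins.

    ```python
    import numpy as np

    # Illustration of one-point PDF matching via rank-order mapping: the
    # i-th ranked value of `field` is replaced by the i-th ranked value of
    # `reference`, so the remapped field inherits the reference PDF exactly
    # while preserving the spatial ordering of the original field.

    def match_pdf(field, reference):
        """Monotonically remap `field` so its sorted values equal those
        of `reference` (arrays must have equal length)."""
        order = np.argsort(field)
        matched = np.empty_like(field)
        matched[order] = np.sort(reference)
        return matched

    rng = np.random.default_rng(2)
    nbody_flux = rng.normal(size=4096)          # stand-in pseudo-flux field
    hydro_flux = rng.beta(2.0, 5.0, size=4096)  # stand-in hydro flux values

    mapped = match_pdf(nbody_flux, hydro_flux)
    ```

    Because the mapping is monotone, it changes the one-point statistics without reshuffling structure, which is why it must then be iterated against the power-spectrum constraint in the full method.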

  1. MODELING THE Ly α FOREST IN COLLISIONLESS SIMULATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sorini, Daniele; Oñorbe, José; Hennawi, Joseph F.

    2016-08-20

    Cosmological hydrodynamic simulations can accurately predict the properties of the intergalactic medium (IGM), but only under the condition of retaining the high spatial resolution necessary to resolve density fluctuations in the IGM. This resolution constraint prohibits simulating large volumes, such as those probed by BOSS and future surveys, like DESI and 4MOST. To overcome this limitation, we present “Iteratively Matched Statistics” (IMS), a novel method to accurately model the Ly α forest with collisionless N -body simulations, where the relevant density fluctuations are unresolved. We use a small-box, high-resolution hydrodynamic simulation to obtain the probability distribution function (PDF) and the power spectrum of the real-space Ly α forest flux. These two statistics are iteratively mapped onto a pseudo-flux field of an N -body simulation, which we construct from the matter density. We demonstrate that our method can reproduce the PDF, line of sight and 3D power spectra of the Ly α forest with good accuracy (7%, 4%, and 7% respectively). We quantify the performance of the commonly used Gaussian smoothing technique and show that it has significantly lower accuracy (20%–80%), especially for N -body simulations with achievable mean inter-particle separations in large-volume simulations. In addition, we show that IMS produces reasonable and smooth spectra, making it a powerful tool for modeling the IGM in large cosmological volumes and for producing realistic “mock” skies for Ly α forest surveys.

  2. Slug sizing/slug volume prediction, state of the art review and simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burke, N.E.; Kashou, S.F.

    1995-12-01

    Slug flow is a flow pattern commonly encountered in offshore multiphase flowlines. It is characterized by an alternate flow of liquid slugs and gas pockets, resulting in an unsteady hydrodynamic behavior. All important design variables, such as slug length and slug frequency, liquid holdup, and pressure drop, vary with time and this makes the prediction of slug flow characteristics both difficult and challenging. This paper reviews the state of the art methods in slug catcher sizing and slug volume predictions. In addition, history matching of measured slug flow data is performed using the OLGA transient simulator. This paper reviews the design factors that impact slug catcher sizing during steady state, during transient, during pigging, and during operations under a process control system. The slug tracking option of the OLGA simulator is applied to predict the slug length and the slug volume during a field operation. This paper will also comment on the performance of common empirical slug prediction correlations.

  3. Slug-sizing/slug-volume prediction: State of the art review and simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burke, N.E.; Kashou, S.F.

    1996-08-01

    Slug flow is a flow pattern commonly encountered in offshore multiphase flowlines. It is characterized by an alternate flow of liquid slugs and gas pockets, resulting in an unsteady hydrodynamic behavior. All important design variables, such as slug length and slug frequency, liquid holdup, and pressure drop, vary with time and this makes the prediction of slug flow characteristics both difficult and challenging. This paper reviews the state of the art methods in slug-catcher sizing and slug-volume predictions. In addition, history matching of measured slug flow data is performed using the OLGA transient simulator. This paper reviews the design factors that impact slug-catcher sizing during steady state, during transient, during pigging, and during operations under a process-control system. The slug-tracking option of the simulator is applied to predict the slug length and the slug volume during a field operation. This paper will also comment on the performance of common empirical slug-prediction correlations.

  4. Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Givi, Peyman; Madnia, Cyrus K.; Steinberger, Craig J.

    1990-01-01

    This research is involved with the implementation of advanced computational schemes based on large eddy simulations (LES) and direct numerical simulations (DNS) to study the phenomenon of mixing and its coupling with chemical reactions in compressible turbulent flows. In the efforts related to LES, a research program to extend the present capabilities of this method was initiated for the treatment of chemically reacting flows. In the DNS efforts, the focus is on detailed investigations of the effects of compressibility, heat release, and non-equilibrium kinetics modeling in high speed reacting flows. Emphasis was on the simulations of simple flows, namely homogeneous compressible flows, and temporally developing high speed mixing layers.

  5. Large-eddy simulation using the finite element method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCallen, R.C.; Gresho, P.M.; Leone, J.M. Jr.

    1993-10-01

    In a large-eddy simulation (LES) of turbulent flows, the large-scale motion is calculated explicitly, while the effects of the unresolved small scales are modeled (i.e., approximated with semi-empirical relations). Typically, finite difference or spectral numerical schemes are used to generate an LES; the use of finite element methods (FEM) has been far less prominent. In this study, we demonstrate that FEM in combination with LES provides a viable tool for the study of turbulent, separating channel flows, specifically the flow over a two-dimensional backward-facing step. The combination of these methodologies brings together the advantages of each: LES provides a high degree of accuracy with a minimum of empiricism for turbulence modeling, and FEM provides a robust way to simulate flow in very complex domains of practical interest. Such a combination should prove very valuable to the engineering community.

  6. Large-eddy simulation of nitrogen injection at trans- and supercritical conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Müller, Hagen; Pfitzner, Michael; Niedermeier, Christoph A.

    2016-01-15

    Large-eddy simulations (LESs) of cryogenic nitrogen injection into a warm environment at supercritical pressure are performed and real-gas thermodynamics models and subgrid-scale (SGS) turbulence models are evaluated. The comparison of different SGS models — the Smagorinsky model, the Vreman model, and the adaptive local deconvolution method — shows that the representation of turbulence on the resolved scales has a notable effect on the location of jet break-up, whereas the particular modeling of unresolved scales is less important for the overall mean flow field evolution. More important are the models for the fluid’s thermodynamic state. The injected fluid is either in a supercritical or in a transcritical state and undergoes a pseudo-boiling process during mixing. Such flows typically exhibit strong density gradients that delay the instability growth and can lead to a redistribution of turbulence kinetic energy from the radial to the axial flow direction. We evaluate novel volume-translation methods on the basis of the cubic Peng-Robinson equation of state in the framework of LES. At small extra computational cost, their application considerably improves the simulation results compared to the standard formulation. Furthermore, we found that the choice of inflow temperature is crucial for the reproduction of the experimental results and that heat addition within the injector can affect the mean flow field in comparison to results with an adiabatic injector.
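    The volume-translation idea evaluated above can be illustrated with the standard (untranslated) Peng-Robinson equation of state plus a molar-volume shift. In this sketch the nitrogen critical constants come from standard property tables; the shift `c` is a placeholder parameter, not a fitted value from the paper:

```python
import numpy as np

R = 8.314462618                            # universal gas constant [J/(mol K)]
Tc, pc, omega = 126.2, 3.3958e6, 0.0372    # nitrogen critical data (standard tables)

a = 0.45724 * R**2 * Tc**2 / pc            # PR attraction parameter
b = 0.07780 * R * Tc / pc                  # PR co-volume
kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2

def pr_pressure(T, v):
    """Peng-Robinson pressure [Pa] at temperature T [K] and molar volume v [m^3/mol]."""
    alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc))) ** 2
    return R * T / (v - b) - a * alpha / (v * v + 2.0 * b * v - b * b)

def pr_pressure_translated(T, v, c):
    """Volume translation: evaluate the EOS at a shifted molar volume v + c,
    where c is a (possibly temperature-dependent) correction fitted to density data."""
    return pr_pressure(T, v + c)
```

    At large molar volumes the cubic EOS reduces to the ideal-gas law, which is a quick sanity check on the implementation; the translation only shifts predicted densities without changing the computed pressure-temperature phase behavior.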

  7. Large-scale lattice-Boltzmann simulations over lambda networks

    NASA Astrophysics Data System (ADS)

    Saksena, R.; Coveney, P. V.; Pinning, R.; Booth, S.

    Amphiphilic molecules are of immense industrial importance, mainly due to their tendency to align at interfaces in a solution of immiscible species, e.g., oil and water, thereby reducing surface tension. Depending on the concentration of amphiphiles in the solution, they may assemble into a variety of morphologies, such as lamellae, micelles, sponge and cubic bicontinuous structures exhibiting non-trivial rheological properties. The main objective of this work is to study the rheological properties of very large, defect-containing gyroidal systems (of up to 1024³ lattice sites) using the lattice-Boltzmann method. Memory requirements for the simulation of such large lattices exceed that available to us on most supercomputers and so we use MPICH-G2/MPIg to investigate geographically distributed domain decomposition simulations across HPCx in the UK and TeraGrid in the US. Use of MPICH-G2/MPIg requires the port-forwarder to work with the grid middleware on HPCx. Data from the simulations is streamed to a high performance visualisation resource at UCL (London) for rendering and visualisation. Presented at “Lighting the Blue Touchpaper for UK e-Science”, the closing conference of the ESLEA Project, 26-28 March 2007, The George Hotel, Edinburgh, UK.

  8. Using Large Signal Code TESLA for Wide Band Klystron Simulations

    DTIC Science & Technology

    2006-04-01

    tuning procedure TESLA simulates of high power klystron [3]. accurately actual eigenmodes of the structure as a solution Wide band klystrons very often...on band klystrons with two-gap two-mode resonators. The decomposition of simulation region into an external results of TESLA simulations for NRL S ...UNCLASSIFIED Defense Technical Information Center Compilation Part Notice ADP022454 TITLE: Using Large Signal Code TESLA for Wide Band Klystron

  9. Time-Domain Filtering for Spatial Large-Eddy Simulation

    NASA Technical Reports Server (NTRS)

    Pruett, C. David

    1997-01-01

    An approach to large-eddy simulation (LES) is developed whose subgrid-scale model incorporates filtering in the time domain, in contrast to conventional approaches, which exploit spatial filtering. The method is demonstrated in the simulation of a heated, compressible, axisymmetric jet, and results are compared with those obtained from fully resolved direct numerical simulation. The present approach was, in fact, motivated by the jet-flow problem and the desire to manipulate the flow by localized (point) sources for the purposes of noise suppression. Time-domain filtering appears to be more consistent with the modeling of point sources; moreover, time-domain filtering may resolve some fundamental inconsistencies associated with conventional space-filtered LES approaches.
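    A minimal sketch of a causal time-domain filter of the kind discussed here is the first-order exponential filter, d(ū)/dt = (u − ū)/T; this is a generic form, not necessarily the paper's exact kernel:

```python
import numpy as np

def causal_time_filter(signal, dt, T):
    """First-order causal (exponential) time filter: d(ubar)/dt = (u - ubar)/T.

    Uses only past samples, so it can be evaluated on-the-fly during a
    simulation; assumes dt << T for accuracy of the explicit update.
    """
    alpha = dt / T
    filtered = np.empty_like(signal)
    filtered[0] = signal[0]
    for n in range(1, len(signal)):
        # Explicit Euler update: relax toward the instantaneous signal
        filtered[n] = filtered[n - 1] + alpha * (signal[n] - filtered[n - 1])
    return filtered

# Example: a 50 Hz oscillation is strongly attenuated by a T = 0.05 s filter
t = np.arange(0.0, 1.0, 0.001)
raw = np.sin(2.0 * np.pi * 50.0 * t)
smoothed = causal_time_filter(raw, dt=0.001, T=0.05)
```

    Unlike a spatial filter, this operation commutes trivially with spatial differentiation and is naturally localized at a point, which is the property the abstract highlights for modeling point sources.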

  10. Simulation model for wind energy storage systems. Volume II. Operation manual. [SIMWEST code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warren, A.W.; Edsinger, R.W.; Burroughs, J.D.

    1977-08-01

    The effort developed a comprehensive computer program for the modeling of wind energy/storage systems utilizing any combination of five types of storage (pumped hydro, battery, thermal, flywheel and pneumatic). An acronym for the program is SIMWEST (Simulation Model for Wind Energy Storage). The level of detail of SIMWEST is consistent with a role of evaluating the economic feasibility as well as the general performance of wind energy systems. The software package consists of two basic programs and a library of system, environmental, and load components. Volume II, the SIMWEST operation manual, describes the usage of the SIMWEST program, the design of the library components, and a number of simple example simulations intended to familiarize the user with the program's operation. Volume II also contains a listing of each SIMWEST library subroutine.

  11. Simulation of elution profiles in liquid chromatography - II: Investigation of injection volume overload under gradient elution conditions applied to second dimension separations in two-dimensional liquid chromatography.

    PubMed

    Stoll, Dwight R; Sajulga, Ray W; Voigt, Bryan N; Larson, Eli J; Jeong, Lena N; Rutan, Sarah C

    2017-11-10

    An important research direction in the continued development of two-dimensional liquid chromatography (2D-LC) is to improve the detection sensitivity of the method. This is especially important in applications where injection of large volumes of effluent from the first dimension (¹D) column into the second dimension (²D) column leads to severe ²D peak broadening and peak shape distortion. For example, this is common when coupling two reversed-phase columns and the organic solvent content of the ¹D mobile phase overwhelms the ²D column with each injection of ¹D effluent, leading to low resolution in the second dimension. In a previous study we validated a simulation approach based on the Craig distribution model and adapted from the work of Czok and Guiochon [1] that enabled accurate simulation of simple isocratic and gradient separations with very small injection volumes, and isocratic separations with mismatched injection and mobile phase solvents [2]. In the present study we have extended this approach to simulate separations relevant to 2D-LC. Specifically, we have focused on simulating ²D separations where gradient elution conditions are used, there is mismatch between the sample solvent and the starting point in the gradient elution program, injection volumes approach or even exceed the dead volume of the ²D column, and the extent of sample loop filling is varied. To validate this simulation we have compared results from simulations and experiments for 101 different conditions, including variation in injection volume (0.4-80 μL), loop filling level (25-100%), and degree of mismatch between sample organic solvent and the starting point in the gradient elution program (-20 to +20% ACN). We find that the simulation is accurate enough (median errors in retention time and peak width of -1.0 and -4.9%, without corrections for extra-column dispersion) to be useful in guiding optimization of 2D-LC separations.
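    The Craig distribution model underlying the simulation can be sketched as a plate-to-plate transfer scheme. The toy version below handles a single solute with a constant retention factor k (no gradient, no solvent mismatch), so it illustrates only the mechanics, not the paper's full simulator:

```python
import numpy as np

def craig_elution(n_plates, n_transfers, k):
    """Craig counter-current distribution for one solute with retention factor k.

    Each transfer moves the mobile-phase fraction q = 1/(1+k) of the solute in
    every plate one plate downstream; what leaves the last plate is eluted.
    """
    q = 1.0 / (1.0 + k)
    plates = np.zeros(n_plates)
    plates[0] = 1.0                      # unit amount injected onto plate 0
    eluted = np.zeros(n_transfers)
    for t in range(n_transfers):
        mobile = q * plates              # solute carried by the mobile phase
        plates = plates - mobile
        eluted[t] = mobile[-1]           # solute leaving the column
        plates[1:] += mobile[:-1]        # the rest advances one plate
    return plates, eluted

plates, chromatogram = craig_elution(n_plates=50, n_transfers=300, k=2.0)
```

    With 50 plates and k = 2 the elution peak appears near transfer n_plates·(1+k) = 150, and plate-to-plate dispersion produces the familiar near-Gaussian band shape; gradient elution would make q vary with transfer number.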

  12. A large eddy lattice Boltzmann simulation of magnetohydrodynamic turbulence

    NASA Astrophysics Data System (ADS)

    Flint, Christopher; Vahala, George

    2018-02-01

    Large eddy simulations (LES) of a lattice Boltzmann magnetohydrodynamic (LB-MHD) model are performed for the magnetized Kelvin-Helmholtz jet instability. This algorithm is an extension of Ansumali et al. [1] to MHD in which one performs first an expansion in the filter width on the kinetic equations followed by the usual low Knudsen number expansion. These two perturbation operations do not commute. Closure is achieved by invoking the physical constraint that subgrid effects occur at transport time scales. The simulations are in very good agreement with direct numerical simulations.

  13. On the influences of key modelling constants of large eddy simulations for large-scale compartment fires predictions

    NASA Astrophysics Data System (ADS)

    Yuen, Anthony C. Y.; Yeoh, Guan H.; Timchenko, Victoria; Cheung, Sherman C. P.; Chan, Qing N.; Chen, Timothy

    2017-09-01

    An in-house large eddy simulation (LES) based fire field model has been developed for large-scale compartment fire simulations. The model incorporates four major components, including subgrid-scale turbulence, combustion, soot and radiation models which are fully coupled. It is designed to simulate the temporal and fluid dynamical effects of turbulent reaction flow for non-premixed diffusion flame. Parametric studies were performed based on a large-scale fire experiment carried out in a 39-m long test hall facility. Several turbulent Prandtl and Schmidt numbers ranging from 0.2 to 0.5, and Smagorinsky constants ranging from 0.18 to 0.23 were investigated. It was found that the temperature and flow field predictions were most accurate with turbulent Prandtl and Schmidt numbers both set to 0.3, and a Smagorinsky constant of 0.2 applied. In addition, by utilising a set of numerically verified key modelling parameters, the smoke filling process was successfully captured by the present LES model.
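    For reference, the eddy viscosity that the Smagorinsky constant controls is ν_t = (C_s Δ)²|S̄|, with |S̄| the resolved strain-rate magnitude. The following is a generic 2D sketch of that formula (not the in-house model's implementation):

```python
import numpy as np

def smagorinsky_nu_t(u, v, dx, dy, Cs=0.2):
    """Smagorinsky eddy viscosity nu_t = (Cs * Delta)^2 |S| for a 2D field.

    u, v are resolved velocity components indexed as u[i, j] ~ u(x_i, y_j).
    """
    dudx, dudy = np.gradient(u, dx, dy)   # derivatives along x (axis 0) and y (axis 1)
    dvdx, dvdy = np.gradient(v, dx, dy)
    S11, S22 = dudx, dvdy
    S12 = 0.5 * (dudy + dvdx)
    S_mag = np.sqrt(2.0 * (S11**2 + S22**2 + 2.0 * S12**2))
    delta = np.sqrt(dx * dy)              # filter width tied to the grid scale
    return (Cs * delta) ** 2 * S_mag
```

    Raising C_s from 0.18 to 0.23, as in the parametric study, increases ν_t by (0.23/0.18)² ≈ 1.6, which is why the predictions are sensitive to this constant.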

  14. Publicly Releasing a Large Simulation Dataset with NDS Labs

    NASA Astrophysics Data System (ADS)

    Goldbaum, Nathan

    2016-03-01

    Optimally, all publicly funded research should be accompanied by the tools, code, and data necessary to fully reproduce the analysis performed in journal articles describing the research. This ideal can be difficult to attain, particularly when dealing with large (>10 TB) simulation datasets. In this lightning talk, we describe the process of publicly releasing a large simulation dataset to accompany the submission of a journal article. The simulation was performed using Enzo, an open source, community-developed N-body/hydrodynamics code and was analyzed using a wide range of community-developed tools in the scientific Python ecosystem. Although the simulation was performed and analyzed using an ecosystem of sustainably developed tools, we enable sustainable science using our data by making it publicly available. Combining the data release with the NDS Labs infrastructure allows a substantial amount of added value, including web-based access to analysis and visualization using the yt analysis package through an IPython notebook interface. In addition, we are able to accompany the paper submission to the arXiv preprint server with links to the raw simulation data as well as interactive real-time data visualizations that readers can explore on their own or share with colleagues during journal club discussions. It is our hope that the value added by these services will substantially increase the impact and readership of the paper.

  15. Parsing partial molar volumes of small molecules: a molecular dynamics study.

    PubMed

    Patel, Nisha; Dubins, David N; Pomès, Régis; Chalikian, Tigran V

    2011-04-28

    We used molecular dynamics (MD) simulations in conjunction with the Kirkwood-Buff theory to compute the partial molar volumes for a number of small solutes of various chemical natures. We repeated our computations using modified pair potentials, first, in the absence of the Coulombic term and, second, in the absence of the Coulombic and the attractive Lennard-Jones terms. Comparison of our results with experimental data and the volumetric results of Monte Carlo simulation with hard sphere potentials and scaled particle theory-based computations led us to conclude that, for small solutes, the partial molar volume computed with the Lennard-Jones potential in the absence of the Coulombic term nearly coincides with the cavity volume. On the other hand, MD simulations carried out with the pair interaction potentials containing only the repulsive Lennard-Jones term produce unrealistically large partial molar volumes of solutes that are close to their excluded volumes. Our simulation results are in good agreement with the reported schemes for parsing partial molar volume data on small solutes. In particular, our determined interaction volumes and the thickness of the thermal volume for individual compounds are in good agreement with empirical estimates. This work is the first computational study that supports and lends credence to the practical algorithms of parsing partial molar volume data that are currently in use for molecular interpretations of volumetric data.
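    In Kirkwood-Buff theory, the infinite-dilution partial molar volume follows from the solute-solvent KB integral, V̄ = k_B·T·κ_T − G_uv with G_uv = ∫₀^∞ (g_uv(r) − 1)·4πr²·dr. The numeric sketch below uses a hard-sphere-like radial distribution function as a placeholder (the assumed cavity radius is illustrative, not simulation output):

```python
import numpy as np

def kb_integral(r, g):
    """Kirkwood-Buff integral G = ∫ (g(r) - 1) 4 pi r^2 dr (trapezoidal rule)."""
    f = (g - 1.0) * 4.0 * np.pi * r**2
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r))

# Placeholder RDF: total exclusion inside radius sigma, ideal solvent beyond.
sigma = 0.3e-9                          # assumed cavity radius [m] (hypothetical)
r = np.linspace(1e-12, 2e-9, 4000)
g = np.where(r < sigma, 0.0, 1.0)

G_uv = kb_integral(r, g)                # ≈ -(4/3) pi sigma^3, i.e. minus the excluded volume
# Per-molecule partial molar volume at infinite dilution would then be
# v_u = kB * T * kappa_T - G_uv, with kappa_T the solvent isothermal compressibility.
```

    For this purely repulsive g(r) the KB integral returns minus the excluded volume, which mirrors the paper's finding that repulsive-only potentials drive the computed partial molar volume toward the excluded volume.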

  16. GCM Simulation of the Large-scale North American Monsoon Including Water Vapor Tracer Diagnostics

    NASA Technical Reports Server (NTRS)

    Bosilovich, Michael G.; Walker, Gregory; Schubert, Siegfried D.; Sud, Yogesh; Atlas, Robert M. (Technical Monitor)

    2001-01-01

    The geographic sources of water for the large-scale North American monsoon in a GCM are diagnosed using passive constituent tracers of regional water sources (Water Vapor Tracers, WVT). The NASA Data Assimilation Office Finite Volume (FV) GCM was used to produce a 10-year simulation (1984 through 1993) including observed sea surface temperature. Regional and global WVT sources were defined to delineate the surface origin of water for precipitation in and around the North American Monsoon. The evolution of the mean annual cycle and the interannual variations of the monsoonal circulation will be discussed. Of special concern are the relative contributions of the local source (precipitation recycling) and remote sources of water vapor to the annual cycle and the interannual variation of warm season precipitation. The relationships between soil water, surface evaporation, precipitation and precipitation recycling will be evaluated.

  17. GCM Simulation of the Large-Scale North American Monsoon Including Water Vapor Tracer Diagnostics

    NASA Technical Reports Server (NTRS)

    Bosilovich, Michael G.; Walker, Gregory; Schubert, Siegfried D.; Sud, Yogesh; Atlas, Robert M. (Technical Monitor)

    2002-01-01

    The geographic sources of water for the large scale North American monsoon in a GCM (General Circulation Model) are diagnosed using passive constituent tracers of regional water sources (Water Vapor Tracers, WVT). The NASA Data Assimilation Office Finite Volume (FV) GCM was used to produce a 10-year simulation (1984 through 1993) including observed sea surface temperature. Regional and global WVT sources were defined to delineate the surface origin of water for precipitation in and around the North American Monsoon. The evolution of the mean annual cycle and the interannual variations of the monsoonal circulation will be discussed. Of special concern are the relative contributions of the local source (precipitation recycling) and remote sources of water vapor to the annual cycle and the interannual variation of monsoonal precipitation. The relationships between soil water, surface evaporation, precipitation and precipitation recycling will be evaluated.

  18. Architectural Large Constructed Environment. Modeling and Interaction Using Dynamic Simulations

    NASA Astrophysics Data System (ADS)

    Fiamma, P.

    2011-09-01

    How can the simulation derived from a large-size data model be used for architectural design? The topic concerns the phase that usually follows data acquisition, during the construction of the model and, especially, afterwards, when designers must interact with the simulation in order to develop and verify their ideas. In this case study, the concept of interaction includes the concept of real-time "flows". The work develops contents and results that can be part of the broad current debate about the connection between "architecture" and "movement". The focus of the work is to realize a collaborative and participative virtual environment in which the different specialist actors, the client, and the final users can share knowledge, targets and constraints to better achieve the intended result. The goal is to use a dynamic micro-simulation digital resource that allows all the actors to explore the model in a powerful and realistic way and to have a new type of interaction in a complex architectural scenario. On the one hand, the work represents a base of knowledge that can be extended further; on the other hand, it represents an attempt to understand large constructed architecture simulation as a way of life, a way of being in time and space. The architectural design before, and the architectural fact after, both happen in a sort of "Spatial Analysis System". The way is open to offer to this "system" knowledge and theories that can support architectural design work for every application and scale. Architecture is a spatial configuration that can also be reconfigured through design.

  19. A large-eddy simulation study of wake propagation and power production in an array of tidal-current turbines.

    PubMed

    Churchfield, Matthew J; Li, Ye; Moriarty, Patrick J

    2013-02-28

    This paper presents our initial work in performing large-eddy simulations of tidal turbine array flows. First, a horizontally periodic precursor simulation is performed to create turbulent flow data. Then those data are used as inflow into a tidal turbine array two rows deep and infinitely wide. The turbines are modelled using rotating actuator lines, and the finite-volume method is used to solve the governing equations. In studying the wakes created by the turbines, we observed that the vertical shear of the inflow combined with wake rotation causes lateral wake asymmetry. Also, various turbine configurations are simulated, and the total power production relative to isolated turbines is examined. We found that staggering consecutive rows of turbines in the simulated configurations allows the greatest efficiency using the least downstream row spacing. Counter-rotating consecutive downstream turbines in a non-staggered array shows a small benefit. This work has identified areas for improvement. For example, using a larger precursor domain would better capture elongated turbulent structures, and including salinity and temperature equations would account for density stratification and its effect on turbulence. Additionally, the wall shear stress modelling could be improved, and more array configurations could be examined.

  20. Simulating the large-scale structure of HI intensity maps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seehars, Sebastian; Paranjape, Aseem; Witzemann, Amadeus

    Intensity mapping of neutral hydrogen (HI) is a promising observational probe of cosmology and large-scale structure. We present wide field simulations of HI intensity maps based on N-body simulations of a 2.6 Gpc/h box with 2048³ particles (particle mass 1.6 × 10¹¹ M⊙/h). Using a conditional mass function to populate the simulated dark matter density field with halos below the mass resolution of the simulation (10⁸ M⊙/h < M_halo < 10¹³ M⊙/h), we assign HI to those halos according to a phenomenological halo-to-HI mass relation. The simulations span a redshift range of 0.35 ≲ z ≲ 0.9 in redshift bins of width Δz ≈ 0.05 and cover a quarter of the sky at an angular resolution of about 7'. We use the simulated intensity maps to study the impact of non-linear effects and redshift space distortions on the angular clustering of HI. Focusing on the autocorrelations of the maps, we apply and compare several estimators for the angular power spectrum and its covariance. We verify that these estimators agree with analytic predictions on large scales and study the validity of approximations based on Gaussian random fields, particularly in the context of the covariance. We discuss how our results and the simulated maps can be useful for planning and interpreting future HI intensity mapping surveys.
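    A minimal flat-sky version of an angular power spectrum estimator of the kind compared here bins |FFT|² of the map by wavenumber magnitude. This is a generic sketch (normalization conventions vary; the paper's estimators work on the curved sky):

```python
import numpy as np

def flat_sky_power(field, pixel_size, n_bins=16):
    """Azimuthally averaged power spectrum of a 2D flat-sky map (sketch)."""
    ny, nx = field.shape
    fk = np.fft.fft2(field)
    # One common convention: |F(k)|^2 * (pixel area) / (number of pixels)
    p2d = (np.abs(fk) ** 2) * (pixel_size ** 2) / (nx * ny)
    kx = np.fft.fftfreq(nx, d=pixel_size)
    ky = np.fft.fftfreq(ny, d=pixel_size)
    kmag = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    edges = np.linspace(0.0, kmag.max(), n_bins + 1)
    idx = np.clip(np.digitize(kmag.ravel(), edges) - 1, 0, n_bins - 1)
    sums = np.bincount(idx, weights=p2d.ravel(), minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    return edges, sums / np.maximum(counts, 1)   # mean power per k-bin
```

    The covariance of such an estimator is what the Gaussian-random-field approximation tested in the paper tries to predict; on small scales, non-Gaussianity of the HI field makes that approximation break down.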

  1. Characterization of Large Volume CLYC Scintillators for Nuclear Security Applications

    NASA Astrophysics Data System (ADS)

    Soundara-Pandian, Lakshmi; Tower, J.; Hines, C.; O'Dougherty, P.; Glodo, J.; Shah, K.

    2017-07-01

    We report on our development of large volume Cs₂LiYCl₆ (CLYC) detectors for nuclear security applications. Three-inch diameter boules have been grown and 3-in right cylinders have been fabricated. Crystals containing either >95% ⁶Li or >99% ⁷Li have been grown for applications specific to thermal or fast neutron detection, respectively. We evaluated their gamma and neutron detection properties, and the performance is as good as that of small crystals. Gamma and neutron efficiencies were measured for large crystals and compared with those of smaller crystals. With their excellent performance characteristics and the ability to detect fast neutrons, CLYC detectors are excellent triple-mode scintillators for use in handheld and backpack instruments for nuclear security applications.

  2. A Novel Technique for Endovascular Removal of Large Volume Right Atrial Tumor Thrombus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nickel, Barbara, E-mail: nickel.ba@gmail.com; McClure, Timothy, E-mail: tmcclure@gmail.com; Moriarty, John, E-mail: jmoriarty@mednet.ucla.edu

    Venous thromboembolic disease is a significant cause of morbidity and mortality, particularly in the setting of large volume pulmonary embolism. Thrombolytic therapy has been shown to be a successful treatment modality; however, its use is somewhat limited due to the risk of hemorrhage and the potential for distal embolization in the setting of large mobile thrombi. In patients where thrombolysis is either contraindicated or unsuccessful, and conventional therapies prove inadequate, surgical thrombectomy may be considered. We present a case of percutaneous endovascular extraction of a large mobile mass extending from the inferior vena cava into the right atrium using the AngioVac device, a venovenous bypass system designed for high-volume aspiration of undesired endovascular material. Standard endovascular methods for removal of cancer-associated thrombus, such as catheter-directed lysis, maceration, and exclusion, may prove inadequate in the setting of underlying tumor thrombus. Where conventional endovascular methods either fail or are unsuitable, endovascular thrombectomy with the AngioVac device may be a useful and safe minimally invasive alternative to open resection.

  3. Simulating the interaction of the heliosphere with the local interstellar medium: MHD results from a finite volume approach, first bidimensional results

    NASA Technical Reports Server (NTRS)

    Chanteur, G.; Khanfir, R.

    1995-01-01

    We have designed a fully compressible MHD code working on unstructured meshes in order to be able to compute accurately sharp structures embedded in large scale simulations. The code is based on a finite volume method making use of a kinetic flux splitting. A bidimensional version of the code has been used to simulate the interaction of a moving interstellar medium, magnetized or unmagnetized, with a rotating and magnetized heliospheric plasma source. Being aware that these computations are not realistic due to the restriction to two dimensions, we present them to demonstrate the ability of this new code to handle this problem. An axisymmetric version, now under development, will be operational in a few months. Ultimately we plan to run a full 3D version.

  4. Mixing model with multi-particle interactions for Lagrangian simulations of turbulent mixing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watanabe, T., E-mail: watanabe.tomoaki@c.nagoya-u.jp; Nagata, K.

    We report on the numerical study of the mixing volume model (MVM) for molecular diffusion in Lagrangian simulations of turbulent mixing problems. The MVM is based on the multi-particle interaction in a finite volume (mixing volume). A priori tests of the MVM, based on direct numerical simulations of planar jets, are conducted in the turbulent region and the interfacial layer between the turbulent and non-turbulent fluids. The results show that the MVM predicts well the mean effects of the molecular diffusion under various numerical and flow parameters. The number of the mixing particles should be large for predicting a value of the molecular diffusion term positively correlated with the exact value. The size of the mixing volume relative to the Kolmogorov scale η is important in the performance of the MVM. The scalar transfer across the turbulent/non-turbulent interface is well captured by the MVM, especially with the small mixing volume. Furthermore, the MVM with multiple mixing particles is tested in the hybrid implicit large-eddy-simulation/Lagrangian-particle-simulation (LES–LPS) of the planar jet with a characteristic mixing-volume length of O(100η). Despite the large mixing volume, the MVM works well and decays the scalar variance at a rate close to the reference LES. The statistics in the LPS are very robust to the number of particles used in the simulations and to the computational grid size of the LES. Both in the turbulent core region and the intermittent region, the LPS predicts a scalar field well correlated with the LES.
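    The generic idea of multi-particle mixing within a volume can be sketched as relaxation of each particle's scalar value toward the in-volume mean (an IEM-style closure; the MVM's actual formulation differs in detail):

```python
import numpy as np

def mix_in_volume(phi, dt, tau):
    """Relax the scalar values of particles sharing one mixing volume
    toward their common mean on a mixing timescale tau.

    The mean is conserved exactly; the scalar variance decays, mimicking
    the mean effect of molecular diffusion on the subgrid scalar field.
    """
    mean = phi.mean()
    return phi + (dt / tau) * (mean - phi)

rng = np.random.default_rng(3)
phi = rng.normal(size=8)                 # a handful of particles in one mixing volume
phi_new = mix_in_volume(phi, dt=0.01, tau=0.1)
```

    One update multiplies the in-volume variance by (1 − dt/τ)², so τ sets the scalar dissipation rate; in the hybrid LES-LPS, τ and the mixing-volume size relative to the Kolmogorov scale are the model's key parameters.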

  5. Design and simulation of betavoltaic battery using large-grain polysilicon.

    PubMed

    Yao, Shulin; Song, Zijun; Wang, Xiang; San, Haisheng; Yu, Yuxi

    2012-10-01

    In this paper, we present the design and simulation of a p-n junction betavoltaic battery based on large-grain polysilicon. By Monte Carlo simulation, the average penetration depth was obtained, according to which the optimal depletion region width was designed. A carrier-transport model for large-grain polysilicon is used to determine the diffusion length of minority carriers. By optimizing the doping concentration, a maximum power conversion efficiency of 0.90% can be achieved with a 10 mCi/cm² Ni-63 source radiation. Copyright © 2012 Elsevier Ltd. All rights reserved.
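    As a toy illustration of using Monte Carlo penetration statistics to size the depletion region, the sketch below samples straight-line path lengths from an exponential attenuation law; the attenuation coefficient is an assumed placeholder, and a real calculation would track full electron-scattering histories of the Ni-63 beta spectrum:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed (hypothetical) attenuation coefficient for low-energy betas in Si [1/m]
mu = 5.0e5
# Sample straight-line penetration depths ~ Exponential(1/mu)
depths = rng.exponential(1.0 / mu, size=200_000)

avg_depth = depths.mean()                   # average penetration depth, ~ 1/mu
# Fraction of the beta energy deposited within three mean depths: a depletion
# region of that width would capture most of the generation volume.
coverage_3x = np.mean(depths < 3.0 / mu)    # analytically 1 - exp(-3) ~ 0.95
```

    The design logic in the paper is the same shape: compute where the betas deposit energy, then choose the depletion-region width (via doping) to cover that region.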

  6. Coupling of Large Eddy Simulations with Meteorological Models to simulate Methane Leaks from Natural Gas Storage Facilities

    NASA Astrophysics Data System (ADS)

    Prasad, K.

    2017-12-01

    Atmospheric transport is usually performed with weather models, e.g., the Weather Research and Forecasting (WRF) model, which employs a parameterized turbulence model and does not resolve the fine scale dynamics generated by the flow around the buildings and features comprising a large city. The NIST Fire Dynamics Simulator (FDS) is a computational fluid dynamics model that utilizes large eddy simulation methods to model flow around buildings at length scales much smaller than is practical with models like WRF. FDS has the potential to evaluate the impact of complex topography on near-field dispersion and mixing that is difficult to simulate with a mesoscale atmospheric model. A methodology has been developed to couple the FDS model with WRF mesoscale transport models. The coupling is based on nudging the FDS flow field towards that computed by WRF, and is currently limited to one-way coupling performed in an off-line mode. This approach allows the FDS model to operate as a sub-grid scale model within a WRF simulation. To test and validate the coupled FDS-WRF model, the methane leak from the Aliso Canyon underground storage facility was simulated. Large eddy simulations were performed over the complex topography of various natural gas storage facilities, including Aliso Canyon, Honor Rancho and MacDonald Island, at 10 m horizontal and vertical resolution. The goals of these simulations included improving and validating transport models as well as testing leak hypotheses. Forward simulation results were compared with aircraft- and tower-based in-situ measurements as well as methane plumes observed using the NASA Airborne Visible InfraRed Imaging Spectrometer (AVIRIS) and the next generation instrument AVIRIS-NG. Comparison of simulation results with measurement data demonstrates the capability of the coupled FDS-WRF models to accurately simulate the transport and dispersion of methane plumes over urban domains.
Simulated integrated methane enhancements will be presented and
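The nudging coupling described in this record can be sketched in a few lines. The relaxation form and the time constant below are illustrative assumptions, not the actual FDS-WRF implementation:

```python
import numpy as np

# Hypothetical sketch of one-way "nudging": relax a fine-grid (LES) velocity
# field toward the coarse mesoscale (WRF) value over a relaxation time tau.
def nudge(u_les, u_wrf, dt, tau):
    """One forward-Euler nudging step: du/dt = (u_wrf - u_les) / tau."""
    return u_les + (dt / tau) * (u_wrf - u_les)

u = np.zeros(4)            # LES wind component (m/s), starting at rest
target = np.full(4, 5.0)   # WRF wind interpolated onto the LES grid (m/s)
for _ in range(100):
    u = nudge(u, target, dt=1.0, tau=10.0)
# u relaxes exponentially toward the WRF value
```

Because the coupling is one-way, the WRF field acts purely as a target; nothing flows back from the LES to the mesoscale model.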

  7. Large-eddy simulation of flow past a circular cylinder

    NASA Technical Reports Server (NTRS)

    Mittal, R.

    1995-01-01

Some of the most challenging applications of large-eddy simulation are those in complex geometries where spectral methods are of limited use. For such applications, more conventional methods such as finite difference or finite element have to be used. However, it has become clear in recent years that the dissipative numerical schemes routinely used in viscous flow simulations are not good candidates for LES of turbulent flows. Except in cases where the flow is extremely well resolved, it has been found that upwind schemes tend to damp out a significant portion of the small scales that can be resolved on the grid. Furthermore, even specially designed higher-order upwind schemes that have been used successfully in the direct numerical simulation of turbulent flows produce too much dissipation when used in conjunction with large-eddy simulation. The objective of the current study is to perform an LES of incompressible flow past a circular cylinder at a Reynolds number of 3900 using a solver which employs an energy-conservative second-order central difference scheme for spatial discretization, and to compare the results with those of Beaudan & Moin (1994) and with the experiments in order to assess the performance of the central scheme for this relatively complex geometry.
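The energy-conservative second-order central difference contrasted with upwinding here is, at its core, the standard non-dissipative stencil. A minimal periodic-grid sketch (illustrative only, not the study's solver):

```python
import numpy as np

# Second-order central difference for d/dx on a periodic grid:
# (f[i+1] - f[i-1]) / (2*dx). Unlike upwind stencils, its leading
# truncation error is dispersive, not dissipative.
def ddx_central(f, dx):
    return (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * dx)

n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
err = np.max(np.abs(ddx_central(np.sin(x), dx) - np.cos(x)))
# second-order accurate: the error shrinks ~4x each time dx is halved
```

For sin(x) the scheme returns cos(x)*sin(dx)/dx exactly, which makes the second-order error easy to verify.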

  8. Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Givi, Peyman; Madnia, C. K.; Steinberger, C. J.; Tsai, A.

    1991-01-01

This research is involved with the implementation of advanced computational schemes based on large eddy simulations (LES) and direct numerical simulations (DNS) to study the phenomenon of mixing and its coupling with chemical reactions in compressible turbulent flows. In the efforts related to LES, a research program was initiated to extend the present capabilities of this method for the treatment of chemically reacting flows, whereas in the DNS efforts, the focus was on detailed investigations of the effects of compressibility, heat release, and nonequilibrium kinetics modeling in high speed reacting flows. The efforts to date were primarily focused on simulations of simple flows, namely, homogeneous compressible flows and temporally developing high speed mixing layers. A summary of the accomplishments is provided.

  9. Large eddy simulation of incompressible turbulent channel flow

    NASA Technical Reports Server (NTRS)

    Moin, P.; Reynolds, W. C.; Ferziger, J. H.

    1978-01-01

The three-dimensional, time-dependent primitive equations of motion were numerically integrated for the case of turbulent channel flow. A partially implicit numerical method was developed. An important feature of this scheme is that the equation of continuity is solved directly. The residual field motions were simulated through an eddy viscosity model, while the large-scale field was obtained directly from the solution of the governing equations. An important portion of the initial velocity field was obtained from the solution of the linearized Navier-Stokes equations. The pseudospectral method was used for numerical differentiation in the horizontal directions, and second-order finite-difference schemes were used in the direction normal to the walls. The large eddy simulation technique is capable of reproducing some of the important features of wall-bounded turbulent flows. The resolvable portions of the root-mean-square wall pressure fluctuations, pressure velocity-gradient correlations, and velocity pressure-gradient correlations are documented.
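The record mentions an eddy viscosity model for the residual (subgrid) motions without naming it; the Smagorinsky closure is the classic example of such a model. A one-dimensional shorthand, with an assumed constant Cs = 0.1 for illustration:

```python
import numpy as np

# Smagorinsky-type subgrid eddy viscosity: nu_t = (Cs * Delta)^2 * |S|,
# where Delta is the filter width and |S| the strain-rate magnitude
# (which reduces to |du/dx| in this 1D illustration).
def smagorinsky_nu_t(dudx, delta, cs=0.1):
    strain = np.abs(dudx)
    return (cs * delta) ** 2 * strain

# illustrative values: zero, moderate, and strong local shear (1/s)
nu_t = smagorinsky_nu_t(np.array([0.0, 10.0, 100.0]), delta=0.05)
```

The residual stresses are then modeled as -2 nu_t S, i.e. a purely dissipative drain acting on the resolved field.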

  10. Volcanic stratigraphy of large-volume silicic pyroclastic eruptions during Oligocene Afro-Arabian flood volcanism in Yemen

    NASA Astrophysics Data System (ADS)

    Peate, Ingrid Ukstins; Baker, Joel A.; Al-Kadasi, Mohamed; Al-Subbary, Abdulkarim; Knight, Kim B.; Riisager, Peter; Thirlwall, Matthew F.; Peate, David W.; Renne, Paul R.; Menzies, Martin A.

    2005-12-01

    A new stratigraphy for bimodal Oligocene flood volcanism that forms the volcanic plateau of northern Yemen is presented based on detailed field observations, petrography and geochemical correlations. The >1 km thick volcanic pile is divided into three phases of volcanism: a main basaltic stage (31 to 29.7 Ma), a main silicic stage (29.7 to 29.5 Ma), and a stage of upper bimodal volcanism (29.5 to 27.7 Ma). Eight large-volume silicic pyroclastic eruptive units are traceable throughout northern Yemen, and some units can be correlated with silicic eruptive units in the Ethiopian Traps and to tephra layers in the Indian Ocean. The silicic units comprise pyroclastic density current and fall deposits and a caldera-collapse breccia, and they display textures that unequivocally identify them as primary pyroclastic deposits: basal vitrophyres, eutaxitic fabrics, glass shards, vitroclastic ash matrices and accretionary lapilli. Individual pyroclastic eruptions have preserved on-land volumes of up to ˜850 km3. The largest units have associated co-ignimbrite plume ash fall deposits with dispersal areas >1×107 km2 and estimated maximum total volumes of up to 5,000 km3, which provide accurate and precisely dated marker horizons that can be used to link litho-, bio- and magnetostratigraphy studies. There is a marked change in eruption style of silicic units with time, from initial large-volume explosive pyroclastic eruptions producing ignimbrites and near-globally distributed tuffs, to smaller volume (<50 km3) mixed effusive-explosive eruptions emplacing silicic lavas intercalated with tuffs and ignimbrites. Although eruption volumes decrease by an order of magnitude from the first stage to the last, eruption intervals within each phase remain broadly similar. These changes may reflect the initiation of continental rifting and the transition from pre-break-up thick, stable crust supporting large-volume magma chambers, to syn-rift actively thinning crust hosting small-volume

  11. Design, simulation, and optimization of an RGB polarization independent transmission volume hologram

    NASA Astrophysics Data System (ADS)

    Mahamat, Adoum Hassan

Volume phase holographic (VPH) gratings have been designed for use in many areas of science and technology, such as optical communication, medical imaging, spectroscopy and astronomy. The goal of this dissertation is to design a volume phase holographic grating that provides diffraction efficiencies of at least 70% across the entire visible wavelength range and higher than 90% for red, green, and blue light when the incident light is unpolarized. First, the complete design, simulation and optimization of the volume hologram are presented. The optimization uses a Monte Carlo analysis to solve for the index modulation needed to provide higher diffraction efficiencies. The solutions are determined by solving the diffraction efficiency equations of Kogelnik's two-wave coupled-wave theory. The hologram is further optimized using rigorous coupled-wave analysis to correct for effects of absorption omitted by Kogelnik's method. Second, the fabrication or recording process of the volume hologram is described in detail. The active region of the volume hologram is created by interference of two coherent beams within the thin film. Third, the experimental setup and measurements of properties including the diffraction efficiencies of the volume hologram and the thickness of the active region are presented. Fourth, the polarimetric response of the volume hologram is investigated. The polarization study provides insight into the effect of the refractive index modulation on the polarization state and diffraction efficiency of the incident light.
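The starting point of such an optimization is Kogelnik's closed-form efficiency for a lossless, unslanted transmission grating at Bragg incidence. The parameter values below are illustrative, not the dissertation's design:

```python
import math

# Kogelnik two-wave coupled-wave result (lossless, unslanted transmission
# grating, Bragg incidence): eta = sin^2(pi * dn * d / (lambda * cos(theta))),
# with dn the index modulation, d the grating thickness, theta the internal
# Bragg angle.
def kogelnik_efficiency(dn, d, wavelength, theta):
    nu = math.pi * dn * d / (wavelength * math.cos(theta))
    return math.sin(nu) ** 2

# example: a 10-um-thick grating probed at 532 nm and a 20-degree angle,
# with an index modulation chosen near the first efficiency peak
eta = kogelnik_efficiency(dn=0.025, d=10e-6, wavelength=532e-9,
                          theta=math.radians(20))
```

Inverting this relation for dn at each design wavelength is the kind of step the Monte Carlo search over index modulation automates.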

  12. Large Eddy Simulation of wind turbine wakes: detailed comparisons of two codes focusing on effects of numerics and subgrid modeling

    NASA Astrophysics Data System (ADS)

    Martínez-Tossas, Luis A.; Churchfield, Matthew J.; Meneveau, Charles

    2015-06-01

In this work we report results from a detailed comparative numerical study of two Large Eddy Simulation (LES) codes using the Actuator Line Model (ALM). The study focuses on prediction of wind turbine wakes and their breakdown when subject to uniform inflow. Previous studies have shown relative insensitivity to subgrid modeling in the context of a finite-volume code. The present study uses the low-dissipation pseudo-spectral LES code from Johns Hopkins University (LESGO) and the second-order, finite-volume OpenFOAM code (SOWFA) from the National Renewable Energy Laboratory. When subject to uniform inflow, the loads on the blades are found to be unaffected by subgrid models or numerics, as expected. The turbulence in the wake and the location of transition to a turbulent state are affected by the subgrid-scale model and the numerics.

  13. Large Eddy Simulation of Wind Turbine Wakes. Detailed Comparisons of Two Codes Focusing on Effects of Numerics and Subgrid Modeling

    DOE PAGES

    Martinez-Tossas, Luis A.; Churchfield, Matthew J.; Meneveau, Charles

    2015-06-18

In this work we report results from a detailed comparative numerical study of two Large Eddy Simulation (LES) codes using the Actuator Line Model (ALM). The study focuses on prediction of wind turbine wakes and their breakdown when subject to uniform inflow. Previous studies have shown relative insensitivity to subgrid modeling in the context of a finite-volume code. The present study uses the low-dissipation pseudo-spectral LES code from Johns Hopkins University (LESGO) and the second-order, finite-volume OpenFOAM code (SOWFA) from the National Renewable Energy Laboratory. When subject to uniform inflow, the loads on the blades are found to be unaffected by subgrid models or numerics, as expected. The turbulence in the wake and the location of transition to a turbulent state are affected by the subgrid-scale model and the numerics.

  14. Lysine production from methanol at 50 degrees C using Bacillus methanolicus: Modeling volume control, lysine concentration, and productivity using a three-phase continuous simulation.

    PubMed

    Lee, G H; Hur, W; Bremmon, C E; Flickinger, M C

    1996-03-20

A simulation was developed based on experimental data obtained in a 14-L reactor to predict the growth and L-lysine accumulation kinetics, and change in volume, of a large-scale (250-m(3)) Bacillus methanolicus methanol-based process. Homoserine auxotrophs of B. methanolicus MGA3 are unique methylotrophs because of their ability to secrete lysine during aerobic growth and threonine starvation at 50 degrees C. Dissolved methanol (100 mM), pH, dissolved oxygen tension (0.063 atm), and threonine levels were controlled to obtain threonine-limited conditions and high cell density (25 g dry cell weight/L) in a 14-L reactor. As a fed-batch process, the additions of neat methanol (fed on demand), threonine, and other nutrients cause the volume of the fermentation to increase and the final lysine concentration to decrease. In addition, water produced as a result of methanol metabolism contributes to the increase in the volume of the reactor. A three-phase approach was used to predict the rate of change of culture volume based on carbon dioxide production and methanol consumption. This model was used for the evaluation of volume control strategies to optimize lysine productivity. A constant-volume reactor process with variable feeding and continuous removal of broth and cells (VF(cstr)) resulted in higher lysine productivity than a fed-batch process without volume control. This model predicts the variation in productivity of lysine with changes in growth and in specific lysine productivity. Simple modifications of the model allow one to investigate other high-lysine-secreting strains with different growth and lysine productivity characteristics. Strain NOA2#13A5-2, which secretes lysine and other end-products, was modeled using both growth- and non-growth-associated lysine productivity. A modified version of this model was used to simulate the change in culture volume of another L-lysine-producing mutant (NOA2#13A52-8A66) with reduced secretion of end-products. The modified
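The volume bookkeeping in this record (feed additions plus metabolic water both inflate the broth) can be reduced to a one-line balance. The rates and yield below are invented placeholders, not the paper's fitted parameters:

```python
# Minimal sketch of fed-batch volume growth: dV/dt = feed_rate
# + water_yield * methanol_consumption_rate, integrated with forward Euler.
# All numbers here are illustrative placeholders.
def simulate_volume(v0, feed_rate, water_yield, methanol_rate, dt, steps):
    v = v0
    for _ in range(steps):
        v += (feed_rate + water_yield * methanol_rate) * dt
    return v

# 14-L-scale example: 10 h at constant rates (L, L/h, L water per L methanol)
v_final = simulate_volume(v0=8.0, feed_rate=0.05, water_yield=0.5,
                          methanol_rate=0.04, dt=1.0, steps=10)
```

In a constant-volume (VF(cstr)) strategy the same balance is closed by a broth-removal term equal to the right-hand side, which is what keeps the lysine concentration from being diluted.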

  15. SUSY’s Ladder: Reframing sequestering at Large Volume

    DOE PAGES

    Reece, Matthew; Xue, Wei

    2016-04-07

Theories with approximate no-scale structure, such as the Large Volume Scenario, have a distinctive hierarchy of multiple mass scales in between TeV gaugino masses and the Planck scale, which we call SUSY's Ladder. This is a particular realization of Split Supersymmetry in which the same small parameter suppresses gaugino masses relative to scalar soft masses, scalar soft masses relative to the gravitino mass, and the UV cutoff or string scale relative to the Planck scale. This scenario has many phenomenologically interesting properties, and can avoid dangers including the gravitino problem, flavor problems, and the moduli-induced LSP problem that plague other supersymmetric theories. We study SUSY's Ladder using a superspace formalism that makes the mysterious cancellations in previous computations manifest. This opens the possibility of a consistent effective field theory understanding of the phenomenology of these scenarios, based on power-counting in the small ratio of string to Planck scales. We also show that four-dimensional theories with approximate no-scale structure enforced by a single volume modulus arise only from two special higher-dimensional theories: five-dimensional supergravity and ten-dimensional type IIB supergravity. As a result, this gives a phenomenological argument in favor of ten-dimensional ultraviolet physics which is different from standard arguments based on the consistency of superstring theory.

  16. Generation of a large volume of clinically relevant nanometre-sized ultra-high-molecular-weight polyethylene wear particles for cell culture studies

    PubMed Central

    Ingham, Eileen; Fisher, John; Tipper, Joanne L

    2014-01-01

    It has recently been shown that the wear of ultra-high-molecular-weight polyethylene in hip and knee prostheses leads to the generation of nanometre-sized particles, in addition to micron-sized particles. The biological activity of nanometre-sized ultra-high-molecular-weight polyethylene wear particles has not, however, previously been studied due to difficulties in generating sufficient volumes of nanometre-sized ultra-high-molecular-weight polyethylene wear particles suitable for cell culture studies. In this study, wear simulation methods were investigated to generate a large volume of endotoxin-free clinically relevant nanometre-sized ultra-high-molecular-weight polyethylene wear particles. Both single-station and six-station multidirectional pin-on-plate wear simulators were used to generate ultra-high-molecular-weight polyethylene wear particles under sterile and non-sterile conditions. Microbial contamination and endotoxin levels in the lubricants were determined. The results indicated that microbial contamination was absent and endotoxin levels were low and within acceptable limits for the pharmaceutical industry, when a six-station pin-on-plate wear simulator was used to generate ultra-high-molecular-weight polyethylene wear particles in a non-sterile environment. Different pore-sized polycarbonate filters were investigated to isolate nanometre-sized ultra-high-molecular-weight polyethylene wear particles from the wear test lubricants. The use of the filter sequence of 10, 1, 0.1, 0.1 and 0.015 µm pore sizes allowed successful isolation of ultra-high-molecular-weight polyethylene wear particles with a size range of < 100 nm, which was suitable for cell culture studies. PMID:24658586

  17. Generation of a large volume of clinically relevant nanometre-sized ultra-high-molecular-weight polyethylene wear particles for cell culture studies.

    PubMed

    Liu, Aiqin; Ingham, Eileen; Fisher, John; Tipper, Joanne L

    2014-04-01

    It has recently been shown that the wear of ultra-high-molecular-weight polyethylene in hip and knee prostheses leads to the generation of nanometre-sized particles, in addition to micron-sized particles. The biological activity of nanometre-sized ultra-high-molecular-weight polyethylene wear particles has not, however, previously been studied due to difficulties in generating sufficient volumes of nanometre-sized ultra-high-molecular-weight polyethylene wear particles suitable for cell culture studies. In this study, wear simulation methods were investigated to generate a large volume of endotoxin-free clinically relevant nanometre-sized ultra-high-molecular-weight polyethylene wear particles. Both single-station and six-station multidirectional pin-on-plate wear simulators were used to generate ultra-high-molecular-weight polyethylene wear particles under sterile and non-sterile conditions. Microbial contamination and endotoxin levels in the lubricants were determined. The results indicated that microbial contamination was absent and endotoxin levels were low and within acceptable limits for the pharmaceutical industry, when a six-station pin-on-plate wear simulator was used to generate ultra-high-molecular-weight polyethylene wear particles in a non-sterile environment. Different pore-sized polycarbonate filters were investigated to isolate nanometre-sized ultra-high-molecular-weight polyethylene wear particles from the wear test lubricants. The use of the filter sequence of 10, 1, 0.1, 0.1 and 0.015 µm pore sizes allowed successful isolation of ultra-high-molecular-weight polyethylene wear particles with a size range of < 100 nm, which was suitable for cell culture studies.

  18. Development of a Plastic Embedding Method for Large-Volume and Fluorescent-Protein-Expressing Tissues

    PubMed Central

    Yang, Zhongqin; Hu, Bihe; Zhang, Yuhui; Luo, Qingming; Gong, Hui

    2013-01-01

Fluorescent proteins serve as important biomarkers for visualizing both subcellular organelles in living cells and structural and functional details in large-volume tissues or organs. However, current techniques for plastic embedding are limited in their ability to preserve fluorescence while remaining suitable for micro-optical sectioning tomography of large-volume samples. In this study, we quantitatively evaluated the fluorescence preservation and penetration time of several commonly used resins in a Thy1-eYFP-H transgenic whole mouse brain, including glycol methacrylate (GMA), LR White, hydroxypropyl methacrylate (HPMA) and Unicryl. We found that HPMA embedding doubled the eYFP fluorescence intensity but required long incubation durations for whole-brain penetration. GMA, Unicryl and LR White each penetrated the brain rapidly but also led to variable quenching of eYFP fluorescence. Among the fast-penetrating resins, GMA preserved fluorescence better than LR White and Unicryl. We found that we could optimize the GMA formulation by reducing the polymerization temperature, removing 4-methoxyphenol and adjusting the pH of the resin solution to be alkaline. By optimizing the GMA formulation, we increased the percentage of eYFP fluorescence preservation in GMA-embedded brains nearly two-fold. These results suggest that modified GMA is suitable for embedding large-volume tissues such as whole mouse brain and provide a novel approach for visualizing brain-wide networks. PMID:23577174

  19. Characteristics of Tornado-Like Vortices Simulated in a Large-Scale Ward-Type Simulator

    NASA Astrophysics Data System (ADS)

    Tang, Zhuo; Feng, Changda; Wu, Liang; Zuo, Delong; James, Darryl L.

    2018-02-01

    Tornado-like vortices are simulated in a large-scale Ward-type simulator to further advance the understanding of such flows, and to facilitate future studies of tornado wind loading on structures. Measurements of the velocity fields near the simulator floor and the resulting floor surface pressures are interpreted to reveal the mean and fluctuating characteristics of the flow as well as the characteristics of the static-pressure deficit. We focus on the manner in which the swirl ratio and the radial Reynolds number affect these characteristics. The transition of the tornado-like flow from a single-celled vortex to a dual-celled vortex with increasing swirl ratio and the impact of this transition on the flow field and the surface-pressure deficit are closely examined. The mean characteristics of the surface-pressure deficit caused by tornado-like vortices simulated at a number of swirl ratios compare well with the corresponding characteristics recorded during full-scale tornadoes.

  20. Hybrid Reynolds-Averaged/Large Eddy Simulation of a Cavity Flameholder; Assessment of Modeling Sensitivities

    NASA Technical Reports Server (NTRS)

    Baurle, R. A.

    2015-01-01

Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. The cases simulated corresponded to those used to examine this flowfield experimentally using particle image velocimetry. A variety of turbulence models were used for the steady-state Reynolds-averaged simulations, which included both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged/large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken to formally assess the performance of the hybrid Reynolds-averaged/large eddy simulation modeling approach in a flowfield of interest to the scramjet research community. The numerical errors were quantified for both the steady-state and scale-resolving simulations prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results showed a high degree of variability when comparing the predictions obtained from each turbulence model, with the non-linear eddy viscosity model (an explicit algebraic stress model) providing the most accurate prediction of the measured values. The hybrid Reynolds-averaged/large eddy simulation results were carefully scrutinized to ensure that even the coarsest grid had an acceptable level of resolution for large eddy simulation, and that the time-averaged statistics were acceptably accurate. The autocorrelation and its Fourier transform were the primary tools used for this assessment. The statistics extracted from the hybrid simulation strategy proved to be more accurate than the Reynolds-averaged results obtained using the linear eddy viscosity models. However, there was no predictive improvement noted over the results obtained from the explicit
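The autocorrelation used above to judge whether time-averaged statistics are converged has a very simple estimator. A minimal sketch (the biased lag estimator; white noise stands in for a real probe signal):

```python
import numpy as np

# Normalized autocorrelation of a signal up to max_lag. The lag at which
# it decays indicates the integral time scale, which sets how long an
# average must run for converged statistics.
def autocorrelation(x, max_lag):
    x = np.asarray(x, dtype=float) - np.mean(x)
    var = np.mean(x * x)
    return np.array([np.mean(x[: len(x) - k] * x[k:]) / var
                     for k in range(max_lag + 1)])

rng = np.random.default_rng(0)
rho = autocorrelation(rng.standard_normal(10000), max_lag=5)
# rho[0] is 1 by construction; white noise decorrelates almost immediately
```

Its Fourier transform gives the spectrum, the other diagnostic named in the record, via the Wiener-Khinchin relation.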

  1. Preparing for Large-Force Exercises with Distributed Simulation: A Panel Presentation

    DTIC Science & Technology

    2010-07-01

Preparing for Large Force Exercises with Distributed Simulation: A Panel Presentation. Peter Crane, Winston Bennett, Michael France, Air Force...used distributed simulation training to complement live-fly exercises to prepare for LFEs. In this panel presentation, the speakers will describe... presentations on how detailed analysis of training needs is necessary to structure simulator scenarios and how future training exercises could be made more

  2. Large-eddy simulation of a boundary layer with concave streamwise curvature

    NASA Technical Reports Server (NTRS)

    Lund, Thomas S.

    1994-01-01

Turbulence modeling continues to be one of the most difficult problems in fluid mechanics. Existing prediction methods are well developed for certain classes of simple equilibrium flows, but are still not entirely satisfactory for a large category of complex non-equilibrium flows found in engineering practice. Direct and large-eddy simulation (LES) approaches have long been believed to have great potential for the accurate prediction of difficult turbulent flows, but the associated computational cost has been prohibitive for practical problems. This remains true for direct simulation but is no longer clear for large-eddy simulation. Advances in computer hardware, numerical methods, and subgrid-scale modeling have made it possible to conduct LES for flows of practical interest at Reynolds numbers in the range of laboratory experiments. The objective of this work is to apply LES and the dynamic subgrid-scale model to the flow of a boundary layer over a concave surface.

  3. Ultrafast, sensitive and large-volume on-chip real-time PCR for the molecular diagnosis of bacterial and viral infections.

    PubMed

    Houssin, Timothée; Cramer, Jérémy; Grojsman, Rébecca; Bellahsene, Lyes; Colas, Guillaume; Moulet, Hélène; Minnella, Walter; Pannetier, Christophe; Leberre, Maël; Plecis, Adrien; Chen, Yong

    2016-04-21

    To control future infectious disease outbreaks, like the 2014 Ebola epidemic, it is necessary to develop ultrafast molecular assays enabling rapid and sensitive diagnoses. To that end, several ultrafast real-time PCR systems have been previously developed, but they present issues that hinder their wide adoption, notably regarding their sensitivity and detection volume. An ultrafast, sensitive and large-volume real-time PCR system based on microfluidic thermalization is presented herein. The method is based on the circulation of pre-heated liquids in a microfluidic chip that thermalize the PCR chamber by diffusion and ultrafast flow switches. The system can achieve up to 30 real-time PCR cycles in around 2 minutes, which makes it the fastest PCR thermalization system for regular sample volume to the best of our knowledge. After biochemical optimization, anthrax and Ebola simulating agents could be respectively detected by a real-time PCR in 7 minutes and a reverse transcription real-time PCR in 7.5 minutes. These detections are respectively 6.4 and 7.2 times faster than with an off-the-shelf apparatus, while conserving real-time PCR sample volume, efficiency, selectivity and sensitivity. The high-speed thermalization also enabled us to perform sharp melting curve analyses in only 20 s and to discriminate amplicons of different lengths by rapid real-time PCR. This real-time PCR microfluidic thermalization system is cost-effective, versatile and can be then further developed for point-of-care, multiplexed, ultrafast and highly sensitive molecular diagnoses of bacterial and viral diseases.

  4. Large eddy simulations in 2030 and beyond

    PubMed Central

    Piomelli, U

    2014-01-01

    Since its introduction, in the early 1970s, large eddy simulations (LES) have advanced considerably, and their application is transitioning from the academic environment to industry. Several landmark developments can be identified over the past 40 years, such as the wall-resolved simulations of wall-bounded flows, the development of advanced models for the unresolved scales that adapt to the local flow conditions and the hybridization of LES with the solution of the Reynolds-averaged Navier–Stokes equations. Thanks to these advancements, LES is now in widespread use in the academic community and is an option available in most commercial flow-solvers. This paper will try to predict what algorithmic and modelling advancements are needed to make it even more robust and inexpensive, and which areas show the most promise. PMID:25024415

  5. A computer simulation of free-volume distributions and related structural properties in a model lipid bilayer.

    PubMed Central

    Xiang, T X

    1993-01-01

A novel combined approach of molecular dynamics (MD) and Monte Carlo simulations is developed to calculate various free-volume distributions as a function of position in a lipid bilayer membrane at 323 K. The model bilayer consists of 2 x 100 chain molecules, each having 15 carbon segments and one head group, subject to forces restricting bond stretching, bending, and torsional motions. At a surface density of 30 Å2/chain molecule, the probability density of finding effective free volume available to spherical permeants displays a distribution with two exponential components. Both pre-exponential factors, p1 and p2, remain roughly constant in the highly ordered chain region, with average values of 0.012 and 0.00039 Å-3, respectively, and increase to 0.049 and 0.0067 Å-3 at the mid-plane. The first characteristic cavity size V1 is only weakly dependent on position in the bilayer interior, with an average value of 3.4 Å3, while the second characteristic cavity size V2 varies more dramatically, from a plateau value of 12.9 Å3 in the highly ordered chain region to 9.0 Å3 in the center of the bilayer. The mean cavity shape is described in terms of a probability distribution for the angle at which the test permeant is in contact with one of, and does not overlap with any of, the chain segments in the bilayer. The results show that (a) free volume is elongated in the highly ordered chain region, with its long axis normal to the bilayer interface, approaching spherical symmetry in the center of the bilayer, and (b) small free volume is more elongated than large free volume. The order and conformational structures relevant to the free-volume distributions are also examined. It is found that both overall and internal motions contribute comparably to local disorder and couple strongly with each other, and that kink defects occur with higher probability than predicted from an independent-transition model. PMID:8241390
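Reading "two exponential components" as a sum of two decaying exponentials (an assumed functional form, with the mid-plane parameter values quoted in the abstract), the density can be evaluated directly:

```python
import math

# Assumed two-exponential free-volume density:
# p(V) = p1 * exp(-V / V1) + p2 * exp(-V / V2),
# with the quoted mid-plane values: p1, p2 in 1/A^3 and V1, V2 in A^3.
def free_volume_density(v, p1=0.049, p2=0.0067, v1=3.4, v2=9.0):
    return p1 * math.exp(-v / v1) + p2 * math.exp(-v / v2)

# small cavities dominate; large cavities come from the longer-tailed
# second component
small = free_volume_density(1.0)    # a ~1 A^3 cavity
large = free_volume_density(30.0)   # a ~30 A^3 cavity
```

The larger V2 of the second component is what supplies the rare big cavities that matter most for permeant hopping.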

  6. Sensitivity analysis of some critical factors affecting simulated intrusion volumes during a low pressure transient event in a full-scale water distribution system.

    PubMed

    Ebacher, G; Besner, M C; Clément, B; Prévost, M

    2012-09-01

Intrusion events caused by transient low pressures may result in the contamination of a water distribution system (DS). This work aims at estimating the range of potential intrusion volumes that could result from a real downsurge event caused by a momentary pump shutdown. A model calibrated with transient low pressure recordings was used to simulate total intrusion volumes through leakage orifices and submerged air vacuum valves (AVVs). Four critical factors influencing intrusion volumes were varied: the external head of (untreated) water on leakage orifices, the external head of (untreated) water on submerged air vacuum valves, the leakage rate, and the diameter of AVVs' outlet orifice (represented by a multiplicative factor). Leakage orifices' head and AVVs' orifice head levels were assessed through fieldwork. Two sets of runs were generated as part of two statistically designed experiments. A first set of 81 runs was based on a complete factorial design in which each factor was varied over 3 levels. A second set of 40 runs was based on a Latin hypercube design, better suited for experimental runs on a computer model. The simulations were conducted using commercially available transient analysis software. Responses, measured by total intrusion volumes, ranged from 10 to 366 L. A second-degree polynomial was used to analyze the total intrusion volumes. Sensitivity analyses of both designs revealed that the relationship between the total intrusion volume and the four contributing factors is not monotonic, with the AVVs' orifice head being the most influential factor. When intrusion through both pathways occurs concurrently, interactions between the intrusion flows through leakage orifices and submerged AVVs influence intrusion volumes. When only intrusion through leakage orifices is considered, the total intrusion volume is more largely influenced by the leakage rate than by the leakage orifices' head. The latter mainly impacts the extent of the area affected by
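The Latin hypercube idea behind the 40-run design is easy to sketch: each factor's range is split into n equal-probability strata, each stratum is sampled exactly once, and the strata are randomly paired across factors. An illustrative implementation (not the study's software):

```python
import numpy as np

# Latin hypercube sample on [0, 1)^n_factors: in every column, each of the
# n_runs strata [k/n, (k+1)/n) is visited exactly once, in random order.
def latin_hypercube(n_runs, n_factors, rng):
    cols = [(rng.permutation(n_runs) + rng.random(n_runs)) / n_runs
            for _ in range(n_factors)]
    return np.column_stack(cols)

rng = np.random.default_rng(1)
design = latin_hypercube(40, 4, rng)
# rescale each column to its factor's physical range before running the model
```

Compared with the 81-run full factorial (3 levels per factor), the hypercube covers each one-dimensional margin far more finely for half the computational cost.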

  7. APPHi: Automated Photometry Pipeline for High Cadence Large Volume Data

    NASA Astrophysics Data System (ADS)

    Sánchez, E.; Castro, J.; Silva, J.; Hernández, J.; Reyes, M.; Hernández, B.; Alvarez, F.; García, T.

    2018-04-01

    APPHi (Automated Photometry Pipeline) carries out aperture and differential photometry of TAOS-II project data. It is computationally efficient and can also be used with other astronomical wide-field image data. APPHi works with large volumes of data and handles both FITS and HDF5 formats. Due to the large number of stars that the software has to handle in an enormous number of frames, it is optimized to automatically find the best parameter values for the photometry, such as the aperture mask size, the size of the window for extracting a single star, and the count threshold for detecting a faint star. Although intended to work with TAOS-II data, APPHi can analyze any set of astronomical images and is a robust and versatile tool for performing stellar aperture and differential photometry.
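    The core of aperture photometry as described above can be illustrated in a few lines: sum the counts inside a circular mask and subtract a sky level estimated in a surrounding annulus. This is a generic sketch, not APPHi's implementation; APPHi chooses the mask and window sizes automatically.

```python
import numpy as np

def aperture_photometry(img, x0, y0, r_ap, r_in, r_out):
    """Minimal circular-aperture photometry with annulus sky subtraction
    (illustrative only; a production pipeline handles centroiding,
    partial pixels, and thresholds as well)."""
    yy, xx = np.indices(img.shape)
    r = np.hypot(xx - x0, yy - y0)
    aperture = r <= r_ap
    annulus = (r >= r_in) & (r <= r_out)
    sky_per_pixel = np.median(img[annulus])   # robust background estimate
    return img[aperture].sum() - sky_per_pixel * aperture.sum()

# Synthetic frame: flat sky of 100 counts plus one Gaussian star of ~5000 counts.
yy, xx = np.indices((64, 64))
star = 198.94 * np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / (2 * 2.0 ** 2))
flux = aperture_photometry(100.0 + star, 32, 32, r_ap=6, r_in=10, r_out=15)
```

    Differential photometry then divides such fluxes by those of nearby reference stars measured the same way, cancelling frame-to-frame transparency variations.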

  8. Electro-mechanical probe positioning system for large volume plasma device

    NASA Astrophysics Data System (ADS)

    Sanyasi, A. K.; Sugandhi, R.; Srivastava, P. K.; Srivastav, Prabhakar; Awasthi, L. M.

    2018-05-01

    An automated electro-mechanical system for the positioning of plasma diagnostics has been designed and implemented in a Large Volume Plasma Device (LVPD). The system consists of 12 electro-mechanical assemblies, which are orchestrated using the Modbus communication protocol on 4-wire RS485 communications to meet the experimental requirements. Each assembly has a lead screw-based mechanical structure, Wilson feed-through-based vacuum interface, bipolar stepper motor, micro-controller-based stepper drive, and optical encoder for online positioning correction of probes. The novelty of the system lies in the orchestration of multiple drives on a single interface, fabrication and installation of the system for a large experimental device like the LVPD, in-house developed software, and adopted architectural practices. The paper discusses the design, description of hardware and software interfaces, and performance results in LVPD.
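    For a lead-screw drive like the one described, probe travel maps to stepper pulses through the screw pitch and the drive's microstepping, with the optical encoder closing the loop on any residual error. A small sketch of that arithmetic follows; the pitch and drive settings are illustrative assumptions, as the abstract does not give the LVPD system's actual values.

```python
def steps_for_travel(distance_mm, pitch_mm=2.0, steps_per_rev=200, microsteps=16):
    """Convert a requested probe travel into stepper pulses for a
    lead-screw drive (assumed 2 mm pitch, 200 full steps/rev, x16 microstepping)."""
    return round(distance_mm / pitch_mm * steps_per_rev * microsteps)

def corrected_steps(target_mm, encoder_mm, **kw):
    """Closed-loop correction: re-command only the residual error
    reported by the optical encoder."""
    return steps_for_travel(target_mm - encoder_mm, **kw)
```

    With these assumed settings, a 10 mm move commands 16000 pulses, and an encoder reading of 9.5 mm triggers an 800-pulse correction.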

  9. A Study on Fast Gates for Large-Scale Quantum Simulation with Trapped Ions.

    PubMed

    Taylor, Richard L; Bentley, Christopher D B; Pedernales, Julen S; Lamata, Lucas; Solano, Enrique; Carvalho, André R R; Hope, Joseph J

    2017-04-12

    Large-scale digital quantum simulations require thousands of fundamental entangling gates to construct the simulated dynamics. Despite success in a variety of small-scale simulations, quantum information processing platforms have hitherto failed to demonstrate the combination of precise control and scalability required to systematically outmatch classical simulators. We analyse how fast gates could enable trapped-ion quantum processors to achieve the requisite scalability to outperform classical computers without error correction. We analyze the performance of a large-scale digital simulator, and find that fidelity of around 70% is realizable for π-pulse infidelities below 10−5 in traps subject to realistic rates of heating and dephasing. This scalability relies on fast gates: entangling gates faster than the trap period.

  10. The Monte Carlo simulation of the Borexino detector

    NASA Astrophysics Data System (ADS)

    Agostini, M.; Altenmüller, K.; Appel, S.; Atroshchenko, V.; Bagdasarian, Z.; Basilico, D.; Bellini, G.; Benziger, J.; Bick, D.; Bonfini, G.; Borodikhina, L.; Bravo, D.; Caccianiga, B.; Calaprice, F.; Caminata, A.; Canepa, M.; Caprioli, S.; Carlini, M.; Cavalcante, P.; Chepurnov, A.; Choi, K.; D'Angelo, D.; Davini, S.; Derbin, A.; Ding, X. F.; Di Noto, L.; Drachnev, I.; Fomenko, K.; Formozov, A.; Franco, D.; Froborg, F.; Gabriele, F.; Galbiati, C.; Ghiano, C.; Giammarchi, M.; Goeger-Neff, M.; Goretti, A.; Gromov, M.; Hagner, C.; Houdy, T.; Hungerford, E.; Ianni, Aldo; Ianni, Andrea; Jany, A.; Jeschke, D.; Kobychev, V.; Korablev, D.; Korga, G.; Kryn, D.; Laubenstein, M.; Litvinovich, E.; Lombardi, F.; Lombardi, P.; Ludhova, L.; Lukyanchenko, G.; Machulin, I.; Magnozzi, M.; Manuzio, G.; Marcocci, S.; Martyn, J.; Meroni, E.; Meyer, M.; Miramonti, L.; Misiaszek, M.; Muratova, V.; Neumair, B.; Oberauer, L.; Opitz, B.; Ortica, F.; Pallavicini, M.; Papp, L.; Pocar, A.; Ranucci, G.; Razeto, A.; Re, A.; Romani, A.; Roncin, R.; Rossi, N.; Schönert, S.; Semenov, D.; Shakina, P.; Skorokhvatov, M.; Smirnov, O.; Sotnikov, A.; Stokes, L. F. F.; Suvorov, Y.; Tartaglia, R.; Testera, G.; Thurn, J.; Toropova, M.; Unzhakov, E.; Vishneva, A.; Vogelaar, R. B.; von Feilitzsch, F.; Wang, H.; Weinz, S.; Wojcik, M.; Wurm, M.; Yokley, Z.; Zaimidoroga, O.; Zavatarelli, S.; Zuber, K.; Zuzel, G.

    2018-01-01

    We describe the Monte Carlo (MC) simulation of the Borexino detector and the agreement of its output with data. The Borexino MC "ab initio" simulates the energy loss of particles in all detector components and generates the resulting scintillation photons and their propagation within the liquid scintillator volume. The simulation accounts for absorption, reemission, and scattering of the optical photons and tracks them until they either are absorbed or reach the photocathode of one of the photomultiplier tubes. Photon detection is followed by a comprehensive simulation of the readout electronics response. The MC is tuned using data collected with radioactive calibration sources deployed inside and around the scintillator volume. The simulation reproduces the energy response of the detector, its uniformity within the fiducial scintillator volume relevant to neutrino physics, and the time distribution of detected photons to better than 1% between 100 keV and several MeV. The techniques developed to simulate the Borexino detector and their level of refinement are of possible interest to the neutrino community, especially for current and future large-volume liquid scintillator experiments such as KamLAND-Zen, SNO+, and JUNO.

  11. General-relativistic Large-eddy Simulations of Binary Neutron Star Mergers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radice, David, E-mail: dradice@astro.princeton.edu

    The flow inside remnants of binary neutron star (NS) mergers is expected to be turbulent, because of magnetohydrodynamic instabilities activated at scales too small to be resolved in simulations. To study the large-scale impact of these instabilities, we develop a new formalism, based on the large-eddy simulation technique, for the modeling of subgrid-scale turbulent transport in general relativity. We apply it, for the first time, to the simulation of the late inspiral and merger of two NSs. We find that turbulence can significantly affect the structure and survival time of the merger remnant, as well as its gravitational-wave (GW) and neutrino emissions. The former will be relevant for GW observation of merging NSs. The latter will affect the composition of the outflow driven by the merger and might influence its nucleosynthetic yields. The accretion rate after black hole formation is also affected. Nevertheless, we find that, for the most likely values of the turbulence mixing efficiency, these effects are relatively small and the GW signal will be affected only weakly by the turbulence. Thus, our simulations provide a first validation of all existing post-merger GW models.

  12. A Second Law Based Unstructured Finite Volume Procedure for Generalized Flow Simulation

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok

    1998-01-01

    An unstructured finite volume procedure has been developed for steady and transient thermo-fluid dynamic analysis of fluid systems and components. The procedure is applicable for a flow network consisting of pipes and various fittings where flow is assumed to be one dimensional. It can also be used to simulate flow in a component by modeling a multi-dimensional flow using the same numerical scheme. The flow domain is discretized into a number of interconnected control volumes located arbitrarily in space. The conservation equations for each control volume account for the transport of mass, momentum and entropy from the neighboring control volumes. In addition, they also include the sources of each conserved variable and time dependent terms. The source term of entropy equation contains entropy generation due to heat transfer and fluid friction. Thermodynamic properties are computed from the equation of state of a real fluid. The system of equations is solved by a hybrid numerical method which is a combination of simultaneous Newton-Raphson and successive substitution schemes. The paper also describes the application and verification of the procedure by comparing its predictions with the analytical and numerical solution of several benchmark problems.
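    The hybrid solution strategy described above can be sketched in a few lines: Newton-Raphson iteration on the strongly coupled unknowns, with a successive-substitution (fixed-point) update for the weakly coupled ones. This is a generic illustration of the idea on a toy system, not the paper's solver.

```python
import numpy as np

def hybrid_solve(residual, jacobian, update_secondary, x0, s0,
                 tol=1e-10, max_iter=50):
    """Hybrid scheme: Newton-Raphson on the primary unknowns x,
    successive substitution on the secondary unknowns s."""
    x, s = np.asarray(x0, float), np.asarray(s0, float)
    for _ in range(max_iter):
        r = residual(x, s)
        if np.linalg.norm(r) < tol:
            break
        x = x - np.linalg.solve(jacobian(x, s), r)  # Newton-Raphson step
        s = update_secondary(x, s)                  # fixed-point update
    return x, s

# Toy coupled system: solve x**2 = s together with s = 2 + 0.1*x.
x, s = hybrid_solve(
    residual=lambda x, s: np.array([x[0] ** 2 - s[0]]),
    jacobian=lambda x, s: np.array([[2 * x[0]]]),
    update_secondary=lambda x, s: np.array([2 + 0.1 * x[0]]),
    x0=[1.0], s0=[2.0])
```

    In a flow network, x would hold pressures and flow rates and s the property-like variables (e.g., entropy or temperature) that feed back only weakly into the momentum balance.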

  13. New material model for simulating large impacts on rocky bodies

    NASA Astrophysics Data System (ADS)

    Tonge, A.; Barnouin, O.; Ramesh, K.

    2014-07-01

    Large impact craters on an asteroid can provide insights into its internal structure. These craters can expose material from the interior of the body at the impact site [e.g., 1]; additionally, the impact sends stress waves throughout the body, which interrogate the asteroid's interior. Through a complex interplay of processes, such impacts can result in a variety of motions, the consequence of which may appear as lineaments that are exposed over all or portions of the asteroid's surface [e.g., 2,3]. While analytic, scaling, and heuristic arguments can provide some insight into general phenomena on asteroids, interpreting the results of a specific impact event, or series of events, on a specific asteroid geometry generally necessitates the use of computational approaches that can solve for the stress and displacement history resulting from an impact event. These computational approaches require a constitutive model for the material, which relates the deformation history of a small material volume to the average force on the boundary of that material volume. In this work, we present a new material model that is suitable for simulating the failure of rocky materials during impact events. This material model is similar to the model discussed in [4]. The new material model incorporates dynamic sub-scale crack interactions through a micro-mechanics-based damage model, thermodynamic effects through the use of a Mie-Gruneisen equation of state, and granular flow of the fully damaged material. The granular flow model includes dilatation resulting from the mutual interaction of small fragments of material (grains) as they are forced to slide and roll over each other and includes a P-α type porosity model to account for compaction of the granular material in a subsequent impact event. The micro-mechanics-based damage model provides a direct connection between the flaw (crack) distribution in the material and the rate-dependent strength. By connecting the rate

  14. Large-eddy simulations of a Salt Lake Valley cold-air pool

    NASA Astrophysics Data System (ADS)

    Crosman, Erik T.; Horel, John D.

    2017-09-01

    Persistent cold-air pools are often poorly forecast by mesoscale numerical weather prediction models, in part due to inadequate parameterization of planetary boundary-layer physics in stable atmospheric conditions, and also because of errors in the initialization and treatment of the model surface state. In this study, an improved numerical simulation of the 27-30 January 2011 cold-air pool in Utah's Great Salt Lake Basin is obtained using a large-eddy simulation with more realistic surface state characterization. Compared to a Weather Research and Forecasting model configuration run as a mesoscale model with a planetary boundary-layer scheme where turbulence is highly parameterized, the large-eddy simulation more accurately captured turbulent interactions between the stable boundary-layer and flow aloft. The simulations were also found to be sensitive to variations in the Great Salt Lake temperature and Salt Lake Valley snow cover, illustrating the importance of land surface state in modelling cold-air pools.

  15. Large-scale tropospheric transport in the Chemistry-Climate Model Initiative (CCMI) simulations

    NASA Astrophysics Data System (ADS)

    Orbe, Clara; Yang, Huang; Waugh, Darryn W.; Zeng, Guang; Morgenstern, Olaf; Kinnison, Douglas E.; Lamarque, Jean-Francois; Tilmes, Simone; Plummer, David A.; Scinocca, John F.; Josse, Beatrice; Marecal, Virginie; Jöckel, Patrick; Oman, Luke D.; Strahan, Susan E.; Deushi, Makoto; Tanaka, Taichu Y.; Yoshida, Kohei; Akiyoshi, Hideharu; Yamashita, Yousuke; Stenke, Andreas; Revell, Laura; Sukhodolov, Timofei; Rozanov, Eugene; Pitari, Giovanni; Visioni, Daniele; Stone, Kane A.; Schofield, Robyn; Banerjee, Antara

    2018-05-01

    Understanding and modeling the large-scale transport of trace gases and aerosols is important for interpreting past (and projecting future) changes in atmospheric composition. Here we show that there are large differences in the global-scale atmospheric transport properties among the models participating in the IGAC SPARC Chemistry-Climate Model Initiative (CCMI). Specifically, we find up to 40 % differences in the transport timescales connecting the Northern Hemisphere (NH) midlatitude surface to the Arctic and to Southern Hemisphere high latitudes, where the mean age ranges between 1.7 and 2.6 years. We show that these differences are related to large differences in vertical transport among the simulations, in particular to differences in parameterized convection over the oceans. While stronger convection over NH midlatitudes is associated with slower transport to the Arctic, stronger convection in the tropics and subtropics is associated with faster interhemispheric transport. We also show that the differences among simulations constrained with fields derived from the same reanalysis products are as large as (and in some cases larger than) the differences among free-running simulations, most likely due to larger differences in parameterized convection. Our results indicate that care must be taken when using simulations constrained with analyzed winds to interpret the influence of meteorology on tropospheric composition.

  16. Diurnal fluctuations in brain volume: Statistical analyses of MRI from large populations.

    PubMed

    Nakamura, Kunio; Brown, Robert A; Narayanan, Sridar; Collins, D Louis; Arnold, Douglas L

    2015-09-01

    We investigated fluctuations in brain volume throughout the day using statistical modeling of magnetic resonance imaging (MRI) from large populations. We applied fully automated image analysis software to measure the brain parenchymal fraction (BPF), defined as the ratio of the brain parenchymal volume and intracranial volume, thus accounting for variations in head size. The MRI data came from serial scans of multiple sclerosis (MS) patients in clinical trials (n=755, 3269 scans) and from subjects participating in the Alzheimer's Disease Neuroimaging Initiative (ADNI, n=834, 6114 scans). The percent change in BPF was modeled with a linear mixed effect (LME) model, and the model was applied separately to the MS and ADNI datasets. The LME model for the MS datasets included random subject effects (intercept and slope over time) and fixed effects for the time-of-day, time from the baseline scan, and trial, which accounted for trial-related effects (for example, different inclusion criteria and imaging protocol). The model for ADNI additionally included the demographics (baseline age, sex, subject type [normal, mild cognitive impairment, or Alzheimer's disease], and interaction between subject type and time from baseline). There was a statistically significant effect of time-of-day on the BPF change in MS clinical trial datasets (-0.180 per day, that is, 0.180% of intracranial volume, p=0.019) as well as the ADNI dataset (-0.438 per day, that is, 0.438% of intracranial volume, p<0.0001), showing that the brain volume is greater in the morning. Linearly correcting the BPF values with the time-of-day reduced the required sample size to detect a 25% treatment effect (80% power and 0.05 significance level) on change in brain volume from 2 time-points over a period of 1 year by 2.6%.
Our results have significant implications for future brain volumetric studies, suggesting that there is a potential acquisition time bias that should be randomized or statistically controlled to
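    The modeling idea above can be illustrated with a NumPy-only fixed-effects analogue of the paper's mixed model: per-subject intercepts plus shared slopes for follow-up time and time-of-day. The data below are synthetic stand-ins (the MS/ADNI data are not public), with a planted diurnal effect of -0.18 per day of acquisition time.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic serial scans: 60 subjects x 5 scans, random per-subject intercepts,
# a true atrophy slope of -0.2 %ICV/year, and a diurnal effect of -0.18 %ICV/day.
n_subj, n_scans = 60, 5
subject = np.repeat(np.arange(n_subj), n_scans)
years = np.tile(np.arange(n_scans) * 0.5, n_subj)
tod = rng.uniform(8 / 24, 18 / 24, n_subj * n_scans)   # acquisition time, fraction of a day
bpf = (rng.normal(0, 0.5, n_subj)[subject] - 0.2 * years
       - 0.18 * tod + rng.normal(0, 0.05, len(years)))

# Design matrix: one dummy intercept per subject, then the two shared slopes.
X = np.column_stack([np.eye(n_subj)[subject], years, tod])
coef, *_ = np.linalg.lstsq(X, bpf, rcond=None)
slope_years, slope_tod = coef[-2], coef[-1]
```

    The fitted time-of-day slope recovers a negative diurnal effect, which is the quantity one would use to linearly correct BPF values, as in the abstract. The full study used a linear mixed-effects model with random slopes as well, which this sketch does not reproduce.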

  17. Subgrid-scale models for large-eddy simulation of rotating turbulent channel flows

    NASA Astrophysics Data System (ADS)

    Silvis, Maurits H.; Bae, Hyunji Jane; Trias, F. Xavier; Abkar, Mahdi; Moin, Parviz; Verstappen, Roel

    2017-11-01

    We aim to design subgrid-scale models for large-eddy simulation of rotating turbulent flows. Rotating turbulent flows form a challenging test case for large-eddy simulation due to the presence of the Coriolis force. The Coriolis force conserves the total kinetic energy while transporting it from small to large scales of motion, leading to the formation of large-scale anisotropic flow structures. The Coriolis force may also cause partial flow laminarization and the occurrence of turbulent bursts. Many subgrid-scale models for large-eddy simulation are, however, primarily designed to parametrize the dissipative nature of turbulent flows, ignoring the specific characteristics of transport processes. We, therefore, propose a new subgrid-scale model that, in addition to the usual dissipative eddy viscosity term, contains a nondissipative nonlinear model term designed to capture transport processes, such as those due to rotation. We show that the addition of this nonlinear model term leads to improved predictions of the energy spectra of rotating homogeneous isotropic turbulence as well as of the Reynolds stress anisotropy in spanwise-rotating plane-channel flows. This work is financed by the Netherlands Organisation for Scientific Research (NWO) under Project Number 613.001.212.
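    A generic way to write such a mixed model (a sketch of the idea only; the paper's exact tensor form and coefficients may differ) is an eddy-viscosity part plus a nondissipative nonlinear part built from the resolved strain-rate tensor S̄ and rotation-rate tensor Ω̄:

```latex
\tau^{\mathrm{sgs}}_{ij} \;=\;
\underbrace{-2\,\nu_e\,\bar{S}_{ij}}_{\text{dissipative}}
\;+\;
\underbrace{c_N\left(\bar{S}_{ik}\,\bar{\Omega}_{kj}
 - \bar{\Omega}_{ik}\,\bar{S}_{kj}\right)}_{\text{nondissipative transport}}
```

    The commutator term is symmetric and traceless, and its contraction with S̄ vanishes identically, so it transports energy among scales without adding net dissipation, which is exactly the property motivated in the abstract.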

  18. Towards Large Eddy Simulation of gas turbine compressors

    NASA Astrophysics Data System (ADS)

    McMullan, W. A.; Page, G. J.

    2012-07-01

    With increasing computing power, Large Eddy Simulation could be a useful simulation tool for gas turbine axial compressor design. This paper outlines a series of simulations performed on compressor geometries, ranging from a Controlled Diffusion Cascade stator blade to the periodic sector of a stage in a 3.5 stage axial compressor. The simulation results show that LES may offer advantages over traditional RANS methods when off-design conditions are considered - flow regimes where RANS models often fail to converge. The time-dependent nature of LES permits the resolution of transient flow structures, and can elucidate new mechanisms of vorticity generation on blade surfaces. It is shown that accurate LES is heavily reliant on both the near-wall mesh fidelity and the ability of the imposed inflow condition to recreate the conditions found in the reference experiment. For components embedded in a compressor this requires the generation of turbulence fluctuations at the inlet plane. A recycling method is developed that improves the quality of the flow in a single stage calculation of an axial compressor, and indicates that future developments in both the recycling technique and computing power will bring simulations of axial compressors within reach of industry in the coming years.
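    The recycling idea mentioned above can be sketched generically: take the turbulent fluctuations from a downstream "recycle" plane and superpose them on the desired mean inflow profile. This is an illustration of the concept, not the paper's specific rescaling method.

```python
import numpy as np

def recycled_inflow(u_recycle, u_target_mean):
    """Build an inflow plane by recycling: subtract the spanwise mean of a
    downstream plane (axis 1 assumed statistically homogeneous) to isolate
    the fluctuations, then superpose them on the target mean profile."""
    fluct = u_recycle - u_recycle.mean(axis=1, keepdims=True)
    return u_target_mean[:, None] + fluct

# Demo: a noisy (ny, nz) recycle plane mapped onto a prescribed mean profile.
rng = np.random.default_rng(5)
u_rec = rng.normal(5.0, 1.0, (32, 64))
u_mean = np.linspace(0.0, 10.0, 32)
inflow = recycled_inflow(u_rec, u_mean)
```

    By construction the recycled inflow reproduces the prescribed mean exactly while inheriting realistic, correlated fluctuations rather than white noise.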

  19. Large area pulsed solar simulator

    NASA Technical Reports Server (NTRS)

    Kruer, Mark A. (Inventor)

    1999-01-01

    An advanced solar simulator illuminates the surface of a very large solar array, such as one twenty feet by twenty feet in area, from a distance of about twenty-six feet with an essentially uniform intensity field of pulsed light of an intensity of one AM0, enabling the solar array to be efficiently tested with light that emulates the sun. Light modifiers sculpt a portion of the light generated by an electrically powered high power Xenon lamp and together with direct light from the lamp provide uniform intensity illumination throughout the solar array, compensating for the square law and cosine law reduction in direct light intensity, particularly at the corner locations of the array. At any location within the array the sum of the direct light and reflected light is essentially constant.

  20. A Study on Fast Gates for Large-Scale Quantum Simulation with Trapped Ions

    PubMed Central

    Taylor, Richard L.; Bentley, Christopher D. B.; Pedernales, Julen S.; Lamata, Lucas; Solano, Enrique; Carvalho, André R. R.; Hope, Joseph J.

    2017-01-01

    Large-scale digital quantum simulations require thousands of fundamental entangling gates to construct the simulated dynamics. Despite success in a variety of small-scale simulations, quantum information processing platforms have hitherto failed to demonstrate the combination of precise control and scalability required to systematically outmatch classical simulators. We analyse how fast gates could enable trapped-ion quantum processors to achieve the requisite scalability to outperform classical computers without error correction. We analyze the performance of a large-scale digital simulator, and find that fidelity of around 70% is realizable for π-pulse infidelities below 10−5 in traps subject to realistic rates of heating and dephasing. This scalability relies on fast gates: entangling gates faster than the trap period. PMID:28401945

  1. A finite volume model simulation for the Broughton Archipelago, Canada

    NASA Astrophysics Data System (ADS)

    Foreman, M. G. G.; Czajko, P.; Stucchi, D. J.; Guo, M.

    A finite volume circulation model is applied to the Broughton Archipelago region of British Columbia, Canada and used to simulate the three-dimensional velocity, temperature, and salinity fields that are required by a companion model for sea lice behaviour, development, and transport. The absence of a high resolution atmospheric model necessitated the installation of nine weather stations throughout the region and the development of a simple data assimilation technique that accounts for topographic steering in interpolating/extrapolating the measured winds to the entire model domain. The circulation model is run for the period of March 13-April 3, 2008 and correlation coefficients between observed and model currents, comparisons between model and observed tidal harmonics, and root mean square differences between observed and model temperatures and salinities all showed generally good agreement. The importance of wind forcing in the near-surface circulation, differences between this simulation and one computed with another model, the effects of bathymetric smoothing on channel velocities, further improvements necessary for this model to accurately simulate conditions in May and June, and the implication of near-surface current patterns at a critical location in the 'migration corridor' of wild juvenile salmon, are also discussed.

  2. Large-scale dynamo growth rates from numerical simulations and implications for mean-field theories

    NASA Astrophysics Data System (ADS)

    Park, Kiwan; Blackman, Eric G.; Subramanian, Kandaswamy

    2013-05-01

    Understanding large-scale magnetic field growth in turbulent plasmas in the magnetohydrodynamic limit is a goal of magnetic dynamo theory. In particular, assessing how well large-scale helical field growth and saturation in simulations match those predicted by existing theories is important for progress. Using numerical simulations of isotropically forced turbulence without large-scale shear with its implications, we focus on several additional aspects of this comparison: (1) Leading mean-field dynamo theories which break the field into large and small scales predict that large-scale helical field growth rates are determined by the difference between kinetic helicity and current helicity with no dependence on the nonhelical energy in small-scale magnetic fields. Our simulations show that the growth rate of the large-scale field from fully helical forcing is indeed unaffected by the presence or absence of small-scale magnetic fields amplified in a precursor nonhelical dynamo. However, because the precursor nonhelical dynamo in our simulations produced fields that were strongly subequipartition with respect to the kinetic energy, we cannot yet rule out the potential influence of stronger nonhelical small-scale fields. (2) We have identified two features in our simulations which cannot be explained by the most minimalist versions of two-scale mean-field theory: (i) fully helical small-scale forcing produces significant nonhelical large-scale magnetic energy and (ii) the saturation of the large-scale field growth is time delayed with respect to what minimalist theory predicts. We comment on desirable generalizations to the theory in this context and future desired work.

  3. Large-scale dynamo growth rates from numerical simulations and implications for mean-field theories.

    PubMed

    Park, Kiwan; Blackman, Eric G; Subramanian, Kandaswamy

    2013-05-01

    Understanding large-scale magnetic field growth in turbulent plasmas in the magnetohydrodynamic limit is a goal of magnetic dynamo theory. In particular, assessing how well large-scale helical field growth and saturation in simulations match those predicted by existing theories is important for progress. Using numerical simulations of isotropically forced turbulence without large-scale shear with its implications, we focus on several additional aspects of this comparison: (1) Leading mean-field dynamo theories which break the field into large and small scales predict that large-scale helical field growth rates are determined by the difference between kinetic helicity and current helicity with no dependence on the nonhelical energy in small-scale magnetic fields. Our simulations show that the growth rate of the large-scale field from fully helical forcing is indeed unaffected by the presence or absence of small-scale magnetic fields amplified in a precursor nonhelical dynamo. However, because the precursor nonhelical dynamo in our simulations produced fields that were strongly subequipartition with respect to the kinetic energy, we cannot yet rule out the potential influence of stronger nonhelical small-scale fields. (2) We have identified two features in our simulations which cannot be explained by the most minimalist versions of two-scale mean-field theory: (i) fully helical small-scale forcing produces significant nonhelical large-scale magnetic energy and (ii) the saturation of the large-scale field growth is time delayed with respect to what minimalist theory predicts. We comment on desirable generalizations to the theory in this context and future desired work.

  4. Validating the simulation of large-scale parallel applications using statistical characteristics

    DOE PAGES

    Zhang, Deli; Wilke, Jeremiah; Hendry, Gilbert; ...

    2016-03-01

    Simulation is a widely adopted method to analyze and predict the performance of large-scale parallel applications. Validating the hardware model is highly important for complex simulations with a large number of parameters. Common practice involves calculating the percent error between the projected and the real execution time of a benchmark program. However, in a high-dimensional parameter space, this coarse-grained approach often suffers from parameter insensitivity, which may not be known a priori. Moreover, the traditional approach cannot be applied to the validation of software models, such as application skeletons used in online simulations. In this work, we present a methodology and a toolset for validating both hardware and software models by quantitatively comparing fine-grained statistical characteristics obtained from execution traces. Although statistical information has been used in tasks like performance optimization, this is the first attempt to apply it to simulation validation. Lastly, our experimental results show that the proposed evaluation approach offers significant improvement in fidelity when compared to evaluation using total execution time, and the proposed metrics serve as reliable criteria that progress toward automating the simulation tuning process.
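    The weakness of total-execution-time validation can be shown with a toy example: two simulators that match the real run's total time equally well, where only a distribution-level statistic exposes that one produces the wrong per-event behavior. The latency distributions below are hypothetical, chosen only to illustrate the point; the paper's own metrics are richer than a single KS statistic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical per-message latencies (ms) from a real run and two simulators.
real = rng.gamma(shape=2.0, scale=5.0, size=5000)
sim_good = rng.gamma(shape=2.0, scale=5.1, size=5000)  # right shape, slight scale error
sim_bad = rng.gamma(shape=8.0, scale=1.27, size=5000)  # similar mean, wrong shape

# Coarse-grained check: percent error in total execution time.
err_good = abs(sim_good.sum() - real.sum()) / real.sum()
err_bad = abs(sim_bad.sum() - real.sum()) / real.sum()

# Fine-grained check: compare the full latency distributions.
ks_good = stats.ks_2samp(real, sim_good).statistic
ks_bad = stats.ks_2samp(real, sim_bad).statistic
```

    Both simulators pass the total-time test within a few percent, but the two-sample statistic cleanly separates the one with the wrong event-level distribution.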

  5. Evaluation of Subgrid-Scale Models for Large Eddy Simulation of Compressible Flows

    NASA Technical Reports Server (NTRS)

    Blaisdell, Gregory A.

    1996-01-01

    The objective of this project was to evaluate and develop subgrid-scale (SGS) turbulence models for large eddy simulations (LES) of compressible flows. During the first phase of the project results from LES using the dynamic SGS model were compared to those of direct numerical simulations (DNS) of compressible homogeneous turbulence. The second phase of the project involved implementing the dynamic SGS model in a NASA code for simulating supersonic flow over a flat-plate. The model has been successfully coded and a series of simulations has been completed. One of the major findings of the work is that numerical errors associated with the finite differencing scheme used in the code can overwhelm the SGS model and adversely affect the LES results. Attached to this overview are three submitted papers: 'Evaluation of the Dynamic Model for Simulations of Compressible Decaying Isotropic Turbulence'; 'The effect of the formulation of nonlinear terms on aliasing errors in spectral methods'; and 'Large-Eddy Simulation of a Spatially Evolving Compressible Boundary Layer Flow'.

  6. A simple method for the production of large volume 3D macroporous hydrogels for advanced biotechnological, medical and environmental applications

    PubMed Central

    Savina, Irina N.; Ingavle, Ganesh C.; Cundy, Andrew B.; Mikhalovsky, Sergey V.

    2016-01-01

    The development of bulk, three-dimensional (3D), macroporous polymers with high permeability, large surface area and large volume is highly desirable for a range of applications in the biomedical, biotechnological and environmental areas. The experimental techniques currently used are limited to the production of small size and volume cryogel material. In this work we propose a novel, versatile, simple and reproducible method for the synthesis of large volume porous polymer hydrogels by cryogelation. By controlling the freezing process of the reagent/polymer solution, large-scale 3D macroporous gels with wide interconnected pores (up to 200 μm in diameter) and large accessible surface area have been synthesized. For the first time, macroporous gels (of up to 400 ml bulk volume) with controlled porous structure were manufactured, with potential for scale up to much larger gel dimensions. This method can be used for production of novel 3D multi-component macroporous composite materials with a uniform distribution of embedded particles. The proposed method provides better control of freezing conditions and thus overcomes existing drawbacks limiting production of large gel-based devices and matrices. The proposed method could serve as a new design concept for functional 3D macroporous gels and composites preparation for biomedical, biotechnological and environmental applications. PMID:26883390

  7. A simple method for the production of large volume 3D macroporous hydrogels for advanced biotechnological, medical and environmental applications

    NASA Astrophysics Data System (ADS)

    Savina, Irina N.; Ingavle, Ganesh C.; Cundy, Andrew B.; Mikhalovsky, Sergey V.

    2016-02-01

    The development of bulk, three-dimensional (3D), macroporous polymers with high permeability, large surface area and large volume is highly desirable for a range of applications in the biomedical, biotechnological and environmental areas. The experimental techniques currently used are limited to the production of small size and volume cryogel material. In this work we propose a novel, versatile, simple and reproducible method for the synthesis of large volume porous polymer hydrogels by cryogelation. By controlling the freezing process of the reagent/polymer solution, large-scale 3D macroporous gels with wide interconnected pores (up to 200 μm in diameter) and large accessible surface area have been synthesized. For the first time, macroporous gels (of up to 400 ml bulk volume) with controlled porous structure were manufactured, with potential for scale up to much larger gel dimensions. This method can be used for production of novel 3D multi-component macroporous composite materials with a uniform distribution of embedded particles. The proposed method provides better control of freezing conditions and thus overcomes existing drawbacks limiting production of large gel-based devices and matrices. The proposed method could serve as a new design concept for functional 3D macroporous gels and composites preparation for biomedical, biotechnological and environmental applications.

  8. Evaluation of Proteins' Rotational Diffusion Coefficients from Simulations of Their Free Brownian Motion in Volume-Occupied Environments.

    PubMed

    Długosz, Maciej; Antosiewicz, Jan M

    2014-01-14

    We have investigated the rotational dynamics of hen egg white lysozyme in monodisperse aqueous solutions at concentrations up to 250 mg/mL, using a rigid-body Brownian dynamics method that accurately accounts for the anisotropies of diffusing objects, and have examined the validity of the free diffusion concept in the analysis of computer simulations of volume-occupied molecular solutions. We have found that, when the excluded volume effect is the only intermolecular interaction considered, the rotational diffusion of molecules adheres to the free diffusion model. Further, we present a method, based on the analytic forms of the autocorrelation functions of particular vectors rigidly attached to diffusing objects (exact in the case of free diffusion), which allows one to obtain from the results of molecular simulations the three principal rotational diffusion coefficients characterizing the rotational Brownian motion of an arbitrarily shaped rigid particle at an arbitrary concentration of crowders. We have applied this approach to trajectories resulting from Brownian dynamics simulations of hen egg white lysozyme solutions. We show that the apparent anisotropy of the proteins' rotational motions increases with an increasing degree of crowding. Finally, we demonstrate that even if the hydrodynamic anisotropy of molecules is neglected and molecules are simulated using their average translational and rotational diffusion coefficients, excluded volume effects still lead to anisotropic rotational dynamics.
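    For the isotropic (spherical) special case, the autocorrelation route described above reduces to a single exponential, C1(t) = <u(0)·u(t)> = exp(-2 D_r t). The following minimal sketch (illustrative only, not the authors' code; function names are hypothetical) recovers D_r from such an ACF by a log-linear fit; the paper's method generalizes this to the three principal coefficients of an anisotropic body:

```python
import numpy as np

def rotational_acf_decay(d_rot, times):
    """First-rank orientational ACF for isotropic rotational
    diffusion: C1(t) = <u(0).u(t)> = exp(-2 * D_r * t)."""
    return np.exp(-2.0 * d_rot * np.asarray(times))

def estimate_d_rot(times, acf):
    """Recover D_r from a (simulated or measured) ACF via a
    log-linear least-squares fit of ln C1(t) = -2 * D_r * t."""
    slope = np.polyfit(np.asarray(times), np.log(np.asarray(acf)), 1)[0]
    return -slope / 2.0
```

    In practice the ACF would be computed from body-fixed vectors along a Brownian dynamics trajectory; here the analytic form stands in for trajectory data.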

  9. Efficient Constant-Time Complexity Algorithm for Stochastic Simulation of Large Reaction Networks.

    PubMed

    Thanh, Vo Hong; Zunino, Roberto; Priami, Corrado

    2017-01-01

    Exact stochastic simulation is an indispensable tool for the quantitative study of biochemical reaction networks. The simulation realizes the time evolution of the model by randomly choosing a reaction to fire, with probability proportional to the reaction propensity, and updating the system state accordingly. Two computationally expensive tasks in simulating large biochemical networks are the selection of the next reaction firing and the update of reaction propensities due to state changes. We present in this work a new exact algorithm that optimizes both of these simulation bottlenecks. Our algorithm employs composition-rejection sampling on the propensity bounds of reactions to select the next reaction firing. The selection of the next reaction firing is independent of the number of reactions, while the update of propensities is skipped and performed only when necessary. It therefore provides favorable scaling of the computational complexity in simulating large reaction networks. We benchmark our new algorithm against the state-of-the-art algorithms available in the literature to demonstrate its applicability and efficiency.
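    The selection step can be sketched as follows (a minimal illustration under assumed details, not the authors' implementation; the dyadic grouping and all function names are hypothetical). Reactions are binned by their propensity bound so that a bound b falls in [2^g, 2^(g+1)); a bin is chosen with probability proportional to len(bin) * 2^(g+1), a member is drawn uniformly, and it is accepted with probability a_i / 2^(g+1), which makes the overall selection proportional to the exact propensity a_i while evaluating a_i only at the acceptance test:

```python
import math
import random

def select_reaction(bounds, exact_propensity, rng=random.random):
    """One composition-rejection draw of the next reaction index.

    bounds[i] is a precomputed upper bound on reaction i's propensity;
    exact_propensity(i) returns the current exact value.  A real
    implementation maintains the groups incrementally; rebuilding them
    per call keeps this sketch short.
    """
    groups = {}
    for i, b in enumerate(bounds):
        if b > 0:
            groups.setdefault(math.floor(math.log2(b)), []).append(i)
    # Group weight = (members) * (common bound 2^(g+1)) so that a
    # uniform member draw plus rejection yields P(i) proportional to a_i.
    weights = {g: len(m) * 2.0 ** (g + 1) for g, m in groups.items()}
    total = sum(weights.values())

    while True:
        r = rng() * total
        for g, w in weights.items():
            if r < w:
                break
            r -= w
        members = groups[g]
        i = members[int(rng() * len(members))]
        # Exact propensity is only touched here, at the acceptance test.
        if rng() * 2.0 ** (g + 1) <= exact_propensity(i):
            return i
```

    Because only the bounds enter the data structure, propensities that stay within their bounds after a firing need no update, which is what makes the cost per step independent of network size.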

  10. Simulation of long-term landscape-level fuel treatment effects on large wildfires

    Treesearch

    Mark A. Finney; Rob C. Seli; Charles W. McHugh; Alan A. Ager; Bernhard Bahro; James K. Agee

    2008-01-01

    A simulation system was developed to explore how fuel treatments placed in topologically random and optimal spatial patterns affect the growth and behaviour of large fires when implemented at different rates over the course of five decades. The system consisted of a forest and fuel dynamics simulation module (Forest Vegetation Simulator, FVS), logic for deriving fuel...

  11. A Coupled Approach with Stochastic Rainfall-Runoff Simulation and Hydraulic Modeling for Extreme Flood Estimation on Large Watersheds

    NASA Astrophysics Data System (ADS)

    Paquet, E.

    2015-12-01

    The SCHADEX method aims at estimating the distribution of peak and daily discharges up to extreme quantiles. It couples a probabilistic precipitation model based on weather patterns with a stochastic rainfall-runoff simulation process using a conceptual lumped model, allowing an exhaustive set of hydrological conditions and watershed responses to intense rainfall events to be explored. Since 2006, it has been applied widely in France to about one hundred watersheds for dam spillway design, and also abroad (Norway, Canada and central Europe, among others). However, its application to large watersheds (above 10,000 km²) faces significant issues, the most important being the spatial heterogeneity of rainfall and hydrological processes, and flood peak damping due to hydraulic effects (flood plains, natural or man-made embankments). This led to the development of an extreme flood simulation framework for large and heterogeneous watersheds, based on the SCHADEX method. Its main features are: division of the large (main) watershed into several smaller sub-watersheds, where the spatial homogeneity of the hydro-meteorological processes can reasonably be assumed and where hydraulic effects can be neglected; identification of pilot watersheds where discharge data are available and rainfall-runoff models can therefore be calibrated, serving as parameter donors to ungauged watersheds; spatially coherent stochastic simulations for all sub-watersheds at the daily time step; selection of simulated events for a given return period (according to the distribution of runoff volumes at the scale of the main watershed); generation of complete hourly hydrographs at each sub-watershed outlet; and routing to the main outlet with 1D or 2D hydraulic models. The presentation will be illustrated with the case study of the Isère watershed (9981 km²), a French snow-driven watershed. The main novelties of this method will be underlined, as well as its

  12. Large Eddy Simulations (LES) and Direct Numerical Simulations (DNS) for the computational analyses of high speed reacting flows

    NASA Technical Reports Server (NTRS)

    Givi, Peyman; Madnia, Cyrus K.; Steinberger, C. J.; Frankel, S. H.

    1992-01-01

    The principal objective is to extend the boundaries within which large eddy simulations (LES) and direct numerical simulations (DNS) can be applied in computational analyses of high speed reacting flows. A summary of work accomplished during the last six months is presented.

  13. Large Eddy Simulations of Severe Convection Induced Turbulence

    NASA Technical Reports Server (NTRS)

    Ahmad, Nash'at; Proctor, Fred

    2011-01-01

    Convective storms can pose a serious risk to aviation operations since they are often accompanied by turbulence, heavy rain, hail, icing, lightning, strong winds, and poor visibility. They can cause major delays in air traffic by forcing the re-routing of flights and by disrupting operations at airports in the vicinity of the storm system. In this study, the Terminal Area Simulation System is used to simulate five different convective events ranging from a mesoscale convective complex to isolated storms, and the occurrence of convection-induced turbulence is analyzed from these simulations. The validation of model results against radar data and other observations is reported, and an aircraft-centric turbulence hazard metric calculated for each case is discussed. The turbulence analysis showed that large pockets of significant turbulence hazard can be found in regions of low radar reflectivity. Moderate and severe turbulence was often found in building cumulus turrets and overshooting tops.

  14. Reconstructing a Large-Scale Population for Social Simulation

    NASA Astrophysics Data System (ADS)

    Fan, Zongchen; Meng, Rongqing; Ge, Yuanzheng; Qiu, Xiaogang

    The advent of social simulation has provided an opportunity to study social systems, and more and more researchers tend to describe the components of social systems at a more detailed level. Any simulation needs the support of population data to initialize and run the simulation system; however, it is impossible to obtain data that provide full information about individuals and households. We propose a two-step method to reconstruct a large-scale population for a Chinese city according to Chinese culture. Firstly, a baseline population is generated by gathering individuals into households one by one; secondly, social relationships such as friendship are assigned to the baseline population. Through a case study, a population of 3,112,559 individuals gathered in 1,133,835 households is reconstructed for Urumqi city, and the results show that the generated data match the real data quite well. The generated data can be applied to support the modeling of social phenomena.
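    The two-step reconstruction can be sketched as follows (a toy illustration with hypothetical function names and made-up marginals; the paper fits real census data and culture-specific household rules):

```python
import random

def build_households(individuals, size_dist, rng=random.Random(0)):
    """Step 1: gather individuals into households one by one.
    size_dist maps household size -> probability (hypothetical
    marginals; a real application would fit census tables)."""
    sizes, probs = zip(*size_dist.items())
    pool = list(individuals)
    rng.shuffle(pool)
    households = []
    while pool:
        k = rng.choices(sizes, probs)[0]
        households.append([pool.pop() for _ in range(min(k, len(pool)))])
    return households

def assign_friendships(individuals, mean_friends, rng=random.Random(1)):
    """Step 2: overlay a social network on the baseline population
    (here a simple random graph; the paper uses culture-specific rules)."""
    edges = set()
    n = len(individuals)
    target = n * mean_friends // 2
    while len(edges) < target:
        a, b = rng.sample(range(n), 2)
        edges.add((min(a, b), max(a, b)))
    return edges
```

    Step 1 produces an exact partition of the individuals, which is what lets the synthetic population be validated against household-size marginals from the real data.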

  15. Feasibility study for a numerical aerodynamic simulation facility. Volume 2: Hardware specifications/descriptions

    NASA Technical Reports Server (NTRS)

    Green, F. M.; Resnick, D. R.

    1979-01-01

    An FMP (Flow Model Processor) was designed for use in the Numerical Aerodynamic Simulation Facility (NASF). The NASF was developed to simulate fluid flow over three-dimensional bodies in wind tunnel environments and in free space. The facility is applicable to studying aerodynamic and aircraft body designs. The following general topics are discussed in this volume: (1) FMP functional computer specifications; (2) FMP instruction specification; (3) standard product system components; (4) loosely coupled network (LCN) specifications/description; and (5) three appendices: performance of trunk allocation contention elimination (trace) method, LCN channel protocol and proposed LCN unified second level protocol.

  16. M.E.T.R.O.-Apex Gaming Simulation, Volume 28 (OS/360 Version).

    ERIC Educational Resources Information Center

    Michigan Univ., Ann Arbor. Environmental Simulation Lab.

    Operator's instructions and technical support materials needed for processing the M.E.T.R.O.-APEX (Air Pollution Exercise) game decisions on an IBM 360 computer are compiled in this volume. M.E.T.R.O.-APEX is a computerized college and professional level "real world" simulation of a community with urban and rural problems, industrial activities,…

  17. Large woody debris in a second-growth central Appalachian hardwood stand: volume, composition, and dynamics

    Treesearch

    M. B. Adams; T. M. Schuler; W. M. Ford; J. N. Kochenderfer

    2003-01-01

    We estimated the volume of large woody debris in a second-growth stand and evaluated the importance of periodic windstorms as disturbances in creating large woody debris. This research was conducted on a reference watershed (Watershed 4) on the Fernow Experimental Forest in West Virginia. The 38-ha stand on Watershed 4 was clearcut around 1911 and has been undisturbed...

  18. Pulsar simulations for the Fermi Large Area Telescope

    DOE PAGES

    Razzano, M.; Harding, Alice K.; Baldini, L.; ...

    2009-05-21

    Pulsars are among the prime targets for the Large Area Telescope (LAT) aboard the recently launched Fermi observatory. The LAT will study the gamma-ray Universe between 20 MeV and 300 GeV with unprecedented detail. Increasing numbers of gamma-ray pulsars are being firmly identified, yet their emission mechanisms are far from being understood. To better investigate and exploit the LAT capabilities for pulsar science, a set of new detailed pulsar simulation tools has been developed within the LAT collaboration. The structure of the pulsar simulator package (PulsarSpectrum) is presented here. Starting from photon distributions in energy and phase obtained from theoretical calculations or phenomenological considerations, gamma-rays are generated and their arrival times at the spacecraft are determined by taking into account barycentric effects and timing noise. Pulsars in binary systems can also be simulated, given orbital parameters. Finally, we present how simulations can be used for generating a realistic set of gamma-rays as observed by the LAT, focusing on some case studies that show the performance of the LAT for pulsar observations.

  19. Large-scale ground motion simulation using GPGPU

    NASA Astrophysics Data System (ADS)

    Aoi, S.; Maeda, T.; Nishizawa, N.; Aoki, T.

    2012-12-01

    Huge computational resources are required to perform large-scale ground motion simulations using the 3-D finite difference method (FDM) for realistic and complex models with high accuracy. Furthermore, thousands of simulations are necessary to evaluate the variability of the assessment caused by uncertainty in the assumed source models for future earthquakes. To overcome the problem of restricted computational resources, we introduced GPGPU (general-purpose computing on graphics processing units), the technique of using a GPU as an accelerator for computation traditionally performed by the CPU. We employed the CPU version of GMS (Ground motion Simulator; Aoi et al., 2004) as the original code and implemented the GPU calculation using CUDA (Compute Unified Device Architecture). GMS is a total system for seismic wave propagation simulation based on a 3-D FDM scheme using discontinuous grids (Aoi & Fujiwara, 1999), which includes the solver as well as preprocessor tools (parameter generation) and postprocessor tools (filtering, visualization, and so on). The computational model is decomposed in the two horizontal directions and each decomposed sub-model is allocated to a different GPU. We evaluated the performance of our newly developed GPU version of GMS on TSUBAME2.0, one of Japan's fastest supercomputers, operated by the Tokyo Institute of Technology. First, we performed a strong scaling test using a model with about 22 million grid points and achieved speed-ups of 3.2 and 7.3 times using 4 and 16 GPUs, respectively. Next, we examined a weak scaling test in which the model size (number of grid points) is increased in proportion to the degree of parallelism (number of GPUs). The result showed almost perfect linearity up to a simulation with 22 billion grid points using 1024 GPUs, where the calculation speed reached 79.7 TFlops, about 34 times faster than the CPU calculation using the same number
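    The quoted speed-ups translate into parallel efficiencies via the standard definitions (generic formulas, not specific to GMS or TSUBAME):

```python
def strong_scaling_efficiency(speedup, n_workers):
    """Strong scaling: fixed problem size; ideal speedup equals the
    worker count, so efficiency = measured speedup / n_workers."""
    return speedup / n_workers

def weak_scaling_efficiency(t_base, t_scaled):
    """Weak scaling: problem size grows with the worker count; ideal
    behaviour is constant runtime, so efficiency = t_base / t_scaled."""
    return t_base / t_scaled
```

    With the reported figures, 3.2x on 4 GPUs corresponds to 80% strong-scaling efficiency and 7.3x on 16 GPUs to about 46%, while near-linear weak scaling up to 1024 GPUs corresponds to an efficiency close to 1.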

  20. [Effects of simulated weightlessness on pressure-volume relationships of femoral vein of New Zealand Rabbits].

    PubMed

    Yue, Yong; Yao, Yong-jie; Xie, Xiao-ping; Wang, Bing; Zhu, Qing-sheng; Wu, Xing-yu

    2002-12-01

    Objective. To observe the changes in the pressure-volume relationships of rabbit femoral veins and the structural changes caused by simulated weightlessness. Method. A -20° head-down tilt (HDT) rabbit model was used to simulate weightlessness. Twenty-four healthy male New Zealand rabbits were randomly divided into a 21 d HDT group, a 10 d HDT group and a control group (8 in each group). The pressure-volume (P-V) relationship of the rabbits' femoral veins was measured and the microstructure of the veins was observed. Result. The femoral vein P-V curves of the HDT groups showed a larger volume change ratio than that of the control group, and this change was more pronounced in the 21 d HDT group than in the 10 d HDT group. B1 and B2 in the fitted quadratic equations of the 21 d HDT group were significantly higher than the values of both the 10 d HDT group and the control group during expansion (inflow) and collapse (outflow) (P<0.01). Histological examination showed that the contents and structure of the femoral vein wall of the HDT rabbits changed significantly: endothelial cells of the femoral vein became short and columnar or cubic, some of which detached, and the smooth muscle layer became thinner. Conclusion. Femoral venous compliance increased after simulated weightlessness, and the increase was more obvious in 21 d HDT rabbits than in 10 d HDT rabbits. The structure of the femoral vein wall changed obviously.
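    The quadratic P-V description referred to above (coefficients B1 and B2) amounts to fitting V = B0 + B1*P + B2*P^2; a minimal sketch with hypothetical helper names:

```python
import numpy as np

def fit_pv_quadratic(pressure, volume):
    """Fit V = B0 + B1*P + B2*P^2 and return (B0, B1, B2).
    Larger B1/B2 mean a steeper volume response to pressure,
    i.e. higher venous compliance."""
    b2, b1, b0 = np.polyfit(pressure, volume, 2)
    return b0, b1, b2

def compliance(b1, b2, p):
    """Compliance dV/dP = B1 + 2*B2*P at pressure p."""
    return b1 + 2.0 * b2 * p
```

    On the study's data, the higher fitted B1 and B2 in the 21 d HDT group correspond directly to the increased compliance reported in the conclusion.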

  1. Safe total corporal contouring with large-volume liposuction for the obese patient.

    PubMed

    Dhami, Lakshyajit D; Agarwal, Meenakshi

    2006-01-01

    The advent of the tumescent technique in 1987 allowed for safe total corporal contouring as an ambulatory, single-session megaliposuction with the patient under regional anesthesia supplemented by local anesthetic only in selected areas. Safety and aesthetic issues define large-volume liposuction as having a 5,000-ml aspirate, mega-volume liposuction as having an 8,000-ml aspirate, and giganto-volume liposuction as having an aspirate of 12,000 ml or more. Clinically, a total volume comprising 5,000 ml of fat and wetting solution aspirated during the procedure qualifies as megaliposuction/large-volume liposuction. Between September 2000 and August 2005, 470 cases of liposuction were managed. In 296 (63%) of the 470 cases, the total volume of aspirate exceeded 5 l (range, 5,000-22,000 ml). Concurrent limited or total-block lipectomy was performed in 70 of 296 cases (23.6%). Regional anesthesia with conscious sedation was preferred, except where liposuction targeted areas above the subcostal region (the upper trunk, lateral chest, gynecomastia, breast, arms, and face), or when the patient so desired. Tumescent infiltration was achieved with hypotonic lactated Ringer's solution, adrenalin, triamcinolone, and hyalase in all cases during the last year of the series. This approach has clinically shown less tissue edema in the postoperative period than conventional physiologic saline used in place of the Ringer's lactate solution. The amount injected varied from 1,000 to 8,000 ml depending on the size, site, and area. Local anesthetic was included only in the terminal portion of the tumescent mixture, wherever the subcostal regions were infiltrated. The aspirate was restricted to unstained white/yellow fat, and the amount of fat aspirated did not have any bearing on the amount of solution infiltrated. There were no major complications, and no blood transfusions were administered. The hospital stay ranged from 8 to 24 h for both liposuction and liposuction

  2. Large eddy simulation of forest canopy flow for wildland fire modeling

    Treesearch

    Eric Mueller; William Mell; Albert Simeoni

    2014-01-01

    Large eddy simulation (LES) based computational fluid dynamics (CFD) simulators have obtained increasing attention in the wildland fire research community, as these tools allow the inclusion of important driving physics. However, due to the complexity of the models, individual aspects must be isolated and tested rigorously to ensure meaningful results. As wind is a...

  3. Large Eddy Simulation of Ducted Propulsors in Crashback

    NASA Astrophysics Data System (ADS)

    Jang, Hyunchul; Mahesh, Krishnan

    2008-11-01

    Flow around a ducted marine propulsor is computed using the large eddy simulation methodology under crashback conditions. Crashback is an operating condition in which a propulsor rotates in the reverse direction while the vessel moves in the forward direction. It is characterized by massive flow separation and highly unsteady propeller loads, which affect both blade life and maneuverability. The simulations are performed on unstructured grids using the algorithm developed by Mahesh et al. (2004, J. Comput. Phys. 197). The flow is computed at advance ratio J = -0.7 and Reynolds number Re = 480,000 based on the propeller diameter. Average and RMS values of the unsteady loads such as thrust, torque, and side force on the blades and duct are compared to experiment. It is seen that although the effects of the duct on thrust and torque are not large, those on the side force are significant: the RMS of the side force is much higher in the presence of the duct. Pressure distributions on the blade and duct surfaces are examined and used to explain this effect. This work was supported by the United States Office of Naval Research under ONR Grant N00014-05-1-0003.

  4. Use of an iPad App to simulate pressure-volume loops and cardiovascular physiology.

    PubMed

    Leisman, Staci; Burkhoff, Daniel

    2017-09-01

    The purpose of this laboratory exercise is to model the changes in preload, afterload, and contractility on a simulated pressure-volume loop and to correlate those findings with common measurements of clinical cardiovascular physiology. Once students have modeled these changes on a healthy heart, the students are asked to look at a simulated case of cardiogenic shock. Effects on preload, contractility, and afterload are explored, as well as the hemodynamic effects of a number of student-suggested treatment strategies. Copyright © 2017 the American Physiological Society.

  5. Quality evaluation of radiographic contrast media in large-volume prefilled syringes and vials.

    PubMed

    Sendo, T; Hirakawa, M; Yaginuma, M; Aoyama, T; Oishi, R

    1998-06-01

    The authors compared the particle contamination of radiographic contrast media packaged in large-volume prefilled syringes and vials. Particle counting was performed for four contrast media packaged in large-volume prefilled syringes (iohexol, ioversol, ioversol for angiography, and ioxaglate) and three contrast media packaged in vials (iohexol, ioversol, and ioxaglate). X-ray emission spectrometry was performed to characterize the individual particles, and the amount of silicone oil in the syringes was quantified with infrared spectrophotometry. The particle contamination in syringes containing ioversol was higher than that in syringes containing iohexol or ioxaglate. Particle contamination in the vials was relatively low, except with ioxaglate. X-ray emission spectrometry of the components of the syringe and vial showed that the source of the particles was material released from the rubber stopper or inner surface. The particle counts for contrast media packaged in syringes and vials varied considerably among the different contrast media and were related to the amount of silicone oil on the inner surface and rubber piston of the syringe.

  6. Topology of Large-Scale Structure by Galaxy Type: Hydrodynamic Simulations

    NASA Astrophysics Data System (ADS)

    Gott, J. Richard, III; Cen, Renyue; Ostriker, Jeremiah P.

    1996-07-01

    The topology of large-scale structure is studied as a function of galaxy type using the genus statistic. In hydrodynamical cosmological cold dark matter simulations, galaxies form on caustic surfaces (Zeldovich pancakes) and then slowly drain onto filaments and clusters. The earliest forming galaxies in the simulations (defined as "ellipticals") are thus seen at the present epoch preferentially in clusters (tending toward a meatball topology), while the latest forming galaxies (defined as "spirals") are seen currently in a spongelike topology. The topology is measured by the genus (number of "doughnut" holes minus number of isolated regions) of the smoothed density-contour surfaces. The measured genus curve for all galaxies as a function of density obeys approximately the theoretical curve expected for random-phase initial conditions, but the early-forming elliptical galaxies show a shift toward a meatball topology relative to the late-forming spirals. Simulations using standard biasing schemes fail to show such an effect. Large observational samples separated by galaxy type could be used to test for this effect.

  7. Opportunities for Breakthroughs in Large-Scale Computational Simulation and Design

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia; Alter, Stephen J.; Atkins, Harold L.; Bey, Kim S.; Bibb, Karen L.; Biedron, Robert T.; Carpenter, Mark H.; Cheatwood, F. McNeil; Drummond, Philip J.; Gnoffo, Peter A.

    2002-01-01

    Opportunities for breakthroughs in the large-scale computational simulation and design of aerospace vehicles are presented. Computational fluid dynamics tools to be used within multidisciplinary analysis and design methods are emphasized. The opportunities stem from speedups and robustness improvements in the underlying unit operations associated with simulation (geometry modeling, grid generation, physical modeling, analysis, etc.). Further, an improved programming environment can synergistically integrate these unit operations to leverage the gains. The speedups result from reducing the problem setup time through geometry modeling and grid generation operations, and reducing the solution time through the operation counts associated with solving the discretized equations to a sufficient accuracy. The opportunities are addressed only at a general level here, but an extensive list of references containing further details is included. The opportunities discussed are being addressed through the Fast Adaptive Aerospace Tools (FAAST) element of the Advanced Systems Concept to Test (ASCoT) and the third Generation Reusable Launch Vehicles (RLV) projects at NASA Langley Research Center. The overall goal is to enable greater inroads into the design process with large-scale simulations.

  8. Numerical dissipation vs. subgrid-scale modelling for large eddy simulation

    NASA Astrophysics Data System (ADS)

    Dairay, Thibault; Lamballais, Eric; Laizet, Sylvain; Vassilicos, John Christos

    2017-05-01

    This study presents an alternative way to perform large eddy simulation based on a targeted numerical dissipation introduced by the discretization of the viscous term. It is shown that this regularisation technique is equivalent to the use of spectral vanishing viscosity. The flexibility of the method ensures high-order accuracy while controlling the level and spectral features of this purely numerical viscosity. A Pao-like spectral closure based on physical arguments is used to scale this numerical viscosity a priori. It is shown that this way of approaching large eddy simulation is more efficient and accurate than the use of the very popular Smagorinsky model in both its standard and dynamic versions. The main strength of being able to correctly calibrate numerical dissipation is the possibility of regularising the solution at the mesh scale. Thanks to this property, it is shown that the solution can be considered numerically converged. Conversely, the two versions of the Smagorinsky model are found unable to ensure regularisation while showing a strong sensitivity to numerical errors. The originality of the present approach is that it can be viewed as implicit large eddy simulation, in the sense that the numerical error is the source of artificial dissipation, but also as explicit subgrid-scale modelling, because of the equivalence with spectral viscosity prescribed on a physical basis.

  9. Computational study of noise in a large signal transduction network.

    PubMed

    Intosalmi, Jukka; Manninen, Tiina; Ruohonen, Keijo; Linne, Marja-Leena

    2011-06-21

    Biochemical systems are inherently noisy due to the discrete reaction events that occur in a random manner. Although noise is often perceived as a disturbing factor, the system might actually benefit from it. In order to understand the role of noise better, its quality must be studied in a quantitative manner. Computational analysis and modeling play an essential role in this demanding endeavor. We implemented a large nonlinear signal transduction network combining protein kinase C, mitogen-activated protein kinase, phospholipase A2, and β isoform of phospholipase C networks. We simulated the network in 300 different cellular volumes using the exact Gillespie stochastic simulation algorithm and analyzed the results in both the time and frequency domain. In order to perform simulations in a reasonable time, we used modern parallel computing techniques. The analysis revealed that time and frequency domain characteristics depend on the system volume. The simulation results also indicated that there are several kinds of noise processes in the network, all of them representing different kinds of low-frequency fluctuations. In the simulations, the power of noise decreased on all frequencies when the system volume was increased. We concluded that basic frequency domain techniques can be applied to the analysis of simulation results produced by the Gillespie stochastic simulation algorithm. This approach is suited not only to the study of fluctuations but also to the study of pure noise processes. Noise seems to have an important role in biochemical systems and its properties can be numerically studied by simulating the reacting system in different cellular volumes. Parallel computing techniques make it possible to run massive simulations in hundreds of volumes and, as a result, accurate statistics can be obtained from computational studies. © 2011 Intosalmi et al; licensee BioMed Central Ltd.
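    The volume dependence of the noise can be reproduced with a textbook Gillespie simulation of a single birth-death species (an illustrative sketch, not the authors' signal transduction network; parameter names are made up):

```python
import math
import random

def gillespie_birth_death(k_prod, k_deg, volume, t_end, rng):
    """Exact SSA for production/degradation of one species in a given
    system volume.  The zeroth-order production propensity scales with
    volume, so relative fluctuations shrink as volume grows."""
    n, t, samples = 0, 0.0, []
    while t < t_end:
        a1 = k_prod * volume       # production propensity
        a2 = k_deg * n             # degradation propensity
        a0 = a1 + a2
        t += -math.log(rng.random()) / a0   # exponential waiting time
        n += 1 if rng.random() * a0 < a1 else -1
        samples.append(n)
    return samples
```

    The stationary copy number is Poisson-distributed with mean k_prod * volume / k_deg, so the coefficient of variation falls off as 1/sqrt(volume), matching the observed decrease of noise power with system volume.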

  10. How to simulate global cosmic strings with large string tension

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klaer, Vincent B.; Moore, Guy D., E-mail: vklaer@theorie.ikp.physik.tu-darmstadt.de, E-mail: guy.moore@physik.tu-darmstadt.de

    Global string networks may be relevant for axion production in the early Universe, as well as in other cosmological scenarios. Such networks contain a large hierarchy of scales between the string core scale and the Hubble scale, ln(f_a/H) ∼ 70, which influences the network dynamics by giving the strings large tensions T ≅ π f_a² ln(f_a/H). We present a new numerical approach to simulate such global string networks, capturing the tension without an exponentially large lattice.

  11. Large-scale derived flood frequency analysis based on continuous simulation

    NASA Astrophysics Data System (ADS)

    Dung Nguyen, Viet; Hundecha, Yeshewatesfa; Guse, Björn; Vorogushyn, Sergiy; Merz, Bruno

    2016-04-01

    There is an increasing need for spatially consistent flood risk assessments at the regional scale (several 100,000 km²), in particular in the insurance industry and for national risk reduction strategies. However, most large-scale flood risk assessments are composed of smaller-scale assessments and show spatial inconsistencies. To overcome this deficit, a large-scale flood model composed of a weather generator and catchment models was developed, reflecting the spatially inherent heterogeneity. The weather generator is a multisite and multivariate stochastic model capable of generating synthetic meteorological fields (precipitation, temperature, etc.) at daily resolution for the regional scale. These fields respect the observed autocorrelation, spatial correlation and covariance between the variables, and are used as input to the catchment models. A long-term simulation of this combined system enables the derivation of very long discharge series at many catchment locations, serving as a basis for spatially consistent flood risk estimates at the regional scale. This combined model was set up and validated for major river catchments in Germany. The weather generator was trained on 53 years of observation data at 528 stations covering not only the whole of Germany but also parts of France, Switzerland, the Czech Republic and Austria, with an aggregate spatial extent of 443,931 km². 10,000 years of daily meteorological fields for the study area were generated. Likewise, rainfall-runoff simulations with SWIM were performed for the entire Elbe, Rhine, Weser, Donau and Ems catchments. The validation results illustrate a good performance of the combined system, as the simulated flood magnitudes and frequencies agree well with the observed flood data. Based on continuous simulation, this model chain is then used to estimate flood quantiles for the whole of Germany, including upstream headwater catchments in neighbouring countries. This continuous large-scale approach overcomes the several
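    The basic payoff of continuous simulation, reading design floods directly off a very long synthetic series, can be sketched in a few lines (a generic empirical-quantile helper, not the authors' code):

```python
def flood_quantile(annual_maxima, return_period):
    """Empirical T-year flood from N simulated years of annual maxima:
    the (1 - 1/T) empirical quantile of the series.  With 10,000
    simulated years, the 100-year flood can be read off directly,
    without fitting an extreme-value distribution."""
    xs = sorted(annual_maxima)
    rank = int((1.0 - 1.0 / return_period) * len(xs))
    return xs[min(rank, len(xs) - 1)]
```

    For example, with 10,000 simulated years the 100-year flood is simply the 99th-percentile annual maximum of the synthetic discharge series.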

  12. Large Eddy simulation of turbulence: A subgrid scale model including shear, vorticity, rotation, and buoyancy

    NASA Technical Reports Server (NTRS)

    Canuto, V. M.

    1994-01-01

    The Reynolds numbers that characterize geophysical and astrophysical turbulence (Re ≈ 10^8 for the planetary boundary layer and Re ≈ 10^14 for the Sun's interior) are too large to allow a direct numerical simulation (DNS) of the fundamental Navier-Stokes and temperature equations. In fact, the required number of spatial grid points, N ≈ Re^(9/4), exceeds the computational capability of today's supercomputers. Alternative treatments are the ensemble-time average approach and the volume average approach. Since the first method (the Reynolds stress approach) is largely analytical, the resulting turbulence equations entail manageable computational requirements and can thus be linked to a stellar evolutionary code or, in the geophysical case, to general circulation models. In the volume average approach, one carries out a large eddy simulation (LES) which resolves numerically the largest scales, while the unresolved scales must be treated theoretically with a subgrid scale model (SGS). Contrary to the ensemble average approach, the LES+SGS approach has considerable computational requirements. Even if this prevents (for the time being) an LES+SGS model from being linked to stellar or geophysical codes, it is still of the greatest relevance as an 'experimental tool' to be used, inter alia, to improve the parameterizations needed in the ensemble average approach. Such a methodology has been successfully adopted in studies of the convective planetary boundary layer. Experience with the LES+SGS approach in different fields has shown that its reliability depends on the soundness of the SGS model, both for numerical stability and for physical completeness. At present, the most widely used SGS model, the Smagorinsky model, accounts for the effect of the shear induced by the large resolved scales on the unresolved scales but does not account for the effects of buoyancy, anisotropy, rotation, and stable stratification. The

  13. Large Eddy simulation of turbulence: A subgrid scale model including shear, vorticity, rotation, and buoyancy

    NASA Astrophysics Data System (ADS)

    Canuto, V. M.

    1994-06-01

    The Reynolds numbers that characterize geophysical and astrophysical turbulence (Re ≈ 10^8 for the planetary boundary layer and Re ≈ 10^14 for the Sun's interior) are too large to allow a direct numerical simulation (DNS) of the fundamental Navier-Stokes and temperature equations. In fact, the required number of spatial grid points, N ≈ Re^(9/4), exceeds the computational capability of today's supercomputers. Alternative treatments are the ensemble-time average approach and the volume average approach. Since the first method (the Reynolds stress approach) is largely analytical, the resulting turbulence equations entail manageable computational requirements and can thus be linked to a stellar evolutionary code or, in the geophysical case, to general circulation models. In the volume average approach, one carries out a large eddy simulation (LES) which resolves numerically the largest scales, while the unresolved scales must be treated theoretically with a subgrid scale model (SGS). Contrary to the ensemble average approach, the LES+SGS approach has considerable computational requirements. Even if this prevents (for the time being) an LES+SGS model from being linked to stellar or geophysical codes, it is still of the greatest relevance as an 'experimental tool' to be used, inter alia, to improve the parameterizations needed in the ensemble average approach. Such a methodology has been successfully adopted in studies of the convective planetary boundary layer. Experience with the LES+SGS approach in different fields has shown that its reliability depends on the soundness of the SGS model, both for numerical stability and for physical completeness. At present, the most widely used SGS model, the Smagorinsky model, accounts for the effect of the shear induced by the large resolved scales on the unresolved scales but does not account for the effects of buoyancy, anisotropy, rotation, and stable stratification. The latter phenomenon
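The Smagorinsky closure referred to above sets the subgrid eddy viscosity from the resolved strain rate: nu_t = (C_s Δ)^2 |S| with |S| = sqrt(2 S_ij S_ij). A minimal 2D sketch using central differences follows; the grid, velocity fields, and the constant cs = 0.17 (a commonly quoted value) are illustrative assumptions, not taken from this record:

```python
def smagorinsky_nu_t(u, v, dx, cs=0.17):
    """Smagorinsky eddy viscosity nu_t = (cs*dx)^2 * |S| on a uniform 2D grid,
    with |S| = sqrt(2 S_ij S_ij) from central differences (interior points only)."""
    ny, nx = len(u), len(u[0])
    nu = [[0.0] * nx for _ in range(ny)]
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            dudx = (u[j][i + 1] - u[j][i - 1]) / (2 * dx)
            dudy = (u[j + 1][i] - u[j - 1][i]) / (2 * dx)
            dvdx = (v[j][i + 1] - v[j][i - 1]) / (2 * dx)
            dvdy = (v[j + 1][i] - v[j - 1][i]) / (2 * dx)
            s11, s22 = dudx, dvdy
            s12 = 0.5 * (dudy + dvdx)          # symmetric strain-rate component
            smag = (2.0 * (s11 * s11 + s22 * s22 + 2.0 * s12 * s12)) ** 0.5
            nu[j][i] = (cs * dx) ** 2 * smag
    return nu
```

For a pure shear flow u = γy, v = 0, the formula reduces to nu_t = (cs*dx)^2 * γ, which illustrates the record's point: the model responds only to resolved shear, with no term for buoyancy, rotation, or stratification.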

  14. Very large eddy simulation of the Red Sea overflow

    NASA Astrophysics Data System (ADS)

    Ilıcak, Mehmet; Özgökmen, Tamay M.; Peters, Hartmut; Baumert, Helmut Z.; Iskandarani, Mohamed

    Mixing between overflows and ambient water masses is a critical problem of deep-water mass formation in the downwelling branch of the meridional overturning circulation of the ocean. Modeling approaches that have been tested so far rely either on algebraic parameterizations in hydrostatic ocean circulation models, or on large eddy simulations that resolve most of the mixing using nonhydrostatic models. In this study, we examine the performance of a set of turbulence closures that have not previously been tested against observational data for overflows. We employ the so-called very large eddy simulation (VLES) technique, which allows the use of k-ɛ models in nonhydrostatic models. This is done by applying a dynamic spatial filter to the k-ɛ equations. To our knowledge, this is the first time that the VLES approach has been adopted for an ocean modeling problem. The performance of the k-ɛ and VLES models is evaluated by conducting numerical simulations of the Red Sea overflow and comparing them to observations from the Red Sea Outflow Experiment (REDSOX). The computations are constrained to one of the main channels transporting the overflow, which is narrow enough to permit the use of a two-dimensional (and nonhydrostatic) model. A large set of experiments is conducted using different closure models, Reynolds numbers and spatial resolutions. It is found that, when no turbulence closure is used, the basic structure of the overflow, consisting of a well-mixed bottom layer (BL) and an entraining interfacial layer (IL), cannot be reproduced. The k-ɛ model leads to unrealistic thicknesses for both the BL and IL, while VLES results in the most realistic reproduction of the REDSOX observations.

  15. Quantification of a maximum injection volume of CO2 to avert geomechanical perturbations using a compositional fluid flow reservoir simulator

    NASA Astrophysics Data System (ADS)

    Jung, Hojung; Singh, Gurpreet; Espinoza, D. Nicolas; Wheeler, Mary F.

    2018-02-01

    Subsurface CO2 injection and storage alters formation pressure. Changes of pore pressure may result in fault reactivation and hydraulic fracturing if the pressure exceeds the corresponding thresholds. Most simulation models predict such thresholds using relatively homogeneous reservoir rock models and do not account for CO2 dissolution in the brine phase when calculating pore pressure evolution. This study presents an estimation of reservoir capacity in terms of allowable injection volume and rate, using the Frio CO2 injection site on the Gulf of Mexico coast as a case study. The work includes laboratory core testing, well-logging data analyses, and reservoir numerical simulation. We built a fine-scale reservoir model of the Frio pilot test in our in-house reservoir simulator IPARS (Integrated Parallel Accurate Reservoir Simulator). We first performed history matching of the pressure transient data of the Frio pilot test, and then used this history-matched reservoir model to investigate the effect of CO2 dissolution into brine and predict the implications of larger CO2 injection volumes. Our simulation results including CO2 dissolution exhibited 33% lower pressure build-up relative to the simulation excluding dissolution. Capillary heterogeneity helps spread the CO2 plume and facilitates early breakthrough. Formation expansivity helps alleviate pore pressure build-up. Simulation results suggest that the injection schedule adopted during the actual pilot test very likely did not affect the mechanical integrity of the storage complex. Fault reactivation requires an injection volume at least about sixty times larger than the actual injected volume at the same injection rate. Hydraulic fracturing necessitates much larger injection rates than the ones used in the Frio pilot test. Tested rock samples exhibit ductile deformation at in-situ effective stresses.
Hence, we do not expect an increase of fault permeability in the Frio sand even in the presence of

  16. Multiscale Data Assimilation for Large-Eddy Simulations

    NASA Astrophysics Data System (ADS)

    Li, Z.; Cheng, X.; Gustafson, W. I., Jr.; Xiao, H.; Vogelmann, A. M.; Endo, S.; Toto, T.

    2017-12-01

    Large-eddy simulation (LES) is a powerful tool for understanding atmospheric turbulence, boundary layer physics and cloud development, and there is a great need for developing data assimilation methodologies that can constrain LES models. The U.S. Department of Energy Atmospheric Radiation Measurement (ARM) User Facility has been developing the capability to routinely generate ensembles of LES. The LES ARM Symbiotic Simulation and Observation (LASSO) project (https://www.arm.gov/capabilities/modeling/lasso) is generating simulations for shallow convection days at the ARM Southern Great Plains site in Oklahoma. One of the major objectives of LASSO is to develop the capability to observationally constrain LES using a hierarchy of ARM observations. We have implemented a multiscale data assimilation (MSDA) scheme, which allows data assimilation to be applied separately at distinct spatial scales, so that localized observations can be effectively assimilated to constrain the mesoscale fields in the LES area, which is about 15 km in width. The MSDA analysis is used to produce forcing data that drive the LES. With this LES workflow we have examined 13 days with shallow convection selected from the period May-August 2016. We will describe the implementation of MSDA, present LES results, and address challenges and opportunities for applying data assimilation to LES studies.
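The essence of a scale-separated assimilation step can be pictured with a toy 1D sketch (this is an illustration of the general idea, not the LASSO/MSDA implementation): split the background into a large-scale part and a small-scale residual, nudge only the large-scale part toward the observed large-scale field, and recombine. The running-mean filter, half-width, and gain below are all illustrative assumptions:

```python
def running_mean(x, w):
    """Simple large-scale filter: centered running mean of half-width w."""
    n = len(x)
    return [sum(x[max(0, i - w):min(n, i + w + 1)]) /
            (min(n, i + w + 1) - max(0, i - w)) for i in range(n)]

def multiscale_analysis(background, obs_large, w=2, gain=0.5):
    """Update only the large-scale component of the background toward the
    observed large-scale field; keep the small-scale residual untouched."""
    n = len(background)
    bg_large = running_mean(background, w)
    small = [background[i] - bg_large[i] for i in range(n)]
    analysed_large = [bg_large[i] + gain * (obs_large[i] - bg_large[i])
                      for i in range(n)]
    return [analysed_large[i] + small[i] for i in range(n)]
```

The point of the separation is that sparse observations constrain the mesoscale field without overwriting the turbulent detail the LES is meant to develop on its own.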

  17. LASSIE: simulating large-scale models of biochemical systems on GPUs.

    PubMed

    Tangherloni, Andrea; Nobile, Marco S; Besozzi, Daniela; Mauri, Giancarlo; Cazzaniga, Paolo

    2017-05-10

    Mathematical modeling and in silico analysis are widely acknowledged as complementary tools to biological laboratory methods for achieving a thorough understanding of the emergent behaviors of cellular processes in both physiological and perturbed conditions. However, the simulation of large-scale models (consisting of hundreds or thousands of reactions and molecular species) can rapidly exceed the capabilities of Central Processing Units (CPUs). The purpose of this work is to exploit alternative high-performance computing solutions, such as Graphics Processing Units (GPUs), to allow the investigation of these models at reduced computational costs. LASSIE is a "black-box" GPU-accelerated deterministic simulator, specifically designed for large-scale models and not requiring any expertise in mathematical modeling, simulation algorithms or GPU programming. Given a reaction-based model of a cellular process, LASSIE automatically generates the corresponding system of Ordinary Differential Equations (ODEs), assuming mass-action kinetics. The numerical solution of the ODEs is obtained by automatically switching between the Runge-Kutta-Fehlberg method in the absence of stiffness and first-order Backward Differentiation Formulae in the presence of stiffness. The computational performance of LASSIE is assessed using a set of randomly generated synthetic reaction-based models of increasing size, ranging from 64 to 8192 reactions and species, and compared to a CPU implementation of the LSODA numerical integration algorithm. LASSIE adopts a novel fine-grained parallelization strategy to distribute across the GPU cores all the calculations required to solve the system of ODEs. By virtue of this implementation, LASSIE achieves up to 92× speed-up with respect to LSODA, reducing the running time from approximately 1 month down to 8 h to simulate models consisting of, for instance, four thousand reactions and species. 
Notably, thanks to its smaller memory footprint, LASSIE
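The mapping from a reaction-based model to mass-action ODEs that LASSIE automates can be sketched in a few lines. The reaction-list format, the toy A + B → C model, and the explicit RK4 integrator below are illustrative assumptions for this sketch (LASSIE itself uses Runge-Kutta-Fehlberg and switches to first-order BDF under stiffness, on the GPU):

```python
def mass_action_rhs(state, reactions):
    """ODE right-hand side under mass-action kinetics. Each reaction is
    (rate constant, reactant indices, stoichiometry change vector)."""
    d = [0.0] * len(state)
    for k, reactants, stoich in reactions:
        flux = k
        for r in reactants:
            flux *= state[r]          # mass-action: rate ~ product of reactants
        for i, nu in enumerate(stoich):
            d[i] += nu * flux
    return d

def rk4_step(state, reactions, dt):
    """One explicit Runge-Kutta 4 step (an implicit method such as BDF1
    would replace this when the system is stiff)."""
    f = lambda s: mass_action_rhs(s, reactions)
    k1 = f(state)
    k2 = f([state[i] + 0.5 * dt * k1[i] for i in range(len(state))])
    k3 = f([state[i] + 0.5 * dt * k2[i] for i in range(len(state))])
    k4 = f([state[i] + dt * k3[i] for i in range(len(state))])
    return [state[i] + dt / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
            for i in range(len(state))]

# Toy model: A + B -> C with rate constant 1.0; species order [A, B, C]
reactions = [(1.0, [0, 1], [-1, -1, 1])]
state = [1.0, 0.8, 0.0]
for _ in range(1000):                 # integrate to t = 10 with dt = 0.01
    state = rk4_step(state, reactions, 0.01)
```

Because the stoichiometry enters the right-hand side linearly, conservation laws such as A + C = const are preserved exactly by the Runge-Kutta update, which is a handy sanity check on any such generator.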

  18. ADHydro: A Large-scale High Resolution Multi-Physics Distributed Water Resources Model for Water Resource Simulations in a Parallel Computing Environment

    NASA Astrophysics Data System (ADS)

    lai, W.; Steinke, R. C.; Ogden, F. L.

    2013-12-01

    Physics-based watershed models are useful tools for hydrologic studies, water resources management and economic analyses in the contexts of climate, land-use, and water-use changes. This poster presents the development of a physics-based, high-resolution, distributed water resources model suitable for simulating large watersheds in a massively parallel computing environment. Developing this model is one of the objectives of the NSF EPSCoR RII Track II CI-WATER project, which is joint between Wyoming and Utah. The model, which we call ADHydro, is aimed at simulating important processes in the Rocky Mountain west, including rainfall and infiltration, snowfall and snowmelt in complex terrain, vegetation and evapotranspiration, soil heat flux and freezing, overland flow, channel flow, groundwater flow and water management. The ADHydro model uses the explicit finite volume method to solve the PDEs for 2D overland flow and 2D saturated groundwater flow coupled to 1D channel flow. The model has a quasi-3D formulation that couples 2D overland flow and 2D saturated groundwater flow using the 1D Talbot-Ogden finite water-content infiltration and redistribution model. This eliminates difficulties in solving the highly nonlinear 3D Richards equation, while the finite volume Talbot-Ogden infiltration solution is computationally efficient, guaranteed to conserve mass, and allows simulation of the effect of near-surface groundwater tables on runoff generation. The process-level components of the model are being individually tested and validated. The model as a whole will be tested on the Green River basin in Wyoming and ultimately applied to the entire Upper Colorado River basin. ADHydro development has necessitated the development of tools for large-scale watershed modeling, including open-source workflow steps to extract hydromorphological information from GIS data, integrate hydrometeorological and water management forcing input, and post-process and visualize large output data
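The mass-conservation property of explicit finite volume schemes mentioned above can be illustrated with a generic 1D sketch (not ADHydro code): cell values change only through face fluxes, so whatever leaves one cell enters its neighbor and total mass is conserved by construction. Here a first-order upwind update for dh/dt + d(c·h)/dx = 0 with periodic boundaries; the wave speed and grid are illustrative:

```python
def fv_step(h, speed, dx, dt):
    """One explicit finite-volume update of dh/dt + d(speed*h)/dx = 0 using
    first-order upwind face fluxes (speed > 0) and periodic boundaries.
    flux[i] is the flux through the LEFT face of cell i."""
    n = len(h)
    flux = [speed * h[i - 1] for i in range(n)]
    return [h[i] - dt / dx * (flux[(i + 1) % n] - flux[i]) for i in range(n)]
```

Under the CFL condition speed*dt/dx ≤ 1 the update is a convex combination of neighboring cells, so it also preserves non-negativity of water depth, which matters for hydrologic states.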

  19. The effectiveness of using computer simulated experiments on junior high students' understanding of the volume displacement concept

    NASA Astrophysics Data System (ADS)

    Choi, Byung-Soon; Gennaro, Eugene

    Several researchers have suggested that the computer holds much promise as a tool for science teachers to use in their classrooms (Bork, 1979; Lunetta & Hofstein, 1981). It also has been said that there needs to be more research to determine the effectiveness of computer software (Tinker, 1983). This study compared the effectiveness of microcomputer-simulated experiences with that of parallel instruction involving hands-on laboratory experiences for teaching the concept of volume displacement to junior high school students. This study also assessed the differential effect on students' understanding of the volume displacement concept using sex of the students as another independent variable. In addition, it compared the degree of retention, after 45 days, of both treatment groups. It was found that computer-simulated experiences were as effective as hands-on laboratory experiences, and that males, having had hands-on laboratory experiences, performed better on the posttest than females having had the hands-on laboratory experiences. There were no significant differences in performance when comparing males with females using the computer simulation in the learning of the displacement concept. This study also showed that there were no significant differences in the retention levels when the retention scores of the computer simulation groups were compared to those that had the hands-on laboratory experiences. However, an ANOVA of the retention test scores revealed that males in both treatment conditions retained knowledge of volume displacement better than females.

  20. A Mixed Finite Volume Element Method for Flow Calculations in Porous Media

    NASA Technical Reports Server (NTRS)

    Jones, Jim E.

    1996-01-01

    A key ingredient in the simulation of flow in porous media is the accurate determination of the velocities that drive the flow. The large scale irregularities of the geology, such as faults, fractures, and layers suggest the use of irregular grids in the simulation. Work has been done in applying the finite volume element (FVE) methodology as developed by McCormick in conjunction with mixed methods which were developed by Raviart and Thomas. The resulting mixed finite volume element discretization scheme has the potential to generate more accurate solutions than standard approaches. The focus of this paper is on a multilevel algorithm for solving the discrete mixed FVE equations. The algorithm uses a standard cell centered finite difference scheme as the 'coarse' level and the more accurate mixed FVE scheme as the 'fine' level. The algorithm appears to have potential as a fast solver for large size simulations of flow in porous media.
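The "coarse" cell-centered scheme referred to above can be sketched in 1D for the pressure equation -(k p')' = f that drives Darcy velocities: harmonic-mean face permeabilities handle the permeability jumps typical of layered geology, and the resulting tridiagonal system is solved with the Thomas algorithm. This is an illustrative sketch, not the paper's mixed finite volume element discretization:

```python
def darcy_pressure_1d(k, f, dx, p_left, p_right):
    """Cell-centered finite volumes for -(k p')' = f with Dirichlet ends.
    k and f are per-cell; face transmissibilities use harmonic means."""
    n = len(k)
    # n+1 face transmissibilities; boundary faces sit half a cell away
    T = [2.0 * k[0] / dx]
    T += [2.0 * k[i] * k[i + 1] / (k[i] + k[i + 1]) / dx for i in range(n - 1)]
    T += [2.0 * k[-1] / dx]
    a = [-T[i] for i in range(n)]          # sub-diagonal (cell i-1)
    c = [-T[i + 1] for i in range(n)]      # super-diagonal (cell i+1)
    b = [T[i] + T[i + 1] for i in range(n)]
    r = [f[i] * dx for i in range(n)]
    r[0] += T[0] * p_left                  # fold boundary values into RHS
    r[-1] += T[n] * p_right
    # Thomas algorithm: forward elimination, back substitution
    for i in range(1, n):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        r[i] -= w * r[i - 1]
    p = [0.0] * n
    p[-1] = r[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        p[i] = (r[i] - c[i] * p[i + 1]) / b[i]
    return p
```

With constant permeability and no source the scheme reproduces the exact linear pressure profile, which is the baseline any more accurate mixed discretization must also meet.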

  1. Hierarchical Engine for Large-scale Infrastructure Co-Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2017-04-24

    HELICS is designed to support very-large-scale (100,000+ federates) co-simulations with off-the-shelf power-system, communication, market, and end-use tools. Other key features include cross-platform operating system support, the integration of both event-driven (e.g., packetized communication) and time-series (e.g., power flow) simulations, and the ability to co-iterate among federates to ensure physical model convergence at each time step.

  2. Cut-cell method based large-eddy simulation of tip-leakage flow

    NASA Astrophysics Data System (ADS)

    Pogorelov, Alexej; Meinke, Matthias; Schröder, Wolfgang

    2015-07-01

    The turbulent low Mach number flow through an axial fan at a Reynolds number of 9.36 × 10^5, based on the outer casing diameter, is investigated by large-eddy simulation. A finite-volume flow solver in an unstructured hierarchical Cartesian setup for the compressible Navier-Stokes equations is used. To account for sharp edges, a fully conservative cut-cell approach is applied. A newly developed rotational periodic boundary condition for Cartesian meshes is introduced such that the simulations are performed for just a 72° segment, i.e., the flow field over one out of five axial blades is resolved. The focus of this numerical analysis is on the development of the vortical flow structures in the tip-gap region. A detailed grid convergence study is performed on four computational grids with 50 × 10^6, 250 × 10^6, 1 × 10^9, and 1.6 × 10^9 cells. Results of the instantaneous and the mean fan flow field are thoroughly analyzed based on the solution with 1 × 10^9 cells. High levels of turbulent kinetic energy and pressure fluctuations are generated by a tip-gap vortex upstream of the blade, the separating vortices inside the tip gap, and a counter-rotating vortex on the outer casing wall. An intermittent interaction of the turbulent wake, generated by the tip-gap vortex, with the downstream blade leads to a cyclic transition with high pressure fluctuations on the suction side of the blade and a decay of the tip-gap vortex. The disturbance of the tip-gap vortex results in an unsteady behavior of the turbulent wake causing the intermittent interaction. For this interaction and the cyclic transition, two dominant frequencies are identified which perfectly match the characteristic frequencies in the experimental sound power level and therefore explain their physical origin.

  3. Incarceration of umbilical hernia: a rare complication of large volume paracentesis.

    PubMed

    Khodarahmi, Iman; Shahid, Muhammad Usman; Contractor, Sohail

    2015-09-01

    We present two cases of umbilical hernia incarceration following large volume paracentesis (LVP) in patients with cirrhotic ascites. Both patients became symptomatic within 48 hours after the LVP. Although rare, given the significantly higher mortality rate of cirrhotic patients undergoing emergent herniorrhaphy, this complication of LVP is potentially serious. Therefore, it is recommended that patients be examined closely for the presence of umbilical hernias before removal of ascitic fluid, and that an attempt be made at external reduction of easily reducible hernias, if a hernia is present.

  4. Incarceration of umbilical hernia: a rare complication of large volume paracentesis

    PubMed Central

    Khodarahmi, Iman; Shahid, Muhammad Usman; Contractor, Sohail

    2015-01-01

    We present two cases of umbilical hernia incarceration following large volume paracentesis (LVP) in patients with cirrhotic ascites. Both patients became symptomatic within 48 hours after the LVP. Although rare, given the significantly higher mortality rate of cirrhotic patients undergoing emergent herniorrhaphy, this complication of LVP is potentially serious. Therefore, it is recommended that patients be examined closely for the presence of umbilical hernias before removal of ascitic fluid, and that an attempt be made at external reduction of easily reducible hernias, if a hernia is present. PMID:26629305

  5. Large Eddy Simulation of High-Speed, Premixed Ethylene Combustion

    NASA Technical Reports Server (NTRS)

    Ramesh, Kiran; Edwards, Jack R.; Chelliah, Harsha; Goyne, Christopher; McDaniel, James; Rockwell, Robert; Kirik, Justin; Cutler, Andrew; Danehy, Paul

    2015-01-01

    A large-eddy simulation / Reynolds-averaged Navier-Stokes (LES/RANS) methodology is used to simulate premixed ethylene-air combustion in a model scramjet designed for dual-mode operation and equipped with a cavity for flameholding. A 22-species reduced mechanism for ethylene-air combustion is employed, and the calculations are performed on a mesh containing 93 million cells. Fuel plumes injected at the isolator entrance are processed by the isolator shock train, yielding a premixed fuel-air mixture at an equivalence ratio of 0.42 at the cavity entrance plane. A premixed flame is anchored within the cavity and propagates toward the opposite wall. Near-complete combustion of ethylene is obtained. The combustor is highly dynamic, exhibiting a large-scale oscillation in global heat release and mass flow rate with a period of about 2.8 ms. Maximum heat release occurs when the flame front reaches its most downstream extent, where the flame surface area is largest. Minimum heat release is associated with flame propagation toward the cavity and occurs through a reduction in core flow velocity that is correlated with an upstream movement of the shock train. Reasonable agreement between simulation results and available wall pressure, particle image velocimetry, and OH-PLIF data is obtained, but it is not yet clear whether the system-level oscillations seen in the calculations are actually present in the experiment.

  6. A simulation study of Large Area Crop Inventory Experiment (LACIE) technology

    NASA Technical Reports Server (NTRS)

    Ziegler, L. (Principal Investigator); Potter, J.

    1979-01-01

    The author has identified the following significant results. The LACIE performance predictor (LPP) was used to replicate LACIE phase 2 for a 15-year period, using accuracy assessment results for phase 2 error components. Results indicated that the LPP simulated the LACIE phase 2 procedures reasonably well. For the 15-year simulation, only 7 of the 15 production estimates were within 10 percent of the true production. The simulations indicated that the acreage estimator, based on CAMS phase 2 procedures, has a negative bias. This bias was too large to support the 90/90 criterion with the CV observed and simulated for the phase 2 production estimator. Results of this simulation study validate the theory that the acreage variance estimator in LACIE was conservative.

  7. A dual resolution measurement based Monte Carlo simulation technique for detailed dose analysis of small volume organs in the skull base region

    NASA Astrophysics Data System (ADS)

    Yeh, Chi-Yuan; Tung, Chuan-Jung; Chao, Tsi-Chain; Lin, Mu-Han; Lee, Chung-Chi

    2014-11-01

    The purpose of this study was to examine the dose distribution of a skull base tumor and surrounding critical structures in response to high-dose intensity-modulated radiosurgery (IMRS) with Monte Carlo (MC) simulation using a dual resolution sandwich phantom. The measurement-based Monte Carlo (MBMC) method (Lin et al., 2009) was adopted for the study. The major components of the MBMC technique involve (1) the BEAMnrc code for beam transport through the treatment head of a Varian 21EX linear accelerator, (2) the DOSXYZnrc code for patient dose simulation and (3) an EPID-measured efficiency map which describes the non-uniform fluence distribution of the IMRS treatment beam. For the simulated case, five isocentric 6 MV photon beams were designed to deliver a total dose of 1200 cGy in two fractions to the skull base tumor. A sandwich phantom for the MBMC simulation was created based on the patient's CT scan of a skull base tumor [gross tumor volume (GTV) = 8.4 cm^3] near the right 8th cranial nerve. The phantom, consisting of a 1.2-cm-thick skull base region, had a voxel resolution of 0.05×0.05×0.1 cm^3 and was sandwiched between 0.05×0.05×0.3 cm^3 slices of a head phantom. A coarser 0.2×0.2×0.3 cm^3 single resolution (SR) phantom was also created for comparison with the sandwich phantom. A particle history of 3×10^8 for each beam was used for simulations of both the SR and the sandwich phantoms to achieve a statistical uncertainty of <2%. Our study showed that the planning target volume (PTV) receiving at least 95% of the prescribed dose (VPTV95) was 96.9%, 96.7% and 99.9% for the TPS, SR, and sandwich phantom, respectively. The maximum and mean doses to large organs such as the PTV, brain stem, and parotid gland for the TPS, SR and sandwich MC simulations did not show any significant difference; however, significant dose differences were observed for very small structures like the right 8th cranial nerve, right cochlea, right malleus and right semicircular canal. Dose

  8. Large eddy simulation of a wing-body junction flow

    NASA Astrophysics Data System (ADS)

    Ryu, Sungmin; Emory, Michael; Campos, Alejandro; Duraisamy, Karthik; Iaccarino, Gianluca

    2014-11-01

    We present numerical simulations of the wing-body junction flow experimentally investigated by Devenport & Simpson (1990). Wall-junction flows are common in engineering applications, but the relevant flow physics close to the corner region is not well understood. Moreover, the performance of turbulence models for the body-junction case is not well characterized. Motivated by this gap, we have numerically investigated the case with Reynolds-averaged Navier-Stokes (RANS) and Large Eddy Simulation (LES) approaches. The Vreman model, applied for the LES, and the SST k-ω model, for the RANS simulation, are validated with a focus on the ability to predict turbulence statistics near the junction region. Moreover, a sensitivity study of the form of the Vreman model will also be presented. This work is funded under NASA Cooperative Agreement NNX11AI41A (Technical Monitor Dr. Stephen Woodruff).

  9. Large Eddy Simulation of Cryogenic Injection Processes at Supercritical Pressure

    NASA Technical Reports Server (NTRS)

    Oefelein, Joseph C.; Garcia, Roberto (Technical Monitor)

    2002-01-01

    This paper highlights results from the first of a series of hierarchical simulations aimed at assessing the modeling requirements for application of the large eddy simulation technique to cryogenic injection and combustion processes in liquid rocket engines. The focus is on liquid-oxygen-hydrogen coaxial injectors at a condition where the liquid-oxygen is injected at a subcritical temperature into a supercritical environment. For this situation a diffusion dominated mode of combustion occurs in the presence of exceedingly large thermophysical property gradients. Though continuous, these gradients approach the behavior of a contact discontinuity. Significant real gas effects and transport anomalies coexist locally in colder regions of the flow, with ideal gas and transport characteristics occurring within the flame zone. The current focal point is on the interfacial region between the liquid-oxygen core and the coaxial hydrogen jet where the flame anchors itself.

  10. SimGen: A General Simulation Method for Large Systems.

    PubMed

    Taylor, William R

    2017-02-03

    SimGen is a stand-alone computer program that reads a script of commands to represent complex macromolecules, including proteins and nucleic acids, in a structural hierarchy that can then be viewed using an integral graphical viewer or animated through a high-level application programming interface in C++. Structural levels in the hierarchy range from α-carbon or phosphate backbones through secondary structure to domains, molecules, and multimers with each level represented in an identical data structure that can be manipulated using the application programming interface. Unlike most coarse-grained simulation approaches, the higher-level objects represented in SimGen can be soft, allowing the lower-level objects that they contain to interact directly. The default motion simulated by SimGen is a Brownian-like diffusion that can be set to occur across all levels of representation in the hierarchy. Links can also be defined between objects, which, when combined with large high-level random movements, result in an effective search strategy for constraint satisfaction, including structure prediction from predicted pairwise distances. The implementation of SimGen makes use of the hierarchic data structure to avoid unnecessary calculation, especially for collision detection, allowing it to be simultaneously run and viewed on a laptop computer while simulating large systems of over 20,000 objects. It has been used previously to model complex molecular interactions including the motion of a myosin-V dimer "walking" on an actin fibre, RNA stem-loop packing, and the simulation of cell motion and aggregation. Several extensions to this original functionality are described. Copyright © 2016 The Francis Crick Institute. Published by Elsevier Ltd. All rights reserved.
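The strategy of combining random movements with distance links for constraint satisfaction can be pictured as a greedy stochastic descent (a toy illustration only; SimGen's hierarchic representation, softness, and collision detection are not reproduced here, and the step schedule is an assumption):

```python
import math
import random

def refine(points, links, steps=20000, step0=0.5, seed=0):
    """Greedy stochastic search: propose a random Gaussian move of one point
    and keep it only if the total squared error of the distance links does
    not increase. Step size shrinks so late moves are fine adjustments."""
    rng = random.Random(seed)

    def cost(pts):
        return sum((math.dist(pts[i], pts[j]) - d) ** 2 for i, j, d in links)

    best = cost(points)
    for t in range(steps):
        s = step0 / (1.0 + t / 200.0)   # large early moves, small late ones
        i = rng.randrange(len(points))
        old = points[i]
        points[i] = tuple(c + rng.gauss(0.0, s) for c in old)
        new = cost(points)
        if new <= best:
            best = new
        else:
            points[i] = old              # reject the move
    return points, best
```

The large early moves let the search escape poor initial geometry, which is the same intuition as SimGen's "large high-level random movements" applied to whole domains rather than single points.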

  11. Large-Eddy Simulation of Turbulent Wall-Pressure Fluctuations

    NASA Technical Reports Server (NTRS)

    Singer, Bart A.

    1996-01-01

    Large-eddy simulations of a turbulent boundary layer with Reynolds number based on displacement thickness equal to 3500 were performed with two grid resolutions. The computations were continued for sufficient time to obtain frequency spectra with resolved frequencies that correspond to the most important structural frequencies on an aircraft fuselage. The turbulent stresses were adequately resolved with both resolutions. Detailed quantitative analysis of a variety of statistical quantities associated with the wall-pressure fluctuations revealed similar behavior for both simulations. The primary differences were associated with the lack of resolution of the high-frequency data in the coarse-grid calculation and the increased jitter (due to the lack of multiple realizations for averaging purposes) in the fine-grid calculation. A new curve fit was introduced to represent the spanwise coherence of the cross-spectral density.

  12. A New Electropositive Filter for Concentrating Enterovirus and Norovirus from Large Volumes of Water - MCEARD

    EPA Science Inventory

    The detection of enteric viruses in environmental water usually requires the concentration of viruses from large volumes of water. The 1MDS electropositive filter is commonly used for concentrating enteric viruses from water but unfortunately these filters are not cost-effective...

  13. Development of a Solid Phase Extraction Method for Agricultural Pesticides in Large-Volume Water Samples

    EPA Science Inventory

    An analytical method using solid phase extraction (SPE) and analysis by gas chromatography/mass spectrometry (GC/MS) was developed for the trace determination of a variety of agricultural pesticides and selected transformation products in large-volume high-elevation lake water sa...

  14. Effects of Eddy Viscosity on Time Correlations in Large Eddy Simulation

    NASA Technical Reports Server (NTRS)

    He, Guowei; Rubinstein, R.; Wang, Lian-Ping; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    Subgrid-scale (SGS) models for large eddy simulation (LES) have generally been evaluated by their ability to predict single-time statistics of turbulent flows such as kinetic energy and Reynolds stresses. Recent applications of large eddy simulation to the evaluation of sound sources in turbulent flows, a problem in which time correlations determine the frequency distribution of acoustic radiation, suggest that subgrid models should also be evaluated by their ability to predict time correlations in turbulent flows. This paper compares the two-point, two-time Eulerian velocity correlation evaluated from direct numerical simulation (DNS) with that evaluated from LES, using a spectral eddy viscosity, for isotropic homogeneous turbulence. It is found that the LES fields are too coherent, in the sense that their time correlations decay more slowly than the corresponding time correlations in the DNS fields. This observation is confirmed by theoretical estimates of time correlations using the Taylor expansion technique. The reason for the slower decay is that the eddy viscosity does not include the random backscatter, which decorrelates fluid motion at large scales. An effective eddy viscosity associated with time correlations is formulated, to which the eddy viscosity associated with energy transfer is a leading-order approximation.
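
    The diagnostic at the heart of this comparison, a normalized time correlation whose slow decay signals an over-coherent field, can be illustrated on synthetic one-dimensional signals (a minimal stand-in; the paper compares two-point, two-time correlations of full DNS and LES velocity fields):

```python
import numpy as np

def time_correlation(u, max_lag):
    """Normalized correlation C(tau) = <u'(t) u'(t+tau)> / <u'(t)^2>
    estimated from a single time series."""
    u = np.asarray(u, dtype=float) - np.mean(u)
    var = np.mean(u * u)
    return np.array([np.mean(u[:len(u) - k] * u[k:]) / var
                     for k in range(max_lag + 1)])

rng = np.random.default_rng(1)
noise = rng.standard_normal(20000)                       # rapidly decorrelating
smooth = np.convolve(noise, np.ones(50) / 50, mode="valid")  # low-pass filtered,
                                                             # i.e. "too coherent"
c_noise = time_correlation(noise, 10)
c_smooth = time_correlation(smooth, 10)
print(c_noise[5], c_smooth[5])
```

The low-pass-filtered signal keeps a high correlation out to large lags, which is the sense in which a field with too little small-scale decorrelation (e.g. missing backscatter) appears "too coherent."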

  15. Evidence for Decreased Brain Parenchymal Volume After Large Intracerebral Hemorrhages: a Potential Mechanism Limiting Intracranial Pressure Rises.

    PubMed

    Williamson, Michael R; Colbourne, Frederick

    2017-08-01

    Potentially fatal intracranial pressure (ICP) rises commonly occur after large intracerebral hemorrhages (ICH). We monitored ICP after infusing 100-160 μL of autologous blood (vs. 0 μL control) into the striatum of rats in order to test the validity of this common model with regard to ICP elevations. Other endpoints included body temperature, behavioral impairment, lesion volume, and edema. Also, we evaluated hippocampal CA1 sector and somatosensory cortical neuron morphology to assess whether global ischemic injury occurred. Despite massive blood infusions, ICP only modestly increased (160 μL 10.8 ± 2.1 mmHg for <36 h vs. control 3.4 ± 0.5 mmHg), with little peri-hematoma edema at 3 days. Body temperature was not affected. Behavioral deficits and tissue loss were infusion volume-dependent. There was no histological evidence of hippocampal or cortical injury, indicating that cell death was confined to the hematoma and closely surrounding tissue. Surprisingly, the most severe hemorrhages significantly increased cell density (~15-20%) and reduced cell body size (~30%) in regions outside the injury site. Additionally, decreased cell size and increased density were observed after collagenase-induced ICH. Parenchymal volume is seemingly reduced after large ICH. Thus, in addition to well-known compliance mechanisms (e.g., displacement of cerebrospinal fluid and cerebral blood), reduced brain parenchymal volume appears to limit ICP rises in rodents with very large mass lesions.

  16. Can Atmospheric Reanalysis Data Sets Be Used to Reproduce Flooding Over Large Scales?

    NASA Astrophysics Data System (ADS)

    Andreadis, Konstantinos M.; Schumann, Guy J.-P.; Stampoulis, Dimitrios; Bates, Paul D.; Brakenridge, G. Robert; Kettner, Albert J.

    2017-10-01

    Floods are costly to global economies and can be exceptionally lethal. The ability to produce consistent flood hazard maps over large areas could provide a significant contribution to reducing such losses, as the lack of knowledge concerning flood risk is a major factor in the transformation of river floods into flood disasters. In order to accurately reproduce flooding in river channels and floodplains, high spatial resolution hydrodynamic models are needed. Despite being computationally expensive, recent advances have made their continental to global implementation feasible, although inputs for long-term simulations may require the use of reanalysis meteorological products, especially in data-poor regions. We employ a coupled hydrologic/hydrodynamic model cascade forced by the 20CRv2 reanalysis data set and evaluate its ability to reproduce flood inundation area and volume for Australia during the 1973-2012 period. Ensemble simulations using the reanalysis data were performed to account for uncertainty in the meteorology and compared with a validated benchmark simulation. Results show that the reanalysis ensemble captures the inundated areas and volumes relatively well, with correlations for the ensemble mean of 0.82 and 0.85 for area and volume, respectively, although the meteorological ensemble spread propagates into large uncertainty in the simulated flood characteristics.

  17. Hybrid Reynolds-Averaged/Large Eddy Simulation of the Flow in a Model SCRamjet Cavity Flameholder

    NASA Technical Reports Server (NTRS)

    Baurle, R. A.

    2016-01-01

    Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. Experimental data available for this configuration include velocity statistics obtained from particle image velocimetry. Several turbulence models were used for the steady-state Reynolds-averaged simulations, which included both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged/large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken to not only assess the performance of the hybrid Reynolds-averaged/large eddy simulation modeling approach in a flowfield of interest to the scramjet research community, but to also begin to understand how this capability can best be used to augment standard Reynolds-averaged simulations. The numerical errors were quantified for the steady-state simulations, and at least qualitatively assessed for the scale-resolving simulations prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results displayed a high degree of variability when comparing the flameholder fuel distributions obtained from each turbulence model. This prompted the consideration of applying the higher-fidelity scale-resolving simulations as a surrogate "truth" model to calibrate the Reynolds-averaged closures in a non-reacting setting prior to their use for the combusting simulations. In general, the Reynolds-averaged velocity profile predictions at the lowest fueling level matched the particle imaging measurements almost as well as was observed for the non-reacting condition. However, the velocity field predictions proved to be more sensitive to the flameholder fueling rate than was indicated in the measurements.

  18. Large-Scale Simulation of Multi-Asset Ising Financial Markets

    NASA Astrophysics Data System (ADS)

    Takaishi, Tetsuya

    2017-03-01

    We perform a large-scale simulation of an Ising-based financial market model that includes 300 asset time series. The financial system simulated by the model shows a fat-tailed return distribution and volatility clustering and exhibits unstable periods indicated by the volatility index measured as the average of absolute returns. Moreover, we determine that the cumulative risk fraction, which measures the system risk, changes at high volatility periods. We also calculate the inverse participation ratio (IPR) and its higher-power version, IPR6, from the absolute-return cross-correlation matrix. Finally, we show that the IPR and IPR6 also change at high volatility periods.
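
    The inverse participation ratio has a standard definition that is easy to sketch: for a normalized eigenvector v of the correlation matrix, IPR = Σ_k |v_k|⁴ (and IPR6 uses the sixth power). The toy 5-asset data below are purely illustrative, not the 300-asset series of the study:

```python
import numpy as np

def ipr(vec, power=4):
    """Inverse participation ratio of an eigenvector (normalized first):
    sum_k |v_k|**power. power=4 gives the usual IPR; power=6 gives IPR6.
    A fully localized vector yields 1; a uniform N-vector yields N**(1-power/2)."""
    v = np.asarray(vec, dtype=float)
    v = v / np.linalg.norm(v)
    return float(np.sum(np.abs(v) ** power))

# Cross-correlation matrix of absolute returns for a toy 5-asset system.
rng = np.random.default_rng(0)
returns = rng.standard_normal((1000, 5))
corr = np.corrcoef(np.abs(returns), rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)  # columns of eigvecs are eigenvectors
print([round(ipr(eigvecs[:, i]), 3) for i in range(5)])
```

Low IPR indicates an eigenmode spread across many assets (market-wide motion); IPR rising toward 1 indicates localization on a few assets, which is why its changes track unstable periods.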

  19. Wall-Resolved Large-Eddy Simulation of Flow Separation Over NASA Wall-Mounted Hump

    NASA Technical Reports Server (NTRS)

    Uzun, Ali; Malik, Mujeeb R.

    2017-01-01

    This paper reports the findings from a study that applies wall-resolved large-eddy simulation to investigate flow separation over the NASA wall-mounted hump geometry. Despite its conceptually simple flow configuration, this benchmark problem has proven to be a challenging test case for various turbulence simulation methods that have attempted to predict flow separation arising from the adverse pressure gradient on the aft region of the hump. The momentum-thickness Reynolds number of the incoming boundary layer has a value that is near the upper limit achieved by recent direct numerical simulation and large-eddy simulation of incompressible turbulent boundary layers. The high Reynolds number of the problem necessitates a significant number of grid points for wall-resolved calculations. The present simulations show a significant improvement in the separation-bubble length prediction compared to Reynolds-Averaged Navier-Stokes calculations. The current simulations also provide good overall prediction of the skin-friction distribution, including the relaminarization observed over the front portion of the hump due to the strong favorable pressure gradient. We discuss a number of problems that were encountered during the course of this work and present possible solutions. A systematic study regarding the effect of domain span, subgrid-scale model, tunnel back pressure, upstream boundary layer conditions and grid refinement is performed. The predicted separation-bubble length is found to be sensitive to the span of the domain. Despite the large number of grid points used in the simulations, some differences between the predictions and experimental observations still exist (particularly for Reynolds stresses) in the case of the wide-span simulation, suggesting that additional grid resolution may be required.

  20. Planetary Structures And Simulations Of Large-scale Impacts On Mars

    NASA Astrophysics Data System (ADS)

    Swift, Damian; El-Dasher, B.

    2009-09-01

    The impact of large meteoroids is a possible cause for isolated orogeny on bodies devoid of tectonic activity. On Mars, there is a significant, but not perfect, correlation between large, isolated volcanoes and antipodal impact craters. On Mercury and the Moon, brecciated terrain and other unusual surface features can be found at the antipodes of large impact sites. On Earth, there is a moderate correlation between long-lived mantle hotspots on opposite sides of the planet, with meteoroid impact suggested as a possible cause. If induced by impacts, the mechanisms of orogeny and volcanism thus appear to vary between these bodies, presumably because of differences in internal structure. Continuum mechanics (hydrocode) simulations have been used to investigate the response of planetary bodies to impacts, requiring assumptions about the structure of the body: its composition and temperature profile, and the constitutive properties (equation of state, strength, viscosity) of the components. We are able to predict theoretically and test experimentally the constitutive properties of matter under planetary conditions, with reasonable accuracy. To provide a reference series of simulations, we have constructed self-consistent planetary structures using simplified compositions (Fe core and basalt-like mantle), which turn out to agree surprisingly well with the moments of inertia. We have performed simulations of large-scale impacts, studying the transmission of energy to the antipodes. For Mars, significant antipodal heating to depths of a few tens of kilometers was predicted from compression waves transmitted through the mantle. Such heating is a mechanism for volcanism on Mars, possibly in conjunction with crustal cracking induced by surface waves. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  1. Tumor-volume simulation during radiotherapy for head-and-neck cancer using a four-level cell population model.

    PubMed

    Chvetsov, Alexei V; Dong, Lei; Palta, Jantinder R; Amdur, Robert J

    2009-10-01

    To develop a fast computational radiobiologic model for quantitative analysis of tumor volume during fractionated radiotherapy. The tumor-volume model can be useful for optimizing image-guidance protocols and four-dimensional treatment simulations in proton therapy that is highly sensitive to physiologic changes. The analysis is performed using two approximations: (1) tumor volume is a linear function of total cell number and (2) tumor-cell population is separated into four subpopulations: oxygenated viable cells, oxygenated lethally damaged cells, hypoxic viable cells, and hypoxic lethally damaged cells. An exponential decay model is used for disintegration and removal of oxygenated lethally damaged cells from the tumor. We tested our model on daily volumetric imaging data available for 14 head-and-neck cancer patients treated with an integrated computed tomography/linear accelerator system. A simulation based on the averaged values of radiobiologic parameters was able to describe eight cases during the entire treatment and four cases partially (50% of treatment time) with a maximum 20% error. The largest discrepancies between the model and clinical data were obtained for small tumors, which may be explained by larger errors in the manual tumor volume delineation procedure. Our results indicate that the change in gross tumor volume for head-and-neck cancer can be adequately described by a relatively simple radiobiologic model. In future research, we propose to study the variation of model parameters by fitting to clinical data for a cohort of patients with head-and-neck cancer and other tumors. The potential impact of other processes, like concurrent chemotherapy, on tumor volume should be evaluated.
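
    A minimal sketch of such a four-compartment model follows. All radiosensitivity and clearance parameters here are illustrative placeholders, not the fitted values from the paper; per the abstract, volume is taken proportional to total cell number, cells are killed per fraction via the linear-quadratic model, and only oxygenated lethally damaged cells are removed by exponential decay:

```python
import math

def simulate_tumor_volume(fractions, dose_per_fx=2.0, alpha=0.3, beta=0.03,
                          oer=2.5, clearance_half_life=5.0, hypoxic_frac=0.2):
    """Relative tumor volume over a course of daily fractions, using four
    subpopulations: oxygenated viable, oxygenated lethally damaged,
    hypoxic viable, hypoxic lethally damaged. Illustrative parameters only."""
    ox, ox_dam = 1.0 - hypoxic_frac, 0.0
    hyp, hyp_dam = hypoxic_frac, 0.0
    decay = math.log(2.0) / clearance_half_life  # per-day removal rate
    volumes = [ox + ox_dam + hyp + hyp_dam]
    for _ in range(fractions):
        d = dose_per_fx
        sf_ox = math.exp(-(alpha * d + beta * d * d))      # LQ survival, oxic
        d_h = d / oer                                      # OER-reduced dose
        sf_hyp = math.exp(-(alpha * d_h + beta * d_h * d_h))
        ox_dam += ox * (1.0 - sf_ox)    # killed cells move to damaged pools
        hyp_dam += hyp * (1.0 - sf_hyp)
        ox *= sf_ox
        hyp *= sf_hyp
        ox_dam *= math.exp(-decay)      # disintegration/removal, oxic damaged only
        volumes.append(ox + ox_dam + hyp + hyp_dam)
    return volumes

vols = simulate_tumor_volume(30)
print(vols[0], vols[-1])
```

Because killed cells linger in the damaged compartments before clearance, the modeled volume shrinks more slowly than the viable-cell count, which is the behavior such models use to fit imaged gross tumor volume.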

  2. The Large-scale Structure of the Universe: Probes of Cosmology and Structure Formation

    NASA Astrophysics Data System (ADS)

    Noh, Yookyung

    The usefulness of large-scale structure as a probe of cosmology and structure formation is increasing as large deep surveys in multi-wavelength bands are becoming possible. The observational analysis of large-scale structure, guided by large-volume numerical simulations, is beginning to offer us complementary information and crosschecks of cosmological parameters estimated from the anisotropies in Cosmic Microwave Background (CMB) radiation. Understanding structure formation and evolution and even galaxy formation history is also being aided by observations of different redshift snapshots of the Universe, using various tracers of large-scale structure. This dissertation work covers aspects of large-scale structure from the baryon acoustic oscillation scale to that of large-scale filaments and galaxy clusters. First, I discuss the use of large-scale structure for high-precision cosmology. I investigate the reconstruction of the Baryon Acoustic Oscillation (BAO) peak within the context of Lagrangian perturbation theory, testing its validity in a large suite of cosmological volume N-body simulations. Then I consider galaxy clusters and the large-scale filaments surrounding them in a high-resolution N-body simulation. I investigate the geometrical properties of galaxy cluster neighborhoods, focusing on the filaments connected to clusters. Using mock observations of galaxy clusters, I explore the correlations of scatter in galaxy cluster mass estimates from multi-wavelength observations and different measurement techniques. I also examine the sources of the correlated scatter by considering the intrinsic and environmental properties of clusters.

  3. A volume-of-fluid method for simulation of compressible axisymmetric multi-material flow

    NASA Astrophysics Data System (ADS)

    de Niem, D.; Kührt, E.; Motschmann, U.

    2007-02-01

    A two-dimensional Eulerian hydrodynamic method for the numerical simulation of inviscid compressible axisymmetric multi-material flow in external force fields for the situation of pure fluids separated by macroscopic interfaces is presented. The method combines an implicit Lagrangian step with an explicit Eulerian advection step. Individual materials obey separate energy equations, fulfill general equations of state, and may possess different temperatures. Material volume is tracked using a piecewise linear volume-of-fluid method. An overshoot-free logically simple and economic material advection algorithm for cylinder coordinates is derived, in an algebraic formulation. New aspects arising in the case of more than two materials such as the material ordering strategy during transport are presented. One- and two-dimensional numerical examples are given.

  4. Hierarchical imaging: a new concept for targeted imaging of large volumes from cells to tissues.

    PubMed

    Wacker, Irene; Spomer, Waldemar; Hofmann, Andreas; Thaler, Marlene; Hillmer, Stefan; Gengenbach, Ulrich; Schröder, Rasmus R

    2016-12-12

    Imaging large volumes such as entire cells or small model organisms at nanoscale resolution seemed an unrealistic, rather tedious task so far. Now, technical advances have led to several electron microscopy (EM) large volume imaging techniques. One is array tomography, where ribbons of ultrathin serial sections are deposited on solid substrates like silicon wafers or glass coverslips. To ensure reliable retrieval of multiple ribbons from the boat of a diamond knife, we introduce a substrate holder with 7 axes of translation or rotation specifically designed for that purpose. With this device we are able to deposit hundreds of sections in an ordered way in an area of 22 × 22 mm, the size of a coverslip. Imaging such arrays in a standard wide field fluorescence microscope produces reconstructions with 200 nm lateral resolution and 100 nm (the section thickness) resolution in z. By hierarchical imaging cascades in the scanning electron microscope (SEM), using a new software platform, we can address volumes from single cells to complete organs. In our first example, a cell population isolated from zebrafish spleen, we characterize different cell types according to their organelle inventory by segmenting 3D reconstructions of complete cells imaged with nanoscale resolution. In addition, by screening large numbers of cells at decreased resolution we can define the percentage at which different cell types are present in our preparation. With the second example, the root tip of cress, we illustrate how combining information from intermediate resolution data with high resolution data from selected regions of interest can drastically reduce the amount of data that has to be recorded. By imaging only the interesting parts of a sample, considerably less data need to be stored, handled and eventually analysed. Our custom-designed substrate holder allows reproducible generation of section libraries, which can then be imaged in a hierarchical way. We demonstrate that EM...

  5. Transient Analysis Generator /TAG/ simulates behavior of large class of electrical networks

    NASA Technical Reports Server (NTRS)

    Thomas, W. J.

    1967-01-01

    Transient Analysis Generator program simulates both transient and dc steady-state behavior of a large class of electrical networks. It generates a special analysis program for each circuit described in an easily understood and manipulated programming language. A generator or preprocessor and a simulation system make up the TAG system.

  6. Large-eddy simulations with wall models

    NASA Technical Reports Server (NTRS)

    Cabot, W.

    1995-01-01

    The near-wall viscous and buffer regions of wall-bounded flows generally require a large expenditure of computational resources to be resolved adequately, even in large-eddy simulation (LES). Often as much as 50% of the grid points in a computational domain are devoted to these regions. The dense grids that this implies also generally require small time steps for numerical stability and/or accuracy. It is commonly assumed that the inner wall layers are near equilibrium, so that the standard logarithmic law can be applied as the boundary condition for the wall stress well away from the wall, for example, in the logarithmic region, obviating the need to expend large amounts of grid points and computational time in this region. This approach is commonly employed in LES of planetary boundary layers, and it has also been used for some simple engineering flows. In order to calculate accurately a wall-bounded flow with coarse wall resolution, one requires the wall stress as a boundary condition. The goal of this work is to determine the extent to which equilibrium and boundary layer assumptions are valid in the near-wall regions, to develop models for the inner layer based on such assumptions, and to test these modeling ideas in some relatively simple flows with different pressure gradients, such as channel flow and flow over a backward-facing step. Ultimately, models that perform adequately in these situations will be applied to more complex flow configurations, such as an airfoil.
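
    The equilibrium log-law boundary condition described above amounts to solving an implicit equation for the friction velocity at an off-wall matching point, from which the wall stress follows. The sketch below is a generic equilibrium wall model under standard constants (κ = 0.41, B = 5.2), not Cabot's specific formulation:

```python
import math

def friction_velocity(u, y, nu, kappa=0.41, b=5.2, iters=50):
    """Solve the log law  U = u_tau * (ln(y*u_tau/nu)/kappa + B)  for u_tau
    by fixed-point iteration, given the LES velocity U at an off-wall
    matching point y (assumed to lie in the logarithmic region)."""
    u_tau = 0.05 * u  # rough initial guess
    for _ in range(iters):
        u_tau = u / (math.log(y * u_tau / nu) / kappa + b)
    return u_tau

# Example: first off-wall LES point at y = 0.01 m with U = 10 m/s in air.
u_tau = friction_velocity(10.0, 0.01, 1.5e-5)
tau_wall = 1.2 * u_tau ** 2  # wall-stress BC, rho = 1.2 kg/m^3
print(u_tau, tau_wall)
```

The resulting tau_wall is what the coarse-grid LES imposes at the wall in place of resolving the viscous and buffer layers, which is the savings the passage describes.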

  7. Scale-Similar Models for Large-Eddy Simulations

    NASA Technical Reports Server (NTRS)

    Sarghini, F.

    1999-01-01

    Scale-similar models employ multiple filtering operations to identify the smallest resolved scales, which have been shown to be the most active in the interaction with the unresolved subgrid scales. They do not assume that the principal axes of the strain-rate tensor are aligned with those of the subgrid-scale stress (SGS) tensor, and allow the explicit calculation of the SGS energy. They can provide backscatter in a numerically stable and physically realistic manner, and predict SGS stresses in regions that are well correlated with the locations where large Reynolds stress occurs. In this paper, eddy viscosity and mixed models, which include an eddy-viscosity part as well as a scale-similar contribution, are applied to the simulation of two flows, a high Reynolds number plane channel flow, and a three-dimensional, nonequilibrium flow. The results show that simulations without models or with the Smagorinsky model are unable to predict nonequilibrium effects. Dynamic models provide an improvement of the results: the adjustment of the coefficient results in more accurate prediction of the perturbation from equilibrium. The Lagrangian-ensemble approach [Meneveau et al., J. Fluid Mech. 319, 353 (1996)] is found to be very beneficial. Models that included a scale-similar term and a dissipative one, as well as the Lagrangian ensemble averaging, gave results in the best agreement with the direct simulation and experimental data.

  8. SU-E-T-480: Radiobiological Dose Comparison of Single Fraction SRS, Multi-Fraction SRT and Multi-Stage SRS of Large Target Volumes Using the Linear-Quadratic Formula

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, C; Hrycushko, B; Jiang, S

    2014-06-01

    Purpose: To compare the radiobiological effect on large tumors and surrounding normal tissues from single fraction SRS, multi-fractionated SRT, and multi-staged SRS treatment. Methods: An anthropomorphic head phantom with a centrally located large volume target (18.2 cm³) was scanned using a 16 slice large bore CT simulator. Scans were imported to the Multiplan treatment planning system, where a total prescription dose of 20 Gy was used for a single, three-staged, and three-fractionated treatment. CyberKnife treatment plans were inversely optimized for the target volume to achieve at least 95% coverage of the prescription dose. For the multistage plan, the target was segmented into three subtargets having similar volume and shape. Staged plans for individual subtargets were generated based on a planning technique where the beam MUs of the original plan on the total target volume are changed by weighting the MUs based on projected beam lengths within each subtarget. Dose matrices for each plan were exported in DICOM format and used to calculate equivalent dose distributions in 2 Gy fractions using an alpha/beta ratio of 10 for the target and 3 for normal tissue. Results: Single fraction SRS, multi-stage, and multi-fractionated SRT plans had an average 2 Gy dose equivalent to the target of 62.89 Gy, 37.91 Gy and 33.68 Gy, respectively. The normal tissue within the 12 Gy physical dose region had an average 2 Gy dose equivalent of 29.55 Gy, 16.08 Gy and 13.93 Gy, respectively. Conclusion: The single fraction SRS plan had the largest predicted biological effect for the target and the surrounding normal tissue. The multi-stage treatment provided a more potent biological effect on the target than the multi-fractionated SRT treatment, while producing a smaller biological effect on normal tissue than the single-fraction SRS treatment.
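
    The equivalent-dose conversion behind these comparisons is the standard linear-quadratic formula EQD2 = D·(d + α/β)/(2 + α/β). The sketch below applies it to the prescription doses only, so the numbers differ from the reported averages, which come from the full dose matrices:

```python
def eqd2(total_dose, dose_per_fraction, alpha_beta):
    """Equivalent dose in 2 Gy fractions from the linear-quadratic model:
    EQD2 = D * (d + alpha/beta) / (2 + alpha/beta), doses in Gy."""
    return total_dose * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

# 20 Gy prescription: single 20 Gy fraction vs. three fractions of ~6.67 Gy,
# with alpha/beta = 10 Gy for tumor and 3 Gy for late-responding normal
# tissue, as in the abstract.
single_tumor = eqd2(20.0, 20.0, 10.0)
fractionated_tumor = eqd2(20.0, 20.0 / 3.0, 10.0)
single_normal = eqd2(20.0, 20.0, 3.0)
fractionated_normal = eqd2(20.0, 20.0 / 3.0, 3.0)
print(single_tumor, fractionated_tumor, single_normal, fractionated_normal)
```

Note how the low α/β of normal tissue amplifies the biological penalty of a single large fraction, which is the effect driving the conclusion above.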

  9. ANALYSIS OF LOW-LEVEL PESTICIDES FROM HIGH-ELEVATION LAKE WATERS BY LARGE VOLUME INJECTION GCMS

    EPA Science Inventory

    This paper describes the method development for the determination of ultra-low level pesticides from high-elevation lake waters by large-volume injection programmable temperature vaporizer (LVI-PTV) GC/MS. This analytical method is developed as a subtask of a larger study, backgr...

  10. Coupled large-eddy simulation and morphodynamics of a large-scale river under extreme flood conditions

    NASA Astrophysics Data System (ADS)

    Khosronejad, Ali; Sotiropoulos, Fotis; Stony Brook University Team

    2016-11-01

    We present coupled flow and morphodynamic simulations of extreme flooding in a 3 km long and 300 m wide reach of the Mississippi River in Minnesota, which includes three islands and hydraulic structures. We employ the large-eddy simulation (LES) and bed-morphodynamic modules of the VFS-Geophysics model to investigate the flow and bed evolution of the river during a 500 year flood. The coupling of the two modules is carried out via a fluid-structure interaction approach, with a nested domain used to enhance the resolution of bridge scour predictions. The geometrical data of the river, islands and structures are obtained from LiDAR, sub-aqueous sonar and in-situ surveying to construct a digital map of the river bathymetry. Our simulation results for the bed evolution of the river reveal complex sediment dynamics near the hydraulic structures. The numerically captured scour depth near some of the structures reaches a maximum of about 10 m. The data-driven simulation strategy we present in this work exemplifies a practical simulation-based engineering approach to investigate the resilience of infrastructures to extreme flood events in intricate field-scale riverine systems. This work was funded by a grant from the Minnesota Dept. of Transportation.

  11. Large-scale Thermo-Hydro-Mechanical Simulations in Complex Geological Environments

    NASA Astrophysics Data System (ADS)

    Therrien, R.; Lemieux, J.

    2011-12-01

    The study of a potential deep repository for radioactive waste disposal in the Canadian context requires simulation capabilities for thermo-hydro-mechanical processes. It is expected that the host rock for the deep repository will be subjected to a variety of stresses during its lifetime, such as in situ stresses in the rock, stresses caused by excavation of the repository, and thermo-mechanical stresses. Another stress of concern for future Canadian climates will result from various episodes of glaciation. In that case, it can be expected that over 3 km of ice may be present over the land mass, which will create a glacial load that will be transmitted to the underlying geological materials and therefore impact their mechanical and hydraulic responses. Glacial loading will affect pore fluid pressures in the subsurface, which will in turn affect groundwater velocities and the potential migration of radionuclides from the repository. In addition, permafrost formation and thawing resulting from glacial advance and retreat will modify the bulk hydraulic properties of the geological materials and will have a potentially large impact on groundwater flow patterns, especially groundwater recharge. In the context of a deep geological repository for spent nuclear fuel, the performance of the repository to contain the spent nuclear fuel must be evaluated for periods that span several hundred thousand years. The time-frame for thermo-hydro-mechanical simulations is therefore extremely long, and efficient numerical techniques must be developed. Other challenges are the representation of geological formations that have potentially complex geometries and physical properties and may contain fractures. The spatial extent of the simulation domain is also very large and can potentially reach the size of a sedimentary basin. Mass transport must also be considered because the fluid salinity in a sedimentary basin can be highly variable and the effect of fluid density on groundwater flow must be accounted for.

  12. Using Mesoscale Weather Model Output as Boundary Conditions for Atmospheric Large-Eddy Simulations and Wind-Plant Aerodynamic Simulations (Presentation)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Churchfield, M. J.; Michalakes, J.; Vanderwende, B.

    Wind plant aerodynamics are directly affected by the microscale weather, which is directly influenced by the mesoscale weather. Microscale weather refers to processes that occur within the atmospheric boundary layer, with the largest scales being a few hundred meters to a few kilometers depending on the atmospheric stability of the boundary layer. Mesoscale weather refers to large weather patterns, such as weather fronts, with the largest scales being hundreds of kilometers wide. Sometimes microscale simulations that capture mesoscale-driven variations (changes in wind speed and direction over time or across the spatial extent of a wind plant) are important in wind plant analysis. In this paper, we present our preliminary work in coupling a mesoscale weather model with a microscale atmospheric large-eddy simulation model. The coupling is one-way, beginning with the weather model and ending with a computational fluid dynamics solver, using the weather model in coarse large-eddy simulation mode as an intermediary. We simulate one hour of daytime moderately convective microscale development driven by the mesoscale data, which are applied as initial and boundary conditions to the microscale domain, at a site in Iowa. We analyze the time and distance necessary for the smallest resolvable microscales to develop.

  13. Gas-Grain Simulation Facility (GGSF). Volume 2: Conceptual design definition

    NASA Technical Reports Server (NTRS)

    Zamel, James M.

    1993-01-01

    This document is Volume 2 of the Final Report for the Phase A Study of the Gas-Grain Simulation Facility (GGSF), and presents the GGSF Conceptual Design. It is a follow-on to the Volume 1 Facility Definition Study, NASA report CR 177606. The development of a conceptual design for a Space Station Freedom (SSF) facility that will be used for investigating particle interactions in varying environments, including various gas mixtures, pressures, and temperatures, is delineated. It is not possible to conduct these experiments on Earth due to the long reaction times associated with this type of phenomenon; hence the need for extended periods of microgravity. The particle types will vary in composition (solids and liquids), sizes (from submicrons to centimeters), and concentrations (from single particles to 10^10 per cubic centimeter). The results of the experiments pursued in the GGSF will benefit a variety of scientific inquiries. These investigations span such diverse topics as the formation of planets and planetary rings, cloud and haze processes in planetary atmospheres, the composition and structure of astrophysical objects, and the viability of airborne microbes (e.g., in a manned spacecraft).

  14. Recent progress in simulating galaxy formation from the largest to the smallest scales

    NASA Astrophysics Data System (ADS)

    Faucher-Giguère, Claude-André

    2018-05-01

    Galaxy formation simulations are an essential part of the modern toolkit of astrophysicists and cosmologists alike. Astrophysicists use the simulations to study the emergence of galaxy populations from the Big Bang, as well as the formation of stars and supermassive black holes. For cosmologists, galaxy formation simulations are needed to understand how baryonic processes affect measurements of dark matter and dark energy. Owing to the extreme dynamic range of galaxy formation, advances are driven by novel approaches using simulations with different tradeoffs between volume and resolution. Large-volume but low-resolution simulations provide the best statistics, while higher-resolution simulations of smaller cosmic volumes can be evolved with self-consistent physics and reveal important emergent phenomena. I summarize recent progress in galaxy formation simulations, including major developments in the past five years, and highlight some key areas likely to drive further advances over the next decade.

  15. Large Eddy Simulation of Gravitational Effects on Transitional and Turbulent Gas-Jet Diffusion Flames

    NASA Technical Reports Server (NTRS)

    Givi, Peyman; Jaberi, Farhad A.

    2001-01-01

    The basic objective of this work is to assess the influence of gravity on the compositional and spatial structures of transitional and turbulent diffusion flames via large eddy simulation (LES) and direct numerical simulation (DNS). The DNS is conducted to appraise the various closures employed in the LES and to study the effect of buoyancy on small-scale flow features. The LES is based on our "filtered mass density function" (FMDF) model. The novelty of the methodology is that it allows for reliable simulations that include realistic physics. It also allows for detailed analysis of the unsteady large-scale flow evolution and compositional flame structure, which is not usually possible via Reynolds-averaged simulations.

  16. Simulating the impact of the large-scale circulation on the 2-m temperature and precipitation climatology

    NASA Astrophysics Data System (ADS)

    Bowden, Jared H.; Nolte, Christopher G.; Otte, Tanya L.

    2013-04-01

    The impact of the simulated large-scale atmospheric circulation on the regional climate is examined using the Weather Research and Forecasting (WRF) model as a regional climate model. The purpose is to understand the potential need for interior grid nudging for dynamical downscaling of global climate model (GCM) output for air quality applications under a changing climate. In this study we downscale the NCEP-Department of Energy Atmospheric Model Intercomparison Project (AMIP-II) Reanalysis using three continuous 20-year WRF simulations: one simulation without interior grid nudging and two using different interior grid nudging methods. The biases in 2-m temperature and precipitation for the simulation without interior grid nudging are unreasonably large with respect to the North American Regional Reanalysis (NARR) over the eastern half of the contiguous United States (CONUS) during the summer when air quality concerns are most relevant. This study examines how these differences arise from errors in predicting the large-scale atmospheric circulation. It is demonstrated that the Bermuda high, which strongly influences the regional climate for much of the eastern half of the CONUS during the summer, is poorly simulated without interior grid nudging. In particular, two summers when the Bermuda high was west (1993) and east (2003) of its climatological position are chosen to illustrate problems in the large-scale atmospheric circulation anomalies. For both summers, WRF without interior grid nudging fails to simulate the placement of the upper-level anticyclonic (1993) and cyclonic (2003) circulation anomalies. The displacement of the large-scale circulation impacts the lower atmosphere moisture transport and precipitable water, affecting the convective environment and precipitation. 
Using interior grid nudging improves the large-scale circulation aloft and the moisture transport and precipitable water anomalies, thereby improving the simulated 2-m temperature and precipitation.

  17. A large eddy simulation scheme for turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Gao, Feng

    1993-01-01

    The recent development of the dynamic subgrid-scale (SGS) model has provided a consistent method for generating localized turbulent mixing models and has opened up great possibilities for applying the large eddy simulation (LES) technique to real-world problems. Given that direct numerical simulation (DNS) cannot solve engineering flow problems in the foreseeable future (Reynolds 1989), LES is certainly an attractive alternative. It seems only natural to bring this new development in SGS modeling to bear on reacting flows. The major stumbling block to introducing LES to reacting flow problems has been the proper modeling of the reaction source terms. Various models have been proposed, but none has a wide range of applicability. For example, some combustion models have been based on the flamelet assumption, which is only valid for relatively fast reactions. Other models have neglected the effects of chemical reactions on the turbulent mixing time scale, which is not valid for fast and non-isothermal reactions. The probability density function (PDF) method can be usefully employed to model the reaction source terms. In order to fit into the framework of LES, a new PDF, the large eddy PDF (LEPDF), is introduced. This PDF provides an accurate representation of the filtered chemical source terms and can be readily calculated in the simulations. The details of this scheme are described.

  18. Parallel computing method for simulating hydrological processes of large rivers under climate change

    NASA Astrophysics Data System (ADS)

    Wang, H.; Chen, Y.

    2016-12-01

    Climate change is one of the most widely recognized global environmental problems. It has altered the temporal and spatial distribution of watershed hydrological processes, especially in the world's large rivers. Watershed hydrological process simulation based on physically based distributed hydrological models can produce better results than lumped models. However, such simulation involves a large amount of computation, especially for large rivers, and therefore requires substantial computing resources that may not be steadily available to researchers, or only at high expense; this has seriously restricted research and application. The existing parallel methods mostly parallelize in the space and time dimensions: they process the natural features of a distributed hydrological model in order, grid by grid (unit or sub-basin), from upstream to downstream. This article proposes a high-performance computing method for hydrological process simulation with a high speedup ratio and high parallel efficiency. It combines the spatial and temporal runoff characteristics of distributed hydrological models with distributed data storage, an in-memory database, distributed computing, and parallel computing based on computing power units. The method is highly adaptable and extensible: it makes full use of available computing and storage resources even when those resources are limited, and its computing efficiency improves linearly as computing resources increase. It can satisfy the parallel computing requirements of hydrological process simulation in small, medium, and large rivers.
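
    The upstream-to-downstream scheduling constraint described above can be sketched as follows. The basin topology, the `route` function, and the wave-by-wave scheduler are illustrative assumptions, not the paper's actual implementation.

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative sub-basin topology: C needs A and B routed first, D needs C.
upstream = {"A": [], "B": [], "C": ["A", "B"], "D": ["C"]}

def route(basin):
    # Stand-in for the per-basin hydrological computation.
    return f"routed {basin}"

def run_in_waves(upstream):
    """Run every basin whose upstream neighbours are finished, in parallel,
    one wave at a time, so upstream-to-downstream order is preserved."""
    done, waves = set(), []
    while len(done) < len(upstream):
        wave = [b for b, ups in upstream.items()
                if b not in done and all(u in done for u in ups)]
        with ThreadPoolExecutor() as pool:
            list(pool.map(route, wave))  # basins in a wave run concurrently
        done.update(wave)
        waves.append(sorted(wave))
    return waves

waves = run_in_waves(upstream)  # [['A', 'B'], ['C'], ['D']]
```

    Independent headwater basins run concurrently, while downstream basins still wait for their upstream inputs.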

  19. Lattice Boltzmann method for simulating the viscous flow in large distensible blood vessels

    NASA Astrophysics Data System (ADS)

    Fang, Haiping; Wang, Zuowei; Lin, Zhifang; Liu, Muren

    2002-05-01

    A lattice Boltzmann method for simulating the viscous flow in large distensible blood vessels is presented by introducing a boundary condition for elastic and moving boundaries. The mass conservation for the boundary condition is tested in detail. The viscous flow in elastic vessels is simulated with a pressure-radius relationship similar to that of the pulmonary blood vessels. The numerical results for steady flow agree with the analytical prediction to very high accuracy, and the simulation results for pulsatile flow are comparable with those of the aortic flows observed experimentally. The model is expected to find many applications for studying blood flows in large distensible arteries, especially in those suffering from atherosclerosis, stenosis, aneurysm, etc.
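
    A minimal pressure-radius relation of the kind referred to above can be written as a linear tube law. The functional form and the stiffness constant here are illustrative assumptions, not the pulmonary relationship used in the paper.

```python
def radius(p, p0=0.0, r0=1.0, stiffness_k=10.0):
    """Linear tube law (assumed form): R = R0 * (1 + (p - p0) / K).
    Higher transmural pressure dilates the distensible vessel."""
    return r0 * (1.0 + (p - p0) / stiffness_k)

r = radius(p=2.0)  # vessel dilates from r0 = 1.0 to 1.2
```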

  20. Large eddy simulation of turbulent cavitating flows

    NASA Astrophysics Data System (ADS)

    Gnanaskandan, A.; Mahesh, K.

    2015-12-01

    Large Eddy Simulation is employed to study two turbulent cavitating flows: over a cylinder and a wedge. A homogeneous mixture model is used to treat the mixture of water and water vapor as a compressible fluid. The governing equations are solved using a novel predictor-corrector method. The subgrid terms are modeled using the Dynamic Smagorinsky model. Cavitating flow over a cylinder at Reynolds number (Re) = 3900 and cavitation number (σ) = 1.0 is simulated and the wake characteristics are compared to the single phase results at the same Reynolds number. It is observed that cavitation suppresses turbulence in the near wake and delays three dimensional breakdown of the vortices. Next, cavitating flow over a wedge at Re = 200,000 and σ = 2.0 is presented. The mean void fraction profiles obtained are compared to experiment and good agreement is obtained. Cavity auto-oscillation is observed, where the sheet cavity breaks up into a cloud cavity periodically. The results suggest LES as an attractive approach for predicting turbulent cavitating flows.
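
    At its simplest, the homogeneous mixture treatment mentioned above reduces to void-fraction-weighted mixture properties. The densities below are illustrative values only, and the full model also evolves a transport equation for the vapor fraction.

```python
RHO_LIQUID = 998.0  # kg/m^3, liquid water (illustrative)
RHO_VAPOUR = 0.02   # kg/m^3, water vapor (illustrative)

def mixture_density(void_fraction):
    """Single-fluid density of the water/vapor mixture."""
    a = void_fraction
    return a * RHO_VAPOUR + (1.0 - a) * RHO_LIQUID

rho = mixture_density(0.5)  # ~499 kg/m^3 at 50% vapor by volume
```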

  1. Fan-beam scanning laser optical computed tomography for large volume dosimetry

    NASA Astrophysics Data System (ADS)

    Dekker, K. H.; Battista, J. J.; Jordan, K. J.

    2017-05-01

    A prototype scanning-laser fan-beam optical CT scanner capable of high-resolution, large-volume dosimetry with reasonable scan times is reported. An acylindrical, asymmetric aquarium design is presented which serves to 1) generate a parallel-beam scan geometry, 2) focus light towards a small-acceptance-angle detector, and 3) avoid interference fringe-related artifacts. Preliminary experiments with uniform solution phantoms (11 and 15 cm diameter) and finger phantoms (13.5 mm diameter FEP tubing) demonstrate that the design allows accurate optical CT imaging, with optical CT measurements agreeing within 3% of independent Beer-Lambert law calculations.
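
    The independent Beer-Lambert law calculation that the optical CT measurements are checked against amounts to the following. The transmission value used here is a hypothetical reading, not data from the paper.

```python
import math

def attenuation_from_transmission(i, i0, path_length_cm):
    """Beer-Lambert law: mu = -ln(I / I0) / L, in cm^-1."""
    return -math.log(i / i0) / path_length_cm

# Hypothetical transmission reading through an 11 cm solution phantom.
mu = attenuation_from_transmission(i=0.33, i0=1.0, path_length_cm=11.0)
```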

  2. Dark Radiation predictions from general Large Volume Scenarios

    NASA Astrophysics Data System (ADS)

    Hebecker, Arthur; Mangat, Patrick; Rompineve, Fabrizio; Witkowski, Lukas T.

    2014-09-01

    Recent observations constrain the amount of Dark Radiation (ΔN_eff) and may even hint towards a non-zero value of ΔN_eff. It is by now well-known that this puts stringent constraints on the sequestered Large Volume Scenario (LVS), i.e. on LVS realisations with the Standard Model at a singularity. We go beyond this setting by considering LVS models where SM fields are realised on 7-branes in the geometric regime. As we argue, this naturally goes together with high-scale supersymmetry. The abundance of Dark Radiation is determined by the competition between the decay of the lightest modulus to axions, to the SM Higgs and to gauge fields, and leads to strict constraints on these models. Nevertheless, these constructions can in principle meet current DR bounds due to decays into gauge bosons alone. Further, a rather robust prediction for a substantial amount of Dark Radiation can be made. This applies both to cases where the SM 4-cycles are stabilised by D-terms and are small `by accident', i.e. by tuning, as well as to fibred models with the small cycles stabilised by loops. In these constructions the DR axion and the QCD axion are the same field, and we require a tuning of the initial misalignment to avoid Dark Matter overproduction. Furthermore, we analyse a closely related setting where the SM lives at a singularity but couples to the volume modulus through flavour branes. We conclude that some of the most natural LVS settings, with natural values of the model parameters, lead to Dark Radiation predictions just below the present observational limits. Barring a discovery, rather modest improvements of present Dark Radiation bounds can rule out many of the simplest and most generic variants of the LVS.

  3. Large Eddy Simulation in the Computation of Jet Noise

    NASA Technical Reports Server (NTRS)

    Mankbadi, R. R.; Goldstein, M. E.; Povinelli, L. A.; Hayder, M. E.; Turkel, E.

    1999-01-01

    Noise can in principle be predicted by solving the full (time-dependent) compressible Navier-Stokes equations (FCNSE) with the computational domain extended to the far field. The fluctuating near field of the jet produces propagating pressure waves that generate far-field sound, so the fluctuating flow field as a function of time is needed in order to calculate sound from first principles. Extending the computational domain to the far field, however, is not feasible. At the high Reynolds numbers of technological interest, turbulence has a large range of scales, and direct numerical simulation (DNS) cannot capture the small scales. Since the large scales are more efficient than the small scales in radiating sound, the emphasis is on calculating the sound radiated by the large scales.

  4. Optimal whole-body PET scanner configurations for different volumes of LSO scintillator: a simulation study.

    PubMed

    Poon, Jonathan K; Dahlbom, Magnus L; Moses, William W; Balakrishnan, Karthik; Wang, Wenli; Cherry, Simon R; Badawi, Ramsey D

    2012-07-07

    The axial field of view (AFOV) of the current generation of clinical whole-body PET scanners ranges from 15 to 22 cm, which limits sensitivity and renders applications such as whole-body dynamic imaging or imaging of very low activities in whole-body cellular tracking studies, almost impossible. Generally, extending the AFOV significantly increases the sensitivity and count-rate performance. However, extending the AFOV while maintaining detector thickness has significant cost implications. In addition, random coincidences, detector dead time, and object attenuation may reduce scanner performance as the AFOV increases. In this paper, we use Monte Carlo simulations to find the optimal scanner geometry (i.e. AFOV, detector thickness and acceptance angle) based on count-rate performance for a range of scintillator volumes ranging from 10 to 93 l with detector thickness varying from 5 to 20 mm. We compare the results to the performance of a scanner based on the current Siemens Biograph mCT geometry and electronics. Our simulation models were developed based on individual components of the Siemens Biograph mCT and were validated against experimental data using the NEMA NU-2 2007 count-rate protocol. In the study, noise-equivalent count rate (NECR) was computed as a function of maximum ring difference (i.e. acceptance angle) and activity concentration using a 27 cm diameter, 200 cm long uniformly filled cylindrical phantom for each scanner configuration. To reduce the effect of random coincidences, we implemented a variable coincidence time window based on the length of the lines of response, which increased NECR performance up to 10% compared to using a static coincidence time window for scanners with large maximum ring difference values. For a given scintillator volume, the optimal configuration results in modest count-rate performance gains of up to 16% compared to the shortest AFOV scanner with the thickest detectors. However, the longest AFOV of approximately 2 m with
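
    The variable coincidence window can be illustrated as follows: the window tracks the gamma transit time along each line of response (LOR) plus a fixed margin. The exact rule and the margin value are assumptions; the abstract specifies only that the window depends on LOR length.

```python
C_MM_PER_NS = 299.792458  # speed of light in mm/ns

def coincidence_window_ns(lor_length_mm, margin_ns=3.0):
    """Window = transit time along the LOR + timing-resolution margin."""
    return lor_length_mm / C_MM_PER_NS + margin_ns

# A short transaxial LOR tolerates a narrower window than a long oblique
# one, so fewer random coincidences are accepted on short LORs.
short = coincidence_window_ns(300.0)
long_oblique = coincidence_window_ns(2000.0)
```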

  5. Optimal whole-body PET scanner configurations for different volumes of LSO scintillator: a simulation study

    PubMed Central

    Poon, Jonathan K; Dahlbom, Magnus L; Moses, William W; Balakrishnan, Karthik; Wang, Wenli; Cherry, Simon R; Badawi, Ramsey D

    2013-01-01

    The axial field of view (AFOV) of the current generation of clinical whole-body PET scanners ranges from 15 to 22 cm, which limits sensitivity and renders applications such as whole-body dynamic imaging, or imaging of very low activities in whole-body cellular tracking studies, almost impossible. Generally, extending the AFOV significantly increases the sensitivity and count-rate performance. However, extending the AFOV while maintaining detector thickness has significant cost implications. In addition, random coincidences, detector dead time, and object attenuation may reduce scanner performance as the AFOV increases. In this paper, we use Monte Carlo simulations to find the optimal scanner geometry (i.e. AFOV, detector thickness and acceptance angle) based on count-rate performance for a range of scintillator volumes ranging from 10 to 90 l with detector thickness varying from 5 to 20 mm. We compare the results to the performance of a scanner based on the current Siemens Biograph mCT geometry and electronics. Our simulation models were developed based on individual components of the Siemens Biograph mCT and were validated against experimental data using the NEMA NU-2 2007 count-rate protocol. In the study, noise-equivalent count rate (NECR) was computed as a function of maximum ring difference (i.e. acceptance angle) and activity concentration using a 27 cm diameter, 200 cm long uniformly filled cylindrical phantom for each scanner configuration. To reduce the effect of random coincidences, we implemented a variable coincidence time window based on the length of the lines of response, which increased NECR performance up to 10% compared to using a static coincidence time window for scanners with large maximum ring difference values. For a given scintillator volume, the optimal configuration results in modest count-rate performance gains of up to 16% compared to the shortest AFOV scanner with the thickest detectors. However, the longest AFOV of approximately 2 m with 20

  6. Optimal whole-body PET scanner configurations for different volumes of LSO scintillator: a simulation study

    NASA Astrophysics Data System (ADS)

    Poon, Jonathan K.; Dahlbom, Magnus L.; Moses, William W.; Balakrishnan, Karthik; Wang, Wenli; Cherry, Simon R.; Badawi, Ramsey D.

    2012-07-01

    The axial field of view (AFOV) of the current generation of clinical whole-body PET scanners ranges from 15 to 22 cm, which limits sensitivity and renders applications such as whole-body dynamic imaging or imaging of very low activities in whole-body cellular tracking studies, almost impossible. Generally, extending the AFOV significantly increases the sensitivity and count-rate performance. However, extending the AFOV while maintaining detector thickness has significant cost implications. In addition, random coincidences, detector dead time, and object attenuation may reduce scanner performance as the AFOV increases. In this paper, we use Monte Carlo simulations to find the optimal scanner geometry (i.e. AFOV, detector thickness and acceptance angle) based on count-rate performance for a range of scintillator volumes ranging from 10 to 93 l with detector thickness varying from 5 to 20 mm. We compare the results to the performance of a scanner based on the current Siemens Biograph mCT geometry and electronics. Our simulation models were developed based on individual components of the Siemens Biograph mCT and were validated against experimental data using the NEMA NU-2 2007 count-rate protocol. In the study, noise-equivalent count rate (NECR) was computed as a function of maximum ring difference (i.e. acceptance angle) and activity concentration using a 27 cm diameter, 200 cm long uniformly filled cylindrical phantom for each scanner configuration. To reduce the effect of random coincidences, we implemented a variable coincidence time window based on the length of the lines of response, which increased NECR performance up to 10% compared to using a static coincidence time window for scanners with large maximum ring difference values. For a given scintillator volume, the optimal configuration results in modest count-rate performance gains of up to 16% compared to the shortest AFOV scanner with the thickest detectors.
However, the longest AFOV of approximately 2 m with 20 mm

  7. Large-eddy simulation of dust-uplift by a haboob density current

    NASA Astrophysics Data System (ADS)

    Huang, Qian; Marsham, John H.; Tian, Wenshou; Parker, Douglas J.; Garcia-Carreras, Luis

    2018-04-01

    Cold pool outflows have been shown from both observations and convection-permitting models to be a dominant source of dust emissions ("haboobs") in the summertime Sahel and Sahara, and to cause dust uplift over deserts across the world. In this paper, Met Office Large Eddy Model (LEM) simulations, which resolve the turbulence within the cold pools much better than previous studies of haboobs with convection-permitting models, are used to investigate the winds that uplift dust in cold pools and the resultant dust transport. In order to simulate the cold pool outflow, an idealized cooling is added in the model during the first 2 h of the 5.7 h run time. Given the short duration of the runs, dust is treated as a passive tracer. Dust uplift largely occurs in the "head" of the density current, consistent with the few existing observations. In the modeled density current, dust is largely restricted to the lowest, coldest and well-mixed layers of the cold pool outflow (below around 400 m), except above the "head" of the cold pool, where some dust reaches 2.5 km. This rapid transport to above 2 km will contribute to the long atmospheric lifetimes of large dust particles from haboobs. Decreasing the model horizontal grid-spacing from 1.0 km to 100 m resolves more turbulence, locally increasing winds, increasing mixing and reducing the propagation speed of the density current. Total accumulated dust uplift is approximately twice as large in 1.0 km runs compared with 100 m runs, suggesting that the representation of turbulence and mixing is significant when studying haboobs in convection-permitting runs. Simulations with surface sensible heat fluxes representative of those from a desert region during daytime show that increasing surface fluxes slows the density current due to increased mixing, but increases dust uplift rates, due to increased downward transport of momentum to the surface.

  8. Developing and Testing Simulated Occupational Experiences for Distributive Education Students in Rural Communities: Volume III: Training Plans: Final Report.

    ERIC Educational Resources Information Center

    Virginia Polytechnic Inst. and State Univ., Blacksburg.

    Volume 3 of a three volume final report presents prototype job training plans developed as part of a research project which pilot tested a distributive education program for rural schools utilizing a retail store simulation plan. The plans are for 15 entry-level and 15 career-level jobs in seven categories of distributive business (department…

  9. Sand waves in environmental flows: Insights gained by coupling large-eddy simulation with morphodynamics

    NASA Astrophysics Data System (ADS)

    Sotiropoulos, Fotis; Khosronejad, Ali

    2016-02-01

    Sand waves arise in subaqueous and Aeolian environments as the result of the complex interaction between turbulent flows and mobile sand beds. They occur across a wide range of spatial scales, evolve at temporal scales much slower than the integral scale of the transporting turbulent flow, dominate river morphodynamics, undermine streambank stability and infrastructure during flooding, and sculpt terrestrial and extraterrestrial landscapes. In this paper, we present the vision for our work over the last ten years, which has sought to develop computational tools capable of simulating the coupled interactions of sand waves with turbulence across the broad range of relevant scales: from small-scale ripples in laboratory flumes to mega-dunes in large rivers. We review the computational advances that have enabled us to simulate the genesis and long-term evolution of arbitrarily large and complex sand dunes in turbulent flows using large-eddy simulation and summarize numerous novel physical insights derived from our simulations. Our findings explain the role of turbulent sweeps in the near-bed region as the primary mechanism for destabilizing the sand bed, show that the seeds of the emergent structure in dune fields lie in the heterogeneity of the turbulence and bed shear stress fluctuations over the initially flat bed, and elucidate how large dunes at equilibrium give rise to energetic coherent structures and modify the spectra of turbulence. We also discuss future challenges and our vision for advancing a data-driven, simulation-based engineering science approach for site-specific simulations of river flooding.

  10. Correlating Free-Volume Hole Distribution to the Glass Transition Temperature of Epoxy Polymers.

    PubMed

    Aramoon, Amin; Breitzman, Timothy D; Woodward, Christopher; El-Awady, Jaafar A

    2017-09-07

    A new algorithm is developed to quantify the free-volume hole distribution and its evolution in coarse-grained molecular dynamics simulations of polymeric networks. This is achieved by analyzing the geometry of the network rather than a voxelized image of the structure, to accurately and efficiently find and quantify free-volume hole distributions within large-scale simulations of polymer networks. The free-volume holes are quantified by fitting the largest ellipsoids and spheres into the free volumes between polymer chains. The free-volume hole distributions calculated from this algorithm are shown to be in excellent agreement with those measured from positron annihilation lifetime spectroscopy (PALS) experiments at different temperatures and pressures. Based on the results predicted using this algorithm, an evolution model is proposed for the thermal behavior of an individual free-volume hole. This model is calibrated such that the average radius of free-volume holes mimics the one predicted from the simulations. The model is then employed to predict the glass-transition temperature of epoxy polymers with different degrees of cross-linking and lengths of prepolymers. Comparison between the predicted glass-transition temperatures and those measured from simulations or experiments implies that this model is capable of successfully predicting the glass-transition temperature of the material using only a PDF of the initial free-volume hole radii of each microstructure. This provides an effective approach for the optimized design of polymeric systems on the basis of the glass-transition temperature, degree of cross-linking, and average length of prepolymers.
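
    The sphere-fitting step can be sketched geometrically: the largest sphere centred at a candidate point has a radius equal to the distance to the nearest polymer bead minus the bead radius. The coordinates below are illustrative; the authors' algorithm additionally fits ellipsoids and searches the network geometry systematically.

```python
import numpy as np

def largest_free_sphere(candidate, beads, bead_radius):
    """Radius of the largest sphere centred at `candidate` touching no bead."""
    d = np.linalg.norm(beads - candidate, axis=1)
    return max(d.min() - bead_radius, 0.0)

# Three coarse-grained beads and a candidate point between them.
beads = np.array([[0.0, 0.0, 0.0], [4.0, 0.0, 0.0], [0.0, 4.0, 0.0]])
r = largest_free_sphere(np.array([2.0, 2.0, 0.0]), beads, bead_radius=0.5)
```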

  11. Tool Support for Parametric Analysis of Large Software Simulation Systems

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Gundy-Burlet, Karen; Pasareanu, Corina; Menzies, Tim; Barrett, Tony

    2008-01-01

    The analysis of large and complex parameterized software systems, e.g., systems simulation in aerospace, is very complicated and time-consuming due to the large parameter space, and the complex, highly coupled nonlinear nature of the different system components. Thus, such systems are generally validated only in regions local to anticipated operating points rather than through characterization of the entire feasible operational envelope of the system. We have addressed the factors deterring such an analysis with a tool to support envelope assessment: we utilize a combination of advanced Monte Carlo generation with n-factor combinatorial parameter variations to limit the number of cases, but still explore important interactions in the parameter space in a systematic fashion. Additional test-cases, automatically generated from models (e.g., UML, Simulink, Stateflow) improve the coverage. The distributed test runs of the software system produce vast amounts of data, making manual analysis impossible. Our tool automatically analyzes the generated data through a combination of unsupervised Bayesian clustering techniques (AutoBayes) and supervised learning of critical parameter ranges using the treatment learner TAR3. The tool has been developed around the Trick simulation environment, which is widely used within NASA. We will present this tool with a GN&C (Guidance, Navigation and Control) simulation of a small satellite system.
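
    The n-factor combinatorial idea can be sketched for n = 2: every pair of parameters must be exercised in every combination of their values. The parameter table is hypothetical, and a real pairwise tool would pack these requirements into far fewer concrete test cases than exhaustive enumeration.

```python
from itertools import combinations, product

# Hypothetical simulation parameters and their candidate values.
params = {"mass": [1.0, 2.0], "thrust": [10, 20], "mode": ["A", "B"]}

def two_factor_requirements(params):
    """All (parameter, value) pairings that 2-factor coverage must exercise."""
    reqs = []
    for p1, p2 in combinations(params, 2):
        for v1, v2 in product(params[p1], params[p2]):
            reqs.append(((p1, v1), (p2, v2)))
    return reqs

reqs = two_factor_requirements(params)  # 3 parameter pairs x 4 combos = 12
```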

  12. Homogeneous SPC/E water nucleation in large molecular dynamics simulations.

    PubMed

    Angélil, Raymond; Diemand, Jürg; Tanaka, Kyoko K; Tanaka, Hidekazu

    2015-08-14

    We perform direct large molecular dynamics simulations of homogeneous SPC/E water nucleation, using up to ∼4 × 10^6 molecules. Our large system sizes allow us to measure extremely low and accurate nucleation rates, down to ∼10^19 cm^-3 s^-1, helping to close the gap with experimentally measured rates of ∼10^17 cm^-3 s^-1. We are also able to precisely measure size distributions, sticking efficiencies, cluster temperatures, and cluster internal densities. We introduce a new functional form to implement the Yasuoka-Matsumoto nucleation rate measurement technique (threshold method). Comparison to nucleation models shows that classical nucleation theory over-estimates nucleation rates by a few orders of magnitude. The semi-phenomenological nucleation model does better, under-predicting rates by at worst a factor of 24. Unlike what has been observed in Lennard-Jones simulations, post-critical clusters have temperatures consistent with the run-average temperature. Also, we observe that post-critical clusters have densities very slightly higher, ∼5%, than bulk liquid. We re-calibrate a Hale-type J vs. S scaling relation using both experimental and simulation data, finding remarkable consistency over 30 orders of magnitude in nucleation rate and 180 K in temperature.
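
    The Yasuoka-Matsumoto threshold method referred to above extracts the nucleation rate as J = (dN/dt)/V, where N(t) counts clusters above a threshold size in the linear-growth regime. The data below are fabricated for illustration, and the paper fits a new functional form rather than a plain straight line.

```python
import numpy as np

def nucleation_rate(times, n_above_threshold, volume):
    """J = slope of N(t) divided by the simulation volume (sketch)."""
    slope, _intercept = np.polyfit(times, n_above_threshold, 1)
    return slope / volume

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # time (arbitrary units)
n = np.array([0.0, 2.0, 4.0, 6.0, 8.0])   # clusters above threshold size
J = nucleation_rate(t, n, volume=1000.0)  # slope 2 over volume 1000
```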

  13. Atmospheric stability effects on wind farm performance using large-eddy simulation

    NASA Astrophysics Data System (ADS)

    Archer, C. L.; Ghaisas, N.; Xie, S.

    2014-12-01

    Atmospheric stability has recently been found to have significant impacts on wind farm performance, especially since offshore and onshore wind farms are known to operate often under non-neutral conditions. Recent field observations have revealed that changes in stability are accompanied by changes in wind speed, direction, and turbulent kinetic energy (TKE). In order to isolate the effects of stability, large-eddy simulations (LES) are performed under neutral, stable, and unstable conditions, keeping the wind speed and direction unchanged at a fixed height. The Lillgrund wind farm, comprising 48 turbines, is studied in this research with the Simulator for Offshore/Onshore Wind Farm Applications (SOWFA) developed by the National Renewable Energy Laboratory. Unlike most previous numerical simulations, this study does not impose periodic boundary conditions and therefore is ideal for evaluating the effects of stability in large, but finite, wind farms. Changes in power generation, velocity deficit, rate of wake recovery, TKE, and surface temperature are quantified as a function of atmospheric stability. The sensitivity of these results to wind direction is also discussed.

  14. Simulation of Long-Term Landscape-Level Fuel Treatment Effects on Large Wildfires

    Treesearch

    Mark A. Finney; Rob C. Seli; Charles W. McHugh; Alan A. Ager; Berni Bahro; James K. Agee

    2006-01-01

    A simulation system was developed to explore how fuel treatments placed in random and optimal spatial patterns affect the growth and behavior of large fires when implemented at different rates over the course of five decades. The system consists of a forest/fuel dynamics simulation module (FVS), logic for deriving fuel model dynamics from FVS output, a spatial fuel...

  15. F-16XL Hybrid Reynolds-Averaged Navier-Stokes/Large Eddy Simulation on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Abdol-Hamid, Khaled S.; Elmiligui, Alaa

    2015-01-01

    This study continues the Cranked Arrow Wing Aerodynamics Program, International (CAWAPI) investigation with the FUN3D and USM3D flow solvers. CAWAPI was established to study the F-16XL because it provides a unique opportunity to fuse flight test, wind tunnel test, and simulation to understand the aerodynamic features of swept wings. The high-lift performance of the cranked-arrow wing planform is critical for recent and past supersonic transport design concepts. Simulations of the low-speed, high angle of attack Flight Condition 25 are compared: Detached Eddy Simulation (DES), Modified Delayed Detached Eddy Simulation (MDDES), and the Spalart-Allmaras (SA) RANS model. Iso-surfaces of Q criterion show the development of coherent primary and secondary vortices on the upper surface of the wing that spiral, burst, and commingle. SA produces higher pressure peaks nearer to the leading edge of the wing than flight test measurements. Mean DES and MDDES pressures better predict the flight test measurements, especially on the outer wing section. Vortices and vortex-vortex interaction impact unsteady surface pressures. USM3D showed many sharp tones in volume point spectra near the wing apex with low broadband noise, while FUN3D showed more broadband noise with weaker tones. Spectra of the volume points near the outer wing leading edge were primarily broadband for both codes. Without unsteady flight measurements, the flight pressure environment cannot be used to validate the simulations containing tonal or broadband spectra. Mean forces and moment are very similar between FUN3D models and between USM3D models. Spectra of the unsteady forces and moment are broadband with a few sharp peaks for USM3D.

  16. Large CSF Volume Not Attributable to Ventricular Volume in Schizotypal Personality Disorder

    PubMed Central

    Dickey, Chandlee C.; Shenton, Martha E.; Hirayasu, Yoshio; Fischer, Iris; Voglmaier, Martina M.; Niznikiewicz, Margaret A.; Seidman, Larry J.; Fraone, Stephanie; McCarley, Robert W.

    2010-01-01

    Objective The purpose of this study was to determine whether schizotypal personality disorder, which has the same genetic diathesis as schizophrenia, manifests abnormalities in whole-brain and CSF volumes. Method Sixteen right-handed and neuroleptic-naive men with schizotypal personality disorder were recruited from the community and were age-matched to 14 healthy comparison subjects. Magnetic resonance images were obtained from the subjects and automatically parcellated into CSF, gray matter, and white matter. Subsequent manual editing separated cortical from noncortical gray matter. Lateral ventricles and temporal horns were also delineated. Results The men with schizotypal personality disorder had larger CSF volumes than the comparison subjects; the difference was not attributable to larger lateral ventricles. The cortical gray matter was somewhat smaller in the men with schizotypal personality disorder, but the difference was not statistically significant. Conclusions Consistent with many studies of schizophrenia, this examination of schizotypal personality disorder indicated abnormalities in brain CSF volumes. PMID:10618012

  17. Inviscid Wall-Modeled Large Eddy Simulations for Improved Efficiency

    NASA Astrophysics Data System (ADS)

    Aikens, Kurt; Craft, Kyle; Redman, Andrew

    2015-11-01

    The accuracy of an inviscid flow assumption for wall-modeled large eddy simulations (LES) is examined because of its ability to reduce simulation costs. This assumption is not generally applicable for wall-bounded flows due to the high velocity gradients found near walls. In wall-modeled LES, however, neither the viscous near-wall region nor the viscous length scales in the outer flow are resolved. Therefore, the viscous terms in the Navier-Stokes equations have little impact on the resolved flowfield. Zero pressure gradient flat plate boundary layer results are presented for both viscous and inviscid simulations using a wall model developed previously. The results are very similar and compare favorably to those from another wall model methodology and to experimental data. Furthermore, the inviscid assumption reduces simulation costs by about 25% and 39% for supersonic and subsonic flows, respectively. Future research directions are discussed, as are preliminary efforts to extend the wall model to include the effects of unresolved wall roughness. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575. Computational resources on TACC Stampede were provided under XSEDE allocation ENG150001.

  19. An effective online data monitoring and saving strategy for large-scale climate simulations

    DOE PAGES

    Xian, Xiaochen; Archibald, Rick; Mayer, Benjamin; ...

    2018-01-22

    Large-scale climate simulation models have been developed and widely used to generate historical data and study future climate scenarios. These simulation models often have to run for a couple of months to understand changes in the global climate over the course of decades. This long-duration simulation process creates a huge amount of data with high temporal and spatial resolution; however, how to effectively monitor and record climate changes based on these large-scale simulation results, which are continuously produced in real time, remains unresolved. Because writing data to disk is slow, the current practice is to save a snapshot of the simulation results at a constant, slow rate even though the data generation process runs at very high speed. This study proposes an effective online data monitoring and saving strategy over the temporal and spatial domains that accounts for practical storage and memory capacity constraints. Our proposed method is able to intelligently select and record the most informative extreme values in the raw data generated by real-time simulations, in the service of better monitoring climate changes.
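    One common building block for this kind of online extreme-value recording, sketched here purely for illustration (the paper's actual selection criterion is not specified in the abstract), is a fixed-capacity tracker: each incoming sample costs O(log k), and storage never exceeds the memory budget of k entries.

```python
import heapq

class ExtremeTracker:
    """Keep only the k largest values (with timestamps) seen so far,
    using a min-heap so the smallest retained value is evicted first."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._heap = []  # min-heap of (value, timestamp)

    def observe(self, value, timestamp):
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, (value, timestamp))
        elif value > self._heap[0][0]:
            # New sample beats the smallest retained extreme: swap it in.
            heapq.heapreplace(self._heap, (value, timestamp))

    def extremes(self):
        return sorted(self._heap, reverse=True)

tracker = ExtremeTracker(capacity=3)
for t, v in enumerate([2.0, 9.0, 1.0, 7.0, 5.0, 8.0]):
    tracker.observe(v, t)
# Only the three largest values survive: 9.0, 8.0, 7.0
```

    A production strategy would run one such budgeted structure per spatial region and variable, which matches the abstract's framing of selecting informative extremes under storage and memory constraints.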

  20. A Method for Large Eddy Simulation of Acoustic Combustion Instabilities

    NASA Astrophysics Data System (ADS)

    Wall, Clifton; Pierce, Charles; Moin, Parviz

    2002-11-01

    A method for performing Large Eddy Simulation of acoustic combustion instabilities is presented. By extending the low Mach number pressure correction method to the case of compressible flow, a numerical method is developed in which the Poisson equation for pressure is replaced by a Helmholtz equation. The method avoids the acoustic CFL condition by using implicit time advancement, leading to large efficiency gains at low Mach number. The method also avoids artificial damping of acoustic waves. The numerical method is attractive for the simulation of acoustic combustion instabilities, since these flows are typically at low Mach number and the acoustic frequencies of interest are usually low. Both of these characteristics suggest the use of larger time steps than those allowed by an acoustic CFL condition. The turbulent combustion model used is the Combined Conserved Scalar/Level Set Flamelet model of Duchamp de Lageneste and Pitsch for partially premixed combustion. Comparison of LES results to the experiments of Besson et al. will be presented.
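    The key step, replacing the pressure Poisson equation by a Helmholtz equation, can be sketched in a generic semi-discrete form (an assumption about the standard structure of such schemes, not necessarily the authors' exact discretization). With implicit time advancement of the compressible equations, the pressure correction p' satisfies

```latex
\nabla^{2} p' \;-\; \frac{1}{c^{2}\,\Delta t^{2}}\, p' \;=\; \mathrm{RHS},
```

    where c is the sound speed and \Delta t the time step. The extra undifferentiated term carries the acoustics implicitly, so the scheme is not bound by the acoustic CFL limit; in the incompressible limit c \to \infty the equation reduces to the familiar pressure Poisson equation.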

  1. Large-scale 3D simulations of ICF and HEDP targets

    NASA Astrophysics Data System (ADS)

    Marinak, Michael M.

    2000-10-01

    The radiation hydrodynamics code HYDRA continues to be developed and applied to 3D simulations of a variety of targets for both inertial confinement fusion (ICF) and high energy density physics. Several packages have been added, enabling this code to perform ICF target simulations with accuracy similar to that of the two-dimensional codes in long historical use. These include a laser ray-trace and deposition package, a heavy ion deposition package, implicit Monte Carlo photonics, and non-LTE opacities derived from XSN or the linearized response matrix approach.(R. More, T. Kato, Phys. Rev. Lett. 81, 814 (1998); S. Libby, F. Graziani, R. More, T. Kato, Proceedings of the 13th International Conference on Laser Interactions and Related Plasma Phenomena, (AIP, New York, 1997).) LTE opacities can also be calculated online for arbitrary mixtures by combining tabular values generated by different opacity codes. Thermonuclear burn, charged particle transport, neutron energy deposition, electron-ion coupling and conduction, and multigroup radiation diffusion packages are also installed. HYDRA can employ ALE hydrodynamics; a number of grid motion algorithms are available. Multi-material flows are resolved using material interface reconstruction. Results from large-scale simulations run on up to 1680 processors, using a combination of massively parallel processing and symmetric multiprocessing, will be described. A large solid angle simulation of Rayleigh-Taylor instability growth in a NIF ignition capsule has simultaneously resolved the full spectrum of the most dangerous modes that grow from surface roughness. Simulations of a NIF hohlraum illuminated with the initial 96-beam configuration have also been performed. The effect of the hohlraum's 3D intrinsic drive asymmetry on the capsule implosion will be considered. We will also discuss results from a Nova experiment in which a copper sphere is crushed by a planar shock. Several interacting hydrodynamic instabilities, including

  2. High sulfur loading cathodes fabricated using peapodlike, large pore volume mesoporous carbon for lithium-sulfur battery.

    PubMed

    Li, Duo; Han, Fei; Wang, Shuai; Cheng, Fei; Sun, Qiang; Li, Wen-Cui

    2013-03-01

    Porous carbon materials with large pore volume are crucial for loading insulating sulfur with the purpose of achieving high performance in lithium-sulfur batteries. In our study, peapodlike mesoporous carbon with interconnected pore channels and large pore volume (4.69 cm^3 g^-1) was synthesized and used as the matrix to fabricate a carbon/sulfur (C/S) composite serving as an attractive cathode for lithium-sulfur batteries. Systematic investigation of the C/S composite reveals that the carbon matrix can hold a high but suitable sulfur loading of 84 wt %, which is beneficial for improving the bulk density in practical applications. Such controllable sulfur-filling also effectively accommodates the volume expansion of active sulfur during Li^+ insertion. Moreover, the thin carbon walls (3-4 nm) of the carbon matrix not only shorten the Li^+ transfer pathway and conduct electrons to overcome the poor kinetics of the sulfur cathode, but are also flexible enough to warrant structural stability. Importantly, the peapodlike carbon shell is beneficial for increasing the electrical contact and thereby improving the electronic conductivity of active sulfur. Meanwhile, polymer modification with a polypyrrole coating layer further restrains polysulfide dissolution and improves the cycle stability of the carbon/sulfur composite.

  3. Striped bass, Morone saxatilis, egg incubation in large volume jars

    USGS Publications Warehouse

    Harper, C.J.; Wrege, B.M.; Jeffery, Isely J.

    2010-01-01

    The standard McDonald jar was compared with a large volume jar for striped bass, Morone saxatilis, egg incubation. The McDonald jar measured 16 cm in diameter by 45 cm in height and had a volume of 6 L. The experimental jar measured 0.4 m in diameter by 1.3 m in height and had a volume of 200 L. The hypothesis is that there is no difference in percent survival of fry hatched in experimental jars compared with McDonald jars. Striped bass brood fish were collected from the Coosa River and spawned using the dry spawn method of fertilization. Four McDonald jars were stocked with approximately 150 g of eggs each. Post-hatch survival was estimated at 48, 96, and 144 h. Stocking rates resulted in an average egg loading rate (±1 SE) in McDonald jars of 21.9 ± 0.03 eggs/mL and in experimental jars of 10.9 ± 0.57 eggs/mL. The major finding of this study was that average fry survival was 37.3 ± 4.49% for McDonald jars and 34.2 ± 3.80% for experimental jars. Although survival in experimental jars was slightly less than in McDonald jars, the effect of container volume on survival to 48 h (F = 6.57; df = 1, 5; P > 0.05), 96 h (F = 0.02; df = 1, 4; P > 0.89), and 144 h (F = 3.50; df = 1, 4; P > 0.13) was not statistically significant. Mean survival between replicates ranged from 14.7 to 60.1% in McDonald jars and from 10.1 to 54.4% in experimental jars. No effect of initial stocking rate on survival (t = 0.06; df = 10; P > 0.95) was detected. Experimental jars allowed for incubation of a greater number of eggs in less than half the floor space of McDonald jars. As hatchery production is often limited by space or water supply, experimental jars offer an alternative to extending spawning activities, thereby reducing labor and operations cost. As survival was similar to McDonald jars, the experimental jar is suitable for striped bass egg incubation. © Copyright by the World Aquaculture Society 2010.
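    The loading-rate arithmetic behind those figures is simple: eggs per millilitre is total egg count divided by container volume. A minimal sketch, assuming roughly 876 eggs per gram (back-calculated from the reported 21.9 eggs/mL average for 150 g of eggs in a 6 L McDonald jar; the per-gram figure is an inference, not a number stated in the abstract):

```python
def loading_rate_eggs_per_ml(egg_mass_g, eggs_per_gram, jar_volume_l):
    """Egg loading rate (eggs/mL) = total eggs / container volume in mL."""
    return egg_mass_g * eggs_per_gram / (jar_volume_l * 1000.0)

# McDonald jar: 150 g of eggs in a 6 L jar
mcdonald = loading_rate_eggs_per_ml(150.0, 876.0, 6.0)  # 21.9 eggs/mL
```

    The same arithmetic shows why the 200 L experimental jar can incubate far more eggs at a lower loading rate in less floor space.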

  4. Exposing earth surface process model simulations to a large audience

    NASA Astrophysics Data System (ADS)

    Overeem, I.; Kettner, A. J.; Borkowski, L.; Russell, E. L.; Peddicord, H.

    2015-12-01

    The Community Surface Dynamics Modeling System (CSDMS) represents a diverse group of >1300 scientists who develop and apply numerical models to better understand the Earth's surface. CSDMS has a mandate to make the public more aware of model capabilities and therefore started sharing state-of-the-art surface process modeling results with large audiences. One platform to reach audiences outside the science community is through museum displays on 'Science on a Sphere' (SOS). Developed by NOAA, SOS is a giant globe, linked with computers and multiple projectors, that can display data and animations on a sphere. CSDMS has developed and contributed model simulation datasets for the SOS system since 2014, including hydrological processes, coastal processes, and human interactions with the environment. Model simulations of a hydrological and sediment transport model (WBM-SED) illustrate global river discharge patterns. WAVEWATCH III simulations have been specifically processed to show the impacts of hurricanes on ocean waves, with focus on hurricane Katrina and super storm Sandy. A large world dataset of dams built over the last two centuries gives an impression of the profound influence of humans on water management. Given the exposure of SOS, CSDMS aims to contribute at least 2 model datasets a year, and will soon provide displays of global river sediment fluxes and changes of the sea ice free season along the Arctic coast. Over 100 facilities worldwide show these numerical model displays to an estimated 33 million people every year. Datasets, storyboards, and teacher follow-up materials associated with the simulations are developed to address common core science K-12 standards. CSDMS dataset documentation aims to make people aware of the fact that they look at numerical model results, that underlying models have inherent assumptions and simplifications, and that limitations are known. CSDMS contributions aim to familiarize large audiences with the use of numerical

  5. Forced transport of thermal energy in magmatic and phreatomagmatic large volume ignimbrites: Paleomagnetic evidence from the Colli Albani volcano, Italy

    NASA Astrophysics Data System (ADS)

    Trolese, Matteo; Giordano, Guido; Cifelli, Francesca; Winkler, Aldo; Mattei, Massimo

    2017-11-01

    Few studies have detailed the thermal architecture of large-volume pyroclastic density current deposits, although such work has a clear importance for understanding the dynamics of eruptions of this magnitude. Here we examine the temperature of emplacement of large-volume caldera-forming ignimbrites related to magmatic and phreatomagmatic eruptions at the Colli Albani volcano, Italy, by using thermal remanent magnetization analysis on both lithic and juvenile clasts. Results show that all the magmatic ignimbrites were deposited at high temperature, between the maximum blocking temperature of the magnetic carrier (600-630 °C) and the glass transition temperature (about 710 °C). Temperature estimations for the phreatomagmatic ignimbrite range between 200 and 400 °C, with most of the clasts emplaced between 200 and 320 °C. Because all the investigated ignimbrites, magmatic and phreatomagmatic, share similar magma composition, volume and mobility, we attribute the temperature difference to magma-water interaction, highlighting its pronounced impact on thermal dissipation, even in large-volume eruptions. The homogeneity of the deposit temperature of each ignimbrite across its areal extent, which is maintained across topographic barriers, suggests that these systems are thermodynamically isolated from the external environment for several tens of kilometers. Based on these findings, we propose that these large-volume ignimbrites are dominated by the mass flux, which forces the lateral transport of mass, momentum, and thermal energy for distances up to tens of kilometers away from the vent. We conclude that spatial variation of the emplacement temperature can be used as a proxy for determining the degree of forced-convection flow.

  6. Flight Technical Error Analysis of the SATS Higher Volume Operations Simulation and Flight Experiments

    NASA Technical Reports Server (NTRS)

    Williams, Daniel M.; Consiglio, Maria C.; Murdoch, Jennifer L.; Adams, Catherine H.

    2005-01-01

    This paper provides an analysis of Flight Technical Error (FTE) from recent SATS experiments, called the Higher Volume Operations (HVO) Simulation and Flight experiments, which NASA conducted to determine pilot acceptability of the HVO concept for normal operating conditions. Reported are FTE results from simulation and flight experiment data indicating the SATS HVO concept is viable and acceptable to low-time instrument-rated pilots when compared with today's system (baseline). Described is the comparative FTE analysis of lateral, vertical, and airspeed deviations from the baseline and SATS HVO experimental flight procedures. Based on FTE analysis, all evaluation subjects, low-time instrument-rated pilots, flew the HVO procedures safely and proficiently in comparison to today's system. In all cases, the results of the flight experiment validated the results of the simulation experiment and confirm the utility of the simulation platform for comparative Human in the Loop (HITL) studies of SATS HVO and Baseline operations.

  7. Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Givi, P.; Frankel, S. H.; Adumitroaie, V.; Sabini, G.; Madnia, C. K.

    1993-01-01

    The primary objective of this research is to extend current capabilities of Large Eddy Simulations (LES) and Direct Numerical Simulations (DNS) for the computational analyses of high speed reacting flows. Our efforts in the first two years of this research were concentrated on a priori investigations of single-point Probability Density Function (PDF) methods for providing subgrid closures in reacting turbulent flows. In the efforts initiated in the third year, our primary focus has been on performing actual LES by means of PDF methods. The approach is based on assumed PDF methods, and we have performed extensive analysis of turbulent reacting flows by means of LES. This includes simulations of both three-dimensional (3D) isotropic compressible flows and two-dimensional reacting planar mixing layers. In addition to these LES analyses, some work is in progress to assess the extent of validity of our assumed PDF methods. This assessment is done by making detailed comparisons with recent laboratory data in predicting the rate of reactant conversion in parallel reacting shear flows. This report provides a summary of our achievements for the first six months of the third year of this program.

  8. Recombinant factor VIIa as an adjunctive therapy for patients requiring large volume transfusion: a pharmacoeconomic evaluation.

    PubMed

    Loudon, B; Smith, M P

    2005-08-01

    Acute haemorrhage requiring large volume transfusion presents a costly and unpredictable risk to transfusion services. Recombinant factor VIIa (rFVIIa) (NovoSeven, Novo Nordisk, Bagsvaard, Denmark) may provide an important adjunctive haemostatic strategy for the management of patients requiring large volume blood transfusions. The aims were to review blood transfusion over a 12-month period and assess the major costs associated with haemorrhage management. A pharmacoeconomic evaluation of rFVIIa intervention for large volume transfusion was conducted to identify the most cost-effective strategy for using this haemostatic product. All patients admitted to Christchurch Public Hospital requiring > 5 units of red blood cells (RBC) during a single transfusion episode were audited and analysed. Patients were stratified into groups by the number of RBC units received and further stratified by ward category. Cumulative costs were derived to compare standard treatment with a hypothesized rFVIIa intervention for each transfusion group. Sensitivity analyses were performed by varying parameters and comparing to the original outcomes. Comparison of costs between the standard and hypothetical models indicated no statistically significant differences between groups (P < 0.05). Univariate and multivariate sensitivity analyses indicate that intervention with rFVIIa after transfusion of 14 RBC units may be cost-effective owing to conservation of blood components and a reduced duration of intensive care stay. Intervention with rFVIIa for haemorrhage control is most cost-effective relatively early in the RBC transfusion period. Our hypothetical model indicates the optimal time point is after 14 RBC units have been transfused.
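    The cumulative-cost comparison at the heart of such an evaluation can be sketched as follows. All prices here are entirely hypothetical placeholders chosen for illustration; the study's actual cost inputs are not given in the abstract, and the break-even logic (drug cost versus component and ICU savings) is the only part taken from the text.

```python
def cumulative_cost(rbc_units, rbc_unit_cost, icu_days, icu_day_cost,
                    rfviia_doses=0, rfviia_dose_cost=0.0):
    """Total episode cost: blood components + intensive care stay
    + any rFVIIa doses administered."""
    return (rbc_units * rbc_unit_cost + icu_days * icu_day_cost
            + rfviia_doses * rfviia_dose_cost)

# Hypothetical prices: $400/RBC unit, $3000/ICU day, $5000/rFVIIa dose.
standard = cumulative_cost(rbc_units=20, rbc_unit_cost=400.0,
                           icu_days=6, icu_day_cost=3000.0)
with_rfviia = cumulative_cost(rbc_units=14, rbc_unit_cost=400.0,
                              icu_days=4, icu_day_cost=3000.0,
                              rfviia_doses=1, rfviia_dose_cost=5000.0)
# Intervention pays off when component and ICU savings exceed drug cost.
saving = standard - with_rfviia
```

    Sensitivity analysis then amounts to re-running this comparison while varying unit costs, dose counts, and the transfusion threshold at which rFVIIa is given.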

  9. Large Eddy Simulation of Ducted Propulsors in Crashback

    NASA Astrophysics Data System (ADS)

    Jang, Hyunchul; Mahesh, Krishnan

    2009-11-01

    Flow around a ducted marine propulsor is computed using the large eddy simulation methodology under crashback conditions. Crashback is an operating condition where a propulsor rotates in the reverse direction while the vessel moves in the forward direction. It is characterized by massive flow separation and highly unsteady propeller loads, which affect both blade life and maneuverability. The simulations are performed on unstructured grids using the discrete kinetic energy conserving algorithm developed by Mahesh et al. (2004, J. Comput. Phys. 197). Numerical challenges posed by sharp blade edges and small blade tip clearances are discussed. The flow is computed at the advance ratio J=-0.7 and Reynolds number Re=480,000 based on the propeller diameter. Average and RMS values of the unsteady loads such as thrust, torque, and side force on the blades and duct are compared to experiment, and the effect of the duct on crashback is discussed.

  10. A large-eddy simulation based power estimation capability for wind farms over complex terrain

    NASA Astrophysics Data System (ADS)

    Senocak, I.; Sandusky, M.; Deleon, R.

    2017-12-01

    There has been increasing interest in predicting wind fields over complex terrain at the micro-scale for resource assessment, turbine siting, and power forecasting. These capabilities are made possible by advancements in computational speed from a new generation of computing hardware, numerical methods, and physics modelling. The micro-scale wind prediction model presented in this work is based on the large-eddy simulation paradigm with surface-stress parameterization. The complex terrain is represented using an immersed-boundary method that takes into account the parameterization of the surface stresses. The governing equations of incompressible fluid flow are solved using a projection method with second-order accurate schemes in space and time. We use actuator disk models with rotation to simulate the influence of turbines on the wind field. Data regarding power production from individual turbines are mostly restricted because of the proprietary nature of the wind energy business; most studies report the percentage drop of power relative to power from the first row. There have been different approaches to predicting power production: some studies simply report the available upstream wind power, some estimate power production using power curves available from turbine manufacturers, and some estimate power as torque multiplied by rotational speed. In the present work, we propose a black-box approach that considers a control volume around a turbine and estimates the power extracted from the turbine based on the conservation of energy principle. We applied our wind power prediction capability to wind farms over flat terrain, such as the wind farm in Mower County, Minnesota and the Horns Rev offshore wind farm in Denmark. The results from these simulations are in good agreement with published data. We also estimate power production from a hypothetical wind farm in a complex terrain region and identify potential zones suitable for wind power production.
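    The control-volume idea can be sketched with textbook actuator-disk relations: the power extracted is the net flux of kinetic energy through a control volume enclosing the rotor. This is a generic one-dimensional sketch under standard momentum-theory assumptions (disk velocity taken as the average of inflow and wake velocities), not the paper's LES-based formulation:

```python
import math

def control_volume_power(rho, area, u_in, u_out):
    """Net kinetic-energy flux through a control volume around a rotor:
    P = 0.5 * m_dot * (u_in^2 - u_out^2), with the mass flux m_dot
    evaluated at the disk-average velocity (momentum-theory assumption)."""
    u_disk = 0.5 * (u_in + u_out)   # actuator-disk average velocity
    m_dot = rho * area * u_disk     # mass flux through the rotor disk
    return 0.5 * m_dot * (u_in**2 - u_out**2)

# Illustrative numbers: air at 1.2 kg/m^3, 80 m rotor diameter,
# 10 m/s inflow, 6 m/s wake velocity -> roughly 1.5 MW
P = control_volume_power(1.2, math.pi * 40.0**2, 10.0, 6.0)
```

    In an LES, the same bookkeeping is done by integrating the resolved energy fluxes over the faces of a box surrounding each turbine, which avoids relying on a manufacturer power curve.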

  11. Plasmonic resonances of nanoparticles from large-scale quantum mechanical simulations

    NASA Astrophysics Data System (ADS)

    Zhang, Xu; Xiang, Hongping; Zhang, Mingliang; Lu, Gang

    2017-09-01

    Plasmonic resonance of metallic nanoparticles results from coherent motion of their conduction electrons, driven by incident light. For nanoparticles less than 10 nm in diameter, localized surface plasmonic resonances become sensitive to the quantum nature of the conduction electrons. Unfortunately, quantum mechanical simulations based on time-dependent Kohn-Sham density functional theory are computationally too expensive to tackle metal particles larger than 2 nm. Herein, we introduce the recently developed time-dependent orbital-free density functional theory (TD-OFDFT) approach, which enables large-scale quantum mechanical simulations of plasmonic responses of metallic nanostructures. Using TD-OFDFT, we have performed quantum mechanical simulations to understand the size-dependent plasmonic response of Na nanoparticles and plasmonic responses in Na nanoparticle dimers and trimers. An outlook on future development of the TD-OFDFT method is also presented.

  12. Large Eddy Simulation of Engineering Flows: A Bill Reynolds Legacy.

    NASA Astrophysics Data System (ADS)

    Moin, Parviz

    2004-11-01

    The term "large eddy simulation" (LES) was coined by Bill Reynolds thirty years ago, when he and his colleagues pioneered the introduction of LES in the engineering community. Bill's legacy in LES features his insistence on having a proper mathematical definition of the large-scale field independent of the numerical method used, and his vision for using numerical simulation output as data for research in turbulence physics and modeling, just as one would use experimental data. However, as an engineer, Bill was predominantly interested in the predictive capability of computational fluid dynamics and in particular LES. In this talk I will present the state of the art in large eddy simulation of complex engineering flows. Most of this technology has been developed in the Department of Energy's ASCI Program at Stanford, which was led by Bill in the last years of his distinguished career. At the core of this technology is a fully implicit, non-dissipative LES code which uses unstructured grids with arbitrary elements. A hybrid Eulerian/Lagrangian approach is used for multi-phase flows, and chemical reactions are introduced through dynamic equations for mixture fraction and reaction progress variable in conjunction with flamelet tables. The predictive capability of LES is demonstrated in several validation studies in flows with complex physics and complex geometry, including flow in the combustor of a modern aircraft engine. LES in such a complex application is only possible through efficient utilization of modern parallel supercomputers, which was recognized and emphasized by Bill from the beginning. The presentation will include a brief mention of computer science efforts for efficient implementation of LES.

  13. Physiologic volume of phosphorus during hemodialysis: predictions from a pseudo one-compartment model.

    PubMed

    Leypoldt, John K; Akonur, Alp; Agar, Baris U; Culleton, Bruce F

    2012-10-01

    The kinetics of plasma phosphorus concentrations during hemodialysis (HD) are complex and cannot be described by conventional one- or two-compartment kinetic models. It has recently been shown by others that the physiologic (or apparent distribution) volume for phosphorus (Vr-P) increases with increasing treatment time and shows a large variation among patients treated by thrice-weekly and daily HD. Here, we describe the dependence of Vr-P on treatment time and predialysis plasma phosphorus concentration as predicted by a novel pseudo one-compartment model. The kinetics of plasma phosphorus during conventional and six-times-per-week daily HD were simulated as a function of treatment time per session for various dialyzer phosphate clearances and patient-specific phosphorus mobilization clearances (K(M)). Values of Vr-P normalized to extracellular volume from these simulations are reported and compared with previously published empirical findings. Simulated results were relatively independent of dialyzer phosphate clearance and treatment frequency. In contrast, Vr-P was strongly dependent on treatment time per session; the increase in Vr-P with treatment time was larger for higher values of K(M). Vr-P was inversely dependent on predialysis plasma phosphorus concentration. There was significant variation among predicted Vr-P values, depending largely on the value of K(M). We conclude that a pseudo one-compartment model can describe the empirical dependence of the physiologic volume of phosphorus on treatment time and predialysis plasma phosphorus concentration. Further, the variation in the physiologic volume of phosphorus among HD patients is largely due to differences in patient-specific phosphorus mobilization clearance. © 2012 The Authors. Hemodialysis International © 2012 International Society for Hemodialysis.
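    A pseudo one-compartment model of this kind can be sketched as a single ODE: one accessible compartment of volume v is cleared by the dialyzer (clearance k_d) while phosphorus is mobilized from an inaccessible pool at a patient-specific clearance k_m, driven by the deviation of the plasma level from its predialysis value. The structure and all parameter values below are illustrative assumptions based on the abstract, not the authors' code; the apparent volume is then mass removed per unit drop in plasma concentration.

```python
def simulate_phosphorus(c_pre, v, k_m, k_d, t_end, dt=0.1):
    """Forward-Euler integration of a pseudo one-compartment model:
    v * dc/dt = k_m * (c_pre - c) - k_d * c  (during dialysis).
    Returns end-of-session concentration and the apparent
    (physiologic) distribution volume Vr-P = removed / (c_pre - c)."""
    c, t, removed = c_pre, 0.0, 0.0
    while t < t_end:
        dcdt = (k_m * (c_pre - c) - k_d * c) / v
        removed += k_d * c * dt     # mass taken out by the dialyzer
        c += dcdt * dt
        t += dt
    vr_p = removed / (c_pre - c)
    return c, vr_p

# Illustrative values: c in mmol/L, volumes in L, clearances in L/min
c_end, vr_p = simulate_phosphorus(c_pre=1.8, v=15.0, k_m=0.08,
                                  k_d=0.15, t_end=240.0)
```

    Because mobilization keeps replenishing the accessible pool while the concentration drop saturates, Vr-P computed this way exceeds v and keeps growing with session length, reproducing the qualitative treatment-time dependence described in the abstract.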

  14. Large-Eddy/Lattice Boltzmann Simulations of Micro-blowing Strategies for Subsonic and Supersonic Drag Control

    NASA Technical Reports Server (NTRS)

    Menon, Suresh

    2003-01-01

    This report summarizes the progress made in the first 8 to 9 months of this research. The Lattice Boltzmann Equation (LBE) methodology for Large-eddy Simulations (LES) of microblowing has been validated using a jet-in-crossflow test configuration. In this study, the flow intake is also simulated to allow the interaction to occur naturally. The Lattice Boltzmann Equation Large-eddy Simulations (LBELES) approach is capable of capturing not only the flow features associated with the flow, such as hairpin vortices and recirculation behind the jet, but also is able to show better agreement with experiments when compared to previous RANS predictions. The LBELES is shown to be computationally very efficient and therefore a viable method for simulating the injection process. Two strategies have been developed to simulate the multi-hole injection process as in the experiment. In order to allow natural interaction between the injected fluid and the primary stream, the flow intakes for all the holes have to be simulated. The LBE method is computationally efficient but is still 3D in nature and therefore there may be some computational penalty. In order to study a large number of holes, a new 1D subgrid model has been developed that will simulate a reduced form of the Navier-Stokes equation in these holes.

  15. Large-Scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) Simulations of the Molecular Crystal alphaRDX

    DTIC Science & Technology

    2013-08-01

    This work models dislocations in the energetic molecular crystal RDX using the Large-Scale Atomic/Molecular Massively Parallel Simulator (LAMMPS). The SB potential, which includes dispersion and electrostatic interactions, is used for HMX/RDX; its constants are given in table 1 (refs. 3, 9).

  16. Large eddy simulation of dust-uplift by haboob density currents

    NASA Astrophysics Data System (ADS)

    Huang, Q.

    2017-12-01

    Cold pool outflows have been shown from both observations and convection-permitting models to be a dominant source of dust uplift ("haboobs") in the summertime Sahel and Sahara, and to cause dust uplift over deserts across the world. In this paper, large eddy model (LEM) simulations, which resolve the turbulence within the cold pools much better than the convection-permitting models used in previous studies of haboobs, are used to investigate the winds that cause dust uplift in cold pools, and the resultant dust uplift and transport. Dust uplift largely occurs in the head of the density current, consistent with the few existing observations. In the modeled density current, dust is largely restricted to the lowest, coldest, and well-mixed layer of the cold pool outflow (below around 400 m), except above the head of the cold pool, where some dust reaches 2.5 km. This rapid transport to high altitude will contribute to long atmospheric lifetimes of large dust particles from haboobs. Decreasing the model horizontal grid-spacing from 1.0 km to 100 m resolves more turbulence, locally increasing winds, increasing mixing and reducing the propagation speed of the density current. Total accumulated dust uplift is approximately twice as large in 1.0 km runs compared with 100 m runs, suggesting that for studying haboobs in convection-permitting runs the representation of turbulence and mixing is significant. Simulations with surface sensible heat fluxes representative of those from a desert region in daytime show that increasing surface fluxes slows the density current due to increased mixing, but increases dust uplift rates due to increased downward transport of momentum to the surface.

  17. Volume I: Select Papers

    DTIC Science & Technology

    2010-08-01

    Excerpted contents: pressurization simulations; NVT uniaxial strain simulations; stacking mismatch simulations. Figure 2 plots pressure versus normalized volume; circles are simulation results.

  18. Large eddy simulation of cavitating flows

    NASA Astrophysics Data System (ADS)

    Gnanaskandan, Aswin; Mahesh, Krishnan

    2014-11-01

    Large eddy simulation on unstructured grids is used to study hydrodynamic cavitation. The multiphase medium is represented using a homogeneous equilibrium model that assumes thermal equilibrium between the liquid and the vapor phase. Surface tension effects are ignored and the governing equations are the compressible Navier Stokes equations for the liquid/vapor mixture along with a transport equation for the vapor mass fraction. A characteristic-based filtering scheme is developed to handle shocks and material discontinuities in non-ideal gases and mixtures. A TVD filter is applied as a corrector step in a predictor-corrector approach with the predictor scheme being non-dissipative and symmetric. The method is validated for canonical one dimensional flows and leading edge cavitation over a hydrofoil, and applied to study sheet to cloud cavitation over a wedge. This work is supported by the Office of Naval Research.
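
    In a homogeneous equilibrium model of the kind described, the liquid/vapor mixture is a single fluid whose density follows from the vapor mass fraction. A minimal sketch (property values are illustrative, not those of the paper):

```python
def mixture_density(Y_v, rho_l=998.0, rho_v=0.017):
    """Density of a homogeneous liquid/vapor mixture from the vapor mass
    fraction Y_v, assuming both phases share pressure, velocity, and
    temperature: 1/rho = Y_v/rho_v + (1 - Y_v)/rho_l.
    Densities (kg/m^3, roughly water at room temperature) illustrative."""
    return 1.0 / (Y_v / rho_v + (1.0 - Y_v) / rho_l)
```

    Even a tiny vapor mass fraction collapses the mixture density by orders of magnitude, which is why the material discontinuities mentioned in the abstract require shock-capturing treatment such as the characteristic-based filtering scheme.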

  19. Review of Dynamic Modeling and Simulation of Large Scale Belt Conveyor System

    NASA Astrophysics Data System (ADS)

    He, Qing; Li, Hong

    The belt conveyor is one of the most important devices for transporting bulk solid material over long distances. Dynamic analysis is the key to deciding whether a design is technically rational, safe and reliable in operation, and economically feasible, and studying dynamic properties is essential for improving efficiency and productivity and for guaranteeing safe, reliable, and stable running. This paper discusses dynamic research on large-scale belt conveyors and its applications, and analyzes the main research topics and the state of the art of dynamic research on belt conveyors. The main future work focuses on dynamic analysis, modeling, and simulation of the main components and the whole system, as well as nonlinear modeling, simulation, and vibration analysis of large-scale conveyor systems.

  20. Large-eddy simulations of the restricted nonlinear system

    NASA Astrophysics Data System (ADS)

    Bretheim, Joel; Gayme, Dennice; Meneveau, Charles

    2014-11-01

    Wall-bounded shear flows often exhibit elongated flow structures with streamwise coherence (e.g. rolls/streaks), prompting the exploration of a streamwise-constant modeling framework to investigate wall turbulence. Simulations of a streamwise-constant (2D/3C) model have been shown to produce the roll/streak structures and accurately reproduce the blunted turbulent mean velocity profile in plane Couette flow. The related restricted nonlinear (RNL) model captures these same features but also exhibits self-sustaining turbulent behavior. Direct numerical simulation (DNS) of the RNL system results in similar statistics for a number of flow quantities and a flow field that is consistent with DNS of the Navier-Stokes equations. Aiming to develop reduced-order models of wall-bounded turbulence at very high Reynolds numbers in which viscous near-wall dynamics cannot be resolved, this work presents the development of an RNL formulation of the filtered Navier-Stokes equations solved in large-eddy simulations (LES). The proposed LES-RNL system is a computationally affordable reduced-order modeling tool that is of interest for studying the underlying dynamics of high-Reynolds-number wall turbulence and for engineering applications where the flow field is dominated by streamwise-coherent motions. This work is supported by NSF (IGERT, SEP-1230788 and IIA-1243482).

  1. Large Eddy Simulations of Colorless Distributed Combustion Systems

    NASA Astrophysics Data System (ADS)

    Abdulrahman, Husam F.; Jaberi, Farhad; Gupta, Ashwani

    2014-11-01

    Development of efficient and low-emission colorless distributed combustion (CDC) systems for gas turbine applications requires careful examination of the role of various flow and combustion parameters. Numerical simulations of CDC in a laboratory-scale combustor have been conducted to carefully examine the effects of these parameters on the CDC. The computational model is based on a hybrid modeling approach combining large eddy simulation (LES) with the filtered mass density function (FMDF) equations, solved with high-order numerical methods and complex chemical kinetics. The simulated combustor operates based on the principle of high temperature air combustion (HiTAC) and has been shown to significantly reduce NOx and CO emissions while improving the reaction pattern factor and stability, without using any flame stabilizer and with low pressure drop and noise. The focus of the current work is to investigate the mixing of air and hydrocarbon fuels and the non-premixed and premixed reactions within the combustor by the LES/FMDF with reduced chemical kinetic mechanisms for the same flow conditions and configurations investigated experimentally. The main goal is to develop better CDC with higher mixing and efficiency, ultra-low emission levels, and optimum residence time. The computational results establish the consistency and the reliability of LES/FMDF and its Lagrangian-Eulerian numerical methodology.

  2. Dynamics Modeling and Simulation of Large Transport Airplanes in Upset Conditions

    NASA Technical Reports Server (NTRS)

    Foster, John V.; Cunningham, Kevin; Fremaux, Charles M.; Shah, Gautam H.; Stewart, Eric C.; Rivers, Robert A.; Wilborn, James E.; Gato, William

    2005-01-01

    As part of NASA's Aviation Safety and Security Program, research has been in progress to develop aerodynamic modeling methods for simulations that accurately predict the flight dynamics characteristics of large transport airplanes in upset conditions. The motivation for this research stems from the recognition that simulation is a vital tool for addressing loss-of-control accidents, including applications to pilot training, accident reconstruction, and advanced control system analysis. The ultimate goal of this effort is to contribute to the reduction of the fatal accident rate due to loss-of-control. Research activities have involved accident analyses, wind tunnel testing, and piloted simulation. Results have shown that significant improvements in simulation fidelity for upset conditions, compared to current training simulations, can be achieved using state-of-the-art wind tunnel testing and aerodynamic modeling methods. This paper provides a summary of research completed to date and includes discussion on key technical results, lessons learned, and future research needs.

  3. Dynamic subfilter-scale stress model for large-eddy simulations

    NASA Astrophysics Data System (ADS)

    Rouhi, A.; Piomelli, U.; Geurts, B. J.

    2016-08-01

    We present a modification of the integral length-scale approximation (ILSA) model originally proposed by Piomelli et al. [Piomelli et al., J. Fluid Mech. 766, 499 (2015), 10.1017/jfm.2015.29] and apply it to plane channel flow and a backward-facing step. In the ILSA models the length scale is expressed in terms of the integral length scale of turbulence and is determined by the flow characteristics, decoupled from the simulation grid. In the original formulation the model coefficient was constant, determined by requiring a desired global contribution of the unresolved subfilter scales (SFSs) to the dissipation rate, known as SFS activity; its value was found by a set of coarse-grid calculations. Here we develop two modifications. We define a measure of SFS activity (based on turbulent stresses), which adds to the robustness of the model, particularly at high Reynolds numbers, and removes the need for the prior coarse-grid calculations: The model coefficient can be computed dynamically and adapt to large-scale unsteadiness. Furthermore, the desired level of SFS activity is now enforced locally (and not integrated over the entire volume, as in the original model), providing better control over model activity and also improving the near-wall behavior of the model. Application of the local ILSA to channel flow and a backward-facing step and comparison with the original ILSA and with the dynamic model of Germano et al. [Germano et al., Phys. Fluids A 3, 1760 (1991), 10.1063/1.857955] show better control over the model contribution in the local ILSA, while the positive properties of the original formulation (including its higher accuracy compared to the dynamic model on coarse grids) are maintained. The backward-facing step also highlights the advantage of the decoupling of the model length scale from the mesh.

  4. Simulated Performances of a Very High Energy Tomograph for Non-Destructive Characterization of large objects

    NASA Astrophysics Data System (ADS)

    Kistler, Marc; Estre, Nicolas; Merle, Elsa

    2018-01-01

    As part of its R&D activities on high-energy X-ray imaging for non-destructive characterization, the Nuclear Measurement Laboratory has started an upgrade of its imaging system currently implemented at the CEA-Cadarache center. The goals are to achieve a sub-millimeter spatial resolution and the ability to perform tomographies on very large objects (more than 100-cm standard concrete or 40-cm steel). This paper presents results on the detection part of the imaging system. The upgrade of the detection part needs a thorough study of the performance of two detectors: a series of CdTe semiconductor sensors and two arrays of segmented CdWO4 scintillators with different pixel sizes. This study consists of a Quantum Accounting Diagram (QAD) analysis coupled with Monte-Carlo simulations. The scintillator arrays are able to detect millimeter details through 140 cm of concrete, but are limited to 120 cm for smaller ones. CdTe sensors have lower but more stable performance, with a 0.5 mm resolution for 90 cm of concrete. The choice of the detector then depends on the preferred characteristic: the spatial resolution or the use on large volumes. The combination of the features of the source and the studies on the detectors gives the expected performance of the whole equipment, in terms of signal-to-noise ratio (SNR), spatial resolution and acquisition time.

  5. Large Eddy Simulations of Transverse Combustion Instability in a Multi-Element Injector

    DTIC Science & Technology

    2016-07-27

    Combustion instability has plagued the development of liquid rocket engines and remains a large risk in the development and acquisition of new liquid rocket engines. This work uses simulations to better understand the physics that can lead to combustion instability in liquid rocket engines. The most damaging instabilities found in liquid rocket engines are transverse; the experiment motivating the current work subjects the CVRC injector to such instabilities.

  6. Lightweight computational steering of very large scale molecular dynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beazley, D.M.; Lomdahl, P.S.

    1996-09-01

    We present a computational steering approach for controlling, analyzing, and visualizing very large scale molecular dynamics simulations involving tens to hundreds of millions of atoms. Our approach relies on extensible scripting languages and an easy to use tool for building extensions and modules. The system is extremely easy to modify, works with existing C code, is memory efficient, and can be used from inexpensive workstations and networks. We demonstrate how we have used this system to manipulate data from production MD simulations involving as many as 104 million atoms running on the CM-5 and Cray T3D. We also show how this approach can be used to build systems that integrate common scripting languages (including Tcl/Tk, Perl, and Python), simulation code, user extensions, and commercial data analysis packages.

  7. Perturbative two- and three-loop coefficients from large β Monte Carlo

    NASA Astrophysics Data System (ADS)

    Lepage, G. P.; Mackenzie, P. B.; Shakespeare, N. H.; Trottier, H. D.

    Perturbative coefficients for Wilson loops and the static quark self-energy are extracted from Monte Carlo simulations at large β on finite volumes, where all the lattice momenta are large. The Monte Carlo results are in excellent agreement with perturbation theory through second order. New results for third order coefficients are reported. Twisted boundary conditions are used to eliminate zero modes and to suppress Z3 tunneling.

  8. Robust large-scale parallel nonlinear solvers for simulations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step based on a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any existing linear solver, which makes it simple
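
    Broyden's method, as contrasted with Newton's method in the abstract, replaces the Jacobian with a secant approximation corrected by rank-one updates, so the residual function never needs an analytic Jacobian after the initial seed. A minimal dense-matrix sketch (the report's limited-memory variant stores update vectors instead of the full matrix); the seeding strategy and test problem below are illustrative:

```python
import numpy as np

def fd_jacobian(F, x, h=1e-7):
    """One-sided finite-difference Jacobian, used only to seed Broyden."""
    Fx = F(x)
    J = np.empty((Fx.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += h
        J[:, j] = (F(xp) - Fx) / h
    return J

def broyden(F, x0, tol=1e-10, max_iter=100):
    """Broyden's 'good' method: the Jacobian approximation B is corrected
    by a rank-one secant update instead of being recomputed each step."""
    x = np.asarray(x0, dtype=float)
    B = fd_jacobian(F, x)
    Fx = F(x)
    for _ in range(max_iter):
        s = np.linalg.solve(B, -Fx)              # quasi-Newton step
        x = x + s
        F_new = F(x)
        y = F_new - Fx
        B += np.outer(y - B @ s, s) / (s @ s)    # rank-one secant update
        Fx = F_new
        if np.linalg.norm(Fx) < tol:
            break
    return x
```

    For a smooth 2x2 system such as F(x) = (x0 + x1 - 3, x0^2 - x1 - 1), `broyden(F, [1.0, 1.0])` converges to the root near (1.5616, 1.4384) without ever forming the exact Jacobian after the first iteration.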

  9. Studying Turbulence Using Numerical Simulation Databases - X Proceedings of the 2004 Summer Program

    NASA Technical Reports Server (NTRS)

    Moin, Parviz; Mansour, Nagi N.

    2004-01-01

    This Proceedings volume contains 32 papers that span a wide range of topics that reflect the ubiquity of turbulence. The papers have been divided into six groups: 1) Solar Simulations; 2) Magnetohydrodynamics (MHD); 3) Large Eddy Simulation (LES) and Numerical Simulations; 4) Reynolds Averaged Navier Stokes (RANS) Modeling and Simulations; 5) Stability and Acoustics; 6) Combustion and Multi-Phase Flow.

  10. Simulating Cosmic Reionization and Its Observable Consequences

    NASA Astrophysics Data System (ADS)

    Shapiro, Paul

    2017-01-01

    I summarize recent progress in modelling the epoch of reionization by large-scale simulations of cosmic structure formation, radiative transfer and their interplay, which trace the ionization fronts that swept across the IGM, to predict observable signatures. Reionization by starlight from early galaxies affected their evolution, impacting reionization, itself, and imprinting the galaxies with a memory of reionization. Star formation suppression, e.g., may explain the observed underabundance of Local Group dwarfs relative to N-body predictions for Cold Dark Matter. I describe CoDa ("Cosmic Dawn"), the first fully-coupled radiation-hydrodynamical simulation of reionization and galaxy formation in the Local Universe, in a volume large enough to model reionization globally but with enough resolving power to follow all the atomic-cooling galactic halos in that volume. A 90 Mpc box was simulated from a constrained realization of primordial fluctuations, chosen to reproduce present-day features of the Local Group, including the Milky Way and M31, and the local universe beyond, including the Virgo cluster. The new RAMSES-CUDATON hybrid CPU-GPU code took 11 days to perform this simulation on the Titan supercomputer at Oak Ridge National Laboratory, with 4096-cubed N-body particles for the dark matter and 4096-cubed cells for the atomic gas and ionizing radiation.

  11. The TeraShake Computational Platform for Large-Scale Earthquake Simulations

    NASA Astrophysics Data System (ADS)

    Cui, Yifeng; Olsen, Kim; Chourasia, Amit; Moore, Reagan; Maechling, Philip; Jordan, Thomas

    Geoscientific and computer science researchers with the Southern California Earthquake Center (SCEC) are conducting a large-scale, physics-based, computationally demanding earthquake system science research program with the goal of developing predictive models of earthquake processes. The computational demands of this program continue to increase rapidly as these researchers seek to perform physics-based numerical simulations of earthquake processes at ever larger scales. To meet the needs of this research program, a multiple-institution team coordinated by SCEC has integrated several scientific codes into a numerical modeling-based research tool we call the TeraShake computational platform (TSCP). A central component in the TSCP is a highly scalable earthquake wave propagation simulation program called the TeraShake anelastic wave propagation (TS-AWP) code. In this chapter, we describe how we extended an existing, stand-alone, well-validated, finite-difference, anelastic wave propagation modeling code into the highly scalable and widely used TS-AWP and then integrated this code into the TeraShake computational platform that provides end-to-end (initialization to analysis) research capabilities. We also describe the techniques used to enhance the TS-AWP parallel performance on TeraGrid supercomputers, as well as the TeraShake simulation phases, including input preparation, run time, data archive management, and visualization. As a result of our efforts to improve its parallel efficiency, the TS-AWP has now shown highly efficient strong scaling on over 40K processors on IBM’s BlueGene/L Watson computer. In addition, the TSCP has developed into a computational system that is useful to many members of the SCEC community for performing large-scale earthquake simulations.

  12. Nuclear EMP simulation for large-scale urban environments. FDTD for electrically large problems.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, William S.; Bull, Jeffrey S.; Wilcox, Trevor

    2012-08-13

    In case of a terrorist nuclear attack in a metropolitan area, EMP measurement could provide: (1) a prompt confirmation of the nature of the explosion (chemical or nuclear) for emergency response; and (2) characterization parameters of the device (reaction history, yield) for technical forensics. However, the urban environment could affect the fidelity of the prompt EMP measurement (as well as all other types of prompt measurement): (1) the nuclear EMP wavefront would no longer be coherent, due to incoherent production, attenuation, and propagation of gamma rays and electrons; and (2) EMP propagation from the source region outward would undergo complicated transmission, reflection, and diffraction processes. EMP simulation for electrically large urban environments: (1) a coupled MCNP/FDTD (finite-difference time-domain Maxwell solver) approach; and (2) FDTD tends to be limited to problems that are not 'too' large compared to the wavelengths of interest because of numerical dispersion and anisotropy. We use a higher-order, low-dispersion, isotropic FDTD algorithm for EMP propagation.
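
    The heart of any FDTD Maxwell solver is the leapfrog update of staggered E and H fields; the higher-order, low-dispersion stencil cited in the abstract addresses the phase error this standard second-order version accumulates over electrically large domains. A minimal 1D vacuum sketch in normalized units (grid size, source position, and Courant number are illustrative assumptions):

```python
import numpy as np

def fdtd_1d(nz=400, nt=160, courant=0.5, src=200):
    """Second-order Yee update in 1D vacuum (normalized: c = dz = 1).
    ez[k] sits on grid nodes, hy[k] halfway between ez[k] and ez[k+1];
    a soft Gaussian source at node `src` launches pulses both ways."""
    ez = np.zeros(nz)
    hy = np.zeros(nz - 1)
    for n in range(nt):
        hy += courant * (ez[1:] - ez[:-1])          # H from curl of E
        ez[1:-1] += courant * (hy[1:] - hy[:-1])    # E from curl of H
        ez[src] += np.exp(-((n - 30.0) / 10.0) ** 2)  # soft source
    return ez
```

    After 160 steps the right-going pulse has travelled roughly (160 - 30) * 0.5 = 65 cells from the source. At Courant numbers below 1 the numerical phase velocity lags c slightly for short wavelengths; this is the numerical dispersion that the higher-order scheme mentioned in the abstract mitigates.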

  13. Large space telescope, phase A. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The Phase A study of the Large Space Telescope (LST) is reported. The study defines an LST concept based on the broad mission guidelines provided by the Office of Space Science (OSS), the scientific requirements developed by OSS with the scientific community, and an understanding of long range NASA planning current at the time the study was performed. The LST is an unmanned astronomical observatory facility, consisting of an optical telescope assembly (OTA), scientific instrument package (SIP), and a support systems module (SSM). The report consists of five volumes and describes the constraints and trade-off analyses that were performed to arrive at a reference design for each system and for the overall LST configuration. A low cost design approach was followed in the Phase A study. This resulted in the use of standard spacecraft hardware, the provision for maintenance at the black box level, growth potential in systems designs, and the sharing of shuttle maintenance flights with other payloads.

  14. Changes in lung volumes and gas trapping in patients with large hiatal hernia.

    PubMed

    Naoum, Christopher; Kritharides, Leonard; Ing, Alvin; Falk, Gregory L; Yiannikas, John

    2017-03-01

    Studies assessing hiatal hernia (HH)-related effects on lung volumes derived by body plethysmography are limited. We aimed to evaluate the effect of hernia size on lung volumes (including assessment by body plethysmography) and the relationship to functional capacity, as well as the impact of corrective surgery. Seventy-three patients (70 ± 10 years; 54 female) with large HH [mean ± standard deviation, intra-thoracic stomach (ITS) (%): 63 ± 20%; type III in 65/73] had respiratory function data (spirometry, 73/73; body plethysmography, 64/73; diffusing capacity, 71/73) and underwent HH surgery. Respiratory function was analysed in relation to hernia size (groups I, II and III: ≤50, 50%-75% and ≥75% ITS, respectively) and functional capacity. Post-operative changes were quantified in a subgroup. Total lung capacity (TLC) and vital capacity (VC) correlated inversely with hernia size (TLC: 97 ± 11%, 96 ± 13%, 88 ± 10% predicted in groups I, II and III, respectively, P = 0.01; VC: 110 ± 17%, 111 ± 14%, 98 ± 14% predicted, P = 0.02); however, mean values were normal and only 14% had abnormal lung volumes. Surgery increased TLC (93 ± 11% vs 97 ± 10% predicted) and VC (105 ± 15% vs 116 ± 18%), and decreased residual volume/total lung capacity (RV/TLC) ratio (39 ± 7% vs 37 ± 6%) (P < 0.01 for all). Respiratory changes were modest relative to the marked functional class improvement. Among parameters that improved following HH surgery, decreased TLC and forced expiratory volume in 1 s and increased RV/TLC ratio correlated with poorer functional class pre-operatively. Increasing HH size correlates with reduced TLC and VC. Surgery improves lung volumes and gas trapping; however, the changes are mild and within the normal range. © 2015 John Wiley & Sons Ltd.

  15. The Long-Term Clinical Outcomes Following Autogenous Bone Grafting for Large-Volume Defects of the Knee

    PubMed Central

    Delano, Mark; Spector, Myron; Pittsley, Andrew; Gottschalk, Alexander

    2014-01-01

    Objective: We report the long-term clinical outcomes of patients who underwent autogenous bone grafting of large-volume osteochondral defects of the knee due to osteochondritis dessicans (OCD) and osteonecrosis (ON). This is the companion to a previous report on the biological response. We hypothesized that these grafts would integrate with host bone and the articular surface would form fibrocartilage, providing an enduring clinical benefit. Design: Three groups (patients/knees) were studied: OCD without a fragment (n = 12/13), OCD with a partial fragment (n = 14/16), and ON (n = 25/26). Twenty-five of 52 patients were available for clinical follow-up between 12 and 21 years. Electronic medical records provided comparison clinical information. In addition, there were plain film radiographs, MRIs, plus repeat arthroscopy and biopsy on 14 patients. Results: Autogenous bone grafts integrated with the host bone. MRI showed soft tissue covering all the grafts at long-term follow-up. Biopsy showed initial surface fibrocartilage that subsequently converted to fibrocartilage and hyaline cartilage at 20 years. OCD patients had better clinical outcomes than ON patients. No OCD patients were asymptomatic at any time following surgery. Half of the ON patients came to total knee replacement within 10 years. Conclusions: Autogenous bone grafting provides an alternative biological matrix to fill large-volume defects in the knee as a singular solution integrating with host bone and providing an enduring articular cartilage surface. The procedure is best suited for those with OCD. The treatment for large-volume articular defects by this method remains salvage in nature and palliative in outcome. PMID:26069688

  16. A large-signal dynamic simulation for the series resonant converter

    NASA Technical Reports Server (NTRS)

    King, R. J.; Stuart, T. A.

    1983-01-01

    A simple nonlinear discrete-time dynamic model for the series resonant dc-dc converter is derived using approximations appropriate to most power converters. This model is useful for the dynamic simulation of a series resonant converter using only a desktop calculator. The model is compared with a laboratory converter for a large transient event.

  17. Protein Simulation Data in the Relational Model.

    PubMed

    Simms, Andrew M; Daggett, Valerie

    2012-10-01

    High performance computing is leading to unprecedented volumes of data. Relational databases offer a robust and scalable model for storing and analyzing scientific data. However, these features do not come without a cost: significant design effort is required to build a functional and efficient repository. Modeling protein simulation data in a relational database presents several challenges: the data captured from individual simulations are large, multi-dimensional, and must integrate with both simulation software and external data sites. Here we present the dimensional design and relational implementation of a comprehensive data warehouse for storing and analyzing molecular dynamics simulations using SQL Server.
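
    The dimensional design the abstract refers to separates slowly-changing descriptive attributes (dimension tables) from high-volume per-frame measurements (fact tables). A toy version of such a star schema, using SQLite in place of the authors' SQL Server; the table names, column names, and sample values are all illustrative assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Dimension: one row per simulation (descriptive, low volume).
    CREATE TABLE dim_simulation (
        sim_id   INTEGER PRIMARY KEY,
        protein  TEXT NOT NULL,
        temp_K   REAL NOT NULL);
    -- Fact: one row per trajectory frame (numeric, high volume).
    CREATE TABLE fact_frame (
        sim_id  INTEGER NOT NULL REFERENCES dim_simulation(sim_id),
        frame   INTEGER NOT NULL,
        rmsd_A  REAL,
        PRIMARY KEY (sim_id, frame));
""")
conn.execute("INSERT INTO dim_simulation VALUES (1, '1ENH', 298.0)")
conn.executemany("INSERT INTO fact_frame VALUES (1, ?, ?)",
                 [(i, 1.0 + 0.01 * i) for i in range(100)])
# Analysis happens in SQL, close to the data:
protein, mean_rmsd = conn.execute("""
    SELECT d.protein, AVG(f.rmsd_A)
    FROM fact_frame f JOIN dim_simulation d ON d.sim_id = f.sim_id
    GROUP BY d.protein""").fetchone()
```

    The payoff of the star layout is that aggregate queries like the one above scan only the narrow fact table and join to the small dimension table, which is what lets a warehouse scale to many millions of frames.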

  18. Protein Simulation Data in the Relational Model

    PubMed Central

    Simms, Andrew M.; Daggett, Valerie

    2011-01-01

    High performance computing is leading to unprecedented volumes of data. Relational databases offer a robust and scalable model for storing and analyzing scientific data. However, these features do not come without a cost—significant design effort is required to build a functional and efficient repository. Modeling protein simulation data in a relational database presents several challenges: the data captured from individual simulations are large, multi-dimensional, and must integrate with both simulation software and external data sites. Here we present the dimensional design and relational implementation of a comprehensive data warehouse for storing and analyzing molecular dynamics simulations using SQL Server. PMID:23204646

  19. Constant-pH Molecular Dynamics Simulations for Large Biomolecular Systems

    DOE PAGES

    Radak, Brian K.; Chipot, Christophe; Suh, Donghyuk; ...

    2017-11-07

We report that an increasingly important endeavor is to develop computational strategies that enable molecular dynamics (MD) simulations of biomolecular systems with spontaneous changes in protonation states under conditions of constant pH. The present work describes our efforts to implement the powerful constant-pH MD simulation method, based on a hybrid nonequilibrium MD/Monte Carlo (neMD/MC) technique within the highly scalable program NAMD. The constant-pH hybrid neMD/MC method has several appealing features: it samples the correct semigrand canonical ensemble rigorously, the computational cost increases linearly with the number of titratable sites, and it is applicable to explicit solvent simulations. The present implementation of the constant-pH hybrid neMD/MC in NAMD is designed to handle a wide range of biomolecular systems with no constraints on the choice of force field. Furthermore, the sampling efficiency can be adaptively improved on-the-fly by adjusting algorithmic parameters during the simulation. Finally, illustrative examples emphasizing medium- and large-scale applications on next-generation supercomputing architectures are provided.
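The acceptance step at the core of a hybrid neMD/MC scheme can be sketched as a Metropolis criterion on the work expended during the nonequilibrium switching trajectory. This is a minimal illustration of the sampling idea, not NAMD's actual implementation:

```python
import math
import random

def accept_switch(work, beta=1.0, rng=random.random):
    """Metropolis acceptance for a candidate protonation switch driven by
    a nonequilibrium MD trajectory (illustrative sketch): the move is
    accepted with probability min(1, exp(-beta * work)), where `work` is
    the work expended along the switching trajectory."""
    if work <= 0.0:
        return True                      # downhill switches always accepted
    return rng() < math.exp(-beta * work)
```

Because acceptance depends only on the per-site switching work, the cost of the scheme grows linearly with the number of titratable sites, as the abstract notes.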

  20. Constant-pH Molecular Dynamics Simulations for Large Biomolecular Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radak, Brian K.; Chipot, Christophe; Suh, Donghyuk

We report that an increasingly important endeavor is to develop computational strategies that enable molecular dynamics (MD) simulations of biomolecular systems with spontaneous changes in protonation states under conditions of constant pH. The present work describes our efforts to implement the powerful constant-pH MD simulation method, based on a hybrid nonequilibrium MD/Monte Carlo (neMD/MC) technique within the highly scalable program NAMD. The constant-pH hybrid neMD/MC method has several appealing features: it samples the correct semigrand canonical ensemble rigorously, the computational cost increases linearly with the number of titratable sites, and it is applicable to explicit solvent simulations. The present implementation of the constant-pH hybrid neMD/MC in NAMD is designed to handle a wide range of biomolecular systems with no constraints on the choice of force field. Furthermore, the sampling efficiency can be adaptively improved on-the-fly by adjusting algorithmic parameters during the simulation. Finally, illustrative examples emphasizing medium- and large-scale applications on next-generation supercomputing architectures are provided.

  1. A regularized vortex-particle mesh method for large eddy simulation

    NASA Astrophysics Data System (ADS)

    Spietz, H. J.; Walther, J. H.; Hejlesen, M. M.

    2017-11-01

We present recent developments of the remeshed vortex particle-mesh method for simulating incompressible fluid flow. The presented method relies on a parallel higher-order FFT-based solver for the Poisson equation. Arbitrarily high order is achieved through regularization of singular Green's function solutions to the Poisson equation, and recently we have derived novel high order solutions for a mixture of open and periodic domains. With this approach the simulated variables may formally be viewed as the approximate solution to the filtered Navier-Stokes equations, hence we use the method for Large Eddy Simulation by including a dynamic subfilter-scale model based on test-filters compatible with the aforementioned regularization functions. Further, the subfilter-scale model uses Lagrangian averaging, which is a natural candidate in light of the Lagrangian nature of vortex particle methods. A multiresolution variation of the method is applied to simulate the benchmark problem of the flow past a square cylinder at Re = 22000 and the obtained results are compared to results from the literature.
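The FFT-based Poisson solve underlying such methods can be illustrated in one dimension on a fully periodic grid. This is a minimal sketch; the paper's solver is parallel, higher-order, and handles mixed open/periodic domains:

```python
import numpy as np

def poisson_fft_1d(rhs, length=2.0 * np.pi):
    """Solve d2(phi)/dx2 = rhs on a periodic 1-D grid via FFT: in Fourier
    space the Laplacian becomes multiplication by -k**2, so
    phi_hat = -rhs_hat / k**2 (the k = 0 mode is set to zero,
    which imposes a zero-mean solution)."""
    n = rhs.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=length / n)  # angular wavenumbers
    rhs_hat = np.fft.fft(rhs)
    phi_hat = np.zeros_like(rhs_hat)
    nonzero = k != 0
    phi_hat[nonzero] = -rhs_hat[nonzero] / k[nonzero] ** 2
    return np.fft.ifft(phi_hat).real
```

For a single Fourier mode the result is spectrally exact, which is why FFT Poisson solvers are a natural building block for high-order particle-mesh methods.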

  2. A Method for Large Eddy Simulation of Acoustic Combustion Instabilities

    NASA Astrophysics Data System (ADS)

    Wall, Clifton; Moin, Parviz

    2003-11-01

A method for performing Large Eddy Simulation of acoustic combustion instabilities is presented. By extending the low Mach number pressure correction method to the case of compressible flow, a numerical method is developed in which the Poisson equation for pressure is replaced by a Helmholtz equation. The method avoids the acoustic CFL condition by using implicit time advancement, leading to large efficiency gains at low Mach number. The method also avoids artificial damping of acoustic waves. The numerical method is attractive for the simulation of acoustic combustion instabilities, since these flows are typically at low Mach number, and the acoustic frequencies of interest are usually low. Additionally, new boundary conditions based on the work of Poinsot and Lele have been developed to model the acoustic effect of a long channel upstream of the computational inlet, thus avoiding the need to include such a channel in the computational domain. The turbulent combustion model used is the Level Set model of Duchamp de Lageneste and Pitsch for premixed combustion. Comparison of LES results to the reacting experiments of Besson et al. will be presented.

  3. Pore-scale simulations of drainage in granular materials: Finite size effects and the representative elementary volume

    NASA Astrophysics Data System (ADS)

    Yuan, Chao; Chareyre, Bruno; Darve, Félix

    2016-09-01

A pore-scale model is introduced for two-phase flow in dense packings of polydisperse spheres. The model is developed as a component of a more general hydromechanical coupling framework based on the discrete element method, which will be elaborated in future papers and will apply to various processes of interest in soil science, in geomechanics and in oil and gas production. Here the emphasis is on the generation of a network of pores mapping the void space between spherical grains, and the definition of local criteria governing the primary drainage process. The pore space is decomposed by Regular Triangulation, from which a set of pores connected by throats are identified. A local entry capillary pressure is evaluated for each throat, based on the balance of capillary pressure and surface tension at equilibrium. The model reflects the possible entrapment of disconnected patches of the receding wetting phase. It is validated by a comparison with drainage experiments. In the last part of the paper, a series of simulations are reported to illustrate size and boundary effects, key questions when studying small samples made of spherical particles, be it in simulations or experiments. Repeated tests on samples of different sizes yield evolutions of water content that are not only scattered but also strongly biased for small sample sizes. More than 20,000 spheres are needed to reduce the bias on saturation below 0.02. Additional statistics are generated by subsampling a large sample of 64,000 spheres. They suggest that the minimal sampling volume for evaluating saturation is one hundred times greater than the sampling volume needed for measuring porosity with the same accuracy. This requirement in terms of sample size induces a need for efficient computer codes. The method described herein has a low algorithmic complexity in order to satisfy this requirement.
It will be well suited to further developments toward coupled flow-deformation problems in which evolution of the
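The local entry criterion for a throat can be illustrated with the Young-Laplace relation for an effective throat radius. The model's actual criterion balances capillary pressure and surface tension on the true pore geometry, so the formula and parameter values below (water-air surface tension, zero contact angle) are simplified assumptions:

```python
import math

def entry_capillary_pressure(r_throat, gamma=0.0728, theta=0.0):
    """Young-Laplace estimate of the entry capillary pressure (Pa) of a
    pore throat of effective radius r_throat (m):
        p_c = 2 * gamma * cos(theta) / r_throat
    gamma is the surface tension (N/m, default: water-air at ~20 C) and
    theta the contact angle (rad). The non-wetting phase invades a throat
    once the imposed capillary pressure exceeds p_c."""
    return 2.0 * gamma * math.cos(theta) / r_throat
```

During a simulated drainage step, throats are invaded in order of increasing entry pressure, which is how a network model reproduces the stepwise character of primary drainage.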

  4. Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Givi, P.; Madnia, C. K.; Steinberger, C. J.; Frankel, S. H.

    1992-01-01

    The basic objective of this research is to extend the capabilities of Large Eddy Simulations (LES) and Direct Numerical Simulations (DNS) for the computational analyses of high speed reacting flows. In the efforts related to LES, we were primarily involved with assessing the performance of the various modern methods based on the Probability Density Function (PDF) methods for providing closures for treating the subgrid fluctuation correlations of scalar quantities in reacting turbulent flows. In the work on DNS, we concentrated on understanding some of the relevant physics of compressible reacting flows by means of statistical analysis of the data generated by DNS of such flows. In the research conducted in the second year of this program, our efforts focused on the modeling of homogeneous compressible turbulent flows by PDF methods, and on DNS of non-equilibrium reacting high speed mixing layers. Some preliminary work is also in progress on PDF modeling of shear flows, and also on LES of such flows.

  5. Simulation of DNAPL migration in heterogeneous translucent porous media based on estimation of representative elementary volume

    NASA Astrophysics Data System (ADS)

    Wu, Ming; Wu, Jianfeng; Wu, Jichun

    2017-10-01

When the dense nonaqueous phase liquid (DNAPL) comes into the subsurface environment, its migration behavior is crucially affected by the permeability and entry pressure of subsurface porous media. A prerequisite for accurately simulating DNAPL migration in aquifers is therefore the determination of the permeability, entry pressure, and corresponding representative elementary volumes (REV) of the porous media. However, these quantities are difficult to determine directly. This study utilizes the light transmission micro-tomography (LTM) method to determine the permeability and entry pressure of two dimensional (2D) translucent porous media and integrates the LTM with a criterion of relative gradient error to quantify the corresponding REV of porous media. As a result, the DNAPL migration in porous media might be accurately simulated by discretizing the model at the REV dimension. To validate the quantification methods, an experiment of perchloroethylene (PCE) migration is conducted in a two-dimensional heterogeneous bench-scale aquifer cell. Based on the quantifications of permeability, entry pressure and REV scales of 2D porous media determined by the LTM and relative gradient error, different models with different sizes of discretization grid are used to simulate the PCE migration. It is shown that the model based on REV size agrees well with the experimental results over the entire migration period including calibration, verification and validation processes. This helps to better understand the microstructures of porous media and to accurately simulate DNAPL migration in aquifers based on REV estimation.
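The idea of locating an REV by growing a sampling window until the measured property stops changing can be sketched as follows. The threshold rule below is a simplified stand-in for the relative gradient error criterion used in the study:

```python
import numpy as np

def rev_size(field, tol=0.01):
    """Estimate an REV edge length (in grid cells) for a square 2-D
    property field: grow a centered square window and return the first
    size at which the windowed mean changes by less than `tol` (relative)
    from the previous window size. Simplified stand-in for a relative
    gradient error criterion."""
    n = field.shape[0]
    c = n // 2
    prev = None
    for half in range(1, c + 1):
        m = field[c - half:c + half, c - half:c + half].mean()
        if prev is not None and abs(m - prev) <= tol * abs(prev):
            return 2 * half              # window mean has converged
        prev = m
    return n                             # no REV smaller than the sample
```

Discretizing the flow model at (or above) this converged window size is what lets grid-block properties such as permeability be treated as well-defined local values.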

  6. Budesonide ameliorates lung injury induced by large volume ventilation.

    PubMed

    Ju, Ying-Nan; Yu, Kai-Jiang; Wang, Guo-Nian

    2016-06-04

Ventilation-induced lung injury (VILI) is a health problem for patients with acute respiratory distress syndrome. The aim of this study was to investigate the effectiveness of budesonide in treating VILI. Twenty-four rats were randomized to three groups: a ventilation group, a ventilation/budesonide group, and a sham group. Animals were ventilated with a 30 ml/kg tidal volume (or received anesthesia only) for 4 h, with saline or budesonide instilled into the airway immediately after ventilation. The PaO2/FiO2 and wet-to-dry weight ratios, protein concentration, neutrophil count, and neutrophil elastase levels in bronchoalveolar lavage fluid (BALF) and the levels of inflammation-related factors were examined. Histological evaluation of, and measurement of apoptosis in, the lung were conducted. Compared with that in the ventilation group, the PaO2/FiO2 ratio was significantly increased by treatment with budesonide. The lung wet-to-dry weight ratio, total protein, neutrophil elastase level, and neutrophil count in BALF were decreased in the budesonide group. The BALF and plasma tumor necrosis factor (TNF)-α, interleukin (IL)-1β, IL-6, intercellular adhesion molecule (ICAM)-1, and macrophage inflammatory protein (MIP)-2 levels were decreased, whereas the IL-10 level was increased in the budesonide group. Phosphorylated nuclear factor (NF)-κB levels in lung tissue were inhibited by budesonide. The histological changes in the lung and apoptosis were reduced by budesonide treatment. Bax, caspase-3, and cleaved caspase-3 were down-regulated, and Bcl-2 was up-regulated by budesonide. Budesonide ameliorated lung injury induced by large volume ventilation, likely by improving epithelial permeability, decreasing edema, inhibiting local and systemic inflammation, and reducing apoptosis in VILI.

  7. A scaling relationship for impact-induced melt volume

    NASA Astrophysics Data System (ADS)

    Nakajima, M.; Rubie, D. C.; Melosh, H., IV; Jacobson, S. A.; Golabek, G.; Nimmo, F.; Morbidelli, A.

    2016-12-01

During the late stages of planetary accretion, protoplanets experience a number of giant impacts and extensive mantle melting. The impactor's core sinks through the molten part of the target mantle (magma ocean) and experiences metal-silicate partitioning (e.g., Stevenson, 1990). For understanding the chemical evolution of the planetary mantle and core, we need to determine the impact-induced melt volume because the partitioning strongly depends on the ranges of the pressures and temperatures within the magma ocean. Previous studies have investigated the effects of small impacts (i.e. impact cratering) on melt volume, but those for giant impacts are not well understood yet. Here, we perform giant impact simulations to derive a scaling law for melt volume as a function of impact velocity, impact angle, and impactor-to-target mass ratio. We use two different numerical codes, namely a smoothed particle hydrodynamics (SPH) code we developed (a particle method) and the code iSALE (a grid-based method), to compare their outcomes. Our simulations show that these two codes generally agree as long as the same equation of state is used. We also find that some of the previous studies developed for small impacts (e.g., Abramov et al., 2012) overestimate giant impact melt volume by orders of magnitude, partly because these models do not consider self-gravity of the impacting bodies. Therefore, these models may not be extrapolated to large impacts. Our simulations also show that melt volume can be scaled by the total mass of the system. In this presentation, we further discuss geochemical implications for giant impacts on planets, including Earth and Mars.

  8. Assessing the Feasibility of Large-Scale Countercyclical Public Job-Creation. Final Report, Volume III. Selected Implications of Public Job-Creation.

    ERIC Educational Resources Information Center

    Urban Inst., Washington, DC.

    This last of a three-volume report of a study done to assess the feasibility of large-scale, countercyclical public job creation covers the findings regarding the priorities among projects, indirect employment effects, skill imbalances, and administrative issues; and summarizes the overall findings, conclusions, and recommendations. (Volume 1,…

  9. Colloids Versus Albumin in Large Volume Paracentesis to Prevent Circulatory Dysfunction: Evidence-based Case Report.

    PubMed

    Widjaja, Felix F; Khairan, Paramita; Kamelia, Telly; Hasan, Irsan

    2016-04-01

Large volume paracentesis may cause paracentesis-induced circulatory dysfunction (PICD). Albumin is recommended to prevent this abnormality, but it is expensive, so a cheaper alternative that prevents PICD would be valuable. This report aimed to compare albumin with colloids in preventing PICD. The search was done using PubMed, Scopus, ProQuest, and Academic Health Complete from EBSCO with the keywords "ascites", "albumin", "colloid", "dextran", "hydroxyethyl starch", "gelatin", and "paracentesis induced circulatory dysfunction". Articles were limited to randomized clinical trials and meta-analyses addressing the clinical question "In cirrhotic patients undergoing large volume paracentesis, are colloids as effective as albumin in preventing PICD?". We found one meta-analysis and four randomized clinical trials (RCTs). The meta-analysis showed that albumin was superior, with an odds ratio of 0.34 (0.23-0.51). Three RCTs showed the same result, and one RCT showed that albumin was not superior to colloids. We conclude that colloids cannot substitute for albumin in preventing PICD, but colloids still have a role in patients undergoing paracentesis of less than five liters.

  10. Communication interval selection in distributed heterogeneous simulation of large-scale dynamical systems

    NASA Astrophysics Data System (ADS)

    Lucas, Charles E.; Walters, Eric A.; Jatskevich, Juri; Wasynczuk, Oleg; Lamm, Peter T.

    2003-09-01

In this paper, a new technique useful for the numerical simulation of large-scale systems is presented. This approach enables the overall system simulation to be formed by the dynamic interconnection of the various interdependent simulations, each representing a specific component or subsystem such as control, electrical, mechanical, hydraulic, or thermal. Each simulation may be developed separately using possibly different commercial-off-the-shelf simulation programs, thereby allowing the most suitable language or tool to be used based on the design/analysis needs. These subsystems communicate the required interface variables at specific time intervals. A discussion concerning the selection of appropriate communication intervals is presented herein. For the purpose of demonstration, this technique is applied to a detailed simulation of a representative aircraft power system, such as that found on the Joint Strike Fighter (JSF). This system comprises ten component models, each developed using MATLAB/Simulink, EASY5, or ACSL. When the ten component simulations were distributed across just four personal computers (PCs), a greater than 15-fold improvement in simulation speed (compared to the single-computer implementation) was achieved.
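The core mechanism, independent subsystem integrators that exchange interface variables only at fixed communication intervals, can be sketched as follows. Function and variable names are illustrative, not those of the paper's MATLAB/Simulink, EASY5, or ACSL models:

```python
def cosimulate(f1, f2, x1, x2, dt, t_end, comm_interval):
    """Minimal co-simulation sketch: two subsystems with dynamics
    dx/dt = f(x, u) advance independently with internal step dt, and the
    interface variables u are exchanged (and otherwise held frozen) only
    every comm_interval seconds. A larger comm_interval means less
    communication overhead but a staler coupling signal."""
    t = 0.0
    u12, u21 = x2, x1                # interface values seen by each side
    while t < t_end - 1e-12:
        t_next = min(t + comm_interval, t_end)
        while t < t_next - 1e-12:    # each side integrates on its own
            x1 = x1 + dt * f1(x1, u12)
            x2 = x2 + dt * f2(x2, u21)
            t += dt
        u12, u21 = x2, x1            # exchange at the interval boundary
    return x1, x2
```

The communication-interval selection discussed in the paper is exactly the trade-off visible here: the coupling error grows with `comm_interval`, while the speed-up from distributing the subsystems grows as communication becomes less frequent.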

  11. Statistical analysis of large simulated yield datasets for studying climate effects

    USDA-ARS?s Scientific Manuscript database

    Ensembles of process-based crop models are now commonly used to simulate crop growth and development for climate scenarios of temperature and/or precipitation changes corresponding to different projections of atmospheric CO2 concentrations. This approach generates large datasets with thousands of de...

  12. Methods and computer executable instructions for rapidly calculating simulated particle transport through geometrically modeled treatment volumes having uniform volume elements for use in radiotherapy

    DOEpatents

    Frandsen, Michael W.; Wessol, Daniel E.; Wheeler, Floyd J.

    2001-01-16

Methods and computer executable instructions are disclosed for ultimately developing a dosimetry plan for a treatment volume targeted for irradiation during cancer therapy. The dosimetry plan is available in "real-time" which especially enhances clinical use for in vivo applications. The real-time is achieved because of the novel geometric model constructed for the planned treatment volume which, in turn, allows for rapid calculations to be performed for simulated movements of particles along particle tracks therethrough. The particles are exemplary representations of neutrons emanating from a neutron source during boron neutron capture therapy (BNCT). In a preferred embodiment, a medical image having a plurality of pixels of information representative of a treatment volume is obtained. The pixels are: (i) converted into a plurality of substantially uniform volume elements having substantially the same shape and volume of the pixels; and (ii) arranged into a geometric model of the treatment volume. An anatomical material associated with each uniform volume element is defined and stored. Thereafter, a movement of a particle along a particle track is defined through the geometric model along a primary direction of movement that begins in a starting element of the uniform volume elements and traverses to a next element of the uniform volume elements. The particle movement along the particle track is effectuated in integer based increments along the primary direction of movement until a position of intersection occurs that represents a condition where the anatomical material of the next element is substantially different from the anatomical material of the starting element. This position of intersection is then useful for indicating whether a neutron has been captured, scattered or exited from the geometric model. From this intersection, a distribution of radiation doses can be computed for use in the cancer therapy.
The foregoing represents an advance in computational times by multiple factors of
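The integer-increment traversal of uniform volume elements until a material boundary is reached can be sketched as follows (a simplified 2-D stand-in for the patented method):

```python
def track_to_boundary(grid, start, step):
    """Step a particle through a grid of uniform volume elements in
    integer increments along a primary direction `step`. Return the
    index of the first element whose material differs from that of the
    starting element (a capture/scatter decision point), or None if the
    particle exits the model."""
    i, j = start
    material = grid[i][j]                # anatomical material at the start
    while True:
        i += step[0]
        j += step[1]
        if not (0 <= i < len(grid) and 0 <= j < len(grid[0])):
            return None                  # particle exited the geometric model
        if grid[i][j] != material:
            return (i, j)                # material boundary reached
```

Because every element has the same shape and size, each step is a constant-time integer update with no geometric intersection tests, which is the source of the claimed speed-up.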

  13. Alginate Hydrogel Microencapsulation Inhibits Devitrification and Enables Large-Volume Low-CPA Cell Vitrification

    PubMed Central

    Huang, Haishui; Choi, Jung Kyu; Rao, Wei; Zhao, Shuting; Agarwal, Pranay; Zhao, Gang

    2015-01-01

    Cryopreservation of stem cells is important to meet their ever-increasing demand by the burgeoning cell-based medicine. The conventional slow freezing for stem cell cryopreservation suffers from inevitable cell injury associated with ice formation and the vitrification (i.e., no visible ice formation) approach is emerging as a new strategy for cell cryopreservation. A major challenge to cell vitrification is intracellular ice formation (IIF, a lethal event to cells) induced by devitrification (i.e., formation of visible ice in previously vitrified solution) during warming the vitrified cells at cryogenic temperature back to super-zero temperatures. Consequently, high and toxic concentrations of penetrating cryoprotectants (i.e., high CPAs, up to ~8 M) and/or limited sample volumes (up to ~2.5 μl) have been used to minimize IIF during vitrification. We reveal that alginate hydrogel microencapsulation can effectively inhibit devitrification during warming. Our data show that if ice formation were minimized during cooling, IIF is negligible in alginate hydrogel-microencapsulated cells during the entire cooling and warming procedure of vitrification. This enables vitrification of pluripotent and multipotent stem cells with up to ~4 times lower concentration of penetrating CPAs (up to 2 M, low CPA) in up to ~100 times larger sample volume (up to ~250 μl, large volume). PMID:26640426

  14. Alginate Hydrogel Microencapsulation Inhibits Devitrification and Enables Large-Volume Low-CPA Cell Vitrification.

    PubMed

    Huang, Haishui; Choi, Jung Kyu; Rao, Wei; Zhao, Shuting; Agarwal, Pranay; Zhao, Gang; He, Xiaoming

    2015-11-25

Cryopreservation of stem cells is important to meet their ever-increasing demand by the burgeoning cell-based medicine. The conventional slow freezing for stem cell cryopreservation suffers from inevitable cell injury associated with ice formation and the vitrification (i.e., no visible ice formation) approach is emerging as a new strategy for cell cryopreservation. A major challenge to cell vitrification is intracellular ice formation (IIF, a lethal event to cells) induced by devitrification (i.e., formation of visible ice in previously vitrified solution) during warming the vitrified cells at cryogenic temperature back to super-zero temperatures. Consequently, high and toxic concentrations of penetrating cryoprotectants (i.e., high CPAs, up to ~8 M) and/or limited sample volumes (up to ~2.5 μl) have been used to minimize IIF during vitrification. We reveal that alginate hydrogel microencapsulation can effectively inhibit devitrification during warming. Our data show that if ice formation were minimized during cooling, IIF is negligible in alginate hydrogel-microencapsulated cells during the entire cooling and warming procedure of vitrification. This enables vitrification of pluripotent and multipotent stem cells with up to ~4 times lower concentration of penetrating CPAs (up to 2 M, low CPA) in up to ~100 times larger sample volume (up to ~250 μl, large volume).

  15. Hybrid Reynolds-Averaged/Large-Eddy Simulations of a Coaxial Supersonic Free-Jet Experiment

    NASA Technical Reports Server (NTRS)

    Baurle, Robert A.; Edwards, Jack R.

    2010-01-01

    Reynolds-averaged and hybrid Reynolds-averaged/large-eddy simulations have been applied to a supersonic coaxial jet flow experiment. The experiment was designed to study compressible mixing flow phenomenon under conditions that are representative of those encountered in scramjet combustors. The experiment utilized either helium or argon as the inner jet nozzle fluid, and the outer jet nozzle fluid consisted of laboratory air. The inner and outer nozzles were designed and operated to produce nearly pressure-matched Mach 1.8 flow conditions at the jet exit. The purpose of the computational effort was to assess the state-of-the-art for each modeling approach, and to use the hybrid Reynolds-averaged/large-eddy simulations to gather insight into the deficiencies of the Reynolds-averaged closure models. The Reynolds-averaged simulations displayed a strong sensitivity to choice of turbulent Schmidt number. The initial value chosen for this parameter resulted in an over-prediction of the mixing layer spreading rate for the helium case, but the opposite trend was observed when argon was used as the injectant. A larger turbulent Schmidt number greatly improved the comparison of the results with measurements for the helium simulations, but variations in the Schmidt number did not improve the argon comparisons. The hybrid Reynolds-averaged/large-eddy simulations also over-predicted the mixing layer spreading rate for the helium case, while under-predicting the rate of mixing when argon was used as the injectant. The primary reason conjectured for the discrepancy between the hybrid simulation results and the measurements centered around issues related to the transition from a Reynolds-averaged state to one with resolved turbulent content. Improvements to the inflow conditions were suggested as a remedy to this dilemma. 
Second-order turbulence statistics were also compared to their modeled Reynolds-averaged counterparts to evaluate the effectiveness of common turbulence closure

  16. Automatic Selection of Order Parameters in the Analysis of Large Scale Molecular Dynamics Simulations.

    PubMed

    Sultan, Mohammad M; Kiss, Gert; Shukla, Diwakar; Pande, Vijay S

    2014-12-09

    Given the large number of crystal structures and NMR ensembles that have been solved to date, classical molecular dynamics (MD) simulations have become powerful tools in the atomistic study of the kinetics and thermodynamics of biomolecular systems on ever increasing time scales. By virtue of the high-dimensional conformational state space that is explored, the interpretation of large-scale simulations faces difficulties not unlike those in the big data community. We address this challenge by introducing a method called clustering based feature selection (CB-FS) that employs a posterior analysis approach. It combines supervised machine learning (SML) and feature selection with Markov state models to automatically identify the relevant degrees of freedom that separate conformational states. We highlight the utility of the method in the evaluation of large-scale simulations and show that it can be used for the rapid and automated identification of relevant order parameters involved in the functional transitions of two exemplary cell-signaling proteins central to human disease states.
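The posterior-analysis idea, scoring candidate order parameters by how well they separate previously identified conformational states, can be sketched with a toy separation score. The scoring rule below is an assumption for illustration, not the CB-FS algorithm itself:

```python
import numpy as np

def rank_order_parameters(X, labels):
    """Toy posterior feature ranking: given per-frame features X
    (n_frames x n_features) and cluster/state labels per frame, score
    each feature by the spread of its per-state means relative to its
    pooled standard deviation, and return feature indices from most to
    least state-separating. (Assumed scoring rule for illustration.)"""
    scores = []
    for f in range(X.shape[1]):
        means = [X[labels == s, f].mean() for s in np.unique(labels)]
        scores.append((max(means) - min(means)) / (X[:, f].std() + 1e-12))
    return np.argsort(scores)[::-1]      # best-separating feature first
```

A feature whose distribution barely shifts between states scores near zero and can be discarded, which is how such a posterior analysis prunes the high-dimensional conformational space down to a few interpretable order parameters.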

  17. Statistical Analysis of Large Simulated Yield Datasets for Studying Climate Effects

    NASA Technical Reports Server (NTRS)

Makowski, David; Asseng, Senthold; Ewert, Frank; Bassu, Simona; Durand, Jean-Louis; Martre, Pierre; Adam, Myriam; Aggarwal, Pramod K.; Angulo, Carlos; Baron, Chritian; ...

    2015-01-01

    Many studies have been carried out during the last decade to study the effect of climate change on crop yields and other key crop characteristics. In these studies, one or several crop models were used to simulate crop growth and development for different climate scenarios that correspond to different projections of atmospheric CO2 concentration, temperature, and rainfall changes (Semenov et al., 1996; Tubiello and Ewert, 2002; White et al., 2011). The Agricultural Model Intercomparison and Improvement Project (AgMIP; Rosenzweig et al., 2013) builds on these studies with the goal of using an ensemble of multiple crop models in order to assess effects of climate change scenarios for several crops in contrasting environments. These studies generate large datasets, including thousands of simulated crop yield data. They include series of yield values obtained by combining several crop models with different climate scenarios that are defined by several climatic variables (temperature, CO2, rainfall, etc.). Such datasets potentially provide useful information on the possible effects of different climate change scenarios on crop yields. However, it is sometimes difficult to analyze these datasets and to summarize them in a useful way due to their structural complexity; simulated yield data can differ among contrasting climate scenarios, sites, and crop models. Another issue is that it is not straightforward to extrapolate the results obtained for the scenarios to alternative climate change scenarios not initially included in the simulation protocols. Additional dynamic crop model simulations for new climate change scenarios are an option but this approach is costly, especially when a large number of crop models are used to generate the simulated data, as in AgMIP. 
Statistical models have been used to analyze responses of measured yield data to climate variables in past studies (Lobell et al., 2011), but the use of a statistical model to analyze yields simulated by complex

  18. Hybrid Parallelism for Volume Rendering on Large-, Multi-, and Many-Core Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Howison, Mark; Bethel, E. Wes; Childs, Hank

    2012-01-01

With the computing industry trending towards multi- and many-core processors, we study how a standard visualization algorithm, ray-casting volume rendering, can benefit from a hybrid parallelism approach. Hybrid parallelism provides the best of both worlds: using distributed-memory parallelism across a large number of nodes increases available FLOPs and memory, while exploiting shared-memory parallelism among the cores within each node ensures that each node performs its portion of the larger calculation as efficiently as possible. We demonstrate results from weak and strong scaling studies, at levels of concurrency ranging up to 216,000, and with datasets as large as 12.2 trillion cells. The greatest benefit from hybrid parallelism lies in the communication portion of the algorithm, the dominant cost at higher levels of concurrency. We show that reducing the number of participants with a hybrid approach significantly improves performance.

  19. Revealing the global map of protein folding space by large-scale simulations

    NASA Astrophysics Data System (ADS)

    Sinner, Claude; Lutz, Benjamin; Verma, Abhinav; Schug, Alexander

    2015-12-01

The full characterization of protein folding is a remarkable long-standing challenge both for experiment and simulation. Working towards a complete understanding of this process, one needs to cover the full diversity of existing folds and identify the general principles driving the process. Here, we want to understand and quantify the diversity in folding routes for a large and representative set of protein topologies covering the full range from all alpha helical topologies towards beta barrels, guided by the key question: Do the majority of the observed routes contribute to the folding process, or only a particular route? We identified a set of two-state folders among non-homologous proteins with a sequence length of 40-120 residues. For each of these proteins, we ran native-structure based simulations both with homogeneous and heterogeneous contact potentials. For each protein, we simulated dozens of folding transitions in continuous uninterrupted simulations and constructed a large database of kinetic parameters. We investigate folding routes by tracking the formation of tertiary structure interfaces and discuss whether a single specific route exists for a topology or if all routes are equiprobable. These results permit us to characterize the complete folding space for small proteins in terms of folding barrier ΔG‡, number of routes, and the route specificity RT.

  20. Why build a virtual brain? Large-scale neural simulations as jump start for cognitive computing

    NASA Astrophysics Data System (ADS)

    Colombo, Matteo

    2017-03-01

    Despite the impressive amount of financial resources recently invested in carrying out large-scale brain simulations, it is controversial what the pay-offs are of pursuing this project. One idea is that from designing, building, and running a large-scale neural simulation, scientists acquire knowledge about the computational performance of the simulating system, rather than about the neurobiological system represented in the simulation. It has been claimed that this knowledge may usher in a new era of neuromorphic, cognitive computing systems. This study elucidates this claim and argues that the main challenge this era is facing is not the lack of biological realism. The challenge lies in identifying general neurocomputational principles for the design of artificial systems, which could display the robust flexibility characteristic of biological intelligence.

  1. CALCIUM ABSORPTION IN MAN: BASED ON LARGE VOLUME LIQUID SCINTILLATION COUNTER STUDIES.

    PubMed

    LUTWAK, L; SHAPIRO, J R

    1964-05-29

    A technique has been developed for the in vivo measurement of absorption of calcium in man after oral administration of 1 to 5 microcuries of calcium-47 and continuous counting of the radiation in the subject's arm with a large volume liquid scintillation counter. The maximum value for the arm counting technique is proportional to the absorption of tracer as measured by direct stool analysis. The rate of uptake by the arm is lower in subjects with either the malabsorption syndrome or hypoparathyroidism. The administration of vitamin D increases both the absorption rate and the maximum amount of calcium absorbed.

  2. Simulating observations with HARMONI: the integral field spectrograph for the European Extremely Large Telescope

    NASA Astrophysics Data System (ADS)

    Zieleniewski, Simon; Thatte, Niranjan; Kendrew, Sarah; Houghton, Ryan; Tecza, Matthias; Clarke, Fraser; Fusco, Thierry; Swinbank, Mark

    2014-07-01

    With the next generation of extremely large telescopes commencing construction, there is an urgent need for detailed quantitative predictions of the scientific observations that these new telescopes will enable. Most of these new telescopes will have adaptive optics fully integrated with the telescope itself, allowing unprecedented spatial resolution combined with enormous sensitivity. However, the adaptive optics point spread function will be strongly wavelength dependent, requiring detailed simulations that accurately model these variations. We have developed a simulation pipeline for the HARMONI integral field spectrograph, a first light instrument for the European Extremely Large Telescope. The simulator takes high-resolution input data-cubes of astrophysical objects and processes them with accurate atmospheric, telescope and instrumental effects, to produce mock observed cubes for chosen observing parameters. The output cubes represent the result of a perfect data reduction process, enabling a detailed analysis and comparison between input and output, showcasing HARMONI's capabilities. The simulations utilise a detailed knowledge of the telescope's wavelength dependent adaptive optics point spread function. We discuss the simulation pipeline and present an early example of the pipeline functionality for simulating observations of high redshift galaxies.
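The core of such a pipeline, convolving each spectral slice of a data-cube with a wavelength-dependent PSF, can be sketched briefly. This is not the HARMONI simulator: the Gaussian PSF and the width-versus-wavelength law below are illustrative assumptions standing in for the instrument's real AO PSF model.

```python
# Sketch: wavelength-dependent PSF applied slice-by-slice to a data-cube.
# Gaussian PSF and the lambda**(-0.2) width law are illustrative only.
import numpy as np

def gaussian_kernel(sigma: float, size: int = 9) -> np.ndarray:
    x = np.arange(size) - size // 2
    g = np.exp(-x**2 / (2.0 * sigma**2))
    return g / g.sum()                       # normalized: conserves flux

def blur_slice(img: np.ndarray, sigma: float) -> np.ndarray:
    """Separable 2-D Gaussian convolution of one spectral slice."""
    k = gaussian_kernel(sigma)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

def apply_psf(cube: np.ndarray, wavelengths_um: np.ndarray) -> np.ndarray:
    """Blur each wavelength slice with a PSF whose width varies with lambda."""
    out = np.empty_like(cube)
    for i, lam in enumerate(wavelengths_um):
        sigma_px = 1.5 * lam ** (-0.2)       # hypothetical seeing scaling
        out[i] = blur_slice(cube[i], sigma_px)
    return out

cube = np.zeros((3, 32, 32)); cube[:, 16, 16] = 1.0   # point source
blurred = apply_psf(cube, np.array([0.8, 1.2, 2.2]))  # microns
print(blurred.shape, round(float(blurred.sum()), 6))  # flux is conserved
```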

  3. Evaluation of a laser scanner for large volume coordinate metrology: a comparison of results before and after factory calibration

    NASA Astrophysics Data System (ADS)

    Ferrucci, M.; Muralikrishnan, B.; Sawyer, D.; Phillips, S.; Petrov, P.; Yakovlev, Y.; Astrelin, A.; Milligan, S.; Palmateer, J.

    2014-10-01

    Large volume laser scanners are increasingly being used for a variety of dimensional metrology applications. Methods to evaluate the performance of these scanners are still under development and there are currently no documentary standards available. This paper describes the results of extensive ranging and volumetric performance tests conducted on a large volume laser scanner. The results demonstrated small but clear systematic errors that are explained in the context of a geometric error model for the instrument. The instrument was subsequently returned to the manufacturer for factory calibration. The ranging and volumetric tests were performed again and the results are compared against those obtained prior to the factory calibration.
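A before/after comparison of the kind reported above is typically summarized by the RMS of ranging residuals. A minimal sketch with invented numbers (the paper's actual residuals are not reproduced here):

```python
# Sketch: RMS ranging error before and after factory calibration.
# The residual values below are hypothetical, for illustration only.
import math

def rms(residuals) -> float:
    """Root-mean-square of measured-minus-reference distances."""
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))

before = [0.8, -1.2, 1.5, -0.9, 1.1]   # mm, systematic pattern (invented)
after = [0.2, -0.1, 0.3, -0.2, 0.1]    # mm, post-calibration (invented)
print(round(rms(before), 2), round(rms(after), 2))  # → 1.13 0.19
```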

  4. Perturbative two- and three-loop coefficients from large b Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    G.P. Lepage; P.B. Mackenzie; N.H. Shakespeare

    1999-10-18

    Perturbative coefficients for Wilson loops and the static quark self-energy are extracted from Monte Carlo simulations at large β on finite volumes, where all the lattice momenta are large. The Monte Carlo results are in excellent agreement with perturbation theory through second order. New results for third order coefficients are reported. Twisted boundary conditions are used to eliminate zero modes and to suppress Z₃ tunneling.
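The extraction idea can be sketched on synthetic data: a quantity measured at several large β values is fit to a truncated series in 1/β, and the fitted coefficients play the role of the loop coefficients. The numbers below are invented, not lattice measurements.

```python
# Sketch: recover series coefficients c1/beta + c2/beta^2 + c3/beta^3
# from noisy "measurements" at large beta. All values are synthetic.
import numpy as np

rng = np.random.default_rng(0)
c_true = np.array([2.0, -1.3, 0.45])        # made-up 1-, 2-, 3-loop coeffs
betas = np.linspace(9.0, 60.0, 12)          # "large beta" simulation points
x = 1.0 / betas
data = c_true[0]*x + c_true[1]*x**2 + c_true[2]*x**3
data += rng.normal(0.0, 1e-9, size=data.size)  # tiny Monte Carlo noise

# Least-squares fit in powers of 1/beta (no constant term).
A = np.vstack([x, x**2, x**3]).T
c_fit, *_ = np.linalg.lstsq(A, data, rcond=None)
print(np.round(c_fit, 3))                   # recovers c_true approximately
```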

  5. Using the Large Fire Simulator System to map wildland fire potential for the conterminous United States

    Treesearch

    LaWen Hollingsworth; James Menakis

    2010-01-01

    This project mapped wildland fire potential (WFP) for the conterminous United States by using the large fire simulation system developed for the Fire Program Analysis (FPA) System. The large fire simulation system, referred to here as LFSim, consists of modules for weather generation, fire occurrence, fire suppression, and fire growth modeling. Weather was generated with...

  6. A study on large-scale nudging effects in regional climate model simulation

    NASA Astrophysics Data System (ADS)

    Yhang, Yoo-Bin; Hong, Song-You

    2011-05-01

    The large-scale nudging effects on the East Asian summer monsoon (EASM) are examined using the National Centers for Environmental Prediction (NCEP) Regional Spectral Model (RSM). The NCEP/DOE reanalysis data are used to provide large-scale forcings for RSM simulations, configured with an approximately 50-km grid over East Asia, centered on the Korean peninsula. The RSM with a variant of spectral nudging, that is, the scale selective bias correction (SSBC), is forced by perfect boundary conditions during the summers (June-July-August) from 1979 to 2004. The two summers of 2000 and 2004 are investigated to demonstrate the impact of SSBC on precipitation in detail. It is found that the effect of SSBC on the simulated seasonal precipitation is in general neutral, without a discernible advantage. Although errors in large-scale circulation for both 2000 and 2004 are reduced by using the SSBC method, the impact on simulated precipitation is found to be negative in the summer of 2000 and positive in the summer of 2004. One possible reason for the differing effects is that precipitation in the summer of 2004 is characterized by strong baroclinicity, while precipitation in 2000 is caused by thermodynamic instability. The reduction of convective rainfall over the oceans by the application of the SSBC method seems to play an important role in the modeled atmosphere.
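The essence of spectral nudging, the idea behind SSBC, is that only Fourier modes below a cutoff wavenumber are relaxed toward the driving reanalysis while smaller scales evolve freely. A 1-D sketch (cutoff and relaxation coefficient are illustrative choices, not SSBC's actual settings):

```python
# 1-D sketch of spectral (large-scale) nudging: relax only low-wavenumber
# modes toward the driver field. Parameters here are illustrative.
import numpy as np

def nudge_large_scales(model: np.ndarray, driver: np.ndarray,
                       k_cut: int, alpha: float) -> np.ndarray:
    fm, fd = np.fft.rfft(model), np.fft.rfft(driver)
    k = np.arange(fm.size)
    fm[k <= k_cut] += alpha * (fd[k <= k_cut] - fm[k <= k_cut])
    return np.fft.irfft(fm, n=model.size)

n = 128
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
driver = np.sin(x)                           # "reanalysis" large scale
model = 0.5 * np.sin(x) + 0.3 * np.sin(20 * x)  # drifted large scale + small scale
nudged = nudge_large_scales(model, driver, k_cut=5, alpha=1.0)
# The k=1 mode is pulled to the driver; the k=20 small scale is untouched.
```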

  7. Large-Eddy Simulation. Guidelines for Its Application to Planetary Boundary Layer Research

    DTIC Science & Technology

    1984-08-01

    An early engineering application of LES was Deardorff's simulation of turbulent channel flow, which was carried out at the National Center for Atmospheric... over the past 20 years, and yet in the perception of some observers the applications of the resulting basic science to practical problems remain... Large Eddy Simulation: Guidelines for its Application to Planetary Boundary Layer Research. Final Report, Oct 83-Aug 84.

  8. Studying marine stratus with large eddy simulation

    NASA Technical Reports Server (NTRS)

    Moeng, Chin-Hoh

    1990-01-01

    Data sets from field experiments over the stratocumulus regime may include complications from larger-scale variations, decoupled cloud layers, the diurnal cycle, or entrainment instability. On top of the already complicated turbulence-radiation-condensation processes within the cloud-topped boundary layer (CTBL), these complexities may sometimes make interpretation of the data sets difficult. To study these processes, a better understanding is needed of the basic processes involved in the prototype CTBL. For example, is cloud top radiative cooling the primary source of the turbulent kinetic energy (TKE) within the CTBL? Historically, laboratory measurements have played an important role in addressing turbulence problems, but the CTBL is a turbulent field that is probably impossible to generate in laboratories. Large eddy simulation (LES) is an alternative way of 'measuring' the turbulent structure under controlled environments, which allows the systematic examination of the basic physical processes involved. However, there are problems with the LES approach for the CTBL: the LES data need to be consistent with the observed data. The LES approach is discussed, and results are given which provide some insights into the simulated turbulent flow field. Problems with this approach for the CTBL and the information from the FIRE experiment needed to justify the LES results are discussed.

  9. Large eddy simulation of trailing edge noise

    NASA Astrophysics Data System (ADS)

    Keller, Jacob; Nitzkorski, Zane; Mahesh, Krishnan

    2015-11-01

    Noise generation is an important engineering constraint for many marine vehicles. A significant portion of the noise comes from propellers and rotors, specifically due to flow interactions at the trailing edge. Large eddy simulation is used to investigate the noise produced by a turbulent 45 degree beveled trailing edge and a NACA 0012 airfoil. A porous-surface Ffowcs Williams and Hawkings acoustic analogy is combined with a dynamic endcapping method to compute the sound. This methodology allows the impact of incident flow noise versus the total noise to be assessed. LES results for the 45 degree beveled trailing edge are compared to experiment at M = 0.1 and Re_c = 1.9 × 10^6. The effect of boundary layer thickness on sound production is investigated by computing with both the experimental boundary layer thickness and a thinner boundary layer. Direct numerical simulation results for the NACA 0012 are compared to available data at M = 0.4 and Re_c = 5.0 × 10^4 for both the hydrodynamic and acoustic fields. Sound intensities and directivities are investigated and compared. Finally, some of the physical mechanisms of far-field noise generation, common to the two configurations, are discussed. Supported by the Office of Naval Research.

  10. Large eddy simulation on buoyant gas diffusion near building

    NASA Astrophysics Data System (ADS)

    Tominaga, Yoshihide; Murakami, Shuzo; Mochida, Akashi

    1992-12-01

    Large eddy simulations of the turbulent diffusion of buoyant gases near a building model are carried out for three cases in which the densimetric Froude number (Frd) was specified at -8.6, zero and 8.6, respectively. The accuracy of these simulations is examined by comparing the numerically predicted results with wind tunnel experiments. Two types of sub-grid scale models, the standard Smagorinsky model (type 1) and the modified Smagorinsky model (type 2), are compared. The former does not account for the production of subgrid energy by the buoyancy force, while the latter incorporates this effect. The modified model (type 2) gives more accurate results than the standard Smagorinsky model (type 1) in terms of the distributions of the mean and fluctuating concentration fields (⟨C⟩ and ⟨C′²⟩).
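The difference between the two SGS models can be sketched compactly. The abstract does not give the authors' exact formulation; the buoyancy-modified variant below follows the common Lilly/Deardorff form and is labeled as such, with illustrative parameter values.

```python
# Sketch: standard Smagorinsky eddy viscosity vs. a buoyancy-modified
# variant (Lilly/Deardorff form, used here as an illustrative stand-in).
import math

def nu_t_standard(cs: float, delta: float, strain: float) -> float:
    """Standard Smagorinsky: nu_t = (Cs*Delta)^2 * |S|."""
    return (cs * delta) ** 2 * strain

def nu_t_buoyant(cs, delta, strain, brunt_vaisala_sq, pr_t=0.33):
    """Buoyancy-modified: |S|^2 reduced by N^2/Pr_t, clipped at zero."""
    s2 = max(strain**2 - brunt_vaisala_sq / pr_t, 0.0)
    return (cs * delta) ** 2 * math.sqrt(s2)

# Stable stratification (N^2 > 0) damps the SGS viscosity; unstable
# stratification (N^2 < 0, buoyant gas rising) enhances it.
print(round(nu_t_standard(0.17, 1.0, 2.0), 4))    # → 0.0578
print(nu_t_buoyant(0.17, 1.0, 2.0, 0.9))          # damped
print(nu_t_buoyant(0.17, 1.0, 2.0, -0.9))         # enhanced
```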

  11. Large-Eddy Simulations of Dust Devils and Convective Vortices

    NASA Astrophysics Data System (ADS)

    Spiga, Aymeric; Barth, Erika; Gu, Zhaolin; Hoffmann, Fabian; Ito, Junshi; Jemmett-Smith, Bradley; Klose, Martina; Nishizawa, Seiya; Raasch, Siegfried; Rafkin, Scot; Takemi, Tetsuya; Tyler, Daniel; Wei, Wei

    2016-11-01

    In this review, we address the use of numerical computations called Large-Eddy Simulations (LES) to study dust devils, and the more general class of atmospheric phenomena they belong to (convective vortices). We describe the main elements of the LES methodology. We review the properties, statistics, and variability of dust devils and convective vortices resolved by LES in both terrestrial and Martian environments. The current challenges faced by modelers using LES for dust devils are also discussed in detail.

  12. Simulations of an Offshore Wind Farm Using Large-Eddy Simulation and a Torque-Controlled Actuator Disc Model

    NASA Astrophysics Data System (ADS)

    Creech, Angus; Früh, Wolf-Gerrit; Maguire, A. Eoghan

    2015-05-01

    We present here a computational fluid dynamics (CFD) simulation of Lillgrund offshore wind farm, which is located in the Øresund Strait between Sweden and Denmark. The simulation combines a dynamic representation of wind turbines embedded within a large-eddy simulation CFD solver and uses hr-adaptive meshing to increase or decrease mesh resolution where required. This allows the resolution of both large-scale flow structures around the wind farm, and the local flow conditions at individual turbines; consequently, the response of each turbine to local conditions can be modelled, as well as the resulting evolution of the turbine wakes. This paper provides a detailed description of the turbine model which simulates the interaction between the wind, the turbine rotors, and the turbine generators by calculating the forces on the rotor, the body forces on the air, and instantaneous power output. This model was used to investigate a selection of key wind speeds and directions, investigating cases where a row of turbines would be fully aligned with the wind or at specific angles to the wind. Results shown here include presentations of the spin-up of turbines, the observation of eddies moving through the turbine array, meandering turbine wakes, and an extensive wind farm wake several kilometres in length. The key measurement available for cross-validation with operational wind farm data is the power output from the individual turbines, where the effect of unsteady turbine wakes on the performance of downstream turbines was a main point of interest. The results from the simulations were compared to the performance measurements from the real wind farm to provide a firm quantitative validation of this methodology. Having achieved good agreement between the model results and actual wind farm measurements, the potential of the methodology to provide a tool for further investigations of engineering and atmospheric science problems is outlined.
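The actuator-disc coupling described above can be reduced to its simplest form: the rotor exerts a thrust on the flow and extracts power, both set by the local wind speed. This sketch omits the paper's torque/generator dynamics; coefficient values and the 93 m rotor diameter (the Siemens 2.3 MW class installed at Lillgrund) are illustrative assumptions.

```python
# Minimal actuator-disc sketch: thrust and power from the local wind speed.
# Ct, Cp, and the rotor diameter are illustrative, not the paper's values.
import math

AIR_DENSITY = 1.225  # kg/m^3

def disc_area(rotor_diameter_m: float) -> float:
    return math.pi * (rotor_diameter_m / 2.0) ** 2

def thrust_force(u_local: float, rotor_diameter_m: float,
                 c_t: float = 0.8) -> float:
    """Axial force (N) on the flow: F = 0.5 * rho * A * Ct * U^2."""
    return 0.5 * AIR_DENSITY * disc_area(rotor_diameter_m) * c_t * u_local**2

def power_output(u_local: float, rotor_diameter_m: float,
                 c_p: float = 0.45) -> float:
    """Instantaneous power (W): P = 0.5 * rho * A * Cp * U^3."""
    return 0.5 * AIR_DENSITY * disc_area(rotor_diameter_m) * c_p * u_local**3

# A 93 m rotor in 9 m/s flow:
print(round(power_output(9.0, 93.0) / 1e6, 2), "MW")  # → 1.36 MW
```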

  13. Dust Emissions, Transport, and Deposition Simulated with the NASA Finite-Volume General Circulation Model

    NASA Technical Reports Server (NTRS)

    Colarco, Peter; daSilva, Arlindo; Ginoux, Paul; Chin, Mian; Lin, S.-J.

    2003-01-01

    Mineral dust aerosols have radiative impacts on Earth's atmosphere, have been implicated in local and regional air quality issues, and have been identified as vectors for transporting disease pathogens and bringing mineral nutrients to terrestrial and oceanic ecosystems. We present for the first time dust simulations using online transport and meteorological analysis in the NASA Finite-Volume General Circulation Model (FVGCM). Our dust formulation follows the formulation in the offline Georgia Institute of Technology-Goddard Global Ozone Chemistry Aerosol Radiation and Transport Model (GOCART) using a topographical source for dust emissions. We compare results of the FVGCM simulations with GOCART, as well as with in situ and remotely sensed observations. Additionally, we estimate budgets of dust emission and transport into various regions.
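The GOCART-style topographic source mentioned above ties the emission flux for each size bin to a source function and a threshold 10 m wind speed. A hedged sketch (the constants below are illustrative, not GOCART's calibrated values):

```python
# Sketch of a GOCART-form dust emission flux:
#   F = C * S * s * u10^2 * (u10 - u_t)   for u10 > u_t, else 0,
# where S is the topographic source function and s the size-bin fraction.
# C and u_t below are illustrative placeholders.
def dust_emission_flux(u10: float, source_S: float, s_frac: float,
                       u_threshold: float = 6.5, C: float = 1.0e-9) -> float:
    """Emission flux for one size bin; zero below the threshold wind."""
    if u10 <= u_threshold:
        return 0.0
    return C * source_S * s_frac * u10**2 * (u10 - u_threshold)

print(dust_emission_flux(5.0, 0.8, 0.1))          # below threshold → 0.0
print(dust_emission_flux(10.0, 0.8, 0.1) > 0.0)   # emitting
```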

  14. Large-Eddy Simulation of Aeroacoustic Applications

    NASA Technical Reports Server (NTRS)

    Pruett, C. David; Sochacki, James S.

    1999-01-01

    This report summarizes work accomplished under a one-year NASA grant from NASA Langley Research Center (LaRC). The effort culminates three years of NASA-supported research under three consecutive one-year grants. The period of support was April 6, 1998, through April 5, 1999. By request, the grant period was extended at no cost until October 6, 1999. This grant and its predecessors were directed toward adapting the numerical tool of large-eddy simulation (LES) to aeroacoustic applications, with particular focus on noise suppression in subsonic round jets. In LES, the filtered Navier-Stokes equations are solved numerically on a relatively coarse computational grid. Residual stresses, generated by scales of motion too small to be resolved on the coarse grid, are modeled. Although most LES incorporate spatial filtering, time-domain filtering affords certain conceptual and computational advantages, particularly for aeroacoustic applications. Consequently, this work has focused on the development of subgrid-scale (SGS) models that incorporate time-domain filters.
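The time-domain filtering concept can be illustrated with the simplest causal filter, an exponential (one-pole) filter with filter width Δ. This is a conceptual sketch only; it does not reproduce the report's actual SGS models.

```python
# Sketch: causal exponential time filter, d(phi_bar)/dt = (phi - phi_bar)/Delta.
# Fluctuations much faster than Delta are attenuated; slow content passes.
import math

def exponential_time_filter(signal, dt: float, delta: float):
    """Discrete causal filter: phi_bar += (dt/delta) * (phi - phi_bar)."""
    out, phi_bar = [], signal[0]
    for phi in signal:
        phi_bar += (dt / delta) * (phi - phi_bar)
        out.append(phi_bar)
    return out

dt, delta = 0.01, 0.5
t = [i * dt for i in range(2000)]
fast = [math.sin(200.0 * ti) for ti in t]   # "unresolved"-scale content
slow = [math.sin(1.0 * ti) for ti in t]     # "resolved"-scale content
filtered = exponential_time_filter([f + s for f, s in zip(fast, slow)], dt, delta)
# The fast component is strongly attenuated; the slow one survives.
```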

  15. GMP Cryopreservation of Large Volumes of Cells for Regenerative Medicine: Active Control of the Freezing Process

    PubMed Central

    Massie, Isobel; Selden, Clare; Hodgson, Humphrey; Gibbons, Stephanie; Morris, G. John

    2014-01-01

    Cryopreservation protocols are increasingly required in regenerative medicine applications but must deliver functional products at clinical scale and comply with Good Manufacturing Practice (GMP). While GMP cryopreservation is achievable on a small scale using a Stirling cryocooler-based controlled rate freezer (CRF) (EF600), successful large-scale GMP cryopreservation is more challenging due to heat transfer issues and control of ice nucleation, both complex events that impact success. We have developed a large-scale cryocooler-based CRF (VIA Freeze) that can process larger volumes and have evaluated it using alginate-encapsulated liver cell (HepG2) spheroids (ELS). It is anticipated that ELS will comprise the cellular component of a bioartificial liver and will be required in volumes of ∼2 L for clinical use. Sample temperatures and Stirling cryocooler power consumption were recorded throughout cooling runs for both small (500 μL) and large (200 mL) volume samples. ELS recoveries were assessed using viability (FDA/PI staining with image analysis), cell number (nuclei count), and function (protein secretion), along with cryoscanning electron microscopy and freeze substitution techniques to identify possible injury mechanisms. Slow cooling profiles were successfully applied to samples in both the EF600 and the VIA Freeze, and a number of cooling and warming profiles were evaluated. An optimized cooling protocol with a nonlinear cooling profile from ice nucleation to −60°C was implemented in both the EF600 and VIA Freeze. In the VIA Freeze the nucleation of ice is detected by the control software, allowing both noninvasive detection of the nucleation event for quality control purposes and the potential to modify the cooling profile following ice nucleation in an active manner. When processing 200 mL of ELS in the VIA Freeze—viabilities at 93.4%±7.4%, viable cell numbers at 14.3±1.7 million nuclei/mL alginate, and protein secretion at 10.5±1.7
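A nonlinear cooling profile of the kind described, gentle just after ice nucleation and steepening toward −60°C, is naturally expressed as interpolation between setpoint breakpoints. The breakpoints below are invented for illustration; the actual VIA Freeze profile is not given in the abstract.

```python
# Sketch (invented breakpoints): piecewise-linear cooling setpoint from
# ice nucleation down to -60 C, with progressively faster cooling rates.
def cooling_setpoint(t_min: float, profile) -> float:
    """Interpolate the setpoint through (minutes, deg C) breakpoints."""
    if t_min <= profile[0][0]:
        return profile[0][1]
    for (t0, temp0), (t1, temp1) in zip(profile, profile[1:]):
        if t_min <= t1:
            return temp0 + (temp1 - temp0) * (t_min - t0) / (t1 - t0)
    return profile[-1][1]   # hold the final temperature

# Nucleation detected near -8 C, then faster and faster cooling to -60 C.
profile = [(0, -8.0), (40, -20.0), (60, -40.0), (70, -60.0)]
print(cooling_setpoint(50, profile))   # → -30.0
```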

  16. Impact of a large density gradient on linear and nonlinear edge-localized mode simulations

    DOE PAGES

    Xi, P. W.; Xu, X. Q.; Xia, T. Y.; ...

    2013-09-27

    Here, the impact of a large density gradient on edge-localized modes (ELMs) is studied linearly and nonlinearly by employing both two-fluid and gyro-fluid simulations. In two-fluid simulations, the ion diamagnetic stabilization on high-n modes disappears when the large density gradient is taken into account. But gyro-fluid simulations show that the finite Larmor radius (FLR) effect can effectively stabilize high-n modes, so the ion diamagnetic effect alone is not sufficient to represent the FLR stabilizing effect. We further demonstrate that additional gyroviscous terms must be kept in the two-fluid model to recover the linear results from the gyro-fluid model. Nonlinear simulations show that the density variation significantly weakens the E × B shearing at the top of the pedestal and thus leads to more energy loss during ELMs. The turbulence spectrum after an ELM crash is measured and follows $P(k_{z}) \propto k_{z}^{-3.3}$.

  17. 21 CFR 201.323 - Aluminum in large and small volume parenterals used in total parenteral nutrition.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... used in total parenteral nutrition. 201.323 Section 201.323 Food and Drugs FOOD AND DRUG ADMINISTRATION... parenteral nutrition. (a) The aluminum content of large volume parenteral (LVP) drug products used in total parenteral nutrition (TPN) therapy must not exceed 25 micrograms per liter (µg/L). (b) The package insert of...
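The numeric limit in paragraph (a) of this regulation is easy to encode as a trivial compliance check (the function name is ours, not from the CFR):

```python
# Sketch of the 21 CFR 201.323(a) limit quoted above: aluminum in large
# volume parenteral (LVP) products used in TPN must not exceed 25 ug/L.
LVP_ALUMINUM_LIMIT_UG_PER_L = 25.0

def lvp_aluminum_compliant(measured_ug_per_l: float) -> bool:
    return measured_ug_per_l <= LVP_ALUMINUM_LIMIT_UG_PER_L

print(lvp_aluminum_compliant(18.2))   # True
print(lvp_aluminum_compliant(31.0))   # False
```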

  18. 21 CFR 201.323 - Aluminum in large and small volume parenterals used in total parenteral nutrition.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... used in total parenteral nutrition. 201.323 Section 201.323 Food and Drugs FOOD AND DRUG ADMINISTRATION... parenteral nutrition. (a) The aluminum content of large volume parenteral (LVP) drug products used in total parenteral nutrition (TPN) therapy must not exceed 25 micrograms per liter (µg/L). (b) The package insert of...

  19. 21 CFR 201.323 - Aluminum in large and small volume parenterals used in total parenteral nutrition.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... used in total parenteral nutrition. 201.323 Section 201.323 Food and Drugs FOOD AND DRUG ADMINISTRATION... parenteral nutrition. (a) The aluminum content of large volume parenteral (LVP) drug products used in total parenteral nutrition (TPN) therapy must not exceed 25 micrograms per liter (µg/L). (b) The package insert of...

  20. 21 CFR 201.323 - Aluminum in large and small volume parenterals used in total parenteral nutrition.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... used in total parenteral nutrition. 201.323 Section 201.323 Food and Drugs FOOD AND DRUG ADMINISTRATION... parenteral nutrition. (a) The aluminum content of large volume parenteral (LVP) drug products used in total parenteral nutrition (TPN) therapy must not exceed 25 micrograms per liter (µg/L). (b) The package insert of...

  1. 21 CFR 201.323 - Aluminum in large and small volume parenterals used in total parenteral nutrition.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... used in total parenteral nutrition. 201.323 Section 201.323 Food and Drugs FOOD AND DRUG ADMINISTRATION... parenteral nutrition. (a) The aluminum content of large volume parenteral (LVP) drug products used in total parenteral nutrition (TPN) therapy must not exceed 25 micrograms per liter (µg/L). (b) The package insert of...

  2. Nonthermal ablation of deep brain targets: A simulation study on a large animal model

    PubMed Central

    Top, Can Barış; White, P. Jason; McDannold, Nathan J.

    2016-01-01

    Purpose: Thermal ablation with transcranial MRI-guided focused ultrasound (FUS) is currently limited to central brain targets because of heating and other beam effects caused by the presence of the skull. Recently, it was shown that it is possible to ablate tissues without depositing thermal energy by driving intravenously administered microbubbles to inertial cavitation using low-duty-cycle burst sonications. A recent study demonstrated that this ablation method could ablate tissue volumes near the skull base in nonhuman primates without thermally damaging the nearby bone. However, blood–brain barrier disruption was observed in the prefocal region, and in some cases, this region contained small areas of tissue damage. The objective of this study was to analyze the experimental model with simulations and to interpret the cause of these effects. Methods: The authors simulated prior experiments where nonthermal ablation was performed in the brain in anesthetized rhesus macaques using a 220 kHz clinical prototype transcranial MRI-guided FUS system. Low-duty-cycle sonications were applied at deep brain targets with the ultrasound contrast agent Definity. For simulations, a 3D pseudospectral finite difference time domain tool was used. The effects of shear mode conversion, focal steering, skull aberrations, nonlinear propagation, and the presence of the skull base on the pressure field were investigated using acoustic and elastic wave propagation models. Results: The simulation results were in agreement with the experimental findings in the prefocal region. In the postfocal region, however, side lobes were predicted by the simulations, but no effects were evident in the experiments. The main beam was not affected by the different simulated scenarios except for a shift of about 1 mm in peak position due to skull aberrations. However, the authors observed differences in the volume, amplitude, and distribution of the side lobes. In the experiments, a single element passive
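The wave-propagation modeling at the heart of such studies can be illustrated in one dimension. This is a drastically simplified analogue: the paper uses a 3-D pseudospectral FDTD model with skull geometry, whereas the sketch below is only the standard staggered leapfrog update for pressure and velocity in a homogeneous water-like medium, with an invented source.

```python
# 1-D acoustic finite-difference sketch (staggered leapfrog), homogeneous
# medium. Parameters and the edge-injected tone burst are illustrative.
import math

c, rho, dx = 1500.0, 1000.0, 1e-3    # sound speed, density, grid spacing
dt = 0.5 * dx / c                    # CFL-stable time step (Courant = 0.5)
n = 400
p = [0.0] * n                        # pressure at cell centers
v = [0.0] * (n + 1)                  # velocity at cell faces (v[0]=0: rigid wall)

def step(src: float) -> None:
    p[0] += src                                  # crude source injection
    for i in range(1, n):                        # dv/dt = -(1/rho) dp/dx
        v[i] -= dt / (rho * dx) * (p[i] - p[i - 1])
    for i in range(n):                           # dp/dt = -rho c^2 dv/dx
        p[i] -= dt * rho * c**2 / dx * (v[i + 1] - v[i])

for k in range(300):                 # 60-step tone burst at 220 kHz, then free run
    step(math.sin(2 * math.pi * 220e3 * k * dt) if k < 60 else 0.0)

peak = max(range(n), key=lambda i: abs(p[i]))
print(peak)   # pulse position; ideally near c*t/dx = 150 cells
```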

  3. Assessment of dynamic closure for premixed combustion large eddy simulation

    NASA Astrophysics Data System (ADS)

    Langella, Ivan; Swaminathan, Nedunchezhian; Gao, Yuan; Chakraborty, Nilanjan

    2015-09-01

    Turbulent piloted Bunsen flames of stoichiometric methane-air mixtures are computed using the large eddy simulation (LES) paradigm involving an algebraic closure for the filtered reaction rate. This closure involves the filtered scalar dissipation rate of a reaction progress variable. The model for this dissipation rate involves a parameter βc representing the flame front curvature effects induced by turbulence, chemical reactions, molecular dissipation, and their interactions at the sub-grid level, suggesting that this parameter may vary with filter width, i.e., be scale-dependent. Thus, it would be ideal to evaluate this parameter dynamically in the LES. A procedure for this evaluation is discussed and assessed using direct numerical simulation (DNS) data and LES calculations. The probability density functions of βc obtained from the DNS and LES calculations are very similar when the turbulent Reynolds number is sufficiently large and when the filter width normalised by the laminar flame thermal thickness is larger than unity. Results obtained using a constant (static) value for this parameter are also used for comparative evaluation. The detailed discussion presented in this paper suggests that the dynamic procedure works well, and physical insights and reasoning are provided to explain the observed behaviour.

  4. Large-eddy simulations of compressible convection on massively parallel computers [stellar physics]

    NASA Technical Reports Server (NTRS)

    Xie, Xin; Toomre, Juri

    1993-01-01

    We report preliminary implementation of the large-eddy simulation (LES) technique in 2D simulations of compressible convection carried out on the CM-2 massively parallel computer. The convective flow fields in our simulations possess structures similar to those found in a number of direct simulations, with roll-like flows coherent across the entire depth of the layer that spans several density scale heights. Our detailed assessment of the effects of various subgrid scale (SGS) terms reveals that they may affect the gross character of convection. Yet, somewhat surprisingly, we find that our LES solutions, and another in which the SGS terms are turned off, only show modest differences. The resulting 2D flows realized here are rather laminar in character, and achieving substantial turbulence may require stronger forcing and less dissipation.

  5. Nested high-resolution large-eddy simulations in WRF to support wind power

    NASA Astrophysics Data System (ADS)

    Mirocha, J.; Kirkil, G.; Kosovic, B.; Lundquist, J. K.

    2009-12-01

    The WRF model’s grid nesting capability provides a potentially powerful framework for simulating flow over a wide range of scales. One such application is the computation of realistic inflow boundary conditions for large eddy simulations (LES) by nesting LES domains within mesoscale domains. While nesting has been widely and successfully applied at GCM to mesoscale resolutions, the WRF model’s nesting behavior at the high-resolution (Δx < 1000 m) end of the spectrum is less well understood. Nesting LES within mesoscale domains can significantly improve turbulent flow prediction at the scale of a wind park, providing a basis for superior site characterization, or for improved simulation of the turbulent inflows encountered by turbines. We investigate WRF’s grid nesting capability at high mesh resolutions using nested mesoscale and large-eddy simulations. We examine the spatial scales required for flow structures to equilibrate to the finer mesh as flow enters a nest, and how the process depends on several parameters, including grid resolution, turbulence subfilter stress models, relaxation zones at nest interfaces, flow velocities, surface roughnesses, terrain complexity and atmospheric stability. Guidance on appropriate domain sizes and turbulence models for LES in light of these results is provided. This work is performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 (LLNL-ABS-416482).

  6. Numerics and subgrid‐scale modeling in large eddy simulations of stratocumulus clouds

    PubMed Central

    Mishra, Siddhartha; Schneider, Tapio; Kaul, Colleen M.; Tan, Zhihong

    2017-01-01

    Abstract Stratocumulus clouds are the most common type of boundary layer cloud; their radiative effects strongly modulate climate. Large eddy simulations (LES) of stratocumulus clouds often struggle to maintain fidelity to observations because of the sharp gradients occurring at the entrainment interfacial layer at the cloud top. The challenge posed to LES by stratocumulus clouds is evident in the wide range of solutions found in the LES intercomparison based on the DYCOMS‐II field campaign, where simulated liquid water paths for identical initial and boundary conditions varied by a factor of nearly 12. Here we revisit the DYCOMS‐II RF01 case and show that the wide range of previous LES results can be realized in a single LES code by varying only the numerical treatment of the equations of motion and the nature of subgrid‐scale (SGS) closures. The simulations that maintain the greatest fidelity to DYCOMS‐II observations are identified. The results show that using weighted essentially non‐oscillatory (WENO) numerics for all resolved advective terms and no explicit SGS closure consistently produces the highest‐fidelity simulations. This suggests that the numerical dissipation inherent in WENO schemes functions as a high‐quality, implicit SGS closure for this stratocumulus case. Conversely, using oscillatory centered difference numerical schemes for momentum advection, WENO numerics for scalars, and explicitly modeled SGS fluxes consistently produces the lowest‐fidelity simulations. We attribute this to the production of anomalously large SGS fluxes near the cloud tops through the interaction of numerical error in the momentum field with the scalar SGS model. PMID:28943997

  7. Numerics and subgrid-scale modeling in large eddy simulations of stratocumulus clouds.

    PubMed

    Pressel, Kyle G; Mishra, Siddhartha; Schneider, Tapio; Kaul, Colleen M; Tan, Zhihong

    2017-06-01

    Stratocumulus clouds are the most common type of boundary layer cloud; their radiative effects strongly modulate climate. Large eddy simulations (LES) of stratocumulus clouds often struggle to maintain fidelity to observations because of the sharp gradients occurring at the entrainment interfacial layer at the cloud top. The challenge posed to LES by stratocumulus clouds is evident in the wide range of solutions found in the LES intercomparison based on the DYCOMS-II field campaign, where simulated liquid water paths for identical initial and boundary conditions varied by a factor of nearly 12. Here we revisit the DYCOMS-II RF01 case and show that the wide range of previous LES results can be realized in a single LES code by varying only the numerical treatment of the equations of motion and the nature of subgrid-scale (SGS) closures. The simulations that maintain the greatest fidelity to DYCOMS-II observations are identified. The results show that using weighted essentially non-oscillatory (WENO) numerics for all resolved advective terms and no explicit SGS closure consistently produces the highest-fidelity simulations. This suggests that the numerical dissipation inherent in WENO schemes functions as a high-quality, implicit SGS closure for this stratocumulus case. Conversely, using oscillatory centered difference numerical schemes for momentum advection, WENO numerics for scalars, and explicitly modeled SGS fluxes consistently produces the lowest-fidelity simulations. We attribute this to the production of anomalously large SGS fluxes near the cloud tops through the interaction of numerical error in the momentum field with the scalar SGS model.
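The WENO machinery credited above with acting as an implicit SGS closure can be sketched compactly. Below is a minimal fifth-order WENO-JS reconstruction with the classic Jiang–Shu smoothness indicators; it illustrates the scheme in general, not the authors' code.

```python
# Sketch: fifth-order WENO-JS reconstruction of the interface value
# v_{i+1/2} from five cell averages. Smooth data get the optimal fifth-order
# combination; near discontinuities the nonlinear weights suppress the
# oscillatory stencils — that built-in dissipation is the "implicit closure".
def weno5_interface(vm2, vm1, v0, vp1, vp2, eps=1e-6):
    # Three third-order candidate reconstructions at i+1/2.
    p0 = (2*vm2 - 7*vm1 + 11*v0) / 6.0
    p1 = (-vm1 + 5*v0 + 2*vp1) / 6.0
    p2 = (2*v0 + 5*vp1 - vp2) / 6.0
    # Jiang-Shu smoothness indicators.
    b0 = 13/12*(vm2 - 2*vm1 + v0)**2 + 0.25*(vm2 - 4*vm1 + 3*v0)**2
    b1 = 13/12*(vm1 - 2*v0 + vp1)**2 + 0.25*(vm1 - vp1)**2
    b2 = 13/12*(v0 - 2*vp1 + vp2)**2 + 0.25*(3*v0 - 4*vp1 + vp2)**2
    # Nonlinear weights from the optimal linear weights (0.1, 0.6, 0.3).
    a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
    s = a0 + a1 + a2
    return (a0*p0 + a1*p1 + a2*p2) / s

# Smooth (linear) data: the reconstruction is exact at the interface.
print(weno5_interface(1.0, 2.0, 3.0, 4.0, 5.0))   # ~3.5
```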

  8. Moduli thermalization and finite temperature effects in "big" divisor large volume D3/ D7 Swiss-cheese compactification

    NASA Astrophysics Data System (ADS)

    Shukla, Pramod

    2011-01-01

    In the context of Type IIB compactified on a large volume Swiss-Cheese orientifold in the presence of a mobile space-time filling D3-brane and stacks of fluxed D7-branes wrapping the "big" divisor Σ B of a Swiss-Cheese Calabi-Yau in WCP 4[1, 1, 1, 6, 9], we explore various implications of moduli dynamics and discuss their couplings and decay into MSSM(-like) matter fields early in the history of the universe to reach thermal equilibrium. As with finite temperature effects in O'KKLT, we observe that the local minimum of the zero-temperature effective scalar potential is stable against any finite temperature corrections (up to two loops) in large volume scenarios as well. We also find that the moduli are heavy enough to avoid any cosmological moduli problem.

  9. Application of Compressible Volume of Fluid Model in Simulating the Impact and Solidification of Hollow Spherical ZrO2 Droplet on a Surface

    NASA Astrophysics Data System (ADS)

    Safaei, Hadi; Emami, Mohsen Davazdah; Jazi, Hamidreza Salimi; Mostaghimi, Javad

    2017-12-01

    Applications of hollow spherical particles in the thermal spray process have developed in recent years, accompanied by experimental and numerical studies aiming to better understand the impact of a hollow droplet on a surface. During such an impact, the volume and density of the gas trapped inside the droplet change, and numerical models should be able to simulate these changes and their consequent effects. The aim of this study is to numerically simulate the impact of a hollow ZrO2 droplet on a flat surface using the volume of fluid technique for compressible flows. An open-source, finite-volume-based CFD code was used to perform the simulations, with appropriate subprograms added to handle the studied cases. Simulation results were compared with the available experimental data. Results showed that at high impact velocities (U0 > 100 m/s), the compression of the trapped gas inside the droplet played a significant role in the impact dynamics: at such velocities, the droplet splashed explosively. Compressibility effects result in a more porous splat compared to the corresponding incompressible model. Moreover, the compressible model predicted a higher spread factor than the incompressible model, due to the planetary structure of the splat.
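    The change in the trapped gas during impact can be sketched, in the simplest possible terms, with an isothermal ideal-gas (Boyle's law) estimate. This is only an illustration of why a compressible treatment is needed, not the paper's compressible VOF model, and the numbers in the test below are illustrative:

```python
def compressed_cavity(p1, v1, rho1, p2):
    """Isothermal ideal-gas estimate of the trapped-gas cavity state after the
    surrounding pressure rises from p1 to p2 (illustrative, not the paper's model)."""
    v2 = p1 * v1 / p2        # Boyle's law: p1*V1 = p2*V2
    rho2 = rho1 * p2 / p1    # same gas mass in a smaller cavity
    return v2, rho2
```

    An incompressible model would keep both the cavity volume and density fixed, missing exactly the effect the abstract identifies as significant at high impact velocities.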

  10. In vitro validation of a Pitot-based flow meter for the measurement of respiratory volume and flow in large animal anaesthesia.

    PubMed

    Moens, Yves P S; Gootjes, Peter; Ionita, Jean-Claude; Heinonen, Erkki; Schatzmann, Urs

    2009-05-01

    To remodel and validate commercially available monitors and their Pitot tube-based flow sensors for use in large animals, using in vitro techniques. Prospective, in vitro experiment. Both the original and the remodelled sensor were studied with a reference flow generator. Measurements were taken of the static flow-pressure relationship and linearity of the flow signal. Sensor airway resistance was calculated. Following recalibration of the host monitor, volumes ranging from 1 to 7 L were generated by a calibration syringe, and the bias and precision of spirometric volume were determined. Where manual recalibration was not available, a conversion factor for volume measurement was determined. The influence of gas mixture composition and peak flow on the conversion factor was studied. Both the original and the remodelled sensor showed similar static flow-pressure relationships and linearity of the flow signal. Mean bias (%) of displayed values compared with the reference volumes of 3, 5 and 7 L varied between -0.4% and +2.4%, and this was significantly smaller than that for 1 L (4.8% to +5.0%). Conversion factors for 3, 5 and 7 L were very similar (mean 6.00 +/- 0.2, range 5.91-6.06) and were not significantly influenced by the gas mixture used. Increasing peak flow caused a small decrease in the conversion factor. Volume measurement errors and conversion factors for inspiration and expiration were close to identical. The combination of the host monitor with the remodelled flow sensor allowed accurate in vitro measurement of flows and volumes in the range expected during large animal anaesthesia. This combination has potential as a reliable spirometric monitor for use during large animal anaesthesia.
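    The two quantities the validation rests on, percentage bias against the syringe volume and the empirical conversion factor, reduce to simple arithmetic. A minimal sketch (function names and the sample numbers are illustrative, though the default factor of 6.00 is the study's reported mean):

```python
def percent_bias(measured_l, reference_l):
    """Bias of the displayed spirometric volume relative to the calibration
    syringe volume, expressed in percent."""
    return 100.0 * (measured_l - reference_l) / reference_l

def apply_conversion(raw_volume_l, factor=6.00):
    """Scale a raw monitor reading by the empirically determined conversion
    factor (mean 6.00 in this study) where manual recalibration is unavailable."""
    return raw_volume_l * factor
```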

  11. Extraction and LOD control of colored interval volumes

    NASA Astrophysics Data System (ADS)

    Miyamura, Hiroko N.; Takeshima, Yuriko; Fujishiro, Issei; Saito, Takafumi

    2005-03-01

    An interval volume serves as a generalized isosurface and represents a three-dimensional subvolume for which the associated scalar field values lie within a user-specified closed interval. In general, it is not an easy task for novices to specify the scalar field interval corresponding to their ROIs. In order to extract interval volumes from which desirable geometric features can be mined effectively, we propose a suggestive technique which extracts interval volumes automatically based on a global examination of the field contrast structure. Also proposed here is a simplification scheme for decimating the resulting triangle patches to realize efficient transmission and rendering of large-scale interval volumes. Color distributions as well as geometric features are taken into account to select the best edges to collapse. In addition, when a user wants to selectively display and analyze the original dataset, the simplified dataset is restored to the original quality. Several simulated and acquired datasets are used to demonstrate the effectiveness of the present methods.
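    The defining membership test of an interval volume, a closed scalar interval rather than a single isovalue, can be sketched in a few lines (a hypothetical helper for illustration; the paper's extraction operates on mesh geometry, not a boolean mask):

```python
def interval_volume_mask(field, lo, hi):
    """Mark the samples of a 2-D scalar field whose values lie in the closed
    interval [lo, hi]; an interval volume generalizes the isosurface f = c
    to the subvolume lo <= f <= hi."""
    return [[lo <= v <= hi for v in row] for row in field]
```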

  12. High axial resolution imaging system for large volume tissues using combination of inclined selective plane illumination and mechanical sectioning

    PubMed Central

    Zhang, Qi; Yang, Xiong; Hu, Qinglei; Bai, Ke; Yin, Fangfang; Li, Ning; Gang, Yadong; Wang, Xiaojun; Zeng, Shaoqun

    2017-01-01

    To resolve fine structures of biological systems such as neurons, microscopic imaging with sufficient spatial resolution in all three dimensions is required. With regular optical imaging systems, high lateral resolution is accessible, while high axial resolution is hard to achieve over a large volume. We introduce an imaging system for high-resolution 3D fluorescence imaging of large-volume tissues. Selective plane illumination was adopted to provide high axial resolution. A scientific CMOS camera working in sub-array mode kept the imaging area on the sample surface, which restrained the adverse effect of aberrations caused by the inclined illumination. Plastic embedding and precise mechanical sectioning extended the axial range and eliminated distortion during the whole imaging process. The combination of these techniques enabled 3D high-resolution imaging of large tissues. Fluorescent bead imaging showed resolutions of 0.59 μm, 0.47 μm, and 0.59 μm in the x, y, and z directions, respectively. Data acquired from a volume sample of brain tissue demonstrated the applicability of this imaging system. Imaging at different depths showed uniform performance: details could be recognized in either the near-soma area or the terminal area, and fine structures of neurons could be seen in both the xy and xz sections. PMID:29296503

  13. An adaptive maneuvering logic computer program for the simulation of one-on-one air-to-air combat. Volume 1: General description

    NASA Technical Reports Server (NTRS)

    Burgin, G. H.; Fogel, L. J.; Phelps, J. P.

    1975-01-01

    A technique for computer simulation of air combat is described. Volume 1 describes the computer program and its development in general terms. Two versions of the program exist. Both incorporate a logic for selecting and executing air combat maneuvers with performance models of specific fighter aircraft. In the batch processing version, the flight paths of two aircraft engaged in interactive aerial combat, both controlled by the same logic, are computed. The real-time version permits human pilots to fly air-to-air combat against the adaptive maneuvering logic (AML) in the Langley Differential Maneuvering Simulator (DMS). Volume 2 consists of a detailed description of the computer programs.

  14. Large Volume, Optical and Opto-Mechanical Metrology Techniques for ISIM on JWST

    NASA Technical Reports Server (NTRS)

    Hadjimichael, Theo

    2015-01-01

    The final, flight build of the Integrated Science Instrument Module (ISIM) element of the James Webb Space Telescope is the culmination of years of work across many disciplines and partners. This paper covers the large volume, ambient, optical and opto-mechanical metrology techniques used to verify the mechanical integration of the flight instruments in ISIM, including optical pupil alignment. We present an overview of ISIM's integration and test program, which is in progress, with an emphasis on alignment and optical performance verification. This work is performed at NASA Goddard Space Flight Center, in close collaboration with the European Space Agency, the Canadian Space Agency, and the Mid-Infrared Instrument European Consortium.

  15. Changes in peak oxygen uptake and plasma volume in fit and unfit subjects following exposure to a simulation of microgravity

    NASA Technical Reports Server (NTRS)

    Convertino, V. A.

    1998-01-01

    To test the hypothesis that the magnitude of the reduction in plasma volume and work capacity following exposure to simulated microgravity depends on the initial level of aerobic fitness, peak oxygen uptake (VO2peak) was measured in a group of physically fit subjects and compared with VO2peak in a group of relatively unfit subjects before and after 10 days of continuous 6 degrees head-down tilt (HDT). Ten fit subjects (40 +/- 2 years) with mean +/- SE VO2peak = 48.9 +/- 1.7 mL kg-1 min-1 were matched for age, height, and lean body weight with 10 unfit subjects (VO2peak = 37.7 +/- 1.6 mL kg-1 min-1). Before and after HDT, plasma, blood, and red cell volumes and body composition were measured, and all subjects underwent a graded supine cycle ergometer test to determine VO2peak. The reduction in VO2peak in fit subjects (-16.2%) was greater than that of unfit subjects (-6.1%). Similarly, reductions in plasma (-18.3%) and blood volumes (-16.0%) in fit subjects were larger than those of unfit subjects (blood volume = -5.6%; plasma volume = -6.6%). The reduced plasma volume was associated with a greater negative body fluid balance during the initial 24 h of HDT in the fit group (912 +/- 154 mL) compared with the unfit subjects (453 +/- 200 mL). The percentage change in VO2peak correlated with the percentage change in plasma volume (r = +0.79). Following exposure to simulated microgravity, fit subjects demonstrated larger reductions in VO2peak than unfit subjects, which were associated with larger reductions in plasma and blood volume. These data suggest that the magnitude of physical deconditioning induced by exposure to microgravity without the intervention of countermeasures was influenced by the initial fitness of the subjects.
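    The reported r = +0.79 is a Pearson correlation between the percentage changes in VO2peak and plasma volume. A self-contained sketch of the computation (the data in the test are synthetic, not the study's measurements):

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two paired samples, as used to
    relate %change in VO2peak to %change in plasma volume."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5
```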

  16. Ultra-large scale AFM of lipid droplet arrays: investigating the ink transfer volume in dip pen nanolithography.

    PubMed

    Förste, Alexander; Pfirrmann, Marco; Sachs, Johannes; Gröger, Roland; Walheim, Stefan; Brinkmann, Falko; Hirtz, Michael; Fuchs, Harald; Schimmel, Thomas

    2015-05-01

    There are only a few quantitative studies of the writing process in dip-pen nanolithography with lipids. Lipids are important carrier ink molecules for the delivery of bio-functional patterns in bio-nanotechnology. In order to better understand and control the writing process, more information on the transfer of lipid material from the tip to the substrate is needed. The dependence of the transferred ink volume on the dwell time of the tip on the substrate was investigated by topography measurements with an atomic force microscope (AFM) that is characterized by an ultra-large scan range of 800 × 800 μm(2). For this purpose, arrays of dots of the phospholipid 1,2-dioleoyl-sn-glycero-3-phosphocholine were written onto planar glass substrates and the resulting pattern was imaged by large scan area AFM. Two writing regimes were identified, characterized by either a steady decline or a constant ink volume transfer per dot feature. For the steady-state ink transfer, a linear relationship between the dwell time and the dot volume was determined, which is characterized by a flow rate of about 16 femtoliters per second. A dependence of the ink transport on the length of pauses before and in between writing the structures was observed and should be taken into account during pattern design when aiming at the best writing homogeneity. The ultra-large scan range of the utilized AFM allowed for a simultaneous study of the entire preparation area of almost 1 mm(2), yielding good statistical results.
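    In the steady-state regime the flow rate is simply the slope of a least-squares line through (dwell time, dot volume) pairs. A minimal sketch of that fit (the data in the test are synthetic, constructed around the reported ~16 fL/s, not the paper's measurements):

```python
def fit_ink_flow(dwell_times_s, dot_volumes_fl):
    """Least-squares line through (dwell time [s], dot volume [fL]) pairs;
    in the steady-state regime the slope is the ink flow rate in fL/s."""
    n = len(dwell_times_s)
    mt = sum(dwell_times_s) / n
    mv = sum(dot_volumes_fl) / n
    slope = (sum((t - mt) * (v - mv)
                 for t, v in zip(dwell_times_s, dot_volumes_fl))
             / sum((t - mt) ** 2 for t in dwell_times_s))
    intercept = mv - slope * mt
    return slope, intercept
```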

  17. Ultra-large scale AFM of lipid droplet arrays: investigating the ink transfer volume in dip pen nanolithography

    NASA Astrophysics Data System (ADS)

    Förste, Alexander; Pfirrmann, Marco; Sachs, Johannes; Gröger, Roland; Walheim, Stefan; Brinkmann, Falko; Hirtz, Michael; Fuchs, Harald; Schimmel, Thomas

    2015-05-01

    There are only a few quantitative studies of the writing process in dip-pen nanolithography with lipids. Lipids are important carrier ink molecules for the delivery of bio-functional patterns in bio-nanotechnology. In order to better understand and control the writing process, more information on the transfer of lipid material from the tip to the substrate is needed. The dependence of the transferred ink volume on the dwell time of the tip on the substrate was investigated by topography measurements with an atomic force microscope (AFM) that is characterized by an ultra-large scan range of 800 × 800 μm2. For this purpose, arrays of dots of the phospholipid 1,2-dioleoyl-sn-glycero-3-phosphocholine were written onto planar glass substrates and the resulting pattern was imaged by large scan area AFM. Two writing regimes were identified, characterized by either a steady decline or a constant ink volume transfer per dot feature. For the steady-state ink transfer, a linear relationship between the dwell time and the dot volume was determined, which is characterized by a flow rate of about 16 femtoliters per second. A dependence of the ink transport on the length of pauses before and in between writing the structures was observed and should be taken into account during pattern design when aiming at the best writing homogeneity. The ultra-large scan range of the utilized AFM allowed for a simultaneous study of the entire preparation area of almost 1 mm2, yielding good statistical results.

  18. Hybrid Large-Eddy/Reynolds-Averaged Simulation of a Supersonic Cavity Using VULCAN

    NASA Technical Reports Server (NTRS)

    Quinlan, Jesse; McDaniel, James; Baurle, Robert A.

    2013-01-01

    Simulations of a supersonic recessed-cavity flow are performed using a hybrid large-eddy/Reynolds-averaged simulation approach utilizing an inflow turbulence recycling procedure and hybridized inviscid flux scheme. Calorically perfect air enters a three-dimensional domain at a free stream Mach number of 2.92. Simulations are performed to assess grid sensitivity of the solution, efficacy of the turbulence recycling, and the effect of the shock sensor used with the hybridized inviscid flux scheme. Analysis of the turbulent boundary layer upstream of the rearward-facing step for each case indicates excellent agreement with theoretical predictions. Mean velocity and pressure results are compared to Reynolds-averaged simulations and experimental data for each case and indicate good agreement on the finest grid. Simulations are repeated on a coarsened grid, and results indicate strong grid density sensitivity. Simulations are performed with and without inflow turbulence recycling on the coarse grid to isolate the effect of the recycling procedure, which is demonstrably critical to capturing the relevant shear layer dynamics. Shock sensor formulations of Ducros and Larsson are found to predict mean flow statistics equally well.
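    The shock sensors compared here decide where the hybridized inviscid flux falls back to its dissipative branch. A common form of the Ducros-type sensor contrasts dilatation with vorticity (this is the textbook formula, not necessarily the exact variant implemented in VULCAN):

```python
def ducros_sensor(div_u, vorticity_mag, eps=1e-30):
    """Ducros-type shock sensor: approaches 1 where the flow is
    dilatation-dominated (shocks) and 0 where it is vorticity-dominated
    (turbulence), so dissipative fluxes are applied only near shocks."""
    return div_u**2 / (div_u**2 + vorticity_mag**2 + eps)
```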

  19. Volume-Of-Fluid Simulation for Predicting Two-Phase Cooling in a Microchannel

    NASA Astrophysics Data System (ADS)

    Gorle, Catherine; Parida, Pritish; Houshmand, Farzad; Asheghi, Mehdi; Goodson, Kenneth

    2014-11-01

    Two-phase flow in microfluidic geometries has applications of increasing interest for next generation electronic and optoelectronic systems, telecommunications devices, and vehicle electronics. While there has been progress on comprehensive simulation of two-phase flows in compact geometries, validation of the results in different flow regimes should be considered to determine the predictive capabilities. In the present study we use the volume-of-fluid method to model the flow through a single micro channel with cross section 100 × 100 μm and length 10 mm. The channel inlet mass flux and the heat flux at the lower wall result in a subcooled boiling regime in the first 2.5 mm of the channel and a saturated flow regime further downstream. A conservation equation for the vapor volume fraction, and a single set of momentum and energy equations with volume-averaged fluid properties are solved. A reduced-physics phase change model represents the evaporation of the liquid and the corresponding heat loss, and the surface tension is accounted for by a source term in the momentum equation. The phase change model used requires the definition of a time relaxation parameter, which can significantly affect the solution since it determines the rate of evaporation. The results are compared to experimental data available from literature, focusing on the capability of the reduced-physics phase change model to predict the correct flow pattern, temperature profile and pressure drop.
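    The "reduced-physics phase change model" with a time relaxation parameter is commonly realized as a Lee-type relaxation source term; the sketch below assumes that form (the study's exact closure may differ, and the parameter values are illustrative):

```python
def evaporation_rate(alpha_liq, rho_liq, temp_k, t_sat_k, relax=0.1):
    """Lee-type evaporation mass source [kg/(m^3 s)] for a volume-of-fluid
    simulation: proportional to the liquid fraction, liquid density, and
    superheat above saturation. `relax` [1/s] is the time relaxation
    parameter that sets the evaporation rate, and hence strongly affects
    the predicted flow pattern and pressure drop."""
    if temp_k <= t_sat_k:
        return 0.0  # no evaporation below saturation in this one-way sketch
    return relax * alpha_liq * rho_liq * (temp_k - t_sat_k) / t_sat_k
```

    Because the source term scales linearly with `relax`, the relaxation parameter acts as a tuning knob on the evaporation rate, which is exactly why the abstract flags its choice as a significant sensitivity.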

  20. Exemplar for simulation challenges: Large-deformation micromechanics of Sylgard 184/glass microballoon syntactic foams.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Judith Alice; Long, Kevin Nicholas

    2018-05-01

    Sylgard® 184/Glass Microballoon (GMB) potting material is currently used in many NW systems. Analysts need a macroscale constitutive model that can predict material behavior under complex loading and damage evolution. To address this need, ongoing modeling and experimental efforts have focused on the study of damage evolution in these materials. Micromechanical finite element simulations that resolve individual GMB and matrix components promote discovery and better understanding of the material behavior. With these simulations, we can study the role of the GMB volume fraction, time-dependent damage, behavior under confined vs. unconfined compression, and the effects of partial damage. These simulations are challenging and push the boundaries of capability even with the high performance computing tools available at Sandia. We summarize the major challenges and the current state of this modeling effort, as an exemplar of micromechanical modeling needs that can motivate advances in future computing efforts.

  1. Large Eddy Simulation Study for Fluid Disintegration and Mixing

    NASA Technical Reports Server (NTRS)

    Bellan, Josette; Taskinoglu, Ezgi

    2011-01-01

    A new modeling approach is based on the concept of large eddy simulation (LES), within which the large scales are computed and the small scales are modeled. The new approach is expected to retain the fidelity of the physics while also being computationally efficient. Typically, only models for the small-scale fluxes of momentum, species, and enthalpy are used to reintroduce into the simulation the physics lost because the computation only resolves the large scales. These models are called subgrid-scale (SGS) models because they operate at a scale smaller than the LES grid. In a previous study of thermodynamically supercritical fluid disintegration and mixing, additional small-scale terms, one in the momentum and one in the energy conservation equations, were identified as requiring modeling. These additional terms were due to the tight coupling between dynamics and real-gas thermodynamics. It was inferred that, without the additional term in the momentum equation, the high density-gradient magnitude regions, experimentally identified as a characteristic feature of these flows, would not be accurately predicted; these high density-gradient magnitude regions were experimentally shown to redistribute turbulence in the flow. It was also inferred that, without the additional term in the energy equation, the heat flux magnitude could not be accurately predicted; the heat flux to the wall of combustion devices is a crucial quantity that determines necessary wall material properties. The present work involves situations where only the term in the momentum equation is important. Without this additional term in the momentum equation, neither the SGS-flux constant-coefficient Smagorinsky model nor the SGS-flux constant-coefficient Gradient model could reproduce in LES the pressure field or the high density-gradient magnitude regions; the SGS-flux constant-coefficient Scale-Similarity model was the most successful in this endeavor although not
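    Of the SGS flux closures compared, the constant-coefficient Smagorinsky model is the simplest to state. A minimal 2-D sketch of its eddy viscosity (the coefficient value is a textbook choice, not the one used in the study):

```python
def smagorinsky_nu_t(dudx, dudy, dvdx, dvdy, delta, cs=0.17):
    """Constant-coefficient Smagorinsky eddy viscosity for a 2-D velocity
    gradient: nu_t = (Cs*Delta)^2 * |S|, where |S| = sqrt(2 S_ij S_ij) is the
    resolved strain-rate magnitude and Delta the LES filter width."""
    s11 = dudx
    s22 = dvdy
    s12 = 0.5 * (dudy + dvdx)
    s_mag = (2.0 * (s11**2 + s22**2 + 2.0 * s12**2)) ** 0.5
    return (cs * delta) ** 2 * s_mag
```

    The closure dissipates wherever resolved strain is large; the paper's point is that, for supercritical mixing, such flux-only closures miss the extra momentum-equation term entirely.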

  2. Numerical methods for large eddy simulation of acoustic combustion instabilities

    NASA Astrophysics Data System (ADS)

    Wall, Clifton T.

    Acoustic combustion instabilities occur when interaction between the combustion process and acoustic modes in a combustor results in periodic oscillations in pressure, velocity, and heat release. If sufficiently large in amplitude, these instabilities can cause operational difficulties or the failure of combustor hardware. In many situations, the dominant instability is the result of the interaction between a low frequency acoustic mode of the combustor and the large scale hydrodynamics. Large eddy simulation (LES), therefore, is a promising tool for the prediction of these instabilities, since both the low frequency acoustic modes and the large scale hydrodynamics are well resolved in LES. Problems with the tractability of such simulations arise, however, due to the difficulty of solving the compressible Navier-Stokes equations efficiently at low Mach number and due to the large number of acoustic periods that are often required for such instabilities to reach limit cycles. An implicit numerical method for the solution of the compressible Navier-Stokes equations has been developed which avoids the acoustic CFL restriction, allowing for significant efficiency gains at low Mach number, while still resolving the low frequency acoustic modes of interest. In the limit of a uniform grid the numerical method causes no artificial damping of acoustic waves. New, non-reflecting boundary conditions have also been developed for use with the characteristic-based approach of Poinsot and Lele (1992). The new boundary conditions are implemented in a manner which allows for significant reduction of the computational domain of an LES by eliminating the need to perform LES in regions where one-dimensional acoustics significantly affect the instability but details of the hydrodynamics do not. These new numerical techniques have been demonstrated in an LES of an experimental combustor. The new techniques are shown to be an efficient means of performing LES of acoustic combustion
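    The efficiency gain from avoiding the acoustic CFL restriction can be quantified with the two stability limits involved (a generic sketch; the scheme in this work is implicit in the acoustic terms, so it targets the convective limit):

```python
def explicit_dt(dx, u, c, cfl=0.5):
    """Acoustic CFL limit for a fully explicit compressible scheme:
    dt <= CFL * dx / (|u| + c), with c the sound speed."""
    return cfl * dx / (abs(u) + c)

def convective_dt(dx, u, cfl=0.5):
    """Convective limit an acoustically implicit scheme can target instead;
    at low Mach number it is roughly 1/Mach times larger."""
    return cfl * dx / abs(u)
```

    At Mach 0.01 the convective step is about 100 times the acoustic one, which is why implicit treatment of the acoustics makes long limit-cycle LES tractable.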

  3. Percutaneous Image-Guided Cryoablation of Challenging Mediastinal Lesions Using Large-Volume Hydrodissection: Technical Considerations and Outcomes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garnon, Julien, E-mail: juliengarnon@gmail.com; Koch, Guillaume, E-mail: Guillaume.koch@gmail.com; Caudrelier, Jean, E-mail: caudjean@yahoo.fr

    Objective: This study was designed to describe the technique of percutaneous image-guided cryoablation with large-volume hydrodissection for the treatment of challenging mediastinal lesions. Methods: Between March 2014 and June 2015, three patients (mean age 62.7 years) with four neoplastic anterior mediastinal lesions underwent five cryoablation procedures using large-volume hydrodissection. Procedures were performed under general anaesthesia using CT guidance. Lesion characteristics, hydrodissection and cryoablation data, technical success, complications, and clinical outcomes were assessed using retrospective chart review. Results: Lesions (mean size 2.7 cm; range 2–4.3 cm) were in contact with great vessels (n = 13), trachea (n = 3), and mediastinal nerves (n = 6). Hydrodissection was performed intercostally (n = 4), suprasternally (n = 2), transsternally (n = 1), or via the sternoclavicular joint (n = 1) using 1–3 spinal needles over 13.4 (range 7–26) minutes; 450 ml of dilute contrast was injected (range 300–600 ml), increasing the mean distance between lesion and collateral structures from 1.9 to 7.7 mm. Vulnerable mediastinal nerves were identified in four of five procedures. Technical success was 100%, with one immediate complication (recurrent laryngeal nerve injury). Mean follow-up period was 15 months. One lesion demonstrated residual disease on restaging PET-CT and was retreated to achieve complete ablation. At last follow-up, two patients remained disease-free, and one patient developed distant disease after 1 year without local recurrence. Conclusions: Cryoablation using large-volume hydrodissection is a feasible technique, enabling safe and effective treatment of challenging mediastinal lesions.

  4. Cognitive, Social, and Literacy Competencies: The Chelsea Bank Simulation Project. Year One: Final Report. [Volume 2]: Appendices.

    ERIC Educational Resources Information Center

    Duffy, Thomas; And Others

    This supplementary volume presents appendixes A-E associated with a 1-year study which determined what secondary school students were doing as they engaged in the Chelsea Bank computer software simulation activities. Appendixes present the SCANS Analysis Coding Sheet; coding problem analysis of 50 video segments; student and teacher interview…

  5. A large volume particulate and water multi-sampler with in situ preservation for microbial and biogeochemical studies

    NASA Astrophysics Data System (ADS)

    Breier, J. A.; Sheik, C. S.; Gomez-Ibanez, D.; Sayre-McCord, R. T.; Sanger, R.; Rauch, C.; Coleman, M.; Bennett, S. A.; Cron, B. R.; Li, M.; German, C. R.; Toner, B. M.; Dick, G. J.

    2014-12-01

    A new tool was developed for large volume sampling to facilitate marine microbiology and biogeochemical studies. It was developed for remotely operated vehicle and hydrocast deployments, and allows for rapid collection of multiple sample types from the water column and dynamic, variable environments such as rising hydrothermal plumes. It was used successfully during a cruise to the hydrothermal vent systems of the Mid-Cayman Rise. The Suspended Particulate Rosette V2 large volume multi-sampling system allows for the collection of 14 sample sets per deployment. Each sample set can include filtered material, whole (unfiltered) water, and filtrate. Suspended particulate can be collected on filters up to 142 mm in diameter and pore sizes down to 0.2 μm. Filtration is typically at flowrates of 2 L min-1. For particulate material, filtered volume is constrained only by sampling time and filter capacity, with all sample volumes recorded by digital flowmeter. The suspended particulate filter holders can be filled with preservative and sealed immediately after sample collection. Up to 2 L of whole water, filtrate, or a combination of the two, can be collected as part of each sample set. The system is constructed of plastics with titanium fasteners and nickel alloy spring loaded seals. There are no ferrous alloys in the sampling system. Individual sample lines are prefilled with filtered, deionized water prior to deployment and remain sealed unless a sample is actively being collected. This system is intended to facilitate studies concerning the relationship between marine microbiology and ocean biogeochemistry.
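    Recording each filtered volume with a digital flowmeter amounts to integrating the flow-rate trace over time. A minimal sketch of that accumulation (trapezoidal rule; the trace in the test is synthetic, using the 2 L/min filtration rate quoted above):

```python
def filtered_volume_l(times_min, flow_l_per_min):
    """Trapezoidal integration of a flowmeter trace (time in minutes,
    flow in L/min) to obtain the total filtered volume in liters."""
    vol = 0.0
    for i in range(1, len(times_min)):
        dt = times_min[i] - times_min[i - 1]
        vol += 0.5 * (flow_l_per_min[i] + flow_l_per_min[i - 1]) * dt
    return vol
```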

  6. Gas-Grain Simulation Facility: Fundamental studies of particle formation and interactions. Volume 1: Executive summary and overview

    NASA Technical Reports Server (NTRS)

    Fogleman, Guy (Editor); Huntington, Judith L. (Editor); Schwartz, Deborah E. (Editor); Fonda, Mark L. (Editor)

    1989-01-01

    An overview of the Gas-Grain Simulation Facility (GGSF) project and its current status is provided. The proceedings of the Gas-Grain Simulation Facility Experiments Workshop are recorded. The goal of the workshop was to define experiments for the GGSF--a small particle microgravity research facility. The workshop addressed the opportunity for performing, in Earth orbit, a wide variety of experiments that involve single small particles (grains) or clouds of particles. The first volume includes the executive summary, overview, scientific justification, history, and planned development of the Facility.

  7. GenASiS Basics: Object-oriented utilitarian functionality for large-scale physics simulations

    DOE PAGES

    Cardall, Christian Y.; Budiardja, Reuben D.

    2015-06-11

    Aside from numerical algorithms and problem setup, large-scale physics simulations on distributed-memory supercomputers require more basic utilitarian functionality, such as physical units and constants; display to the screen or standard output device; message passing; I/O to disk; and runtime parameter management and usage statistics. Here we describe and make available Fortran 2003 classes furnishing extensible object-oriented implementations of this sort of rudimentary functionality, along with individual 'unit test' programs and larger example problems demonstrating their use. These classes compose the Basics division of our developing astrophysics simulation code GenASiS (General Astrophysical Simulation System), but their fundamental nature makes them useful for physics simulations in many fields.
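    The physical-units functionality mentioned above can be illustrated with a tiny unit-carrying value type. This is a Python analogue of the idea only; GenASiS's actual Fortran 2003 classes and their API differ:

```python
class Quantity:
    """Minimal unit-carrying value, illustrating the kind of physical-units
    class the GenASiS Basics division provides (the real API differs)."""
    def __init__(self, value, unit):
        self.value = value
        self.unit = unit

    def __add__(self, other):
        # Adding incompatible units is a physics bug; fail loudly.
        if self.unit != other.unit:
            raise ValueError(f"cannot add {self.unit} to {other.unit}")
        return Quantity(self.value + other.value, self.unit)

    def __mul__(self, scalar):
        # Scaling by a dimensionless number preserves the unit.
        return Quantity(self.value * scalar, self.unit)
```

    Catching unit mismatches at the point of use, rather than in post-processing, is exactly the sort of rudimentary safeguard such utility classes provide.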

  8. Gas-Grain Simulation Facility: Fundamental studies of particle formation and interactions. Volume 2: Abstracts, candidate experiments and feasibility study

    NASA Technical Reports Server (NTRS)

    Fogleman, Guy (Editor); Huntington, Judith L. (Editor); Schwartz, Deborah E. (Editor); Fonda, Mark L. (Editor)

    1989-01-01

    An overview of the Gas-Grain Simulation Facility (GGSF) project and its current status is provided. The proceedings of the Gas-Grain Simulation Facility Experiments Workshop are recorded. The goal of the workshop was to define experiments for the GGSF--a small particle microgravity research facility. The workshop addressed the opportunity for performing, in Earth orbit, a wide variety of experiments that involve single small particles (grains) or clouds of particles. Twenty experiments from the fields of exobiology, planetary science, astrophysics, atmospheric science, biology, physics, and chemistry were described at the workshop and are outlined in Volume 2. Each experiment description included specific scientific objectives, an outline of the experimental procedure, and the anticipated GGSF performance requirements. Since these experiments represent the types of studies that will ultimately be proposed for the facility, they will be used to define the general science requirements of the GGSF. Also included in the second volume is a physics feasibility study and abstracts of example Gas-Grain Simulation Facility experiments and related experiments in progress.

  9. Large-scale magnetic fields at high Reynolds numbers in magnetohydrodynamic simulations.

    PubMed

    Hotta, H; Rempel, M; Yokoyama, T

    2016-03-25

    The 11-year solar magnetic cycle shows a high degree of coherence in spite of the turbulent nature of the solar convection zone. It has been found in recent high-resolution magnetohydrodynamics simulations that the maintenance of a large-scale coherent magnetic field is difficult with small viscosity and magnetic diffusivity (≲10¹² square centimeters per second). We reproduced previous findings that indicate a reduction of the energy in the large-scale magnetic field for lower diffusivities and demonstrate the recovery of the global-scale magnetic field using unprecedentedly high resolution. We found an efficient small-scale dynamo that suppresses small-scale flows, which mimics the properties of large diffusivity. As a result, the global-scale magnetic field is maintained even in the regime of small diffusivities, that is, large Reynolds numbers. Copyright © 2016, American Association for the Advancement of Science.

  10. Sensitivity of local air quality to the interplay between small- and large-scale circulations: a large-eddy simulation study

    NASA Astrophysics Data System (ADS)

    Wolf-Grosse, Tobias; Esau, Igor; Reuder, Joachim

    2017-06-01

    Street-level urban air pollution is a challenging concern for modern urban societies. Pollution dispersion models assume that the concentrations decrease monotonically with rising wind speed. This convenient assumption breaks down when applied to flows with local recirculations such as those found in topographically complex coastal areas. This study looks at a practically important and sufficiently common case of air pollution in a coastal valley city. Here, the observed concentrations are determined by the interaction between large-scale topographically forced and local-scale breeze-like recirculations. Analysis of a long observational dataset in Bergen, Norway, revealed that the most extreme cases of recurring wintertime air pollution episodes were accompanied by increased large-scale wind speeds above the valley. Contrary to the theoretical assumption and intuitive expectations, the maximum NO2 concentrations were not found for the lowest 10 m ERA-Interim wind speeds but in situations with wind speeds of 3 m s⁻¹. To explain this phenomenon, we investigated empirical relationships between the large-scale forcing and the local wind and air quality parameters. We conducted 16 large-eddy simulation (LES) experiments with the Parallelised Large-Eddy Simulation Model (PALM) for atmospheric and oceanic flows. The LES accounted for the realistic relief and coastal configuration as well as for the large-scale forcing and local surface condition heterogeneity in Bergen. They revealed that emerging local breeze-like circulations strongly enhance the urban ventilation and dispersion of the air pollutants in situations with weak large-scale winds. Slightly stronger large-scale winds, however, can counteract these local recirculations, leading to enhanced surface air stagnation. Furthermore, this study looks at the concrete impact of the relative configuration of warmer water bodies in the city and the major transport corridor. We found that a relatively small local water

  11. Influence of large hiatus hernia on cardiac volumes. A prospective observational cohort study by cardiovascular magnetic resonance.

    PubMed

    Milito, Pamela; Lombardi, Massimo; Asti, Emanuele; Bonitta, Gianluca; Fina, Dario; Bandera, Francesco; Bonavina, Luigi

    2018-05-09

    Large hiatus hernia (LHH) is often associated with post-prandial dyspnea, palpitations or chest discomfort, but its effect on cardiac volumes and performance is still debated. Before and 3 months after laparoscopic repair, 35 patients underwent cardiovascular magnetic resonance (CMR) in the fasting state and after a standardized meal. Preoperatively, LHH size increased significantly after the meal (p < 0.010). Compared to the fasting state, a systematic trend of volume reduction of the cardiac chambers was observed. In addition, both the left ventricle stroke volume (p = 0.012) and the ejection fraction (p = 0.010) were significantly reduced. At 3 months after surgery there was a statistically significant increase in left atrial volume (p = 0.029), overall left ventricle volume (p < 0.05) and right ventricle end-systolic volume (p = 0.046). Both FEV1 (forced expiratory volume in 1 second) (p = 0.02) and FVC (forced vital capacity) (p = 0.01) values significantly improved after surgery. Cardiorespiratory symptoms significantly improved compared to pre-operative values (p < 0.01). The global heart function was significantly impaired by a standardized meal in the presence of a LHH. Restoration of the cardiac physiological status and improvement of clinical symptoms were noted after surgery. A multidisciplinary evaluation and CMR with a challenge meal may be added to routine pre-operative testing to select symptomatic patients for surgical hernia repair. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. The BREAST-V: a unifying predictive formula for volume assessment in small, medium, and large breasts.

    PubMed

    Longo, Benedetto; Farcomeni, Alessio; Ferri, Germano; Campanale, Antonella; Sorotos, Micheal; Santanelli, Fabio

    2013-07-01

    Breast volume assessment enhances preoperative planning of both aesthetic and reconstructive procedures, helping the surgeon in the decision-making process of shaping the breast. Numerous methods of breast size determination are currently reported but are limited by methodologic flaws and variable estimations. The authors aimed to develop a unifying predictive formula for volume assessment in small to large breasts based on anthropomorphic values. Ten anthropomorphic breast measurements and direct volumes of 108 mastectomy specimens from 88 women were collected prospectively. The authors performed a multivariate regression to build the optimal model for development of the predictive formula. The final model was then internally validated. A previously published formula was used as a reference. Mean (±SD) breast weight was 527.9 ± 227.6 g (range, 150 to 1250 g). After model selection, sternal notch-to-nipple, inframammary fold-to-nipple, and inframammary fold-to-fold projection distances emerged as the most important predictors. The resulting formula (the BREAST-V) showed an adjusted R² of 0.73. The estimated expected absolute error on new breasts is 89.7 g (95 percent CI, 62.4 to 119.1 g) and the expected relative error is 18.4 percent (95 percent CI, 12.9 to 24.3 percent). Application of the reference formula to the sample yielded worse predictions than those derived from the BREAST-V, with an R² of 0.55. The BREAST-V is a reliable tool for accurately predicting small to large breast volumes, for use as a complementary device in surgeon evaluation. An app entitled BREAST-V is currently available for free download for both iOS and Android devices in the Apple App Store and Google Play Store. Level of evidence: Diagnostic, II.
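The published BREAST-V coefficients are not reproduced in the abstract, but the modelling step it describes (a multivariate linear regression of direct specimen volume on three anthropomorphic distances) can be sketched with ordinary least squares. All measurement ranges, coefficients, and noise levels below are synthetic stand-ins, not the paper's data:

```python
import numpy as np

# Hypothetical sketch: regress volume on the three predictor distances named
# in the abstract (sternal notch-to-nipple, inframammary fold-to-nipple,
# fold-to-fold projection). Data and "true" coefficients are made up.
rng = np.random.default_rng(0)
n = 108                                                  # specimens, as in the study
X = rng.uniform(low=[18, 5, 6], high=[35, 14, 18], size=(n, 3))  # distances [cm]
true_w = np.array([20.0, 35.0, 25.0])                    # synthetic g/cm weights
y = X @ true_w + rng.normal(0, 30, n) - 300.0            # synthetic volumes [g]

A = np.column_stack([X, np.ones(n)])                     # add intercept column
w, *_ = np.linalg.lstsq(A, y, rcond=None)                # ordinary least squares

pred = A @ w
ss_res = np.sum((y - pred) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot                               # coefficient of determination
```

Internal validation in the paper goes further (out-of-sample error estimates); this sketch only shows the fit itself.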

  13. Modelling lidar volume-averaging and its significance to wind turbine wake measurements

    NASA Astrophysics Data System (ADS)

    Meyer Forsting, A. R.; Troldborg, N.; Borraccino, A.

    2017-05-01

    Lidar velocity measurements need to be interpreted differently from conventional in-situ readings. A commonly ignored factor is “volume-averaging”, which refers to the fact that lidars do not sample at a single, distinct point but along their entire beam length. This can be detrimental, especially in regions with large velocity gradients such as the rotor wake. Hence, an efficient algorithm mimicking lidar flow sampling is presented, which accommodates both pulsed and continuous-wave lidar weighting functions. The flow field around a 2.3 MW turbine is simulated using detached eddy simulation in combination with an actuator line to test the algorithm and investigate the potential impact of volume-averaging. Volume-averaging is captured accurately even with very few points discretising the lidar beam. The difference between a lidar and a point measurement is greatest at the wake edges and increases from 30% one rotor diameter (D) downstream of the rotor to 60% at 3D.
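The core effect can be illustrated in a few lines: the lidar reports a weighted average of the line-of-sight velocity along the beam, so a sharp shear layer (a wake edge) is smeared. The Gaussian weighting function and all numbers below are assumptions for illustration; real pulsed and continuous-wave lidars have different, instrument-specific weighting shapes:

```python
import numpy as np

# Illustrative volume-averaging sketch: weighted average of the line-of-sight
# velocity along the beam coordinate s, with an assumed Gaussian weight.
def lidar_sample(u_los, s, focus, fwhm):
    """Weighted average of u_los over the beam, centred at `focus`."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # FWHM -> std dev
    w = np.exp(-0.5 * ((s - focus) / sigma) ** 2)       # probe-volume weight
    return np.sum(w * u_los) / np.sum(w)                # discrete weighted mean

s = np.linspace(0.0, 200.0, 2001)                # beam coordinate [m]
u = 8.0 + 2.0 * np.tanh((s - 100.0) / 5.0)       # sharp shear layer (wake edge)
point = 8.0 + 2.0 * np.tanh((95.0 - 100.0) / 5.0)     # true velocity at the focus
avg30 = lidar_sample(u, s, focus=95.0, fwhm=30.0)     # 30 m probe volume
# the probe volume smears the gradient: avg30 is pulled from `point` toward
# the free-stream value, i.e. the lidar under-resolves the wake edge
```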

  14. Density-functional theory simulation of large quantum dots

    NASA Astrophysics Data System (ADS)

    Jiang, Hong; Baranger, Harold U.; Yang, Weitao

    2003-10-01

    Kohn-Sham spin-density functional theory provides an efficient and accurate model to study electron-electron interaction effects in quantum dots, but its application to large systems is a challenge. Here an efficient method for the simulation of quantum dots using density-functional theory is developed; it includes the particle-in-the-box representation of the Kohn-Sham orbitals, an efficient conjugate-gradient method to directly minimize the total energy, a Fourier convolution approach for the calculation of the Hartree potential, and a simplified multigrid technique to accelerate the convergence. We test the methodology in a two-dimensional model system and show that numerical studies of large quantum dots with several hundred electrons become computationally affordable. In the noninteracting limit, the classical dynamics of the system we study can be continuously varied from integrable to fully chaotic. The qualitative difference in the noninteracting classical dynamics has an effect on the quantum properties of the interacting system: integrable classical dynamics leads to higher-spin states and a broader distribution of spacing between Coulomb blockade peaks.
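The Fourier-convolution idea for the Hartree potential can be sketched generically: V_H(r) = ∫ n(r′) K(r − r′) dr′ becomes an O(N² log N) FFT product instead of an O(N⁴) direct sum, provided the arrays are zero-padded so the convolution is linear rather than circular. The softened 1/r kernel below is an illustrative choice, not the paper's exact kernel treatment:

```python
import numpy as np

# Sketch of a Hartree potential via zero-padded FFT convolution in 2D.
def hartree_fft(n, dx, soft=1e-2):
    N = n.shape[0]
    x = (np.arange(2 * N) - N) * dx                  # padded coordinate axis
    X, Y = np.meshgrid(x, x, indexing="ij")
    K = 1.0 / np.sqrt(X**2 + Y**2 + soft**2)         # softened Coulomb kernel
    pad = np.zeros((2 * N, 2 * N))
    pad[:N, :N] = n                                  # zero-pad the density
    conv = np.fft.ifft2(np.fft.fft2(pad) * np.fft.fft2(np.fft.ifftshift(K))).real
    return conv[:N, :N] * dx * dx                    # quadrature weight

N, dx = 16, 0.5
rng = np.random.default_rng(1)
n = rng.random((N, N))                               # toy electron density
V = hartree_fft(n, dx)                               # matches the direct sum
```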

  15. Numerical simulation of sloshing with large deforming free surface by MPS-LES method

    NASA Astrophysics Data System (ADS)

    Pan, Xu-jie; Zhang, Huai-xin; Sun, Xue-yao

    2012-12-01

    Moving particle semi-implicit (MPS) method is a fully Lagrangian particle method which can easily solve problems with violent free surfaces. Although it has demonstrated its advantages in ocean engineering applications, it still has some defects to be improved. In this paper, the MPS method is extended to large eddy simulation (LES) by coupling with a sub-particle-scale (SPS) turbulence model. The SPS turbulence model enters the filtered momentum equation through the Reynolds stress terms, and the Smagorinsky model is introduced to describe these terms. Although the MPS method has an advantage in the simulation of free surface flow, many non-free-surface particles are misidentified as free surface particles in the original MPS model. In this paper, we use a new free surface tracing method whose key concept is the "neighbor particle". In this new method, the zone around each particle is divided into eight parts, and the particle will be treated as a free surface particle as long as there are no "neighbor particles" in any two parts of the zone. Because the number-density criterion is efficient at tracing free surface particles, we combine it with the neighbor-detection method. First, the number-density criterion selects out the particles most likely to be misidentified; these particles are then re-examined with the neighbor-detection method. In this way, the new mixed free surface tracing method reduces the misidentification problem efficiently. Severe pressure fluctuation is an obvious defect of the MPS method, so an area-time averaging technique is used in this paper to remove the pressure fluctuation, with quite good results. With these improvements, the modified MPS-LES method is applied to simulate liquid sloshing problems with large deforming free surfaces. Results show that the modified MPS-LES method can simulate the large deforming free surface easily. It can not only capture
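The eight-sector neighbour test described in the abstract can be sketched directly: split the disc of radius r_e around a particle into eight angular sectors and flag the particle as free surface when at least two sectors contain no neighbours. Whether the empty sectors must be adjacent is not specified in the abstract, so the sketch below uses the simple "any two sectors" reading:

```python
import numpy as np

# Sketch of the eight-sector free-surface test for 2D particle positions.
def is_free_surface(p, particles, r_e):
    d = particles - p
    r = np.hypot(d[:, 0], d[:, 1])
    mask = (r > 1e-12) & (r < r_e)             # neighbours within radius r_e
    if not mask.any():
        return True                             # isolated particle
    ang = np.arctan2(d[mask, 1], d[mask, 0])    # angle of each neighbour
    sectors = ((ang + np.pi) / (2 * np.pi / 8)).astype(int) % 8
    occupied = np.unique(sectors)
    return 8 - occupied.size >= 2               # >= 2 empty sectors -> surface

# interior particle of a small unit lattice vs a corner particle
xs, ys = np.meshgrid(np.arange(5.0), np.arange(5.0))
pts = np.column_stack([xs.ravel(), ys.ravel()])
interior = is_free_surface(np.array([2.0, 2.0]), pts, r_e=1.6)  # surrounded
corner = is_free_surface(np.array([0.0, 0.0]), pts, r_e=1.6)    # on boundary
```

In the paper this test is only applied to particles pre-screened by the cheaper number-density criterion; the sketch omits that first pass.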

  16. Large eddy simulation for aerodynamics: status and perspectives.

    PubMed

    Sagaut, Pierre; Deck, Sébastien

    2009-07-28

    The present paper provides an up-to-date survey of the use of large eddy simulation (LES) and sequels for engineering applications related to aerodynamics. Most recent landmark achievements are presented. Two categories of problem may be distinguished, according to whether or not the location of separation is triggered by the geometry. In the first case, LES can be considered as a mature technique and recent hybrid Reynolds-averaged Navier-Stokes (RANS)-LES methods do not allow for a significant increase in terms of geometrical complexity and/or Reynolds number with respect to classical LES. When attached boundary layers have a significant impact on the global flow dynamics, the use of hybrid RANS-LES remains the principal strategy to reduce computational cost compared to LES. Another striking observation is that the level of validation is most of the time restricted to time-averaged global quantities, a detailed analysis of the flow unsteadiness being missing. Therefore, a clear need for detailed validation in the near future is identified. To this end, new issues, such as uncertainty and error quantification and modelling, will be of major importance. First results dealing with uncertainty modelling in unsteady turbulent flow simulation are presented.

  17. Detection of triazole deicing additives in soil samples from airports with low, mid, and large volume aircraft deicing activities.

    PubMed

    McNeill, K S; Cancilla, D A

    2009-03-01

    Soil samples from three USA airports representing low, mid, and large volume users of aircraft deicing fluids (ADAFs) were analyzed by LC/MS/MS for the presence of triazoles, a class of corrosion inhibitors historically used in ADAFs. Triazoles, specifically the 4-methyl-1H-benzotriazole and the 5-methyl-1H-benzotriazole, were detected in a majority of samples and ranged from 2.35 to 424.19 microg/kg. Previous studies have focused primarily on ground and surface water impacts of larger volume ADAF users. The detection of triazoles in soils at low volume ADAF use airports suggests that deicing activities may have a broader environmental impact than previously considered.

  19. Scalable Cloning on Large-Scale GPU Platforms with Application to Time-Stepped Simulations on Grids

    DOE PAGES

    Yoginath, Srikanth B.; Perumalla, Kalyan S.

    2018-01-31

    Cloning is a technique to efficiently simulate a tree of multiple what-if scenarios that are unraveled during the course of a base simulation. However, cloned execution is highly challenging to realize on large, distributed memory computing platforms, due to the dynamic nature of the computational load across clones, and due to the complex dependencies spanning the clone tree. In this paper, we present the conceptual simulation framework, algorithmic foundations, and runtime interface of CloneX, a new system we designed for scalable simulation cloning. It efficiently and dynamically creates whole logical copies of a dynamic tree of simulations across a large parallel system without full physical duplication of computation and memory. The performance of a prototype implementation executed on up to 1,024 graphical processing units of a supercomputing system has been evaluated with three benchmarks—heat diffusion, forest fire, and disease propagation models—delivering a speedup of over two orders of magnitude compared to replicated runs. Finally, the results demonstrate a significantly faster and scalable way to execute many what-if scenario ensembles of large simulations via cloning using the CloneX interface.

  20. Large Eddy Simulation of Supercritical CO2 Through Bend Pipes

    NASA Astrophysics Data System (ADS)

    He, Xiaoliang; Apte, Sourabh; Dogan, Omer

    2017-11-01

    Supercritical Carbon Dioxide (sCO2) is investigated as a working fluid for power generation in thermal solar, fossil energy and nuclear power plants at high pressures. Severe erosion has been observed in sCO2 test loops, particularly in nozzles, turbine blades and pipe bends. It is hypothesized that complex flow features such as flow separation and property variations may lead to large oscillations in the wall shear stresses and result in material erosion. In this work, large eddy simulations are conducted at different Reynolds numbers (5000, 27,000, and 50,000) to investigate the effect of heat transfer in a 90-degree pipe bend with unit radius of curvature in order to identify the potential causes of the erosion. The simulation is first performed without heat transfer to validate the flow solver against available experimental and computational studies. Mean flow statistics, turbulent kinetic energy, shear stresses and wall force spectra are computed and compared with available experimental data. The formation of counter-rotating vortices, known as Dean vortices, is observed. Secondary flow patterns and swirl-switching flow motions are identified and visualized. Effects of heat transfer on these flow phenomena are then investigated by applying a constant heat flux at the wall. DOE Fossil Energy Crosscutting Technology Research Program.

  1. On simulating large earthquakes by Green's-function addition of smaller earthquakes

    NASA Astrophysics Data System (ADS)

    Joyner, William B.; Boore, David M.

    Simulation of ground motion from large earthquakes has been attempted by a number of authors using small earthquakes (subevents) as Green's functions and summing them, generally in a random way. We present a simple model for the random summation of subevents to illustrate how seismic scaling relations can be used to constrain methods of summation. In the model, η identical subevents are added together with their start times randomly distributed over the source duration T and their waveforms scaled by a factor κ. The subevents can be considered to be distributed on a fault with later start times at progressively greater distances from the focus, simulating the irregular propagation of a coherent rupture front. For simplicity, the distance between source and observer is assumed large compared to the source dimensions of the simulated event. By proper choice of η and κ, the spectrum of the simulated event deduced from these assumptions can be made to conform at both low- and high-frequency limits to any arbitrary seismic scaling law. For the ω-squared model with similarity (that is, with constant Mo·fo³ scaling, where fo is the corner frequency), the required values are η = (Mo/Moe)^(4/3) and κ = (Mo/Moe)^(−1/3), where Mo is the moment of the simulated event and Moe is the moment of the subevent. The spectra resulting from other choices of η and κ will not conform at both high and low frequency. If η is determined by the ratio of the rupture area of the simulated event to that of the subevent and κ = 1, the simulated spectrum will conform at high frequency to the ω-squared model with similarity, but not at low frequency. Because the high-frequency part of the spectrum is generally the important part for engineering applications, however, this choice of values for η and κ may be satisfactory in many cases. If η is determined by the ratio of the moment of the simulated event to that of the subevent and κ = 1, the simulated spectrum will conform at low frequency to
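A minimal numerical sketch of the random-summation model, using a toy damped-sine subevent wavelet (an assumption; any subevent waveform would do): η copies with start times uniform on [0, T], each scaled by κ. With η = (Mo/Moe)^(4/3) and κ = (Mo/Moe)^(−1/3), the start times cancel at zero frequency, so the summed spectral level there is η·κ = Mo/Moe times the subevent level, exactly as the scaling law requires:

```python
import numpy as np

# Random summation of subevents (toy wavelet, illustrative parameters).
rng = np.random.default_rng(42)
dt, nt = 0.01, 4096
t = np.arange(nt) * dt
sub = np.exp(-t / 0.5) * np.sin(2 * np.pi * 2.0 * t)   # toy subevent wavelet

R = 64.0                                # target moment ratio Mo/Moe
eta = int(round(R ** (4.0 / 3.0)))      # number of subevents: 64^(4/3) = 256
kappa = R ** (-1.0 / 3.0)               # amplitude scale: 64^(-1/3) = 1/4
T = 5.0                                 # source duration of the simulated event

big = np.zeros(nt)
for t0 in rng.uniform(0.0, T, eta):     # start times uniform on [0, T]
    k = int(round(t0 / dt))
    big[k:] += kappa * sub[: nt - k]    # delayed, scaled copy of the subevent

spec_big = np.abs(np.fft.rfft(big)) * dt
spec_sub = np.abs(np.fft.rfft(sub)) * dt
ratio_lf = spec_big[0] / spec_sub[0]    # zero-frequency ratio, expect ~R = 64
```

At high frequency the copies stack incoherently, giving a level of roughly sqrt(η)·κ = R^(1/3) times the subevent level, the ω-squared similarity scaling quoted in the abstract.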

  2. Flow-through electroporation based on constant voltage for large-volume transfection of cells.

    PubMed

    Geng, Tao; Zhan, Yihong; Wang, Hsiang-Yu; Witting, Scott R; Cornetta, Kenneth G; Lu, Chang

    2010-05-21

    Genetic modification of cells is a critical step involved in many cell therapy and gene therapy protocols. In these applications, cell samples of large volume (10⁸-10⁹ cells) are often processed for transfection. This poses new challenges for current transfection methods and practices. Here we present a novel flow-through electroporation method for delivery of genes into cells at high flow rates (up to approximately 20 mL/min) based on disposable microfluidic chips, a syringe pump, and a low-cost direct current (DC) power supply that provides a constant voltage. By eliminating the pulse generators used in conventional electroporation, we dramatically lowered the cost of the apparatus and improved the stability and consistency of the electroporation field for long-time operation. We tested the delivery of pEGFP-C1 plasmids encoding enhanced green fluorescent protein into Chinese hamster ovary (CHO-K1) cells in devices of various dimensions and geometries. Cells were mixed with plasmids and then flowed through a fluidic channel continuously while a constant voltage was established across the device. Together with the applied voltage, the geometry and dimensions of the fluidic channel determined the electrical parameters of the electroporation. With the optimal design, approximately 75% of the viable CHO cells were transfected after the procedure. We also generalize the guidelines for scaling up these flow-through electroporation devices. We envision that this technique will serve as a generic and low-cost tool for a variety of clinical applications requiring large volumes of transfected cells. Copyright 2010 Elsevier B.V. All rights reserved.
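How channel geometry sets the field under a constant voltage can be shown with a simple series-resistor sketch. The dimensions below are invented for illustration, not the paper's device: segments carry the same current, so the local field E_i = V / (A_i · Σ L_j/A_j) concentrates in the narrow section in inverse proportion to its cross-section:

```python
# Resistor-divider sketch of a flow-through electroporation channel under a
# constant DC voltage. All dimensions are hypothetical.
V = 60.0                                   # applied DC voltage [V]
segments = [                               # (length [m], cross-section [m^2])
    (5e-3, 1.0e-6),                        # wide inlet section
    (1e-3, 1.0e-7),                        # narrow electroporation section
    (5e-3, 1.0e-6),                        # wide outlet section
]
# geometric resistance factor: R_i is proportional to L_i / A_i for uniform
# conductivity, so sigma cancels out of the field expression
total = sum(L / A for L, A in segments)
fields = [V / (A * total) for L, A in segments]  # local field E_i [V/m]
# the narrow section sees a 10x higher field than the wide sections,
# matching its 10x smaller cross-section
```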

  3. Railroads and the Environment : Estimation of Fuel Consumption in Rail Transportation : Volume 3. Comparison of Computer Simulations with Field Measurements

    DOT National Transportation Integrated Search

    1978-09-01

    This report documents comparisons between extensive rail freight service measurements (previously presented in Volume II) and simulations of the same operations using a sophisticated train performance calculator computer program. The comparisons cove...

  4. Evolution of precipitation extremes in two large ensembles of climate simulations

    NASA Astrophysics Data System (ADS)

    Martel, Jean-Luc; Mailhot, Alain; Talbot, Guillaume; Brissette, François; Ludwig, Ralf; Frigon, Anne; Leduc, Martin; Turcotte, Richard

    2017-04-01

    Recent studies project significant changes in the future distribution of precipitation extremes due to global warming. It is likely that extreme precipitation intensity will increase in a future climate and that extreme events will be more frequent. In this work, annual maxima daily precipitation series from the Canadian Earth System Model (CanESM2) 50-member large ensemble (spatial resolution of 2.8° × 2.8°) and the Community Earth System Model (CESM1) 40-member large ensemble (spatial resolution of 1° × 1°) are used to investigate extreme precipitation over the historical (1980-2010) and future (2070-2100) periods. The use of these ensembles results in respectively 1500 (30 years × 50 members) and 1200 (30 years × 40 members) simulated years over both the historical and future periods. These large datasets allow the computation of empirical daily extreme precipitation quantiles for large return periods. Using the CanESM2 and CESM1 large ensembles, extreme daily precipitation with return periods ranging from 2 to 100 years are computed in historical and future periods to assess the impact of climate change. Results indicate that daily precipitation extremes generally increase in the future over most land grid points and that these increases will also impact the 100-year extreme daily precipitation. Considering that many public infrastructures have lifespans exceeding 75 years, the increase in extremes has important implications for service levels of water infrastructures and public safety. Estimated increases in precipitation associated with very extreme precipitation events (e.g. 100 years) will drastically change the likelihood of flooding and their extent in future climate. These results, although interesting, need to be extended to sub-daily durations, relevant for urban flooding protection and urban infrastructure design (e.g. sewer networks, culverts). Models and simulations at finer spatial and temporal resolution are therefore needed.
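The pooling idea is what makes the empirical quantiles possible: 50 members × 30 years of annual daily maxima give 1500 samples, so a 100-year return level (annual exceedance probability 1/100) can be read directly from the sorted sample instead of extrapolated from a fitted distribution. Gumbel-distributed synthetic maxima stand in for the model output below; the plotting-position choice is an illustrative convention:

```python
import numpy as np

# Empirical return levels from a pooled large ensemble of annual maxima.
rng = np.random.default_rng(7)
members, years = 50, 30
maxima = rng.gumbel(loc=30.0, scale=8.0, size=members * years)  # mm/day, synthetic

def empirical_return_level(sample, T):
    """Value exceeded with annual probability 1/T (Weibull plotting position)."""
    x = np.sort(sample)
    p = np.arange(1, x.size + 1) / (x.size + 1)   # non-exceedance probability
    return np.interp(1.0 - 1.0 / T, p, x)

r100 = empirical_return_level(maxima, 100)        # 100-yr level, no fit needed
exact100 = 30.0 - 8.0 * np.log(-np.log(1.0 - 1.0 / 100))  # true Gumbel quantile
```

With only a 30-year single-member record, the same empirical estimate would sit at the very edge of the sample; the 1500-year pool is what moves the 100-year level into the well-sampled interior.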

  5. Large-Scale Brain Simulation and Disorders of Consciousness. Mapping Technical and Conceptual Issues.

    PubMed

    Farisco, Michele; Kotaleski, Jeanette H; Evers, Kathinka

    2018-01-01

    Modeling and simulations have gained a leading position in contemporary attempts to describe, explain, and quantitatively predict the human brain's operations. Computer models are highly sophisticated tools developed to achieve an integrated knowledge of the brain, with the aim of overcoming the current fragmentation resulting from different neuroscientific approaches. In this paper we investigate the plausibility of simulation technologies for emulation of consciousness and the potential clinical impact of large-scale brain simulation on the assessment and care of disorders of consciousness (DOCs), e.g., Coma, Vegetative State/Unresponsive Wakefulness Syndrome, Minimally Conscious State. Notwithstanding their technical limitations, we suggest that simulation technologies may offer new solutions to old practical problems, particularly in clinical contexts. We take DOCs as an illustrative case, arguing that the simulation of neural correlates of consciousness is potentially useful for improving treatments of patients with DOCs.

  6. Microscale simulations of shock interaction with large assembly of particles for developing point-particle models

    NASA Astrophysics Data System (ADS)

    Thakur, Siddharth; Neal, Chris; Mehta, Yash; Sridharan, Prasanth; Jackson, Thomas; Balachandar, S.

    2017-01-01

    Microscale simulations are being conducted for developing point-particle and other related models that are needed for the mesoscale and macroscale simulations of explosive dispersal of particles. These particle models are required to compute (a) instantaneous aerodynamic force on the particle and (b) instantaneous net heat transfer between the particle and the surrounding. A strategy for a sequence of microscale simulations has been devised that allows systematic development of the hybrid surrogate models that are applicable at conditions representative of the explosive dispersal application. The ongoing microscale simulations seek to examine particle force dependence on: (a) Mach number, (b) Reynolds number, and (c) volume fraction (different particle arrangements such as cubic, face-centered cubic (FCC), body-centered cubic (BCC) and random). Future plans include investigation of sequences of fully-resolved microscale simulations consisting of an array of particles subjected to more realistic time-dependent flows that progressively better approximate the actual problem of explosive dispersal. Additionally, effects of particle shape, size, and number in simulation as well as the transient particle deformation dependence on various parameters including: (a) particle material, (b) medium material, (c) multiple particles, (d) incoming shock pressure and speed, (e) medium to particle impedance ratio, (f) particle shape and orientation to shock, etc. are being investigated.
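The lattice arrangements named above bound the volume fractions reachable with touching, equal-sized spheres, and the classic values fall straight out of the cubic unit-cell geometry. This is a generic textbook calculation, not a result from the abstract:

```python
import math

# Maximum sphere volume fractions for the three ordered arrangements, taking
# a cubic unit cell of side a = 1 and spheres sized so neighbours just touch.
def packing_fraction(spheres_per_cell, radius):
    """Volume fraction = N * (4/3) * pi * r^3 / a^3 with a = 1."""
    return spheres_per_cell * (4.0 / 3.0) * math.pi * radius ** 3

sc = packing_fraction(1, 0.5)                       # touch along the cube edge
bcc = packing_fraction(2, math.sqrt(3) / 4)          # touch along the body diagonal
fcc = packing_fraction(4, math.sqrt(2) / 4)          # touch along the face diagonal
# sc = pi/6 ~ 0.524, bcc = pi*sqrt(3)/8 ~ 0.680, fcc = pi*sqrt(2)/6 ~ 0.740
```

Random arrangements used in such studies sit below the FCC bound (random close packing of spheres is around 0.64), which is why the ordered lattices are useful end-member cases for the volume-fraction sweep.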

  7. Large eddy simulation for atmospheric boundary layer flow over flat and complex terrains

    NASA Astrophysics Data System (ADS)

    Han, Yi; Stoellinger, Michael; Naughton, Jonathan

    2016-09-01

    In this work, we present Large Eddy Simulation (LES) results of atmospheric boundary layer (ABL) flow over complex terrain with neutral stratification using the OpenFOAM-based simulator for on/offshore wind farm applications (SOWFA). The complete work flow to investigate the LES for the ABL over real complex terrain is described including meteorological-tower data analysis, mesh generation and case set-up. New boundary conditions for the lateral and top boundaries are developed and validated to allow inflow and outflow as required in complex terrain simulations. The turbulent inflow data for the terrain simulation is generated using a precursor simulation of a flat and neutral ABL. Conditionally averaged met-tower data is used to specify the conditions for the flat precursor simulation and is also used for comparison with the simulation results of the terrain LES. A qualitative analysis of the simulation results reveals boundary layer separation and recirculation downstream of a prominent ridge that runs across the simulation domain. Comparisons of mean wind speed, standard deviation and direction between the computed results and the conditionally averaged tower data show a reasonable agreement.

  8. UH-60A Black Hawk engineering simulation program. Volume 1: Mathematical model

    NASA Technical Reports Server (NTRS)

    Howlett, J. J.

    1981-01-01

    A nonlinear mathematical model of the UH-60A Black Hawk helicopter was developed. This mathematical model, which was based on the Sikorsky General Helicopter (Gen Hel) Flight Dynamics Simulation, provides NASA with an engineering simulation for performance and handling qualities evaluations. This mathematical model is a total systems definition of the Black Hawk helicopter represented at a uniform level of sophistication considered necessary for handling qualities evaluations. The model is a total force, large angle representation in six rigid body degrees of freedom. Rotor blade flapping, lagging, and hub rotational degrees of freedom are also represented. In addition to the basic helicopter modules, supportive modules were defined for the landing interface, power unit, ground effects, and gust penetration. Information defining the cockpit environment relevant to pilot-in-the-loop simulation is presented.

  9. Random number generators for large-scale parallel Monte Carlo simulations on FPGA

    NASA Astrophysics Data System (ADS)

    Lin, Y.; Wang, F.; Liu, B.

    2018-05-01

    Through parallelization, field programmable gate array (FPGA) can achieve unprecedented speeds in large-scale parallel Monte Carlo (LPMC) simulations. FPGA presents both new constraints and new opportunities for the implementations of random number generators (RNGs), which are key elements of any Monte Carlo (MC) simulation system. Using empirical and application-based tests, this study evaluates all of the four RNGs used in previous FPGA-based MC studies and newly proposed FPGA implementations for two well-known high-quality RNGs that are suitable for LPMC studies on FPGA. One of the newly proposed FPGA implementations, a parallel version of the additive lagged Fibonacci generator (Parallel ALFG), is found to be the best among the evaluated RNGs in fulfilling the needs of LPMC simulations on FPGA.
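The underlying generator is simple: an additive lagged Fibonacci generator (ALFG) computes x[n] = (x[n−j] + x[n−k]) mod 2^m over a circular lag buffer. The sketch below uses the classic lags (24, 55) and models "parallel streams" merely as independently seeded lag tables; the paper's FPGA-specific Parallel ALFG parallelisation is not reproduced here:

```python
import random

# Minimal software sketch of an ALFG: x[n] = (x[n-24] + x[n-55]) mod 2**32,
# held in a circular buffer of length 55. Lags (24, 55) are a classic choice.
class ALFG:
    J, K, M = 24, 55, 2 ** 32

    def __init__(self, seed):
        r = random.Random(seed)
        # at least one seed word must be odd for the maximal period; force all odd
        self.state = [r.randrange(self.M) | 1 for _ in range(self.K)]
        self.i = 0

    def next(self):
        s = self.state
        x = (s[(self.i - self.J) % self.K] + s[(self.i - self.K) % self.K]) % self.M
        s[self.i % self.K] = x              # overwrite the oldest lag entry
        self.i += 1
        return x

streams = [ALFG(seed) for seed in (1, 2, 3, 4)]        # "parallel" streams
draws = [[g.next() for _ in range(1000)] for g in streams]
```

Addition and a shallow circular buffer are exactly what make the ALFG attractive in hardware: one adder and a small block RAM per stream, no multiplier.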

  10. Volume and tissue composition preserving deformation of breast CT images to simulate breast compression in mammographic imaging

    NASA Astrophysics Data System (ADS)

    Han, Tao; Chen, Lingyun; Lai, Chao-Jen; Liu, Xinming; Shen, Youtao; Zhong, Yuncheng; Ge, Shuaiping; Yi, Ying; Wang, Tianpeng; Shaw, Chris C.

    2009-02-01

    Images of mastectomy breast specimens were acquired with a benchtop experimental cone beam CT (CBCT) system. The resulting images were segmented to model an uncompressed breast for simulation of various CBCT techniques. To further simulate conventional or tomosynthesis mammographic imaging for comparison with the CBCT technique, a deformation technique was developed to convert the CT data for an uncompressed breast to those for a compressed breast without altering the breast volume or regional breast density. With this technique, 3D breast deformation is separated into two 2D deformations in the coronal and axial views. To preserve the total breast volume and regional tissue composition, each 2D deformation step was achieved by altering the square pixels into rectangular ones with the pixel areas unchanged and resampling with the original square pixels using bilinear interpolation. The compression was modeled by first stretching the breast in the superior-inferior direction in the coronal view: the image data were first deformed by distorting the voxels with a uniform distortion ratio, then deformed again using distortion ratios varying with the breast thickness, and re-sampled. The deformation procedures were then applied in the axial view to stretch the breast in the chest wall-to-nipple direction while shrinking it in the mediolateral direction, after which the data were re-sampled and converted into data for uniform cubic voxels. Threshold segmentation was applied to the final deformed image data to obtain the 3D compressed breast model. Our results show that the original segmented CBCT image data were successfully converted into those for a compressed breast with the same volume and regional density preserved. Using this compressed breast model, conventional and tomosynthesis mammograms were simulated for comparison with CBCT.
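
The core trick described above — stretching one axis by a factor s while shrinking the other by 1/s, so each pixel's area is unchanged, then resampling with bilinear interpolation — can be sketched as follows. This is a generic illustration, not the authors' code; the function and tolerance choices are assumptions:

```python
import numpy as np

def bilinear_sample(img, ys, xs):
    """Sample img at fractional row coords ys and column coords xs."""
    ny, nx = img.shape
    y0 = np.clip(np.floor(ys).astype(int), 0, ny - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, nx - 2)
    dy = np.clip(ys - y0, 0.0, 1.0)[:, None]
    dx = np.clip(xs - x0, 0.0, 1.0)[None, :]
    a = img[np.ix_(y0, x0)]
    b = img[np.ix_(y0, x0 + 1)]
    c = img[np.ix_(y0 + 1, x0)]
    d = img[np.ix_(y0 + 1, x0 + 1)]
    return a * (1 - dy) * (1 - dx) + b * (1 - dy) * dx + c * dy * (1 - dx) + d * dy * dx

def stretch_area_preserving(img, s):
    """Stretch rows by s, shrink columns by 1/s (pixel areas unchanged),
    resampling onto a square grid; total intensity is approximately kept."""
    ny, nx = img.shape
    out_ny, out_nx = round(ny * s), round(nx / s)
    ys = (np.arange(out_ny) + 0.5) / s - 0.5   # map output pixel centers
    xs = (np.arange(out_nx) + 0.5) * s - 0.5   # back to source coordinates
    return bilinear_sample(img, ys, xs)
```

Because the Jacobian of the map is s * (1/s) = 1, intensities need no rescaling and the integrated "tissue amount" is conserved up to interpolation error.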

  11. Testing large volume water treatment and crude oil ...

    EPA Pesticide Factsheets

    EPA’s Homeland Security Research Program (HSRP) partnered with the Idaho National Laboratory (INL) to build the Water Security Test Bed (WSTB) at the INL test site outside of Idaho Falls, Idaho. The WSTB was built using an 8-inch (20 cm) diameter cement-mortar lined drinking water pipe that had previously been taken out of service. The pipe was exhumed from the INL grounds and oriented in the shape of a small drinking water distribution system. Effluent from the pipe is captured in a lagoon. The WSTB can support drinking water distribution system research on a variety of drinking water treatment topics including biofilms, water quality, sensors, and homeland security related contaminants. Because the WSTB is constructed of real drinking water distribution system pipes, research can be conducted under conditions similar to those in a real drinking water system. In 2014, the WSTB pipe was experimentally contaminated with Bacillus globigii spores, a non-pathogenic surrogate for the pathogenic B. anthracis, and then decontaminated using chlorine dioxide. In 2015, the WSTB was used to perform the following experiments: • Four mobile disinfection technologies were tested for their ability to disinfect large volumes of biologically contaminated “dirty” water from the WSTB. B. globigii spores acted as the biological contaminant. The four technologies evaluated included: (1) Hayward Saline C™ 6.0 Chlorination System, (2) Advanced Oxidation Process (A

  12. Large-eddy simulation of a turbulent mixing layer

    NASA Technical Reports Server (NTRS)

    Mansour, N. N.; Ferziger, J. H.; Reynolds, W. C.

    1978-01-01

    The three dimensional, time dependent (incompressible) vorticity equations were used to simulate numerically the decay of isotropic box turbulence and time developing mixing layers. The vorticity equations were spatially filtered to define the large scale turbulence field, and the subgrid scale turbulence was modeled. A general method was developed to show numerical conservation of momentum, vorticity, and energy. The terms that arise from filtering the equations were treated (for both periodic boundary conditions and no stress boundary conditions) in a fast and accurate way by using fast Fourier transforms. Use of vorticity as the principal variable is shown to produce results equivalent to those obtained by use of the primitive variable equations.
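
The filtering step described above — defining the large-scale field under periodic boundary conditions via fast Fourier transforms — can be illustrated with a sharp spectral cutoff. This is a generic 1D sketch, not the authors' filter:

```python
import numpy as np

def spectral_filter(u, kc):
    """Sharp low-pass filter on a periodic 1D field: zero every Fourier
    mode with integer wavenumber |k| > kc, keeping the large scales."""
    uh = np.fft.fft(u)
    k = np.fft.fftfreq(u.size, d=1.0 / u.size)  # integer wavenumbers
    uh[np.abs(k) > kc] = 0.0
    return np.fft.ifft(uh).real
```

In an LES code the same operation, applied to the vorticity or velocity field, separates the resolved large-scale motion from the subgrid scales that must be modeled.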

  13. Construction and Start-up of a Large-Volume Thermostat for Dielectric-Constant Gas Thermometry

    NASA Astrophysics Data System (ADS)

    Merlone, A.; Moro, F.; Zandt, T.; Gaiser, C.; Fellmuth, B.

    2010-07-01

    A liquid-bath thermostat with a volume of about 800 L was designed to provide a suitable thermal environment for a dielectric-constant gas thermometer (DCGT) in the range from the triple point of mercury to the melting point of gallium. In this article, results obtained with the unique, huge thermostat without the DCGT measuring chamber are reported to demonstrate the capability of controlling the temperature of very large systems at a metrological level. First tests showed that the bath, together with its temperature controller, provides a temperature variation of less than ±0.5 mK peak-to-peak. This temperature instability could be maintained over a period of several days. In the central working volume (diameter 500 mm, height 650 mm), in which the vacuum chamber containing the measuring system of the DCGT will later be placed, the temperature inhomogeneity has also been demonstrated to be well below 1 mK.

  14. Large eddy simulation on Rayleigh–Bénard convection of cold water in the neighborhood of the maximum density

    NASA Astrophysics Data System (ADS)

    Huang, Xiao-Jie; Zhang, Li; Hu, Yu-Peng; Li, You-Rong

    2018-06-01

    In order to understand the effect of the Rayleigh number, the density inversion phenomenon, and the aspect ratio on the flow patterns and the heat transfer characteristics of Rayleigh–Bénard convection of cold water in the neighborhood of the maximum density, a series of large eddy simulations are conducted by using the finite volume method. The Rayleigh number ranges between 10^6 and 10^9; the density inversion parameter and the aspect ratio are varied from 0 to 0.9 and from 0.4 to 2.5, respectively. The results indicate that the reversal of the large scale circulation (LSC) occurs with the increase of the Rayleigh number. When there exists a density inversion phenomenon, the key driver for the LSC is hot plumes. When the density inversion parameter is large enough, a stagnant region is found near the top of the container, as the hot plumes cannot move to the top wall. The flow pattern structures depend mainly on the aspect ratio. When the aspect ratio is small, the rolls are vertically stacked and the flow keeps switching among different flow states. For a moderate aspect ratio, different long-lived roll states coexist at a fixed aspect ratio. For a larger aspect ratio, the flow state is everlasting. The number of rolls increases with the increase of the aspect ratio. Furthermore, the aspect ratio has only a slight influence on the time-averaged Nusselt number for all density inversion parameters.

  15. Large eddy simulation study of the kinetic energy entrainment by energetic turbulent flow structures in large wind farms

    NASA Astrophysics Data System (ADS)

    VerHulst, Claire; Meneveau, Charles

    2014-02-01

    In this study, we address the question of how kinetic energy is entrained into large wind turbine arrays and, in particular, how large-scale flow structures contribute to such entrainment. Previous research has shown this entrainment to be an important limiting factor in the performance of very large arrays where the flow becomes fully developed and there is a balance between the forcing of the atmospheric boundary layer and the resistance of the wind turbines. Given the high Reynolds numbers and domain sizes on the order of kilometers, we rely on wall-modeled large eddy simulation (LES) to simulate turbulent flow within the wind farm. Three-dimensional proper orthogonal decomposition (POD) analysis is then used to identify the most energetic flow structures present in the LES data. We quantify the contribution of each POD mode to the kinetic energy entrainment and its dependence on the layout of the wind turbine array. The primary large-scale structures are found to be streamwise, counter-rotating vortices located above the height of the wind turbines. While the flow is periodic, the geometry is not invariant to all horizontal translations due to the presence of the wind turbines and thus POD modes need not be Fourier modes. Differences of the obtained modes with Fourier modes are documented. Some of the modes are responsible for a large fraction of the kinetic energy flux to the wind turbine region. Surprisingly, more flow structures (POD modes) are needed to capture at least 40% of the turbulent kinetic energy, for which the POD analysis is optimal, than are needed to capture at least 40% of the kinetic energy flux to the turbines. For comparison, we consider the cases of aligned and staggered wind turbine arrays in a neutral atmospheric boundary layer as well as a reference case without wind turbines. While the general characteristics of the flow structures are robust, the net kinetic energy entrainment to the turbines depends on the presence and relative
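
Proper orthogonal decomposition of the kind used above is commonly computed from a snapshot matrix via a singular value decomposition; the squared singular values rank the modes by the turbulent kinetic energy they capture. A generic snapshot-POD sketch (not the authors' code), where each column is one flattened flow field:

```python
import numpy as np

def pod_modes(snapshots):
    """Snapshot POD: columns of `snapshots` are flattened flow fields.
    Returns spatial modes (columns of U), singular values s, and temporal
    coefficients Vt of the mean-subtracted fluctuation field."""
    mean = snapshots.mean(axis=1, keepdims=True)
    fluct = snapshots - mean
    # Economy-size SVD: fluct = U @ diag(s) @ Vt
    U, s, Vt = np.linalg.svd(fluct, full_matrices=False)
    return U, s, Vt
```

The fraction of turbulent kinetic energy captured by mode i is s[i]**2 / (s**2).sum(), which is how statements like "40% of the turbulent kinetic energy" are quantified.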

  16. Interactive Exploration and Analysis of Large-Scale Simulations Using Topology-Based Data Segmentation.

    PubMed

    Bremer, Peer-Timo; Weber, Gunther; Tierny, Julien; Pascucci, Valerio; Day, Marcus S; Bell, John B

    2011-09-01

    Large-scale simulations are increasingly being used to study complex scientific and engineering phenomena. As a result, advanced visualization and data analysis are also becoming an integral part of the scientific process. Often, a key step in extracting insight from these large simulations involves the definition, extraction, and evaluation of features in the space and time coordinates of the solution. However, in many applications, these features involve a range of parameters and decisions that will affect the quality and direction of the analysis. Examples include particular level sets of a specific scalar field, or local inequalities between derived quantities. A critical step in the analysis is to understand how these arbitrary parameters/decisions impact the statistical properties of the features, since such a characterization will help to evaluate the conclusions of the analysis as a whole. We present a new topological framework that in a single pass extracts and encodes entire families of possible feature definitions as well as their statistical properties. For each time step we construct a hierarchical merge tree, a highly compact yet flexible feature representation. While this data structure is more than two orders of magnitude smaller than the raw simulation data, it allows us to extract a set of features for any given parameter selection in a postprocessing step. Furthermore, we augment the trees with additional attributes, making it possible to gather a large number of useful global, local, and conditional statistics that would otherwise be extremely difficult to compile. We also use this representation to create tracking graphs that describe the temporal evolution of the features over time. Our system provides a linked-view interface to explore the time-evolution of the graph interactively alongside the segmentation, thus making it possible to perform extensive data analysis in a very efficient manner. 
We demonstrate our framework by extracting
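
The single-pass construction can be illustrated in one dimension: sweeping grid points in decreasing value order with union-find records, for every threshold at once, how many superlevel-set components exist. This is the essence of a merge tree, shown here as a didactic sketch far simpler than the paper's 3D data structure:

```python
def component_counts(values):
    """For a 1D domain where i and i+1 are neighbors, return
    (thresholds, counts): the number of connected components of
    {i : values[i] >= t} immediately after each point is activated,
    sweeping t downward through the distinct values."""
    n = len(values)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    order = sorted(range(n), key=lambda i: -values[i])
    active = [False] * n
    ncomp = 0
    thresholds, counts = [], []
    for i in order:
        active[i] = True
        ncomp += 1  # i starts as its own component...
        for j in (i - 1, i + 1):
            if 0 <= j < n and active[j] and find(i) != find(j):
                parent[find(i)] = find(j)  # ...then merges with active neighbors
                ncomp -= 1
        thresholds.append(values[i])
        counts.append(ncomp)
    return thresholds, counts
```

Merges in this sweep are exactly the interior nodes of the merge tree; storing them lets any threshold query be answered later without touching the raw field again.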

  17. Dynamic large eddy simulation: Stability via realizability

    NASA Astrophysics Data System (ADS)

    Mokhtarpoor, Reza; Heinz, Stefan

    2017-10-01

    The concept of dynamic large eddy simulation (LES) is highly attractive: such methods can dynamically adjust to changing flow conditions, which is known to be highly beneficial. For example, this avoids the use of empirical, case-dependent approximations (like damping functions). Ideally, dynamic LES should be local in physical space (without involving artificial clipping parameters), and it should be stable for a wide range of simulation time steps, Reynolds numbers, and numerical schemes. These properties are not trivial, and dynamic LES has suffered from such problems for decades. We address these questions by performing dynamic LES of periodic hill flow, including separation, at a high Reynolds number Re = 37 000. For the case considered, the main result of our studies is that it is possible to design LES that has the desired properties. It requires physical consistency: a PDF-realizable and stress-realizable LES model, which requires the inclusion of the turbulent kinetic energy in the LES calculation. LES models that do not honor such physical consistency can become unstable. We do not find support for the previous assumption that long-term correlations of negative dynamic model parameters are responsible for instability. Instead, we conclude that instability is caused by the stable spatial organization of significant unphysical states, which are represented by wall-type gradient streaks of the standard deviation of the dynamic model parameter. The applicability of our realizability stabilization to other dynamic models (including the dynamic Smagorinsky model) is discussed.

  18. Large-scale tidal effect on redshift-space power spectrum in a finite-volume survey

    NASA Astrophysics Data System (ADS)

    Akitsu, Kazuyuki; Takada, Masahiro; Li, Yin

    2017-04-01

    Long-wavelength matter inhomogeneities contain cleaner information on the nature of primordial perturbations as well as the physics of the early Universe. The large-scale coherent overdensity and tidal force, not directly observable for a finite-volume galaxy survey, are both related to the Hessian of large-scale gravitational potential and therefore are of equal importance. We show that the coherent tidal force causes a homogeneous anisotropic distortion of the observed distribution of galaxies in all three directions, perpendicular and parallel to the line-of-sight direction. This effect mimics the redshift-space distortion signal of galaxy peculiar velocities, as well as a distortion by the Alcock-Paczynski effect. We quantify its impact on the redshift-space power spectrum to the leading order, and discuss its importance for ongoing and upcoming galaxy surveys.

  19. Large-eddy simulation, fuel rod vibration and grid-to-rod fretting in pressurized water reactors

    DOE PAGES

    Christon, Mark A.; Lu, Roger; Bakosi, Jozsef; ...

    2016-10-01

    Grid-to-rod fretting (GTRF) in pressurized water reactors is a flow-induced vibration phenomenon that results in wear and fretting of the cladding material on fuel rods. GTRF is responsible for over 70% of the fuel failures in pressurized water reactors in the United States. Predicting the GTRF wear and concomitant interval between failures is important because of the large costs associated with reactor shutdown and replacement of fuel rod assemblies. The GTRF-induced wear process involves turbulent flow, mechanical vibration, tribology, and time-varying irradiated material properties in complex fuel assembly geometries. This paper presents a new approach for predicting GTRF-induced fuel rod wear that uses high-resolution implicit large-eddy simulation to drive nonlinear transient dynamics computations. The GTRF fluid–structure problem is separated into the simulation of the turbulent flow field in the complex-geometry fuel-rod bundles using implicit large-eddy simulation, the calculation of statistics of the resulting fluctuating structural forces, and the nonlinear transient dynamics analysis of the fuel rod. Ultimately, the methods developed here can be used, in conjunction with operational management, to improve reactor core designs in which fuel rod failures are minimized or potentially eliminated. Furthermore, the robustness of both the structural forces computed from the turbulent flow simulations and the results from the transient dynamics analyses highlights the progress made towards achieving a predictive simulation capability for the GTRF problem.

  20. Large-eddy simulation, fuel rod vibration and grid-to-rod fretting in pressurized water reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christon, Mark A.; Lu, Roger; Bakosi, Jozsef

    Grid-to-rod fretting (GTRF) in pressurized water reactors is a flow-induced vibration phenomenon that results in wear and fretting of the cladding material on fuel rods. GTRF is responsible for over 70% of the fuel failures in pressurized water reactors in the United States. Predicting the GTRF wear and concomitant interval between failures is important because of the large costs associated with reactor shutdown and replacement of fuel rod assemblies. The GTRF-induced wear process involves turbulent flow, mechanical vibration, tribology, and time-varying irradiated material properties in complex fuel assembly geometries. This paper presents a new approach for predicting GTRF-induced fuel rod wear that uses high-resolution implicit large-eddy simulation to drive nonlinear transient dynamics computations. The GTRF fluid–structure problem is separated into the simulation of the turbulent flow field in the complex-geometry fuel-rod bundles using implicit large-eddy simulation, the calculation of statistics of the resulting fluctuating structural forces, and the nonlinear transient dynamics analysis of the fuel rod. Ultimately, the methods developed here can be used, in conjunction with operational management, to improve reactor core designs in which fuel rod failures are minimized or potentially eliminated. Furthermore, the robustness of both the structural forces computed from the turbulent flow simulations and the results from the transient dynamics analyses highlights the progress made towards achieving a predictive simulation capability for the GTRF problem.

  1. Research on volume metrology method of large vertical energy storage tank based on internal electro-optical distance-ranging method

    NASA Astrophysics Data System (ADS)

    Hao, Huadong; Shi, Haolei; Yi, Pengju; Liu, Ying; Li, Cunjun; Li, Shuguang

    2018-01-01

    A volume metrology method based on the internal electro-optical distance-ranging method is established for large vertical energy-storage tanks. After analyzing the mathematical model for vertical-tank volume calculation, key processing algorithms for the point-cloud data, such as gross-error elimination, filtering, streamlining, and radius calculation, are studied. The corresponding volume values at different liquid levels are calculated automatically by computing the cross-sectional area along the horizontal direction and integrating in the vertical direction. To design the comparison system, a vertical tank with a nominal capacity of 20,000 m³ was selected as the research object; the results show that the method has good repeatability and reproducibility. Using the conventional capacity measurement method as a reference, the relative deviation of the calculated volume is less than 0.1%, meeting the measurement requirements and demonstrating the feasibility and effectiveness of the method.
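
The volume calculation described — a cross-sectional area at each horizontal level, integrated vertically — reduces to a sketch like the following. This is a hypothetical simplification: the actual pipeline fits per-level radii from the scanned point cloud before this step.

```python
import math

def tank_volume(radii, dz):
    """Volume of a vertical tank from per-level radii (circular
    cross-sections assumed), integrated over height with the
    trapezoidal rule. `dz` is the vertical spacing between levels."""
    areas = [math.pi * r * r for r in radii]
    return sum(dz * (a + b) / 2.0 for a, b in zip(areas, areas[1:]))
```

Stopping the sum at a given liquid level yields the partial capacity table reported for different filling heights.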

  2. Effect of filtration rates on hollow fiber ultrafilter concentration of viruses and protozoans from large volumes of water

    EPA Science Inventory

    Aims: To describe the ability of tangential flow hollow-fiber ultrafiltration to recover viruses from large volumes of water when run either at high filtration rates or lower filtration rates and recover Cryptosporidium parvum at high filtration rates. Methods and Results: Wate...

  3. Electro-Mechanical Simulation of a Large Aperture MOEMS Fabry-Perot Tunable Filter

    NASA Technical Reports Server (NTRS)

    Kuhn, Jonathan L.; Barclay, Richard B.; Greenhouse, Matthew A.; Mott, D. Brent; Satyapal, Shobita; Powers, Edward I. (Technical Monitor)

    2000-01-01

    We are developing a micro-machined, electrostatically actuated Fabry-Perot tunable filter with a large clear aperture for application in high-throughput wide-field imaging spectroscopy and lidar systems. In the first phase of this effort, we are developing key components based on coupled electro-mechanical simulations. In particular, the movable etalon plate design leverages high coating stresses to yield a flat surface in drum-head tension over a large diameter (12.5 mm). In this approach, the cylindrical silicon movable plate is back-etched, resulting in an optically coated membrane that is suspended from a thick silicon support ring. Understanding the interaction between the support ring, suspended membrane, and coating is critical to developing surfaces that are flat to within stringent etalon requirements. In this work, we present the simulations used to develop the movable plate, spring suspension system, and electrostatic actuation mechanism. We also present results from tests of fabricated proof-of-concept components.

  4. Systems and methods for the detection of low-level harmful substances in a large volume of fluid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carpenter, Michael V.; Roybal, Lyle G.; Lindquist, Alan

    A method and device for the detection of low-level harmful substances in a large volume of fluid, comprising using a concentrator system to produce a retentate and analyzing the retentate for the presence of at least one harmful substance. The concentrator system performs a method comprising pumping at least 10 liters of fluid from a sample source through a filter. While pumping, the concentrator system diverts retentate from the filter into a container. The concentrator system also recirculates at least part of the retentate in the container again through the filter. The concentrator system controls the speed of the pump with a control system, thereby maintaining a fluid pressure of less than 25 psi during the pumping of the fluid; monitors the quantity of retentate within the container with the control system; and maintains a reduced volume level and a target volume of retentate.

  5. Plasma volume losses during simulated weightlessness in women

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drew, H.; Fortney, S.; La France, N.

    Six healthy women not using oral contraceptives underwent two 11-day intervals of complete bedrest (BR), with the BR periods separated by 4 weeks of ambulatory control. Change in plasma volume (PV) was monitored during BR to test the hypothesis that these women would show a smaller decrease in PV than the PV values reported in similarly stressed men, due to the water-retaining effects of the female hormones. Bedrest periods were timed to coincide with opposing stages of the menstrual cycle in each woman. The menstrual cycle was divided into 4 separate stages: early follicular, ovulatory, early luteal, and late luteal phases. The percent decrease of PV was consistent for each woman who began BR while in stage 1, 3, or 4 of the menstrual cycle. However, the women who began in stage 2 showed a transient attenuation in PV loss. Overall, PV changes seen in women during BR were similar to those reported for men. The water-retaining effects of menstrual hormones were evident only during the high-estrogen ovulatory stage. The authors conclude that the protective effects of menstrual hormones on PV losses during simulated weightless conditions appear to be only small and transient.

  6. Large-Scale Brain Simulation and Disorders of Consciousness. Mapping Technical and Conceptual Issues

    PubMed Central

    Farisco, Michele; Kotaleski, Jeanette H.; Evers, Kathinka

    2018-01-01

    Modeling and simulations have gained a leading position in contemporary attempts to describe, explain, and quantitatively predict the human brain’s operations. Computer models are highly sophisticated tools developed to achieve an integrated knowledge of the brain, with the aim of overcoming the current fragmentation resulting from different neuroscientific approaches. In this paper we investigate the plausibility of simulation technologies for emulation of consciousness and the potential clinical impact of large-scale brain simulation on the assessment and care of disorders of consciousness (DOCs), e.g., Coma, Vegetative State/Unresponsive Wakefulness Syndrome, Minimally Conscious State. Notwithstanding their technical limitations, we suggest that simulation technologies may offer new solutions to old practical problems, particularly in clinical contexts. We take DOCs as an illustrative case, arguing that the simulation of neural correlates of consciousness is potentially useful for improving treatments of patients with DOCs. PMID:29740372

  7. Rugged large volume injection for sensitive capillary LC-MS environmental monitoring

    NASA Astrophysics Data System (ADS)

    Roberg-Larsen, Hanne; Abele, Silvija; Demir, Deniz; Dzabijeva, Diana; Amundsen, Sunniva F.; Wilson, Steven R.; Bartkevics, Vadims; Lundanes, Elsa

    2017-08-01

    A rugged and high-throughput capillary column (cLC) LC-MS switching platform using large volume injection and on-line automatic filtration and filter back-flush (AFFL) solid phase extraction (SPE) for analysis of environmental water samples with minimal sample preparation is presented. Although narrow columns and on-line sample preparation are used in the platform, high ruggedness is achieved; e.g., injection of 100 unfiltered water samples did not result in a pressure rise or clogging of the SPE/capillary columns (inner diameter 300 µm). In addition, satisfactory retention time stability and chromatographic resolution were features of the system. The potential of the platform for environmental water samples was demonstrated with various pharmaceutical products, which had detection limits (LOD) in the 0.05-12.5 ng/L range. Between-day and within-day repeatability of selected analytes was < 20% RSD.

  8. An open-loop controlled active lung simulator for preterm infants.

    PubMed

    Cecchini, Stefano; Schena, Emiliano; Silvestri, Sergio

    2011-01-01

    We describe the underlying theory, design, and experimental evaluation of an electromechanical analogue infant lung to simulate spontaneous breathing patterns of preterm infants. The aim of this work is to test the possibility of reproducing breathing patterns of preterm infants while taking air compressibility into consideration. The respiratory volume function represents the actuation pattern, and pulmonary pressure and flow-rate waveforms are obtained mathematically through the application of the perfect gas and adiabatic laws. The mathematical model divides the simulation interval into steps shorter than 1 ms, allowing an entire respiratory act to be considered as composed of a large number of almost instantaneous adiabatic transformations. The device consists of a spherical chamber in which the air is compressed by four cylinder-pistons, moved by stepper motors, and flows through a fluid-dynamic resistance, which also works as a flow-rate sensor. Specifically designed software generates the actuator motion, based on the desired ventilation parameters, without closed-loop control of the gas pneumatic parameters. The system is able to simulate tidal volumes from 3 to 8 ml, breathing frequencies from 60 to 120 bpm, and functional residual capacities from 25 to 80 ml. The simulated waveforms appear very close to the measured ones. Percentage differences on the tidal volume waveform vary from 7% for a tidal volume of 3 ml, down to 2.2-3.5% for tidal volumes in the range of 4-7 ml, and 1.3% for a tidal volume of 8 ml, over the whole breathing frequency and functional residual capacity ranges. The open-loop electromechanical simulator shows that gas compressibility can be theoretically assessed in the typical pneumatic variable range of preterm infant respiratory mechanics.
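
The gas-law step above can be sketched as follows: each sub-millisecond interval is treated as an adiabatic transformation, so chamber pressure follows P·V^γ = const, and the flow through a linear pneumatic resistance follows from the pressure drop. This is only a conceptual sketch under strong assumptions (gas mass in the chamber taken as fixed within a step; all device details omitted):

```python
import math

P_ATM = 101325.0  # ambient pressure, Pa
GAMMA = 1.4       # adiabatic index of air

def adiabatic_pressure(v0, p0, v):
    """Pressure after an adiabatic volume change from v0 to v
    (P * V**gamma = const for a fixed amount of gas)."""
    return p0 * (v0 / v) ** GAMMA

def flow_rate(p, resistance, p_atm=P_ATM):
    """Flow through a linear pneumatic resistance driven by the
    chamber-to-ambient pressure drop."""
    return (p - p_atm) / resistance
```

Chaining many such small steps over the piston-driven volume waveform reproduces the pressure and flow-rate waveforms the simulator targets.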

  9. Large-scale dynamo action precedes turbulence in shearing box simulations of the magnetorotational instability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhat, Pallavi; Ebrahimi, Fatima; Blackman, Eric G.

    Here, we study the dynamo generation (exponential growth) of large-scale (planar averaged) fields in unstratified shearing box simulations of the magnetorotational instability (MRI). In contrast to previous studies restricted to horizontal (x–y) averaging, we also demonstrate the presence of large-scale fields when vertical (y–z) averaging is employed instead. By computing space–time planar averaged fields and power spectra, we find large-scale dynamo action in the early MRI growth phase – a previously unidentified feature. Non-axisymmetric linear MRI modes with low horizontal wavenumbers and vertical wavenumbers near that of expected maximal growth amplify the large-scale fields exponentially before turbulence and high-wavenumber fluctuations arise. Thus the large-scale dynamo requires only linear fluctuations but not non-linear turbulence (as defined by mode–mode coupling). Vertical averaging also allows for monitoring the evolution of the large-scale vertical field, and we find that a feedback from horizontal low-wavenumber MRI modes provides a clue as to why the large-scale vertical field sustains against turbulent diffusion in the non-linear saturation regime. We compute the terms in the mean field equations to identify the individual contributions to large-scale field growth for both types of averaging. The large-scale fields obtained from vertical averaging are found to compare well with global simulations and quasi-linear analytical analysis from a previous study by Ebrahimi & Blackman. We discuss the potential implications of these new results for understanding the large-scale MRI dynamo saturation and turbulence.

  10. Large-scale dynamo action precedes turbulence in shearing box simulations of the magnetorotational instability

    DOE PAGES

    Bhat, Pallavi; Ebrahimi, Fatima; Blackman, Eric G.

    2016-07-06

    Here, we study the dynamo generation (exponential growth) of large-scale (planar averaged) fields in unstratified shearing box simulations of the magnetorotational instability (MRI). In contrast to previous studies restricted to horizontal (x–y) averaging, we also demonstrate the presence of large-scale fields when vertical (y–z) averaging is employed instead. By computing space–time planar averaged fields and power spectra, we find large-scale dynamo action in the early MRI growth phase – a previously unidentified feature. Non-axisymmetric linear MRI modes with low horizontal wavenumbers and vertical wavenumbers near that of expected maximal growth amplify the large-scale fields exponentially before turbulence and high-wavenumber fluctuations arise. Thus the large-scale dynamo requires only linear fluctuations but not non-linear turbulence (as defined by mode–mode coupling). Vertical averaging also allows for monitoring the evolution of the large-scale vertical field, and we find that a feedback from horizontal low-wavenumber MRI modes provides a clue as to why the large-scale vertical field sustains against turbulent diffusion in the non-linear saturation regime. We compute the terms in the mean field equations to identify the individual contributions to large-scale field growth for both types of averaging. The large-scale fields obtained from vertical averaging are found to compare well with global simulations and quasi-linear analytical analysis from a previous study by Ebrahimi & Blackman. We discuss the potential implications of these new results for understanding the large-scale MRI dynamo saturation and turbulence.
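
The two averaging conventions compared in this record differ only in which pair of axes is reduced. For a field stored as a 3D array indexed (x, y, z), a minimal sketch:

```python
import numpy as np

def planar_average(field, plane):
    """Planar average of a 3D field indexed (x, y, z): 'xy' averaging
    leaves a mean profile in z; 'yz' averaging leaves a profile in x."""
    axes = {'xy': (0, 1), 'yz': (1, 2)}[plane]
    return field.mean(axis=axes)
```

Applying this to the magnetic field at each output time gives the space-time diagrams of the large-scale field from which the exponential growth is measured.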

  11. Simulations and Characteristics of Large Solar Events Propagating Throughout the Heliosphere and Beyond (Invited)

    NASA Astrophysics Data System (ADS)

    Intriligator, D. S.; Sun, W.; Detman, T. R.; Dryer, M.; Intriligator, J.; Deehr, C. S.; Webber, W. R.; Gloeckler, G.; Miller, W. D.

    2015-12-01

    Large solar events can have severe adverse global impacts at Earth. These solar events can also propagate throughout the heliosphere and into the interstellar medium. We focus on the July 2012 and Halloween 2003 solar events. We simulate these events starting from the vicinity of the Sun at 2.5 Rs. We compare our three-dimensional (3D) time-dependent simulations to available spacecraft (s/c) observations at 1 AU and beyond. Based on comparisons of the predictions from our simulations with in-situ measurements, we find that the effects of these large solar events can be observed in the outer heliosphere, the heliosheath, and even into the interstellar medium. We use two simulation models. The HAFSS (HAF Source Surface) model is a kinematic model. HHMS-PI (Hybrid Heliospheric Modeling System with Pickup protons) is a numerical magnetohydrodynamic solar wind (SW) simulation model. Both HHMS-PI and HAFSS are ideally suited for these analyses since, starting at 2.5 Rs from the Sun, they model the slowly evolving background SW and the impulsive, time-dependent events associated with solar activity. Our models naturally reproduce dynamic 3D spatially asymmetric effects observed throughout the heliosphere. Pre-existing SW background conditions have a strong influence on the propagation of shock waves from solar events. Time-dependence is a crucial aspect of interpreting s/c data. We show comparisons of our simulation results with STEREO A, ACE, Ulysses, and Voyager s/c observations.

  12. Large-Eddy Simulation of Coherent Flow Structures within a Cubical Canopy

    NASA Astrophysics Data System (ADS)

    Inagaki, Atsushi; Castillo, Marieta Cristina L.; Yamashita, Yoshimi; Kanda, Manabu; Takimoto, Hiroshi

    2012-02-01

    Instantaneous flow structures "within" a cubical canopy are investigated via large-eddy simulation. The main topics of interest are: (1) large-scale coherent flow structures within a cubical canopy; (2) how these structures are coupled with the turbulent organized structures (TOS) above them; and (3) the classification and quantification of representative instantaneous flow patterns within a street canyon in relation to the coherent structures. We use a large numerical domain (2,560 m × 2,560 m × 1,710 m) with a fine spatial resolution (2.5 m), thereby simulating a complete daytime atmospheric boundary layer (ABL), as well as explicitly resolving a regular array of cubes (40 m in height) at the surface. A typical urban ABL is numerically modelled. In this situation, the constant heat supply from roof and floor surfaces sustains a convective mixed layer as a whole, but strong wind shear near the canopy top keeps the surface layer nearly neutral. The results reveal large coherent structures in both the velocity and temperature fields "within" the canopy layer. These structures are much larger than the cubes, and their shapes and locations are shown to be closely related to the TOS above them. We classify the instantaneous flow patterns in a cavity, specifically focusing on two characteristic flow patterns: flushing and cavity-eddy events. Flushing indicates a strong upward motion, while a cavity eddy is characterized by a dominant vortical motion within a single cavity. Flushing is clearly correlated with the TOS above, occurring frequently beneath low-momentum streaks. The instantaneous momentum and heat transport within and above a cavity due to flushing and cavity-eddy events are also quantified.

  13. Large eddy simulation applications in gas turbines.

    PubMed

    Menzies, Kevin

    2009-07-28

    The gas turbine presents significant challenges to any computational fluid dynamics technique. The combination of a wide range of flow phenomena with complex geometry is difficult to model in the context of Reynolds-averaged Navier-Stokes (RANS) solvers. We review the potential for large eddy simulation (LES) in modelling the flow in the different components of the gas turbine during a practical engineering design cycle. We show that while LES has demonstrated considerable promise for reliable prediction of many flows in the engine that are difficult for RANS, it is not a panacea, and considerable application challenges remain. However, for many flows, especially those dominated by shear layer mixing such as in combustion chambers and exhausts, LES has demonstrated a clear superiority over RANS for moderately complex geometries, although at significantly higher cost, which will remain an issue in making the calculations relevant within the design cycle.

  14. Wind turbine wakes in forest and neutral plane wall boundary layer large-eddy simulations

    NASA Astrophysics Data System (ADS)

    Schröttle, Josef; Piotrowski, Zbigniew; Gerz, Thomas; Englberger, Antonia; Dörnbrack, Andreas

    2016-09-01

    Wind turbine wake flow characteristics are studied in a strongly sheared and turbulent forest boundary layer and a neutral plane wall boundary layer flow. The reference simulations without a wind turbine yield results similar to earlier large-eddy simulations by Shaw and Schumann (1992) and Porte-Agel et al. (2000). To use the fields from the homogeneous turbulent boundary layers on the fly as inflow fields for the wind turbine wake simulations, a new and efficient methodology was developed for the multiscale geophysical flow solver EULAG. With this method, fully developed turbulent flow fields can be achieved upstream of the wind turbine that are independent of the wake flow. The large-eddy simulations reproduce known boundary-layer statistics such as the mean wind profile, momentum flux profile, and eddy dissipation rate of the plane wall and the forest boundary layer. The wake velocity deficit is more asymmetric above the forest and recovers faster downstream compared to the velocity deficit in the plane wall boundary layer. This is due to the inflection point in the mean streamwise velocity profile, with corresponding turbulent coherent structures of high turbulence intensity in the strong shear flow above the forest.

  15. Large Eddy Simulation of a Film Cooling Technique with a Plenum

    NASA Astrophysics Data System (ADS)

    Dharmarathne, Suranga; Sridhar, Narendran; Araya, Guillermo; Castillo, Luciano; Parameswaran, Sivapathasund

    2012-11-01

    Factors that affect film cooling performance have been categorized into three main groups: (i) coolant and mainstream conditions, (ii) hole geometry and configuration, and (iii) airfoil geometry (Bogard et al. 2006). The present study focuses on the second group of factors, namely the modeling of the coolant hole and the plenum. Correct physics of the problem must be simulated to achieve realistic numerical results; in this regard, modeling of the cooling jet hole and the plenum chamber is highly important (Iourokina et al. 2006). Substituting artificial boundary conditions for a correct plenum design would yield unrealistic results (Iourokina et al. 2006). This study models the film cooling technique with a plenum using large eddy simulation. An incompressible coolant jet is ejected onto the surface of the plate at an angle of 30°, where it meets a compressible turbulent boundary layer that simulates the turbine inflow conditions. A dynamic multi-scale approach (Araya 2011) is introduced to prescribe turbulent inflow conditions. Simulations are carried out for two different blowing ratios, and film cooling effectiveness is calculated for both cases. Results obtained from LES will be compared with experimental results.

  16. Bistability: Requirements on Cell-Volume, Protein Diffusion, and Thermodynamics

    PubMed Central

    Endres, Robert G.

    2015-01-01

    Bistability is considered widespread among bacteria and eukaryotic cells, useful e.g. for enzyme induction, bet hedging, and epigenetic switching. However, this phenomenon has mostly been described with deterministic dynamic or well-mixed stochastic models. Here, we map known biological bistable systems onto the well-characterized biochemical Schlögl model, using analytical calculations and stochastic spatiotemporal simulations. In addition to network architecture and strong thermodynamic driving away from equilibrium, we show that bistability requires fine-tuning towards small cell volumes (or compartments) and fast protein diffusion (well mixing). Bistability is thus fragile and hence may be restricted to small bacteria and eukaryotic nuclei, with switching triggered by volume changes during the cell cycle. For large volumes, single cells generally lose their ability for bistable switching and instead undergo a first-order phase transition. PMID:25874711
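The Schlögl model referenced above is a one-species birth-death reaction network that is convenient to simulate stochastically. Below is a minimal Gillespie (SSA) sketch; the rate constants are illustrative values for this model, not the paper's, and this well-mixed simulation deliberately ignores the spatial (diffusion) effects the abstract emphasizes.

```python
import numpy as np

def schlogl_ssa(x0=100, a=1.0, b=1.0, k=(0.18, 2.5e-4, 2200.0, 37.5),
                t_end=2.0, seed=1):
    """Gillespie simulation of the Schlögl model with fixed species A, B.
    Reactions: A + 2X -> 3X, 3X -> A + 2X, B -> X, X -> B.
    Rate constants are illustrative, not taken from the paper."""
    rng = np.random.default_rng(seed)
    x, t = x0, 0.0
    while t < t_end:
        props = np.array([
            k[0] * a * x * (x - 1),         # A + 2X -> 3X  (autocatalytic birth)
            k[1] * x * (x - 1) * (x - 2),   # 3X -> A + 2X  (death)
            k[2] * b,                       # B -> X        (birth)
            k[3] * x,                       # X -> B        (death)
        ])
        total = props.sum()
        if total <= 0.0:
            break
        t += rng.exponential(1.0 / total)       # time to next reaction
        r = rng.choice(4, p=props / total)      # which reaction fires
        x += (1, -1, 1, -1)[r]
    return x

print(schlogl_ssa())
```

With these constants the deterministic rate equation has two stable fixed points (near x = 100 and x = 400), so long trajectories hop between two macroscopic states, which is the bistable switching the abstract discusses.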

  17. Particle physics and polyhedra proximity calculation for hazard simulations in large-scale industrial plants

    NASA Astrophysics Data System (ADS)

    Plebe, Alice; Grasso, Giorgio

    2016-12-01

    This paper describes a system developed for the simulation of flames inside an open-source 3D computer graphics package, Blender, with the aim of analyzing hazard scenarios in large-scale industrial plants in virtual reality. The advantages of Blender are its ability to render the very complex structure of large industrial plants at high resolution and its embedded physics engine based on smoothed particle hydrodynamics. This particle system is used to evolve a simulated fire. The interaction of this fire with the components of the plant is computed using polyhedron separation distances, adopting a Voronoi-based strategy that optimizes the number of feature distance computations. Results for a real oil and gas refinery are presented.

  18. Contrail Formation in Aircraft Wakes Using Large-Eddy Simulations

    NASA Technical Reports Server (NTRS)

    Paoli, R.; Helie, J.; Poinsot, T. J.; Ghosal, S.

    2002-01-01

    In this work we analyze the formation of condensation trails ("contrails") in the near field of an aircraft wake. The basic configuration consists of an exhaust engine jet interacting with a wing-tip trailing vortex. The procedure adopted relies on a mixed Eulerian/Lagrangian two-phase flow approach; a simple microphysics model for ice growth has been used to couple the ice and vapor phases. Large eddy simulations have been carried out at a realistic flight Reynolds number to evaluate the effects of turbulent mixing and wake vortex dynamics on ice-growth characteristics and vapor thermodynamic properties.

  19. Simulation-optimization of large agro-hydrosystems using a decomposition approach

    NASA Astrophysics Data System (ADS)

    Schuetze, Niels; Grundmann, Jens

    2014-05-01

    In this contribution, a stochastic simulation-optimization framework for decision support for optimal planning and operation of the water supply of large agro-hydrosystems is presented. It is based on a decomposition solution strategy which allows for (i) the usage of numerical process models together with efficient Monte Carlo simulations for a reliable estimation of higher quantiles of the minimum agricultural water demand for full and deficit irrigation strategies at small scale (farm level), and (ii) the utilization of the optimization results at small scale for solving water resources management problems at regional scale. As a secondary result of several simulation-optimization runs at the smaller scale, stochastic crop-water production functions (SCWPF) for different crops are derived, which can be used as a basic tool for assessing the impact of climate variability on the risk for potential yield. In addition, microeconomic impacts of climate change and the vulnerability of the agro-ecological systems are evaluated. The developed methodology is demonstrated through its application to a real-world case study for the South Al-Batinah region in the Sultanate of Oman, where a coastal aquifer is affected by saltwater intrusion due to excessive groundwater withdrawal for irrigated agriculture.
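The first ingredient, Monte Carlo estimation of higher quantiles of water demand, can be sketched as follows. All distributions, coefficients, and names below are hypothetical stand-ins, not the study's calibrated crop or climate models:

```python
import numpy as np

def demand_quantile(n_runs=10_000, q=0.95, seed=42):
    """Monte Carlo sketch: estimate a high quantile of seasonal net irrigation
    demand under weather uncertainty. All numbers are hypothetical."""
    rng = np.random.default_rng(seed)
    # Hypothetical drivers: seasonal reference evapotranspiration (mm)
    # and effective rainfall (mm), sampled per Monte Carlo run.
    et0 = rng.normal(900.0, 80.0, n_runs)
    rain = rng.gamma(shape=4.0, scale=30.0, size=n_runs)
    kc = 1.1                                    # hypothetical crop coefficient
    demand = np.maximum(kc * et0 - rain, 0.0)   # net irrigation demand per run
    return np.quantile(demand, q)

print(round(demand_quantile(), 1))
```

Evaluating such quantiles across crops and irrigation strategies is what yields the stochastic crop-water production functions used at the regional scale.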

  20. Large volume serial section tomography by Xe Plasma FIB dual beam microscopy.

    PubMed

    Burnett, T L; Kelley, R; Winiarski, B; Contreras, L; Daly, M; Gholinia, A; Burke, M G; Withers, P J

    2016-02-01

    Ga(+) Focused Ion Beam-Scanning Electron Microscopes (FIB-SEM) have revolutionised the level of microstructural information that can be recovered in 3D by block face serial section tomography (SST), as well as enabling the site-specific removal of smaller regions for subsequent transmission electron microscope (TEM) examination. However, Ga(+) FIB material removal rates limit the volumes and depths that can be probed to dimensions in the tens of microns range. Emerging Xe(+) Plasma Focused Ion Beam-Scanning Electron Microscope (PFIB-SEM) systems promise faster removal rates. Here we examine the potential of the method for large volume serial section tomography as applied to bainitic steel and WC-Co hard metals. Our studies demonstrate that with careful control of milling parameters, precise automated serial sectioning can be achieved with low levels of milling artefacts at removal rates some 60× faster. Volumes that are hundreds of microns in dimension have been collected using fully automated SST routines in feasible timescales (<24 h), showing good grain orientation contrast and capturing microstructural features at the tens of nanometres to the tens of microns scale. Accompanying electron backscatter diffraction (EBSD) maps show high indexing rates, suggesting low levels of surface damage. Further, under high-current Ga(+) FIB milling, WC-Co is prone to amorphisation of WC surface layers and phase transformation of the Co phase, neither of which has been observed at PFIB currents as high as 60 nA at 30 kV. Xe(+) PFIB dual beam microscopes promise to radically extend our capability for 3D tomography, 3D EDX, and 3D EBSD, as well as correlative tomography.

  1. Large-eddy simulation of wind turbine wake interactions on locally refined Cartesian grids

    NASA Astrophysics Data System (ADS)

    Angelidis, Dionysios; Sotiropoulos, Fotis

    2014-11-01

    Performing high-fidelity numerical simulations of turbulent flow in wind farms remains a challenging issue, mainly because of the large computational resources required to accurately simulate the turbine wakes and turbine/turbine interactions. The discretization of the governing equations on structured grids for mesoscale calculations may not be the most efficient approach for resolving the large disparity of spatial scales. A 3D Cartesian grid refinement method is presented that enables the efficient coupling of the Actuator Line Model (ALM) with locally refined unstructured Cartesian grids adapted to accurately resolve tip vortices and multi-turbine interactions. Second-order schemes are employed for the discretization of the incompressible Navier-Stokes equations in a hybrid staggered/non-staggered formulation coupled with a fractional step method that ensures the satisfaction of local mass conservation to machine zero. The current approach enables multi-resolution LES of turbulent flow in multi-turbine wind farms. The numerical simulations are in good agreement with experimental measurements and are able to resolve the rich dynamics of turbine wakes on grids containing only a small fraction of the grid nodes that would be required in simulations without local mesh refinement. This material is based upon work supported by the Department of Energy under Award Number DE-EE0005482 and the National Science Foundation under Award number NSF PFI:BIC 1318201.

  2. Numerical simulation of pseudoelastic shape memory alloys using the large time increment method

    NASA Astrophysics Data System (ADS)

    Gu, Xiaojun; Zhang, Weihong; Zaki, Wael; Moumni, Ziad

    2017-04-01

    The paper presents a numerical implementation of the large time increment (LATIN) method for the simulation of shape memory alloys (SMAs) in the pseudoelastic range. The method was initially proposed as an alternative to the conventional incremental approach for the integration of nonlinear constitutive models. It is adapted here for the simulation of pseudoelastic SMA behavior using the Zaki-Moumni model and is shown to be especially useful in situations where the phase transformation process presents little or no hardening. In these situations, a slight stress variation in a load increment can result in large variations of strain and local state variables, which may lead to difficulties in numerical convergence. In contrast to the conventional incremental method, the LATIN method solves the global equilibrium and local consistency conditions sequentially for the entire loading path. The achieved solution must satisfy the conditions of static and kinematic admissibility and consistency simultaneously after several iterations. A 3D numerical implementation is accomplished using an implicit algorithm and is then used for finite element simulation using the software Abaqus. Computational tests demonstrate the ability of this approach to simulate SMAs presenting flat phase transformation plateaus and subjected to complex loading cases, such as the quasi-static behavior of a stent structure. Some numerical results are contrasted to those obtained using step-by-step incremental integration.

  3. Lagrangian large eddy simulations of boundary layer clouds on ERA-Interim and ERA5 trajectories

    NASA Astrophysics Data System (ADS)

    Kazil, J.; Feingold, G.; Yamaguchi, T.

    2017-12-01

    This exploratory study examines Lagrangian large eddy simulations of boundary layer clouds along wind trajectories from the ERA-Interim and ERA5 reanalyses. The study is motivated by the need for statistically representative sets of high resolution simulations of cloud field evolution in realistic meteorological conditions. The study will serve as a foundation for the investigation of biomass burning effects on the transition from stratocumulus to shallow cumulus clouds in the South-East Atlantic. Trajectories that pass through a location with radiosonde data (St. Helena) and which exhibit a well-defined cloud structure and evolution were identified in satellite imagery, and sea surface temperature and atmospheric vertical profiles along the trajectories were extracted from the reanalysis data sets. The System for Atmospheric Modeling (SAM) simulated boundary layer turbulence and cloud properties along the trajectories. Mean temperature and moisture (in the free troposphere) and mean wind speed (at all levels) were nudged towards the reanalysis data. Atmospheric and cloud properties in the large eddy simulations were compared with those from the reanalysis products, and evaluated with satellite imagery and radiosonde data. Simulations using ERA-Interim data and the higher resolution ERA5 data are contrasted.

  4. Ssalmon - The Solar Simulations For The Atacama Large Millimeter Observatory Network

    NASA Astrophysics Data System (ADS)

    Wedemeyer, Sven; Ssalmon Group

    2016-07-01

    The Atacama Large Millimeter/submillimeter Array (ALMA) provides a new powerful tool for observing the solar chromosphere at high spatial, temporal, and spectral resolution, which will allow for addressing a wide range of scientific topics in solar physics. Numerical simulations of the solar atmosphere and modeling of instrumental effects are valuable tools for constraining, preparing, and optimizing future observations with ALMA and for interpreting the results. In order to coordinate related activities, the Solar Simulations for the Atacama Large Millimeter Observatory Network (SSALMON) was initiated on September 1st, 2014, in connection with the NA- and EU-led solar ALMA development studies. As of April 2015, SSALMON has grown to 83 members from 18 countries (plus ESO and ESA). Another important goal of SSALMON is to promote the scientific potential of solar science with ALMA, which has resulted in two major publications so far. During 2015, the SSALMON Expert Teams produced a White Paper with potential science cases for Cycle 4, which will be the first time regular solar observations will be carried out. Registration and more information at http://www.ssalmon.uio.no.

  5. Large-eddy simulations of turbulent flow for grid-to-rod fretting in nuclear reactors

    DOE PAGES

    Bakosi, J.; Christon, M. A.; Lowrie, R. B.; ...

    2013-07-12

    The grid-to-rod fretting (GTRF) problem in pressurized water reactors is a flow-induced vibration problem that results in wear and failure of the fuel rods in nuclear assemblies. In order to understand the fluid dynamics of GTRF and to build an archival database of turbulence statistics for various configurations, implicit large-eddy simulations of time-dependent single-phase turbulent flow have been performed in 3 × 3 and 5 × 5 rod bundles with a single grid spacer. To assess the computational mesh and resolution requirements, a method for quantitative assessment of unstructured meshes with no-slip walls is described. The calculations have been carried out using Hydra-TH, a thermal-hydraulics code developed at Los Alamos for the Consortium for Advanced Simulation of Light Water Reactors, a United States Department of Energy Innovation Hub. Hydra-TH uses a second-order implicit incremental projection method to solve the single-phase incompressible Navier-Stokes equations. The simulations explicitly resolve the large-scale motions of the turbulent flow field using first principles and rely on a monotonicity-preserving numerical technique to represent the unresolved scales. Each series of simulations for the 3 × 3 and 5 × 5 rod-bundle geometries is an analysis of the flow field statistics combined with a mesh-refinement study and validation with available experimental data. Our primary focus is the time history and statistics of the forces loading the fuel rods. These hydrodynamic forces are believed to be the key player resulting in rod vibration and GTRF wear, one of the leading causes of leaking nuclear fuel, which costs power utilities millions of dollars in preventive measures. As a result, we demonstrate that implicit large-eddy simulation of rod-bundle flows is a viable way to calculate the excitation forces for the GTRF problem.

  6. An engineering closure for heavily under-resolved coarse-grid CFD in large applications

    NASA Astrophysics Data System (ADS)

    Class, Andreas G.; Yu, Fujiang; Jordan, Thomas

    2016-11-01

    Even though high performance computing allows a very detailed description of a wide range of scales in scientific computations, engineering simulations used for design studies commonly resolve only the large scales, thus speeding up simulation time. The coarse-grid CFD (CGCFD) methodology is developed for flows with repeated flow patterns, as often observed in heat exchangers or porous structures. It is proposed to use the inviscid Euler equations on a very coarse numerical mesh. This coarse mesh need not conform to the geometry in all details. To reinstate the physics on all smaller scales, cheap subgrid models are employed. Subgrid models are systematically constructed by analyzing well-resolved generic representative simulations. By varying the flow conditions in these simulations, correlations are obtained. These comprise, for each individual coarse mesh cell, a volume force vector and a volume porosity. Moreover, for all vertices, surface porosities are derived. CGCFD is related to the immersed boundary method, as both exploit volume forces and non-body-conformal meshes. Yet CGCFD differs with respect to the coarser mesh and the use of the Euler equations. We describe the methodology based on a simple test case and the application of the method to a 127-pin wire-wrap fuel bundle.
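The construction of the subgrid correlations can be illustrated schematically: run well-resolved reference simulations at several flow conditions, then fit a simple closure that the coarse Euler solver applies as a volume force. A minimal sketch with a hypothetical quadratic drag law (not the authors' actual correlation form, and with made-up numbers):

```python
import numpy as np

# Hypothetical reference data: the drag force a sub-volume exerts on the flow,
# "measured" in well-resolved simulations at several inflow velocities.
u_samples = np.array([0.5, 1.0, 1.5, 2.0, 2.5])   # sampled inflow velocities
F_resolved = 0.8 * u_samples**2                   # drag from the fine-grid runs

# Fit a quadratic drag law F = c * u**2 by least squares.
A = (u_samples**2)[:, None]                       # design matrix, one column
c = np.linalg.lstsq(A, F_resolved, rcond=None)[0][0]

# The coarse-grid Euler solver would then apply F = c * u**2 as a volume
# force in each coarse cell, standing in for the unresolved geometry.
print(c)
```

In the actual methodology the correlations are per-cell vectors and also include volume and surface porosities; the fit above only shows the "calibrate once, apply cheaply" pattern.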

  7. Resolving Low-Density Lipoprotein (LDL) on the Human Aortic Surface Using Large Eddy Simulation

    NASA Astrophysics Data System (ADS)

    Lantz, Jonas; Karlsson, Matts

    2011-11-01

    The prediction and understanding of the genesis of vascular diseases is one of the grand challenges in biofluid engineering. The progression of atherosclerosis is correlated to the build-up of LDL on the arterial surface, which is affected by the blood flow. A multi-physics simulation of LDL mass transport in the blood and through the arterial wall of a subject-specific human aorta was performed, employing an LES turbulence model to resolve the turbulent flow. Geometry and velocity measurements from magnetic resonance imaging (MRI) were incorporated to assure physiological relevance of the simulation. Due to the turbulent nature of the flow, consecutive cardiac cycles are not identical, neither in vivo nor in the simulations. A phase average based on a large number of cardiac cycles is therefore computed, which is the proper way to obtain reliable statistical results from an LES simulation. In total, 50 cardiac cycles were simulated, yielding over 2.5 billion data points to be post-processed. An inverse relation between LDL and WSS was found; LDL accumulated at locations where WSS was low, and vice versa. Large temporal differences were present, with the concentration level decreasing during systolic acceleration and increasing during the deceleration phase. This method makes it possible to resolve the localization of LDL accumulation in the normal human aorta with its complex transitional flow.
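Phase averaging over repeated cycles, as used above, is simple to express: align each cardiac cycle on a common phase axis and average across cycles at fixed phase, so cycle-to-cycle turbulent fluctuations cancel while the periodic signal survives. A small synthetic sketch (illustrative sizes and signal, not the study's data):

```python
import numpy as np

# signal[c, p]: a quantity (e.g. WSS at one wall point) at phase p of cycle c.
rng = np.random.default_rng(3)
n_cycles, n_phase = 50, 200
phase = np.linspace(0.0, 2 * np.pi, n_phase)

# Synthetic data: a periodic component plus cycle-to-cycle "turbulence".
signal = np.sin(phase) + 0.3 * rng.standard_normal((n_cycles, n_phase))

# Phase average: mean over cycles at each fixed phase.
phase_avg = signal.mean(axis=0)
print(phase_avg.shape)
```

With 50 cycles the noise standard deviation drops by a factor of about 7, which is why many simulated cycles are needed before the statistics are trustworthy.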

  8. Improved turbulence models based on large eddy simulation of homogeneous, incompressible turbulent flows

    NASA Technical Reports Server (NTRS)

    Bardina, J.; Ferziger, J. H.; Reynolds, W. C.

    1983-01-01

    The physical bases of large eddy simulation and subgrid modeling are studied. A subgrid scale similarity model is developed that can account for system rotation. Large eddy simulations of homogeneous shear flows with system rotation were carried out. Apparently contradictory experimental results were explained. The main effect of rotation is to increase the transverse length scales in the rotation direction, and thereby decrease the rates of dissipation. Experimental results are shown to be affected by conditions at the turbulence producing grid, which make the initial states a function of the rotation rate. A two equation model is proposed that accounts for effects of rotation and shows good agreement with experimental results. In addition, a Reynolds stress model is developed that represents the turbulence structure of homogeneous shear flows very well and can account also for the effects of system rotation.

  9. Using stroboscopic flow imaging to validate large-scale computational fluid dynamics simulations

    NASA Astrophysics Data System (ADS)

    Laurence, Ted A.; Ly, Sonny; Fong, Erika; Shusteff, Maxim; Randles, Amanda; Gounley, John; Draeger, Erik

    2017-02-01

    The utility and accuracy of computational modeling often requires direct validation against experimental measurements. The work presented here is motivated by taking a combined experimental and computational approach to determine the ability of large-scale computational fluid dynamics (CFD) simulations to understand and predict the dynamics of circulating tumor cells in clinically relevant environments. We use stroboscopic light sheet fluorescence imaging to track the paths and measure the velocities of fluorescent microspheres throughout a human aorta model. Performed over complex, physiologically realistic 3D geometries, large data sets are acquired with microscopic resolution over macroscopic distances.
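The core velocity measurement in such stroboscopic imaging reduces to finite differences of tracked particle positions at the known strobe interval. A toy sketch with hypothetical numbers (not the experiment's actual rates or trajectories):

```python
import numpy as np

# Hypothetical strobe rate and tracked centroid positions of one microsphere,
# one position per flash, along its path through the flow model.
strobe_hz = 1000.0                                # flashes per second
dt = 1.0 / strobe_hz
positions_mm = np.array([0.0, 0.12, 0.25, 0.39])  # position at each flash

# Velocity between successive flashes by finite differences.
velocity_mm_s = np.diff(positions_mm) / dt
print(velocity_mm_s)
```

Repeating this over many particles and flashes builds the velocity fields that are then compared against the CFD predictions.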

  10. Large-scale coherent structures of suspended dust concentration in the neutral atmospheric surface layer: A large-eddy simulation study

    NASA Astrophysics Data System (ADS)

    Zhang, Yangyue; Hu, Ruifeng; Zheng, Xiaojing

    2018-04-01

    Dust particles can remain suspended in the atmospheric boundary layer, and their motions are primarily determined by turbulent diffusion and gravitational settling. Little is known about the spatial organization of suspended dust concentration and how turbulent coherent motions contribute to the vertical transport of dust particles. Numerous studies in recent years have revealed that the large- and very-large-scale motions found in the logarithmic region of laboratory-scale turbulent boundary layers also exist in the high-Reynolds-number atmospheric boundary layer, but their influence on dust transport is still unclear. In this study, numerical simulations of dust transport in a neutral atmospheric boundary layer, based on an Eulerian modeling approach and the large-eddy simulation technique, are performed to investigate the coherent structures of dust concentration. The instantaneous fields confirm the existence of very long meandering streaks of dust concentration, with alternating high- and low-concentration regions. A strong negative correlation between the streamwise velocity and concentration and a mild positive correlation between the vertical velocity and concentration are observed. The spatial length scales and inclination angles of concentration structures are determined and compared with their flow counterparts. The conditionally averaged fields vividly depict that high- and low-concentration events are accompanied by a pair of counter-rotating quasi-streamwise vortices, with a downwash inside the low-concentration region and an upwash inside the high-concentration region. Through quadrant analysis, it is indicated that the vertical dust transport is closely related to the large-scale roll modes, and ejections in high-concentration regions are the major mechanism for the upward motion of dust particles.
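The quadrant analysis mentioned above partitions instantaneous fluctuation pairs (u', w') by sign to measure how ejections (Q2: u' < 0, w' > 0) and sweeps (Q4: u' > 0, w' < 0) carry the vertical turbulent flux. A self-contained sketch on synthetic, shear-like data (not the simulation's output):

```python
import numpy as np

# Synthetic fluctuations with the negative u'-w' correlation typical of
# shear flows; amplitudes and correlation are made up for illustration.
rng = np.random.default_rng(7)
u = rng.standard_normal(100_000)                    # streamwise fluctuation u'
w = -0.4 * u + 0.9 * rng.standard_normal(100_000)   # vertical fluctuation w'

uw = u * w
q2 = (u < 0) & (w > 0)   # ejections: slow fluid moving up
q4 = (u > 0) & (w < 0)   # sweeps: fast fluid moving down

total = uw.sum()         # proportional to the mean flux <u'w'> (negative here)
print(uw[q2].sum() / total, uw[q4].sum() / total)   # fractional contributions
```

Applying the same classification to (w', c') pairs, with c' the concentration fluctuation, is how one attributes upward dust transport to ejection events.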

  11. Large-eddy simulation of flow in a plane, asymmetric diffuser

    NASA Technical Reports Server (NTRS)

    Kaltenbach, Hans-Jakob

    1993-01-01

    Recent improvements in subgrid-scale modeling as well as increases in computer power make it feasible to investigate flows using large-eddy simulation (LES) which have been traditionally studied with techniques based on Reynolds averaging. However, LES has not yet been applied to many flows of immediate technical interest. Preliminary results from LES of a plane diffuser flow are described. The long term goal of this work is to investigate flow separation as well as separation control in ducts and ramp-like geometries.

  12. On the large eddy simulation of turbulent flows in complex geometry

    NASA Technical Reports Server (NTRS)

    Ghosal, Sandip

    1993-01-01

    Application of the method of Large Eddy Simulation (LES) to a turbulent flow consists of three separate steps. First, a filtering operation is performed on the Navier-Stokes equations to remove the small spatial scales. The resulting equations that describe the space time evolution of the 'large eddies' contain the subgrid-scale (sgs) stress tensor that describes the effect of the unresolved small scales on the resolved scales. The second step is the replacement of the sgs stress tensor by some expression involving the large scales - this is the problem of 'subgrid-scale modeling'. The final step is the numerical simulation of the resulting 'closed' equations for the large scale fields on a grid small enough to resolve the smallest of the large eddies, but still much larger than the fine scale structures at the Kolmogorov length. In dividing a turbulent flow field into 'large' and 'small' eddies, one presumes that a cut-off length delta can be sensibly chosen such that all fluctuations on a scale larger than delta are 'large eddies' and the remainder constitute the 'small scale' fluctuations. Typically, delta would be a length scale characterizing the smallest structures of interest in the flow. In an inhomogeneous flow, the 'sensible choice' for delta may vary significantly over the flow domain. For example, in a wall bounded turbulent flow, most statistical averages of interest vary much more rapidly with position near the wall than far away from it. Further, there are dynamically important organized structures near the wall on a scale much smaller than the boundary layer thickness. Therefore, the minimum size of eddies that need to be resolved is smaller near the wall. In general, for the LES of inhomogeneous flows, the width of the filtering kernel delta must be considered to be a function of position. If a filtering operation with a nonuniform filter width is performed on the Navier-Stokes equations, one does not in general get the standard large eddy
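The filtering step described above can be made concrete with a 1D top-hat filter: convolving a signal with a box kernel of width delta yields the resolved "large eddy" field, and the remainder is exactly what a subgrid-scale model must represent. A minimal periodic sketch (illustrative signal and filter width, uniform delta rather than the position-dependent delta discussed in the abstract):

```python
import numpy as np

# A signal with one "large eddy" plus a small-scale mode.
x = np.linspace(0.0, 2 * np.pi, 512, endpoint=False)
u = np.sin(x) + 0.2 * np.sin(40 * x)

# Top-hat (box) filter of width 16 grid points, periodic wrap-around:
# u_bar[i] is the average of u over a window around point i.
width = 16
u_bar = sum(np.roll(u, s) for s in range(-(width // 2), width // 2)) / width

# Subgrid-scale fluctuation: the part the filter removed.
u_prime = u - u_bar
```

The filter passes the k = 1 mode nearly unchanged while strongly damping the k = 40 mode, so `u_bar` is dominated by the large scale and `u_prime` by the small one.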

  13. Unsteady adjoint for large eddy simulation of a coupled turbine stator-rotor system

    NASA Astrophysics Data System (ADS)

    Talnikar, Chaitanya; Wang, Qiqi; Laskowski, Gregory

    2016-11-01

    Unsteady fluid flow simulations such as large eddy simulation are crucial for capturing key physics in turbomachinery applications, such as separation and wake formation in flow over a turbine vane with a downstream blade. To determine how sensitive the design objectives of the coupled system are to control parameters, an unsteady adjoint is needed: it enables the computation of the gradient of an objective with respect to a large number of inputs in a computationally efficient manner. In this paper we present unsteady adjoint solutions for a coupled turbine stator-rotor system. As the transonic fluid flows over the stator vane, the boundary layer transitions to turbulence. The turbulent wake then impinges on the rotor blades, causing early separation. This coupled system exhibits chaotic dynamics, which causes conventional adjoint solutions to diverge exponentially and corrupts the sensitivities obtained from them for long-time simulations. Here, adjoint solutions for aerothermal objectives are obtained through a localized adjoint viscosity injection method, which aims to stabilize the adjoint solution while maintaining accurate sensitivities. Preliminary results obtained on the supercomputer Mira will be shown in the presentation.
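    The adjoint idea in the abstract can be sketched on a toy, non-chaotic discrete system (nothing like the paper's LES solver): one backward sweep yields the gradient of an objective with respect to a parameter, at a cost independent of the number of parameters. The dynamics `f` and objective below are invented for illustration.

```python
import numpy as np

# Toy discrete-time dynamics x_{k+1} = f(x_k, theta) with objective J = x_N^2.
# The adjoint recursion lambda_k = (df/dx)^T lambda_{k+1} runs backward in time
# and accumulates dJ/dtheta in a single sweep.

def f(x, theta):
    return theta * np.sin(x)

def dfdx(x, theta):
    return theta * np.cos(x)

def dfdtheta(x, theta):
    return np.sin(x)

theta, x0, N = 0.9, 1.0, 20
xs = [x0]
for _ in range(N):
    xs.append(f(xs[-1], theta))

# Backward (adjoint) sweep.
lam = 2.0 * xs[-1]  # dJ/dx_N for J = x_N^2
grad = 0.0
for k in range(N - 1, -1, -1):
    grad += lam * dfdtheta(xs[k], theta)
    lam = dfdx(xs[k], theta) * lam

# Finite-difference check of the adjoint gradient.
eps = 1e-6
xp = x0
for _ in range(N):
    xp = f(xp, theta + eps)
fd = (xp**2 - xs[-1]**2) / eps
print(abs(grad - fd) < 1e-4)  # → True
```

    For a chaotic system such as the stator-rotor flow, `dfdx` has products that grow exponentially along the backward sweep, which is precisely the divergence the localized adjoint viscosity injection is meant to suppress.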

  14. Simulation and training of lumbar punctures using haptic volume rendering and a 6DOF haptic device

    NASA Astrophysics Data System (ADS)

    Färber, Matthias; Heller, Julika; Handels, Heinz

    2007-03-01

    The lumbar puncture is performed by inserting a needle into the spinal canal of the patient to inject medicaments or to extract cerebrospinal fluid (liquor). Training of this procedure is usually done on the patient, guided by experienced supervisors. A virtual reality lumbar puncture simulator has been developed in order to minimize the training costs and the patient's risk. We use a haptic device with six degrees of freedom (6DOF) to feed back forces that resist needle insertion and rotation. An improved haptic volume rendering approach is used to calculate the forces. This approach makes use of label data of relevant structures such as skin, bone, muscles or fat, and of the original CT data, which contributes information about image structures that cannot be segmented. A real-time 3D visualization with optional stereo view shows the punctured region. 2D visualizations of orthogonal slices provide a detailed impression of the anatomical context. The input data, consisting of CT and label data and surface models of relevant structures, is defined in an XML file together with haptic rendering and visualization parameters. In a first evaluation, the Visible Human male dataset was used to generate a virtual training body. Several users with different levels of medical experience tested the lumbar puncture trainer. The simulator gives a good haptic and visual impression of the needle insertion, and the haptic volume rendering technique conveys the feeling of unsegmented structures. In particular, the restriction of transversal needle movement, together with the rotation constraints enabled by the 6DOF device, facilitates a realistic puncture simulation.
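    The label-based force idea can be sketched very roughly: sample the label volume along the needle shaft and sum a per-tissue resistance. The tissue names, stiffness values, and function below are hypothetical stand-ins, not the paper's haptic volume rendering algorithm (which also blends in unsegmented CT intensities and 6DOF constraints).

```python
# Hypothetical per-tissue resistance coefficients; names and values are
# illustrative only, not taken from the simulator described above.
STIFFNESS = {"skin": 0.8, "fat": 0.2, "muscle": 0.5, "bone": 5.0, "ligament": 1.5}

def axial_resistance(labels_along_needle, step_mm):
    """Sum a simple penetration resistance over the label samples the needle
    shaft currently traverses (a crude proxy for haptic volume rendering)."""
    return sum(STIFFNESS[label] for label in labels_along_needle) * step_mm

# Labels sampled every 0.5 mm along a hypothetical insertion path.
path = ["skin", "fat", "fat", "muscle", "ligament"]
force = axial_resistance(path, step_mm=0.5)
print(round(force, 2))  # → 1.6
```

    In the real trainer the force must also resist rotation and transversal motion, which is why a 6DOF device (force and torque feedback) is needed rather than a 3DOF one.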

  15. A finite volume solver for three dimensional debris flow simulations based on a single calibration parameter

    NASA Astrophysics Data System (ADS)

    von Boetticher, Albrecht; Turowski, Jens M.; McArdell, Brian; Rickenmann, Dieter

    2016-04-01

    Debris flows are frequent natural hazards that cause massive damage. A wide range of debris flow models try to capture the complex flow behavior that arises from the inhomogeneous material mixture of water with clay, silt, sand, and gravel. The energy dissipation between moving grains depends on grain collisions and tangential friction, and the viscosity of the interstitial fine-material suspension depends on the shear gradient. Thus a rheology description needs to be sensitive to the local pressure and shear rate, making the three-dimensional flow structure a key issue for flows in complex terrain. Furthermore, the momentum exchange between the granular and fluid phases should account for the presence of larger particles. We model the fine-material suspension with a Herschel-Bulkley rheology law, and represent the gravel with the Coulomb-viscoplastic rheology of Domnik & Pudasaini (Domnik et al., 2013). Both composites are described by two phases that can mix; a third phase, representing the air, is kept separate to track the free surface. The fluid dynamics are solved in three dimensions using the finite volume open-source code OpenFOAM. Computational costs are kept reasonable by using the Volume of Fluid method to solve only one phase-averaged system of Navier-Stokes equations. The Herschel-Bulkley parameters are modeled as functions of water content, volumetric solid concentration of the mixture, clay content and its mineral composition (Coussot et al., 1989; Yu et al., 2013). The gravel-phase properties needed for the Coulomb-viscoplastic rheology are defined by the angle of repose of the gravel. In addition to this basic setup, larger grains and the corresponding grain collisions can be introduced by a coupled Lagrangian particle simulation. Based on the local Savage number, a diffusive term in the gravel phase can activate phase separation. The resulting model can reproduce the sensitivity of the debris flow to water content and channel bed roughness, as
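    The Herschel-Bulkley law mentioned above is simple enough to sketch: the shear stress is tau = tau_y + K * gamma_dot**n, and finite volume solvers typically use a regularized apparent viscosity tau/gamma_dot. The parameter values below are purely illustrative, not the calibrated functions of water and clay content used in the paper.

```python
# Herschel-Bulkley rheology for a fine-material suspension:
#   tau = tau_y + K * gamma_dot**n   (yielded material, tau > tau_y)
# Illustrative parameters only (not the paper's calibration).
tau_y, K, n = 50.0, 20.0, 0.4  # yield stress [Pa], consistency, flow index

def hb_stress(gamma_dot):
    return tau_y + K * gamma_dot**n

def hb_viscosity(gamma_dot, gamma_min=1e-3):
    # Regularized apparent viscosity: clipping the shear rate avoids the
    # singularity of tau/gamma_dot as gamma_dot approaches zero.
    g = max(gamma_dot, gamma_min)
    return hb_stress(g) / g

# Shear-thinning behavior: apparent viscosity drops as the shear rate grows,
# which is the shear-rate sensitivity the abstract refers to.
print(hb_viscosity(0.1) > hb_viscosity(10.0))  # → True
```

    The pressure sensitivity of the gravel phase (Coulomb-viscoplastic part) would enter analogously, with tau_y replaced by a pressure-dependent frictional yield stress.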

  16. Crystal plasticity simulation of Zirconium tube rolling using multi-grain representative volume element

    NASA Astrophysics Data System (ADS)

    Isaenkova, Margarita; Perlovich, Yuriy; Zhuk, Dmitry; Krymskaya, Olga

    2017-10-01

    The rolling of zirconium tube is studied by means of crystal plasticity viscoplastic self-consistent (VPSC) constitutive modeling. The modeling is performed with a dislocation-based constitutive model and a spectral solver from the open-source simulation kit DAMASK. Multi-grain representative volume elements with periodic boundary conditions are used to predict the texture evolution and the distributions of strain and stress. Two models, for randomly textured and partially rolled material, are deformed to a 30% reduction in tube wall thickness and a 7% reduction in tube diameter. The resulting shapes of the models are shown and the strain distributions are plotted. The evolution of grain shape during deformation is also shown.
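    The reductions quoted above imply a definite strain state for the imposed deformation; a quick worked check, assuming volume-conserving plastic flow (an assumption, not stated in the abstract), gives the axial elongation strain:

```python
import math

# True (logarithmic) strains implied by the quoted reductions:
# 30% reduction in wall thickness, 7% reduction in tube diameter.
eps_thickness = math.log(1.0 - 0.30)  # radial direction (compressive)
eps_diameter = math.log(1.0 - 0.07)   # hoop direction (compressive)

# Assuming volume conservation of plastic flow, the axial (rolling-direction)
# strain balances the other two: eps_r + eps_theta + eps_z = 0.
eps_axial = -(eps_thickness + eps_diameter)
print(round(eps_axial, 3))  # → 0.429
```

    So the tube elongates by roughly e^0.43 - 1 ≈ 54% under this pass, which sets the scale of the texture evolution the RVE simulation must capture.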

  17. RTC simulations on large branched sewer systems with SmaRTControl.

    PubMed

    de Korte, Kees; van Beest, Dick; van der Plaat, Marcel; de Graaf, Erno; Schaart, Niels

    2009-01-01

    In The Netherlands many large branched sewer systems exist, and real-time control (RTC) can improve their performance. The objective of the universal algorithm of SmaRTControl is to improve the performance of both the sewer system and the wastewater treatment plant (WWTP). The effect of RTC under rain weather flow conditions is simulated using a hydrological model with 19 drainage districts. The system-related inefficiency coefficient (SIC) is introduced for assessing the performance of sewer systems. The performance can be improved by RTC in combination with increased pumping capacities in the drainage districts, but without increasing the flow to the WWTP. Under dry weather flow conditions the flow to the WWTP can be equalized by storing wastewater in the sewer system. It is concluded that SmaRTControl can improve the performance, that simulations are necessary, and that SIC is an excellent parameter for assessing the performance of sewer systems.
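    The dry-weather equalization mentioned above can be sketched with a simple hourly balance: pump to the WWTP at the mean inflow rate and let in-sewer storage absorb the diurnal variation. The inflow profile below is invented for illustration; the SIC itself is not defined in the abstract, so it is not reproduced here.

```python
# Hypothetical hourly dry-weather inflow profile over one day [m3/h].
inflow = [40, 35, 30, 30, 35, 55, 80, 95, 90, 80, 70, 65,
          60, 60, 65, 70, 80, 90, 95, 85, 70, 60, 50, 45]

# Equalized WWTP flow: pump at the constant mean inflow rate.
pump_rate = sum(inflow) / len(inflow)

# Track the in-sewer storage relative to its start-of-day level; negative
# values mean drawdown of initially stored wastewater.
storage, peak_storage = 0.0, 0.0
for q in inflow:
    storage += q - pump_rate  # hourly volume balance [m3]
    peak_storage = max(peak_storage, storage)

print(round(pump_rate, 1), round(peak_storage, 1))
```

    The peak storage indicates how much in-sewer volume the equalization strategy ties up, capacity that must still be available when wet weather arrives.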

  18. Multilayer integral method for simulation of eddy currents in thin volumes of arbitrary geometry produced by MRI gradient coils.

    PubMed

    Sanchez Lopez, Hector; Freschi, Fabio; Trakic, Adnan; Smith, Elliot; Herbert, Jeremy; Fuentes, Miguel; Wilson, Stephen; Liu, Limei; Repetto, Maurizio; Crozier, Stuart

    2014-05-01

    This article presents a fast, efficient and accurate multi-layer integral method (MIM) for the evaluation of complex spatiotemporal eddy currents induced in nonmagnetic, thin volumes of irregular geometry by arbitrary arrangements of gradient coils. The volume of interest is divided into a number of layers, wherein the thickness of each layer is assumed to be smaller than the skin depth and one of the linear dimensions is much smaller than the remaining two. The diffusion equation for the current density is solved in both the time-harmonic and transient domains. The experimentally measured magnetic fields produced by the coil and the induced eddy currents, as well as the corresponding time-decay constants, were in close agreement with the results produced by the MIM. Relevant parameters such as the power loss and the force induced by the eddy currents in a split cryostat were simulated using the MIM. The proposed method is capable of accurately simulating the current diffusion process inside thin volumes, such as the magnet cryostat. The method permits the a priori calculation of optimal pre-emphasis parameters. The MIM enables unified design of gradient coil-magnet structures for optimal mitigation of deleterious eddy current effects. Copyright © 2013 Wiley Periodicals, Inc.
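    The skin-depth constraint on the layer thickness is easy to check numerically with the standard formula delta = sqrt(2 / (mu0 * sigma * omega)). The material and frequency below are illustrative (a stainless-steel-like conductivity and a 1 kHz gradient-switching harmonic), not values from the article.

```python
import math

# Skin depth governs the MIM layer thickness: each layer must be thinner
# than delta = sqrt(2 / (mu0 * sigma * omega)).
mu0 = 4.0e-7 * math.pi  # vacuum permeability [H/m]
sigma = 1.4e6           # electrical conductivity [S/m], illustrative
f = 1000.0              # frequency [Hz], illustrative
omega = 2.0 * math.pi * f

delta = math.sqrt(2.0 / (mu0 * sigma * omega))
print(round(delta * 1e3, 2), "mm")  # → 13.45 mm
```

    Since delta shrinks with the square root of frequency, the highest relevant harmonic of the gradient waveform sets the thinnest layer the discretization must honor.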

  19. Hybrid Solution-Adaptive Unstructured Cartesian Method for Large-Eddy Simulation of Detonation in Multi-Phase Turbulent Reactive Mixtures

    DTIC Science & Technology

    2012-03-27

    Applications include pulse-detonation engines (PDE), stage separation, supersonic cavity oscillations, hypersonic aerodynamics, and detonation-induced structural … [record excerpt; Grant Number FA9550…; CCL Report TR-2012-03-03, Hybrid Solution-Adaptive Unstructured Cartesian Method for Large-Eddy Simulation of Detonation in Multi-Phase Turbulent Reactive Mixtures]

  20. Wind Energy-Related Atmospheric Boundary Layer Large-Eddy Simulation Using OpenFOAM: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Churchfield, M.J.; Vijayakumar, G.; Brasseur, J.G.

    This paper develops and evaluates the performance of a large-eddy simulation (LES) solver in computing the atmospheric boundary layer (ABL) over flat terrain under a variety of stability conditions, ranging from shear driven (neutral stratification) to moderately convective (unstable stratification).