Sample records for large volume simulations

  1. Simulation of hydrodynamics using large eddy simulation-second-order moment model in circulating fluidized beds

    NASA Astrophysics Data System (ADS)

    Juhui, Chen; Yanjia, Tang; Dan, Li; Pengfei, Xu; Huilin, Lu

    2013-07-01

    The flow behavior of gas and particles in circulating fluidized beds (CFBs) is predicted with a large eddy simulation coupled to a second-order moment model for the solid phase (LES-SOM model). This study shows that the solid volume fractions simulated along the bed height using a two-dimensional model are in agreement with experiments. The velocity, volume fraction, and second-order moments of particles are computed, and the second-order moments of clusters are calculated. The solid volume fraction, velocity, and second-order moments are compared at three different model constants.

  2. Radiation from Large Gas Volumes and Heat Exchange in Steam Boiler Furnaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makarov, A. N., E-mail: tgtu-kafedra-ese@mail.ru

    2015-09-15

    Radiation from large cylindrical gas volumes is studied as a means of simulating the flare in steam boiler furnaces. Calculations of heat exchange in a furnace by the zonal method and by simulation of the flare with cylindrical gas volumes are described. The latter method is more accurate and yields more reliable information on heat transfer processes taking place in furnaces.

  3. Mixing model with multi-particle interactions for Lagrangian simulations of turbulent mixing

    NASA Astrophysics Data System (ADS)

    Watanabe, T.; Nagata, K.

    2016-08-01

    We report on the numerical study of the mixing volume model (MVM) for molecular diffusion in Lagrangian simulations of turbulent mixing problems. The MVM is based on the multi-particle interaction in a finite volume (mixing volume). A priori test of the MVM, based on the direct numerical simulations of planar jets, is conducted in the turbulent region and the interfacial layer between the turbulent and non-turbulent fluids. The results show that the MVM predicts well the mean effects of the molecular diffusion under various numerical and flow parameters. The number of the mixing particles should be large for predicting a value of the molecular diffusion term positively correlated to the exact value. The size of the mixing volume relative to the Kolmogorov scale η is important in the performance of the MVM. The scalar transfer across the turbulent/non-turbulent interface is well captured by the MVM, especially with the small mixing volume. Furthermore, the MVM with multiple mixing particles is tested in the hybrid implicit large-eddy-simulation/Lagrangian-particle-simulation (LES-LPS) of the planar jet with the characteristic length of the mixing volume of O(100η). Despite the large mixing volume, the MVM works well and decays the scalar variance at a rate close to the reference LES. The statistics in the LPS are very robust to the number of the particles used in the simulations and the computational grid size of the LES. Both in the turbulent core region and the intermittent region, the LPS predicts a scalar field well correlated to the LES.
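    The variance-decaying update at the heart of a mixing-volume-style model can be sketched in a few lines. This is an illustrative sketch only, not the authors' MVM: the function name, the exponential relaxation form, and the timescale `tau_m` are assumptions. Particles inside one mixing volume relax toward their common mean, which conserves the mean scalar while decaying the in-volume variance.

```python
import numpy as np

def mvm_step(phi, dt, tau_m):
    """Relax the particles inside one mixing volume toward their mean
    (illustrative sketch of a mixing-volume update, not the authors' MVM;
    the exponential relaxation and the timescale tau_m are assumed).
    The mean scalar is conserved while the in-volume variance decays."""
    mean = phi.mean()
    decay = np.exp(-dt / tau_m)        # variance-decay factor
    return mean + (phi - mean) * decay

phi = np.array([0.0, 0.2, 0.9, 1.0])   # scalar values of four mixing particles
mixed = mvm_step(phi, dt=0.1, tau_m=0.2)
print(abs(mixed.mean() - phi.mean()) < 1e-12)   # mean conserved: True
```

    A full model would couple a step like this to the particle transport and set `tau_m` from the local turbulence quantities.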

  4. Mixing model with multi-particle interactions for Lagrangian simulations of turbulent mixing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watanabe, T., E-mail: watanabe.tomoaki@c.nagoya-u.jp; Nagata, K.

    We report on the numerical study of the mixing volume model (MVM) for molecular diffusion in Lagrangian simulations of turbulent mixing problems. The MVM is based on the multi-particle interaction in a finite volume (mixing volume). A priori test of the MVM, based on the direct numerical simulations of planar jets, is conducted in the turbulent region and the interfacial layer between the turbulent and non-turbulent fluids. The results show that the MVM predicts well the mean effects of the molecular diffusion under various numerical and flow parameters. The number of the mixing particles should be large for predicting a value of the molecular diffusion term positively correlated to the exact value. The size of the mixing volume relative to the Kolmogorov scale η is important in the performance of the MVM. The scalar transfer across the turbulent/non-turbulent interface is well captured by the MVM, especially with the small mixing volume. Furthermore, the MVM with multiple mixing particles is tested in the hybrid implicit large-eddy-simulation/Lagrangian-particle-simulation (LES–LPS) of the planar jet with the characteristic length of the mixing volume of O(100η). Despite the large mixing volume, the MVM works well and decays the scalar variance at a rate close to the reference LES. The statistics in the LPS are very robust to the number of the particles used in the simulations and the computational grid size of the LES. Both in the turbulent core region and the intermittent region, the LPS predicts a scalar field well correlated to the LES.

  5. Detecting an Extended Light Source through a Lens

    ERIC Educational Resources Information Center

    Litaker, E. T.; Machacek, J. R.; Gay, T. J.

    2011-01-01

    We present a Monte Carlo simulation of a cylindrical luminescent volume and a typical lens-detector system. The results of this simulation yield a graphically simple picture of the regions within the cylindrical volume from which this system detects light. Because the cylindrical volume permits large angles of incidence, we use a modification of…

  6. Large-scale three-dimensional phase-field simulations for phase coarsening at ultrahigh volume fraction on high-performance architectures

    NASA Astrophysics Data System (ADS)

    Yan, Hui; Wang, K. G.; Jones, Jim E.

    2016-06-01

    A parallel algorithm for large-scale three-dimensional phase-field simulations of phase coarsening is developed and implemented on high-performance architectures. The large-scale simulations reveal new kinetics of phase coarsening in the regime of ultrahigh volume fraction. The parallel implementation is capable of harnessing the greater computing power available from high-performance architectures, and the parallelized code enables an increase in the three-dimensional simulation system size up to a 512³ grid cube. With the parallelized code, practical runtimes can be achieved for three-dimensional large-scale simulations, and the statistical significance of the results from these high-resolution parallel simulations is greatly improved over that obtainable from serial simulations. A detailed performance analysis of speed-up and scalability is presented, showing good scalability that improves with increasing problem size. In addition, a model for predicting runtime is developed, which shows good agreement with actual runtimes from numerical tests.
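    A runtime-prediction model of the kind mentioned can be illustrated by fitting a simple two-term cost law, T ≈ a·N³/P (compute) + b·N² (communication/overhead), to wall-clock measurements with linear least squares. The timing numbers and the two-term form below are illustrative assumptions, not the paper's data or model.

```python
import numpy as np

# Hypothetical timings: grid points per side N, process count P, seconds T.
N = np.array([128, 256, 256, 512, 512], dtype=float)
P = np.array([8, 8, 64, 64, 512], dtype=float)
T = np.array([30.1, 240.5, 31.8, 250.3, 34.9])

# Least-squares fit of T ~ a*N^3/P + b*N^2.
A = np.column_stack([N**3 / P, N**2])
(a, b), *_ = np.linalg.lstsq(A, T, rcond=None)
T_pred = A @ np.array([a, b])
print(f"a={a:.3e}, b={b:.3e}")   # fitted compute and overhead coefficients
```

    Once fitted, the same matrix product predicts runtimes for unseen (N, P) combinations, which is the practical use of such a model when requesting allocations.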

  7. The Q continuum simulation: Harnessing the power of GPU accelerated supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heitmann, Katrin; Frontiere, Nicholas; Sewell, Chris

    2015-08-01

    Modeling large-scale sky survey observations is a key driver for the continuing development of high-resolution, large-volume, cosmological simulations. We report the first results from the "Q Continuum" cosmological N-body simulation run carried out on the GPU-accelerated supercomputer Titan. The simulation encompasses a volume of (1300 Mpc)³ and evolves more than half a trillion particles, leading to a particle mass resolution of m_p ≃ 1.5 × 10⁸ M⊙. At this mass resolution, the Q Continuum run is currently the largest cosmology simulation available. It enables the construction of detailed synthetic sky catalogs, encompassing different modeling methodologies, including semi-analytic modeling and sub-halo abundance matching in a large, cosmological volume. Here we describe the simulation and outputs in detail and present first results for a range of cosmological statistics, such as mass power spectra, halo mass functions, and halo mass-concentration relations for different epochs. We also provide details on challenges connected to running a simulation on almost 90% of Titan, one of the fastest supercomputers in the world, including our usage of Titan's GPU accelerators.
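    The quoted particle mass follows from the mean matter density, the box volume, and the particle count: m_p = Ω_m·ρ_crit·L³/N. As a plausibility check, the sketch below recovers ≈1.5 × 10⁸ M⊙; the cosmological parameters and the 8192³ particle count are assumed WMAP-7-like values consistent with "more than half a trillion particles", not numbers stated in the abstract.

```python
# Plausibility check of the quoted particle mass m_p ~ 1.5e8 Msun.
OMEGA_M = 0.265                # assumed matter density parameter
H = 0.71                       # assumed Hubble parameter (100h km/s/Mpc)
RHO_CRIT = 2.775e11 * H**2     # critical density, Msun / Mpc^3
L = 1300.0                     # box side from the abstract, Mpc
N_PART = 8192**3               # assumed particle load, ~5.5e11 particles

m_p = OMEGA_M * RHO_CRIT * L**3 / N_PART
print(f"m_p = {m_p:.2e} Msun")   # ~1.5e8 Msun, matching the abstract
```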

  8. Manufacture and Testing of an Activation Foil Package for Use in AFIDS

    DTIC Science & Technology

    2005-03-01

    Miller. Nuclides and Isotopes, 16th ed. Lockheed Martin, 2002. 4. Broadhead, Bryan. Sr. Development Staff, Reactor and Fuel Cycle Analysis ...alternative, the concept of using liquid nitrous oxide inside a reactor to simulate large volumes of air was investigated. Simulation using the...weapon. We analyzed whether N2O could replicate large volumes of air in neutron transport experiments, since one cubic centimeter of liquid N2O

  9. Physiologic mechanisms of circulatory and body fluid losses in weightlessness identified by mathematical modeling

    NASA Technical Reports Server (NTRS)

    Simanonok, K. E.; Srinivasan, R. S.; Charles, J. B.

    1993-01-01

    Central volume expansion due to fluid shifts in weightlessness is believed to activate adaptive reflexes which ultimately result in a reduction of the total circulating blood volume. However, the flight data suggest that a central volume overdistention does not persist, in which case some other factor or factors must be responsible for body fluid losses. We used a computer simulation to test the hypothesis that factors other than central volume overdistention are involved in the loss of blood volume and other body fluid volumes observed in weightlessness and in weightless simulations. Additionally, the simulation was used to identify these factors. The results predict that atrial volumes and pressures return to their prebedrest baseline values within the first day of exposure to head down tilt (HDT) as the blood volume is reduced by an elevated urine formation. They indicate that the mechanism for large and prolonged body fluid losses in weightlessness is red cell hemoconcentration, which elevates blood viscosity and peripheral resistance, thereby lowering capillary pressure. This causes a prolonged alteration of the balance of Starling forces, depressing the extracellular fluid volume until the hematocrit is returned to normal through a reduction of the red cell mass, which also allows some restoration of the plasma volume. We conclude that the red cell mass becomes the physiologic driver for a large 'undershoot' of the body fluid volumes after the normalization of atrial volumes and pressures.

  10. Numerical simulation of seismic wave propagation from land-excited large volume air-gun source

    NASA Astrophysics Data System (ADS)

    Cao, W.; Zhang, W.

    2017-12-01

    The land-excited large volume air-gun source can be used to study regional underground structures and to detect temporal velocity changes. The air-gun source is characterized by rich low-frequency energy (from bubble oscillation, 2-8 Hz) and high repeatability. It can be excited in rivers, reservoirs, or man-made pools. Numerical simulation of seismic wave propagation from the air-gun source helps in understanding the energy partitioning and the characteristics of waveform records at stations. However, the effective energy recorded at a distant station comes from the process of bubble oscillation, which cannot be approximated by a single point source. We propose a method to simulate seismic wave propagation from the land-excited large volume air-gun source by the finite difference method. The process can be divided into three parts: bubble oscillation and source coupling, solid-fluid coupling, and propagation in the solid medium. For the first part, the wavelet of the bubble oscillation can be simulated with a bubble model. We use a wave injection method that combines the bubble wavelet with the elastic wave equation to achieve the source coupling. The solid-fluid boundary condition is then implemented along the water bottom. The last part, seismic wave propagation in the solid medium, is readily implemented by the finite difference method. Our method obtains accurate waveforms for the land-excited large volume air-gun source. Based on this forward modeling technology, we analyze the excited P wave and the energy of the converted S wave for different water body shapes. We study two land-excited large volume air-gun fields: Binchuan in Yunnan and Hutubi in Xinjiang. The station in Binchuan, Yunnan is located in a large irregular reservoir, and its waveform records show a clear S wave; the station in Hutubi, Xinjiang is located in a small man-made pool, and its waveform records show a very weak S wave. A better understanding of the characteristics of the land-excited large volume air-gun can help make better use of the air-gun source.

  11. The Simulation of a Jumbo Jet Transport Aircraft. Volume 2: Modeling Data

    NASA Technical Reports Server (NTRS)

    Hanke, C. R.; Nordwall, D. R.

    1970-01-01

    The manned simulation of a large transport aircraft is described. Aircraft and systems data necessary to implement the mathematical model described in Volume I are presented, together with a discussion of how these data are used in the model. The results of the real-time computations in the NASA Ames Research Center Flight Simulator for Advanced Aircraft are shown and compared to flight test data and to the results obtained in a training simulator known to be satisfactory.

  12. Computational study of noise in a large signal transduction network.

    PubMed

    Intosalmi, Jukka; Manninen, Tiina; Ruohonen, Keijo; Linne, Marja-Leena

    2011-06-21

    Biochemical systems are inherently noisy due to the discrete reaction events that occur in a random manner. Although noise is often perceived as a disturbing factor, the system might actually benefit from it. In order to understand the role of noise better, its quality must be studied in a quantitative manner. Computational analysis and modeling play an essential role in this demanding endeavor. We implemented a large nonlinear signal transduction network combining protein kinase C, mitogen-activated protein kinase, phospholipase A2, and β isoform of phospholipase C networks. We simulated the network in 300 different cellular volumes using the exact Gillespie stochastic simulation algorithm and analyzed the results in both the time and frequency domains. In order to perform simulations in a reasonable time, we used modern parallel computing techniques. The analysis revealed that time and frequency domain characteristics depend on the system volume. The simulation results also indicated that there are several kinds of noise processes in the network, all of them representing different kinds of low-frequency fluctuations. In the simulations, the power of noise decreased at all frequencies when the system volume was increased. We concluded that basic frequency domain techniques can be applied to the analysis of simulation results produced by the Gillespie stochastic simulation algorithm. This approach is suited not only to the study of fluctuations but also to the study of pure noise processes. Noise seems to have an important role in biochemical systems, and its properties can be numerically studied by simulating the reacting system in different cellular volumes. Parallel computing techniques make it possible to run massive simulations in hundreds of volumes and, as a result, accurate statistics can be obtained from computational studies. © 2011 Intosalmi et al; licensee BioMed Central Ltd.
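    The exact Gillespie direct method used for the volume scan can be sketched for a single-species birth-death process; the rate constants, volume, and function name below are illustrative assumptions, not the study's network. The zeroth-order production propensity scales with the system volume, which is why larger volumes yield more molecules and relatively smaller fluctuations.

```python
import math
import random

def gillespie_birth_death(k_prod, k_deg, volume, t_end, seed=1):
    """Direct-method SSA for one species with zeroth-order production and
    first-order degradation (a sketch, not the paper's network)."""
    rng = random.Random(seed)
    t, n = 0.0, 0
    while t < t_end:
        a1 = k_prod * volume                # production propensity
        a2 = k_deg * n                      # degradation propensity
        a0 = a1 + a2
        t += -math.log(1.0 - rng.random()) / a0   # exponential waiting time
        if rng.random() * a0 < a1:
            n += 1                          # production event fires
        else:
            n -= 1                          # degradation event fires
    return n

# Stationary copy number is Poisson with mean k_prod*volume/k_deg = 50
n_final = gillespie_birth_death(k_prod=10.0, k_deg=1.0, volume=5.0, t_end=50.0)
print(n_final)
```

    Repeating the run over a range of `volume` values and comparing the relative fluctuations reproduces, in miniature, the volume dependence of noise that the study analyzes.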

  13. Modeling the Lyα Forest in Collisionless Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sorini, Daniele; Oñorbe, José; Lukić, Zarija

    2016-08-11

    Cosmological hydrodynamic simulations can accurately predict the properties of the intergalactic medium (IGM), but only under the condition of retaining the high spatial resolution necessary to resolve density fluctuations in the IGM. This resolution constraint prohibits simulating large volumes, such as those probed by BOSS and future surveys like DESI and 4MOST. To overcome this limitation, we present in this paper "Iteratively Matched Statistics" (IMS), a novel method to accurately model the Lyα forest with collisionless N-body simulations, where the relevant density fluctuations are unresolved. We use a small-box, high-resolution hydrodynamic simulation to obtain the probability distribution function (PDF) and the power spectrum of the real-space Lyα forest flux. These two statistics are iteratively mapped onto a pseudo-flux field of an N-body simulation, which we construct from the matter density. We demonstrate that our method can reproduce the PDF, line-of-sight and 3D power spectra of the Lyα forest with good accuracy (7%, 4%, and 7%, respectively). We quantify the performance of the commonly used Gaussian smoothing technique and show that it has significantly lower accuracy (20%-80%), especially for N-body simulations with achievable mean inter-particle separations in large-volume simulations. Finally, we show that IMS produces reasonable and smooth spectra, making it a powerful tool for modeling the IGM in large cosmological volumes and for producing realistic "mock" skies for Lyα forest surveys.
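    The PDF-matching half of an IMS-style iteration can be illustrated with a rank-order substitution. This is an illustrative sketch under assumed simplifications, not the published algorithm: each pseudo-flux value is replaced by the equal-rank value drawn from the target (hydro) flux distribution, which imposes the target PDF while preserving the field's spatial ordering.

```python
import numpy as np

def match_pdf(field, target_samples, seed=0):
    """Equal-rank substitution: impose the PDF of target_samples on field
    while preserving its ordering (sketch, not the published IMS)."""
    rng = np.random.default_rng(seed)
    order = np.argsort(field)
    target = np.sort(rng.choice(target_samples, size=field.size, replace=True))
    out = np.empty(field.size, dtype=float)
    out[order] = target               # i-th smallest gets i-th smallest target
    return out

pseudo = np.random.default_rng(1).normal(size=1000)       # stand-in pseudo-flux
hydro = np.random.default_rng(2).uniform(0.0, 1.0, 5000)  # stand-in target flux
matched = match_pdf(pseudo, hydro)
print(matched.min() >= 0.0 and matched.max() <= 1.0)      # has the target range
```

    The full method alternates a pass like this with a power-spectrum matching pass, iterating until both statistics agree with the hydro reference.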

  14. MODELING THE Lyα FOREST IN COLLISIONLESS SIMULATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sorini, Daniele; Oñorbe, José; Hennawi, Joseph F.

    2016-08-20

    Cosmological hydrodynamic simulations can accurately predict the properties of the intergalactic medium (IGM), but only under the condition of retaining the high spatial resolution necessary to resolve density fluctuations in the IGM. This resolution constraint prohibits simulating large volumes, such as those probed by BOSS and future surveys like DESI and 4MOST. To overcome this limitation, we present "Iteratively Matched Statistics" (IMS), a novel method to accurately model the Lyα forest with collisionless N-body simulations, where the relevant density fluctuations are unresolved. We use a small-box, high-resolution hydrodynamic simulation to obtain the probability distribution function (PDF) and the power spectrum of the real-space Lyα forest flux. These two statistics are iteratively mapped onto a pseudo-flux field of an N-body simulation, which we construct from the matter density. We demonstrate that our method can reproduce the PDF, line-of-sight and 3D power spectra of the Lyα forest with good accuracy (7%, 4%, and 7%, respectively). We quantify the performance of the commonly used Gaussian smoothing technique and show that it has significantly lower accuracy (20%-80%), especially for N-body simulations with achievable mean inter-particle separations in large-volume simulations. In addition, we show that IMS produces reasonable and smooth spectra, making it a powerful tool for modeling the IGM in large cosmological volumes and for producing realistic "mock" skies for Lyα forest surveys.

  15. Computer simulation of preflight blood volume reduction as a countermeasure to fluid shifts in space flight

    NASA Technical Reports Server (NTRS)

    Simanonok, K. E.; Srinivasan, R.; Charles, J. B.

    1992-01-01

    Fluid shifts in weightlessness may cause a central volume expansion, activating reflexes to reduce the blood volume. Computer simulation was used to test the hypothesis that preadaptation of the blood volume prior to exposure to weightlessness could counteract the central volume expansion due to fluid shifts and thereby attenuate the circulatory and renal responses that result in large losses of fluid from body water compartments. The Guyton Model of Fluid, Electrolyte, and Circulatory Regulation was modified to simulate the six-degree head down tilt that is frequently used as an experimental analog of weightlessness in bedrest studies. Simulation results show that preadaptation of the blood volume by a procedure resembling a blood donation immediately before head down bedrest is beneficial in damping the physiologic responses to fluid shifts and reducing body fluid losses. After ten hours of head down tilt, blood volume after preadaptation is higher than control for 20 to 30 days of bedrest. Preadaptation also produces potentially beneficial higher extracellular volume and total body water for 20 to 30 days of bedrest.

  16. Recent progress in simulating galaxy formation from the largest to the smallest scales

    NASA Astrophysics Data System (ADS)

    Faucher-Giguère, Claude-André

    2018-05-01

    Galaxy formation simulations are an essential part of the modern toolkit of astrophysicists and cosmologists alike. Astrophysicists use the simulations to study the emergence of galaxy populations from the Big Bang, as well as the formation of stars and supermassive black holes. For cosmologists, galaxy formation simulations are needed to understand how baryonic processes affect measurements of dark matter and dark energy. Owing to the extreme dynamic range of galaxy formation, advances are driven by novel approaches using simulations with different tradeoffs between volume and resolution. Large-volume but low-resolution simulations provide the best statistics, while higher-resolution simulations of smaller cosmic volumes can be evolved with self-consistent physics and reveal important emergent phenomena. I summarize recent progress in galaxy formation simulations, including major developments in the past five years, and highlight some key areas likely to drive further advances over the next decade.

  17. A Mixed Finite Volume Element Method for Flow Calculations in Porous Media

    NASA Technical Reports Server (NTRS)

    Jones, Jim E.

    1996-01-01

    A key ingredient in the simulation of flow in porous media is the accurate determination of the velocities that drive the flow. The large-scale irregularities of the geology, such as faults, fractures, and layers, suggest the use of irregular grids in the simulation. Work has been done in applying the finite volume element (FVE) methodology, as developed by McCormick, in conjunction with mixed methods, which were developed by Raviart and Thomas. The resulting mixed finite volume element discretization scheme has the potential to generate more accurate solutions than standard approaches. The focus of this paper is on a multilevel algorithm for solving the discrete mixed FVE equations. The algorithm uses a standard cell-centered finite difference scheme as the 'coarse' level and the more accurate mixed FVE scheme as the 'fine' level. The algorithm appears to have potential as a fast solver for large simulations of flow in porous media.

  18. Optimizing for Large Planar Fractures in Multistage Horizontal Wells in Enhanced Geothermal Systems Using a Coupled Fluid and Geomechanics Simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Xiexiaomen; Tutuncu, Azra; Eustes, Alfred

    Enhanced Geothermal Systems (EGS) could potentially use technological advancements in the coupled implementation of horizontal drilling and multistage hydraulic fracturing techniques from tight oil and shale gas reservoirs, along with improvements in reservoir simulation techniques, to design and create EGS reservoirs. In this study, a commercial hydraulic fracture simulation package, Mangrove by Schlumberger, was used in an EGS model with largely distributed pre-existing natural fractures to model fracture propagation during the creation of a complex fracture network. The main goal of this study is to investigate optimum treatment parameters for creating multiple large, planar fractures to hydraulically connect a horizontal injection well and a horizontal production well that are 10,000 ft. deep and spaced 500 ft. apart from each other. A matrix of simulations for this study was carried out to determine the influence of reservoir and treatment parameters on preventing (or aiding) the creation of large planar fractures. The reservoir parameters investigated during the matrix simulations include the in-situ stress state and properties of the natural fracture set, such as the primary and secondary fracture orientation, average fracture length, and average fracture spacing. The treatment parameters investigated during the simulations were fluid viscosity, proppant concentration, pump rate, and pump volume. A final simulation with optimized design parameters was performed. The optimized design simulation indicated that high fluid viscosity, high proppant concentration, and large pump volume and pump rate tend to minimize the complexity of the created fracture network.
    Additionally, a reservoir with 'friendly' formation characteristics, such as large stress anisotropy, a natural fracture set parallel to the maximum horizontal principal stress (SHmax), and large natural fracture spacing, also promotes the creation of large planar fractures while minimizing fracture complexity.

  19. The Monte Carlo simulation of the Borexino detector

    NASA Astrophysics Data System (ADS)

    Agostini, M.; Altenmüller, K.; Appel, S.; Atroshchenko, V.; Bagdasarian, Z.; Basilico, D.; Bellini, G.; Benziger, J.; Bick, D.; Bonfini, G.; Borodikhina, L.; Bravo, D.; Caccianiga, B.; Calaprice, F.; Caminata, A.; Canepa, M.; Caprioli, S.; Carlini, M.; Cavalcante, P.; Chepurnov, A.; Choi, K.; D'Angelo, D.; Davini, S.; Derbin, A.; Ding, X. F.; Di Noto, L.; Drachnev, I.; Fomenko, K.; Formozov, A.; Franco, D.; Froborg, F.; Gabriele, F.; Galbiati, C.; Ghiano, C.; Giammarchi, M.; Goeger-Neff, M.; Goretti, A.; Gromov, M.; Hagner, C.; Houdy, T.; Hungerford, E.; Ianni, Aldo; Ianni, Andrea; Jany, A.; Jeschke, D.; Kobychev, V.; Korablev, D.; Korga, G.; Kryn, D.; Laubenstein, M.; Litvinovich, E.; Lombardi, F.; Lombardi, P.; Ludhova, L.; Lukyanchenko, G.; Machulin, I.; Magnozzi, M.; Manuzio, G.; Marcocci, S.; Martyn, J.; Meroni, E.; Meyer, M.; Miramonti, L.; Misiaszek, M.; Muratova, V.; Neumair, B.; Oberauer, L.; Opitz, B.; Ortica, F.; Pallavicini, M.; Papp, L.; Pocar, A.; Ranucci, G.; Razeto, A.; Re, A.; Romani, A.; Roncin, R.; Rossi, N.; Schönert, S.; Semenov, D.; Shakina, P.; Skorokhvatov, M.; Smirnov, O.; Sotnikov, A.; Stokes, L. F. F.; Suvorov, Y.; Tartaglia, R.; Testera, G.; Thurn, J.; Toropova, M.; Unzhakov, E.; Vishneva, A.; Vogelaar, R. B.; von Feilitzsch, F.; Wang, H.; Weinz, S.; Wojcik, M.; Wurm, M.; Yokley, Z.; Zaimidoroga, O.; Zavatarelli, S.; Zuber, K.; Zuzel, G.

    2018-01-01

    We describe the Monte Carlo (MC) simulation of the Borexino detector and the agreement of its output with data. The Borexino MC "ab initio" simulates the energy loss of particles in all detector components and generates the resulting scintillation photons and their propagation within the liquid scintillator volume. The simulation accounts for absorption, reemission, and scattering of the optical photons and tracks them until they either are absorbed or reach the photocathode of one of the photomultiplier tubes. Photon detection is followed by a comprehensive simulation of the readout electronics response. The MC is tuned using data collected with radioactive calibration sources deployed inside and around the scintillator volume. The simulation reproduces the energy response of the detector, its uniformity within the fiducial scintillator volume relevant to neutrino physics, and the time distribution of detected photons to better than 1% between 100 keV and several MeV. The techniques developed to simulate the Borexino detector and their level of refinement are of possible interest to the neutrino community, especially for current and future large-volume liquid scintillator experiments such as KamLAND-Zen, SNO+, and JUNO.
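    The photon-tracking loop at the core of such a simulation can be sketched as a toy random walk: exponential free paths, competing absorption and isotropic scattering, until the photon is absorbed or crosses the volume boundary. This is a sketch of the general technique only; the optical lengths and spherical geometry are illustrative assumptions, not Borexino's tuned optical model.

```python
import math
import random

def track_photon(rng, radius=4.25, abs_len=10.0, scat_len=50.0):
    """Toy transport of one optical photon in a spherical scintillator
    (illustrative sketch; not the Borexino MC)."""
    mfp = 1.0 / (1.0 / abs_len + 1.0 / scat_len)  # combined mean free path
    p_absorb = mfp / abs_len                      # absorption branch per step
    pos = [0.0, 0.0, 0.0]
    direc = [0.0, 0.0, 1.0]
    while True:
        step = -math.log(1.0 - rng.random()) * mfp   # exponential free path
        pos = [p + d * step for p, d in zip(pos, direc)]
        if sum(p * p for p in pos) > radius**2:
            return True                           # photon reaches the boundary
        if rng.random() < p_absorb:
            return False                          # absorbed in the bulk
        z = 2.0 * rng.random() - 1.0              # isotropic re-scatter
        phi = 2.0 * math.pi * rng.random()
        s = math.sqrt(1.0 - z * z)
        direc = [s * math.cos(phi), s * math.sin(phi), z]

rng = random.Random(7)
frac = sum(track_photon(rng) for _ in range(2000)) / 2000.0
print(0.4 < frac < 0.85)   # most photons escape at these optical lengths
```

    The real simulation adds reemission, wavelength dependence, and the photomultiplier and electronics response on top of a loop of this shape.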

  20. Feasibility of large volume tumor ablation using multiple-mode strategy with fast scanning method: A numerical study

    NASA Astrophysics Data System (ADS)

    Wu, Hao; Shen, Guofeng; Qiao, Shan; Chen, Yazhu

    2017-03-01

    Sonication with a fast scanning method can generate homogeneous lesions without complex planning. However, when the target region is large, switching the focus too quickly reduces heat accumulation, and the margin of the region may not be ablated. Furthermore, a high blood perfusion rate reduces the maximum volume that can be ablated. Therefore, the fast scanning method may not be applicable to large-volume tumors. To expand the scope of therapy, this study combines the fast scanning method with a multiple-mode strategy. Through simulation and experiment, the feasibility of this new strategy is evaluated and analyzed.

  1. Can Atmospheric Reanalysis Data Sets Be Used to Reproduce Flooding Over Large Scales?

    NASA Astrophysics Data System (ADS)

    Andreadis, Konstantinos M.; Schumann, Guy J.-P.; Stampoulis, Dimitrios; Bates, Paul D.; Brakenridge, G. Robert; Kettner, Albert J.

    2017-10-01

    Floods are costly to global economies and can be exceptionally lethal. The ability to produce consistent flood hazard maps over large areas could provide a significant contribution to reducing such losses, as the lack of knowledge concerning flood risk is a major factor in the transformation of river floods into flood disasters. In order to accurately reproduce flooding in river channels and floodplains, high spatial resolution hydrodynamic models are needed. Despite being computationally expensive, recent advances have made their continental to global implementation feasible, although inputs for long-term simulations may require the use of reanalysis meteorological products, especially in data-poor regions. We employ a coupled hydrologic/hydrodynamic model cascade forced by the 20CRv2 reanalysis data set and evaluate its ability to reproduce flood inundation area and volume for Australia during the 1973-2012 period. Ensemble simulations using the reanalysis data were performed to account for uncertainty in the meteorology and compared with a validated benchmark simulation. Results show that the reanalysis ensemble captures the inundated areas and volumes relatively well, with correlations for the ensemble mean of 0.82 and 0.85 for area and volume, respectively, although the meteorological ensemble spread propagates into large uncertainty in the simulated flood characteristics.
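    The evaluation step described, correlating the ensemble-mean time series with a benchmark run while tracking the ensemble spread, can be sketched with synthetic stand-in data (the arrays below are not the 20CRv2 ensemble or the Australian benchmark).

```python
import numpy as np

def ensemble_skill(members, benchmark):
    """Correlation of the ensemble-mean series with a benchmark, plus the
    average ensemble spread (sketch with synthetic stand-in data)."""
    mean_series = members.mean(axis=0)          # ensemble mean time series
    r = np.corrcoef(mean_series, benchmark)[0, 1]
    spread = members.std(axis=0).mean()         # average ensemble spread
    return r, spread

rng = np.random.default_rng(0)
benchmark = np.sin(np.linspace(0.0, 6.0, 120))              # benchmark "volume"
members = benchmark + rng.normal(0.0, 0.4, size=(20, 120))  # 20 noisy members
r, spread = ensemble_skill(members, benchmark)
print(r > 0.9)   # averaging 20 members suppresses the meteorological noise
```

    The spread value is what propagates into the uncertainty band around the simulated flood characteristics.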

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaurov, Alexander A., E-mail: kaurov@uchicago.edu

    The methods for studying the epoch of cosmic reionization vary from full radiative transfer simulations to purely analytical models. While numerical approaches are computationally expensive and are not suitable for generating many mock catalogs, analytical methods are based on assumptions and approximations. We explore the interconnection between both methods. First, we ask how the analytical framework of excursion set formalism can be used for statistical analysis of numerical simulations and visual representation of the morphology of ionization fronts. Second, we explore the methods of training the analytical model on a given numerical simulation. We present a new code which emerged from this study. Its main application is to match the analytical model with a numerical simulation. Then, it allows one to generate mock reionization catalogs with volumes exceeding the original simulation quickly and computationally inexpensively, meanwhile reproducing large-scale statistical properties. These mock catalogs are particularly useful for cosmic microwave background polarization and 21 cm experiments, where large volumes are required to simulate the observed signal.

  3. Shock Interaction with Random Spherical Particle Beds

    NASA Astrophysics Data System (ADS)

    Neal, Chris; Mehta, Yash; Salari, Kambiz; Jackson, Thomas L.; Balachandar, S. "Bala"; Thakur, Siddharth

    2016-11-01

    In this talk we present results on fully resolved simulations of shock interaction with randomly distributed beds of particles. Multiple simulations were carried out by varying the number of particles to isolate the effect of volume fraction. The major focus of these simulations was to understand 1) the effect of the shockwave and volume fraction on the forces experienced by the particles, 2) the effect of particles on the shock wave, and 3) fluid-mediated particle-particle interactions. The peak drag force for particles at different volume fractions shows a downward trend as the depth of the bed increases. This can be attributed to dissipation of energy as the shockwave travels through the bed of particles. One of the fascinating observations from these simulations was the fluctuations in different quantities due to the presence of multiple particles and their random distribution. These are large simulations with hundreds of particles resulting in a large amount of data. We present statistical analysis of the data and make relevant observations. The average pressure in the computational domain is computed to characterize the strengths of the reflected and transmitted waves. We also present flow field contour plots to support our observations. U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program, as a Cooperative Agreement under the Predictive Science Academic Alliance Program, under Contract No. DE-NA0002378.

  4. Parsing partial molar volumes of small molecules: a molecular dynamics study.

    PubMed

    Patel, Nisha; Dubins, David N; Pomès, Régis; Chalikian, Tigran V

    2011-04-28

    We used molecular dynamics (MD) simulations in conjunction with the Kirkwood-Buff theory to compute the partial molar volumes for a number of small solutes of various chemical natures. We repeated our computations using modified pair potentials, first, in the absence of the Coulombic term and, second, in the absence of the Coulombic and the attractive Lennard-Jones terms. Comparison of our results with experimental data and the volumetric results of Monte Carlo simulation with hard sphere potentials and scaled particle theory-based computations led us to conclude that, for small solutes, the partial molar volume computed with the Lennard-Jones potential in the absence of the Coulombic term nearly coincides with the cavity volume. On the other hand, MD simulations carried out with pair interaction potentials containing only the repulsive Lennard-Jones term produce unrealistically large partial molar volumes of solutes that are close to their excluded volumes. Our simulation results are in good agreement with the reported schemes for parsing partial molar volume data on small solutes. In particular, our determined interaction volumes and the thickness of the thermal volume for individual compounds are in good agreement with empirical estimates. This work is the first computational study that supports and lends credence to the practical algorithms of parsing partial molar volume data that are currently in use for molecular interpretations of volumetric data.
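
    The Kirkwood-Buff route used above can be sketched compactly: at infinite dilution the partial molar volume is V = kT*kappa_T - G_uv, where G_uv is the solute-solvent Kirkwood-Buff integral. The sketch below assumes a precomputed solute-water radial distribution function g(r); it is not the authors' MD pipeline.

```python
import numpy as np

def kb_integral(r, g):
    """Kirkwood-Buff integral G = int (g(r) - 1) * 4*pi*r^2 dr,
    evaluated by the trapezoidal rule on a sampled g(r)."""
    y = (g - 1.0) * 4.0 * np.pi * r**2
    return 0.5 * np.sum((y[1:] + y[:-1]) * np.diff(r))

def partial_molar_volume(r, g_uv, kT_kappa_T):
    """Infinite-dilution partial molar volume from Kirkwood-Buff
    theory: V = kT * kappa_T - G_uv (units follow the inputs)."""
    return kT_kappa_T - kb_integral(r, g_uv)
```

    An ideal-gas-like solute with g(r) = 1 everywhere gives G_uv = 0, so the partial molar volume reduces to the compressibility term alone.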

  5. Real-time simulation of large-scale floods

    NASA Astrophysics Data System (ADS)

    Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.

    2016-08-01

    Given complex real-time water situations, the real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance numerical stability, and an adaptive method is proposed to improve running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.
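
    A Godunov-type finite volume scheme of this kind evaluates a numerical flux at every cell face. Below is a minimal sketch using the HLL flux for the 1D shallow-water equations with a simple wet/dry depth threshold; both the HLL choice and the threshold treatment are illustrative assumptions, since the abstract does not specify the paper's exact flux or wet/dry method.

```python
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def hll_flux(hL, huL, hR, huR, h_dry=1e-6):
    """HLL numerical flux for the 1D shallow-water equations
    [h, hu], with velocities zeroed below a dry-depth threshold."""
    def state(h, hu):
        u = hu / h if h > h_dry else 0.0        # wet/dry guard
        flux = np.array([hu, hu * u + 0.5 * G * h * h])
        return flux, u
    FL, uL = state(hL, huL)
    FR, uR = state(hR, huR)
    cL, cR = np.sqrt(G * max(hL, 0.0)), np.sqrt(G * max(hR, 0.0))
    sL = min(uL - cL, uR - cR)  # leftmost wave speed estimate
    sR = max(uL + cL, uR + cR)  # rightmost wave speed estimate
    if sL >= 0.0:
        return FL
    if sR <= 0.0:
        return FR
    UL, UR = np.array([hL, huL]), np.array([hR, huR])
    return (sR * FL - sL * FR + sL * sR * (UR - UL)) / (sR - sL)
```

    For a lake at rest (equal depths, zero discharge) the mass flux vanishes and only the hydrostatic pressure term 0.5*G*h^2 remains, which is the well-balancing sanity check for such schemes.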

  6. Design and Analysis of A Beacon-Less Routing Protocol for Large Volume Content Dissemination in Vehicular Ad Hoc Networks.

    PubMed

    Hu, Miao; Zhong, Zhangdui; Ni, Minming; Baiocchi, Andrea

    2016-11-01

    Large volume content dissemination is pursued by a growing number of high-quality applications for Vehicular Ad hoc NETworks (VANETs), e.g., the live road surveillance service and the video-based overtaking assistant service. For the highly dynamic vehicular network topology, beacon-less routing protocols have proven efficient in balancing system performance against control overhead. However, to the authors' best knowledge, routing design for large volume content has not been well considered in previous work; it introduces new challenges, e.g., the enhanced connectivity requirement for a radio link. In this paper, a link Lifetime-aware Beacon-less Routing Protocol (LBRP) is designed for large volume content delivery in VANETs. Each vehicle makes the forwarding decision based on the message header information and its current state, including speed and position information. A semi-Markov process analytical model is proposed to evaluate the expected delay in constructing one routing path for LBRP. Simulations show that the proposed LBRP scheme outperforms traditional dissemination protocols in providing a low end-to-end delay. The analytical model is also shown to match Monte Carlo simulations well on delay estimation.
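
    The beacon-less forwarding decision can be illustrated with a generic contention-timer rule: each receiver delays its reply in inverse proportion to its forward progress and its predicted link lifetime, so the best-placed relay answers first and suppresses the others. This rule and its parameters are assumptions for illustration; the paper defines its own LBRP metric.

```python
def contention_timer(progress, max_range, lifetime,
                     lifetime_ref=10.0, t_max=0.1):
    """Generic beacon-less contention timer (illustrative, not the
    exact LBRP rule): score in [0, 1] grows with forward progress
    toward the destination and with predicted link lifetime; a
    higher score means a shorter wait before rebroadcasting."""
    score = (progress / max_range) * min(lifetime / lifetime_ref, 1.0)
    return t_max * (1.0 - score)
```

    A vehicle 200 m ahead of the sender thus fires its timer before one only 50 m ahead, which is how beacon-less schemes elect a relay without periodic hello messages.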

  7. Design and Analysis of A Beacon-Less Routing Protocol for Large Volume Content Dissemination in Vehicular Ad Hoc Networks

    PubMed Central

    Hu, Miao; Zhong, Zhangdui; Ni, Minming; Baiocchi, Andrea

    2016-01-01

    Large volume content dissemination is pursued by a growing number of high-quality applications for Vehicular Ad hoc NETworks (VANETs), e.g., the live road surveillance service and the video-based overtaking assistant service. For the highly dynamic vehicular network topology, beacon-less routing protocols have proven efficient in balancing system performance against control overhead. However, to the authors’ best knowledge, routing design for large volume content has not been well considered in previous work; it introduces new challenges, e.g., the enhanced connectivity requirement for a radio link. In this paper, a link Lifetime-aware Beacon-less Routing Protocol (LBRP) is designed for large volume content delivery in VANETs. Each vehicle makes the forwarding decision based on the message header information and its current state, including speed and position information. A semi-Markov process analytical model is proposed to evaluate the expected delay in constructing one routing path for LBRP. Simulations show that the proposed LBRP scheme outperforms traditional dissemination protocols in providing a low end-to-end delay. The analytical model is also shown to match Monte Carlo simulations well on delay estimation. PMID:27809285

  8. Studying Turbulence Using Numerical Simulation Databases - X Proceedings of the 2004 Summer Program

    NASA Technical Reports Server (NTRS)

    Moin, Parviz; Mansour, Nagi N.

    2004-01-01

    This Proceedings volume contains 32 papers that span a wide range of topics that reflect the ubiquity of turbulence. The papers have been divided into six groups: 1) Solar Simulations; 2) Magnetohydrodynamics (MHD); 3) Large Eddy Simulation (LES) and Numerical Simulations; 4) Reynolds Averaged Navier Stokes (RANS) Modeling and Simulations; 5) Stability and Acoustics; 6) Combustion and Multi-Phase Flow.

  9. An engineering closure for heavily under-resolved coarse-grid CFD in large applications

    NASA Astrophysics Data System (ADS)

    Class, Andreas G.; Yu, Fujiang; Jordan, Thomas

    2016-11-01

    Even though high-performance computing allows very detailed description of a wide range of scales in scientific computations, engineering simulations used for design studies commonly resolve only the large scales, thus reducing simulation time. The coarse-grid CFD (CGCFD) methodology is developed for flows with repeated flow patterns, as often observed in heat exchangers or porous structures. It is proposed to use the inviscid Euler equations on a very coarse numerical mesh; this coarse mesh need not conform to the geometry in all details. To reinstate the physics on all smaller scales, cheap subgrid models are employed. Subgrid models are constructed systematically by analyzing well-resolved generic representative simulations. By varying the flow conditions in these simulations, correlations are obtained. These provide, for each individual coarse-mesh cell, a volume force vector and a volume porosity; moreover, for all vertices, surface porosities are derived. CGCFD is related to the immersed boundary method, as both exploit volume forces and non-body-conformal meshes. Yet CGCFD differs with respect to the coarser mesh and the use of the Euler equations. We describe the methodology on a simple test case and the application of the method to a 127-pin wire-wrap fuel bundle.
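
    The closure step can be sketched as a table lookup: the correlations extracted from the well-resolved representative simulations are interpolated for each coarse cell. The single-parameter form (a cell Reynolds number) and all names below are illustrative assumptions, not the actual CGCFD tables.

```python
import numpy as np

def subgrid_closure(re_cell, table_re, table_fx, table_porosity):
    """Interpolate precomputed CGCFD-style subgrid correlations for
    one coarse cell: a volume-force component and a volume porosity
    as functions of a local flow parameter (here an assumed cell
    Reynolds number)."""
    fx = np.interp(re_cell, table_re, table_fx)
    porosity = np.interp(re_cell, table_re, table_porosity)
    return fx, porosity
```

    In a solver, these returned values would enter the coarse-mesh Euler equations as a momentum source and a blockage factor, which is what reinstates the unresolved geometry's effect.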

  10. Bladder filling variation during conformal radiotherapy for rectal cancer

    NASA Astrophysics Data System (ADS)

    Sithamparam, S.; Ahmad, R.; Sabarudin, A.; Othman, Z.; Ismail, M.

    2017-05-01

    Conformal radiotherapy for rectal cancer is associated with small bowel toxicity, mainly diarrhea. Treating patients with a full bladder is one of the practical solutions to reduce small bowel toxicity. Previous studies on prostate and cervix cancer patients revealed that maintaining a consistent bladder volume throughout radiotherapy treatment is challenging. The aim of this study was to measure bladder volume variation throughout radiotherapy treatment. This study also measured the association between bladder volume changes and diarrhea. Twenty-two rectal cancer patients were recruited prospectively. Patients were planned for treatment with a full bladder following the departmental bladder filling protocol, and the planning bladder volume was measured during CT simulation. During radiotherapy, the bladder volume was measured weekly using cone-beam computed tomography (CBCT) and compared to the planning bladder volume. Incidence and severity of diarrhea were recorded during the weekly patient review. There was a negative time trend for bladder volume throughout the five weeks of treatment. The mean bladder volume decreased by 18%, from 123 mL (SD 54 mL) at CT simulation to 101 mL (SD 71 mL) in the 5th week of radiotherapy, but the decrease was not statistically significant. However, there was a large variation of bladder volume within each patient during treatment. This study showed an association between changes of bladder volume and diarrhea (P = 0.045). In conclusion, bladder volume decreased throughout conformal radiotherapy treatment for rectal cancer, and there was a large variation of bladder volume within patients.

  11. Replicable Interprofessional Competency Outcomes from High-Volume, Inter-Institutional, Interprofessional Simulation

    PubMed Central

    Bambini, Deborah; Emery, Matthew; de Voest, Margaret; Meny, Lisa; Shoemaker, Michael J.

    2016-01-01

    There are significant limitations among the few prior studies that have examined the development and implementation of interprofessional education (IPE) experiences to accommodate a high volume of students from several disciplines and from different institutions. The present study addressed these gaps by seeking to determine the extent to which a single, large, inter-institutional, and IPE simulation event improves student perceptions of the importance and relevance of IPE and simulation as a learning modality, whether there is a difference in students’ perceptions among disciplines, and whether the results are reproducible. A total of 290 medical, nursing, pharmacy, and physical therapy students participated in one of two large, inter-institutional, IPE simulation events. Measurements included student perceptions about their simulation experience using the Attitude Towards Teamwork in Training Undergoing Designed Educational Simulation (ATTITUDES) Questionnaire and open-ended questions related to teamwork and communication. Results demonstrated a statistically significant improvement across all ATTITUDES subscales, while time management, role confusion, collaboration, and mutual support emerged as significant themes. Results of the present study indicate that a single IPE simulation event can reproducibly result in significant and educationally meaningful improvements in student perceptions towards teamwork, IPE, and simulation as a learning modality. PMID:28970407

  12. Large-volume flux closure during plasmoid-mediated reconnection in coaxial helicity injection

    DOE Data Explorer

    Ebrahimi, Fatima [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States)] (ORCID:0000000331095367); Raman, Roger [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States)] (ORCID:0000000220273271)

    2016-01-01

    A large-volume flux closure during transient coaxial helicity injection (CHI) in NSTX-U is demonstrated through resistive magnetohydrodynamics (MHD) simulations. Several major improvements, including the improved positioning of the divertor poloidal field coils, are projected to improve the CHI start-up phase in NSTX-U. Simulations in the NSTX-U configuration with constant-in-time coil currents show that, with strong flux shaping, the injected open field lines (injector flux) rapidly reconnect and form a large volume of closed flux surfaces. This is achieved by driving parallel current in the injector flux coil and oppositely directed currents in the flux shaping coils to form a narrow injector flux footprint and push the injector flux into the vessel. As the helicity and plasma are injected into the device, the oppositely directed field lines in the injector region are forced to reconnect through a local Sweet–Parker type reconnection, or to spontaneously reconnect when the elongated current sheet becomes MHD unstable and forms plasmoids. In these simulations, it is found for the first time that the closed flux is over 70% of the initial injector flux used to initiate the discharge. These results could work well for the application of transient CHI in devices that employ superconducting coils to generate and sustain the plasma equilibrium.

  13. Large-volume flux closure during plasmoid-mediated reconnection in coaxial helicity injection

    DOE Data Explorer

    Ebrahimi, F. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Raman, R. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States)

    2016-04-01

    A large-volume flux closure during transient coaxial helicity injection (CHI) in NSTX-U is demonstrated through resistive magnetohydrodynamics (MHD) simulations. Several major improvements, including the improved positioning of the divertor poloidal field coils, are projected to improve the CHI start-up phase in NSTX-U. Simulations in the NSTX-U configuration with constant-in-time coil currents show that, with strong flux shaping, the injected open field lines (injector flux) rapidly reconnect and form a large volume of closed flux surfaces. This is achieved by driving parallel current in the injector flux coil and oppositely directed currents in the flux shaping coils to form a narrow injector flux footprint and push the injector flux into the vessel. As the helicity and plasma are injected into the device, the oppositely directed field lines in the injector region are forced to reconnect through a local Sweet–Parker type reconnection, or to spontaneously reconnect when the elongated current sheet becomes MHD unstable and forms plasmoids. In these simulations, it is found for the first time that the closed flux is over 70% of the initial injector flux used to initiate the discharge. These results could work well for the application of transient CHI in devices that employ superconducting coils to generate and sustain the plasma equilibrium.

  14. Electronic Business Transaction Infrastructure Analysis Using Petri Nets and Simulation

    ERIC Educational Resources Information Center

    Feller, Andrew Lee

    2010-01-01

    Rapid growth in eBusiness has made industry and commerce increasingly dependent on the hardware and software infrastructure that enables high-volume transaction processing across the Internet. Large transaction volumes at major industrial-firm data centers rely on robust transaction protocols and adequately provisioned hardware capacity to ensure…

  15. A Parallel, Finite-Volume Algorithm for Large-Eddy Simulation of Turbulent Flows

    NASA Technical Reports Server (NTRS)

    Bui, Trong T.

    1999-01-01

    A parallel, finite-volume algorithm has been developed for large-eddy simulation (LES) of compressible turbulent flows. This algorithm includes piecewise linear least-square reconstruction, trilinear finite-element interpolation, Roe flux-difference splitting, and second-order MacCormack time marching. Parallel implementation is done using the message-passing programming model. In this paper, the numerical algorithm is described. To validate the numerical method for turbulence simulation, LES of fully developed turbulent flow in a square duct is performed for a Reynolds number of 320 based on the average friction velocity and the hydraulic diameter of the duct. Direct numerical simulation (DNS) results are available for this test case, and the accuracy of this algorithm for turbulence simulations can be ascertained by comparing the LES solutions with the DNS results. The effects of grid resolution, upwind numerical dissipation, and subgrid-scale dissipation on the accuracy of the LES are examined. Comparison with DNS results shows that the standard Roe flux-difference splitting dissipation adversely affects the accuracy of the turbulence simulation. For accurate turbulence simulations, only 3-5 percent of the standard Roe flux-difference splitting dissipation is needed.
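
    The role of the dissipation scaling reported above can be seen in a scalar model problem. Below is a minimal sketch for linear advection f(u) = a*u, where `eps` scales the standard Roe (upwind) dissipation term: `eps = 1` recovers the full Roe flux, and `eps` around 0.03-0.05 corresponds to the reduced level the abstract finds adequate for LES. The function is illustrative, not the paper's code.

```python
def roe_flux_linear(uL, uR, a, eps=0.04):
    """Roe-type flux for linear advection f(u) = a*u, with the
    upwind dissipation term scaled by eps. eps=1 gives standard
    first-order upwinding; eps=0 gives a pure central flux."""
    central = 0.5 * a * (uL + uR)            # non-dissipative part
    dissipation = 0.5 * abs(a) * (uR - uL)   # standard Roe |A| * dU term
    return central - eps * dissipation
```

    The trade-off mirrors the abstract's finding: full dissipation damps resolved turbulent fluctuations, while a few percent of it is enough to keep the scheme stable without overwhelming the subgrid-scale model.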

  16. The Large-scale Structure of the Universe: Probes of Cosmology and Structure Formation

    NASA Astrophysics Data System (ADS)

    Noh, Yookyung

    The usefulness of large-scale structure as a probe of cosmology and structure formation is increasing as large, deep surveys in multiple wavelength bands become possible. Observational analysis of large-scale structure, guided by large-volume numerical simulations, is beginning to offer us complementary information and cross-checks of cosmological parameters estimated from the anisotropies in the Cosmic Microwave Background (CMB) radiation. Understanding structure formation and evolution, and even galaxy formation history, is also being aided by observations of different redshift snapshots of the Universe, using various tracers of large-scale structure. This dissertation work covers aspects of large-scale structure from the baryon acoustic oscillation scale to that of large-scale filaments and galaxy clusters. First, I discuss the use of large-scale structure for high-precision cosmology. I investigate the reconstruction of the Baryon Acoustic Oscillation (BAO) peak within the context of Lagrangian perturbation theory, testing its validity in a large suite of cosmological-volume N-body simulations. Then I consider galaxy clusters and the large-scale filaments surrounding them in a high-resolution N-body simulation. I investigate the geometrical properties of galaxy cluster neighborhoods, focusing on the filaments connected to clusters. Using mock observations of galaxy clusters, I explore the correlations of scatter in galaxy cluster mass estimates from multi-wavelength observations and different measurement techniques. I also examine the sources of the correlated scatter by considering the intrinsic and environmental properties of clusters.

  17. Physiologic volume of phosphorus during hemodialysis: predictions from a pseudo one-compartment model.

    PubMed

    Leypoldt, John K; Akonur, Alp; Agar, Baris U; Culleton, Bruce F

    2012-10-01

    The kinetics of plasma phosphorus concentrations during hemodialysis (HD) are complex and cannot be described by conventional one- or two-compartment kinetic models. It has recently been shown by others that the physiologic (or apparent distribution) volume for phosphorus (Vr-P) increases with increasing treatment time and shows a large variation among patients treated by thrice-weekly and daily HD. Here, we describe the dependence of Vr-P on treatment time and predialysis plasma phosphorus concentration as predicted by a novel pseudo one-compartment model. The kinetics of plasma phosphorus during conventional and six-times-per-week daily HD were simulated as a function of treatment time per session for various dialyzer phosphate clearances and patient-specific phosphorus mobilization clearances (K(M)). Vr-P normalized to extracellular volume from these simulations was reported and compared with previously published empirical findings. Simulated results were relatively independent of dialyzer phosphate clearance and treatment frequency. In contrast, Vr-P was strongly dependent on treatment time per session; the increase in Vr-P with treatment time was larger for higher values of K(M). Vr-P was inversely dependent on predialysis plasma phosphorus concentration. There was significant variation among predicted Vr-P values, depending largely on the value of K(M). We conclude that a pseudo one-compartment model can describe the empirical dependence of the physiologic volume of phosphorus on treatment time and predialysis plasma phosphorus concentration. Further, the variation in physiologic volume of phosphorus among HD patients is largely due to differences in patient-specific phosphorus mobilization clearance. © 2012 The Authors. Hemodialysis International © 2012 International Society for Hemodialysis.
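
    A pseudo one-compartment model of this type can be sketched with one plausible form (an assumption for illustration, not the authors' exact equations): phosphorus is mobilized toward the predialysis level C0 with clearance KM while the dialyzer removes it with clearance Kd, giving V dC/dt = KM*(C0 - C) - Kd*C.

```python
def simulate_phosphorus(C0, V, KM, Kd, t_end, dt=0.001):
    """Forward-Euler integration of an assumed pseudo one-compartment
    model: V dC/dt = KM*(C0 - C) - Kd*C. C0 is the predialysis
    concentration, V the plasma-side volume, KM the mobilization
    clearance, and Kd the dialyzer phosphate clearance."""
    C = C0
    for _ in range(int(round(t_end / dt))):
        C += dt * (KM * (C0 - C) - Kd * C) / V
    return C
```

    During a long session the concentration plateaus at KM*C0/(KM + Kd): a larger KM sustains plasma phosphorus against dialyzer removal, which is qualitatively why the apparent distribution volume grows with treatment time and with K(M).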

  18. Naked-eye 3D imaging employing a modified MIMO micro-ring conjugate mirrors

    NASA Astrophysics Data System (ADS)

    Youplao, P.; Pornsuwancharoen, N.; Amiri, I. S.; Thieu, V. N.; Yupapin, P.

    2018-03-01

    In this work, the use of a micro-conjugate mirror that can produce a 3D image from an incident probe and display it is proposed. By using the proposed system together with the concept of naked-eye 3D imaging, a pixel and a large volume pixel of a 3D image can be created and displayed for naked-eye perception, which is valuable for large-volume naked-eye 3D imaging applications. In operation, a naked-eye 3D image with a large pixel volume is constructed using the MIMO micro-ring conjugate mirror system. Thereafter, these 3D images, formed by the first micro-ring conjugate mirror system, can be transmitted through an optical link over a short distance and reconstructed via the recovery conjugate mirror at the other end of the transmission. The image transmission is performed via the Fourier integral in MATLAB and compared with the Opti-wave program results. The Fourier convolution is also included for large-volume image transmission. The simulation is used for the manipulation, where an array of micro-conjugate mirrors is designed and simulated for the MIMO system. Naked-eye 3D imaging is confirmed by the concept of the conjugate mirror for both the input and output images, in terms of four-wave mixing (FWM), which is discussed and interpreted.

  19. Forming Disk Galaxies Early in the Universe

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2015-08-01

    What were galaxies like in the first 500 million years of the universe? According to simulations by Yu Feng (UC Berkeley) and collaborators, the earliest massive galaxies to form were mostly disk-shaped, rather than the compact clumps previously predicted. Early-Galaxy Models. Current models for galaxy formation predict that small perturbations in the distribution of matter in the early universe collapsed to form very compact, irregular, clumpy first galaxies. Observations support this: the furthest out that we've spotted disk-shaped galaxies is at z=3, whereas the galaxies we've observed from earlier times -- up to redshifts of z=8-10 -- are very compact. But could this be a selection effect, arising from the rarity of large galaxies in the early universe? Current surveys at high redshift have thus far only covered relatively small volumes of space, so it's not necessarily surprising that we haven't yet spotted any large disk galaxies. Similarly, numerical simulations of galaxy formation are limited in the size of the volume they can evolve, so resulting models of early galaxy formation also tend to favor compact clumpy galaxies over large disks. An Enormous Simulation. Pushing at these limitations, Feng and his collaborators used the Blue Waters supercomputer to carry out an enormous cosmological hydrodynamic simulation called BlueTides. In this simulation, they track 700 billion particles as they evolve in a volume of 400 comoving Mpc/h -- 40 times the volume of the largest previous simulation and 300 times the volume of the largest observational survey at these redshifts. What they find is that by z=8, a whopping 70% of the most massive galaxies (over 7 billion solar masses each) were disk-shaped, though they are more compact, gas-rich, and turbulent than present-day disk galaxies like the Milky Way. 
The way the most massive galaxies formed in the simulation also wasn't expected: rather than resulting from major mergers, they were built from smooth accretion onto the disks from nearby filaments. These simulations suggest we still have a lot to learn about the structure of galaxies in the early universe and how they formed. Luckily, future telescope projects should help us out: Feng and collaborators estimate that the WFIRST satellite, for instance, should have the capability to detect 8000 disk galaxies of the type BlueTides predicts -- compared to the weak 30% chance of finding a single one in the current largest-area Hubble survey!

  20. An analytical method to simulate the H I 21-cm visibility signal for intensity mapping experiments

    NASA Astrophysics Data System (ADS)

    Sarkar, Anjan Kumar; Bharadwaj, Somnath; Marthi, Visweshwar Ram

    2018-01-01

    Simulations play a vital role in testing and validating H I 21-cm power spectrum estimation techniques. Conventional methods use techniques like N-body simulations to simulate the sky signal which is then passed through a model of the instrument. This makes it necessary to simulate the H I distribution in a large cosmological volume, and incorporate both the light-cone effect and the telescope's chromatic response. The computational requirements may be particularly large if one wishes to simulate many realizations of the signal. In this paper, we present an analytical method to simulate the H I visibility signal. This is particularly efficient if one wishes to simulate a large number of realizations of the signal. Our method is based on theoretical predictions of the visibility correlation which incorporate both the light-cone effect and the telescope's chromatic response. We have demonstrated this method by applying it to simulate the H I visibility signal for the upcoming Ooty Wide Field Array Phase I.
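
    The core of such an analytical method can be sketched as drawing correlated complex Gaussian realizations directly from a model visibility covariance matrix. This is a simplified sketch: the paper's covariance incorporates the light-cone effect and the telescope's chromatic response, neither of which is modeled here.

```python
import numpy as np

def draw_visibilities(cov, n_real, rng=None):
    """Draw complex Gaussian realizations of a visibility signal from
    a model visibility covariance matrix via Cholesky factorization.
    Returns an (n_baselines, n_real) array; each column is one
    independent realization with the prescribed two-point
    correlation."""
    rng = np.random.default_rng(rng)
    L = np.linalg.cholesky(cov)
    n = cov.shape[0]
    # real and imaginary parts each carry half the variance
    z = (rng.standard_normal((n, n_real))
         + 1j * rng.standard_normal((n, n_real))) / np.sqrt(2.0)
    return L @ z
```

    Because each extra realization is just another matrix-vector product, generating thousands of signal realizations is cheap once the covariance is built, which is the efficiency advantage the abstract describes over repeated N-body simulations.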

  1. A Large number of fast cosmological simulations

    NASA Astrophysics Data System (ADS)

    Koda, Jun; Kazin, E.; Blake, C.

    2014-01-01

    Mock galaxy catalogs are essential tools for analyzing large-scale structure data. Many independent realizations of mock catalogs are necessary to evaluate the uncertainties in the measurements. We perform 3600 cosmological simulations for the WiggleZ Dark Energy Survey to obtain new, improved Baryon Acoustic Oscillation (BAO) cosmic distance measurements using the density-field "reconstruction" technique. We use 1296^3 particles in a periodic box of 600/h Mpc on a side, which is the minimum requirement given the survey volume and observed galaxies. In order to perform such a large number of simulations, we developed a parallel code using the COmoving Lagrangian Acceleration (COLA) method, which can simulate cosmological large-scale structure reasonably well with only 10 time steps. Our simulation is more than 100 times faster than conventional N-body simulations; one COLA simulation takes only 15 minutes on 216 computing cores. We have completed the 3600 simulations with a reasonable computation time of 200k core-hours. We also present the results of the revised WiggleZ BAO distance measurement, which are significantly improved by the reconstruction technique.

  2. Changes in Seasonal and Extreme Hydrologic Conditions of the Georgia Basin/Puget Sound in an Ensemble Regional Climate Simulation for the Mid-Century

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leung, Lai R.; Qian, Yun

    This study examines an ensemble of climate change projections simulated by a global climate model (GCM) and downscaled with a regional climate model (RCM) to 40 km spatial resolution for western North America. One control and three ensemble future climate simulations were produced by the GCM following a business-as-usual scenario for greenhouse gas and aerosol emissions from 1995 to 2100. The RCM was used to downscale the GCM control simulation (1995-2015) and each ensemble future GCM climate simulation (2040-2060). Analyses of the regional climate simulations for the Georgia Basin/Puget Sound showed a warming of 1.5-2°C and statistically insignificant changes in precipitation by the mid-century. Climate change has large impacts on snowpack (about a 50% reduction) but relatively smaller impacts on the total runoff for the basin as a whole. However, climate change can strongly affect small watersheds such as those located in the transient snow zone, causing a higher likelihood of winter flooding as a higher percentage of precipitation falls as rain rather than snow, and reduced streamflow in early summer. In addition, there are large changes in the monthly total runoff above the upper 1% threshold (or flood volume) from October through May, and the December flood volume of the future climate is 60% above the maximum monthly flood volume of the control climate. Uncertainty of the climate change projections, as characterized by the spread among the ensemble future climate simulations, is relatively small for the basin-mean snowpack and runoff, but increases in smaller watersheds, especially in the transient snow zone, and in association with extreme events. This emphasizes the importance of characterizing uncertainty through ensemble simulations.
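
    The "flood volume" statistic used above (runoff above the upper 1% threshold) can be computed directly from a runoff series. The exact threshold convention (here, a percentile of the supplied series with linear interpolation) is an assumption for illustration.

```python
import numpy as np

def flood_volume(runoff, pct=99.0):
    """Total runoff above the upper (100 - pct)% threshold of the
    series, plus the threshold itself. In the study's usage the
    threshold would come from the control climate and be applied
    to the future series."""
    thresh = np.percentile(runoff, pct)
    excess = runoff - thresh
    return float(excess[excess > 0].sum()), float(thresh)
```

    Holding the threshold fixed at the control-climate value while summing exceedances in the future series is what lets a statement like "December flood volume is 60% above the control maximum" be made on a consistent baseline.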

  3. Correlating Free-Volume Hole Distribution to the Glass Transition Temperature of Epoxy Polymers.

    PubMed

    Aramoon, Amin; Breitzman, Timothy D; Woodward, Christopher; El-Awady, Jaafar A

    2017-09-07

    A new algorithm is developed to quantify the free-volume hole distribution and its evolution in coarse-grained molecular dynamics simulations of polymeric networks. This is achieved by analyzing the geometry of the network rather than a voxelized image of the structure, to accurately and efficiently find and quantify free-volume hole distributions within large-scale simulations of polymer networks. The free-volume holes are quantified by fitting the largest ellipsoids and spheres into the free volumes between polymer chains. The free-volume hole distributions calculated from this algorithm are shown to be in excellent agreement with those measured from positron annihilation lifetime spectroscopy (PALS) experiments at different temperatures and pressures. Based on the results predicted using this algorithm, an evolution model is proposed for the thermal behavior of an individual free-volume hole. This model is calibrated such that the average radius of free-volume holes mimics the one predicted from the simulations. The model is then employed to predict the glass-transition temperature of epoxy polymers with different degrees of cross-linking and lengths of prepolymers. Comparison between the predicted glass-transition temperatures and those measured from simulations or experiments implies that this model is capable of successfully predicting the glass-transition temperature of the material using only a PDF of the initial free-volume hole radii of each microstructure. This provides an effective approach for the optimized design of polymeric systems on the basis of the glass-transition temperature, degree of cross-linking, and average length of prepolymers.
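
    The hole-sizing step can be sketched as a largest-empty-sphere search: the free radius at a candidate point is its distance to the nearest bead surface. This is a brute-force sketch under assumed names; the actual algorithm analyzes the network geometry directly and also fits ellipsoids, which is omitted here.

```python
import numpy as np

def largest_free_sphere(beads, bead_radius, candidates):
    """Among candidate centers, find the largest sphere that touches
    no bead: free radius = distance to nearest bead center minus the
    bead radius. beads and candidates are (N, 3) and (M, 3) arrays."""
    d = np.linalg.norm(candidates[:, None, :] - beads[None, :, :], axis=2)
    free_r = d.min(axis=1) - bead_radius   # per-candidate free radius
    i = int(np.argmax(free_r))
    return candidates[i], float(free_r[i])
```

    Collecting the free radii over many candidates (rather than just the maximum) yields the hole-radius distribution that the abstract compares against PALS measurements.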

  4. Large-volume flux closure during plasmoid-mediated reconnection in coaxial helicity injection

    DOE PAGES

    Ebrahimi, F.; Raman, R.

    2016-03-23

    A large-volume flux closure during transient coaxial helicity injection (CHI) in NSTX-U is demonstrated through resistive magnetohydrodynamics (MHD) simulations. Several major improvements, including the improved positioning of the divertor poloidal field coils, are projected to improve the CHI start-up phase in NSTX-U. Simulations in the NSTX-U configuration with constant-in-time coil currents show that, with strong flux shaping, the injected open field lines (injector flux) rapidly reconnect and form a large volume of closed flux surfaces. This is achieved by driving parallel current in the injector flux coil and oppositely directed currents in the flux shaping coils to form a narrow injector flux footprint and push the injector flux into the vessel. As the helicity and plasma are injected into the device, the oppositely directed field lines in the injector region are forced to reconnect through a local Sweet-Parker type reconnection, or to spontaneously reconnect when the elongated current sheet becomes MHD unstable and forms plasmoids. In these simulations, it is found for the first time that the closed flux is over 70% of the initial injector flux used to initiate the discharge. Furthermore, these results could work well for the application of transient CHI in devices that employ superconducting coils to generate and sustain the plasma equilibrium.

  5. Strategies for Interactive Visualization of Large Scale Climate Simulations

    NASA Astrophysics Data System (ADS)

    Xie, J.; Chen, C.; Ma, K.; Parvis

    2011-12-01

    With the advances in computational methods and supercomputing technology, climate scientists are able to perform large-scale simulations at unprecedented resolutions. These simulations produce data that are time-varying, multivariate, and volumetric, and the data may contain thousands of time steps with each time step having billions of voxels and each voxel recording dozens of variables. Visualizing such time-varying 3D data to examine correlations between different variables thus becomes a daunting task. We have been developing strategies for interactive visualization and correlation analysis of multivariate data. The primary task is to find connection and correlation among data. Given the many complex interactions among the Earth's oceans, atmosphere, land, ice and biogeochemistry, and the sheer size of observational and climate model data sets, interactive exploration helps identify which processes matter most for a particular climate phenomenon. We may consider time-varying data as a set of samples (e.g., voxels or blocks), each of which is associated with a vector of representative or collective values over time. We refer to such a vector as a temporal curve. Correlation analysis thus operates on temporal curves of data samples. A temporal curve can be treated as a two-dimensional function where the two dimensions are time and data value. It can also be treated as a point in the high-dimensional space. In this case, to facilitate effective analysis, it is often necessary to transform temporal curve data from the original space to a space of lower dimensionality. Clustering and segmentation of temporal curve data in the original or transformed space provides us a way to categorize and visualize data of different patterns, which reveals connection or correlation of data among different variables or at different spatial locations. 
We have employed the power of the GPU to enable interactive correlation visualization for studying the variability and correlations of a single variable or a pair of variables. It is desirable to create a succinct volume classification that summarizes the connection among all correlation volumes with respect to various reference locations. Because a reference location must correspond to a voxel position, the number of correlation volumes equals the total number of voxels. A brute-force solution takes all correlation volumes as the input and classifies their corresponding voxels according to the distance between their correlation volumes. For large-scale time-varying multivariate data, calculating all these correlation volumes on-the-fly and analyzing the relationships among them is not feasible. We have developed a sampling-based approach to volume classification in order to reduce the cost of computing the correlation volumes. Users are able to employ their domain knowledge in selecting important samples. The result is a static view that captures the essence of correlation relationships; i.e., for all voxels in the same cluster, their corresponding correlation volumes are similar. This sampling-based approach enables us to obtain an approximation of correlation relations in a cost-effective manner, thus leading to a scalable solution for investigating large-scale data sets. These techniques empower climate scientists to study large data from their simulations.
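The notion of a correlation volume can be sketched as the Pearson correlation between the temporal curve of a reference voxel and that of every other voxel. A minimal version follows; the function name and data layout are illustrative assumptions, not the authors' code.

```python
import numpy as np

def correlation_volume(curves, ref_index):
    """curves: (T, N) array, one temporal curve per voxel (T time steps).
    Returns the Pearson correlation of every voxel's curve with the
    reference voxel's curve -- one 'correlation volume' per reference."""
    x = curves - curves.mean(axis=0)          # remove each curve's temporal mean
    x = x / np.linalg.norm(x, axis=0)         # normalise each curve
    return x.T @ x[:, ref_index]              # inner products = correlations

rng = np.random.default_rng(0)
curves = rng.normal(size=(50, 4))             # 50 time steps, 4 voxels
cv = correlation_volume(curves, ref_index=0)  # cv[0] is the self-correlation
```

The sampling-based classification described above would compute `cv` only for a set of user-selected reference voxels and then cluster voxels by the similarity of these vectors.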

  6. MD modeling of screw dislocation influence upon initiation and mechanism of BCC-HCP polymorphous transition in iron

    NASA Astrophysics Data System (ADS)

    Dremov, V. V.; Ionov, G. V.; Sapozhnikov, F. A.; Smirnov, N. A.; Karavaev, A. V.; Vorobyova, M. A.; Ryzhkov, M. V.

    2015-09-01

    The present work is devoted to a classical molecular dynamics investigation into the microscopic mechanisms of the bcc-hcp transition in iron. The interatomic potential of EAM type used in the calculations was tested for its capability to reproduce ab initio data on the energy evolution along the bcc-hcp transformation path (Burgers deformation + shuffle) and then used in large-scale MD simulations. The large-scale simulations included constant-volume deformation along the Burgers path to study the origin and nature of the plasticity, hydrostatic volume compression of defect-free samples above the bcc to hcp transition threshold to observe the formation of new-phase embryos, and volume compression of samples containing screw dislocations to study the effect of the dislocations on the probability of critical embryo formation. The volume compression demonstrated a high level of metastability: the transition starts at a pressure much higher than the equilibrium one. Dislocations strongly affect the probability of critical embryo formation and significantly reduce the onset pressure of the transition. The dislocations also affect the resulting structure of the samples upon the transition; the formation of a layered structure is typical for samples containing dislocations. The results of the simulations were compared with in-situ experimental data on the mechanism of the bcc-hcp transition in iron.

  7. Infusion System Architecture Impacts the Ability of Intensive Care Nurses to Maintain Hemodynamic Stability in a Living Swine Simulator.

    PubMed

    Pezone, Matthew J; Peterfreund, Robert A; Maslov, Mikhail Y; Govindaswamy, Radhika R; Lovich, Mark A

    2016-05-01

    The authors have previously shown that drug infusion systems with large common volumes exhibit long delays in reaching steady-state drug delivery and pharmacodynamic effects compared with smaller common-volume systems. The authors hypothesized that such delays can impede the pharmacologic restoration of hemodynamic stability. The authors created a living swine simulator of hemodynamic instability in which occlusion balloons in the aorta and inferior vena cava (IVC) were used to manipulate blood pressure. Experienced intensive care unit nurses blinded to the use of small or large common-volume infusion systems were instructed to maintain mean arterial blood pressure between 70 and 90 mmHg using only sodium nitroprusside and norepinephrine infusions. Four conditions (IVC or aortic occlusions and small or large common volume) were tested 12 times in eight animals. After aortic occlusion, the time to restore mean arterial pressure to range (t1: 2.4 ± 1.4 vs. 5.0 ± 2.3 min, P = 0.003, average ± SD), time-out-of-range (tOR: 6.2 ± 3.5 vs. 9.5 ± 3.4 min, P = 0.028), and area-out-of-range (pressure-time integral: 84 ± 47 vs. 170 ± 100 mmHg · min, P = 0.018) were all lower with smaller common volumes. After IVC occlusion, t1 (3.7 ± 2.2 vs. 7.1 ± 2.6 min, P = 0.002), tOR (6.3 ± 3.5 vs. 11 ± 3.0 min, P = 0.007), and area-out-of-range (110 ± 93 vs. 270 ± 140 mmHg · min, P = 0.003) were all lower with smaller common volumes. Common-volume size did not impact the total amount infused of either drug. Nurses did not respond as effectively to hemodynamic instability when drugs flowed through large common-volume infusion systems. These findings suggest that drug infusion system common volume may have clinical impact, should be minimized to the greatest extent possible, and warrants clinical investigations.

  8. Large Eddy Simulation of Air Escape through a Hospital Isolation Room Single Hinged Doorway—Validation by Using Tracer Gases and Simulated Smoke Videos

    PubMed Central

    Saarinen, Pekka E.; Kalliomäki, Petri; Tang, Julian W.; Koskela, Hannu

    2015-01-01

    The use of hospital isolation rooms has increased considerably in recent years due to the worldwide outbreaks of various emerging infectious diseases. However, the passage of staff through isolation room doors is suspected to be a cause of containment failure, especially in case of hinged doors. It is therefore important to minimize inadvertent contaminant airflow leakage across the doorway during such movements. To this end, it is essential to investigate the behavior of such airflows, especially the overall volume of air that can potentially leak across the doorway during door-opening and human passage. Experimental measurements using full-scale mock-ups are expensive and labour intensive. A useful alternative approach is the application of Computational Fluid Dynamics (CFD) modelling using a time-resolved Large Eddy Simulation (LES) method. In this study simulated air flow patterns are qualitatively compared with experimental ones, and the simulated total volume of air that escapes is compared with the experimentally measured volume. It is shown that the LES method is able to reproduce, at room scale, the complex transient airflows generated during door-opening/closing motions and the passage of a human figure through the doorway between two rooms. This was a basic test case that was performed in an isothermal environment without ventilation. However, the advantage of the CFD approach is that the addition of ventilation airflows and a temperature difference between the rooms is, in principle, a relatively simple task. A standard method to observe flow structures is dosing smoke into the flow. In this paper we introduce graphical methods to simulate smoke experiments by LES, making it very easy to compare the CFD simulation to the experiments. The results demonstrate that the transient CFD simulation is a promising tool to compare different isolation room scenarios without the need to construct full-scale experimental models. 
The CFD model is able to reproduce the complex airflows and estimate the volume of air escaping as a function of time. In this test, the calculated migrated air volume in the CFD model differed by 20% from the experimental tracer gas measurements. In the case containing only a hinged door operation, without passage, the difference was only 10%. PMID:26151865

  9. An open-loop controlled active lung simulator for preterm infants.

    PubMed

    Cecchini, Stefano; Schena, Emiliano; Silvestri, Sergio

    2011-01-01

    We describe the underlying theory, design and experimental evaluation of an electromechanical analogue infant lung to simulate spontaneous breathing patterns of preterm infants. The aim of this work is to test the possibility of reproducing the breathing patterns of preterm infants while taking air compressibility into consideration. The respiratory volume function represents the actuation pattern, and the pulmonary pressure and flow-rate waveforms are obtained mathematically through the application of the perfect gas and adiabatic laws. The mathematical model divides the simulation interval into steps shorter than 1 ms, allowing an entire respiratory act to be treated as a large number of almost instantaneous adiabatic transformations. The device consists of a spherical chamber in which the air is compressed by four cylinder-pistons, moved by stepper motors; the air flows through a fluid-dynamic resistance, which also works as a flow-rate sensor. Specifically designed software generates the actuator motion, based on the desired ventilation parameters, without closed-loop control of the gas pneumatic parameters. The system is able to simulate tidal volumes from 3 to 8 ml, breathing frequencies from 60 to 120 bpm and functional residual capacities from 25 to 80 ml. The simulated waveforms appear very close to the measured ones. Percentage differences on the tidal volume waveform vary from 7% for a tidal volume of 3 ml, down to 2.2-3.5% for tidal volumes in the range of 4-7 ml, and 1.3% for a tidal volume of 8 ml, over the whole breathing frequency and functional residual capacity ranges. The open-loop electromechanical simulator shows that gas compressibility can be theoretically assessed in the typical pneumatic variable range of preterm infant respiratory mechanics. Copyright © 2010 IPEM. Published by Elsevier Ltd. All rights reserved.
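The chained-adiabatic idea, treating each sub-millisecond step as a near-instantaneous adiabatic transformation with P·V^γ constant, can be sketched as follows. The function and values are illustrative, not the device's actual control model.

```python
import numpy as np

GAMMA = 1.4  # adiabatic index of air

def adiabatic_pressure(volumes, p0):
    """Chamber pressure after a chain of near-instantaneous adiabatic
    steps, each obeying P * V**GAMMA = const."""
    p = np.empty_like(volumes)
    p[0] = p0
    for n in range(1, len(volumes)):
        p[n] = p[n - 1] * (volumes[n - 1] / volumes[n]) ** GAMMA
    return p

# Compressing the chamber from 50 ml to 45 ml raises the pressure.
v = np.linspace(50e-6, 45e-6, 100)        # chamber volume, m^3
p = adiabatic_pressure(v, p0=101_325.0)   # Pa
```

The flow rate through the fluid-dynamic resistance would then follow from the pressure drop across it, e.g. Q = (P − P_atm)/R for a linear resistance.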

  10. Plasma flame for mass purification of contaminated air with chemical and biological warfare agents

    NASA Astrophysics Data System (ADS)

    Uhm, Han S.; Shin, Dong H.; Hong, Yong C.

    2006-09-01

    An elimination of airborne simulated chemical and biological warfare agents was carried out by making use of a plasma flame composed of an atmospheric plasma and a fuel-burning flame, which can purify the interior air of large volumes in isolated spaces such as buildings, public transportation systems, and military vehicles. The plasma flame generator consists of a microwave plasma torch connected in series to a fuel injector and a reaction chamber. For example, a reaction chamber with dimensions of 22 cm diameter and 30 cm length purifies an airflow rate of 5000 lpm contaminated with toluene (the simulated chemical agent) and soot from a diesel engine (the simulated aerosol for biological agents). Large-volume purification by the plasma flame will free mankind from the threat of airborne warfare agents. The plasma flame may also effectively purify air that is contaminated with volatile organic compounds, in addition to eliminating soot from diesel engines as an environmental application.

  11. Response function and linearity for high energy γ-rays in large volume LaBr3:Ce detectors

    NASA Astrophysics Data System (ADS)

    Gosta, G.; Blasi, N.; Camera, F.; Million, B.; Giaz, A.; Wieland, O.; Rossi, F. M.; Utsunomiya, H.; Ari-izumi, T.; Takenaka, D.; Filipescu, D.; Gheorghe, I.

    2018-01-01

    The response function to high energy γ-rays of two large volume LaBr3:Ce crystals (3.5"x8") and the linearity of the coupled PMTs were investigated at the NewSUBARU facility, where γ-rays in the energy range 6-38 MeV were produced and sent into the detectors. Monte Carlo simulations were performed to reproduce the experimental spectra. The photopeak and interaction efficiencies were also evaluated for both a collimated beam and an isotropic source.

  12. Modelling the impact of retention-detention units on sewer surcharge and peak and annual runoff reduction.

    PubMed

    Locatelli, Luca; Gabriel, Søren; Mark, Ole; Mikkelsen, Peter Steen; Arnbjerg-Nielsen, Karsten; Taylor, Heidi; Bockhorn, Britta; Larsen, Hauge; Kjølby, Morten Just; Blicher, Anne Steensen; Binning, Philip John

    2015-01-01

    Stormwater management using water sensitive urban design is expected to be part of future drainage systems. This paper aims to model the combination of local retention units, such as soakaways, with subsurface detention units. Soakaways are employed to reduce (by storage and infiltration) peak and volume stormwater runoff; however, large retention volumes are required for a significant peak reduction. Peak runoff can therefore be handled by combining detention units with soakaways. This paper models the impact of retrofitting retention-detention units for an existing urbanized catchment in Denmark. The impact of retrofitting a retention-detention unit of 3.3 m³/100 m² (volume/impervious area) was simulated for a small catchment in Copenhagen using MIKE URBAN. The retention-detention unit was shown to prevent flooding from the sewer for a 10-year rainfall event. Statistical analysis of continuous simulations covering 22 years showed that annual stormwater runoff was reduced by 68-87%, and that the retention volume was on average 53% full at the beginning of rain events. The effect of different retention-detention volume combinations was simulated, and results showed that allocating 20-40% of a soakaway volume to detention would significantly increase peak runoff reduction with a small reduction in the annual runoff.
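The retention part of such a unit behaves like a bucket that fills with runoff, empties by infiltration, and spills once full. A deliberately simple per-time-step sketch (not the MIKE URBAN model; the function name, units and rates are assumptions):

```python
def simulate_retention(rain_mm, volume_mm, infil_mm_per_step):
    """Bucket model of a retention unit: rain fills the storage, a constant
    infiltration rate empties it, and any excess becomes overflow
    (which a downstream detention volume would throttle)."""
    storage, overflow = 0.0, 0.0
    for r in rain_mm:
        storage = max(storage + r - infil_mm_per_step, 0.0)
        if storage > volume_mm:
            overflow += storage - volume_mm
            storage = volume_mm
    return storage, overflow

# 5 time steps of rain depth (mm) over the connected impervious area
final, spilled = simulate_retention([10, 0, 0, 30, 0],
                                    volume_mm=20, infil_mm_per_step=2)
```

Running long rain series through such a model is what yields statistics like the average filling degree at the start of rain events reported above.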

  13. Modelling lidar volume-averaging and its significance to wind turbine wake measurements

    NASA Astrophysics Data System (ADS)

    Meyer Forsting, A. R.; Troldborg, N.; Borraccino, A.

    2017-05-01

    Lidar velocity measurements need to be interpreted differently than conventional in-situ readings. A commonly ignored factor is “volume-averaging”, which refers to the fact that lidars do not sample a single, distinct point but average along the entire beam length. Especially in regions with large velocity gradients, like the rotor wake, this can be detrimental. Hence, an efficient algorithm mimicking lidar flow sampling is presented, which considers both pulsed and continuous-wave lidar weighting functions. The flow field around a 2.3 MW turbine is simulated using Detached Eddy Simulation in combination with an actuator line to test the algorithm and investigate the potential impact of volume-averaging. Even with very few points discretising the lidar beam, volume-averaging is captured accurately. The difference between a lidar and a point measurement is greatest at the wake edges and increases from 30% one rotor diameter (D) downstream of the rotor to 60% at 3D.
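For a continuous-wave lidar, the sampled velocity is commonly modelled as a Lorentzian-weighted average of the line-of-sight velocity along the beam, centred on the focus distance. A minimal sketch under that assumption (grid, Rayleigh length and velocity profile are illustrative):

```python
import numpy as np

def lidar_los_velocity(u_los, s, focus, rayleigh_length):
    """Weighted average of the line-of-sight velocity u_los sampled at
    positions s along the beam, using a Lorentzian weighting function
    centred on the focus distance."""
    w = rayleigh_length / (np.pi * (rayleigh_length**2 + (s - focus) ** 2))
    w = w / w.sum()                     # normalise the discrete weights
    return float(np.sum(w * u_los))

s = np.linspace(-50.0, 50.0, 2001)      # m along the beam, focus at s = 0
u = 8.0 + 0.05 * s                      # linear shear in the LOS velocity
u_lidar = lidar_los_velocity(u, s, focus=0.0, rayleigh_length=5.0)
```

With a symmetric weighting and a linear velocity profile the averaged value equals the point value at the focus; differences appear only for curved profiles such as a wake deficit, which is why the wake edges show the largest deviations.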

  14. Effects of cross-linking on partitioning of nanoparticles into a polymer brush: Coarse-grained simulations test simple approximate theories

    NASA Astrophysics Data System (ADS)

    Ozmaian, Masoumeh; Jasnow, David; Eskandari Nasrabad, Afshin; Zilman, Anton; Coalson, Rob D.

    2018-01-01

    The effect of cohesive contacts or, equivalently, dynamical cross-linking on the equilibrium morphology of a polymer brush infiltrated by nanoparticles that are attracted to the polymer strands is studied for plane-grafted brushes using coarse-grained molecular dynamics and approximate statistical mechanical models. In particular, the Alexander-de Gennes (AdG) and Strong Stretching Theory (SST) mean-field theory (MFT) models are considered. It is found that for values of the MFT cross-link strength interaction parameter beyond a certain threshold, both AdG and SST models predict that the polymer brush will be in a compact state of nearly uniform density packed next to the grafting surface over a wide range of solution phase nanoparticle concentrations. Coarse-grained molecular dynamics simulations confirm this prediction, for both small nanoparticles (nanoparticle volume = monomer volume) and large nanoparticles (nanoparticle volume = 27 × monomer volume). Simulation results for these cross-linked systems are compared with analogous results for systems with no cross-linking. At the same solution phase nanoparticle concentration, strong cross-linking results in additional compression of the brush relative to the non-crosslinked analog and, at all but the lowest concentrations, to a lesser degree of infiltration by nanoparticles. For large nanoparticles, the monomer density profiles show clear oscillations moving outwards from the grafting surface, corresponding to a degree of layering of the absorbed nanoparticles in the brush as they pack against the grafting surface.

  15. Melt Electrospinning Writing of Highly Ordered Large Volume Scaffold Architectures.

    PubMed

    Wunner, Felix M; Wille, Marie-Luise; Noonan, Thomas G; Bas, Onur; Dalton, Paul D; De-Juan-Pardo, Elena M; Hutmacher, Dietmar W

    2018-05-01

    The additive manufacturing of highly ordered, micrometer-scale scaffolds is at the forefront of tissue engineering and regenerative medicine research. The fabrication of scaffolds for the regeneration of larger tissue volumes, in particular, remains a major challenge. A technology at the convergence of additive manufacturing and electrospinning, melt electrospinning writing (MEW), is also limited in thickness/volume because excess charge accumulating in the deposited material repels, and hence distorts, the scaffold architecture. The underlying physical principles that constrain MEW of thick, large volume scaffolds are studied. Through computational modeling, numerical values for variable working distances are established that maintain the electrostatic force at a constant level during the printing process. Based on the computational simulations, three voltage profiles are applied to determine the maximum height (exceeding 7 mm) of a highly ordered large volume scaffold. These thick MEW scaffolds have fully interconnected pores and allow cells to migrate and proliferate. To the best of the authors' knowledge, this is the first study to report that z-axis adjustment and increasing the voltage during the MEW process allow for the fabrication of high-volume scaffolds with uniform morphologies and fiber diameters. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
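One way to read the variable-working-distance result: if the gap between spinneret and collector effectively grows with the scaffold height, the applied voltage must grow proportionally to keep the nominal field V/d, and hence the electrostatic force, roughly constant. A simplified uniform-field sketch (the function and all values are illustrative, not the paper's voltage profiles):

```python
def voltage_for_constant_field(base_voltage, base_gap_mm, scaffold_height_mm):
    """Scale the applied voltage with the working distance so the nominal
    field V/d stays constant as the printed scaffold grows in z."""
    field = base_voltage / base_gap_mm          # V per mm, held constant
    return field * (base_gap_mm + scaffold_height_mm)

# After printing 4 mm of scaffold with an initial 3 mm gap at 7 kV:
v_new = voltage_for_constant_field(7000.0, base_gap_mm=3.0,
                                   scaffold_height_mm=4.0)
```

A real MEW system would combine such voltage scaling with the z-axis adjustment described above rather than rely on either alone.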

  16. Simulating Cosmic Reionization and Its Observable Consequences

    NASA Astrophysics Data System (ADS)

    Shapiro, Paul

    2017-01-01

    I summarize recent progress in modelling the epoch of reionization by large-scale simulations of cosmic structure formation, radiative transfer and their interplay, which trace the ionization fronts that swept across the IGM, to predict observable signatures. Reionization by starlight from early galaxies affected their evolution, impacting reionization itself and imprinting the galaxies with a memory of reionization. Star formation suppression, e.g., may explain the observed underabundance of Local Group dwarfs relative to N-body predictions for Cold Dark Matter. I describe CoDa (''Cosmic Dawn''), the first fully-coupled radiation-hydrodynamical simulation of reionization and galaxy formation in the Local Universe, in a volume large enough to model reionization globally but with enough resolving power to follow all the atomic-cooling galactic halos in that volume. A 90 Mpc box was simulated from a constrained realization of primordial fluctuations, chosen to reproduce present-day features of the Local Group, including the Milky Way and M31, and the local universe beyond, including the Virgo cluster. The new RAMSES-CUDATON hybrid CPU-GPU code took 11 days to perform this simulation on the Titan supercomputer at Oak Ridge National Laboratory, with 4096-cubed N-body particles for the dark matter and 4096-cubed cells for the atomic gas and ionizing radiation.

  17. LLE review. Quarterly report, January 1994--March 1994, Volume 58

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simon, A.

    1994-07-01

    This volume of the LLE Review, covering the period Jan - Mar 1994, contains articles on backlighting diagnostics; the effect of electron collisions on ion-acoustic waves and heat flow; using PIC code simulations for analysis of ultrashort laser pulses interacting with solid targets; creating a new instrument for characterizing thick cryogenic layers; and a description of a large-aperture ring amplifier for laser-fusion drivers. Three of these articles - backlighting diagnostics; characterizing thick cryogenic layers; and large-aperture ring amplifier - are directly related to the OMEGA Upgrade, now under construction. Separate abstracts have been prepared for articles from this report.

  18. Characteristics of the mixing volume model with the interactions among spatially distributed particles for Lagrangian simulations of turbulent mixing

    NASA Astrophysics Data System (ADS)

    Watanabe, Tomoaki; Nagata, Koji

    2016-11-01

    The mixing volume model (MVM), which is a mixing model for molecular diffusion in Lagrangian simulations of turbulent mixing problems, is proposed based on the interactions among spatially distributed particles in a finite volume. The mixing timescale in the MVM is derived by comparison between the model and the subgrid scale scalar variance equation. A-priori test of the MVM is conducted based on the direct numerical simulations of planar jets. The MVM is shown to predict well the mean effects of the molecular diffusion under various conditions. However, a predicted value of the molecular diffusion term is positively correlated to the exact value in the DNS only when the number of the mixing particles is larger than two. Furthermore, the MVM is tested in the hybrid implicit large-eddy-simulation/Lagrangian-particle-simulation (ILES/LPS). The ILES/LPS with the present mixing model predicts well the decay of the scalar variance in planar jets. This work was supported by JSPS KAKENHI Nos. 25289030 and 16K18013. The numerical simulations presented in this manuscript were carried out on the high performance computing system (NEC SX-ACE) in the Japan Agency for Marine-Earth Science and Technology.
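The mean effect of such a multi-particle mixing model can be sketched as an IEM-style relaxation of every particle inside one mixing volume toward their common mean over the mixing timescale. This is a schematic analogue of interaction among N ≥ 2 particles, not the authors' MVM formulation; names and values are assumptions.

```python
import numpy as np

def mix_particles(phi, tau_m, dt):
    """Relax the scalars of the N particles inside one mixing volume
    toward their common mean over the mixing timescale tau_m.
    The mean is conserved; the in-volume scalar variance decays."""
    mean = phi.mean()
    return mean + (phi - mean) * np.exp(-dt / tau_m)

phi = np.array([0.0, 1.0, 0.4])          # scalar values of 3 particles
mixed = mix_particles(phi, tau_m=0.1, dt=0.05)
```

With only two particles the in-volume mean is a poor estimate of the local filtered scalar, which is consistent with the finding above that the modelled diffusion term correlates with the exact one only for more than two mixing particles.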

  19. Extraction of diffuse correlation spectroscopy flow index by integration of Nth-order linear model with Monte Carlo simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shang, Yu; Lin, Yu; Yu, Guoqiang, E-mail: guoqiang.yu@uky.edu

    2014-05-12

    The conventional semi-infinite solution for extracting the blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements may cause errors in the estimation of BFI (αD_B) in tissues with small volume and large curvature. We proposed an algorithm integrating an Nth-order linear model of the autocorrelation function with a Monte Carlo simulation of photon migration in tissue for the extraction of αD_B. The volume and geometry of the measured tissue were incorporated in the Monte Carlo simulation, which overcomes the semi-infinite restrictions. The algorithm was tested using computer simulations on four tissue models with varied volumes/geometries and applied on an in vivo stroke model of mouse. Computer simulations show that the high-order (N ≥ 5) linear algorithm was more accurate in extracting αD_B (errors < ±2%) from the noise-free DCS data than the semi-infinite solution (errors: −5.3% to −18.0%) for the different tissue models. Although adding random noise to the DCS data resulted in αD_B variations, the mean errors in extracting αD_B were similar to those reconstructed from the noise-free DCS data. In addition, the errors in extracting the relative changes of αD_B using both the linear algorithm and the semi-infinite solution were fairly small (errors < ±2.0%) and did not depend on the tissue volume/geometry. The experimental results from the in vivo stroke mice agreed with those in simulations, demonstrating the robustness of the linear algorithm. DCS with the high-order linear algorithm shows potential for inter-subject comparison and longitudinal monitoring of absolute BFI in a variety of tissues/organs with different volumes/geometries.

  20. Vaginal motion and bladder and rectal volumes during pelvic intensity-modulated radiation therapy after hysterectomy.

    PubMed

    Jhingran, Anuja; Salehpour, Mohammad; Sam, Marianne; Levy, Larry; Eifel, Patricia J

    2012-01-01

    To evaluate variations in bladder and rectal volume and the position of the vaginal vault during a 5-week course of pelvic intensity-modulated radiation therapy (IMRT) after hysterectomy. Twenty-four patients were instructed how to fill their bladders before simulation and treatment. These patients underwent computed tomography simulations with full and empty bladders and then underwent rescanning twice weekly during IMRT; patients were asked to have full bladder for treatment. Bladder and rectal volumes and the positions of vaginal fiducial markers were determined, and changes in volume and position were calculated. The mean full and empty bladder volumes at simulation were 480 cc (range, 122-1,052) and 155 cc (range, 49-371), respectively. Bladder volumes varied widely during IMRT: the median difference between the maximum and minimum volumes was 247 cc (range, 96-585). Variations in rectal volume during IMRT were less pronounced. For the 16 patients with vaginal fiducial markers in place throughout IMRT, the median maximum movement of the markers during IMRT was 0.59 cm in the right-left direction (range, 0-0.9), 1.46 cm in the anterior-posterior direction (range, 0.8-2.79), and 1.2 cm in the superior-inferior direction (range, 0.6-2.1). Large variations in rectal or bladder volume frequently correlated with significant displacement of the vaginal apex. Although treatment with a full bladder is usually preferred because of greater sparing of small bowel, our data demonstrate that even with detailed instruction, patients are unable to maintain consistent bladder filling. Variations in organ position during IMRT can result in marked changes in the position of the target volume and the volume of small bowel exposed to high doses of radiation. Copyright © 2012 Elsevier Inc. All rights reserved.

  1. Vaginal Motion and Bladder and Rectal Volumes During Pelvic Intensity-Modulated Radiation Therapy After Hysterectomy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jhingran, Anuja, E-mail: ajhingra@mdanderson.org; Salehpour, Mohammad; Sam, Marianne

    2012-01-01

    Purpose: To evaluate variations in bladder and rectal volume and the position of the vaginal vault during a 5-week course of pelvic intensity-modulated radiation therapy (IMRT) after hysterectomy. Methods and Materials: Twenty-four patients were instructed how to fill their bladders before simulation and treatment. These patients underwent computed tomography simulations with full and empty bladders and then underwent rescanning twice weekly during IMRT; patients were asked to have full bladder for treatment. Bladder and rectal volumes and the positions of vaginal fiducial markers were determined, and changes in volume and position were calculated. Results: The mean full and empty bladder volumes at simulation were 480 cc (range, 122-1,052) and 155 cc (range, 49-371), respectively. Bladder volumes varied widely during IMRT: the median difference between the maximum and minimum volumes was 247 cc (range, 96-585). Variations in rectal volume during IMRT were less pronounced. For the 16 patients with vaginal fiducial markers in place throughout IMRT, the median maximum movement of the markers during IMRT was 0.59 cm in the right-left direction (range, 0-0.9), 1.46 cm in the anterior-posterior direction (range, 0.8-2.79), and 1.2 cm in the superior-inferior direction (range, 0.6-2.1). Large variations in rectal or bladder volume frequently correlated with significant displacement of the vaginal apex. Conclusion: Although treatment with a full bladder is usually preferred because of greater sparing of small bowel, our data demonstrate that even with detailed instruction, patients are unable to maintain consistent bladder filling. Variations in organ position during IMRT can result in marked changes in the position of the target volume and the volume of small bowel exposed to high doses of radiation.

  2. A Direct Numerical Simulation of a Temporally Evolving Liquid-Gas Turbulent Mixing Layer

    NASA Astrophysics Data System (ADS)

    Vu, Lam Xuan; Chiodi, Robert; Desjardins, Olivier

    2017-11-01

    Air-blast atomization occurs when streams of co-flowing high speed gas and low speed liquid shear to form drops. Air-blast atomization has numerous industrial applications, from combustion in jet engines to sprays used for medical coatings. The high Reynolds number and dynamic pressure ratio of a realistic air-blast atomization case require large eddy simulation and the use of multiphase sub-grid scale (SGS) models. A direct numerical simulation (DNS) of a temporally evolving mixing layer is presented to be used as a base case from which future multiphase SGS models can be developed. To construct the liquid-gas mixing layer, half of a channel flow from Kim et al. (JFM, 1987) is placed on top of a static liquid layer that then evolves over time. The DNS is performed using a conservative finite volume incompressible multiphase flow solver where phase tracking is handled with a discretely conservative volume of fluid method. This study presents statistics on velocity and volume fraction at different Reynolds and Weber numbers.

  3. Large Eddy Simulation of wind turbine wakes: detailed comparisons of two codes focusing on effects of numerics and subgrid modeling

    NASA Astrophysics Data System (ADS)

    Martínez-Tossas, Luis A.; Churchfield, Matthew J.; Meneveau, Charles

    2015-06-01

    In this work we report on results from a detailed comparative numerical study from two Large Eddy Simulation (LES) codes using the Actuator Line Model (ALM). The study focuses on prediction of wind turbine wakes and their breakdown when subject to uniform inflow. Previous studies have shown relative insensitivity to subgrid modeling in the context of a finite-volume code. The present study uses the low dissipation pseudo-spectral LES code from Johns Hopkins University (LESGO) and the second-order, finite-volume OpenFOAM code (SOWFA) from the National Renewable Energy Laboratory. When subject to uniform inflow, the loads on the blades are found to be unaffected by subgrid models or numerics, as expected. The turbulence in the wake and the location of transition to a turbulent state are affected by the subgrid-scale model and the numerics.

  4. Large Eddy Simulation of Wind Turbine Wakes. Detailed Comparisons of Two Codes Focusing on Effects of Numerics and Subgrid Modeling

    DOE PAGES

    Martinez-Tossas, Luis A.; Churchfield, Matthew J.; Meneveau, Charles

    2015-06-18

    In this work we report on results from a detailed comparative numerical study from two Large Eddy Simulation (LES) codes using the Actuator Line Model (ALM). The study focuses on prediction of wind turbine wakes and their breakdown when subject to uniform inflow. Previous studies have shown relative insensitivity to subgrid modeling in the context of a finite-volume code. The present study uses the low dissipation pseudo-spectral LES code from Johns Hopkins University (LESGO) and the second-order, finite-volume OpenFOAM code (SOWFA) from the National Renewable Energy Laboratory. When subject to uniform inflow, the loads on the blades are found to be unaffected by subgrid models or numerics, as expected. The turbulence in the wake and the location of transition to a turbulent state are affected by the subgrid-scale model and the numerics.

  5. Optimisation of confinement in a fusion reactor using a nonlinear turbulence model

    NASA Astrophysics Data System (ADS)

    Highcock, E. G.; Mandell, N. R.; Barnes, M.

    2018-04-01

    The confinement of heat in the core of a magnetic fusion reactor is optimised using a multidimensional optimisation algorithm. For the first time in such a study, the loss of heat due to turbulence is modelled at every stage using first-principles nonlinear simulations which accurately capture the turbulent cascade and large-scale zonal flows. The simulations utilise a novel approach, with gyrofluid treatment of the small-scale drift waves and gyrokinetic treatment of the large-scale zonal flows. A simple near-circular equilibrium with standard parameters is chosen as the initial condition. The figure of merit, fusion power per unit volume, is calculated, and then two control parameters, the elongation and triangularity of the outer flux surface, are varied, with the algorithm seeking to optimise the chosen figure of merit. A twofold increase in the plasma power per unit volume is achieved by moving to higher elongation and strongly negative triangularity.
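
    The optimisation loop described above can be sketched in miniature. The quadratic figure-of-merit surrogate below stands in for the expensive nonlinear turbulence simulations of the actual study; the grid ranges and the peak at high elongation and negative triangularity are illustrative assumptions only, loosely mimicking the trend reported in the abstract.

```python
# Hypothetical sketch: optimise a figure of merit (fusion power per unit
# volume) over two shape parameters, elongation kappa and triangularity delta.

def figure_of_merit(kappa, delta):
    # Toy surrogate for the turbulence simulations: peaks at high elongation
    # and strongly negative triangularity (illustrative values, not physics).
    return -((kappa - 2.2) ** 2 + (delta + 0.5) ** 2)

def grid_search(f, kappas, deltas):
    # Exhaustive search; the study uses a multidimensional optimisation
    # algorithm, but the control flow is the same: evaluate, compare, move.
    return max((f(k, d), k, d) for k in kappas for d in deltas)

kappas = [1.0 + 0.1 * i for i in range(16)]   # elongation 1.0 .. 2.5
deltas = [-0.6 + 0.1 * i for i in range(13)]  # triangularity -0.6 .. 0.6
value, kappa, delta = grid_search(figure_of_merit, kappas, deltas)
```

    With this toy surrogate, the search lands on the highest-elongation, negative-triangularity corner of the peak, echoing the qualitative outcome reported above.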

  6. Deflagrations, Detonations, and the Deflagration-to-Detonation Transition in Methane-Air Mixtures

    DTIC Science & Technology

    2011-04-27

    we attempt to answer the question: Given a large enough volume of flammable mixture of NG and air, can a weak spark ignition develop into a ... detonation? Large-scale numerical simulations, in conjunction with experimental work conducted at the National Institute for Occupational Safety and ... 2.3.3. Flame Acceleration and DDT in Channels with Obstacles; 2.3.4. DDT in Large Spaces

  7. Perturbative two- and three-loop coefficients from large β Monte Carlo

    NASA Astrophysics Data System (ADS)

    Lepage, G. P.; Mackenzie, P. B.; Shakespeare, N. H.; Trottier, H. D.

    Perturbative coefficients for Wilson loops and the static quark self-energy are extracted from Monte Carlo simulations at large β on finite volumes, where all the lattice momenta are large. The Monte Carlo results are in excellent agreement with perturbation theory through second order. New results for third order coefficients are reported. Twisted boundary conditions are used to eliminate zero modes and to suppress Z3 tunneling.
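
    The extraction idea can be illustrated with a small sketch: measurements at several large β values are fitted to a truncated series in 1/β, and the coefficients are read off. The coefficients, β values, and exactly determined three-point fit below are hypothetical, noise-free stand-ins; the actual analysis fits many Monte Carlo data points with error estimates.

```python
# Fit q(beta) = c1/beta + c2/beta**2 + c3/beta**3 through three large-beta
# "measurements" (synthetic here) and recover the perturbative coefficients.

def solve3(A, b):
    """Naive Gaussian elimination with partial pivoting for a 3x3 system."""
    A = [row[:] for row in A]
    b = b[:]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, 3))) / A[r][r]
    return x

true_coeffs = (0.5, -0.2, 0.05)               # hypothetical c1, c2, c3
betas = [8.0, 16.0, 32.0]                     # "large beta" sample points
vals = [sum(c / b ** (k + 1) for k, c in enumerate(true_coeffs)) for b in betas]
A = [[1 / b, 1 / b ** 2, 1 / b ** 3] for b in betas]
c1, c2, c3 = solve3(A, vals)
```
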

  8. Perturbative two- and three-loop coefficients from large b Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    G.P. Lepage; P.B. Mackenzie; N.H. Shakespeare

    1999-10-18

    Perturbative coefficients for Wilson loops and the static quark self-energy are extracted from Monte Carlo simulations at large {beta} on finite volumes, where all the lattice momenta are large. The Monte Carlo results are in excellent agreement with perturbation theory through second order. New results for third order coefficients are reported. Twisted boundary conditions are used to eliminate zero modes and to suppress Z{sub 3} tunneling.

  9. Greenland-Wide Seasonal Temperatures During the Last Deglaciation

    NASA Astrophysics Data System (ADS)

    Buizert, C.; Keisling, B. A.; Box, J. E.; He, F.; Carlson, A. E.; Sinclair, G.; DeConto, R. M.

    2018-02-01

    The sensitivity of the Greenland ice sheet to climate forcing is of key importance in assessing its contribution to past and future sea level rise. Surface mass loss occurs during summer, and accounting for temperature seasonality is critical in simulating ice sheet evolution and in interpreting glacial landforms and chronologies. Ice core records constrain the timing and magnitude of climate change but are largely limited to annual mean estimates from the ice sheet interior. Here we merge ice core reconstructions with transient climate model simulations to generate Greenland-wide and seasonally resolved surface air temperature fields during the last deglaciation. Greenland summer temperatures peak in the early Holocene, consistent with records of ice core melt layers. We perform deglacial Greenland ice sheet model simulations to demonstrate that accounting for realistic temperature seasonality decreases simulated glacial ice volume, expedites the deglacial margin retreat, mutes the impact of abrupt climate warming, and gives rise to a clear Holocene ice volume minimum.

  10. A comprehensive Guyton model analysis of physiologic responses to preadapting the blood volume as a countermeasure to fluid shifts

    NASA Technical Reports Server (NTRS)

    Simanonok, K. E.; Srinivasan, R. S.; Myrick, E. E.; Blomkalns, A. L.; Charles, J. B.

    1994-01-01

    The Guyton model of fluid, electrolyte, and circulatory regulation is an extensive mathematical model capable of simulating a variety of experimental conditions. It has been modified for use at NASA to simulate head-down tilt, a frequently used analog of weightlessness. Weightlessness causes a headward shift of body fluids that is believed to expand central blood volume, triggering a series of physiologic responses resulting in large losses of body fluids. We used the modified Guyton model to test the hypothesis that preadaptation of the blood volume before weightless exposure could counteract the central volume expansion caused by fluid shifts, and thereby attenuate the circulatory and renal responses that result in body fluid losses. Simulation results show that circulatory preadaptation, by a procedure resembling blood donation immediately before head-down bedrest, is effective in damping the physiologic responses to fluid shifts and reducing body fluid losses. After 10 hours of head-down tilt, preadaptation also produces higher blood volume, extracellular volume, and total body water for 20 to 30 days of bedrest, compared with non-preadapted control. These results indicate that circulatory preadaptation before current Space Shuttle missions may be beneficial for the maintenance of reentry and postflight orthostatic tolerance in astronauts. This paper presents a comprehensive examination of the simulation results pertaining to changes in relevant physiologic variables produced by blood volume reduction before a prolonged head-down tilt. The objectives were to study and develop the countermeasure theoretically, to aid in planning experimental studies of the countermeasure, and to identify potentially disadvantageous physiologic responses that may be caused by the countermeasure.

  11. The Impact of Varying the Physics Grid Resolution Relative to the Dynamical Core Resolution in CAM-SE-CSLAM

    NASA Astrophysics Data System (ADS)

    Herrington, A. R.; Lauritzen, P. H.; Reed, K. A.

    2017-12-01

    The spectral element dynamical core of the Community Atmosphere Model (CAM) has recently been coupled to an approximately isotropic, finite-volume grid through an implementation of the conservative semi-Lagrangian multi-tracer transport scheme (CAM-SE-CSLAM; Lauritzen et al. 2017). In this framework, the semi-Lagrangian transport of tracers is computed on the finite-volume grid, while the adiabatic dynamics are solved using the spectral element grid. The physical parameterizations are evaluated on the finite-volume grid, as opposed to the unevenly spaced Gauss-Lobatto-Legendre nodes of the spectral element grid. Computing the physics on the finite-volume grid reduces numerical artifacts such as grid imprinting, possibly because the forcing terms are no longer computed at element boundaries where the resolved dynamics are least smooth. The separation of the physics grid and the dynamics grid allows for a unique opportunity to understand the resolution sensitivity in CAM-SE-CSLAM. The observed large sensitivity of CAM to horizontal resolution is a poorly understood impediment to improved simulations of regional climate using global, variable resolution grids. Here, a series of idealized moist simulations are presented in which the finite-volume grid resolution is varied relative to the spectral element grid resolution in CAM-SE-CSLAM. The simulations are carried out at multiple spectral element grid resolutions, in part to provide a companion set of simulations, in which the spectral element grid resolution is varied relative to the finite-volume grid resolution, but more generally to understand if the sensitivity to the finite-volume grid resolution is consistent across a wider spectrum of resolved scales. Results are interpreted in the context of prior ideas regarding resolution sensitivity of global atmospheric models.

  12. Protein Simulation Data in the Relational Model.

    PubMed

    Simms, Andrew M; Daggett, Valerie

    2012-10-01

    High performance computing is leading to unprecedented volumes of data. Relational databases offer a robust and scalable model for storing and analyzing scientific data. However, these features do not come without a cost: significant design effort is required to build a functional and efficient repository. Modeling protein simulation data in a relational database presents several challenges: the data captured from individual simulations are large, multi-dimensional, and must integrate with both simulation software and external data sites. Here we present the dimensional design and relational implementation of a comprehensive data warehouse for storing and analyzing molecular dynamics simulations using SQL Server.
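
    The dimensional (star schema) design mentioned above can be sketched in miniature. The table and column names below are illustrative assumptions, not the warehouse's actual schema, and sqlite3 stands in for SQL Server purely for portability.

```python
import sqlite3

# Hypothetical star schema: one dimension table describing each simulation,
# one fact table holding a per-frame measurement (here, an RMSD value).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_simulation (
    sim_id INTEGER PRIMARY KEY,
    protein TEXT,
    temperature_k REAL
);
CREATE TABLE fact_frame (
    sim_id INTEGER REFERENCES dim_simulation(sim_id),
    frame INTEGER,
    rmsd REAL
);
""")
con.execute("INSERT INTO dim_simulation VALUES (1, 'example protein', 298.0)")
con.executemany("INSERT INTO fact_frame VALUES (1, ?, ?)",
                [(i, 0.1 * i) for i in range(5)])

# Typical warehouse-style analysis query: average RMSD per simulation.
row = con.execute("""
    SELECT d.protein, AVG(f.rmsd)
    FROM fact_frame AS f JOIN dim_simulation AS d USING (sim_id)
    GROUP BY d.sim_id
""").fetchone()
```

    Separating descriptive attributes (dimension) from bulk per-frame measurements (fact) is what keeps such queries simple as the data volume grows.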

  13. Protein Simulation Data in the Relational Model

    PubMed Central

    Simms, Andrew M.; Daggett, Valerie

    2011-01-01

    High performance computing is leading to unprecedented volumes of data. Relational databases offer a robust and scalable model for storing and analyzing scientific data. However, these features do not come without a cost—significant design effort is required to build a functional and efficient repository. Modeling protein simulation data in a relational database presents several challenges: the data captured from individual simulations are large, multi-dimensional, and must integrate with both simulation software and external data sites. Here we present the dimensional design and relational implementation of a comprehensive data warehouse for storing and analyzing molecular dynamics simulations using SQL Server. PMID:23204646

  14. Calculation of the Frequency Distribution of the Energy Deposition in DNA Volumes by Heavy Ions

    NASA Technical Reports Server (NTRS)

    Plante, Ianik; Cucinotta, Francis A.

    2012-01-01

    Radiation quality effects are largely determined by energy deposition in small volumes of characteristic sizes less than 10 nm, representative of short segments of DNA, the DNA nucleosome, or molecules initiating oxidative stress in the nucleus, mitochondria, or extra-cellular matrix. On this scale, qualitatively distinct types of molecular damage are possible for high linear energy transfer (LET) radiation such as heavy ions compared to low LET radiation. Unique types of DNA lesions or oxidative damages are the likely outcome of the energy deposition. The frequency distribution for energy imparted to 1-20 nm targets per unit dose or particle fluence is a useful descriptor and can be evaluated as a function of impact parameter from an ion's track. In this work, the simulation of 1-Gy irradiation of a 5-micron cubic volume by: 1) 450 ¹H⁺ ions, 300 MeV; 2) 10 ¹²C⁶⁺ ions, 290 MeV/amu; and 3) ⁵⁶Fe²⁶⁺ ions, 1000 MeV/amu was done with the Monte-Carlo simulation code RITRACKS. Cylindrical targets are generated in the irradiated volume, with random orientation. The frequency distribution curves of the energy deposited in the targets are obtained. For small targets (i.e., <25 nm size), the probability of an ion hitting a target is very small; therefore a large number of tracks and targets, as well as a large number of histories, are necessary to obtain statistically significant results. This simulation is very time-consuming and is difficult to perform using the original version of RITRACKS. Consequently, the code RITRACKS was adapted to use multiple CPUs on a workstation or on a computer cluster. To validate the simulation results, similar calculations were performed using targets with fixed position and orientation, for which experimental data are available [5].
Since the probability of single- and double-strand breaks in DNA as a function of energy deposited is well known, the results that were obtained can be used to estimate the yield of DSBs, and can be extended to include other targeted or non-targeted effects.
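
    The target-tally idea can be conveyed with a toy Monte Carlo: score the energy imparted to randomly placed targets by point-like energy deposits in an irradiated cube. Spherical targets, the box size, energies, and counts below are illustrative assumptions, not RITRACKS physics, but the result shows why the hit probability for nanometre targets is so small.

```python
import random

def tally(n_deposits, n_targets, box=5000.0, radius=10.0, seed=42):
    """Score energy (arbitrary units) in spherical targets of the given
    radius (nm) randomly placed in a cube of side `box` (nm)."""
    rng = random.Random(seed)
    # Random point-like energy deposits (position, energy).
    deposits = [(tuple(rng.uniform(0.0, box) for _ in range(3)),
                 rng.uniform(10.0, 100.0)) for _ in range(n_deposits)]
    scores = []
    for _ in range(n_targets):
        c = tuple(rng.uniform(0.0, box) for _ in range(3))
        # Sum the energy of all deposits falling inside this target.
        e = sum(de for (p, de) in deposits
                if sum((p[i] - c[i]) ** 2 for i in range(3)) <= radius ** 2)
        scores.append(e)
    return scores

scores = tally(n_deposits=5000, n_targets=200)
hit_fraction = sum(s > 0 for s in scores) / len(scores)
```

    Because a 10 nm target occupies a vanishing fraction of a 5 μm cube, almost every target scores zero; obtaining a statistically meaningful frequency distribution therefore requires very many targets and histories, which is exactly the parallelization motivation stated above.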

  15. Memory Network For Distributed Data Processors

    NASA Technical Reports Server (NTRS)

    Bolen, David; Jensen, Dean; Millard, ED; Robinson, Dave; Scanlon, George

    1992-01-01

    Universal Memory Network (UMN) is a modular, digital data-communication system enabling computers with differing bus architectures to share 32-bit-wide data between locations up to 3 km apart with less than one millisecond of latency. Makes it possible to design sophisticated real-time and near-real-time data-processing systems without data-transfer "bottlenecks". This enterprise network permits transmission of a volume of data equivalent to an encyclopedia each second. Facilities benefiting from the Universal Memory Network include telemetry stations, simulation facilities, power plants, and large laboratories, or any facility sharing very large volumes of data. The main hub of the UMN is the reflection center, which includes smaller hubs called Shared Memory Interfaces.

  16. Ku-band antenna acquisition and tracking performance study, volume 4

    NASA Technical Reports Server (NTRS)

    Huang, T. C.; Lindsey, W. C.

    1977-01-01

    The results pertaining to the tradeoff analysis and performance of the Ku-band shuttle antenna pointing and signal acquisition system are presented. The square, hexagonal and spiral antenna trajectories were investigated assuming the TDRS postulated uncertainty region and a flexible statistical model for the location of the TDRS within the uncertainty volume. The scanning trajectories, shuttle/TDRS signal parameters and dynamics, and three signal acquisition algorithms were integrated into a hardware simulation. The hardware simulation is quite flexible in that it allows for the evaluation of signal acquisition performance for an arbitrary (programmable) antenna pattern, a large range of C/N₀ values, various TDRS/shuttle a priori uncertainty distributions, and three distinct signal search algorithms.

  17. Simulating the interaction of the heliosphere with the local interstellar medium: MHD results from a finite volume approach, first bidimensional results

    NASA Technical Reports Server (NTRS)

    Chanteur, G.; Khanfir, R.

    1995-01-01

    We have designed a fully compressible MHD code working on unstructured meshes in order to accurately compute sharp structures embedded in large scale simulations. The code is based on a finite volume method making use of a kinetic flux splitting. A bidimensional version of the code has been used to simulate the interaction of a moving interstellar medium, magnetized or unmagnetized, with a rotating and magnetized heliospheric plasma source. Being aware that these computations are not realistic due to the restriction to two dimensions, we present them to demonstrate the ability of this new code to handle this problem. An axisymmetric version, now under development, will be operational in a few months. Ultimately we plan to run a full 3D version.

  18. Simulation of elution profiles in liquid chromatography - II: Investigation of injection volume overload under gradient elution conditions applied to second dimension separations in two-dimensional liquid chromatography.

    PubMed

    Stoll, Dwight R; Sajulga, Ray W; Voigt, Bryan N; Larson, Eli J; Jeong, Lena N; Rutan, Sarah C

    2017-11-10

    An important research direction in the continued development of two-dimensional liquid chromatography (2D-LC) is to improve the detection sensitivity of the method. This is especially important in applications where injection of large volumes of effluent from the first-dimension (1D) column into the second-dimension (2D) column leads to severe 2D peak broadening and peak shape distortion. For example, this is common when coupling two reversed-phase columns and the organic solvent content of the 1D mobile phase overwhelms the 2D column with each injection of 1D effluent, leading to low resolution in the second dimension. In a previous study we validated a simulation approach based on the Craig distribution model and adapted from the work of Czok and Guiochon [1] that enabled accurate simulation of simple isocratic and gradient separations with very small injection volumes, and isocratic separations with mismatched injection and mobile phase solvents [2]. In the present study we have extended this simulation approach to simulate separations relevant to 2D-LC. Specifically, we have focused on simulating 2D separations where gradient elution conditions are used, there is mismatch between the sample solvent and the starting point in the gradient elution program, injection volumes approach or even exceed the dead volume of the 2D column, and the extent of sample loop filling is varied. To validate this simulation we have compared results from simulations and experiments for 101 different conditions, including variation in injection volume (0.4-80 μL), loop filling level (25-100%), and degree of mismatch between sample organic solvent and the starting point in the gradient elution program (-20 to +20% ACN). We find that the simulation is accurate enough (median errors in retention time and peak width of -1.0 and -4.9%, without corrections for extra-column dispersion) to be useful in guiding optimization of 2D-LC separations.
However, this requires that real injection profiles obtained from 2D-LC interface valves are used to simulate the introduction of samples into the 2D column. These profiles are highly asymmetric; simulation using simple rectangular pulses leads to peak widths that are far too narrow under many conditions. We believe the simulation approach developed here will be useful for addressing practical questions in the development of 2D-LC methods.
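
    A minimal Craig (discrete plate) simulation conveys the flavor of the plate-model approach the abstract builds on. The plate counts, partition fraction, and the way volume overload is mimicked here (spreading the injection over the first n_inject plates instead of one) are illustrative assumptions, not the paper's validated model.

```python
def craig(n_plates, n_steps, p_mobile, n_inject):
    """Craig plate model: each step, solute equilibrates in every plate
    (fraction p_mobile in the mobile phase) and the mobile phase then
    advances one plate. Returns the solute profile over the plates."""
    amount = [0.0] * n_plates
    for i in range(min(n_inject, n_plates)):
        amount[i] = 1.0 / n_inject          # spread the injection plug
    for _ in range(n_steps):
        mobile = [a * p_mobile for a in amount]
        stationary = [a * (1.0 - p_mobile) for a in amount]
        # advance the mobile phase by one plate
        amount = [stationary[i] + (mobile[i - 1] if i else 0.0)
                  for i in range(n_plates)]
    return amount

narrow = craig(200, 100, 0.5, n_inject=1)        # tiny injection
overloaded = craig(200, 100, 0.5, n_inject=40)   # volume-overloaded injection
```

    Comparing the two profiles shows the qualitative overload effect discussed above: the overloaded band carries the same total mass but is broader and lower than the narrow-injection band.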

  19. Stress and Strain in Silicon Electrode Models

    DOE PAGES

    Higa, Kenneth; Srinivasan, Venkat

    2015-03-24

    While the high capacity of silicon makes it an attractive negative electrode for Li-ion batteries, the associated large volume change results in fracture and capacity fade. Composite electrodes incorporating silicon have additional complexity, as active material is attached to surrounding material which must likewise experience significant volume change. In this paper, a finite-deformation model is used to explore, for the first time, mechanical interactions between a silicon particle undergoing lithium insertion, and attached binder material. Simulations employ an axisymmetric model system in which solutions vary in two spatial directions and shear stresses develop at interfaces between materials. The mechanical response of the amorphous active material is dependent on lithium concentration, and an equation of state incorporating reported volume expansion data is used. Simulations explore the influence of active material size and binder stiffness, and suggest delamination as an additional mode of material damage. Computed strain energies and von Mises equivalent stresses are in physically-relevant ranges, comparable to reported yield stresses and adhesion energies, and predicted trends are largely consistent with reported experimental results. It is hoped that insights from this work will support the design of more robust silicon composite electrodes.

  20. Vapor condensation onto a non-volatile liquid drop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Inci, Levent; Bowles, Richard K., E-mail: richard.bowles@usask.ca

    2013-12-07

    Molecular dynamics simulations of miscible and partially miscible binary Lennard–Jones mixtures are used to study the dynamics and thermodynamics of vapor condensation onto a non-volatile liquid drop in the canonical ensemble. When the system volume is large, the driving force for condensation is low and only a submonolayer of the solvent is adsorbed onto the liquid drop. A small degree of mixing of the solvent phase into the core of the particles occurs for the miscible system. At smaller volumes, complete film formation is observed and the dynamics of film growth are dominated by cluster-cluster coalescence. Mixing into the core of the droplet is also observed for partially miscible systems below an onset volume suggesting the presence of a solubility transition. We also develop a non-volatile liquid drop model, based on the capillarity approximations, that exhibits a solubility transition between small and large drops for partially miscible mixtures and has a hysteresis loop similar to the one observed in the deliquescence of small soluble salt particles. The properties of the model are compared to our simulation results and the model is used to study the formulation of classical nucleation theory for systems with low free energy barriers.

  1. Unstructured LES of Reacting Multiphase Flows in Realistic Gas Turbine Combustors

    NASA Technical Reports Server (NTRS)

    Ham, Frank; Apte, Sourabh; Iaccarino, Gianluca; Wu, Xiao-Hua; Herrmann, Marcus; Constantinescu, George; Mahesh, Krishnan; Moin, Parviz

    2003-01-01

    As part of the Accelerated Strategic Computing Initiative (ASCI) program, an accurate and robust simulation tool is being developed to perform high-fidelity LES studies of multiphase, multiscale turbulent reacting flows in aircraft gas turbine combustor configurations using hybrid unstructured grids. In the combustor, pressurized gas from the upstream compressor is reacted with atomized liquid fuel to produce the combustion products that drive the downstream turbine. The Large Eddy Simulation (LES) approach is used to simulate the combustor because of its demonstrated superiority over RANS in predicting turbulent mixing, which is central to combustion. This paper summarizes the accomplishments of the combustor group over the past year, concentrating mainly on the two major milestones achieved this year: 1) Large scale simulation: A major rewrite and redesign of the flagship unstructured LES code has allowed the group to perform large eddy simulations of the complete combustor geometry (all 18 injectors) with over 100 million control volumes; 2) Multi-physics simulation in complex geometry: The first multi-physics simulations including fuel spray breakup, coalescence, evaporation, and combustion are now being performed in a single periodic sector (1/18th) of an actual Pratt & Whitney combustor geometry.

  2. Stochastic theory of large-scale enzyme-reaction networks: Finite copy number corrections to rate equation models

    NASA Astrophysics Data System (ADS)

    Thomas, Philipp; Straube, Arthur V.; Grima, Ramon

    2010-11-01

    Chemical reactions inside cells occur in compartment volumes in the range of atto- to femtoliters. Physiological concentrations realized in such small volumes imply low copy numbers of interacting molecules with the consequence of considerable fluctuations in the concentrations. In contrast, rate equation models are based on the implicit assumption of infinitely large numbers of interacting molecules, or equivalently, that reactions occur in infinite volumes at constant macroscopic concentrations. In this article we compute the finite-volume corrections (or equivalently the finite copy number corrections) to the solutions of the rate equations for chemical reaction networks composed of arbitrarily large numbers of enzyme-catalyzed reactions which are confined inside a small subcellular compartment. This is achieved by applying a mesoscopic version of the quasisteady-state assumption to the exact Fokker-Planck equation associated with the Poisson representation of the chemical master equation. The procedure yields impressively simple and compact expressions for the finite-volume corrections. We prove that the predictions of the rate equations will always underestimate the actual steady-state substrate concentrations for an enzyme-reaction network confined in a small volume. In particular we show that the finite-volume corrections increase with decreasing subcellular volume, decreasing Michaelis-Menten constants, and increasing enzyme saturation. The magnitude of the corrections depends sensitively on the topology of the network. The predictions of the theory are shown to be in excellent agreement with stochastic simulations for two types of networks typically associated with protein methylation and metabolism.
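
    The copy-number fluctuations that the theory above corrects for are the natural output of a stochastic simulation. Below is a bare-bones Gillespie simulation of a single enzyme-catalyzed reaction, E + S ⇌ ES → E + P, in a small compartment; the rate constants and copy numbers are made-up illustrative values, not parameters from the paper.

```python
import random

def ssa(e=10, s=50, k1=0.01, km1=0.1, k2=0.05, t_end=50.0, seed=1):
    """Gillespie stochastic simulation of E + S <-> ES -> E + P.
    Returns the copy numbers (e, s, es, p) at the final time."""
    rng = random.Random(seed)
    es = p = 0
    t = 0.0
    while t < t_end:
        a1 = k1 * e * s      # binding propensity
        a2 = km1 * es        # unbinding propensity
        a3 = k2 * es         # catalysis propensity
        a0 = a1 + a2 + a3
        if a0 == 0.0:
            break            # no reactions possible
        t += rng.expovariate(a0)           # time to next reaction
        r = rng.random() * a0              # pick which reaction fires
        if r < a1:
            e, s, es = e - 1, s - 1, es + 1
        elif r < a1 + a2:
            e, s, es = e + 1, s + 1, es - 1
        else:
            e, es, p = e + 1, es - 1, p + 1
    return e, s, es, p

e, s, es, p = ssa()
```

    Averaging many such runs and comparing against the rate-equation steady state is precisely the kind of check against which the finite-volume corrections described above are validated.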

  3. Numerical Simulations of Homogeneous Turbulence Using Lagrangian-Averaged Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Mohseni, Kamran; Shkoller, Steve; Kosovic, Branko; Marsden, Jerrold E.; Carati, Daniele; Wray, Alan; Rogallo, Robert

    2000-01-01

    The Lagrangian-averaged Navier-Stokes (LANS) equations are numerically evaluated as a turbulence closure. They are derived from a novel Lagrangian averaging procedure on the space of all volume-preserving maps and can be viewed as a numerical algorithm which removes the energy content from the small scales (smaller than some a priori fixed spatial scale alpha) using a dispersive rather than dissipative mechanism, thus maintaining the crucial features of the large scale flow. We examine the modeling capabilities of the LANS equations for decaying homogeneous turbulence, ascertain their ability to track the energy spectrum of fully resolved direct numerical simulations (DNS), compare the relative energy decay rates, and compare LANS with well-accepted large eddy simulation (LES) models.

  4. Gravitational potential wells and the cosmic bulk flow

    NASA Astrophysics Data System (ADS)

    Wang, Yuyu; Kumar, Abhinav; Feldman, Hume; Watkins, Richard

    2016-03-01

    The bulk flow is a volume average of the peculiar velocities and a useful probe of the mass distribution on large scales. The gravitational instability model views the bulk flow as a potential flow that obeys a Maxwellian distribution. We use two N-body simulations, the LasDamas Carmen and the Horizon Run, to calculate the bulk flows of various sized volumes in the simulation boxes. Once we have the bulk flow velocities as a function of scale, we investigate the mass and gravitational potential distribution around the volume. We found that matter densities can be asymmetrical and difficult to detect in real surveys; however, the gravitational potential and its gradient may provide better tools to investigate the underlying matter distribution. This study shows that bulk flows are indeed potential flows and thus provides information on the flow sources. We also show that bulk flow magnitudes follow a Maxwellian distribution on scales > 10 h⁻¹ Mpc.
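
    The defining operation here is simple: the bulk flow of a volume is the average peculiar velocity of the tracers inside it. The sketch below uses mock isotropic Gaussian velocities in place of N-body halo data; the 300 km/s per-component dispersion and sample size are illustrative assumptions.

```python
import random

def bulk_flow(velocities):
    """Volume-averaged peculiar velocity: the mean of each component."""
    n = len(velocities)
    return tuple(sum(v[i] for v in velocities) / n for i in range(3))

rng = random.Random(0)
# Mock peculiar velocities: isotropic Gaussian components (km/s) standing in
# for the haloes inside one spherical volume of an N-body box.
vels = [tuple(rng.gauss(0.0, 300.0) for _ in range(3)) for _ in range(5000)]
bx, by, bz = bulk_flow(vels)
magnitude = (bx ** 2 + by ** 2 + bz ** 2) ** 0.5
```

    For uncorrelated tracers the bulk flow magnitude shrinks like 1/sqrt(N); real large-scale correlations are what keep measured bulk flows interesting, and repeating this average over many volumes is how the Maxwellian distribution of magnitudes is tested.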

  5. Large-Eddy Simulation of Internal Flow through Human Vocal Folds

    NASA Astrophysics Data System (ADS)

    Lasota, Martin; Šidlof, Petr

    2018-06-01

    The phonatory process occurs when air is expelled from the lungs through the glottis and the pressure drop causes flow-induced oscillations of the vocal folds. The flow fields created in phonation are highly unsteady, and coherent vortex structures are also generated. For accuracy, it is essential to compute on a humanlike computational domain with an appropriate mathematical model. The work deals with numerical simulation of air flow within the space between the plicae vocales and plicae vestibulares. In addition to the dynamic width of the rima glottidis, where the sound is generated, the lateral ventriculus laryngis and sacculus laryngis are included in the computational domain as well. The paper presents the results from OpenFOAM, obtained with a large-eddy simulation using second-order finite volume discretization of the incompressible Navier-Stokes equations. Large-eddy simulations with different subgrid scale models are executed on a structured mesh. In these cases, only subgrid scale models that represent turbulence via a turbulent viscosity and the Boussinesq approximation are used in the subglottal and supraglottal areas of the larynx.

  6. Discreteness-induced concentration inversion in mesoscopic chemical systems.

    PubMed

    Ramaswamy, Rajesh; González-Segredo, Nélido; Sbalzarini, Ivo F; Grima, Ramon

    2012-04-10

    Molecular discreteness is apparent in small-volume chemical systems, such as biological cells, leading to stochastic kinetics. Here we present a theoretical framework to understand the effects of discreteness on the steady state of a monostable chemical reaction network. We consider independent realizations of the same chemical system in compartments of different volumes. Rate equations ignore molecular discreteness and predict the same average steady-state concentrations in all compartments. However, our theory predicts that the average steady state of the system varies with volume: if a species is more abundant than another for large volumes, then the reverse occurs for volumes below a critical value, leading to a concentration inversion effect. The addition of extrinsic noise increases the size of the critical volume. We theoretically predict the critical volumes and verify, by exact stochastic simulations, that rate equations are qualitatively incorrect in sub-critical volumes.

  7. Metastable Prepores in Tension-Free Lipid Bilayers

    NASA Astrophysics Data System (ADS)

    Ting, Christina L.; Awasthi, Neha; Müller, Marcus; Hub, Jochen S.

    2018-03-01

    The formation and closure of aqueous pores in lipid bilayers is a key step in various biophysical processes. Large pores are well described by classical nucleation theory, but the free-energy landscape of small, biologically relevant pores has remained largely unexplored. The existence of small and metastable "prepores" was hypothesized decades ago from electroporation experiments, but resolving metastable prepores from theoretical models remained challenging. Using two complementary methods—atomistic simulations and self-consistent field theory of a minimal lipid model—we determine the parameters for which metastable prepores occur in lipid membranes. Both methods consistently suggest that pore metastability depends on the relative volume ratio between the lipid head group and lipid tails: lipids with a larger head-group volume fraction (or shorter saturated tails) form metastable prepores, whereas lipids with a smaller head-group volume fraction (or longer unsaturated tails) form unstable prepores.

  8. Annual Research Briefs: 1995

    NASA Technical Reports Server (NTRS)

    1995-01-01

This report contains the 1995 annual progress reports of the Research Fellows and students of the Center for Turbulence Research (CTR). In 1995 CTR continued its concentration on the development and application of large-eddy simulation to complex flows, development of novel modeling concepts for engineering computations in the Reynolds averaged framework, and turbulent combustion. In large-eddy simulation, a number of numerical and experimental issues have surfaced which are being addressed. The first group of reports in this volume are on large-eddy simulation. A key finding in this area was the revelation of possibly significant numerical errors that may overwhelm the effects of the subgrid-scale model. We also commissioned a new experiment to support the LES validation studies. The remaining articles in this report are concerned with Reynolds averaged modeling, studies of turbulence physics and flow generated sound, combustion, and simulation techniques. Fundamental studies of turbulent combustion using direct numerical simulations which started at CTR will continue to be emphasized. These studies and their counterparts carried out during the summer programs have had a noticeable impact on combustion research worldwide.

  9. GenASiS Mathematics: Object-oriented manifolds, operations, and solvers for large-scale physics simulations

    NASA Astrophysics Data System (ADS)

    Cardall, Christian Y.; Budiardja, Reuben D.

    2018-01-01

The large-scale computer simulation of a system of physical fields governed by partial differential equations requires some means of approximating the mathematical limit of continuity. For example, conservation laws are often treated with a 'finite-volume' approach in which space is partitioned into a large number of small 'cells,' with fluxes through cell faces providing an intuitive discretization modeled on the mathematical definition of the divergence operator. Here we describe and make available Fortran 2003 classes furnishing extensible object-oriented implementations of simple meshes and the evolution of generic conserved currents thereon, along with individual 'unit test' programs and larger example problems demonstrating their use. These classes inaugurate the Mathematics division of our developing astrophysics simulation code GenASiS (General Astrophysical Simulation System), which will be expanded over time to include additional meshing options, mathematical operations, solver types, and solver variations appropriate for many multiphysics applications.
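The "fluxes through cell faces" discretization can be sketched generically: each cell average changes only through the fluxes at its two faces, so the total is conserved by construction. This illustration (1D periodic grid, explicit time step, upwind fluxes for a constant advection speed) is not GenASiS code, and all values are made up.

```python
def finite_volume_step(u, flux, dx, dt):
    """One explicit finite-volume update of cell averages u.
    flux[i] is the numerical flux at the face between cells i-1 and i
    (periodic domain); the update is the discrete divergence of the
    fluxes, so sum(u)*dx is conserved exactly."""
    n = len(u)
    return [u[i] - dt / dx * (flux[(i + 1) % n] - flux[i]) for i in range(n)]

c = 1.0                                 # advection speed (illustrative)
u = [0.0, 1.0, 2.0, 1.0, 0.0, 0.0]      # initial cell averages
dx, dt = 1.0, 0.5                       # CFL number c*dt/dx = 0.5
faces = [c * u[i - 1] for i in range(len(u))]   # upwind flux at each face
u_new = finite_volume_step(u, faces, dx, dt)
```

Because the same face flux is added to one cell and subtracted from its neighbor, the sum of `u_new` equals the sum of `u` to rounding error, which is the discrete analogue of the divergence theorem mentioned in the abstract.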

  10. Neural networks within multi-core optic fibers

    PubMed Central

    Cohen, Eyal; Malka, Dror; Shemer, Amir; Shahmoon, Asaf; Zalevsky, Zeev; London, Michael

    2016-01-01

    Hardware implementation of artificial neural networks facilitates real-time parallel processing of massive data sets. Optical neural networks offer low-volume 3D connectivity together with large bandwidth and minimal heat production in contrast to electronic implementation. Here, we present a conceptual design for in-fiber optical neural networks. Neurons and synapses are realized as individual silica cores in a multi-core fiber. Optical signals are transferred transversely between cores by means of optical coupling. Pump driven amplification in erbium-doped cores mimics synaptic interactions. We simulated three-layered feed-forward neural networks and explored their capabilities. Simulations suggest that networks can differentiate between given inputs depending on specific configurations of amplification; this implies classification and learning capabilities. Finally, we tested experimentally our basic neuronal elements using fibers, couplers, and amplifiers, and demonstrated that this configuration implements a neuron-like function. Therefore, devices similar to our proposed multi-core fiber could potentially serve as building blocks for future large-scale small-volume optical artificial neural networks. PMID:27383911
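A toy numerical sketch of the neuron-like element described: input cores couple a fraction of their optical power into the neuron core, pump-driven (erbium) gain amplifies the sum, and the amplifier saturates. The coupling, gain, and saturation values here are invented for illustration and are not the paper's measured parameters.

```python
def fiber_neuron(input_powers, coupling=0.1, gain=20.0, saturation=1.0):
    """Neuron-like response of an amplifying core: transverse coupling
    sums a fraction of each input power, amplification applies gain,
    and the output clips at the saturation level. All parameter values
    are illustrative assumptions."""
    summed = sum(coupling * p for p in input_powers)
    return min(gain * summed, saturation)

weak = fiber_neuron([0.001, 0.001])   # small inputs: linear regime
strong = fiber_neuron([0.5, 0.5])     # large inputs: saturated output
```

The saturating nonlinearity is what gives the element its neuron-like (thresholded) transfer function; classification arises from how gains are configured across layers.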

  11. Neural networks within multi-core optic fibers.

    PubMed

    Cohen, Eyal; Malka, Dror; Shemer, Amir; Shahmoon, Asaf; Zalevsky, Zeev; London, Michael

    2016-07-07

    Hardware implementation of artificial neural networks facilitates real-time parallel processing of massive data sets. Optical neural networks offer low-volume 3D connectivity together with large bandwidth and minimal heat production in contrast to electronic implementation. Here, we present a conceptual design for in-fiber optical neural networks. Neurons and synapses are realized as individual silica cores in a multi-core fiber. Optical signals are transferred transversely between cores by means of optical coupling. Pump driven amplification in erbium-doped cores mimics synaptic interactions. We simulated three-layered feed-forward neural networks and explored their capabilities. Simulations suggest that networks can differentiate between given inputs depending on specific configurations of amplification; this implies classification and learning capabilities. Finally, we tested experimentally our basic neuronal elements using fibers, couplers, and amplifiers, and demonstrated that this configuration implements a neuron-like function. Therefore, devices similar to our proposed multi-core fiber could potentially serve as building blocks for future large-scale small-volume optical artificial neural networks.

  12. A multi-scale network method for two-phase flow in porous media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khayrat, Karim, E-mail: khayratk@ifd.mavt.ethz.ch; Jenny, Patrick

Pore-network models are useful in the study of pore-scale flow in porous media. In order to extract macroscopic properties from flow simulations in pore-networks, it is crucial that the networks be large enough to be considered representative elementary volumes. However, existing two-phase network flow solvers are limited to relatively small domains. To overcome this limitation, a multi-scale pore-network (MSPN) method, which takes into account flow-rate effects and can simulate larger domains than existing methods, was developed. In our solution algorithm, a large pore network is partitioned into several smaller sub-networks. The algorithm to advance the fluid interfaces within each subnetwork consists of three steps. First, a global pressure problem on the network is solved approximately using the multiscale finite volume (MSFV) method. Next, the fluxes across the subnetworks are computed. Lastly, using fluxes as boundary conditions, a dynamic two-phase flow solver is used to advance the solution in time. Simulation results of drainage scenarios at different capillary numbers and unfavourable viscosity ratios are presented and used to validate the MSPN method against solutions obtained by an existing dynamic network flow solver.

  13. Investigation of unsteadiness in Shock-particle cloud interaction: Fully resolved two-dimensional simulation and one-dimensional modeling

    NASA Astrophysics Data System (ADS)

    Hosseinzadeh-Nik, Zahra; Regele, Jonathan D.

    2015-11-01

Dense compressible particle-laden flow, which has a complex nature, exists in various engineering applications. A shock wave impacting a particle cloud is a canonical problem for investigating this type of flow. It has been demonstrated that large flow unsteadiness is generated inside the particle cloud by the flow induced by the shock passage. It is desirable to develop models for the Reynolds stress that capture the energy contained in vortical structures, so that volume-averaged models with point particles can be simulated accurately. However, previous work used the Euler equations, which makes the prediction of vorticity generation and propagation inaccurate. In this work, a fully resolved two-dimensional (2D) simulation using the compressible Navier-Stokes equations, with a volume penalization method to model the particles, has been performed with the parallel adaptive wavelet-collocation method. The results still show large unsteadiness inside and downstream of the particle cloud. A 1D model is created for the unclosed terms based upon these 2D results. The 1D model uses a two-phase simple low-dissipation AUSM scheme (TSLAU) coupled with the compressible two-phase kinetic energy equation.

  14. Distributed database kriging for adaptive sampling (D²KAS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roehm, Dominic; Pavel, Robert S.; Barros, Kipton

We present an adaptive sampling method supplemented by a distributed database and a prediction method for multiscale simulations using the Heterogeneous Multiscale Method. A finite-volume scheme integrates the macro-scale conservation laws for elastodynamics, which are closed by momentum and energy fluxes evaluated at the micro-scale. In the original approach, molecular dynamics (MD) simulations are launched for every macro-scale volume element. Our adaptive sampling scheme replaces a large fraction of costly micro-scale MD simulations with fast table lookup and prediction. The cloud database Redis provides the plain table lookup, and with locality aware hashing we gather input data for our prediction scheme. For the latter we use kriging, which estimates an unknown value and its uncertainty (error) at a specific location in parameter space by using weighted averages of the neighboring points. We find that our adaptive scheme significantly improves simulation performance by a factor of 2.5 to 25, while retaining high accuracy for various choices of the algorithm parameters.

  15. Distributed database kriging for adaptive sampling (D²KAS)

    DOE PAGES

    Roehm, Dominic; Pavel, Robert S.; Barros, Kipton; ...

    2015-03-18

We present an adaptive sampling method supplemented by a distributed database and a prediction method for multiscale simulations using the Heterogeneous Multiscale Method. A finite-volume scheme integrates the macro-scale conservation laws for elastodynamics, which are closed by momentum and energy fluxes evaluated at the micro-scale. In the original approach, molecular dynamics (MD) simulations are launched for every macro-scale volume element. Our adaptive sampling scheme replaces a large fraction of costly micro-scale MD simulations with fast table lookup and prediction. The cloud database Redis provides the plain table lookup, and with locality aware hashing we gather input data for our prediction scheme. For the latter we use kriging, which estimates an unknown value and its uncertainty (error) at a specific location in parameter space by using weighted averages of the neighboring points. We find that our adaptive scheme significantly improves simulation performance by a factor of 2.5 to 25, while retaining high accuracy for various choices of the algorithm parameters.
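The kriging step (an estimate plus an uncertainty built from weighted averages of neighboring samples) can be caricatured in a few lines. The sketch below is an illustrative covariance-weighted average with a crude leftover-weight uncertainty, not the full kriging linear system solved in D²KAS; the function name, length scale, and data are all assumptions.

```python
import math

def kriging_style_estimate(points, x0, length_scale=1.0):
    """Illustrative stand-in for a kriging lookup: estimate the value at
    x0 as a Gaussian-covariance-weighted average of known (x, y) samples,
    with a crude uncertainty that grows as x0 moves away from all samples.
    Real kriging solves a linear system for the weights instead."""
    weights = [math.exp(-((x - x0) / length_scale) ** 2) for x, _ in points]
    total = sum(weights)
    estimate = sum(w * y for w, (_, y) in zip(weights, points)) / total
    uncertainty = 1.0 - max(weights)   # 0 when x0 coincides with a sample
    return estimate, uncertainty

points = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]
est, unc = kriging_style_estimate(points, 1.0)
```

In the adaptive scheme, a prediction would be accepted only when the reported uncertainty falls below a tolerance; otherwise a fresh micro-scale MD evaluation is launched.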

  16. Streamflow simulation studies of the Hillsborough, Alafia, and Anclote Rivers, west-central Florida

    USGS Publications Warehouse

    Turner, J.F.

    1979-01-01

A modified version of the Georgia Tech Watershed Model was applied to simulate flow in three large river basins of west-central Florida. Calibrations were evaluated by comparing the following synthesized and observed data: annual hydrographs for the 1959, 1960, 1973 and 1974 water years, flood hydrographs (maximum daily discharge and flood volume), and long-term annual flood-peak discharges (1950-72). Annual hydrographs, excluding the 1973 water year, were compared using average absolute error in annual runoff and daily flows and correlation coefficients of monthly and daily flows. Correlation coefficients for simulated and observed maximum daily discharges and flood volumes used for calibrating range from 0.91 to 0.98, and average standard errors of estimate range from 18 to 45 percent. Correlation coefficients for simulated and observed annual flood-peak discharges range from 0.60 to 0.74, and average standard errors of estimate range from 33 to 44 percent. (Woodard-USGS)
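The calibration statistics quoted above are ordinary Pearson correlation coefficients between simulated and observed series. For reference, a minimal implementation (the data values below are made up, not the study's flows):

```python
def pearson_r(sim, obs):
    """Pearson correlation coefficient between simulated and observed
    values: covariance divided by the product of standard deviations."""
    n = len(sim)
    ms, mo = sum(sim) / n, sum(obs) / n
    cov = sum((s - ms) * (o - mo) for s, o in zip(sim, obs))
    sd_s = sum((s - ms) ** 2 for s in sim) ** 0.5
    sd_o = sum((o - mo) ** 2 for o in obs) ** 0.5
    return cov / (sd_s * sd_o)

# Perfectly proportional series correlate at r = 1 regardless of scale.
r = pearson_r([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0])
```

Note that r measures linear association only; a model can have r near 1 while being biased in total volume, which is why the study also reports standard errors of estimate.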

  17. Plasma density injection and flow during coaxial helicity injection in a tokamak

    NASA Astrophysics Data System (ADS)

    Hooper, E. B.

    2018-02-01

    Whole device, resistive MHD simulations of spheromaks and tokamaks have used a large diffusion coefficient that maintains a nearly constant density throughout the device. In the present work, helicity and plasma are coinjected into a low-density plasma in a tokamak with a small diffusion coefficient. As in previous simulations [Hooper et al., Phys. Plasmas 20, 092510 (2013)], a flux bubble is formed, which expands to fill the tokamak volume. The injected plasma is non-uniform inside the bubble. The flow pattern is analyzed; when the simulation is not axisymmetric, an n = 1 mode on the surface of the bubble generates leakage of plasma into the low-density volume. Closed flux is generated following injection, as in experiments and previous simulations. The result provides a more detailed physics analysis of the injection, including density non-uniformities in the plasma that may affect its use as a startup plasma [Raman et al., Phys. Rev. Lett. 97, 175002 (2006)].

  18. Computational fluid dynamics study of viscous fingering in supercritical fluid chromatography.

    PubMed

    Subraveti, Sai Gokul; Nikrityuk, Petr; Rajendran, Arvind

    2018-01-26

Axi-symmetric numerical simulations are carried out to study the dynamics of a plug introduced through a mixed-stream injection in supercritical fluid chromatographic columns. The computational fluid dynamics model developed in this work takes into account both the hydrodynamics and adsorption equilibria to describe the phenomena of viscous fingering and the plug effect that contribute to peak distortions in mixed-stream injections. The model was implemented into commercial computational fluid dynamics software using user-defined functions. The simulations describe the propagation of both the solute and modifier, highlighting the interplay between the hydrodynamics and the plug effect. The simulated peaks showed good agreement with experimental data published in the literature involving different injection volumes (5 μL, 50 μL, 1 mL and 2 mL) of flurbiprofen on a Chiralpak AD-H column using a mobile phase of CO2 and methanol. The study demonstrates that while viscous fingering is the main source of peak distortions for large-volume injections (1 mL and 2 mL), it has negligible impact on small-volume injections (5 μL and 50 μL). Band broadening in small-volume injections arises mainly from the plug effect. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.

  19. Interstitial solute transport in 3D reconstructed neuropil occurs by diffusion rather than bulk flow.

    PubMed

    Holter, Karl Erik; Kehlet, Benjamin; Devor, Anna; Sejnowski, Terrence J; Dale, Anders M; Omholt, Stig W; Ottersen, Ole Petter; Nagelhus, Erlend Arnulf; Mardal, Kent-André; Pettersen, Klas H

    2017-09-12

The brain lacks lymph vessels and must rely on other mechanisms for clearance of waste products, including amyloid-β, which may form pathological aggregates if not effectively cleared. It has been proposed that flow of interstitial fluid through the brain's interstitial space provides a mechanism for waste clearance. Here we compute the permeability and simulate pressure-mediated bulk flow through 3D electron microscope (EM) reconstructions of interstitial space. The space was divided into sheets (i.e., space between two parallel membranes) and tunnels (where three or more membranes meet). Simulation results indicate that even for larger extracellular volume fractions than what is reported for sleep and for geometries with a high tunnel volume fraction, the permeability was too low to allow for any substantial bulk flow at physiological hydrostatic pressure gradients. For two different geometries with the same extracellular volume fraction, the geometry with the most tunnel volume had [Formula: see text] higher permeability, but the bulk flow was still insignificant. These simulation results suggest that even large-molecule solutes would be more easily cleared from the brain interstitium by diffusion than by bulk flow. Thus, diffusion within the interstitial space combined with advection along vessels is likely to substitute for the lymphatic drainage system in other organs.
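The diffusion-versus-bulk-flow comparison above is conveniently summarized by a Péclet number, Pe = vL/D: the ratio of advective to diffusive transport over a length L, with Pe ≪ 1 meaning diffusion dominates. The velocity, length scale, and diffusivity below are order-of-magnitude assumptions for illustration, not values from the paper.

```python
def peclet(velocity, length, diffusivity):
    """Peclet number Pe = v * L / D (dimensionless). Pe << 1 means
    diffusive transport dominates advective transport over length L."""
    return velocity * length / diffusivity

# Illustrative values: ~1 nm/s superficial flow over a 10 um path,
# with a small-solute diffusivity of 1e-10 m^2/s (all assumptions).
pe = peclet(1e-9, 10e-6, 1e-10)
```

With these numbers Pe is about 1e-4, i.e. deep in the diffusion-dominated regime, consistent with the paper's conclusion that interstitial bulk flow is insignificant at physiological pressure gradients.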

  20. Hydrothermal fluid flow and deformation in large calderas: Inferences from numerical simulations

    USGS Publications Warehouse

    Hurwitz, S.; Christiansen, L.B.; Hsieh, P.A.

    2007-01-01

Inflation and deflation of large calderas is traditionally interpreted as being induced by volume change of a discrete source embedded in an elastic or viscoelastic half-space, though it has also been suggested that hydrothermal fluids may play a role. To test the latter hypothesis, we carry out numerical simulations of hydrothermal fluid flow and poroelastic deformation in calderas by coupling two numerical codes: (1) TOUGH2 [Pruess et al., 1999], which simulates flow in porous or fractured media, and (2) BIOT2 [Hsieh, 1996], which simulates fluid flow and deformation in a linearly elastic porous medium. In the simulations, high-temperature water (350 °C) is injected at variable rates into a cylinder (radius 50 km, height 3-5 km). A sensitivity analysis indicates that small differences in the values of permeability and its anisotropy, the depth and rate of hydrothermal injection, and the values of the shear modulus may lead to significant variations in the magnitude, rate, and geometry of ground surface displacement, or uplift. Some of the simulated uplift rates are similar to observed uplift rates in large calderas, suggesting that the injection of aqueous fluids into the shallow crust may explain some of the deformation observed in calderas.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pugmire, David; Kress, James; Choi, Jong

Data driven science is becoming increasingly common and complex, and is placing tremendous stresses on visualization and analysis frameworks. Data sources producing 10 GB per second (and more) are becoming increasingly commonplace in simulation, sensor, and experimental sciences. These data sources, which are often distributed around the world, must be analyzed by teams of scientists that are also distributed. Enabling scientists to view, query and interact with such large volumes of data in near-real-time requires a rich fusion of visualization and analysis techniques, middleware and workflow systems. This paper discusses initial research into visualization and analysis of distributed data workflows that enables scientists to make near-real-time decisions over large volumes of time-varying data.

  2. Optical architecture design for detection of absorbers embedded in visceral fat.

    PubMed

    Francis, Robert; Florence, James; MacFarlane, Duncan

    2014-05-01

    Optically absorbing ducts embedded in scattering adipose tissue can be injured during laparoscopic surgery. Non-sequential simulations and theoretical analysis compare optical system configurations for detecting these absorbers. For absorbers in deep scattering volumes, trans-illumination is preferred instead of diffuse reflectance. For improved contrast, a scanning source with a large area detector is preferred instead of a large area source with a pixelated detector.

  3. Optical architecture design for detection of absorbers embedded in visceral fat

    PubMed Central

    Francis, Robert; Florence, James; MacFarlane, Duncan

    2014-01-01

    Optically absorbing ducts embedded in scattering adipose tissue can be injured during laparoscopic surgery. Non-sequential simulations and theoretical analysis compare optical system configurations for detecting these absorbers. For absorbers in deep scattering volumes, trans-illumination is preferred instead of diffuse reflectance. For improved contrast, a scanning source with a large area detector is preferred instead of a large area source with a pixelated detector. PMID:24877008

  4. Mechanistic simulation of normal-tissue damage in radiotherapy—implications for dose-volume analyses

    NASA Astrophysics Data System (ADS)

    Rutkowska, Eva; Baker, Colin; Nahum, Alan

    2010-04-01

A radiobiologically based 3D model of normal tissue has been developed in which complications are generated when 'irradiated'. The aim is to provide insight into the connection between dose-distribution characteristics, different organ architectures and complication rates beyond that obtainable with simple DVH-based analytical NTCP models. In this model the organ consists of a large number of functional subunits (FSUs), populated by stem cells which are killed according to the LQ model. A complication is triggered if the density of FSUs in any 'critical functioning volume' (CFV) falls below some threshold. The (fractional) CFV determines the organ architecture and can be varied continuously from small (series-like behaviour) to large (parallel-like). A key feature of the model is its ability to account for the spatial dependence of dose distributions. Simulations were carried out to investigate correlations between dose-volume parameters and the incidence of 'complications' using different pseudo-clinical dose distributions. Correlations between dose-volume parameters and outcome depended on characteristics of the dose distributions and on organ architecture. As anticipated, the mean dose and V20 correlated most strongly with outcome for a parallel organ, and the maximum dose for a serial organ. Interestingly, better correlation was obtained between the 3D computer model and the LKB model with dose distributions typical for serial organs than with those typical for parallel organs. This work links the results of dose-volume analyses to dataset characteristics typical for serial and parallel organs and it may help investigators interpret the results from clinical studies.
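A minimal Monte Carlo sketch of the organ model described above, assuming a uniform dose: each stem cell survives with LQ probability exp(-αd - βd²), an FSU dies only when all of its cells die, and a complication is scored when the surviving-FSU fraction falls below a critical level. Every parameter value here is an illustrative assumption, and the spatial dose dependence central to the paper is deliberately omitted.

```python
import math
import random

def complication_probability(dose, n_fsu=1000, cells_per_fsu=100,
                             alpha=0.35, beta=0.035, threshold=0.7,
                             trials=200, seed=1):
    """Monte Carlo sketch of an FSU-based NTCP model at uniform dose d:
    per-cell LQ survival S = exp(-alpha*d - beta*d**2); an FSU dies only
    if all its stem cells die; a complication occurs when the surviving
    FSU fraction drops below `threshold` (a serial/parallel knob playing
    the role of the CFV). All parameter values are assumptions."""
    rng = random.Random(seed)
    s_cell = math.exp(-alpha * dose - beta * dose ** 2)
    p_fsu_dead = (1.0 - s_cell) ** cells_per_fsu
    complications = 0
    for _ in range(trials):
        survivors = sum(1 for _ in range(n_fsu)
                        if rng.random() >= p_fsu_dead)
        if survivors / n_fsu < threshold:
            complications += 1
    return complications / trials
```

Sweeping the dose produces the characteristic sigmoid dose-response, and moving `threshold` toward 1 makes the toy organ behave more serially, which is the architecture effect the abstract discusses.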

  5. Lysine production from methanol at 50 degrees C using Bacillus methanolicus: Modeling volume control, lysine concentration, and productivity using a three-phase continuous simulation.

    PubMed

    Lee, G H; Hur, W; Bremmon, C E; Flickinger, M C

    1996-03-20

A simulation was developed, based on experimental data obtained in a 14-L reactor, to predict the growth and L-lysine accumulation kinetics and the change in volume of a large-scale (250-m(3)) Bacillus methanolicus methanol-based process. Homoserine auxotrophs of B. methanolicus MGA3 are unique methylotrophs because of their ability to secrete lysine during aerobic growth and threonine starvation at 50 degrees C. Dissolved methanol (100 mM), pH, dissolved oxygen tension (0.063 atm), and threonine levels were controlled to obtain threonine-limited conditions and high cell density (25 g dry cell weight/L) in a 14-L reactor. As a fed-batch process, the additions of neat methanol (fed on demand), threonine, and other nutrients cause the volume of the fermentation to increase and the final lysine concentration to decrease. In addition, water produced as a result of methanol metabolism contributes to the increase in the volume of the reactor. A three-phase approach was used to predict the rate of change of culture volume based on carbon dioxide production and methanol consumption. This model was used for the evaluation of volume control strategies to optimize lysine productivity. A constant-volume reactor process with variable feeding and continuous removal of broth and cells (VF(cstr)) resulted in higher lysine productivity than a fed-batch process without volume control. This model predicts the variation in productivity of lysine with changes in growth and in specific lysine productivity. Simple modifications of the model allow one to investigate other high-lysine-secreting strains with different growth and lysine productivity characteristics. Strain NOA2#13A5-2, which secretes lysine and other end-products, was modeled using both growth- and non-growth-associated lysine productivity. A modified version of this model was used to simulate the change in culture volume of another L-lysine-producing mutant (NOA2#13A52-8A66) with reduced secretion of end-products. The modified simulation indicated that growth-associated production dominates in strain NOA2#13A52-8A66. (c) 1996 John Wiley & Sons, Inc.
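The volume bookkeeping at the heart of such a model reduces to a mass balance in which feed streams add volume directly and metabolized methanol adds metabolic water (roughly 2 mol water per mol methanol oxidized). The sketch below is a generic Euler integration with made-up rates and a hypothetical water yield, not the paper's three-phase model.

```python
def broth_volume(v0, feed_rate, water_per_methanol, methanol_rate,
                 hours, dt=0.01):
    """Euler integration of dV/dt = F_feed + y_w * q_MeOH.
    v0: initial broth volume (L); feed_rate: total liquid feed (L/h);
    methanol_rate: methanol consumption (mol/h); water_per_methanol:
    litres of metabolic water per mole consumed. All values illustrative."""
    v = v0
    for _ in range(int(hours / dt)):
        v += (feed_rate + water_per_methanol * methanol_rate) * dt
    return v

# 8 L start, 0.05 L/h feed, 2 mol/h methanol, ~36 mL water per mol
# (2 mol x 18 mL/mol), integrated over 40 h.
v_end = broth_volume(8.0, 0.05, 3.6e-5, 2.0, hours=40.0)
```

Even this crude balance shows why the final lysine concentration falls in fed-batch operation: the volume drifts upward continuously, which motivates the constant-volume (VF(cstr)) strategy evaluated in the paper.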

  6. Statistical effects related to low numbers of reacting molecules analyzed for a reversible association reaction A + B = C in ideally dispersed systems: An apparent violation of the law of mass action

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Szymanski, R., E-mail: rszymans@cbmm.lodz.pl; Sosnowski, S.; Maślanka, Ł.

    2016-03-28

Theoretical analysis and computer simulations (Monte Carlo and numerical integration of differential equations) show that the statistical effect of a small number of reacting molecules depends on the way the molecules are distributed among the small-volume nano-reactors (droplets in this study). A simple reversible association A + B = C was chosen as a model reaction, enabling observation of both thermodynamic (apparent equilibrium constant) and kinetic effects of a small number of reactant molecules. When substrates are distributed uniformly among droplets, all containing the same number of substrate molecules, the apparent equilibrium constant of the association is higher than the chemical one (observed in a macroscopic, large-volume system). The average rate of the association, initially independent of the numbers of molecules, becomes (at higher conversions) higher than that in a macroscopic system: the lower the number of substrate molecules in a droplet, the higher the rate. This results in the correspondingly higher apparent equilibrium constant. Quite the opposite behavior is observed when reactant molecules are distributed randomly among droplets: the apparent association rate and equilibrium constants are lower than those observed in large-volume systems, and are lower the smaller the average number of reacting molecules in a droplet. The random distribution of reactant molecules corresponds to ideal dispersing (equal droplet sizes) of a reaction mixture. Our simulations have shown that when the equilibrated large-volume system is dispersed, the resulting droplet system is already at equilibrium, and no changes in the proportions of droplets differing in reactant composition can be observed upon prolongation of the reaction time.

  7. Simulation of all-scale atmospheric dynamics on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Smolarkiewicz, Piotr K.; Szmelter, Joanna; Xiao, Feng

    2016-10-01

    The advance of massively parallel computing in the nineteen nineties and beyond encouraged finer grid intervals in numerical weather-prediction models. This has improved resolution of weather systems and enhanced the accuracy of forecasts, while setting the trend for development of unified all-scale atmospheric models. This paper first outlines the historical background to a wide range of numerical methods advanced in the process. Next, the trend is illustrated with a technical review of a versatile nonoscillatory forward-in-time finite-volume (NFTFV) approach, proven effective in simulations of atmospheric flows from small-scale dynamics to global circulations and climate. The outlined approach exploits the synergy of two specific ingredients: the MPDATA methods for the simulation of fluid flows based on the sign-preserving properties of upstream differencing; and the flexible finite-volume median-dual unstructured-mesh discretisation of the spatial differential operators comprising PDEs of atmospheric dynamics. The paper consolidates the concepts leading to a family of generalised nonhydrostatic NFTFV flow solvers that include soundproof PDEs of incompressible Boussinesq, anelastic and pseudo-incompressible systems, common in large-eddy simulation of small- and meso-scale dynamics, as well as all-scale compressible Euler equations. Such a framework naturally extends predictive skills of large-eddy simulation to the global atmosphere, providing a bottom-up alternative to the reverse approach pursued in the weather-prediction models. Theoretical considerations are substantiated by calculations attesting to the versatility and efficacy of the NFTFV approach. Some prospective developments are also discussed.
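In its first-order form, the sign-preserving upstream differencing underlying MPDATA is donor-cell advection. A minimal 1D periodic sketch follows (the actual MPDATA adds antidiffusive corrective passes, omitted here); the field and Courant number are made up.

```python
def donor_cell_step(psi, c):
    """One donor-cell (first-order upstream) advection step on a
    periodic 1D grid with constant Courant number c, 0 <= c <= 1.
    Each new value is (1-c)*psi[i] + c*psi[i-1]: a convex combination,
    so a nonnegative field stays nonnegative and the total is conserved."""
    n = len(psi)
    flux = [c * psi[i - 1] for i in range(n)]   # upstream value for c > 0
    return [psi[i] - (flux[(i + 1) % n] - flux[i]) for i in range(n)]

psi = [0.0, 0.0, 1.0, 0.0, 0.0]   # initial pulse
for _ in range(10):
    psi = donor_cell_step(psi, 0.5)
```

The scheme's strong numerical diffusion spreads the pulse, which is exactly what MPDATA's subsequent antidiffusive passes are designed to undo while keeping the sign-preserving property.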

  8. Lennard-Jones type pair-potential method for coarse-grained lipid bilayer membrane simulations in LAMMPS

    NASA Astrophysics Data System (ADS)

    Fu, S.-P.; Peng, Z.; Yuan, H.; Kfoury, R.; Young, Y.-N.

    2017-01-01

Lipid bilayer membranes have been extensively studied by coarse-grained molecular dynamics simulations. Numerical efficiencies have been reported in the cases of aggressive coarse-graining, where several lipids are coarse-grained into a particle of size 4∼6 nm so that there is only one particle in the thickness direction. Yuan et al. (2010) proposed a pair-potential between these one-particle-thick coarse-grained lipid particles to capture the mechanical properties of a lipid bilayer membrane, such as gel-fluid-gas phase transitions of lipids, diffusion, and bending rigidity. In this work we implement such an interaction potential in LAMMPS to simulate large-scale lipid systems such as a giant unilamellar vesicle (GUV) and red blood cells (RBCs). We also consider the effect of the cytoskeleton on the lipid membrane dynamics as a model for RBC dynamics, and incorporate coarse-grained water molecules to account for hydrodynamic interactions. The interaction between the coarse-grained water molecules (explicit solvent molecules) is modeled as a Lennard-Jones (L-J) potential. To demonstrate that the proposed methods do capture the observed dynamics of vesicles and RBCs, we focus on two sets of LAMMPS simulations: 1. vesicle shape transitions with varying enclosed volume; 2. RBC shape transitions with different enclosed volumes. Finally, utilizing the parallel computing capability in LAMMPS, we provide some timing results for parallel coarse-grained simulations to illustrate that it is possible to use LAMMPS to simulate large-scale realistic complex biological membranes for more than 1 ms.
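The explicit-solvent interaction mentioned above is the standard 12-6 Lennard-Jones form; the one-particle-thick membrane pair-potential of Yuan et al. is a modified, LJ-type function with an orientation-dependent part, which is not reproduced here. Only the plain 12-6 potential is sketched, in reduced units.

```python
def lj_potential(r, epsilon=1.0, sigma=1.0):
    """12-6 Lennard-Jones pair potential
    U(r) = 4*epsilon*((sigma/r)**12 - (sigma/r)**6),
    with minimum of depth -epsilon at r = 2**(1/6) * sigma."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

r_min = 2.0 ** (1.0 / 6.0)       # location of the potential minimum
u_min = lj_potential(r_min)      # = -epsilon in reduced units
```

In a LAMMPS input this corresponds to a `pair_style lj/cut` interaction with coefficients epsilon and sigma; the cutoff radius (not shown) truncates the slowly decaying attractive tail.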

  9. Simulation of streamflow, evapotranspiration, and groundwater recharge in the Lower Frio River watershed, south Texas, 1961-2008

    USGS Publications Warehouse

    Lizarraga, Joy S.; Ockerman, Darwin J.

    2011-01-01

The U.S. Geological Survey, in cooperation with the U.S. Army Corps of Engineers, Fort Worth District; the City of Corpus Christi; the Guadalupe-Blanco River Authority; the San Antonio River Authority; and the San Antonio Water System, configured, calibrated, and tested a watershed model for a study area consisting of about 5,490 mi2 of the Frio River watershed in south Texas. The purpose of the model is to contribute to the understanding of watershed processes and hydrologic conditions in the lower Frio River watershed. The model simulates streamflow, evapotranspiration (ET), and groundwater recharge by using a numerical representation of physical characteristics of the landscape, and meteorological and streamflow data. Additional time-series inputs to the model include wastewater-treatment-plant discharges, surface-water withdrawals, and estimated groundwater inflow from Leona Springs. Model simulations of streamflow, ET, and groundwater recharge were done for various periods of record depending upon available measured data for input and comparison, starting as early as 1961. Because of the large size of the study area, the lower Frio River watershed was divided into 12 subwatersheds; separate Hydrological Simulation Program-FORTRAN models were developed for each subwatershed. Simulation of the overall study area involved running simulations in downstream order. Output from the model was summarized by subwatershed, point locations, reservoir reaches, and the Carrizo-Wilcox aquifer outcrop. Four long-term U.S. Geological Survey streamflow-gaging stations and two short-term streamflow-gaging stations were used for streamflow model calibration and testing with data from 1991-2008. Calibration was based on data from 2000-08, and testing was based on data from 1991-99. Choke Canyon Reservoir stage data from 1992-2008 and monthly evaporation estimates from 1999-2008 also were used for model calibration. Additionally, 2006-08 ET data from a U.S. Geological Survey meteorological station in Medina County were used for calibration. Streamflow and ET calibration were considered good or very good. For the 2000-08 calibration period, total simulated flow volume and the flow volume of the highest 10 percent of simulated daily flows were calibrated to within about 10 percent of measured volumes at six U.S. Geological Survey streamflow-gaging stations. The flow volume of the lowest 50 percent of daily flows was not simulated as accurately but represented a small percent of the total flow volume. The model-fit efficiency for the weekly mean streamflow during the calibration periods ranged from 0.60 to 0.91, and the root mean square error ranged from 16 to 271 percent of the mean flow rate. The simulated total flow volumes during the testing periods at the long-term gaging stations exceeded the measured total flow volumes by approximately 22 to 50 percent at three stations and were within 7 percent of the measured total flow volumes at one station. For the longer 1961-2008 simulation period at the long-term stations, simulated total flow volumes were within about 3 to 18 percent of measured total flow volumes. The calibrations made by using Choke Canyon reservoir volume for 1992-2008, reservoir evaporation for 1999-2008, and ET in Medina County for 2006-08, are considered very good. Model limitations include possible errors related to model conceptualization and parameter variability, lack of data to better quantify certain model inputs, and measurement errors. Uncertainty regarding the degree to which available rainfall data represent actual rainfall is potentially the most serious source of measurement error. A sensitivity analysis was performed for the Upper San Miguel subwatershed model to show the effect of changes to model parameters on the estimated mean recharge, ET, and surface runoff from that part of the Carrizo-Wilcox aquifer outcrop. Simulated recharge was most sensitive to the changes in the lower-zone ET (LZ

  10. Metastable Prepores in Tension-Free Lipid Bilayers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ting, Christina L.; Awasthi, Neha; Muller, Marcus

    The formation and closure of aqueous pores in lipid bilayers is a key step in various biophysical processes. Large pores are well described by classical nucleation theory, but the free-energy landscape of small, biologically relevant pores has remained largely unexplored. The existence of small and metastable “prepores” was hypothesized decades ago from electroporation experiments, but resolving metastable prepores from theoretical models remained challenging. Using two complementary methods—atomistic simulations and self-consistent field theory of a minimal lipid model—we determine the parameters for which metastable prepores occur in lipid membranes. Here, both methods consistently suggest that pore metastability depends on the relative volume ratio between the lipid head group and lipid tails: lipids with a larger head-group volume fraction (or shorter saturated tails) form metastable prepores, whereas lipids with a smaller head-group volume fraction (or longer unsaturated tails) form unstable prepores.

  11. Metastable Prepores in Tension-Free Lipid Bilayers

    DOE PAGES

    Ting, Christina L.; Awasthi, Neha; Muller, Marcus; ...

    2018-03-23

    The formation and closure of aqueous pores in lipid bilayers is a key step in various biophysical processes. Large pores are well described by classical nucleation theory, but the free-energy landscape of small, biologically relevant pores has remained largely unexplored. The existence of small and metastable “prepores” was hypothesized decades ago from electroporation experiments, but resolving metastable prepores from theoretical models remained challenging. Using two complementary methods—atomistic simulations and self-consistent field theory of a minimal lipid model—we determine the parameters for which metastable prepores occur in lipid membranes. Here, both methods consistently suggest that pore metastability depends on the relative volume ratio between the lipid head group and lipid tails: lipids with a larger head-group volume fraction (or shorter saturated tails) form metastable prepores, whereas lipids with a smaller head-group volume fraction (or longer unsaturated tails) form unstable prepores.

  12. Using simulation to design a central sterilization department.

    PubMed

    Lin, Feng; Lawley, Mark; Spry, Charlie; McCarthy, Kelly; Coyle-Rogers, Patricia G; Yih, Yuehwern

    2008-10-01

    A simulation project was performed to assist with redesign of the surgery department of a large tertiary hospital and to help administrators make the best decisions about relocating, staffing, and equipping the central sterilization department. A simulation model was created to analyze department configurations, staff schedules, equipment capacities, and cart-washing requirements. Performance measures examined include tray turnaround time, surgery-delay rate, and work-in-process levels. The analysis provides significant insight into how the proposed system will perform, allowing planning for expected patient volume increases. This work illustrates how simulation can facilitate the design of a central sterilization department and improve surgical sterilization operations.

  13. Flowfield predictions for multiple body launch vehicles

    NASA Technical Reports Server (NTRS)

    Deese, Jerry E.; Pavish, D. L.; Johnson, Jerry G.; Agarwal, Ramesh K.; Soni, Bharat K.

    1992-01-01

    A method is developed for simulating inviscid and viscous flow around multicomponent launch vehicles. Grids are generated by the GENIE general-purpose grid-generation code, and the flow solver is a finite-volume Runge-Kutta time-stepping method. Turbulence effects are simulated using the Baldwin and Lomax (1978) turbulence model. Calculations are presented for three multibody launch vehicle configurations: one with two small-diameter solid motors, one with nine small-diameter solid motors, and one with three large-diameter solid motors.
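As a miniature illustration of the finite-volume Runge-Kutta time-stepping idea in this record (not the solver used there), the following sketch advances 1-D linear advection with first-order upwind fluxes and classical RK4; the grid size, CFL number, and Gaussian pulse are illustrative choices:

```python
import numpy as np

def rhs(u, a, dx):
    # Finite-volume residual: du/dt = -(F_{i+1/2} - F_{i-1/2}) / dx
    # with upwind flux F_{i+1/2} = a * u_i (valid for a > 0), periodic grid.
    return -a * (u - np.roll(u, 1)) / dx

def rk4_step(u, dt, a, dx):
    # Classical four-stage Runge-Kutta time step.
    k1 = rhs(u, a, dx)
    k2 = rhs(u + 0.5 * dt * k1, a, dx)
    k3 = rhs(u + 0.5 * dt * k2, a, dx)
    k4 = rhs(u + dt * k3, a, dx)
    return u + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

n, a = 200, 1.0
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = x[1] - x[0]
dt = 0.4 * dx / a                        # CFL-limited time step
u = np.exp(-200.0 * (x - 0.3) ** 2)      # initial pulse centered at x = 0.3
mass0 = u.sum() * dx                     # conserved by the flux form
for _ in range(100):                     # advance to t = 100 * dt = 0.2
    u = rk4_step(u, dt, a, dx)
```

Because the update is written in conservative flux form, the total "mass" is preserved to roundoff, and the pulse simply translates to x = 0.5 (with some first-order upwind smearing).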

  14. A scaling relationship for impact-induced melt volume

    NASA Astrophysics Data System (ADS)

    Nakajima, M.; Rubie, D. C.; Melosh, H., IV; Jacobson, S. A.; Golabek, G.; Nimmo, F.; Morbidelli, A.

    2016-12-01

    During the late stages of planetary accretion, protoplanets experience a number of giant impacts and extensive mantle melting. The impactor's core sinks through the molten part of the target mantle (magma ocean) and experiences metal-silicate partitioning (e.g., Stevenson, 1990). To understand the chemical evolution of the planetary mantle and core, we need to determine the impact-induced melt volume, because the partitioning strongly depends on the ranges of pressures and temperatures within the magma ocean. Previous studies have investigated the effects of small impacts (i.e., impact cratering) on melt volume, but those of giant impacts are not yet well understood. Here, we perform giant impact simulations to derive a scaling law for melt volume as a function of impact velocity, impact angle, and impactor-to-target mass ratio. We use two different numerical codes, namely a smoothed particle hydrodynamics (SPH) code we developed (a particle method) and the code iSALE (a grid-based method), to compare their outcomes. Our simulations show that these two codes generally agree as long as the same equation of state is used. We also find that some of the previous models developed for small impacts (e.g., Abramov et al., 2012) overestimate giant-impact melt volume by orders of magnitude, partly because these models do not consider the self-gravity of the impacting bodies. Therefore, these models may not be extrapolated to large impacts. Our simulations also show that melt volume can be scaled by the total mass of the system. In this presentation, we further discuss geochemical implications for giant impacts on planets, including Earth and Mars.

  15. Perturbative approach to covariance matrix of the matter power spectrum

    NASA Astrophysics Data System (ADS)

    Mohammed, Irshad; Seljak, Uroš; Vlah, Zvonimir

    2017-04-01

    We evaluate the covariance matrix of the matter power spectrum using perturbation theory up to dominant terms at 1-loop order and compare it to numerical simulations. We decompose the covariance matrix into the disconnected (Gaussian) part, trispectrum from the modes outside the survey (supersample variance) and trispectrum from the modes inside the survey, and show how the different components contribute to the overall covariance matrix. We find the agreement with the simulations is at a 10 per cent level up to k ˜ 1 h Mpc-1. We show that all the connected components are dominated by the large-scale modes (k < 0.1 h Mpc-1), regardless of the value of the wave vectors k, k′ of the covariance matrix, suggesting that one must be careful in applying the jackknife or bootstrap methods to the covariance matrix. We perform an eigenmode decomposition of the connected part of the covariance matrix, showing that at higher k, it is dominated by a single eigenmode. The full covariance matrix can be approximated as the disconnected part only, with the connected part being treated as an external nuisance parameter with a known scale dependence, and a known prior on its variance for a given survey volume. Finally, we provide a prescription for how to evaluate the covariance matrix from small box simulations without the need to simulate large volumes.
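The disconnected (Gaussian) part of such a covariance matrix can be illustrated with a toy ensemble. This sketch draws synthetic band-power estimates for a hypothetical power spectrum and checks the empirical covariance diagonal against the Gaussian expectation 2P(k)^2/N_modes; all numbers are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_real, n_k = 500, 8
# Assumed number of independent Fourier modes in each k-bin (grows with k).
n_modes = np.array([10, 20, 40, 80, 160, 320, 640, 1280])
p_true = 1.0 / np.arange(1, n_k + 1)          # hypothetical power spectrum

# For a Gaussian field, each band power is chi^2-distributed:
# P_hat = P * chi2(N) / N, so Var[P_hat] = 2 P^2 / N.
p_hat = p_true * rng.chisquare(n_modes, size=(n_real, n_k)) / n_modes

cov = np.cov(p_hat, rowvar=False)             # empirical covariance across realizations
gauss_diag = 2.0 * p_true**2 / n_modes        # disconnected (Gaussian) prediction
```

With 500 realizations the sample variance of each diagonal entry carries roughly a 6 per cent statistical error, so the empirical diagonal should match the Gaussian prediction well within a few tens of per cent.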

  16. A multiscale MDCT image-based breathing lung model with time-varying regional ventilation

    PubMed Central

    Yin, Youbing; Choi, Jiwoong; Hoffman, Eric A.; Tawhai, Merryn H.; Lin, Ching-Long

    2012-01-01

    A novel algorithm is presented that links local structural variables (regional ventilation and deforming central airways) to global function (total lung volume) in the lung over three imaged lung volumes, to derive a breathing lung model for computational fluid dynamics simulation. The algorithm constitutes the core of an integrative, image-based computational framework for subject-specific simulation of the breathing lung. For the first time, the algorithm is applied to three multi-detector row computed tomography (MDCT) volumetric lung images of the same individual. A key technique in linking global and local variables over multiple images is an in-house mass-preserving image registration method. Throughout breathing cycles, cubic interpolation is employed to ensure C1 continuity in constructing time-varying regional ventilation at the whole lung level, flow rate fractions exiting the terminal airways, and airway deformation. The imaged exit airway flow rate fractions are derived from regional ventilation with the aid of a three-dimensional (3D) and one-dimensional (1D) coupled airway tree that connects the airways to the alveolar tissue. An in-house parallel large-eddy simulation (LES) technique is adopted to capture turbulent-transitional-laminar flows in both normal and deep breathing conditions. The results obtained by the proposed algorithm when using three lung volume images are compared with those using only one or two volume images. The three-volume-based lung model produces physiologically-consistent time-varying pressure and ventilation distribution. The one-volume-based lung model under-predicts pressure drop and yields un-physiological lobar ventilation. The two-volume-based model can account for airway deformation and non-uniform regional ventilation to some extent, but does not capture the non-linear features of the lung. PMID:23794749
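The C1-continuity requirement mentioned in the record can be sketched with piecewise cubic Hermite interpolation of total lung volume through imaged states; the knot times, volumes, and finite-difference slopes below are hypothetical values, not the subject data:

```python
import numpy as np

def hermite_c1(t, tk, vk, mk):
    # Piecewise cubic Hermite interpolation: matches values vk and slopes mk
    # at the knots tk, so the result is C1-continuous by construction.
    t = np.asarray(t, dtype=float)
    i = np.clip(np.searchsorted(tk, t) - 1, 0, len(tk) - 2)
    h = tk[i + 1] - tk[i]
    s = (t - tk[i]) / h
    h00 = 2 * s**3 - 3 * s**2 + 1
    h10 = s**3 - 2 * s**2 + s
    h01 = -2 * s**3 + 3 * s**2
    h11 = s**3 - s**2
    return h00 * vk[i] + h10 * h * mk[i] + h01 * vk[i + 1] + h11 * h * mk[i + 1]

# Hypothetical total lung volumes (L) at three imaged states in one breathing cycle.
tk = np.array([0.0, 2.0, 4.0])   # time (s): end-expiration, end-inspiration, back
vk = np.array([2.5, 4.0, 2.5])   # total lung volume (L)
mk = np.gradient(vk, tk)         # finite-difference slopes at the knots

t = np.linspace(0.0, 4.0, 101)
v = hermite_c1(t, tk, vk, mk)
```

The interpolant reproduces the imaged volumes exactly at the knots while keeping the volume rate (and hence the derived flow-rate fractions) continuous across them.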

  17. Optimal simulations of ultrasonic fields produced by large thermal therapy arrays using the angular spectrum approach

    PubMed Central

    Zeng, Xiaozheng; McGough, Robert J.

    2009-01-01

    The angular spectrum approach is evaluated for the simulation of focused ultrasound fields produced by large thermal therapy arrays. For an input pressure or normal particle velocity distribution in a plane, the angular spectrum approach rapidly computes the output pressure field in a three dimensional volume. To determine the optimal combination of simulation parameters for angular spectrum calculations, the effect of the size, location, and the numerical accuracy of the input plane on the computed output pressure is evaluated. Simulation results demonstrate that angular spectrum calculations performed with an input pressure plane are more accurate than calculations with an input velocity plane. Results also indicate that when the input pressure plane is slightly larger than the array aperture and is located approximately one wavelength from the array, angular spectrum simulations have very small numerical errors for two dimensional planar arrays. Furthermore, the root mean squared error from angular spectrum simulations asymptotically approaches a nonzero lower limit as the error in the input plane decreases. Overall, the angular spectrum approach is an accurate and robust method for thermal therapy simulations of large ultrasound phased arrays when the input pressure plane is computed with the fast nearfield method and an optimal combination of input parameters. PMID:19425640
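The angular spectrum method itself reduces to an FFT of the input plane, multiplication by a spectral propagator, and an inverse FFT. Below is a minimal sketch (not the paper's implementation) for a homogeneous medium, with illustrative values for 1 MHz ultrasound in water (c of roughly 1500 m/s):

```python
import numpy as np

def angular_spectrum_propagate(p0, dx, dz, k):
    # Propagate a 2-D pressure plane p0 a distance dz in a homogeneous medium:
    # FFT -> multiply by exp(i * kz * dz) -> inverse FFT.
    ny, nx = p0.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dx)
    kxg, kyg = np.meshgrid(kx, ky)
    # Complex square root: imaginary kz gives decay of evanescent components.
    kz = np.sqrt((k**2 - kxg**2 - kyg**2).astype(complex))
    return np.fft.ifft2(np.fft.fft2(p0) * np.exp(1j * kz * dz))

wavelength = 1.5e-3                      # ~1 MHz in water
k = 2.0 * np.pi / wavelength
p0 = np.ones((64, 64), dtype=complex)    # uniform (plane-wave) input plane
p1 = angular_spectrum_propagate(p0, dx=wavelength / 4, dz=10 * wavelength, k=k)
```

For a uniform plane wave the method is exact: the output equals the input multiplied by the phase factor exp(i k dz), which makes a convenient sanity check on the grid and propagator.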

  18. Collection of Calibration and Validation Data for an Airport Landside Dynamic Simulation Model.

    DTIC Science & Technology

    1980-04-01

    movements. The volume of skiers passing through Denver is sufficiently large to warrant the installation of special check-in counters for passengers with...Terminal, only seven sectors were used. Training Procedures MIA was the first of the three airports surveyed. A substantial amount of knowledge and

  19. epiDMS: Data Management and Analytics for Decision-Making From Epidemic Spread Simulation Ensembles.

    PubMed

    Liu, Sicong; Poccia, Silvestro; Candan, K Selçuk; Chowell, Gerardo; Sapino, Maria Luisa

    2016-12-01

    Carefully calibrated large-scale computational models of epidemic spread represent a powerful tool to support the decision-making process during epidemic emergencies. Epidemic models are being increasingly used for generating forecasts of the spatial-temporal progression of epidemics at different spatial scales and for assessing the likely impact of different intervention strategies. However, the management and analysis of simulation ensembles stemming from large-scale computational models pose challenges, particularly when dealing with multiple interdependent parameters, spanning multiple layers and geospatial frames, affected by complex dynamic processes operating at different resolutions. We describe and illustrate with examples a novel epidemic simulation data management system, epiDMS, that was developed to address the challenges that arise from the need to generate, search, visualize, and analyze, in a scalable manner, large volumes of epidemic simulation ensembles and observations during the progression of an epidemic. epiDMS is a publicly available system that facilitates management and analysis of large epidemic simulation ensembles. epiDMS aims to fill an important hole in decision-making during healthcare emergencies by enabling critical services with significant economic and health impact.

  20. The MICE grand challenge lightcone simulation - I. Dark matter clustering

    NASA Astrophysics Data System (ADS)

    Fosalba, P.; Crocce, M.; Gaztañaga, E.; Castander, F. J.

    2015-04-01

    We present a new N-body simulation from the Marenostrum Institut de Ciències de l'Espai (MICE) collaboration, the MICE Grand Challenge (MICE-GC), containing about 70 billion dark matter particles in a (3 Gpc h-1)3 comoving volume. Given its large volume and fine spatial resolution, spanning over five orders of magnitude in dynamic range, it allows an accurate modelling of the growth of structure in the universe from the linear through the highly non-linear regime of gravitational clustering. We validate the dark matter simulation outputs using 3D and 2D clustering statistics, and discuss mass-resolution effects in the non-linear regime by comparing to previous simulations and the latest numerical fits. We show that the MICE-GC run allows for a measurement of the BAO feature with per cent level accuracy and compare it to state-of-the-art theoretical models. We also use sub-arcmin resolution pixelized 2D maps of the dark matter counts in the lightcone to make tomographic analyses in real and redshift space. Our analysis shows the simulation reproduces the Kaiser effect on large scales, whereas we find a significant suppression of power on non-linear scales relative to the real space clustering. We complete our validation by presenting an analysis of the three-point correlation function in this and previous MICE simulations, finding further evidence for mass-resolution effects. This is the first of a series of three papers in which we present the MICE-GC simulation, along with a wide and deep mock galaxy catalogue built from it. This mock is made publicly available through a dedicated web portal, http://cosmohub.pic.es.

  1. Numerical study of wind over breaking waves and generation of spume droplets

    NASA Astrophysics Data System (ADS)

    Yang, Zixuan; Tang, Shuai; Dong, Yu-Hong; Shen, Lian

    2017-11-01

    We present direct numerical simulation (DNS) results on wind over breaking waves. The air and water are simulated as a coherent system. The air-water interface is captured using a coupled level-set and volume-of-fluid method. The initial condition for the simulation is fully-developed wind turbulence over strongly-forced steep waves. Because wave breaking is an unsteady process, we use ensemble averaging of a large number of runs to obtain turbulence statistics. The generation and transport of spume droplets during wave breaking is also simulated. The trajectories of sea spray droplets are tracked using a Lagrangian particle tracking method. The generation of droplets is captured using a kinematic criterion based on the relative velocity of fluid particles of water with respect to the wave phase speed. From the simulation, we observe that the wave plunging generates a large vortex in air, which makes an important contribution to the suspension of sea spray droplets.

  2. The cost of conservative synchronization in parallel discrete event simulations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1990-01-01

    The performance of a synchronous conservative parallel discrete-event simulation protocol is analyzed. The class of simulation models considered is oriented around a physical domain and possesses a limited ability to predict future behavior. A stochastic model is used to show that as the volume of simulation activity in the model increases relative to a fixed architecture, the complexity of the average per-event overhead due to synchronization, event list manipulation, lookahead calculations, and processor idle time approaches the complexity of the average per-event overhead of a serial simulation. The method is therefore within a constant factor of optimal. The analysis demonstrates that on large problems (those for which parallel processing is ideally suited) there is often enough parallel workload that processors are not usually idle. The viability of the method is also demonstrated empirically, showing how good performance is achieved on large problems using a thirty-two node Intel iPSC/2 distributed-memory multiprocessor.
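The synchronous conservative windowing idea can be sketched as follows. In this toy version (not the paper's protocol), every logical process (LP) may safely execute all events with timestamps below the global minimum next-event time plus the model's lookahead, since no message can arrive earlier than that; here each processed event simply schedules a successor at least one lookahead later:

```python
import heapq

def run_synchronous(lps, lookahead, t_end):
    # lps: one event heap per logical process, entries (timestamp, name).
    t, processed = 0.0, 0
    while t < t_end:
        pending = [lp[0][0] for lp in lps if lp]
        if not pending:
            break
        horizon = min(pending) + lookahead        # synchronization barrier
        for lp in lps:
            while lp and lp[0][0] < horizon:
                t_ev, name = heapq.heappop(lp)
                processed += 1
                # Handling an event schedules a successor no sooner than
                # `lookahead` later -- the model's limited prediction ability.
                heapq.heappush(lp, (t_ev + lookahead, name))
        t = horizon
    return t, processed

lps = [[(0.0, "lp0")], [(0.5, "lp1")]]
final_t, n_events = run_synchronous(lps, lookahead=1.0, t_end=10.0)
```

With two LPs and unit lookahead, each round advances the barrier by exactly the lookahead and both LPs process one event per round, so ten rounds complete by t_end = 10.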

  3. Evaluating lossy data compression on climate simulation data within a large ensemble

    DOE PAGES

    Baker, Allison H.; Hammerling, Dorit M.; Mickelson, Sheri A.; ...

    2016-12-07

    High-resolution Earth system model simulations generate enormous data volumes, and retaining the data from these simulations often strains institutional storage resources. Further, these exceedingly large storage requirements negatively impact science objectives, for example, by forcing reductions in data output frequency, simulation length, or ensemble size. To lessen data volumes from the Community Earth System Model (CESM), we advocate the use of lossy data compression techniques. While lossy data compression does not exactly preserve the original data (as lossless compression does), lossy techniques have an advantage in terms of smaller storage requirements. To preserve the integrity of the scientific simulation data, the effects of lossy data compression on the original data should, at a minimum, not be statistically distinguishable from the natural variability of the climate system, and previous preliminary work with data from CESM has shown this goal to be attainable. However, to ultimately convince climate scientists that it is acceptable to use lossy data compression, we provide climate scientists with access to publicly available climate data that have undergone lossy data compression. In particular, we report on the results of a lossy data compression experiment with output from the CESM Large Ensemble (CESM-LE) Community Project, in which we challenge climate scientists to examine features of the data relevant to their interests, and attempt to identify which of the ensemble members have been compressed and reconstructed. We find that while detecting distinguishing features is certainly possible, the compression effects noticeable in these features are often unimportant or disappear in post-processing analyses. In addition, we perform several analyses that directly compare the original data to the reconstructed data to investigate the preservation, or lack thereof, of specific features critical to climate science. 
Overall, we conclude that applying lossy data compression to climate simulation data is both advantageous in terms of data reduction and generally acceptable in terms of effects on scientific results.
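A toy version of the idea (not the compressor evaluated in the paper) is to discard low-order float32 significand bits, in the spirit of bit-grooming-style lossy compressors, and then check that the reconstruction error is small compared with the ensemble's natural variability; the "surface temperature" ensemble here is entirely synthetic:

```python
import numpy as np

def truncate_significand(x, keep_bits):
    # Zero the low-order bits of the float32 significand (23 bits total),
    # keeping sign, exponent, and the top `keep_bits` significand bits.
    shift = 23 - keep_bits
    mask = np.uint32((0xFFFFFFFF << shift) & 0xFFFFFFFF)
    return (x.astype(np.float32).view(np.uint32) & mask).view(np.float32)

rng = np.random.default_rng(1)
# Synthetic ensemble: 40 members x 1000 grid points around 288 K.
ensemble = 288.0 + rng.normal(0.0, 0.5, size=(40, 1000))
member = ensemble[0].astype(np.float32)
recon = truncate_significand(member, keep_bits=10)

max_error = float(np.abs(recon - member).max())
natural_spread = float(ensemble.std(axis=0).mean())
```

For values near 288 K, keeping 10 significand bits bounds the truncation error by 2^(8-10) = 0.25 K, which stays below the 0.5 K member-to-member spread; that is the (much simplified) flavor of the "not statistically distinguishable from natural variability" criterion.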

  4. Evaluating lossy data compression on climate simulation data within a large ensemble

    NASA Astrophysics Data System (ADS)

    Baker, Allison H.; Hammerling, Dorit M.; Mickelson, Sheri A.; Xu, Haiying; Stolpe, Martin B.; Naveau, Phillipe; Sanderson, Ben; Ebert-Uphoff, Imme; Samarasinghe, Savini; De Simone, Francesco; Carbone, Francesco; Gencarelli, Christian N.; Dennis, John M.; Kay, Jennifer E.; Lindstrom, Peter

    2016-12-01

    High-resolution Earth system model simulations generate enormous data volumes, and retaining the data from these simulations often strains institutional storage resources. Further, these exceedingly large storage requirements negatively impact science objectives, for example, by forcing reductions in data output frequency, simulation length, or ensemble size. To lessen data volumes from the Community Earth System Model (CESM), we advocate the use of lossy data compression techniques. While lossy data compression does not exactly preserve the original data (as lossless compression does), lossy techniques have an advantage in terms of smaller storage requirements. To preserve the integrity of the scientific simulation data, the effects of lossy data compression on the original data should, at a minimum, not be statistically distinguishable from the natural variability of the climate system, and previous preliminary work with data from CESM has shown this goal to be attainable. However, to ultimately convince climate scientists that it is acceptable to use lossy data compression, we provide climate scientists with access to publicly available climate data that have undergone lossy data compression. In particular, we report on the results of a lossy data compression experiment with output from the CESM Large Ensemble (CESM-LE) Community Project, in which we challenge climate scientists to examine features of the data relevant to their interests, and attempt to identify which of the ensemble members have been compressed and reconstructed. We find that while detecting distinguishing features is certainly possible, the compression effects noticeable in these features are often unimportant or disappear in post-processing analyses. In addition, we perform several analyses that directly compare the original data to the reconstructed data to investigate the preservation, or lack thereof, of specific features critical to climate science. 
Overall, we conclude that applying lossy data compression to climate simulation data is both advantageous in terms of data reduction and generally acceptable in terms of effects on scientific results.

  5. Evaluating lossy data compression on climate simulation data within a large ensemble

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Allison H.; Hammerling, Dorit M.; Mickelson, Sheri A.

    High-resolution Earth system model simulations generate enormous data volumes, and retaining the data from these simulations often strains institutional storage resources. Further, these exceedingly large storage requirements negatively impact science objectives, for example, by forcing reductions in data output frequency, simulation length, or ensemble size. To lessen data volumes from the Community Earth System Model (CESM), we advocate the use of lossy data compression techniques. While lossy data compression does not exactly preserve the original data (as lossless compression does), lossy techniques have an advantage in terms of smaller storage requirements. To preserve the integrity of the scientific simulation data, the effects of lossy data compression on the original data should, at a minimum, not be statistically distinguishable from the natural variability of the climate system, and previous preliminary work with data from CESM has shown this goal to be attainable. However, to ultimately convince climate scientists that it is acceptable to use lossy data compression, we provide climate scientists with access to publicly available climate data that have undergone lossy data compression. In particular, we report on the results of a lossy data compression experiment with output from the CESM Large Ensemble (CESM-LE) Community Project, in which we challenge climate scientists to examine features of the data relevant to their interests, and attempt to identify which of the ensemble members have been compressed and reconstructed. We find that while detecting distinguishing features is certainly possible, the compression effects noticeable in these features are often unimportant or disappear in post-processing analyses. In addition, we perform several analyses that directly compare the original data to the reconstructed data to investigate the preservation, or lack thereof, of specific features critical to climate science. 
Overall, we conclude that applying lossy data compression to climate simulation data is both advantageous in terms of data reduction and generally acceptable in terms of effects on scientific results.

  6. Numerical simulations of loop quantum Bianchi-I spacetimes

    NASA Astrophysics Data System (ADS)

    Diener, Peter; Joe, Anton; Megevand, Miguel; Singh, Parampreet

    2017-05-01

    Due to the numerical complexities of studying evolution in an anisotropic quantum spacetime, in comparison to the isotropic models, the physics of loop quantized anisotropic models has remained largely unexplored. In particular, robustness of the bounce and the validity of effective dynamics have so far not been established. Our analysis fills these gaps for the case of vacuum Bianchi-I spacetime. To solve the quantum Hamiltonian constraint efficiently, we implement it within the Cactus framework, which is conventionally used for applications in numerical relativity. Using high performance computing, numerical simulations for a large number of initial states with a wide variety of fluctuations are performed. The big bang singularity is found to be replaced by anisotropic bounces for all the cases. We find that for initial states which are sharply peaked at late times in the classical regime and bounce at a mean volume much greater than the Planck volume, effective dynamics is an excellent approximation to the underlying quantum dynamics. Departures of the effective dynamics from the quantum evolution appear for the states probing deep Planck volumes. A detailed analysis of the behavior of this departure reveals a non-monotonic and subtle dependence on fluctuations of the initial states. We find that effective dynamics in almost all of the cases underestimates the volume and hence overestimates the curvature at the bounce, a result in synergy with earlier findings in the isotropic case. The expansion and shear scalars are found to be bounded throughout the evolution.

  7. Sensitivity analysis of some critical factors affecting simulated intrusion volumes during a low pressure transient event in a full-scale water distribution system.

    PubMed

    Ebacher, G; Besner, M C; Clément, B; Prévost, M

    2012-09-01

    Intrusion events caused by transient low pressures may result in the contamination of a water distribution system (DS). This work aims at estimating the range of potential intrusion volumes that could result from a real downsurge event caused by a momentary pump shutdown. A model calibrated with transient low pressure recordings was used to simulate total intrusion volumes through leakage orifices and submerged air vacuum valves (AVVs). Four critical factors influencing intrusion volumes were varied: the external head of (untreated) water on leakage orifices, the external head of (untreated) water on submerged air vacuum valves, the leakage rate, and the diameter of AVVs' outlet orifice (represented by a multiplicative factor). Leakage orifices' head and AVVs' orifice head levels were assessed through fieldwork. Two sets of runs were generated as part of two statistically designed experiments. A first set of 81 runs was based on a complete factorial design in which each factor was varied over 3 levels. A second set of 40 runs was based on a latin hypercube design, better suited for experimental runs on a computer model. The simulations were conducted using commercially available transient analysis software. Responses, measured by total intrusion volumes, ranged from 10 to 366 L. A second degree polynomial was used to analyze the total intrusion volumes. Sensitivity analyses of both designs revealed that the relationship between the total intrusion volume and the four contributing factors is not monotonic, with the AVVs' orifice head being the most influential factor. When intrusion through both pathways occurs concurrently, interactions between the intrusion flows through leakage orifices and submerged AVVs influence intrusion volumes. When only intrusion through leakage orifices is considered, the total intrusion volume is more largely influenced by the leakage rate than by the leakage orifices' head. 
The latter mainly impacts the extent of the area affected by intrusion.

  8. Finite deformation of incompressible fiber-reinforced elastomers: A computational micromechanics approach

    NASA Astrophysics Data System (ADS)

    Moraleda, Joaquín; Segurado, Javier; LLorca, Javier

    2009-09-01

    The in-plane finite deformation of incompressible fiber-reinforced elastomers was studied using computational micromechanics. The composite microstructure consisted of a random and homogeneous dispersion of aligned rigid fibers within a hyperelastic matrix. Different matrices (Neo-Hookean and Gent), fibers (monodisperse or polydisperse, circular or elliptical section) and reinforcement volume fractions (10-40%) were analyzed through the finite element simulation of a representative volume element of the microstructure. A successive remeshing strategy was employed when necessary to reach the large-deformation regime, in which the evolution of the microstructure influences the effective properties. The simulations provided for the first time "quasi-exact" results of the in-plane finite deformation for this class of composites, which were used to assess the accuracy of the available homogenization estimates for incompressible hyperelastic composites.
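For the incompressible Neo-Hookean matrix mentioned in this record, the uniaxial Cauchy stress has a well-known closed form that micromechanics simulations are often sanity-checked against; a short sketch with an illustrative (normalized) shear modulus:

```python
import numpy as np

def neo_hookean_uniaxial_stress(stretch, mu):
    # Incompressible Neo-Hookean solid in uniaxial tension:
    # from sigma = mu*B - p*I, incompressibility (J = 1), and traction-free
    # lateral faces, sigma_axial = mu * (lambda^2 - 1/lambda).
    lam = np.asarray(stretch, dtype=float)
    return mu * (lam**2 - 1.0 / lam)

lam = np.linspace(1.0, 2.0, 101)
sigma = neo_hookean_uniaxial_stress(lam, mu=1.0)   # shear modulus normalized to 1
```

Two standard checks: the stress vanishes at the undeformed state, and the small-strain slope equals 3*mu, the Young's modulus of an incompressible solid.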

  9. Transition to Quantum Turbulence and the Propagation of Vortex Loops at Finite Temperatures

    NASA Astrophysics Data System (ADS)

    Yamamoto, Shinji; Adachi, Hiroyuki; Tsubota, Makoto

    2011-02-01

    We performed numerical simulations of the transition to quantum turbulence and the propagation of vortex loops at finite temperatures in order to understand the experiments using vibrating wires in superfluid 4He by Yano et al. We injected vortex rings into a finite volume in order to simulate the emission of vortices from the wire. When the injected vortices are dilute, they should decay by mutual friction. When they are dense, however, vortex tangles are generated through vortex reconnections and emit large vortex loops. These large vortex loops can travel a long distance before disappearing, in marked contrast to the dilute case. The numerical results are consistent with the experimental results.

  10. Large-eddy simulation of propeller noise

    NASA Astrophysics Data System (ADS)

    Keller, Jacob; Mahesh, Krishnan

    2016-11-01

    We will discuss our ongoing work towards developing the capability to predict far field sound from the large-eddy simulation of propellers. A porous surface Ffowcs-Williams and Hawkings (FW-H) acoustic analogy, with a dynamic endcapping method (Nitzkorski and Mahesh, 2014) is developed for unstructured grids in a rotating frame of reference. The FW-H surface is generated automatically using Delaunay triangulation and is representative of the underlying volume mesh. The approach is validated for tonal trailing edge sound from a NACA 0012 airfoil. LES of flow around a propeller at design advance ratio is compared to experiment and good agreement is obtained. Results for the emitted far field sound will be discussed. This work is supported by ONR.

  11. How far does the CO2 travel beyond a leaky point?

    NASA Astrophysics Data System (ADS)

    Kong, X.; Delshad, M.; Wheeler, M.

    2012-12-01

    Numerous research studies have been carried out to investigate the long-term feasibility of safe storage of large volumes of CO2 in subsurface saline aquifers. The injected CO2 will undergo complex petrophysical and geochemical processes. During these processes, part of the CO2 will be trapped while some will remain as a mobile phase, posing a leakage risk. Comprehensive and accurate characterization of the trapping and leakage mechanisms is critical for assessing the safety of sequestration, and remains a challenge in this research area. We have studied different leakage scenarios using realistic aquifer properties, including heterogeneity, and put forward a comprehensive trapping model for CO2 in deep saline aquifers. The reservoir models include several geological layers and caprocks up to the near surface. Leakage scenarios such as fractures, high-permeability pathways, and abandoned wells are studied. In order to model the fractures accurately, very fine grids are needed near the fracture. Because the aquifer usually has a large volume and the reservoir model needs a large number of grid blocks, the simulations would be computationally expensive. To address this challenge, we carried out the simulations using our in-house parallel reservoir simulator. Our study shows the significance of capillary pressure and permeability-porosity variations on CO2 trapping and leakage. The improved understanding of trapping and leakage will provide confidence in the future implementation of sequestration projects.

  12. Enhanced catalytic activity through the tuning of micropore environment and supercritical CO2 processing: Al(porphyrin)-based porous organic polymers for the degradation of a nerve agent simulant.

    PubMed

    Totten, Ryan K; Kim, Ye-Seong; Weston, Mitchell H; Farha, Omar K; Hupp, Joseph T; Nguyen, SonBinh T

    2013-08-14

    An Al(porphyrin) functionalized with a large axial ligand was incorporated into a porous organic polymer (POP) using a cobalt-catalyzed acetylene trimerization strategy. Removal of the axial ligand afforded a microporous POP that is catalytically active in the methanolysis of a nerve agent simulant. Supercritical CO2 processing of the POP dramatically increased the pore size and volume, allowing for significantly higher catalytic activities.

  13. I = 1 and I = 2 π-π scattering phase shifts from Nf = 2 + 1 lattice QCD

    NASA Astrophysics Data System (ADS)

    Bulava, John; Fahy, Brendan; Hörz, Ben; Juge, Keisuke J.; Morningstar, Colin; Wong, Chik Him

    2016-09-01

    The I = 1 p-wave and I = 2 s-wave elastic π-π scattering amplitudes are calculated from a first-principles lattice QCD simulation using a single ensemble of gauge field configurations with Nf = 2 + 1 dynamical flavors of anisotropic clover-improved Wilson fermions. This ensemble has a large spatial volume V =(3.7 fm)3, pion mass mπ = 230 MeV, and spatial lattice spacing as = 0.11 fm. Calculation of the necessary temporal correlation matrices is efficiently performed using the stochastic LapH method, while the large volume enables an improved energy resolution compared to previous work. For this single ensemble we obtain mρ /mπ = 3.350 (24), gρππ = 5.99 (26), and a clear signal for the I = 2 s-wave. The success of the stochastic LapH method in this proof-of-principle large-volume calculation paves the way for quantitative study of the lattice spacing effects and quark mass dependence of scattering amplitudes using state-of-the-art ensembles.

  14. Generation of a large volume of clinically relevant nanometre-sized ultra-high-molecular-weight polyethylene wear particles for cell culture studies

    PubMed Central

    Ingham, Eileen; Fisher, John; Tipper, Joanne L

    2014-01-01

    It has recently been shown that the wear of ultra-high-molecular-weight polyethylene in hip and knee prostheses leads to the generation of nanometre-sized particles, in addition to micron-sized particles. The biological activity of nanometre-sized ultra-high-molecular-weight polyethylene wear particles has not, however, previously been studied due to difficulties in generating sufficient volumes of nanometre-sized ultra-high-molecular-weight polyethylene wear particles suitable for cell culture studies. In this study, wear simulation methods were investigated to generate a large volume of endotoxin-free clinically relevant nanometre-sized ultra-high-molecular-weight polyethylene wear particles. Both single-station and six-station multidirectional pin-on-plate wear simulators were used to generate ultra-high-molecular-weight polyethylene wear particles under sterile and non-sterile conditions. Microbial contamination and endotoxin levels in the lubricants were determined. The results indicated that microbial contamination was absent and endotoxin levels were low and within acceptable limits for the pharmaceutical industry, when a six-station pin-on-plate wear simulator was used to generate ultra-high-molecular-weight polyethylene wear particles in a non-sterile environment. Different pore-sized polycarbonate filters were investigated to isolate nanometre-sized ultra-high-molecular-weight polyethylene wear particles from the wear test lubricants. The use of the filter sequence of 10, 1, 0.1, 0.1 and 0.015 µm pore sizes allowed successful isolation of ultra-high-molecular-weight polyethylene wear particles with a size range of < 100 nm, which was suitable for cell culture studies. PMID:24658586

  15. Generation of a large volume of clinically relevant nanometre-sized ultra-high-molecular-weight polyethylene wear particles for cell culture studies.

    PubMed

    Liu, Aiqin; Ingham, Eileen; Fisher, John; Tipper, Joanne L

    2014-04-01

    It has recently been shown that the wear of ultra-high-molecular-weight polyethylene in hip and knee prostheses leads to the generation of nanometre-sized particles, in addition to micron-sized particles. The biological activity of nanometre-sized ultra-high-molecular-weight polyethylene wear particles has not, however, previously been studied due to difficulties in generating sufficient volumes of nanometre-sized ultra-high-molecular-weight polyethylene wear particles suitable for cell culture studies. In this study, wear simulation methods were investigated to generate a large volume of endotoxin-free clinically relevant nanometre-sized ultra-high-molecular-weight polyethylene wear particles. Both single-station and six-station multidirectional pin-on-plate wear simulators were used to generate ultra-high-molecular-weight polyethylene wear particles under sterile and non-sterile conditions. Microbial contamination and endotoxin levels in the lubricants were determined. The results indicated that microbial contamination was absent and endotoxin levels were low and within acceptable limits for the pharmaceutical industry, when a six-station pin-on-plate wear simulator was used to generate ultra-high-molecular-weight polyethylene wear particles in a non-sterile environment. Different pore-sized polycarbonate filters were investigated to isolate nanometre-sized ultra-high-molecular-weight polyethylene wear particles from the wear test lubricants. The use of the filter sequence of 10, 1, 0.1, 0.1 and 0.015 µm pore sizes allowed successful isolation of ultra-high-molecular-weight polyethylene wear particles with a size range of < 100 nm, which was suitable for cell culture studies.

  16. Evaluation of Cloud-resolving and Limited Area Model Intercomparison Simulations using TWP-ICE Observations. Part 1: Deep Convective Updraft Properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Varble, A. C.; Zipser, Edward J.; Fridlind, Ann

    2014-12-27

    Ten 3D cloud-resolving model (CRM) simulations and four 3D limited area model (LAM) simulations of an intense mesoscale convective system observed on January 23-24, 2006 during the Tropical Warm Pool – International Cloud Experiment (TWP-ICE) are compared with each other and with observed radar reflectivity fields and dual-Doppler retrievals of vertical wind speeds in an attempt to explain published results showing a high bias in simulated convective radar reflectivity aloft. This high bias results from large ice water content, which is a product of large, strong convective updrafts, although hydrometeor size distribution assumptions modulate the size of this bias. Snow reflectivity can exceed 40 dBZ in a two-moment scheme when a constant bulk density of 100 kg m-3 is used. Making snow mass more realistically proportional to area rather than volume should somewhat alleviate this problem. Graupel, unlike snow, produces high-biased reflectivity in all simulations. This is associated with large amounts of liquid water above the freezing level in updraft cores. Peak vertical velocities in deep convective updrafts are greater than dual-Doppler-retrieved values, especially in the upper troposphere. Freezing of large rainwater contents lofted above the freezing level in simulated updraft cores greatly contributes to these excessive upper-tropospheric vertical velocities. Strong simulated updraft cores are nearly undiluted, with some showing supercell characteristics. Decreasing horizontal grid spacing from 900 meters to 100 meters weakens strong updrafts, but not enough to match observational retrievals. Therefore, overly intense simulated updrafts may partly be a product of interactions between convective dynamics, parameterized microphysics, and large-scale environmental biases that promote different convective modes and strengths than observed.

  17. Visualization for Molecular Dynamics Simulation of Gas and Metal Surface Interaction

    NASA Astrophysics Data System (ADS)

    Puzyrkov, D.; Polyakov, S.; Podryga, V.

    2016-02-01

    The development of methods, algorithms and applications for the visualization of molecular dynamics simulation outputs is discussed. The visual analysis of the results of such calculations is a complex and pressing problem, especially in the case of large-scale simulations. To solve this challenging task it is necessary to decide: 1) what data parameters to render, 2) what type of visualization to choose, 3) what development tools to use. The present work attempts to answer these questions. For visualization we propose drawing particles at their corresponding 3D coordinates, together with their velocity vectors, trajectories and volume density in the form of isosurfaces or fog. We tested a post-processing and visualization approach based on the Python language with additional libraries. Parallel software was also developed that can process large volumes of data in the 3D regions of the examined system. This software produces results in parallel with the calculations and, at the end, assembles the rendered frames into a video file. The software package "Enthought Mayavi2" was used as the visualization tool. This application allowed us to study the interaction of a gas with a metal surface and to closely observe the adsorption effect.

  18. Galaxy two-point covariance matrix estimation for next generation surveys

    NASA Astrophysics Data System (ADS)

    Howlett, Cullan; Percival, Will J.

    2017-12-01

    We perform a detailed analysis of the covariance matrix of the spherically averaged galaxy power spectrum and present a new, practical method for estimating this within an arbitrary survey without the need for running mock galaxy simulations that cover the full survey volume. The method uses theoretical arguments to modify the covariance matrix measured from a set of small-volume cubic galaxy simulations, which are computationally cheap to produce compared to larger simulations and match the measured small-scale galaxy clustering more accurately than is possible using theoretical modelling. We include prescriptions to analytically account for the window function of the survey, which convolves the measured covariance matrix in a non-trivial way. We also present a new method to include the effects of super-sample covariance and modes outside the small simulation volume which requires no additional simulations and still allows us to scale the covariance matrix. As validation, we compare the covariance matrix estimated using our new method to that from a brute-force calculation using 500 simulations originally created for analysis of the Sloan Digital Sky Survey Main Galaxy Sample. We find excellent agreement on all scales of interest for large-scale structure analysis, including those dominated by the effects of the survey window, and on scales where theoretical models of the clustering normally break down, but the new method produces a covariance matrix with significantly better signal-to-noise ratio. Although only formally correct in real space, we also discuss how our method can be extended to incorporate the effects of redshift space distortions.
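
The brute-force step that the paper's method is validated against — estimating the power spectrum covariance from an ensemble of mock measurements, then rescaling it by volume — can be sketched as follows. The mock spectra, bin count, and volume ratio below are illustrative placeholders, not values from the paper; the rescaling uses only the leading Gaussian 1/V scaling of the covariance.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ensemble: 500 mock measurements of P(k) in 10 k-bins,
# standing in for the small-volume cube simulations in the text.
n_mocks, n_bins = 500, 10
true_pk = 1.0e4 / (1.0 + np.arange(n_bins))   # smooth fiducial spectrum
mocks = true_pk * (1.0 + 0.05 * rng.standard_normal((n_mocks, n_bins)))

# Unbiased sample covariance over the ensemble (the brute-force estimate)
mean_pk = mocks.mean(axis=0)
diff = mocks - mean_pk
cov = diff.T @ diff / (n_mocks - 1)

# Rescaling from a small simulation volume to a larger survey volume via
# the leading Gaussian scaling cov ~ 1/V (illustrative volume ratio).
V_small, V_survey = 1.0, 8.0
cov_survey = cov * (V_small / V_survey)
```

The paper's contribution is precisely to avoid needing full-survey-volume mocks for this estimate, by modifying the small-volume covariance analytically for the window function and super-sample modes.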

  19. Molecular simulation of dispersion and mechanical stability of organically modified layered silicates in polymer matrices

    NASA Astrophysics Data System (ADS)

    Fu, Yao-Tsung

    The experimental analysis of nanometer-scale separation processes and mechanical properties at buried interfaces in nanocomposites has remained difficult. We have employed molecular dynamics simulation in relation to available experimental data to alleviate such limitations and gain insight into the dispersion and mechanical stability of organically modified layered silicates in hydrophobic polymer matrices. We analyzed cleavage energies of various organically modified silicates as a function of the cation exchange capacity (CEC), surfactant head group chemistry, and chain length using MD simulations with the PCFF-PHYLLOSILICATE force field. The cleavage energy ranges from 25 to 210 mJ/m2, depending on the molecular structure and packing of the surfactants. As a function of chain length, the cleavage energy exhibits local minima for interlayer structures comprised of loosely packed layers of alkyl chains and local maxima for interlayer structures comprised of densely packed layers of alkyl chains between the layers. In addition, the distribution of cationic head groups between the layers in the equilibrium state determines whether large increases in cleavage energy arise from Coulomb attraction. We have also examined mechanical bending and failure mechanisms of layered silicates on the nanometer scale using molecular dynamics simulation in comparison to a library of TEM data of polymer nanocomposites. We investigated the energy of single clay lamellae as a function of bending radius and cation density. The layer energy increases particularly for bending radii below 20 nm and is largely independent of cation exchange capacity. The analysis of TEM images of agglomerated and exfoliated aluminosilicates of different CEC in polymer matrices at small volume fractions showed bending radii in excess of 100 nm due to free volumes in the polymer matrix. At a volume fraction >5%, however, bent clay layers were found with bending radii <20 nm and kinks as a failure mechanism, in good agreement with simulation results. We have examined the thermal conductivity of organically modified layered silicates using molecular dynamics simulation in comparison to experimental results from laser measurements. The thermal conductivity increased slightly from 0.08 to 0.14 Wm-1K-1 with increasing chain length, related to the gallery spacing and interlayer density of the organic material.

  20. Molecular dynamics simulation of diffusion of gases in a carbon-nanotube-polymer composite

    NASA Astrophysics Data System (ADS)

    Lim, Seong Y.; Sahimi, Muhammad; Tsotsis, Theodore T.; Kim, Nayong

    2007-07-01

    Extensive molecular dynamics (MD) simulations were carried out to compute the solubilities and self-diffusivities of CO2 and CH4 in amorphous polyetherimide (PEI) and mixed-matrix PEI generated by inserting single-walled carbon nanotubes into the polymer. Atomistic models of PEI and its composites were generated using energy minimizations, MD simulations, and the polymer-consistent force field. Two types of polymer composite were generated by inserting (7,0) and (12,0) zigzag carbon nanotubes into the PEI structure. The morphologies of PEI and its composites were characterized by their densities, radial distribution functions, and the accessible free volumes, which were computed with probe molecules of different sizes. The distributions of the cavity volumes were computed using the Voronoi tessellation method. The computed self-diffusivities of the gases in the polymer composites are much larger than those in pure PEI. We find, however, that the increase is not due to diffusion of the gases through the nanotubes which have smooth energy surfaces and, therefore, provide fast transport paths. Instead, the MD simulations indicate a squeezing effect of the nanotubes on the polymer matrix that changes the composite polymers’ free-volume distributions and makes them more sharply peaked. The presence of nanotubes also creates several cavities with large volumes that give rise to larger diffusivities in the polymer composites. This effect is due to the repulsive interactions between the polymer and the nanotubes. The solubilities of the gases in the polymer composites are also larger than those in pure PEI, hence indicating larger gas permeabilities for mixed-matrix PEI than PEI itself.
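
Self-diffusivities like those reported above are typically extracted from MD trajectories via the Einstein relation, MSD(t) = 6Dt in three dimensions. A minimal sketch of that extraction on synthetic random-walk trajectories (with a known input D standing in for real MD output, so the recovered value can be checked):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic 3D random-walk trajectories standing in for gas molecules:
# per-step variance 2*D*dt in each dimension gives a known true D.
n_mol, n_steps, dt = 200, 500, 1.0
D_true = 0.5
steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), size=(n_mol, n_steps, 3))
traj = np.cumsum(steps, axis=1)          # positions relative to the origin

# Mean-squared displacement averaged over molecules, then the Einstein
# relation: MSD(t) = 6 D t, so D is the fitted slope divided by 6.
t = dt * np.arange(1, n_steps + 1)
msd = (traj ** 2).sum(axis=2).mean(axis=0)
slope = np.polyfit(t, msd, 1)[0]
D_est = slope / 6.0
```

In a real analysis the fit would be restricted to the diffusive (linear) regime of the MSD curve; here the synthetic walk is diffusive at all times by construction.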

  1. Technical design and commissioning of the KATRIN large-volume air coil system

    NASA Astrophysics Data System (ADS)

    Erhard, M.; Behrens, J.; Bauer, S.; Beglarian, A.; Berendes, R.; Drexlin, G.; Glück, F.; Gumbsheimer, R.; Hergenhan, J.; Leiber, B.; Mertens, S.; Osipowicz, A.; Plischke, P.; Reich, J.; Thümmler, T.; Wandkowsky, N.; Weinheimer, C.; Wüstling, S.

    2018-02-01

    The KATRIN experiment is a next-generation direct neutrino mass experiment with a sensitivity of 0.2 eV (90% C.L.) to the effective mass of the electron neutrino. It measures the tritium β-decay spectrum close to its endpoint with a spectrometer based on the MAC-E filter technique. The β-decay electrons are guided by a magnetic field that operates in the mT range in the central spectrometer volume; it is fine-tuned by a large-volume air coil system surrounding the spectrometer vessel. The purpose of the system is to provide optimal transmission properties for signal electrons and to achieve efficient magnetic shielding against background. In this paper we describe the technical design of the air coil system, including its mechanical and electrical properties. We outline the importance of its versatile operation modes in background investigation and suppression techniques. We compare magnetic field measurements in the inner spectrometer volume during system commissioning with corresponding simulations, which allows us to verify the system's functionality in fine-tuning the magnetic field configuration. This is of major importance for a successful neutrino mass measurement at KATRIN.

  2. Reduced order models for assessing CO2 impacts in shallow unconfined aquifers

    DOE PAGES

    Keating, Elizabeth H.; Harp, Dylan H.; Dai, Zhenxue; ...

    2016-01-28

    Risk assessment studies of potential CO2 sequestration projects consider many factors, including the possibility of brine and/or CO2 leakage from the storage reservoir. Detailed multiphase reactive transport simulations have been developed to predict the impact of such leaks on shallow groundwater quality; however, these simulations are computationally expensive and thus difficult to embed directly in a probabilistic risk assessment analysis. Here we present a process for developing computationally fast reduced-order models (ROMs) which emulate key features of the more detailed reactive transport simulations. A large ensemble of simulations that take into account uncertainty in aquifer characteristics and CO2/brine leakage scenarios was performed. Twelve simulation outputs of interest were used to develop response surfaces (RSs) using a MARS (multivariate adaptive regression splines) algorithm (Milborrow, 2015). A key part of this study is to compare different measures of ROM accuracy. We show that for some computed outputs, MARS performs very well in matching the simulation data. The capability of the RS to predict simulation outputs for parameter combinations not used in RS development was tested using cross-validation. Again, for some outputs, these results were quite good; for other outputs, however, the method performs relatively poorly. Performance was best for predicting the volume of depressed-pH plumes, and relatively poor for predicting organic and trace metal plume volumes. We believe several factors, including the nonlinearity of the problem, the complexity of the geochemistry, and granularity in the simulation results, contribute to this varied performance. The reduced-order models were developed principally to be used in probabilistic performance analysis where a large range of scenarios is considered and ensemble performance is calculated. We demonstrate that they effectively predict the ensemble behavior. The performance of the RSs is much less accurate, however, when used to predict time-varying outputs from a single simulation. If an analysis requires only a small number of scenarios to be investigated, computationally expensive physics-based simulations would likely provide more reliable results. Finally, if the aggregate behavior of a large number of realizations is the focus, as will be the case in probabilistic quantitative risk assessment, the methodology presented here is relatively robust.
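
The cross-validation test of response-surface accuracy described above can be illustrated with a simple surrogate. MARS itself is not part of the standard scientific Python stack, so this sketch substitutes a quadratic least-squares surface; the ensemble inputs and the single output are synthetic stand-ins for the reactive-transport runs.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical stand-in for the simulation ensemble: 200 runs over
# 3 uncertain inputs, one smooth output with additive noise.
n, p = 200, 3
X = rng.random((n, p))
y = 50 * X[:, 0] + 20 * X[:, 1] ** 2 + 5 * rng.standard_normal(n)

def design(X):
    """Simple response-surface basis: intercept, linear and squared terms."""
    return np.column_stack([np.ones(len(X)), X, X ** 2])

def kfold_r2(X, y, k=5):
    """Hold out each fold, fit on the rest, and score predictions on the
    held-out runs -- the cross-validation used to test the RSs."""
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    ss_res = ss_tot = 0.0
    for f in folds:
        train = np.setdiff1d(idx, f)
        coef, *_ = np.linalg.lstsq(design(X[train]), y[train], rcond=None)
        pred = design(X[f]) @ coef
        ss_res += ((y[f] - pred) ** 2).sum()
        ss_tot += ((y[f] - y.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot

score = kfold_r2(X, y)
```

A high cross-validated R² on held-out runs is what justifies using the cheap surrogate inside the probabilistic risk analysis in place of the full simulator.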

  3. Fuzzy hidden Markov chains segmentation for volume determination and quantitation in PET.

    PubMed

    Hatt, M; Lamare, F; Boussion, N; Turzo, A; Collet, C; Salzenstein, F; Roux, C; Jarritt, P; Carson, K; Cheze-Le Rest, C; Visvikis, D

    2007-06-21

    Accurate volume of interest (VOI) estimation in PET is crucial in different oncology applications such as response-to-therapy evaluation and radiotherapy treatment planning. The objective of our study was to compare the performance of the proposed algorithm for automatic lesion volume delineation, namely the fuzzy hidden Markov chains (FHMC), with that of the threshold-based techniques that are the current state of the art in clinical practice. Like the classical hidden Markov chain (HMC) algorithm, FHMC takes into account noise, voxel intensity and spatial correlation in order to classify a voxel as background or functional VOI. The novelty of the fuzzy model, however, consists of the inclusion of an estimation of imprecision, which should subsequently lead to a better modelling of the 'fuzzy' nature of the object-of-interest boundaries in emission tomography data. The performance of the algorithms has been assessed on both simulated and acquired datasets of the IEC phantom, covering a large range of spherical lesion sizes (from 10 to 37 mm), contrast ratios (4:1 and 8:1) and image noise levels. Both lesion activity recovery and VOI determination tasks were assessed in reconstructed images using two different voxel sizes (8 mm3 and 64 mm3). In order to account for both the functional volume location and its size, the concept of % classification errors was introduced in the evaluation of volume segmentation using the simulated datasets. Results reveal that FHMC performs substantially better than the threshold-based methodology for functional volume determination or activity concentration recovery at a contrast ratio of 4:1 and lesion sizes of <28 mm. Furthermore, the differences between the classification and volume estimation errors were smaller for the segmented volumes provided by the FHMC algorithm. Finally, the performance of the automatic algorithms was less susceptible to image noise levels than that of the threshold-based techniques. The analysis of both simulated and acquired datasets led to similar results and conclusions as far as the performance of the segmentation algorithms under evaluation is concerned.
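
The threshold-based baseline and the % classification error measure can be sketched on a synthetic phantom. The sphere size, contrast and noise level below are illustrative stand-ins for the IEC phantom data, and the FHMC algorithm itself is not reproduced here — only the simple fixed-fraction-of-maximum threshold it is compared against.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic PET-like volume: a hot sphere (lesion) on background at 4:1
# contrast with additive Gaussian noise.
shape = (40, 40, 40)
zz, yy, xx = np.indices(shape)
truth = ((xx - 20) ** 2 + (yy - 20) ** 2 + (zz - 20) ** 2) <= 8 ** 2
img = np.where(truth, 4.0, 1.0) + 0.2 * rng.standard_normal(shape)

# Threshold baseline: classify voxels above a fixed fraction of the maximum
voi = img >= 0.5 * img.max()

# Percent classification errors: false positives plus false negatives,
# normalised by the true lesion volume, so both location and size errors
# are penalised.
fp = np.logical_and(voi, ~truth).sum()
fn = np.logical_and(~voi, truth).sum()
pce = 100.0 * (fp + fn) / truth.sum()
```

On this easy synthetic case the threshold performs well; the paper's point is that its accuracy degrades at low contrast, small lesion size and high noise, where FHMC retains lower classification errors.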

  4. Pre-compression volume on flow ripple reduction of a piston pump

    NASA Astrophysics Data System (ADS)

    Xu, Bing; Song, Yuechao; Yang, Huayong

    2013-11-01

    An axial piston pump with a pre-compression volume (PCV) has a lower flow ripple over a large range of operating conditions than the traditional design. However, a precise simulation model of the axial piston pump with PCV has been lacking, so the PCV parameters are difficult to determine. A finite element simulation model for a piston pump with PCV is built by considering the piston movement, the fluid characteristics (including fluid compressibility and viscosity) and the leakage flow rate. A pump flow ripple test, the secondary source method, is then implemented to validate the simulation model. By comparing the simulation results, test results and results from other publications at the same operating condition, the simulation model is validated and used in optimizing the axial piston pump with PCV. According to the pump flow ripples obtained by the simulation model with different PCV parameters, the flow ripple is smallest when the PCV angle is 13° and the PCV volume is 1.3×10-4 m3, at the operating condition of 2 MPa pump suction pressure, 15 MPa pump delivery pressure, 1 000 r/min pump speed and 13° swash plate angle. Moreover, at a pump suction pressure of 2 MPa the flow ripple can be reduced for pump delivery pressures of 5 MPa, 15 MPa and 22 MPa, pump speeds of 400 r/min, 1 000 r/min and 1 500 r/min, and swash plate angles of 11°, 13°, 15° and 17°, respectively. The proposed finite element simulation model provides a method for optimizing the PCV structure and guidance for designing a quieter axial piston pump.
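
The flow ripple being minimized can be illustrated with the kinematic (ideal) delivery flow of a multi-piston pump, before any PCV, compressibility or leakage effects are modeled: each piston contributes a sinusoidal flow only during its discharge stroke, and the sum over pistons leaves a small residual ripple. The geometry values below are illustrative, not the paper's.

```python
import numpy as np

# Illustrative 9-piston axial pump geometry (not the paper's pump)
n_pistons = 9
R = 0.04                          # m, piston pitch radius
Ap = 2.0e-4                       # m^2, piston area
beta = np.radians(13.0)           # swash plate angle
omega = 2 * np.pi * 1000 / 60.0   # rad/s at 1 000 r/min

# Sum the discharge-stroke flow of each piston over one shaft revolution
theta = np.linspace(0.0, 2 * np.pi, 2000, endpoint=False)
q = np.zeros_like(theta)
for k in range(n_pistons):
    phase = theta + 2 * np.pi * k / n_pistons
    qk = Ap * R * np.tan(beta) * omega * np.sin(phase)
    q += np.where(qk > 0.0, qk, 0.0)   # only discharging pistons contribute

# Nondimensional flow ripple: peak-to-peak variation over the mean flow
ripple = (q.max() - q.min()) / q.mean()
```

For an odd piston count the kinematic ripple is already small (on the order of a percent for 9 pistons); the PCV targets the additional, larger ripple caused by fluid compression at the transition into the delivery port, which this ideal model omits.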

  5. The electrostatic persistence length of polymers beyond the OSF limit.

    PubMed

    Everaers, R; Milchev, A; Yamakov, V

    2002-05-01

    We use large-scale Monte Carlo simulations to test scaling theories for the electrostatic persistence length l(e) of isolated, uniformly charged polymers with Debye-Hückel intrachain interactions in the limit where the screening length kappa(-1) exceeds the intrinsic persistence length of the chains. Our simulations cover a significantly larger part of the parameter space than previous studies. We observe no significant deviations from the prediction l(e) proportional to kappa(-2) by Khokhlov and Khachaturian, which is based on applying the Odijk-Skolnick-Fixman theories of electrostatic bending rigidity and electrostatically excluded volume to the stretched de Gennes-Pincus-Velasco-Brochard polyelectrolyte blob chain. A linear or sublinear dependence of the persistence length on the screening length can be ruled out. We show that previous results pointing in this direction are due to a combination of excluded-volume and finite chain length effects. The paper emphasizes the role of scaling arguments in the development of useful representations for experimental and simulation data.
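
The kappa^-2 scaling being tested can be written down directly from the Odijk-Skolnick-Fixman expression for the electrostatic persistence length of a uniformly charged line, l_OSF = l_B / (4 A^2 kappa^2), with Bjerrum length l_B and charge spacing A. The parameter values below are illustrative, not taken from the simulations:

```python
import numpy as np

l_B = 0.7   # nm, Bjerrum length in water at room temperature
A = 0.25    # nm, illustrative spacing between charges along the chain

def l_osf(kappa):
    """OSF electrostatic persistence length for a charged line (nm)."""
    return l_B / (4.0 * A ** 2 * kappa ** 2)

kappas = np.array([0.1, 0.2, 0.4])   # nm^-1, Debye screening parameters
le = l_osf(kappas)

# Khokhlov-Khachaturian scaling confirmed by the simulations: l_e ~ kappa^-2,
# so doubling the screening parameter quarters the persistence length.
ratios = le[:-1] / le[1:]
```

A linear dependence l_e ~ kappa^-1, which the simulations rule out, would instead give successive ratios of 2 rather than 4 for this doubling sequence.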

  6. Hot interstellar tunnels. 1: Simulation of interacting supernova remnants

    NASA Technical Reports Server (NTRS)

    Smith, B. W.

    1976-01-01

    The theory required to build a numerical simulation of interacting supernova remnants is developed. The hot cavities within a population of remnants will become connected, with varying ease and speed, for a variety of assumed conditions in the outer shells of old remnants. Apparently neither radiative cooling nor thermal conduction in a large-scale galactic magnetic field can destroy hot cavity regions, if they grow, faster than they are reheated by supernova shock waves, but interstellar mass motions disrupt the contiguity of extensive cavities necessary for the dispersal of these shocks over a wide volume. Monte Carlo simulations show that a quasi-equilibrium is reached in the test space within 10 million yrs of the first supernova and is characterized by an average cavity filling fraction of the interstellar volume. Aspects of this equilibrium are discussed for a range of supernova rates. Two predictions are not confirmed within this range: critical growth of hot regions to encompass the entire medium, and the efficient quenching of a remnant's expansion by interaction with other cavities.

  7. Unsteady flow simulations around complex geometries using stationary or rotating unstructured grids

    NASA Astrophysics Data System (ADS)

    Sezer-Uzol, Nilay

    In this research, three-dimensional, unsteady, separated, vortical flows around complex geometries are analyzed computationally using stationary or moving unstructured grids. Two main engineering problems are investigated. The first is the unsteady simulation of a ship airwake, in which helicopter operations become even more challenging, using stationary unstructured grids. The second is the unsteady simulation of wind turbine rotor flow fields using moving unstructured grids that rotate with the whole three-dimensional rigid rotor geometry. The three-dimensional, unsteady, parallel, unstructured, finite volume flow solver PUMA2 is used for the computational fluid dynamics (CFD) simulations considered in this research. The code is modified with a moving-grid capability to perform three-dimensional, time-dependent rotor simulations. An instantaneous log-law wall model for Large Eddy Simulations is also implemented in PUMA2 to investigate the very-large-Reynolds-number flow fields of rotating blades. To verify the code modifications, several sample test cases are considered. In addition, interdisciplinary studies, which aim to provide new tools and insights to the aerospace and wind energy scientific communities, are conducted during this research by focusing on the coupling of ship airwake CFD simulations with helicopter flight dynamics and control analysis, the coupling of wind turbine rotor CFD simulations with aeroacoustic analysis, and the analysis of these time-dependent, large-scale CFD simulations with the help of a computational monitoring, steering and visualization tool, POSSE.

  8. Bidisperse and polydisperse suspension rheology at large solid fraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pednekar, Sidhant; Chun, Jaehun; Morris, Jeffrey F.

    At the same solid volume fraction, bidisperse and polydisperse suspensions display lower viscosities, and weaker normal stress response, compared to monodisperse suspensions. The reduction of viscosity associated with size distribution can be explained by an increase of the maximum flowable, or jamming, solid fraction. In this work, concentrated or "dense" suspensions are simulated under strong shearing, where thermal motion and repulsive forces are negligible, but we allow for particle contact with a mild frictional interaction with an interparticle friction coefficient of 0.2. Aspects of bidisperse suspension rheology are first revisited to establish that the approach reproduces established trends; the study of bidisperse suspensions at size ratios of large to small particle radii of 2 to 4 shows that a minimum in the viscosity occurs for zeta slightly above 0.5, where zeta = phi_large/phi is the fraction of the total solid volume occupied by the large particles. The simple shear flows of polydisperse suspensions with truncated normal and log-normal size distributions, and of bidisperse suspensions which are statistically equivalent to these polydisperse cases up to the third moment of the size distribution, are simulated and the rheologies are extracted. Prior work shows that such distributions with equivalent low-order moments have similar phi_m, and the rheological behaviors of the normal, log-normal and bidisperse cases are shown to be in close agreement for a wide range of standard deviation in particle size, with standard correlations which are functionally dependent on phi/phi_m providing excellent agreement with the rheology found in simulation. The close agreement of both viscosity and normal stress response between bi- and polydisperse suspensions demonstrates the controlling influence of the maximum packing fraction in noncolloidal suspensions. Microstructural investigations and the stress distribution according to particle size are also presented.
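    The "standard correlations which are functionally dependent on phi/phi_m" referenced above are commonly of Maron-Pierce form, eta_r = (1 - phi/phi_m)^-2. A minimal sketch under that assumption (the phi_m values below are illustrative, not the paper's fitted values):

```python
def relative_viscosity(phi, phi_m):
    """Maron-Pierce-type correlation: eta_r = (1 - phi/phi_m)**-2.

    phi   : solid volume fraction
    phi_m : maximum flowable (jamming) fraction, raised by polydispersity
    """
    if phi >= phi_m:
        raise ValueError("suspension is jammed: phi >= phi_m")
    return (1.0 - phi / phi_m) ** -2

# Illustrative values: a broader size distribution raises phi_m,
# lowering the viscosity at the same solid fraction phi = 0.55.
eta_mono = relative_viscosity(0.55, phi_m=0.64)   # monodisperse-like
eta_bi   = relative_viscosity(0.55, phi_m=0.70)   # bidisperse-like
```

    Raising phi_m, as a broader size distribution does, lowers the viscosity at fixed phi, which is the mechanism the abstract describes.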

  9. Perturbative approach to covariance matrix of the matter power spectrum

    DOE PAGES

    Mohammed, Irshad; Seljak, Uros; Vlah, Zvonimir

    2016-12-14

    Here, we evaluate the covariance matrix of the matter power spectrum using perturbation theory up to dominant terms at 1-loop order and compare it to numerical simulations. We decompose the covariance matrix into the disconnected (Gaussian) part, trispectrum from the modes outside the survey (beat coupling or super-sample variance), and trispectrum from the modes inside the survey, and show how the different components contribute to the overall covariance matrix. We find the agreement with the simulations is at a 10% level up to k ~ 1 h Mpc^-1. We also show that all the connected components are dominated by the large-scale modes (k < 0.1 h Mpc^-1), regardless of the value of the wavevectors k, k' of the covariance matrix, suggesting that one must be careful in applying the jackknife or bootstrap methods to the covariance matrix. We perform an eigenmode decomposition of the connected part of the covariance matrix, showing that at higher k it is dominated by a single eigenmode. Furthermore, the full covariance matrix can be approximated as the disconnected part only, with the connected part being treated as an external nuisance parameter with a known scale dependence, and a known prior on its variance for a given survey volume. Finally, we provide a prescription for how to evaluate the covariance matrix from small box simulations without the need to simulate large volumes.
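    The disconnected (Gaussian) part that the authors suggest using as the baseline covariance has a standard closed form: C_ij = delta_ij * 2 P(k_i)^2 / N_i, with N_i the number of Fourier modes per bin, N_i ~ V k_i^2 dk / (2 pi^2). A minimal sketch (the toy power spectrum and survey volume are illustrative):

```python
import numpy as np

def gaussian_covariance(k, pk, dk, volume):
    """Disconnected (Gaussian) part of the power-spectrum covariance:
    C_ij = delta_ij * 2 P(k_i)^2 / N_i, with the number of Fourier
    modes per bin N_i ~ V k_i^2 dk / (2 pi^2)."""
    n_modes = volume * k**2 * dk / (2.0 * np.pi**2)
    return np.diag(2.0 * pk**2 / n_modes)

k = np.linspace(0.05, 1.0, 20)                    # bin centers, h/Mpc
pk = 1.0e4 * (k / 0.05) ** -1.5                   # toy power spectrum, (Mpc/h)^3
cov = gaussian_covariance(k, pk, dk=0.05, volume=1.0e9)   # ~1 (Gpc/h)^3 box
```

    The connected (trispectrum) contributions discussed in the abstract add off-diagonal structure on top of this diagonal baseline.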

  10. Micro Blowing Simulations Using a Coupled Finite-Volume Lattice-Boltzmann LES Approach

    NASA Technical Reports Server (NTRS)

    Menon, S.; Feiz, H.

    1990-01-01

    Three-dimensional large-eddy simulations (LES) of single and multiple jet-in-cross-flow (JICF) configurations are conducted using the 19-bit Lattice Boltzmann Equation (LBE) method coupled with a conventional finite-volume (FV) scheme. In this coupled LBE-FV approach, the LBE-LES is employed to simulate the flow inside the jet nozzles while the FV-LES is used to simulate the crossflow. The key application of this technique is the study of the micro-blowing technique (MBT) for drag control, similar to the recent experiments at NASA/GRC. It is necessary to resolve the flow inside the micro-blowing and suction holes with high resolution without being limited by the FV time-step restriction. The coupled LBE-FV-LES approach achieves this objective in a computationally efficient manner. A single jet in crossflow case is used for validation purposes, and the results are compared with experimental data and a full LBE-LES simulation. Good agreement with data is obtained. Subsequently, MBT over a flat plate with a porosity of 25% is simulated using 9 jets in a compressible cross flow at a Mach number of 0.4. It is shown that MBT suppresses the near-wall vortices and reduces the skin friction by up to 50 percent. This is in good agreement with experimental data.

  11. Modeling the liquid filling in capillary well microplates for analyte preconcentration.

    PubMed

    Yu, Yang; Wang, Xuewei; Ng, Tuck Wah

    2012-06-15

    An attractive advantage of the capillary well microplate approach is the ability to conduct evaporative analyte preconcentration. We advance the use of hydrophobic materials for the wells which apart from reducing material loss through wetting also affords self entry into the well when the droplet size reduces below a critical value. Using Surface Evolver simulation without gravity, we find the critical diameters D(c) fitting very well with theoretical results. When simulating the critical diameters D(c)(G) with gravity included, the gravitational effect could only be ignored when the liquid volumes were small (difference of 5.7% with 5 μL of liquid), but not when the liquid volumes were large (differences of more than 22% with 50 μL of liquid). From this, we developed a modifying equation from a series of simulation results made to describe the gravitational effect. This modifying equation fitted the simulation results well in our simulation range (100°≤θ≤135° and 1 μL≤V≤200 μL). In simulating the condition of multiple wells underneath each droplet, we found that having more holes did not alter the critical diameters significantly. Consequently, the modifying relation should also generally express the critical diameter for multiple wells under a droplet. Crown Copyright © 2012. Published by Elsevier Inc. All rights reserved.

  12. Effects of Aircraft Wake Dynamics on Measured and Simulated NO(x) and HO(x) Wake Chemistry. Appendix B

    NASA Technical Reports Server (NTRS)

    Lewellen, D. C.; Lewellen, W. S.

    2001-01-01

    High-resolution numerical large-eddy simulations of the near wake of a B757 including simplified NOx and HOx chemistry were performed to explore the effects of dynamics on chemistry in wakes of ages from a few seconds to several minutes. Dilution plays an important basic role in the NOx-O3 chemistry in the wake, while a more interesting interaction between the chemistry and dynamics occurs for the HOx species. These simulation results are compared with published measurements of OH and HO2 within a B757 wake under cruise conditions in the upper troposphere taken during the Subsonic Aircraft Contrail and Cloud Effects Special Study (SUCCESS) mission in May 1996. The simulation provides a much finer grained representation of the chemistry and dynamics of the early wake than is possible from the 1 s data samples taken in situ. The comparison suggests that the previously reported discrepancy of up to a factor of 20 - 50 between the SUCCESS measurements of the [HO2]/[OH] ratio and that predicted by simplified theoretical computations is due to the combined effects of large mixing rates around the wake plume edges and averaging over volumes containing large species fluctuations. The results demonstrate the feasibility of using three-dimensional unsteady large-eddy simulations with coupled chemistry to study such phenomena.

  13. Influence of grid resolution, parcel size and drag models on bubbling fluidized bed simulation

    DOE PAGES

    Lu, Liqiang; Konan, Arthur; Benyahia, Sofiane

    2017-06-02

    In this paper, a bubbling fluidized bed is simulated with different numerical parameters, such as grid resolution and parcel size. We also examined the effect of using two homogeneous drag correlations and a heterogeneous drag correlation based on the energy minimization method. A fast and reliable bubble detection algorithm was developed based on connected component labeling. The radial and axial solids volume fraction profiles are compared with experimental data and previous simulation results. These results show a significant influence of the drag models on bubble size and voidage distributions and a much weaker dependence on the numerical parameters. With a heterogeneous drag model that accounts for sub-scale structures, the void fraction in the bubbling fluidized bed can be well captured with a coarse grid and large computational parcels. Refining the CFD grid and reducing the parcel size can improve the simulation results, but at a large increase in computational cost.
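    The bubble detection algorithm is described only as being based on connected component labeling; a minimal sketch of that idea, assuming bubbles are defined by thresholding the void fraction (the 0.8 threshold, 4-connectivity, and the toy grid are illustrative):

```python
from collections import deque

def label_bubbles(voidage, threshold=0.8):
    """Label 4-connected regions where the void fraction exceeds the
    threshold (bubble cells). Returns the label grid and bubble count."""
    rows, cols = len(voidage), len(voidage[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for i in range(rows):
        for j in range(cols):
            if voidage[i][j] > threshold and labels[i][j] == 0:
                current += 1                     # new bubble found
                queue = deque([(i, j)])
                labels[i][j] = current
                while queue:                     # flood-fill its cells
                    r, c = queue.popleft()
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if (0 <= nr < rows and 0 <= nc < cols
                                and voidage[nr][nc] > threshold
                                and labels[nr][nc] == 0):
                            labels[nr][nc] = current
                            queue.append((nr, nc))
    return labels, current

# Two separate high-voidage regions in an otherwise dense bed (void ~ 0.45).
grid = [
    [0.45, 0.95, 0.45, 0.45],
    [0.45, 0.95, 0.45, 0.90],
    [0.45, 0.45, 0.45, 0.90],
]
labels, n_bubbles = label_bubbles(grid)
```

    Bubble size statistics then follow from counting the cells carrying each label.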

  14. Design and Construction of an Urban Runoff Research Facility

    PubMed Central

    Wherley, Benjamin G.; White, Richard H.; McInnes, Kevin J.; Fontanier, Charles H.; Thomas, James C.; Aitkenhead-Peterson, Jacqueline A.; Kelly, Steven T.

    2014-01-01

    As the urban population increases, so does the area of irrigated urban landscape. Summer water use in urban areas can be 2-3x the winter baseline water use due to increased demand for landscape irrigation. Improper irrigation practices and large rainfall events can result in runoff from urban landscapes, which has the potential to carry nutrients and sediments into local streams and lakes where they may contribute to eutrophication. A 1,000 m2 facility was constructed which consists of 24 individual 33.6 m2 field plots, each equipped for measuring total runoff volume over time and for collecting runoff subsamples at selected intervals for quantification of chemical constituents in the runoff water from simulated urban landscapes. Runoff volumes from the first and second trials had coefficient of variability (CV) values of 38.2 and 28.7%, respectively. CV values for runoff pH, EC, and Na concentration for both trials were all under 10%. Concentrations of DOC, TDN, DON, PO4-P, K+, Mg2+, and Ca2+ had CV values less than 50% in both trials. Overall, the results of testing performed after sod installation at the facility indicated good uniformity between plots for runoff volumes and chemical constituents. The large plot size is sufficient to include much of the natural variability and therefore provides a better simulation of urban landscape ecosystems. PMID:25146420
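    The between-plot uniformity metric used above, the coefficient of variability, is simple arithmetic; a minimal sketch (the runoff volumes are hypothetical):

```python
from statistics import mean, stdev

def coefficient_of_variation(samples):
    """CV (%) = sample standard deviation / mean * 100, the
    between-plot uniformity metric reported for the runoff trials."""
    return stdev(samples) / mean(samples) * 100.0

# Hypothetical runoff volumes (L) from six replicate plots.
runoff = [120.0, 95.0, 140.0, 110.0, 88.0, 132.0]
cv = coefficient_of_variation(runoff)
```
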

  15. A fully coupled method for massively parallel simulation of hydraulically driven fractures in 3-dimensions: FULLY COUPLED PARALLEL SIMULATION OF HYDRAULIC FRACTURES IN 3-D

    DOE PAGES

    Settgast, Randolph R.; Fu, Pengcheng; Walsh, Stuart D. C.; ...

    2016-09-18

    This study describes a fully coupled finite element/finite volume approach for simulating field-scale hydraulically driven fractures in three dimensions, using massively parallel computing platforms. The proposed method is capable of capturing realistic representations of local heterogeneities, layering and natural fracture networks in a reservoir. A detailed description of the numerical implementation is provided, along with numerical studies comparing the model with both analytical solutions and experimental results. The results demonstrate the effectiveness of the proposed method for modeling large-scale problems involving hydraulically driven fractures in three dimensions.

  16. A fully coupled method for massively parallel simulation of hydraulically driven fractures in 3-dimensions: FULLY COUPLED PARALLEL SIMULATION OF HYDRAULIC FRACTURES IN 3-D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Settgast, Randolph R.; Fu, Pengcheng; Walsh, Stuart D. C.

    This study describes a fully coupled finite element/finite volume approach for simulating field-scale hydraulically driven fractures in three dimensions, using massively parallel computing platforms. The proposed method is capable of capturing realistic representations of local heterogeneities, layering and natural fracture networks in a reservoir. A detailed description of the numerical implementation is provided, along with numerical studies comparing the model with both analytical solutions and experimental results. The results demonstrate the effectiveness of the proposed method for modeling large-scale problems involving hydraulically driven fractures in three dimensions.

  17. Visualization and Analysis for Near-Real-Time Decision Making in Distributed Workflows

    DOE PAGES

    Pugmire, David; Kress, James; Choi, Jong; ...

    2016-08-04

    Data-driven science is becoming increasingly common and complex, and is placing tremendous stresses on visualization and analysis frameworks. Data sources producing 10 GB per second (and more) are becoming increasingly commonplace in simulation, sensor, and experimental sciences. These data sources, which are often distributed around the world, must be analyzed by teams of scientists that are also distributed. Enabling scientists to view, query, and interact with such large volumes of data in near-real-time requires a rich fusion of visualization and analysis techniques, middleware, and workflow systems. This paper discusses initial research into visualization and analysis of distributed data workflows that enables scientists to make near-real-time decisions about large volumes of time-varying data.

  18. Three-Dimensional Simulation of Liquid Drop Dynamics Within Unsaturated Vertical Hele-Shaw Cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hai Huang; Paul Meakin

    A three-dimensional, multiphase fluid flow model with volume-of-fluid interface tracking was developed and applied to study the multiphase dynamics of moving liquid drops of different sizes within vertical Hele-Shaw cells. The simulated moving velocities are significantly different from those obtained from a first-order analytical approximation based on simple force-balance concepts. The simulation results also indicate that the moving drops can exhibit a variety of shapes and that the transition among these different shapes is largely determined by the moving velocities. More importantly, there is a transition from a linear moving regime at small capillary numbers, in which the capillary number scales linearly with the Bond number, to a nonlinear moving regime at large capillary numbers, in which the moving drop releases a train of droplets from its trailing edge. The train of droplets forms a variety of patterns at different moving velocities.
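    The regime transition above is phrased in terms of capillary and Bond numbers, whose standard definitions are Ca = mu*U/sigma and Bo = delta_rho*g*L^2/sigma. A minimal sketch (the fluid properties are illustrative values for a millimetre-scale water drop in air, not the paper's cases):

```python
def capillary_number(mu, velocity, sigma):
    """Ca = mu * U / sigma: viscous forces vs. surface tension."""
    return mu * velocity / sigma

def bond_number(delta_rho, g, length, sigma):
    """Bo = delta_rho * g * L^2 / sigma: gravity vs. surface tension."""
    return delta_rho * g * length**2 / sigma

# Illustrative values: water drop (mu = 1 mPa s, sigma = 72 mN/m)
# of 1 mm size moving at 1 cm/s.
ca = capillary_number(mu=1.0e-3, velocity=0.01, sigma=0.072)
bo = bond_number(delta_rho=998.0, g=9.81, length=1.0e-3, sigma=0.072)
```

    In the linear regime described above, Ca grows proportionally with Bo as the drop size, and hence the driving buoyancy, increases.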

  19. The Strata-1 Regolith Dynamics Experiment: Class 1E Science on ISS

    NASA Technical Reports Server (NTRS)

    Fries, Marc; Graham, Lee; John, Kristen

    2016-01-01

    The Strata-1 experiment studies the evolution of small body regolith through long-duration exposure of simulant materials to the microgravity environment on the International Space Station (ISS). This study will record segregation and mechanical dynamics of regolith simulants in a microgravity and vibration environment similar to that experienced by regolith on small Solar System bodies. Strata-1 will help us understand regolith dynamics and will inform design and procedures for landing and setting anchors, safely sampling and moving material on asteroidal surfaces, processing large volumes of material for in situ resource utilization (ISRU) purposes, and, in general, predicting the behavior of large and small particles on disturbed asteroid surfaces. This experiment is providing new insights into small body surface evolution.

  20. Novel three-dimensional imaging volumetry in autosomal dominant polycystic kidney disease: comparison with 2D volumetry.

    PubMed

    Shin, Dongsuk; Lee, Kyu-Beck; Hyun, Young Youl; Lee, Young Rae; Hwang, Young-Hwan; Park, Hayne Cho; Ahn, Curie

    2014-08-01

    Autosomal dominant polycystic kidney disease (ADPKD) volumetry is an important marker for evaluating the progression of the disease. Three-dimensional (3D) volumetry is generally more time-saving than 2D volumetry, but its reliability and accuracy are uncertain. Small and large phantoms simulating polycystic kidneys and 20 patients with ADPKD underwent magnetic resonance imaging (MRI) volumetry. We evaluated the total kidney volume (TKV) and total cyst volume (TCV) using a novel 3D volumetry program (Xelis™) and compared the 3D volumetry data with the conventional 2D method (the reference volume values). After upload and threshold setting, the other organs surrounding the kidney were removed by picking and sculpting. The novel method involves drawing of the kidney or cyst and automatic measurement of kidney volume and cyst volume in 3D images. The 3D volume estimates for the small and large phantoms differed from the actual values by 6.9% and -8.2%, respectively, for TKV and by 2.1% and 1.4% for TCV. In ADPKD patients, the intra-reader reliability of 3D volumetry was 30 ± 180 mL (1.3 ± 10.3%) and 25 ± 113 mL (1.2 ± 9.4%) for TKV and TCV, respectively. Correlation between 3D and 2D volumetry of TKV and TCV yielded high correlation coefficients and regression slopes approaching 1.00 (r = 0.97-0.98). The mean volume percentage differences (3D vs. 2D) for TKV and TCV were -6.0 ± 8.9% and 2.0 ± 11.8% in large ADPKD and -16.1 ± 10.4% and 13.2 ± 21.9% in small ADPKD. Our study showed that 3D volumetry is reliable and accurate compared with 2D volumetry in ADPKD. 3D volumetry is more accurate for TCV and for large ADPKD.

  1. Marvel-ous Dwarfs: Results from Four Heroically Large Simulated Volumes of Dwarf Galaxies

    NASA Astrophysics Data System (ADS)

    Munshi, Ferah; Brooks, Alyson; Weisz, Daniel; Bellovary, Jillian; Christensen, Charlotte

    2018-01-01

    We present results from high resolution, fully cosmological simulations of cosmic sheets that contain many dwarf galaxies. Together, they create the largest collection of simulated dwarf galaxies to date, with z=0 stellar masses comparable to the LMC or smaller. In total, we have simulated almost 100 luminous dwarf galaxies, forming a sample of simulated dwarfs which span a wide range of physical (stellar and halo mass) and evolutionary properties (merger history). We show how they can be calibrated against a wealth of observations of nearby galaxies including star formation histories, HI masses and kinematics, as well as stellar metallicities. We present preliminary results answering the following key questions: What is the slope of the stellar mass function at extremely low masses? Do halos with HI and no stars exist? What is the scatter in the stellar to halo mass relationship as a function of dwarf mass? What drives the scatter? With this large suite, we are beginning to statistically characterize dwarf galaxies and identify the types and numbers of outliers to expect.

  2. Precipitation-Runoff Modeling System (PRMS) and Streamflow Response to Spatially Distributed Precipitation in Two Large Watersheds in Northern California

    NASA Astrophysics Data System (ADS)

    Dhakal, A. S.; Adera, S.; Niswonger, R. G.; Gardner, M.

    2016-12-01

    The ability of the Precipitation-Runoff Modeling System (PRMS) to predict peak intensity, peak timing, base flow, and volume of streamflow was examined in Arroyo Hondo (180 km2) and Upper Alameda Creek (85 km2), two sub-watersheds of the Alameda Creek watershed in Northern California. Rainfall-runoff volume ratios vary widely and can exceed 0.85 during flashy mid-winter rainstorm events. Due to dry antecedent soil moisture conditions, the first storms of the hydrologic year often produce smaller rainfall-runoff volume ratios. Runoff response in this watershed is highly hysteretic; large precipitation events are required to generate runoff following a 4-week period without precipitation. After about 150 mm of cumulative rainfall, streamflow responds quickly to subsequent storms, with variations depending on rainstorm intensity. Inputs to PRMS included precipitation, temperature, topography, vegetation, soils, and land cover data. The data were prepared for input into PRMS using a suite of data-processing Python scripts written by the Desert Research Institute and the U.S. Geological Survey. PRMS was calibrated by comparing simulated streamflow to measured streamflow at a daily time step during the period 1995-2014. The PRMS model is being used to better understand the different patterns of streamflow observed in the Alameda Creek watershed. Although Arroyo Hondo receives more rainfall than Upper Alameda Creek, it is not clear whether the differences in streamflow patterns are a result of differences in rainfall or of other variables, such as geology, slope, and aspect. We investigate the ability of PRMS to simulate daily streamflow in the two sub-watersheds for a variety of antecedent soil moisture conditions and rainfall intensities. After successful simulation of watershed runoff processes, the model will be expanded using GSFLOW to simulate integrated surface water and groundwater to support water resources planning and management in the Alameda Creek watershed.
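    The hysteretic runoff response described above (little runoff until roughly 150 mm of cumulative rainfall has wetted the soils, with event ratios that can exceed 0.85 afterwards) can be caricatured in a few lines; every number here is illustrative, not a fitted PRMS parameter:

```python
def event_runoff_mm(cumulative_rain_mm, event_rain_mm,
                    wetting_threshold_mm=150.0):
    """Toy hysteretic runoff response: dry antecedent soils absorb most
    rainfall; once cumulative seasonal rainfall passes the wetting
    threshold, the watershed responds flashily. The threshold and the
    0.05 / 0.85 runoff ratios are illustrative, not calibrated values."""
    if cumulative_rain_mm < wetting_threshold_mm:
        return 0.05 * event_rain_mm      # dry antecedent conditions
    return 0.85 * event_rain_mm          # wetted, flashy mid-winter response

dry_season = event_runoff_mm(50.0, 10.0)    # early-season storm
wet_season = event_runoff_mm(200.0, 10.0)   # storm after soils are wetted
```

    PRMS represents this behavior through continuous soil-moisture accounting rather than a hard threshold; the sketch only illustrates the observed contrast.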

  3. Titan2D simulations of dome-collapse pyroclastic flows for crisis assessments on Montserrat

    NASA Astrophysics Data System (ADS)

    Widiwijayanti, C.; Voight, B.; Hidayat, D.; Patra, A.; Pitman, E.

    2010-12-01

    The Soufriere Hills Volcano (SHV), Montserrat, has experienced numerous episodes of lava dome collapses since 1995. Collapse volumes range from small rockfalls to major dome collapses (as much as ~200 M m3). Problems arise in hazards mitigation, particularly in zoning for populated areas. Determining the likely extent of flowage deposits in various scenarios is important for hazards zonation, provision of advice by scientists, and decision making by public officials. Towards resolution of this issue we have tested the TITAN2D code, calibrated parameters for an SHV database, and using updated topography have provided flowage maps for various scenarios and volume classes from SHV, for use in hazards assessments. TITAN2D is a map plane (depth averaged) simulator of granular flow and yields mass distributions over a DEM. Two Coulomb frictional parameters (basal and internal frictions) and initial source conditions (volume, source location, and source geometry) of single or multiple pulses in a dome-collapse type event control behavior of the flow. Flow kinematics are captured, so that the dynamics of flow can be examined spatially from frame to frame, or as a movie. Our hazard maps include not only the final deposit, but also areas inundated by moving debris prior to deposition. Simulations from TITAN2D were important for analysis of crises in the period 2007-2010. They showed that any very large mass released on the north slope would be strongly partitioned by local topography, and thus it was doubtful that flows of very large size (>20 M m3) could be generated in the Belham River drainage. This partitioning effect limited runout toward populated areas. These effects were interpreted to greatly reduce the down-valley risk of ash-cloud surges.

  4. Bistability: Requirements on Cell-Volume, Protein Diffusion, and Thermodynamics

    PubMed Central

    Endres, Robert G.

    2015-01-01

    Bistability is considered widespread among bacteria and eukaryotic cells, useful e.g. for enzyme induction, bet hedging, and epigenetic switching. However, this phenomenon has mostly been described with deterministic dynamic or well-mixed stochastic models. Here, we map known biological bistable systems onto the well-characterized biochemical Schlögl model, using analytical calculations and stochastic spatiotemporal simulations. In addition to network architecture and strong thermodynamic driving away from equilibrium, we show that bistability requires fine-tuning towards small cell volumes (or compartments) and fast protein diffusion (well mixing). Bistability is thus fragile and hence may be restricted to small bacteria and eukaryotic nuclei, with switching triggered by volume changes during the cell cycle. For large volumes, single cells generally lose their ability for bistable switching and instead undergo a first-order phase transition. PMID:25874711
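    The Schlögl model referenced above is the autocatalytic scheme A + 2X <-> 3X, B <-> X, whose deterministic rate equation is a cubic in x and so can have two stable fixed points. A minimal deterministic sketch (the rate constants are chosen purely for illustration so the fixed points sit at x = 1, 2, 4; it deliberately omits the stochastic, volume-dependent switching that the paper is actually about):

```python
def schlogl_drift(x, k1a=7.0, k2=1.0, k3b=8.0, k4=14.0):
    """Deterministic rate equation of the Schlogl model
       A + 2X <-> 3X,  B <-> X:
       dx/dt = k1*a*x^2 - k2*x^3 + k3*b - k4*x
    Constants are illustrative, giving fixed points at x = 1, 2, 4
    (stable, unstable, stable)."""
    return k1a * x**2 - k2 * x**3 + k3b - k4 * x

def integrate(x0, dt=1e-3, steps=20000):
    """Forward-Euler integration of the rate equation."""
    x = x0
    for _ in range(steps):
        x += dt * schlogl_drift(x)
    return x

low = integrate(0.5)    # relaxes to the lower stable state near x = 1
high = integrate(3.0)   # relaxes to the upper stable state near x = 4
```

    In the stochastic, finite-volume treatment the paper uses, fluctuations let the system hop between these two deterministic attractors, which is why cell volume matters.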

  5. Archaeal community structure in leachate and solid waste is correlated to methane generation and volume reduction during biodegradation of municipal solid waste.

    PubMed

    Fei, Xunchang; Zekkos, Dimitrios; Raskin, Lutgarde

    2015-02-01

    Duplicate carefully-characterized municipal solid waste (MSW) specimens were reconstituted with waste constituents obtained from a MSW landfill and biodegraded in large-scale landfill simulators for about a year. Repeatability and relationships between changes in physical, chemical, and microbial characteristics taking place during the biodegradation process were evaluated. Parameters such as rate of change of soluble chemical oxygen demand in the leachate (rsCOD), rate of methane generation (rCH4), rate of specimen volume reduction (rVt), DNA concentration in the leachate, and archaeal community structures in the leachate and solid waste were monitored during operation. The DNA concentration in the leachate was correlated to rCH4 and rVt. The rCH4 was related to rsCOD and rVt when waste biodegradation was intensive. The structures of archaeal communities in the leachate and solid waste of both simulators were very similar and Methanobacteriaceae were the dominant archaeal family throughout the testing period. Monitoring the chemical and microbial characteristics of the leachate was informative of the biodegradation process and volume reduction in the simulators, suggesting that leachate monitoring could be informative of the extent of biodegradation in a full-scale landfill. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Formulation and Validation of an Efficient Computational Model for a Dilute, Settling Suspension Undergoing Rotational Mixing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sprague, Michael A.; Stickel, Jonathan J.; Sitaraman, Hariswaran

    Designing processing equipment for the mixing of settling suspensions is a challenging problem. Achieving low-cost mixing is especially difficult for the application of slowly reacting suspended solids because the cost of impeller power consumption becomes quite high due to the long reaction times (batch mode) or due to large-volume reactors (continuous mode). Further, the usual scale-up metrics for mixing, e.g., constant tip speed and constant power per volume, do not apply well for mixing of suspensions. As an alternative, computational fluid dynamics (CFD) can be useful for analyzing mixing at multiple scales and determining appropriate mixer designs and operating parameters. We developed a mixture model to describe the hydrodynamics of a settling cellulose suspension. The suspension motion is represented as a single velocity field in a computationally efficient Eulerian framework. The solids are represented by a scalar volume-fraction field that undergoes transport due to particle diffusion, settling, fluid advection, and shear stress. A settling model and a viscosity model, both functions of volume fraction, were selected to fit experimental settling and viscosity data, respectively. Simulations were performed with the open-source Nek5000 CFD program, which is based on the high-order spectral-finite-element method. Simulations were performed for the cellulose suspension undergoing mixing in a laboratory-scale vane mixer. The settled-bed heights predicted by the simulations were in semi-quantitative agreement with experimental observations. Further, the simulation results were in quantitative agreement with experimentally obtained torque and mixing-rate data, including a characteristic torque bifurcation. In future work, we plan to couple this CFD model with a reaction-kinetics model for the enzymatic digestion of cellulose, allowing us to predict enzymatic digestion performance for various mixing intensities and novel reactor designs.

  7. Dielectric resonator antenna for coupling to NV centers in diamond

    NASA Astrophysics Data System (ADS)

    Kapitanova, Polina; Soshenko, Vladimir; Vorobyov, Vadim; Dobrykh, Dmitry; Bolshedvorskiih, Stepan; Sorokin, Vadim; Akimov, Alexey

    2017-09-01

    Here we present the design of a dielectric resonator antenna for spin manipulation of a large-volume ensemble of nitrogen-vacancy centers in a bulk diamond. The proposed antenna design is based on a high-permittivity hollow dielectric resonator excited by a symmetric microstrip loop. We present the results of numerical simulations of the magnetic field excited at the TE01δ mode of the dielectric resonator. We analyze the uniformity of the magnetic field over the volume and discuss the possibility of using the antenna for efficient excitation of nitrogen-vacancy centers in a whole commercially available sample.
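    For a cylindrical dielectric resonator, the TE01δ resonant frequency is often estimated with a Kajfez-type empirical fit; whether the authors used this fit is an assumption here, and the sample dimensions below are illustrative, not the paper's design values:

```python
def te01d_frequency_ghz(radius_mm, height_mm, eps_r):
    """Approximate TE01delta resonant frequency of a cylindrical
    dielectric resonator (Kajfez-type empirical fit, roughly valid
    for 0.5 < a/h < 5 and 30 < eps_r < 50):
        f[GHz] ~ 34 / (a * sqrt(eps_r)) * (a/h + 3.45),  a, h in mm.
    """
    a, h = radius_mm, height_mm
    return 34.0 / (a * eps_r**0.5) * (a / h + 3.45)

# Illustrative resonator: a = h = 5 mm, eps_r = 38 ceramic.
f_ghz = te01d_frequency_ghz(5.0, 5.0, 38.0)
```

    For NV-center work the resonator would be sized so this mode lands near the 2.87 GHz zero-field splitting; the final geometry is normally tuned with full-wave simulation rather than this fit.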

  8. A large high vacuum, high pumping speed space simulation chamber for electric propulsion

    NASA Technical Reports Server (NTRS)

    Grisnik, Stanley P.; Parkes, James E.

    1994-01-01

    Testing high power electric propulsion devices poses unique requirements on space simulation facilities. Very high pumping speeds are required to maintain high vacuum levels while handling large volumes of exhaust products. These pumping speeds are significantly higher than those available in most existing vacuum facilities. There is also a requirement for relatively large vacuum chamber dimensions to minimize facility wall/thruster plume interactions and to accommodate far field plume diagnostic measurements. A 4.57 m (15 ft) diameter by 19.2 m (63 ft) long vacuum chamber at NASA Lewis Research Center is described. The chamber utilizes oil diffusion pumps in combination with cryopanels to achieve high vacuum pumping speeds at high vacuum levels. The facility is computer controlled for all phases of operation from start-up, through testing, to shutdown. The computer control system increases the utilization of the facility and reduces the manpower requirements needed for facility operations.

  9. Hydrogen Epoch of Reionization Array (HERA) Calibrated FFT Correlator Simulation

    NASA Astrophysics Data System (ADS)

    Salazar, Jeffrey David; Parsons, Aaron

    2018-01-01

    The Hydrogen Epoch of Reionization Array (HERA) project is an astronomical radio interferometer array with a redundant baseline configuration. Interferometer arrays are used widely in radio astronomy because they have a variety of advantages over single-antenna systems. For example, they produce images (visibilities) closely matching those of a large antenna (such as the Arecibo Observatory), while both the hardware and maintenance costs are significantly lower. However, this method has some complications, one being the computational cost of correlating data from all of the antennas. A correlator is an electronic device that cross-correlates the data between the individual antennas; the results are what radio astronomers call visibilities. HERA, being in its early stages, utilizes a traditional correlator system, whose cost scales as N^2, where N is the number of antennas in the array. The purpose of a redundant baseline configuration is to allow the use of a more efficient Fast Fourier Transform (FFT) correlator; FFT correlators scale as N log2 N. The data acquired from this sort of setup, however, inherit geometric delays and uncalibrated antenna gains. This project simulates the process of calibrating signals from astronomical sources. Each signal "received" by an antenna in the simulation is given a random antenna gain and geometric delay. The "linsolve" Python module was used to solve for the unknown variables in the simulation (complex gains and delays), which then gave values for the true visibilities. This first version of the simulation only mimics a one-dimensional redundant telescope array detecting a small number of sources located in the volume above the antenna plane. Future versions, using GPUs, will handle a two-dimensional redundant array of telescopes detecting a large number of sources in the volume above the array.
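    The scaling argument above is easy to make concrete by counting correlation operations per spectral channel for the two architectures. HERA's planned build-out of 350 antennas is used for the comparison; constant prefactors are ignored, so only the scaling trend is meaningful:

```python
import math

def xcorr_cost(n):
    """Pairwise (traditional) correlator: one product per antenna pair,
    including autocorrelations -> N(N+1)/2, i.e. O(N^2)."""
    return n * (n + 1) // 2

def fft_cost(n):
    """FFT correlator over a redundant array: O(N log2 N) operations
    (prefactors ignored)."""
    return n * math.log2(n)

# The advantage grows quickly with array size.
ratio_350 = xcorr_cost(350) / fft_cost(350)   # HERA full build-out
```

    Even at this crude level, the pairwise correlator is over an order of magnitude more expensive than the FFT approach at HERA scale, which is the motivation for the redundant layout.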

  10. Cardiovascular effects of weightlessness and ground-based simulation

    NASA Technical Reports Server (NTRS)

    Sandler, Harold

    1988-01-01

    A large number of animal and human flight and ground-based studies were conducted to uncover the cardiovascular effects of weightlessness. Findings indicate changes in cardiovascular function during simulations and with spaceflight that lead to compromised function on reambulation and/or return to earth. This altered state, termed cardiovascular deconditioning, is most clearly manifest in the erect body position. Hemodynamic parameters indicate the presence of excessive tachycardia, hypotension (leading to presyncope in one-third of the subjects), decreased heart volume, decreased plasma and circulating blood volumes and loss of skeletal muscle mass, particularly in the lower limbs. No clinically harmful effects have been observed to date, but in-depth follow-ups were limited, as was available physiologic information. Available data concerning the causes for the observed changes indicate significant roles for mechanisms involved with body fluid-volume regulation, altered cardiac function, and the neurohumoral control of the peripheral circulation. Satisfactory countermeasures have not been found. Return to the preflight state was variable and only slightly dependent on flight duration. Future progress awaits availability of flight durations longer than several weeks.

  11. Galaxy clusters in local Universe simulations without density constraints: a long uphill struggle

    NASA Astrophysics Data System (ADS)

    Sorce, Jenny G.

    2018-06-01

    Galaxy clusters are excellent cosmological probes provided that their formation and evolution within the large scale environment are precisely understood. Studies with simulated galaxy clusters have therefore flourished. However, detailed comparisons between simulated and observed clusters and their population - the galaxies - are complicated by the diversity of clusters and their surrounding environments. An original way to legitimize the one-to-one comparison exercise down to the details, initiated by Bertschinger as early as 1987, is to produce simulations constrained to resemble the cluster under study within its large scale environment. Subsequently, several methods have emerged to produce simulations that look like the local Universe. This paper highlights one of these methods and the steps essential to obtaining simulations that not only resemble the local Large Scale Structure but also host the local clusters. It includes a new modeling of the radial peculiar velocity uncertainties that removes the observed correlation between the decrease of the simulated cluster masses and the decrease, with distance from us, of the amount of data used as constraints. This method is distinctive in that it uses solely radial peculiar velocities as constraints: no additional density constraints are required to obtain local cluster simulacra. The new resulting simulations host dark matter halos that match the most prominent local clusters, such as Coma. Zoom-in simulations of the latter, and of volumes larger than the inner sphere of radius 30h-1 Mpc, now become possible for studying local clusters and their effects. Mapping the local Sunyaev-Zel'dovich and Sachs-Wolfe effects can follow.

  12. Evaluation of Kirkwood-Buff integrals via finite size scaling: a large scale molecular dynamics study

    NASA Astrophysics Data System (ADS)

    Dednam, W.; Botha, A. E.

    2015-01-01

    Solvation of bio-molecules in water is severely affected by the presence of co-solvent within the hydration shell of the solute structure. Furthermore, since solute molecules can range from small molecules, such as methane, to very large protein structures, it is imperative to understand the detailed structure-function relationship on the microscopic level. For example, it is useful to know the conformational transitions that occur in protein structures. Although such an understanding can be obtained through large-scale molecular dynamics simulations, it is often the case that such simulations would require excessively large simulation times. In this context, Kirkwood-Buff theory, which connects the microscopic pair-wise molecular distributions to global thermodynamic properties, together with the recently developed technique called finite size scaling, may provide a better method to reduce system sizes, and hence also the computational times. In this paper, we present molecular dynamics trial simulations of biologically relevant low-concentration solvents, solvated by aqueous co-solvent solutions. In particular we compare two different methods of calculating the relevant Kirkwood-Buff integrals. The first (traditional) method computes running integrals over the radial distribution functions, which must be obtained from large system-size NVT or NpT simulations. The second, newer method employs finite size scaling to obtain the Kirkwood-Buff integrals directly by counting the particle number fluctuations in small, open sub-volumes embedded within a larger reservoir that can be well approximated by a much smaller simulation cell.
In agreement with previous studies, which made a similar comparison for aqueous co-solvent solutions, without the additional solvent, we conclude that the finite size scaling method is also applicable to the present case, since it can produce computationally more efficient results which are equivalent to the more costly radial distribution function method.
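    The particle-counting route to a Kirkwood-Buff integral can be illustrated with a minimal sketch; this is not the authors' code, and an ideal gas of uniform random points stands in for the simulated fluid. For such a gas the integral should be near zero, apart from the small closed-system bias (of order -V_sub/N) that finite-size corrections are designed to handle:

```python
import numpy as np

# Fluctuation route to a (like-species) Kirkwood-Buff integral:
#   G = V_sub * (<N^2> - <N>^2 - <N>) / <N>^2 ,
# estimated by counting particles in small open sub-volumes embedded in a
# larger reservoir. Uniform random points model an ideal gas, so G ~ 0.
rng = np.random.default_rng(1)
L, n_part, l_sub = 1.0, 10_000, 0.1          # box edge, particles, sub-box edge
pts = rng.uniform(0.0, L, size=(n_part, 3))

counts = []
for _ in range(5000):                        # random sub-volume placements
    lo = rng.uniform(0.0, L - l_sub, size=3)
    inside = np.all((pts >= lo) & (pts < lo + l_sub), axis=1)
    counts.append(inside.sum())
counts = np.asarray(counts, dtype=float)

v_sub = l_sub**3
mean, var = counts.mean(), counts.var()
G = v_sub * (var - mean) / mean**2           # ~ -v_sub/n_part for a closed box
print(G)
```

    In a real application the counts would come from sub-volumes of an interacting-fluid trajectory, and G would be extrapolated in sub-volume size rather than read off at one size.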

  13. Tetrahedral and polyhedral mesh evaluation for cerebral hemodynamic simulation--a comparison.

    PubMed

    Spiegel, Martin; Redel, Thomas; Zhang, Y; Struffert, Tobias; Hornegger, Joachim; Grossman, Robert G; Doerfler, Arnd; Karmonik, Christof

    2009-01-01

    Computational fluid dynamics (CFD) based on patient-specific medical imaging data has found widespread use for visualizing and quantifying hemodynamics in cerebrovascular diseases such as cerebral aneurysms or stenotic vessels. This paper focuses on optimizing mesh parameters for CFD simulation of cerebral aneurysms. Valid blood flow simulations strongly depend on mesh quality. Meshes with a coarse spatial resolution may lead to an inaccurate flow pattern. Meshes with a large number of elements result in unnecessarily high computation times, which is undesirable should CFD be used for planning in the interventional setting. Most CFD simulations reported for these vascular pathologies have used tetrahedral meshes. We illustrate the use of polyhedral volume elements in comparison to tetrahedral meshing on two different geometries, a sidewall aneurysm of the internal carotid artery and a basilar bifurcation aneurysm. The spatial mesh resolution ranges between 5,119 and 228,118 volume elements. The evaluation of the different meshes was based on the wall shear stress, previously identified as one possible parameter for assessing aneurysm growth. Polyhedral meshes showed better accuracy, lower memory demand, shorter computation times and faster convergence behavior (on average 369 iterations less).

  14. Brownian dynamics simulations of lipid bilayer membrane with hydrodynamic interactions in LAMMPS

    NASA Astrophysics Data System (ADS)

    Fu, Szu-Pei; Young, Yuan-Nan; Peng, Zhangli; Yuan, Hongyan

    2016-11-01

    Lipid bilayer membranes have been extensively studied by coarse-grained molecular dynamics simulations. Numerical efficiencies have been reported in cases of aggressive coarse-graining, where several lipids are coarse-grained into a particle of size 4-6 nm, so that there is only one particle in the thickness direction. Yuan et al. proposed a pair potential between these one-particle-thick coarse-grained lipid particles to capture the mechanical properties of a lipid bilayer membrane (such as gel-fluid-gas phase transitions of lipids, diffusion, and bending rigidity). In this work we implement such an interaction potential in LAMMPS to simulate large-scale lipid systems such as vesicles and red blood cells (RBCs). We also consider the effect of the cytoskeleton on the lipid membrane dynamics as a model for red blood cell (RBC) dynamics, and incorporate coarse-grained water molecules to account for hydrodynamic interactions. The interaction between the coarse-grained water molecules (explicit solvent molecules) is modeled as a Lennard-Jones (L-J) potential. We focus on two sets of LAMMPS simulations: 1. vesicle shape transitions with varying enclosed volume; 2. RBC shape transitions with different enclosed volumes. This work is funded by NSF under Grant DMS-1222550.

  16. Assessment of the Impacts of ACLS on the ISS Life Support System Using Dynamic Simulations in V-HAB

    NASA Technical Reports Server (NTRS)

    Putz, Daniel; Olthoff, Claas; Ewert, Michael; Anderson, Molly

    2016-01-01

    The Advanced Closed Loop System (ACLS) is currently under development by Airbus Defense and Space and is slated for launch to the International Space Station (ISS) in 2017. The addition of new hardware into an already complex system such as the ISS life support system (LSS) always poses operational risks. It is therefore important to understand the impacts ACLS will have on the existing systems to ensure smooth operations for the ISS. This analysis can be done by using dynamic computer simulations, and one possible tool for such a simulation is the Virtual Habitat (V-HAB). Based on MATLAB, V-HAB has been under development at the Institute of Astronautics of the Technical University of Munich (TUM) since 2004 and in the past has been successfully used to simulate the ISS life support systems. The existing V-HAB ISS simulation model treated the interior volume of the space station as one large, ideally-stirred container. This model was improved to allow the calculation of the atmospheric composition inside individual modules of the ISS by splitting it into twelve distinct volumes. The virtual volumes are connected by a simulation of the inter-module ventilation flows. This allows for a combined simulation of the LSS hardware and the atmospheric composition aboard the ISS. A dynamic model of ACLS is added to the ISS simulation, and several different operating modes for both ACLS and the existing ISS life support systems are studied to determine the impacts of ACLS on the rest of the system. The results suggest that the US, Russian and ACLS CO2 systems can operate at the same time without impeding each other. Furthermore, based on the results of this analysis, the US and ACLS Sabatier systems can be operated in parallel as well to achieve a very low CO2 concentration in the cabin atmosphere.
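    As a hedged illustration of the modelling change described above (a toy model, not V-HAB itself, with invented module volumes and ventilation flows), splitting one ideally-stirred cabin into several connected volumes amounts to a simple mass balance per module, with CO2 carried between modules by the inter-module airflows:

```python
import numpy as np

# Toy compartment model: three module volumes exchange air through fixed
# inter-module ventilation flows arranged in a loop; each module's CO2
# volume fraction evolves by explicit-Euler mass balance.
vols = np.array([100.0, 60.0, 80.0])       # module free volumes, m^3 (assumed)
Q = np.array([[0.0, 0.2, 0.0],             # Q[i, j]: airflow i -> j, m^3/s
              [0.0, 0.0, 0.2],
              [0.2, 0.0, 0.0]])            # a simple ventilation loop
c = np.array([0.005, 0.003, 0.004])        # initial CO2 volume fractions
dt = 1.0                                   # time step, s
for _ in range(3600):                      # one hour of mixing
    flow_in = Q.T @ c                      # CO2 carried into each module
    flow_out = Q.sum(axis=1) * c           # CO2 carried out of each module
    c = c + dt * (flow_in - flow_out) / vols
print(c)   # concentrations relax toward a common mixed value
```

    Because every outflow is another module's inflow, total CO2 is conserved by construction; adding scrubbers or crew metabolic loads would appear as extra source/sink terms in the same balance.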

  17. Assessment of the Impacts of ACLS on the ISS Life Support System using Dynamic Simulations in V-HAB

    NASA Technical Reports Server (NTRS)

    Puetz, Daniel; Olthoff, Claas; Ewert, Michael K.; Anderson, Molly S.

    2016-01-01

    The Advanced Closed Loop System (ACLS) is currently under development by Airbus Defense and Space and is slated for launch to the International Space Station (ISS) in 2017. The addition of new hardware into an already complex system such as the ISS life support system (LSS) always poses operational risks. It is therefore important to understand the impacts ACLS will have on the existing systems to ensure smooth operations for the ISS. This analysis can be done by using dynamic computer simulations, and one possible tool for such a simulation is Virtual Habitat (V-HAB). Based on Matlab (Registered Trademark), V-HAB has been under development at the Institute of Astronautics of the Technical University Munich (TUM) since 2006 and in the past has been successfully used to simulate the ISS life support systems. The existing V-HAB ISS simulation model treated the interior volume of the space station as one large, ideally-stirred container. This model was improved to allow the calculation of the atmospheric composition inside the individual modules of the ISS by splitting it into ten distinct volumes. The virtual volumes are connected by a simulation of the inter-module ventilation flows. This allows for a combined simulation of the LSS hardware and the atmospheric composition aboard the ISS. A dynamic model of ACLS is added to the ISS simulation and different operating modes for both ACLS and the existing ISS life support systems are studied to determine the impacts of ACLS on the rest of the system. The results suggest that the US, Russian and ACLS CO2 systems can operate at the same time without impeding each other. Furthermore, based on the results of this analysis, the US and ACLS Sabatier systems can be operated in parallel as well to achieve the highest possible CO2 recycling together with a low CO2 concentration.

  18. Materials Characterisation and Analysis for Flow Simulation of Liquid Resin Infusion

    NASA Astrophysics Data System (ADS)

    Sirtautas, J.; Pickett, A. K.; George, A.

    2015-06-01

    Liquid Resin Infusion (LRI) processes including VARI and VARTM have received increasing attention in recent years, particularly for infusion of large parts, or for low volume production. This method avoids the need for costly matched metal tooling as used in Resin Transfer Moulding (RTM) and can provide fast infusion if used in combination with flow media. Full material characterisation for LRI analysis requires models for three dimensional fabric permeability as a function of fibre volume content, fabric through-thickness compliance as a function of resin pressure, flow media permeability and resin viscosity. The characterisation of fabric relaxation during infusion is usually determined from cyclic compaction tests on saturated fabrics. This work presents an alternative method to determine the compressibility by using LRI flow simulation and fitting a model to experimental thickness measurements during LRI. The flow media is usually assumed to have isotropic permeability, but this work shows greater simulation accuracy from combining the flow media with separation plies as a combined orthotropic material. The permeability of this combined media can also be determined by fitting the model with simulation to LRI flow measurements. The constitutive models and the finite element solution were validated by simulation of the infusion of a complex aerospace demonstrator part.

  19. Particle tracking approach for transport in three-dimensional discrete fracture networks: Particle tracking in 3-D DFNs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makedonska, Nataliia; Painter, Scott L.; Bui, Quan M.

    The discrete fracture network (DFN) model is a method to mimic discrete pathways for fluid flow through a fractured low-permeable rock mass, and may be combined with particle tracking simulations to address solute transport. However, experience has shown that it is challenging to obtain accurate transport results in three-dimensional DFNs because of the high computational burden and difficulty in constructing a high-quality unstructured computational mesh on simulated fractures. We present a new particle tracking capability, which is adapted to control volume (Voronoi polygons) flow solutions on unstructured grids (Delaunay triangulations) on three-dimensional DFNs. The locally mass-conserving finite-volume approach eliminates mass balance-related problems during particle tracking. The scalar fluxes calculated for each control volume face by the flow solver are used to reconstruct a Darcy velocity at each control volume centroid. The groundwater velocities can then be continuously interpolated to any point in the domain of interest. The control volumes at fracture intersections are split into four pieces, and the velocity is reconstructed independently on each piece, which results in multiple groundwater velocities at the intersection, one for each fracture on each side of the intersection line. This technique enables detailed particle transport representation through a complex DFN structure. Verified for small DFNs, the new simulation capability enables numerical experiments on advective transport in large DFNs to be performed. As a result, we demonstrate this particle transport approach on a DFN model using parameters similar to those of crystalline rock at a proposed geologic repository for spent nuclear fuel in Forsmark, Sweden.
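    The velocity-reconstruction step can be sketched as follows. This is a hypothetical minimal example, not the actual implementation: the scalar fluxes on the faces of a single control volume, with assumed face normals and areas, each constrain one component of the Darcy velocity via v . n_f = q_f / A_f, and the overdetermined system is inverted by least squares:

```python
import numpy as np

# Reconstruct a cell-centroid Darcy velocity from scalar face fluxes.
# Each face contributes one equation  v . n_f = q_f / A_f.
v_true = np.array([1.0, -0.5, 0.25])         # synthetic "exact" velocity

# unit normals and areas of a polyhedral control volume's faces (assumed)
normals = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                    [0, -1, 0], [0, 0, 1], [0, 0, -1],
                    [0.6, 0.8, 0]], dtype=float)
areas = np.array([1.0, 1.0, 2.0, 2.0, 0.5, 0.5, 1.5])
fluxes = areas * (normals @ v_true)          # what a finite-volume solver returns

# least-squares inversion of the face-normal equations
v_rec, *_ = np.linalg.lstsq(normals, fluxes / areas, rcond=None)
print(v_rec)   # recovers v_true when the fluxes are consistent
```

    At fracture intersections the abstract's scheme repeats this reconstruction independently on each of the four split control-volume pieces, yielding one velocity per fracture per side of the intersection line.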

  20. Particle tracking approach for transport in three-dimensional discrete fracture networks: Particle tracking in 3-D DFNs

    DOE PAGES

    Makedonska, Nataliia; Painter, Scott L.; Bui, Quan M.; ...

    2015-09-16

    The discrete fracture network (DFN) model is a method to mimic discrete pathways for fluid flow through a fractured low-permeable rock mass, and may be combined with particle tracking simulations to address solute transport. However, experience has shown that it is challenging to obtain accurate transport results in three-dimensional DFNs because of the high computational burden and difficulty in constructing a high-quality unstructured computational mesh on simulated fractures. We present a new particle tracking capability, which is adapted to control volume (Voronoi polygons) flow solutions on unstructured grids (Delaunay triangulations) on three-dimensional DFNs. The locally mass-conserving finite-volume approach eliminates mass balance-related problems during particle tracking. The scalar fluxes calculated for each control volume face by the flow solver are used to reconstruct a Darcy velocity at each control volume centroid. The groundwater velocities can then be continuously interpolated to any point in the domain of interest. The control volumes at fracture intersections are split into four pieces, and the velocity is reconstructed independently on each piece, which results in multiple groundwater velocities at the intersection, one for each fracture on each side of the intersection line. This technique enables detailed particle transport representation through a complex DFN structure. Verified for small DFNs, the new simulation capability enables numerical experiments on advective transport in large DFNs to be performed. As a result, we demonstrate this particle transport approach on a DFN model using parameters similar to those of crystalline rock at a proposed geologic repository for spent nuclear fuel in Forsmark, Sweden.

  1. An interpolation-free ALE scheme for unsteady inviscid flows computations with large boundary displacements over three-dimensional adaptive grids

    NASA Astrophysics Data System (ADS)

    Re, B.; Dobrzynski, C.; Guardone, A.

    2017-07-01

    A novel strategy to solve the finite volume discretization of the unsteady Euler equations within the Arbitrary Lagrangian-Eulerian framework over tetrahedral adaptive grids is proposed. The volume changes due to local mesh adaptation are treated as continuous deformations of the finite volumes and are taken into account by adding fictitious numerical fluxes to the governing equations. This peculiar interpretation makes it possible to avoid any explicit interpolation of the solution between different grids and to compute grid velocities such that the Geometric Conservation Law is automatically fulfilled even when the connectivity changes. The solution on the new grid is obtained through standard ALE techniques, thus preserving the underlying scheme properties, such as conservativeness, stability and monotonicity. The adaptation procedure includes node insertion, node deletion, edge swapping and point relocation, and it is exploited both to enhance grid quality after boundary movement and to modify the grid spacing to increase solution accuracy. The presented approach is assessed by three-dimensional simulations of steady and unsteady flow fields. The capability of dealing with large boundary displacements is demonstrated by computing the flow around the translating infinite- and finite-span NACA 0012 wing moving through the domain at the flight speed. The proposed adaptive scheme is applied also to the simulation of a pitching infinite-span wing, where the two-dimensional character of the flow is well reproduced despite the three-dimensional unstructured grid. Finally, the scheme is exploited in a piston-induced shock-tube problem to take into account simultaneously the large deformation of the domain and the shock wave. In all tests, mesh adaptation plays a crucial role.
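    The role of the Geometric Conservation Law invoked above can be demonstrated with a minimal one-dimensional sketch (invented for illustration, far simpler than the paper's three-dimensional scheme): if the cell volumes are updated with exactly the same face velocities used in the ALE fluxes, a uniform flow passes through an arbitrarily moving mesh unchanged:

```python
import numpy as np

# 1-D ALE free-stream test: first-order upwind advection at speed a on a
# mesh whose faces move with velocities w. The ALE flux uses the relative
# speed (a - w), and the GCL requires the cell volumes to change by
# exactly dt * (w_right - w_left); then a uniform state stays uniform.
rng = np.random.default_rng(3)
a, dt, n = 1.0, 0.01, 20
x = np.linspace(0.0, 1.0, n + 1)                 # face positions
u = np.full(n, 2.5)                              # uniform state

for _ in range(50):
    w = 0.05 * rng.uniform(-1, 1, n + 1)         # face velocities
    w[0] = w[-1] = 0.0                           # fixed domain ends
    vol_old = np.diff(x)
    lam = a - w                                  # flux speed seen by each face
    uf = np.where(lam[1:-1] > 0, u[:-1], u[1:])  # upwind state at interior faces
    F = np.empty(n + 1)
    F[1:-1] = lam[1:-1] * uf
    F[0], F[-1] = lam[0] * u[0], lam[-1] * u[-1]
    x = x + dt * w                               # move the mesh
    vol_new = np.diff(x)                         # GCL-consistent new volumes
    u = (vol_old * u - dt * np.diff(F)) / vol_new
print(np.ptp(u))   # spread of u stays at round-off: free stream preserved
```

    Replacing vol_new by anything not derived from the same face velocities (e.g. remeshing plus interpolation) would contaminate the uniform state, which is exactly what the paper's fictitious-flux treatment of adaptation avoids.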

  2. Experimental realization of a terahertz all-dielectric metasurface absorber.

    PubMed

    Liu, Xinyu; Fan, Kebin; Shadrivov, Ilya V; Padilla, Willie J

    2017-01-09

    Metamaterial absorbers consisting of metal, metal-dielectric, or dielectric materials have been realized across much of the electromagnetic spectrum and have demonstrated novel properties and applications. However, most absorbers utilize metals and thus are limited in applicability due to their low melting point, high Ohmic loss and high thermal conductivity. Other approaches rely on large dielectric structures and/or a supporting dielectric substrate as a loss mechanism, thereby realizing large absorption volumes. Here we present a terahertz (THz) all-dielectric metasurface absorber based on hybrid dielectric waveguide resonances. We tune the metasurface geometry in order to overlap electric and magnetic dipole resonances at the same frequency, thus achieving an experimental absorption of 97.5%. A simulated dielectric metasurface achieves a total absorption coefficient enhancement factor of F_T = 140, with a small absorption volume. Our experimental results are well described by theory and simulations and are not limited to the THz range, but may be extended to microwave, infrared and optical frequencies. The concept of an all-dielectric metasurface absorber offers a new route for control of the emission and absorption of electromagnetic radiation from surfaces, with potential applications in energy harvesting, imaging, and sensing.

  3. Large eddy simulation of soot evolution in an aircraft combustor

    NASA Astrophysics Data System (ADS)

    Mueller, Michael E.; Pitsch, Heinz

    2013-11-01

    An integrated kinetics-based Large Eddy Simulation (LES) approach for soot evolution in turbulent reacting flows is applied to the simulation of a Pratt & Whitney aircraft gas turbine combustor, and the results are analyzed to provide insights into the complex interactions of the hydrodynamics, mixing, chemistry, and soot. The integrated approach includes detailed models for soot, combustion, and the unresolved interactions between soot, chemistry, and turbulence. The soot model is based on the Hybrid Method of Moments and detailed descriptions of soot aggregates and the various physical and chemical processes governing their evolution. The detailed kinetics of jet fuel oxidation and soot precursor formation is described with the Radiation Flamelet/Progress Variable model, which has been modified to account for the removal of soot precursors from the gas-phase. The unclosed filtered quantities in the soot and combustion models, such as source terms, are closed with a novel presumed subfilter PDF approach that accounts for the high subfilter spatial intermittency of soot. For the combustor simulation, the integrated approach is combined with a Lagrangian parcel method for the liquid spray and state-of-the-art unstructured LES technology for complex geometries. Two overall fuel-to-air ratios are simulated to evaluate the ability of the model to make not only absolute predictions but also quantitative predictions of trends. The Pratt & Whitney combustor is a Rich-Quench-Lean combustor in which combustion first occurs in a fuel-rich primary zone characterized by a large recirculation zone. Dilution air is then added downstream of the recirculation zone, and combustion continues in a fuel-lean secondary zone. The simulations show that large quantities of soot are formed in the fuel-rich recirculation zone, and, furthermore, the overall fuel-to-air ratio dictates both the dominant soot growth process and the location of maximum soot volume fraction. 
At the higher fuel-to-air ratio, the maximum soot volume fraction is found inside the recirculation zone; at the lower fuel-to-air ratio, turbulent fluctuations in the mixture fraction promote the oxidation of soot inside the recirculation zone and suppress the accumulation of a large soot volume fraction. Downstream, soot exits the combustor in intermittent fuel-rich pockets that are not mixed during the injection of dilution air and subsequent secondary fuel-lean combustion. At the higher fuel-to-air ratio, the frequency of these fuel-rich pockets is increased, leading to higher soot emissions from the combustor. Quantitatively, the soot emissions from the combustor are overpredicted by about 50%, which is a substantial improvement over previous works utilizing RANS to predict such emissions. In addition, the ratio of soot emissions between the two fuel-to-air ratios predicted by LES compares very favorably with the experimental measurements. Furthermore, soot growth is dominated by an acetylene-based pathway rather than an aromatic-based pathway, which is usually the dominant mechanism in nonpremixed flames. This finding is the result of the interactions between the hydrodynamics, mixing, chemistry, and soot in the recirculation zone and the resulting residence times of soot at various mixture fractions (compositions), which are not the same in this complex recirculating flow as in nonpremixed jet flames.

  4. The Sensitivity of West African Squall Line Water Budgets to Land Cover

    NASA Technical Reports Server (NTRS)

    Mohr, Karen I.; Baker, R. David; Tao, Wei-Kuo; Famiglietti, James S.; Starr, David O'C. (Technical Monitor)

    2001-01-01

    This study used a two-dimensional coupled land/atmosphere (cloud-resolving) model to investigate the influence of land cover on the water budgets of squall lines in the Sahel. Study simulations used the same initial sounding and one of three different land covers, a sparsely vegetated semi-desert, a grassy savanna, and a dense evergreen broadleaf forest. All simulations began at midnight and ran for 24 hours to capture a full diurnal cycle. In the morning, the latent heat flux, boundary layer mixing ratio, and moist static energy in the boundary layer exhibited notable variations among the three land covers. The broadleaf forest had the highest latent heat flux, the shallowest, moistest, slowest growing boundary layer, and significantly more moist static energy per unit area than the savanna and semi-desert. Although all simulations produced squall lines by early afternoon, the broadleaf forest had the most intense, longest-lived squall lines with 29% more rainfall than the savanna and 37% more than the semi-desert. The sensitivity of the results to vegetation density, initial sounding humidity, and grid resolution was also assessed. There were greater differences in rainfall among land cover types than among simulations of the same land cover with varying amounts of vegetation. Small changes in humidity were equivalent in effect to large changes in land cover, producing large changes in the condensate and rainfall. Decreasing the humidity had a greater effect on rainfall volume than increasing the humidity. Reducing the grid resolution from 1.5 km to 0.5 km decreased the temperature and humidity of the cold pools and increased the rain volume.

  5. Volume I: Select Papers

    DTIC Science & Technology

    2010-08-01

    [Table-of-contents excerpt: Pressurization Simulations; 3.2 NVT Uniaxial Strain Simulations; 3.3 Stacking Mismatch Simulations. Figure 2: Pressure versus normalized volume; circles are simulation results.]

  6. CFD simulation of an unbaffled stirred tank reactor driven by a magnetic rod: assessment of turbulence models.

    PubMed

    Li, Jiajia; Deng, Baoqing; Zhang, Bing; Shen, Xiuzhong; Kim, Chang Nyung

    2015-01-01

    A simulation of an unbaffled stirred tank reactor driven by a magnetic stirring rod was carried out in a moving reference frame. The free surface of the unbaffled stirred tank was captured by an Euler-Euler model coupled with the volume of fluid (VOF) method. The re-normalization group (RNG) k-ɛ model, large eddy simulation (LES) model and detached eddy simulation (DES) model were evaluated for simulating the flow field in the stirred tank. All turbulence models can reproduce the tangential velocity in an unbaffled stirred tank at rotational speeds of 150 rpm, 250 rpm and 400 rpm. Radial velocity is underpredicted by all three models. The LES model and the RNG k-ɛ model predict the tangential velocity and the axial velocity better, respectively. The RNG k-ɛ model is recommended for the simulation of the flow in an unbaffled stirred tank with a magnetic rod due to its lower computational effort.

  7. INTEGRATING DATA ANALYTICS AND SIMULATION METHODS TO SUPPORT MANUFACTURING DECISION MAKING

    PubMed Central

    Kibira, Deogratias; Hatim, Qais; Kumara, Soundar; Shao, Guodong

    2017-01-01

    Modern manufacturing systems are installed with smart devices such as sensors that monitor system performance and collect data to manage uncertainties in their operations. However, multiple parameters and variables affect system performance, making it impossible for a human to make informed decisions without systematic methodologies and tools. Further, the large volume and variety of streaming data collected are beyond simulation analysis alone, since simulation models are run with well-prepared data. Novel approaches, combining different methods, are needed to use this data for making guided decisions. This paper proposes a methodology whereby the parameters that most affect system performance are extracted from the data using data analytics methods. These parameters are used to develop scenarios for simulation inputs; system optimizations are performed on simulation data outputs. A case study of a machine shop demonstrates the proposed methodology. This paper also reviews candidate standards for data collection, simulation, and systems interfaces. PMID:28690363
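    The first step of the proposed pipeline, extracting the parameters that most affect performance, can be sketched with synthetic data; a plain correlation screen stands in here for whatever data-analytics method is actually used, and all names and sizes are invented for illustration:

```python
import numpy as np

# Rank candidate process parameters by how strongly they drive a
# performance metric, then keep the top ones as simulation-scenario
# inputs. Synthetic data: y depends on parameters 1 and 3 only.
rng = np.random.default_rng(2)
n = 2000
X = rng.normal(size=(n, 5))                       # 5 streamed parameters
y = 3.0 * X[:, 1] - 2.0 * X[:, 3] + rng.normal(scale=0.1, size=n)

# absolute Pearson correlation of each parameter with the metric
corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(5)])
top = np.argsort(corr)[::-1][:2]                  # parameters for scenarios
print(sorted(top.tolist()))
```

    In practice a nonlinear importance measure (e.g. tree-based feature importance) would replace the correlation screen, but the workflow is the same: screen the streaming data first, then spend simulation runs only on the influential parameters.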

  8. Bulk properties and near-critical behaviour of SiO2 fluid

    NASA Astrophysics Data System (ADS)

    Green, Eleanor C. R.; Artacho, Emilio; Connolly, James A. D.

    2018-06-01

    Rocky planets and satellites form through impact and accretion processes that often involve silicate fluids at extreme temperatures. First-principles molecular dynamics (FPMD) simulations have been used to investigate the bulk thermodynamic properties of SiO2 fluid at high temperatures (4000-6000 K) and low densities (500-2240 kg m-3), conditions which are relevant to protoplanetary disc condensation. Liquid SiO2 is highly networked at the upper end of this density range, but depolymerises with increasing temperature and volume, in a process characterised by the formation of oxygen-oxygen (O=O) pairs. The onset of vaporisation is closely associated with the depolymerisation process, and is likely to be non-stoichiometric at high temperature, initiated via the exsolution of O2 molecules to leave a Si-enriched fluid. By 6000 K the simulated fluid is supercritical. A large anomaly in the constant-volume heat capacity occurs near the critical temperature. We present tabulated thermodynamic properties for silica fluid that reconcile observations from FPMD simulations with current knowledge of the SiO2 melting curve and experimental Hugoniot curves.

  9. Computational comparison of high and low viscosity micro-scale droplets splashing on a dry surface

    NASA Astrophysics Data System (ADS)

    Boelens, Arnout; Latka, Andrzej; de Pablo, Juan

    2015-11-01

    Depending on viscosity, a droplet impacting a dry surface can splash immediately upon impact, a so-called prompt splash, or after initially spreading on the surface, a late splash. One of the open questions in splashing is whether the mechanism behind both kinds of splashing is the same. Simulation results are presented comparing splashing of low-viscosity ethanol with high-viscosity silicone oil in air. The droplets are several hundred microns in diameter. The simulations are 2D, and are performed using a Volume of Fluid approach with a Finite Volume technique. The contact line is described using the Generalized Navier Boundary Condition. Both the gas phase and the liquid phase are assumed to be incompressible. The results of the simulations show good agreement with experiments. Observations that are reproduced include the suppression of splashing at reduced ambient pressure, and the details of liquid sheet formation and breakup. While the liquid sheet ejected in an early splash breaks up at its far edge, the liquid sheet ejected in a late splash breaks up close to the droplet.

  10. Pushing down the low-mass halo concentration frontier with the Lomonosov cosmological simulations

    NASA Astrophysics Data System (ADS)

    Pilipenko, Sergey V.; Sánchez-Conde, Miguel A.; Prada, Francisco; Yepes, Gustavo

    2017-12-01

    We introduce the Lomonosov suite of high-resolution N-body cosmological simulations covering a full box of size 32 h⁻¹ Mpc with low-mass-resolution particles (2 × 10⁷ h⁻¹ M⊙) and three zoom-in simulations of overdense, underdense and mean-density regions at much higher particle resolution (4 × 10⁴ h⁻¹ M⊙). The main purpose of this simulation suite is to extend the concentration-mass relation of dark matter haloes down to masses below those typically available in large cosmological simulations. The three different density regions available at higher resolution provide a better understanding of the effect of the local environment on halo concentration, known to be potentially important for small simulation boxes and small halo masses. Yet, we find the correction to be small in comparison with the scatter of halo concentrations. We conclude that zoom simulations, despite being representative of only a limited volume of the Universe, can be effectively used for the measurement of halo concentrations, at least at the halo masses probed by our simulations. In any case, after a precise characterization of this effect, we develop a robust technique to extrapolate the concentration values found in zoom simulations to larger volumes with greater accuracy. Altogether, Lomonosov provides a measure of the concentration-mass relation in the halo mass range 10⁷-10¹⁰ h⁻¹ M⊙ with superb halo statistics. This work represents a first important step to measure halo concentrations at intermediate, yet vastly unexplored halo mass scales, down to the smallest ones. All Lomonosov data and files are public for the community's use.
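    The concentration-mass relation itself is commonly summarized as a power law, c(M) = c0 (M/M_pivot)^-alpha, fitted in log-log space. The sketch below fits such a law to synthetic haloes with lognormal scatter; all numbers are illustrative placeholders, not Lomonosov's measurements.

```python
import numpy as np

# Synthetic halo sample: masses drawn log-uniformly over 1e7-1e10,
# concentrations following c = 12 * (M/1e8)^-0.08 with lognormal scatter.
rng = np.random.default_rng(7)
mass = 10 ** rng.uniform(7.0, 10.0, 2000)        # halo masses (arbitrary units)
c_true = 12.0 * (mass / 1e8) ** -0.08
conc = c_true * rng.lognormal(0.0, 0.25, mass.size)

# Least-squares power-law fit in log-log space.
slope, intercept = np.polyfit(np.log10(mass / 1e8), np.log10(conc), 1)
c0, alpha = 10 ** intercept, -slope
print(round(c0, 1), round(alpha, 3))
```

    With enough haloes the fit recovers the input normalization and slope to within the statistical scatter, which is the sense in which "superb halo statistics" tightens the relation.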

  11. 3D conformal MRI-controlled transurethral ultrasound prostate therapy: validation of numerical simulations and demonstration in tissue-mimicking gel phantoms.

    PubMed

    Burtnyk, Mathieu; N'Djin, William Apoutou; Kobelevskiy, Ilya; Bronskill, Michael; Chopra, Rajiv

    2010-11-21

    MRI-controlled transurethral ultrasound therapy uses a linear array of transducer elements and active temperature feedback to create volumes of thermal coagulation shaped to predefined prostate geometries in 3D. The specific aims of this work were to demonstrate the accuracy and repeatability of producing large volumes of thermal coagulation (>10 cc) that conform to 3D human prostate shapes in a tissue-mimicking gel phantom, and to evaluate quantitatively the accuracy with which numerical simulations predict these 3D heating volumes under carefully controlled conditions. Eleven conformal 3D experiments were performed in a tissue-mimicking phantom within a 1.5T MR imager to obtain non-invasive temperature measurements during heating. Temperature feedback was used to control the rotation rate and ultrasound power of transurethral devices with up to five 3.5 × 5 mm active transducer elements. Heating patterns shaped to human prostate geometries were generated using devices operating at 4.7 or 8.0 MHz with surface acoustic intensities of up to 10 W cm(-2). Simulations were informed by transducer surface velocity measurements acquired with a scanning laser vibrometer, enabling improved calculations of the acoustic pressure distribution in a gel phantom. Temperature dynamics were determined according to a finite-difference time-domain (FDTD) solution of Pennes' bioheat transfer equation (BHTE). The 3D heating patterns produced in vitro were shaped very accurately to the prostate target volumes, within the spatial resolution of the MRI thermometry images. The volume of the treatment difference falling outside ± 1 mm of the target boundary was, on average, 0.21 cc or 1.5% of the prostate volume. The numerical simulations predicted the extent and shape of the coagulation boundary produced in gel to within (mean ± stdev [min, max]): 0.5 ± 0.4 [-1.0, 2.1] and -0.05 ± 0.4 [-1.2, 1.4] mm for the treatments at 4.7 and 8.0 MHz, respectively.
The temperatures across all MRI thermometry images were predicted within -0.3 ± 1.6 °C and 0.1 ± 0.6 °C, inside and outside the prostate respectively, and the treatment time to within 6.8 min. The simulations also showed excellent agreement in regions of sharp temperature gradients near the transurethral and endorectal cooling devices. Conformal 3D volumes of thermal coagulation can be precisely matched to prostate shapes with transurethral ultrasound devices and active MRI temperature feedback. The accuracy of numerical simulations for MRI-controlled transurethral ultrasound prostate therapy was validated experimentally, reinforcing their utility as an effective treatment planning tool.
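    The forward model named above, a finite-difference solution of Pennes' bioheat equation, can be sketched in one dimension as an explicit update with conduction, perfusion, and an acoustic heating term. All tissue constants and the heating pattern below are illustrative placeholders, not the paper's parameters.

```python
import numpy as np

# Minimal 1D explicit finite-difference sketch of Pennes' bioheat
# equation: rho*c*dT/dt = k*d2T/dx2 + w_b*c_b*(T_a - T) + Q.
rho, c = 1000.0, 3600.0          # density (kg/m^3), specific heat (J/kg/K)
k = 0.5                          # thermal conductivity (W/m/K)
wb, cb, Ta = 2.0, 3600.0, 37.0   # perfusion, blood heat capacity, arterial T
dx, dt = 1e-3, 0.05              # grid spacing (m), time step (s)

T = np.full(101, 37.0)           # start at body temperature
Q = np.zeros_like(T)
Q[45:56] = 2e5                   # acoustic heating in a focal zone (W/m^3)

for _ in range(1200):            # simulate 60 s
    lap = (np.roll(T, 1) - 2 * T + np.roll(T, -1)) / dx**2
    T = T + dt / (rho * c) * (k * lap + wb * cb * (Ta - T) + Q)
    T[0] = T[-1] = 37.0          # boundaries held at body temperature

print(round(float(T.max()), 2))
```

    The explicit scheme is stable here because dt is well below dx²/(2k/(rho·c)); a production treatment-planning code would add the 3D geometry, temperature-dependent tissue properties, and the cooling devices described in the abstract.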

  12. Numerical simulation and optimization of casting process for complex pump

    NASA Astrophysics Data System (ADS)

    Liu, Xueqin; Dong, Anping; Wang, Donghong; Lu, Yanling; Zhu, Guoliang

    2017-09-01

    The pump body is a large casting with a complicated structure and non-uniform wall thickness, which easily gives rise to casting defects. After analysing the material and structural characteristics of the high-pressure pump, the initial top-gating process was simulated using the numerical simulation software ProCAST. The filling process was smooth overall, with no misrun observed. However, circular shrinkage defects appeared at the bottom of the casting during solidification. The casting parameters were then optimized and chills (cold iron) were added at the bottom. The shrinkage weight was reduced from 0.00167 g to 0.0005 g, and the porosity volume from 1.39 cm3 to 0.41 cm3. The optimized scheme was simulated and verified experimentally; the defects were significantly reduced.

  13. Extraction and LOD control of colored interval volumes

    NASA Astrophysics Data System (ADS)

    Miyamura, Hiroko N.; Takeshima, Yuriko; Fujishiro, Issei; Saito, Takafumi

    2005-03-01

    Interval volume serves as a generalized isosurface and represents a three-dimensional subvolume for which the associated scalar field values lie within a user-specified closed interval. In general, it is not an easy task for novices to specify the scalar field interval corresponding to their regions of interest (ROIs). In order to extract interval volumes from which desirable geometric features can be mined effectively, we propose a suggestive technique which extracts interval volumes automatically based on a global examination of the field contrast structure. Also proposed here is a simplification scheme for decimating the resultant triangle patches to realize efficient transmission and rendering of large-scale interval volumes. Color distributions as well as geometric features are taken into account to select the best edges to collapse. In addition, when a user wants to selectively display and analyze the original dataset, the simplified dataset is restored to the original quality. Several simulated and acquired datasets are used to demonstrate the effectiveness of the present methods.
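    At its core, an interval volume is the set of samples whose scalar value falls inside a closed interval [lo, hi]. A minimal voxel-level sketch (on a synthetic distance field, where the interval volume is a spherical shell; the geometry-extraction and decimation steps of the paper are not attempted here):

```python
import numpy as np

# Synthetic scalar field: distance from the grid centre on [-1, 1]^3.
n = 64
ax = np.linspace(-1.0, 1.0, n)
x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
field = np.sqrt(x**2 + y**2 + z**2)

# Interval volume = voxels whose field value lies in [lo, hi];
# for a distance field this is a spherical shell.
lo, hi = 0.4, 0.6
interval_volume = (field >= lo) & (field <= hi)

# Fraction of the domain occupied by the shell.
print(round(float(interval_volume.mean()), 3))
```

    The analytic shell volume, (4/3)π(0.6³ − 0.4³) out of a domain of volume 8, is about 8% of the voxels, which the discrete mask approximates.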

  14. Scaling Theory of Entanglement at the Many-Body Localization Transition.

    PubMed

    Dumitrescu, Philipp T; Vasseur, Romain; Potter, Andrew C

    2017-09-15

    We study the universal properties of eigenstate entanglement entropy across the transition between many-body localized (MBL) and thermal phases. We develop an improved real space renormalization group approach that enables numerical simulation of large system sizes and systematic extrapolation to the infinite system size limit. For systems smaller than the correlation length, the average entanglement follows a subthermal volume law, whose coefficient is a universal scaling function. The full distribution of entanglement follows a universal scaling form, and exhibits a bimodal structure that produces universal subleading power-law corrections to the leading volume law. For systems larger than the correlation length, the short interval entanglement exhibits a discontinuous jump at the transition from fully thermal volume law on the thermal side, to pure area law on the MBL side.

  15. A Novel Multi-scale Simulation Strategy for Turbulent Reacting Flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    James, Sutherland C.

    In this project, a new methodology was proposed to bridge the gap between Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES). This novel methodology, titled Lattice-Based Multiscale Simulation (LBMS), creates a lattice structure of One-Dimensional Turbulence (ODT) models. ODT has been shown to capture turbulent combustion with high fidelity by fully resolving interactions between turbulence and diffusion. By creating a lattice of coupled ODT models, LBMS overcomes the principal shortcoming of ODT: its inability to capture large-scale, three-dimensional flow structures. By spacing the lattice lines sufficiently far apart, LBMS avoids the curse of dimensionality that makes the computational cost of DNS untenable. This project has shown that LBMS is capable of reproducing statistics of isotropic turbulent flows while coarsening the spacing between lines significantly. It also investigates and resolves issues that arise when coupling ODT lines, such as flux reconstruction perpendicular to a given ODT line, preservation of conserved quantities when eddies cross a coarse cell volume, and boundary condition application. Robust parallelization is also investigated.

  16. Magnetotelluric Detection Thresholds as a Function of Leakage Plume Depth, TDS and Volume

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, X.; Buscheck, T. A.; Mansoor, K.

    We conducted a synthetic magnetotelluric (MT) data analysis to establish a set of specific thresholds of plume depth, TDS concentration and volume for detection of brine and CO2 leakage from legacy wells into shallow aquifers, in support of Strategic Monitoring Subtask 4.1 of the US DOE National Risk Assessment Partnership (NRAP Phase II), which is to develop geophysical forward modeling tools. The 900 synthetic MT data sets span 9 plume depths, 10 TDS concentrations and 10 plume volumes. The monitoring protocol consisted of 10 MT stations in a 2×5 grid laid out along the flow direction. We model the MT response in the audio frequency range of 1 Hz to 10 kHz with a 50 Ωm baseline resistivity and a maximum depth of 2000 m. Scatter plots show the MT detection thresholds for each trio of plume depth, TDS concentration and volume. Plumes with a large volume and high TDS located at a shallow depth produce a strong MT signal. We demonstrate that the MT method with surface-based sensors can detect a brine and CO2 plume so long as the plume depth, TDS concentration and volume are above the thresholds. However, it is unlikely to detect a plume at a depth larger than 1000 m when the change in TDS concentration is smaller than 10%. Simulated aquifer impact data based on the Kimberlina site provide a more realistic view of the leakage plume distribution than the rectangular synthetic plumes in this sensitivity study; they will be used to estimate MT responses over simulated brine and CO2 plumes and to evaluate leakage detectability. Integration of the simulated aquifer impact data and the MT method into the NRAP DREAM tool may provide an optimized MT survey configuration for MT data collection. This study presents a viable approach for sensitivity studies of geophysical monitoring methods for leakage detection, enabling rapid assessment of leakage detectability.
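    The depth limits quoted above are consistent with a back-of-envelope MT skin-depth estimate, delta = sqrt(2·rho/(mu0·omega)) ≈ 503·sqrt(rho/f) metres. The helper below is an illustrative calculation with the study's 50 Ωm baseline resistivity, not part of the NRAP workflow.

```python
import math

def skin_depth_m(rho_ohm_m: float, freq_hz: float) -> float:
    """Electromagnetic skin depth sqrt(2*rho/(mu0*omega)) in metres."""
    mu0 = 4.0e-7 * math.pi
    omega = 2.0 * math.pi * freq_hz
    return math.sqrt(2.0 * rho_ohm_m / (mu0 * omega))

rho = 50.0  # baseline resistivity (ohm-m)
for f in (1.0, 100.0, 10_000.0):
    print(f, round(skin_depth_m(rho, f), 1))
```

    At 1 Hz the skin depth is roughly 3.5 km while at 10 kHz it shrinks to a few tens of metres, so the audio band brackets the 0-2000 m depth range modelled in the study.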

  17. Idealised large-eddy-simulation of thermally driven flows over an isolated mountain range with multiple ridges

    NASA Astrophysics Data System (ADS)

    Lang, Moritz N.; Gohm, Alexander; Wagner, Johannes S.; Leukauf, Daniel; Posch, Christian

    2014-05-01

    Two-dimensional idealised large-eddy simulations are performed using the WRF model to investigate thermally driven flows during daytime over complex terrain. Both the upslope flows and the temporal evolution of the boundary layer structure are studied with a constant surface heat flux forcing of 150 W m-2. In order to distinguish between different heating processes, the flow is Reynolds-decomposed into its mean and turbulent parts. The heating processes associated with the mean flow are a cooling through cold-air advection along the slopes and subsidence warming within the valleys. The turbulent component causes bottom-up heating near the ground, leading to a convective boundary layer (CBL) inside the valleys. Overshooting, potentially colder thermals cool the stably stratified valley atmosphere above the CBL. Compared to recent investigations (Schmidli 2013, J. Atmos. Sci., Vol. 70, No. 12: pp. 4041-4066; Wagner et al. 2014, manuscript submitted to Mon. Wea. Rev.), which used an idealised topography with two parallel mountain crests separated by a straight valley, this project focuses on multiple, periodic ridges and valleys within an isolated mountain range. The impact of different numbers of ridges on the flow structure is compared with the sinusoidal envelope topography. The present simulations show an interaction between the smaller-scale upslope winds within the different valleys and the large-scale flow of the superimposed mountain-plain wind circulation. Despite a smaller boundary layer air volume in the envelope case compared to the multiple-ridges case, the volume-averaged heating rates are comparable. The reason is a stronger advection-induced cooling along the slopes and a weaker warming through subsidence for the envelope topography compared to the mountain range with multiple ridges.
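    The Reynolds decomposition used to separate mean and turbulent heating can be sketched with synthetic data standing in for model output: phi = ⟨phi⟩ + phi′, where the fluctuations average to zero by construction.

```python
import numpy as np

# Synthetic "vertical velocity" samples: 4096 time samples at 16 levels,
# a stand-in for LES output (not WRF data).
rng = np.random.default_rng(1)
w = 2.0 + rng.normal(0.0, 0.5, size=(4096, 16))

w_mean = w.mean(axis=0)    # mean part <w> at each level
w_prime = w - w_mean       # turbulent fluctuation w'

# Fluctuations average to zero at every level, and the variance
# lives entirely in the fluctuating part.
print(bool(np.allclose(w_prime.mean(axis=0), 0.0)))
print(round(float(w_prime.var()), 2))
```

    Applying the same split to advective heating terms is what separates the mean-flow cooling/subsidence warming from the turbulent bottom-up heating described above.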

  18. Study of helicopter roll control effectiveness criteria

    NASA Technical Reports Server (NTRS)

    Heffley, Robert K.; Bourne, Simon M.; Curtiss, Howard C., Jr.; Hindson, William S.; Hess, Ronald A.

    1986-01-01

    A study of helicopter roll control effectiveness based on closed-loop task performance measurement and modeling is presented. Roll control criteria are based on task margin, the excess of vehicle task performance capability over the pilot's task performance demand. Appropriate helicopter roll axis dynamic models are defined for use with analytic models for task performance. Both near-earth and up-and-away large-amplitude maneuvering phases are considered. The results of in-flight and moving-base simulation measurements are presented to support the roll control effectiveness criteria offered. This volume contains the theoretical analysis, simulation results and criteria development.

  19. Subgrid or Reynolds stress-modeling for three-dimensional turbulence computations

    NASA Technical Reports Server (NTRS)

    Rubesin, M. W.

    1975-01-01

    A review is given of recent advances in two distinct computational methods for evaluating turbulence fields, namely, statistical Reynolds stress modeling and turbulence simulation, where large eddies are followed in time. It is shown that evaluation of the mean Reynolds stresses, rather than use of a scalar eddy viscosity, permits an explanation of streamline curvature effects found in several experiments. Turbulence simulation, with a new volume averaging technique and third-order accurate finite-difference computing is shown to predict the decay of isotropic turbulence in incompressible flow with rather modest computer storage requirements, even at Reynolds numbers of aerodynamic interest.

  20. Geant4-DNA track-structure simulations for gold nanoparticles: The importance of electron discrete models in nanometer volumes.

    PubMed

    Sakata, Dousatsu; Kyriakou, Ioanna; Okada, Shogo; Tran, Hoang N; Lampe, Nathanael; Guatelli, Susanna; Bordage, Marie-Claude; Ivanchenko, Vladimir; Murakami, Koichi; Sasaki, Takashi; Emfietzoglou, Dimitris; Incerti, Sebastien

    2018-05-01

    Gold nanoparticles (GNPs) are known to enhance the absorbed dose in their vicinity following photon-based irradiation. To investigate the therapeutic effectiveness of GNPs, previous Monte Carlo simulation studies have explored GNP dose enhancement using mostly condensed-history models. However, in general, such models are suitable for macroscopic volumes and for electron energies above a few hundred electron volts. We have recently developed, for the Geant4-DNA extension of the Geant4 Monte Carlo simulation toolkit, discrete physics models for electron transport in gold which include the description of the full atomic de-excitation cascade. These models allow event-by-event simulation of electron tracks in gold down to 10 eV. The present work describes how such specialized physics models impact simulation-based studies on GNP-radioenhancement in a context of x-ray radiotherapy. The new discrete physics models are compared to the Geant4 Penelope and Livermore condensed-history models, which are being widely used for simulation-based NP radioenhancement studies. An ad hoc Geant4 simulation application has been developed to calculate the absorbed dose in liquid water around a GNP and its radioenhancement, caused by secondary particles emitted from the GNP itself, when irradiated with a monoenergetic electron beam. The effect of the new physics models is also quantified in the calculation of secondary particle spectra, when originating in the GNP and when exiting from it. The new physics models show similar backscattering coefficients with the existing Geant4 Livermore and Penelope models in large volumes for 100 keV incident electrons. However, in submicron sized volumes, only the discrete models describe the high backscattering that should still be present around GNPs at these length scales. 
Sizeable differences (mostly above a factor of 2) are also found in the radial distribution of absorbed dose and secondary particles between the new and the existing Geant4 models. The degree to which these differences are due to intrinsic limitations of the condensed-history models or to differences in the underlying scattering cross sections requires further investigation. Improved physics models for gold are necessary to better model the impact of GNPs in radiotherapy via Monte Carlo simulations. We implemented discrete electron transport models for gold in Geant4 that are applicable down to 10 eV and include modeling of the full de-excitation cascade. It is demonstrated that the new models have a significant positive impact on particle transport simulations in gold volumes with submicron dimensions compared to the existing Livermore and Penelope condensed-history models of Geant4. © 2018 American Association of Physicists in Medicine.

  1. A large-eddy simulation study of wake propagation and power production in an array of tidal-current turbines.

    PubMed

    Churchfield, Matthew J; Li, Ye; Moriarty, Patrick J

    2013-02-28

    This paper presents our initial work in performing large-eddy simulations of tidal turbine array flows. First, a horizontally periodic precursor simulation is performed to create turbulent flow data. Then those data are used as inflow into a tidal turbine array two rows deep and infinitely wide. The turbines are modelled using rotating actuator lines, and the finite-volume method is used to solve the governing equations. In studying the wakes created by the turbines, we observed that the vertical shear of the inflow combined with wake rotation causes lateral wake asymmetry. Also, various turbine configurations are simulated, and the total power production relative to isolated turbines is examined. We found that staggering consecutive rows of turbines in the simulated configurations allows the greatest efficiency using the least downstream row spacing. Counter-rotating consecutive downstream turbines in a non-staggered array shows a small benefit. This work has identified areas for improvement. For example, using a larger precursor domain would better capture elongated turbulent structures, and including salinity and temperature equations would account for density stratification and its effect on turbulence. Additionally, the wall shear stress modelling could be improved, and more array configurations could be examined.

  2. Automation Applications in an Advanced Air Traffic Management System : Volume 5B. DELTA Simulation Model - Programmer's Guide.

    DOT National Transportation Integrated Search

    1974-08-01

    Volume 5 describes the DELTA Simulation Model. It includes all documentation of the DELTA (Determine Effective Levels of Task Automation) computer simulation developed by TRW for use in the Automation Applications Study. Volume 5A includes a user's m...

  3. RF Systems in Space. Volume I. Space Antennas Frequency (SARF) Simulation.

    DTIC Science & Technology

    1983-04-01

    Lens SBR designs were investigated, and the survivability of an SBR system was analyzed. Ground-based validation experiments were designed for large-aperture SBR concepts, and SBR designs were investigated for ground target detection.

  4. Methods for High-Order Multi-Scale and Stochastic Problems Analysis, Algorithms, and Applications

    DTIC Science & Technology

    2016-10-17

    finite volume schemes, discontinuous Galerkin finite element method, and related methods, for solving computational fluid dynamics (CFD) problems and ... approximation for finite element methods. (3) The development of methods of simulation and analysis for the study of large-scale stochastic systems of ... Keywords: conservation laws, finite element method, Bernstein-Bezier finite elements, weakly interacting particle systems, accelerated Monte Carlo, stochastic networks.

  5. Concepts for on-board satellite image registration. Volume 2: IAS prototype performance evaluation standard definition

    NASA Astrophysics Data System (ADS)

    Daluge, D. R.; Ruedger, W. H.

    1981-06-01

    Problems encountered in testing onboard signal processing hardware designed to achieve radiometric and geometric correction of satellite imaging data are considered. These include obtaining representative image and ancillary data for simulation and the transfer and storage of a large quantity of image data at very high speed. The high resolution, high speed preprocessing of LANDSAT-D imagery is considered.

  6. Partial volume correction and image analysis methods for intersubject comparison of FDG-PET studies

    NASA Astrophysics Data System (ADS)

    Yang, Jun

    2000-12-01

    The partial volume effect is an artifact mainly due to limited imaging sensor resolution. It creates bias in the measured activity in small structures and around tissue boundaries. In brain FDG-PET studies, especially Alzheimer's disease studies where there is serious gray matter atrophy, accurate estimation of the cerebral metabolic rate of glucose is even more problematic due to the large partial volume effect. In this dissertation, we developed a framework enabling inter-subject comparison of partial-volume-corrected brain FDG-PET studies. The framework is composed of the following image processing steps: (1) MRI segmentation, (2) MR-PET registration, (3) MR-based PVE correction, (4) MR 3D inter-subject elastic mapping. Through simulation studies, we showed that the newly developed partial volume correction methods, whether pixel-based or ROI-based, performed better than previous methods. By applying this framework to a real Alzheimer's disease study, we demonstrated that the partial-volume-corrected glucose rates vary significantly among the control, at-risk and disease patient groups, and that this framework is a promising tool for assisting early identification of Alzheimer's patients.
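    A common ROI-based partial volume correction follows the geometric-transfer-matrix idea: blur each region's indicator function with the scanner point-spread function, assemble the mixing matrix of observed ROI means, and invert it. The 1D sketch below uses synthetic regions and activities; it illustrates the principle only and is not the dissertation's implementation.

```python
import numpy as np

def blur(signal, sigma=3.0):
    """Apply a truncated, normalized Gaussian PSF (a stand-in scanner blur)."""
    xs = np.arange(-12, 13)
    k = np.exp(-0.5 * (xs / sigma) ** 2)
    k /= k.sum()
    return np.convolve(signal, k, mode="same")

# Two synthetic tissue regions along one image line.
n = 200
gm = np.zeros(n); gm[80:120] = 1.0               # "gray matter"
wm = np.zeros(n); wm[:80] = 1.0; wm[120:] = 1.0  # "white matter"

true_activity = {"gm": 8.0, "wm": 3.0}
image = blur(true_activity["gm"] * gm + true_activity["wm"] * wm)

rois = [gm, wm]
# Observed ROI means and the transfer matrix built from blurred indicators.
t = np.array([image[r > 0].mean() for r in rois])
G = np.array([[blur(rj)[ri > 0].mean() for rj in rois] for ri in rois])
corrected = np.linalg.solve(G, t)

print(np.round(corrected, 2))
```

    Because the blur is linear, solving the small linear system recovers the true regional activities exactly in this noise-free toy; with real data, noise propagation through the inverse is the main practical concern.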

  7. Simulating Astrophysical Jets with Inertial Confinement Fusion Machines

    NASA Astrophysics Data System (ADS)

    Blue, Brent

    2005-10-01

    Large-scale directional outflows of supersonic plasma, also known as `jets', are ubiquitous phenomena in astrophysics. The traditional approach to understanding such phenomena is through theoretical analysis and numerical simulations. However, theoretical analysis might not capture all the relevant physics and numerical simulations have limited resolution and fail to scale correctly in Reynolds number and perhaps other key dimensionless parameters. Recent advances in high energy density physics using large inertial confinement fusion devices now allow controlled laboratory experiments on macroscopic volumes of plasma of direct relevance to astrophysics. This talk will present an overview of these facilities as well as results from current laboratory astrophysics experiments designed to study hydrodynamic jets and Rayleigh-Taylor mixing. This work is performed under the auspices of the U. S. DOE by Lawrence Livermore National Laboratory under Contract No. W-7405-ENG-48, Los Alamos National Laboratory under Contract No. W-7405-ENG-36, and the Laboratory for Laser Energetics under Contract No. DE-FC03-92SF19460.

  8. Predicting viscous-range velocity gradient dynamics in large-eddy simulations of turbulence

    NASA Astrophysics Data System (ADS)

    Johnson, Perry; Meneveau, Charles

    2017-11-01

    The details of small-scale turbulence are not directly accessible in large-eddy simulations (LES), posing a modeling challenge because many important micro-physical processes depend strongly on the dynamics of turbulence in the viscous range. Here, we introduce a method for coupling existing stochastic models for the Lagrangian evolution of the velocity gradient tensor with LES to simulate unresolved dynamics. The proposed approach is implemented in LES of turbulent channel flow and detailed comparisons with DNS are carried out. An application to modeling the fate of deformable, small (sub-Kolmogorov) droplets at negligible Stokes number and low volume fraction with one-way coupling is carried out. These results illustrate the ability of the proposed model to predict the influence of small scale turbulence on droplet micro-physics in the context of LES. This research was made possible by a graduate Fellowship from the National Science Foundation and by a Grant from The Gulf of Mexico Research Initiative.

  9. Large eddy simulation modeling of particle-laden flows in complex terrain

    NASA Astrophysics Data System (ADS)

    Salesky, S.; Giometto, M. G.; Chamecki, M.; Lehning, M.; Parlange, M. B.

    2017-12-01

    The transport, deposition, and erosion of heavy particles over complex terrain in the atmospheric boundary layer is an important process for hydrology, air quality forecasting, biology, and geomorphology. However, in situ observations can be challenging in complex terrain due to spatial heterogeneity. Furthermore, there is a need to develop numerical tools that can accurately represent the physics of these multiphase flows over complex surfaces. We present a new numerical approach to accurately model the transport and deposition of heavy particles in complex terrain using large eddy simulation (LES). Particle transport is represented through solution of the advection-diffusion equation including terms that represent gravitational settling and inertia. The particle conservation equation is discretized in a cut-cell finite volume framework in order to accurately enforce mass conservation. Simulation results will be validated with experimental data, and numerical considerations required to enforce boundary conditions at the surface will be discussed. Applications will be presented in the context of snow deposition and transport, as well as urban dispersion.
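    The gravitational-settling term can be discretized conservatively, in the spirit of the cut-cell finite-volume framework described above. The 1D first-order upwind sketch below (with made-up parameters) transports a particle cloud downward while conserving total mass, including any mass deposited through the bottom face.

```python
import numpy as np

# 1D column of cells; index 0 is the bottom. Concentration C settles
# downward with constant velocity ws (first-order upwind, CFL = 0.25).
n, dz, ws, dt = 100, 0.1, 0.05, 0.5   # cells, m, m/s, s
C = np.zeros(n)
C[60:80] = 1.0                        # initial particle cloud aloft

deposited = 0.0
for _ in range(40):                   # 20 s of settling
    flux = ws * C                     # downward flux leaving each cell
    deposited += flux[0] * dt         # mass leaving through the bottom face
    C = C - dt / dz * flux            # outflow from every cell
    C[:-1] += dt / dz * flux[1:]      # inflow from the cell above

# Total mass (in-column + deposited) is conserved by construction.
total = float(C.sum()) * dz + deposited
print(round(total, 6))
```

    The same bookkeeping, with fluxes evaluated on cut faces, is what enforces mass conservation when the bottom boundary follows complex terrain instead of a flat surface.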

  10. Effects of voxelization on dose volume histogram accuracy

    NASA Astrophysics Data System (ADS)

    Sunderland, Kyle; Pinter, Csaba; Lasso, Andras; Fichtinger, Gabor

    2016-03-01

    PURPOSE: In radiotherapy treatment planning systems, structures of interest such as targets and organs at risk are stored as 2D contours on evenly spaced planes. In order to be used in various algorithms, contours must be converted into binary labelmap volumes using voxelization. The voxelization process results in lost information, which has little effect on the volume of large structures, but has significant impact on small structures, which contain few voxels. Volume differences for segmented structures affect metrics such as dose volume histograms (DVH), which are used for treatment planning. Our goal is to evaluate the impact of voxelization on segmented structures, as well as how factors like voxel size affect metrics such as DVH. METHODS: We create a series of implicit functions, which represent simulated structures. These structures are sampled at varying resolutions and compared to labelmaps with high, sub-millimeter resolution. We generate DVHs and evaluate voxelization error for the same structures at different resolutions by calculating the agreement acceptance percentage between the DVHs. RESULTS: We implemented the analysis tools as modules in the SlicerRT toolkit based on the 3D Slicer platform. We found large DVH variations from the baseline for small structures or for structures located in regions with a high dose gradient, potentially leading to the creation of suboptimal treatment plans. CONCLUSION: This work demonstrates that labelmap and dose volume voxel size is an important factor in DVH accuracy, which must be accounted for in order to ensure the development of accurate treatment plans.
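    A cumulative DVH is simply, for each dose level d, the fraction of a structure's voxels receiving at least d. The sketch below computes one for a synthetic spherical structure in a linear dose gradient; it illustrates the metric itself, not SlicerRT code.

```python
import numpy as np

# Synthetic dose grid: dose ramps linearly from 0 to 60 Gy along x.
n = 32
ax = np.linspace(0.0, 1.0, n)
x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
dose = 60.0 * x

# Voxelized structure: a sphere of radius 0.2 centred in the volume.
mask = (x - 0.5) ** 2 + (y - 0.5) ** 2 + (z - 0.5) ** 2 <= 0.2 ** 2

# Cumulative DVH: fraction of structure voxels with dose >= each level.
struct_dose = dose[mask]
levels = np.linspace(0.0, 60.0, 61)
dvh = [float((struct_dose >= d).mean()) for d in levels]

print(round(dvh[0], 2), round(dvh[30], 2), round(dvh[60], 2))
```

    Because the sphere is centred in the gradient, half of its voxels receive at least 30 Gy; refining or coarsening n changes the voxelized mask and hence shifts the DVH, which is exactly the sensitivity the study quantifies.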

  11. Compactified cosmological simulations of the infinite universe

    NASA Astrophysics Data System (ADS)

    Rácz, Gábor; Szapudi, István; Csabai, István; Dobos, László

    2018-06-01

    We present a novel N-body simulation method that compactifies the infinite spatial extent of the Universe into a finite sphere with isotropic boundary conditions to follow the evolution of the large-scale structure. Our approach eliminates the need for periodic boundary conditions, a mere numerical convenience which is not supported by observation and which modifies the law of force on large scales in an unrealistic fashion. We demonstrate that our method outclasses standard simulations executed on workstation-scale hardware in dynamic range; it is balanced in following a comparable number of high and low k modes, and its fundamental geometry and topology match observations. Our approach is also capable of simulating an expanding, infinite universe in static coordinates with Newtonian dynamics. The price of these achievements is that most of the simulated volume has smoothly varying mass and spatial resolution, an approximation that carries different systematics than periodic simulations. Our initial implementation of the method is called StePS, which stands for Stereographically Projected cosmological Simulations. It uses stereographic projection for space compactification and a naive O(N^2) force calculation, which nevertheless arrives at a correlation function of the same quality faster than any standard (tree or P3M) algorithm with similar spatial and mass resolution. The O(N^2) force calculation is easy to adapt to modern graphics cards, hence our code can function as a high-speed prediction tool for modern large-scale surveys. To learn about the limits of the respective methods, we compare StePS with GADGET-2 running matching initial conditions.
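    The compactification idea can be illustrated with the inverse stereographic projection, which maps the infinite plane onto the unit sphere so that r → ∞ collapses to a single pole (the 3D-to-hypersphere case used by StePS works analogously):

```python
import numpy as np

def inv_stereographic(x, y):
    """Map a point of the plane onto the unit sphere; infinity -> north pole."""
    d = 1.0 + x**2 + y**2
    return np.array([2.0 * x / d, 2.0 * y / d, (x**2 + y**2 - 1.0) / d])

origin = inv_stereographic(0.0, 0.0)   # maps to the south pole (0, 0, -1)
far = inv_stereographic(1e6, 0.0)      # distant points crowd toward (0, 0, 1)

print(origin)
print(far)
```

    Every image point lies exactly on the unit sphere, and resolution smoothly degrades with distance from the projection centre, which is the trade-off the abstract describes.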

  12. Estimation of the sensitive volume for gravitational-wave source populations using weighted Monte Carlo integration

    NASA Astrophysics Data System (ADS)

    Tiwari, Vaibhav

    2018-07-01

    The population analysis and estimation of merger rates of compact binaries is one of the important topics in gravitational wave astronomy. The primary ingredient in these analyses is the population-averaged sensitive volume. Typically, the sensitive volume of a given search to a given simulated source population is estimated by drawing signals from the population model and adding them to the detector data as injections. Subsequently, the injections, which are simulated gravitational waveforms, are searched for by the search pipelines and their signal-to-noise ratio (SNR) is determined. The sensitive volume is then estimated by Monte Carlo (MC) integration from the total number of injections added to the data, the number of injections that cross a chosen threshold on SNR, and the astrophysical volume in which the injections are placed. So far, only fixed population models have been used in the estimation of binary black hole (BBH) merger rates. However, as the scope of population analysis broadens in terms of the methodologies and source properties considered, due to an increase in the number of observed gravitational wave (GW) signals, the procedure will need to be repeated multiple times at a large computational cost. In this letter we address the problem by performing a weighted MC integration. We show how a single set of generic injections can be weighted to estimate the sensitive volume for multiple population models, thereby greatly reducing the computational cost. The weights in this MC integral are the ratios of the output probabilities, determined by the population model and standard cosmology, to the injection probability, determined by the distribution function of the generic injections. Unlike analytical/semi-analytical methods, which usually estimate sensitive volume using single-detector sensitivity, the method is accurate within statistical errors, comes at no added cost and requires minimal computational resources.
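    The reweighting idea can be sketched in a few lines (the distributions and the "found" criterion below are toy stand-ins, not actual search-pipeline quantities): injections drawn once from a reference distribution p_inj are reused for any target population p_pop by weighting each recovered injection with w = p_pop/p_inj.

```python
import random

def sensitive_volume(samples, found, p_pop, p_inj, v_total):
    """Weighted MC estimate: V ~ V_total * (1/N) * sum over found of p_pop/p_inj."""
    n = len(samples)
    return v_total * sum(p_pop(x) / p_inj(x)
                         for x, f in zip(samples, found) if f) / n

# Toy setup: one source parameter x drawn uniformly on [0, 1]; a signal is
# "found" when x < 0.5, a crude stand-in for crossing an SNR threshold.
random.seed(1)
samples = [random.random() for _ in range(200000)]
found = [x < 0.5 for x in samples]

p_inj = lambda x: 1.0             # injection (reference) density on [0, 1]
p_popA = lambda x: 2.0 * x        # one target population model
p_popB = lambda x: 2.0 - 2.0 * x  # another model, reusing the SAME injections

# Exact answers: integral of 2x over [0, 0.5] = 0.25 of v_total, and
# integral of (2 - 2x) over [0, 0.5] = 0.75 of v_total.
print(sensitive_volume(samples, found, p_popA, p_inj, 1.0))
print(sensitive_volume(samples, found, p_popB, p_inj, 1.0))
```

    The single set of injections is searched once; swapping the population model only changes the weights, which is the computational saving the letter describes.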

  13. Study of Permanent Magnet Focusing for Astronomical Camera Tubes

    NASA Technical Reports Server (NTRS)

    Long, D. C.; Lowrance, J. L.

    1975-01-01

    A design is developed for a permanent magnet assembly (PMA) useful as the magnetic focusing unit for the 35 and 70 mm (diagonal) format SEC tubes. Detailed PMA designs for both tubes are given, and all data on their magnetic configuration, size, weight, and the structure of magnetic shields adequate to screen the camera tube from the earth's magnetic field are presented. A digital computer is used for the PMA design simulations, and the expected operational performance of the PMA is ascertained through the calculation of a series of photoelectron trajectories. A large volume in which the magnetic field uniformity is better than 0.5% appears obtainable, and the point spread function (PSF) and modulation transfer function (MTF) indicate nearly ideal performance. The MTF at 20 cycles per mm exceeds 90%. The weight and volume appear tractable for the large space telescope and for ground-based application.

  14. Simulations of the Formation and Evolution of X-ray Clusters

    NASA Astrophysics Data System (ADS)

    Bryan, G. L.; Klypin, A.; Norman, M. L.

    1994-05-01

    We describe results from a set of Omega = 1 Cold plus Hot Dark Matter (CHDM) and Cold Dark Matter (CDM) simulations. We examine the formation and evolution of X-ray clusters in a cosmological setting with sufficient numbers to perform statistical analysis. We find that CDM, normalized to COBE, seems to produce too many large clusters, both in terms of the luminosity (dn/dL) and temperature (dn/dT) functions. The CHDM simulation produces fewer clusters, and the temperature distribution (our numerically most secure result) matches observations where they overlap. The computed cluster luminosity function drops below observations, but we are almost surely underestimating the X-ray luminosity. Because of the lower fluctuations in CHDM, there are only a small number of bright clusters in our simulation volume; however, we can use the simulated clusters to fix the relation between temperature and velocity dispersion, allowing us to use collisionless N-body codes to probe larger length scales with correspondingly brighter clusters. The hydrodynamic simulations have been performed with a hybrid particle-mesh scheme for the dark matter and a high resolution grid-based piecewise parabolic method for the adiabatic gas dynamics. This combination has been implemented for massively parallel computers, allowing us to achieve grids as large as 512^3.

  15. Development of large volume double ring penning plasma discharge source for efficient light emissions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prakash, Ram; Vyas, Gheesa Lal; Jain, Jalaj

    In this paper, the development of a large volume double ring Penning plasma discharge source for efficient light emissions is reported. The developed Penning discharge source consists of two cylindrical end cathodes of stainless steel having radius 6 cm and a gap 5.5 cm between them, which are fitted in the top and bottom flanges of the vacuum chamber. Two stainless steel anode rings with thickness 0.4 cm and inner diameters 6.45 cm having separation 2 cm are kept at the discharge centre. Neodymium (Nd2Fe14B) permanent magnets are physically inserted behind the cathodes for producing a nearly uniform magnetic field of ~0.1 T at the center. Experiments and simulations have been performed for single and double anode ring configurations using helium gas discharge, which infer that the double ring configuration gives better light emissions in the large volume Penning plasma discharge arrangement. The optical emission spectroscopy measurements are used to complement the observations. The spectral line-ratio technique is utilized to determine the electron plasma density. The estimated electron plasma density in the double ring plasma configuration is ~2 × 10^11 cm^-3, which is around one order of magnitude larger than that of the single ring arrangement.

  16. Development of large volume double ring penning plasma discharge source for efficient light emissions.

    PubMed

    Prakash, Ram; Vyas, Gheesa Lal; Jain, Jalaj; Prajapati, Jitendra; Pal, Udit Narayan; Chowdhuri, Malay Bikas; Manchanda, Ranjana

    2012-12-01

    In this paper, the development of large volume double ring Penning plasma discharge source for efficient light emissions is reported. The developed Penning discharge source consists of two cylindrical end cathodes of stainless steel having radius 6 cm and a gap 5.5 cm between them, which are fitted in the top and bottom flanges of the vacuum chamber. Two stainless steel anode rings with thickness 0.4 cm and inner diameters 6.45 cm having separation 2 cm are kept at the discharge centre. Neodymium (Nd(2)Fe(14)B) permanent magnets are physically inserted behind the cathodes for producing nearly uniform magnetic field of ~0.1 T at the center. Experiments and simulations have been performed for single and double anode ring configurations using helium gas discharge, which infer that double ring configuration gives better light emissions in the large volume Penning plasma discharge arrangement. The optical emission spectroscopy measurements are used to complement the observations. The spectral line-ratio technique is utilized to determine the electron plasma density. The estimated electron plasma density in double ring plasma configuration is ~2 × 10(11) cm(-3), which is around one order of magnitude larger than that of single ring arrangement.

  17. A finite-volume ELLAM for three-dimensional solute-transport modeling

    USGS Publications Warehouse

    Russell, T.F.; Heberton, C.I.; Konikow, Leonard F.; Hornberger, G.Z.

    2003-01-01

    A three-dimensional finite-volume ELLAM method has been developed, tested, and successfully implemented as part of the U.S. Geological Survey (USGS) MODFLOW-2000 ground water modeling package. It is included as a solver option for the Ground Water Transport process. The FVELLAM uses space-time finite volumes oriented along the streamlines of the flow field to solve an integral form of the solute-transport equation, thus combining local and global mass conservation with the advantages of Eulerian-Lagrangian characteristic methods. The USGS FVELLAM code simulates solute transport in flowing ground water for a single dissolved solute constituent and represents the processes of advective transport, hydrodynamic dispersion, mixing from fluid sources, retardation, and decay. Implicit time discretization of the dispersive and source/sink terms is combined with a Lagrangian treatment of advection, in which forward tracking moves mass to the new time level, distributing mass among destination cells using approximate indicator functions. This allows the use of large transport time increments (large Courant numbers) with accurate results, even for advection-dominated systems (large Peclet numbers). Four test cases, including comparisons with analytical solutions and benchmarking against other numerical codes, are presented that indicate that the FVELLAM can usually yield excellent results, even if relatively few transport time steps are used, although the quality of the results is problem-dependent.
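    The reason a Lagrangian treatment of advection tolerates large Courant numbers can be illustrated with a simplified 1D semi-Lagrangian step (a sketch of the principle only, not the USGS FVELLAM scheme itself): each grid value is obtained by tracing the characteristic back to its departure point, however many cells away that lies.

```python
import math

def semi_lagrangian_step(c, velocity, dt, dx):
    """One advection step on a periodic 1D grid by backtracking characteristics.
    Remains stable even when the Courant number v*dt/dx exceeds 1."""
    n = len(c)
    shift = velocity * dt / dx            # Courant number, may be > 1
    out = []
    for i in range(n):
        x = i - shift                     # departure point, in index units
        i0 = math.floor(x)
        frac = x - i0                     # linear interpolation weight
        out.append((1.0 - frac) * c[i0 % n] + frac * c[(i0 + 1) % n])
    return out

n, dx, v, dt = 50, 1.0, 1.0, 2.5          # Courant number 2.5
c = [1.0 if 10 <= i < 20 else 0.0 for i in range(n)]
for _ in range(4):
    c = semi_lagrangian_step(c, v, dt, dx)
# The pulse has travelled v*dt*4 = 10 cells and total mass is preserved.
print(sum(c))
```

    An explicit Eulerian scheme would be unstable at this Courant number; the Lagrangian step merely interpolates at the correct departure point, the property the FVELLAM exploits (its actual scheme additionally conserves mass on space-time finite volumes).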

  18. Flow and Transport in Highly Heterogeneous Porous Formations: Numerical Experiments Performed Using the Analytic Element Method

    NASA Astrophysics Data System (ADS)

    Jankovic, I.

    2002-05-01

    Flow and transport in porous formations are analyzed using numerical simulations. Hydraulic conductivity is treated as a spatial random function characterized by a probability density function and a two-point covariance function. Simulations are performed for a multi-indicator conductivity structure developed by Gedeon Dagan (personal communication). This conductivity structure contains inhomogeneities (inclusions) of elliptical and ellipsoidal geometry that are embedded in a homogeneous background. By varying the distribution of sizes and conductivities of inclusions, any probability density function and two-point covariance may be reproduced. The multi-indicator structure is selected since it yields simple approximate transport solutions (Aldo Fiori, personal communication) and accurate numerical solutions (based on the Analytic Element Method). The dispersion is examined for two conceptual models. Both models are based on the multi-indicator conductivity structure. The first model is designed to examine dispersion in aquifers with continuously varying conductivity. The inclusions in this model cover as much area/volume of the porous formation as possible. The second model is designed for aquifers that contain clay/sand/gravel lenses embedded in otherwise homogeneous background. The dispersion in both aquifer types is simulated numerically. Simulation results are compared to those obtained using simple approximate solutions. In order to infer transport statistics that are representative of an infinite domain using the numerical experiments, the inclusions are placed in a domain that was shaped as a large ellipse (2D) and a large spheroid (3D) that were submerged in an unbounded homogeneous medium. On a large scale, the large body of inclusions behaves like a single large inhomogeneity. The analytic solution for a uniform flow past the single inhomogeneity of such geometry yields uniform velocity inside the domain. 
The velocity differs from that at infinity and can be used to infer the effective conductivity of the medium. As many as 100,000 inhomogeneities are placed inside the domain for 2D simulations. Simulations in 3D were limited to 50,000 inclusions. A large number of simulations was conducted on a massively parallel supercomputer cluster at the Center for Computational Research, University at Buffalo. Simulations range from mildly heterogeneous formations to highly heterogeneous formations (variance of the logarithm of conductivity equal to 10) and from sparsely populated systems to systems where inhomogeneities cover 95% of the volume. Particles are released and tracked inside the core of constant mean velocity. Following the particle tracking, various medium, flow, and transport statistics are computed. These include: spatial moments of particle positions, the probability density function of hydraulic conductivity and of each component of velocity, their two-point covariance function in the direction of flow and normal to it, the covariance of Lagrangian velocities, and the probability density function of travel times to various breakthrough locations. Owing to the analytic nature of the flow solution, all the results are presented in dimensionless form. For example, the dispersion coefficients are made dimensionless with respect to the mean velocity and size of inhomogeneities. Detailed results will be presented and compared to well known first-order results and to results based on the simple approximate transport solutions of Aldo Fiori.
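The link between spatial moments of particle positions and a dispersion coefficient can be sketched as follows (a generic random-walk toy, not the Analytic Element computations of the study): for Fickian transport D = (1/2) d var(x)/dt, so the coefficient is read off from the growth of the second spatial moment between two snapshots.

```python
import random

def dispersion_coefficient(x_t1, x_t2, delta_t):
    """Apparent longitudinal dispersion coefficient from the growth of the
    second spatial moment: D = 0.5 * (var(t2) - var(t1)) / (t2 - t1)."""
    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((v - m) ** 2 for v in xs) / len(xs)
    return 0.5 * (variance(x_t2) - variance(x_t1)) / delta_t

# Toy ensemble: advection at mean velocity u plus independent Gaussian steps
# of variance 2*D*dt per step, with a known D = 0.5 to recover.
random.seed(4)
u, D, dt = 1.0, 0.5, 0.1
x = [0.0] * 5000
snapshots = {}
for step in range(1, 201):
    x = [xi + u * dt + random.gauss(0.0, (2.0 * D * dt) ** 0.5) for xi in x]
    if step in (100, 200):
        snapshots[step] = list(x)

print(dispersion_coefficient(snapshots[100], snapshots[200], 100 * dt))
```

Made dimensionless with the mean velocity and a length scale (the inclusion size in the study), this is the kind of quantity the simulations compare against first-order theory.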

  19. The cavitation erosion of ultrasonic sonotrode during large-scale metallic casting: Experiment and simulation.

    PubMed

    Tian, Yang; Liu, Zhilin; Li, Xiaoqian; Zhang, Lihua; Li, Ruiqing; Jiang, Ripeng; Dong, Fang

    2018-05-01

    Ultrasonic sonotrodes play an essential role in transmitting power ultrasound into large-scale metallic castings. However, cavitation erosion considerably impairs the in-service performance of ultrasonic sonotrodes, leading to marginal microstructural refinement. In this work, the cavitation erosion behaviour of ultrasonic sonotrodes in large-scale castings was explored using industry-level experiments on Al alloy cylindrical ingots (i.e. 630 mm in diameter and 6000 mm in length). When power ultrasound was introduced, severe cavitation erosion was found to reproducibly occur at some specific positions on the ultrasonic sonotrodes. However, there was no cavitation erosion on the ultrasonic sonotrodes that were not driven by the electric generator. Vibratory examination showed that cavitation erosion depended on the vibration state of the ultrasonic sonotrodes. Moreover, a finite element (FE) model was developed to simulate the evolution and distribution of acoustic pressure in the 3-D solidification volume. FE simulation results confirmed that significant dynamic interaction between sonotrodes and melts only happened at some specific positions, corresponding to the severe cavitation erosion. This work will allow for developing more advanced ultrasonic sonotrodes with better cavitation erosion resistance, in particular for large-scale castings, from the perspectives of ultrasonic physics and mechanical design. Copyright © 2018 Elsevier B.V. All rights reserved.

  20. Effect of three-body interactions on the zero-temperature equation of state of HCP solid 4He

    NASA Astrophysics Data System (ADS)

    Barnes, Ashleigh L.; Hinde, Robert J.

    2017-03-01

    Previous studies have pointed to the importance of three-body interactions in high density 4He solids. However the computational cost often makes it unfeasible to incorporate these interactions into the simulation of large systems. We report the implementation and evaluation of a computationally efficient perturbative treatment of three-body interactions in hexagonal close packed solid 4He utilizing the recently developed nonadditive three-body potential of Cencek et al. This study represents the first application of the Cencek three-body potential to condensed phase 4He systems. Ground state energies from quantum Monte Carlo simulations, with either fully incorporated or perturbatively treated three-body interactions, are calculated in systems with molar volumes ranging from 21.3 cm3/mol down to 2.5 cm3/mol. These energies are used to derive the zero-temperature equation of state for comparison against existing experimental and theoretical data. The equations of state derived from both perturbative and fully incorporated three-body interactions are found to be in very good agreement with one another, and reproduce the experimental pressure-volume data with significantly better accuracy than is obtained when only two-body interactions are considered. At molar volumes below approximately 4.0 cm3/mol, neither two-body nor three-body equations of state are able to accurately reproduce the experimental pressure-volume data, suggesting that below this molar volume four-body and higher many-body interactions are becoming important.

  1. Test plan for evaluating the operational performance of the prototype nested, fixed-depth fluidic sampler

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    REICH, F.R.

    The PHMC will provide Low Activity Wastes (LAW) tank wastes for final treatment by a privatization contractor from two double-shell feed tanks, 241-AP-102 and 241-AP-104. Concerns about the inability of the baseline ''grab'' sampling to provide large volume samples within time constraints have led to the development of a nested, fixed-depth sampling system. This sampling system will provide large volume, representative samples without the environmental, radiation exposure, and sample volume impacts of the current baseline ''grab'' sampling method. A plan has been developed for the cold testing of this nested, fixed-depth sampling system with simulant materials. The sampling system will fill the 500-ml bottles and provide inner packaging to interface with the Hanford Site's cask shipping systems (PAS-1 and/or ''safe-send''). The sampling system will provide a waste stream that will be used for on-line, real-time measurements with an at-tank analysis system. The cold tests evaluate the performance and the ability to provide samples that are representative of the tanks' content within a 95 percent confidence interval, to sample while mixing pumps are operating, to provide large sample volumes (1-15 liters) within a short time interval, to sample supernatant wastes with over 25 wt% solids content, to recover from precipitation- and settling-based plugging, and the potential to operate over the 20-year expected time span of the privatization contract.

  2. The Universe at Moderate Redshift

    NASA Technical Reports Server (NTRS)

    Cen, Renyue; Ostriker, Jeremiah P.

    1997-01-01

    The report covers the work done in the past year across a wide range of fields, including: properties of clusters of galaxies; topological properties of galaxy distributions in terms of galaxy types; patterns of the gravitational nonlinear clustering process; development of a ray tracing algorithm to study the gravitational lensing phenomenon by galaxies, clusters and large-scale structure, one application of which is the effect of weak gravitational lensing by large-scale structure on the determination of q(0); the origin of magnetic fields on the galactic and cluster scales; the topological properties of Ly(alpha) clouds; the Ly(alpha) optical depth distribution; clustering properties of Ly(alpha) clouds; and a determination (lower bound) of Omega(b) based on the observed Ly(alpha) forest flux distribution. In the coming year, we plan to continue the investigation of Ly(alpha) clouds using larger dynamic range (about a factor of two) and better simulations (with more input physics included) than we have now. We will study the properties of galaxies on 1 - 100h(sup -1) Mpc scales using our state-of-the-art large scale galaxy formation simulations of various cosmological models, which will have a resolution about a factor of 5 (in each dimension) better than our current best simulations. We also plan to study the properties of X-ray clusters using unprecedented, very high dynamic range (20,000) simulations, which will enable us to resolve the cores of clusters while keeping the simulation volume sufficiently large to ensure a statistically fair sample of the objects of interest. The details of the last year's work are described below.

  3. Diffusive molecular dynamics simulations of lithiation of silicon nanopillars

    NASA Astrophysics Data System (ADS)

    Mendez, J. P.; Ponga, M.; Ortiz, M.

    2018-06-01

    We report diffusive molecular dynamics simulations concerned with the lithiation of Si nano-pillars, i.e., nano-sized Si rods held at both ends by rigid supports. The duration of the lithiation process is of the order of milliseconds, well outside the range of molecular dynamics but readily accessible to diffusive molecular dynamics. The simulations predict a Li15Si4 alloy at the fully lithiated phase, exceedingly large and transient volume increments up to 300% due to the weakening of Si-Si interactions, a crystalline-to-amorphous-to-lithiation phase transition governed by interface kinetics, and high misfit strains and residual stresses resulting in surface cracks and severe structural degradation in the form of extensive porosity, among other effects.

  4. Direct conversion of solar energy to thermal energy

    NASA Astrophysics Data System (ADS)

    Sizmann, Rudolf

    1986-12-01

    Selective coatings (cermets) were produced by simultaneous evaporation of copper and silicon dioxide, and analyzed by computer assisted spectral photometers and ellipsometers; hemispherical emittance was measured. Steady state test procedures for covered and uncovered collectors were investigated. A method for evaluating the transient behavior of collectors was developed. The derived transfer functions describe their transient behavior. A stochastic approach was used for reducing the meteorological data volume. Data sets which are statistically equivalent to the original data can be synthesized. A simulation program for solar systems using analytical solutions of differential equations was developed. A large solar DHW system was optimized by a detailed modular simulation program. A microprocessor-assisted data acquisition system records the four characteristics of solar cells and solar cell systems in less than 10 msec. Measurements of a large photovoltaic installation (50 sqm) are reported.

  5. High-Order Numerical Simulations of Wind Turbine Wakes

    NASA Astrophysics Data System (ADS)

    Kleusberg, E.; Mikkelsen, R. F.; Schlatter, P.; Ivanell, S.; Henningson, D. S.

    2017-05-01

    Previous attempts to describe the structure of wind turbine wakes and their mutual interaction were mostly limited to large-eddy and Reynolds-averaged Navier-Stokes simulations using finite-volume solvers. We employ the higher-order spectral-element code Nek5000 to study the influence of numerical aspects on the prediction of the wind turbine wake structure and the wake interaction between two turbines. The spectral-element method enables an accurate representation of the vortical structures, with lower numerical dissipation than the more commonly used finite-volume codes. The wind-turbine blades are modeled as body forces using the actuator-line method (ACL) in the incompressible Navier-Stokes equations. Both tower and nacelle are represented with appropriate body forces. An inflow boundary condition is used which emulates the homogeneous isotropic turbulence of wind-tunnel flows. We validate the implementation against results from experimental campaigns undertaken at the Norwegian University of Science and Technology (NTNU Blind Tests), investigate parametric influences and compare computational aspects with existing numerical simulations. In general the results show good agreement between the experiments and the numerical simulations, both for a single-turbine setup and for a two-turbine setup where the turbines are offset in the spanwise direction. A shift in the wake center caused by the tower wake is detected, similar to the experiments. The additional velocity deficit caused by the tower agrees well with the experimental data. The wake is captured well by Nek5000 in comparison with experiments, both for the single wind turbine and in the two-turbine setup. The blade loading, however, shows large discrepancies for the high-turbulence, two-turbine case. While the experiments measured higher thrust for the downstream turbine than for the upstream turbine, the opposite was observed in Nek5000.

  6. The influence of voxel size on atom probe tomography data.

    PubMed

    Torres, K L; Daniil, M; Willard, M A; Thompson, G B

    2011-05-01

    A methodology for determining the optimal voxel size for phase thresholding in nanostructured materials was developed using an atom simulator and a model system of a fixed two-phase composition and volume fraction. The voxel size range was banded by the atom count within each voxel. Some voxel edge lengths were found to be too large, resulting in an averaging of compositional fluctuations; others were too small with concomitant decreases in the signal-to-noise ratio for phase identification. The simulated methodology was then applied to the more complex experimentally determined data set collected from a (Co(0.95)Fe(0.05))(88)Zr(6)Hf(1)B(4)Cu(1) two-phase nanocomposite alloy to validate the approach. In this alloy, Zr and Hf segregated to an intergranular amorphous phase while Fe preferentially segregated to a crystalline phase during the isothermal annealing step that promoted primary crystallization. The atom probe data analysis of the volume fraction was compared to transmission electron microscopy (TEM) dark-field imaging analysis and a lever rule analysis of the volume fraction within the amorphous and crystalline phases of the ribbon. Copyright © 2011 Elsevier B.V. All rights reserved.
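    The voxel-size tradeoff has a compact 1D analogue (a toy chain of atoms, not the alloy data or the authors' methodology): voxels much larger than the microstructure average the two phases into a uniform composition, while very small voxels are dominated by counting noise.

```python
import random

def voxel_compositions(types, voxel_atoms):
    """Split a 1D chain of atom types (1 = solute, 0 = solvent) into voxels
    of `voxel_atoms` atoms each; return the solute fraction in every voxel."""
    return [sum(types[i:i + voxel_atoms]) / voxel_atoms
            for i in range(0, len(types) - voxel_atoms + 1, voxel_atoms)]

# Two-phase chain: 20-atom domains alternating between 80% and 20% solute.
random.seed(7)
types = []
for domain in range(200):
    frac = 0.8 if domain % 2 == 0 else 0.2
    types += [1 if random.random() < frac else 0 for _ in range(20)]

for voxel_atoms in (5, 20, 200):
    comps = voxel_compositions(types, voxel_atoms)
    print(voxel_atoms, round(max(comps) - min(comps), 2))
```

    The 200-atom voxels wash out the compositional fluctuation almost entirely, the 20-atom voxels resolve the two phases, and the 5-atom voxels exaggerate the contrast with statistical noise, which is the qualitative behavior the paper's simulated methodology quantifies.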

  7. Lee-Yang zero analysis for the study of QCD phase structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ejiri, Shinji

    2006-03-01

    We comment on the Lee-Yang zero analysis for the study of the phase structure of QCD at high temperature and baryon number density by Monte Carlo simulations. We find that the sign problem for nonzero density QCD induces a serious problem in the finite volume scaling analysis of the Lee-Yang zeros for the investigation of the order of the phase transition. If the sign problem occurs at large volume, the Lee-Yang zeros will always approach the real axis of the complex parameter plane in the thermodynamic limit. This implies that a scaling behavior which would suggest a crossover transition will not be obtained. To clarify this problem, we discuss the Lee-Yang zero analysis for SU(3) pure gauge theory as a simple example without the sign problem, and then consider the case of nonzero density QCD. It is suggested that the distribution of the Lee-Yang zeros in the complex parameter space obtained by each simulation could be more important information for the investigation of the critical endpoint in the (T, μ_q) plane than the finite volume scaling behavior.
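    The object of the analysis can be made concrete with a toy model (a small 1D Ising chain, not QCD; all parameters are illustrative): the partition function is a polynomial in the fugacity-like variable z = exp(-2βh), and for a ferromagnetic coupling the Lee-Yang circle theorem places all of its zeros on the unit circle; how they approach the real axis with growing volume is what encodes the transition.

```python
import itertools, math

def partition_polynomial(n, beta_j):
    """Coefficients c[m] of Z proportional to sum_m c[m] z^m, z = exp(-2*beta*h),
    by exact enumeration of a periodic 1D Ising chain of n spins."""
    coeff = [0.0] * (n + 1)
    for spins in itertools.product((-1, 1), repeat=n):
        bonds = sum(spins[i] * spins[(i + 1) % n] for i in range(n))
        n_down = spins.count(-1)          # the power of z for this configuration
        coeff[n_down] += math.exp(beta_j * bonds)
    return coeff

def durand_kerner(coeff, iters=200):
    """All complex roots of a polynomial (coefficients from low to high degree)."""
    deg = len(coeff) - 1
    c = [a / coeff[-1] for a in coeff]    # make monic
    roots = [(0.4 + 0.9j) ** k for k in range(deg)]
    for _ in range(iters):
        for i in range(deg):
            p = sum(c[m] * roots[i] ** m for m in range(deg + 1))
            q = 1.0 + 0.0j
            for j in range(deg):
                if j != i:
                    q *= roots[i] - roots[j]
            roots[i] -= p / q
    return roots

zeros = durand_kerner(partition_polynomial(6, beta_j=0.5))
print(sorted(abs(z) for z in zeros))      # circle theorem: all moduli near 1
```

    The abstract's point is about the positions of such zeros: with a sign problem the sampled distribution gets distorted in a way that drives them toward the real axis regardless of the true transition order.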

  8. Determination of component volumes of lipid bilayers from simulations.

    PubMed Central

    Petrache, H I; Feller, S E; Nagle, J F

    1997-01-01

    An efficient method for extracting volumetric data from simulations is developed. The method is illustrated using a recent atomic-level molecular dynamics simulation of an L-alpha phase 1,2-dipalmitoyl-sn-glycero-3-phosphocholine bilayer. Results from this simulation are obtained for the volumes of water (VW), lipid (VL), chain methylenes (V2), chain terminal methyls (V3), and lipid headgroups (VH), including separate volumes for carboxyl (Vcoo), glyceryl (Vgl), phosphoryl (VPO4), and choline (Vchol) groups. The method assumes only that each group has the same average volume regardless of its location in the bilayer, and this assumption is then tested with the current simulation. The volumes obtained agree well with the values VW and VL that have been obtained directly from experiment, as well as with the volumes VH, V2, and V3 that require certain assumptions in addition to the experimental data. This method should help to support and refine some assumptions that are necessary when interpreting experimental data. PMID:9129826
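    The core of such a method can be sketched as a least-squares problem (a schematic reconstruction under the stated assumption, not the authors' code): if each group g has a fixed average volume V_g, every slab of the simulation box must satisfy sum_g n_g(z) * V_g = V_slab, and the overdetermined system over all slabs is solved for the V_g.

```python
def component_volumes(counts, slab_volume):
    """Least-squares group volumes V_g from the space-filling constraints
    sum_g counts[z][g] * V_g = slab_volume, one equation per slab z."""
    g = len(counts[0])
    # Normal equations (A^T A) v = A^T b, with b = slab_volume for every slab.
    ata = [[sum(row[i] * row[j] for row in counts) for j in range(g)]
           for i in range(g)]
    atb = [sum(row[i] * slab_volume for row in counts) for i in range(g)]
    # Gaussian elimination with partial pivoting (fine for a handful of groups).
    for col in range(g):
        piv = max(range(col, g), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        atb[col], atb[piv] = atb[piv], atb[col]
        for r in range(col + 1, g):
            f = ata[r][col] / ata[col][col]
            for c in range(col, g):
                ata[r][c] -= f * ata[col][c]
            atb[r] -= f * atb[col]
    v = [0.0] * g
    for r in range(g - 1, -1, -1):
        tail = sum(ata[r][c] * v[c] for c in range(r + 1, g))
        v[r] = (atb[r] - tail) / ata[r][r]
    return v

# Synthetic check: two groups with true volumes 30 and 50 (arbitrary units)
# and slab volume 1000; the occupancies below fill each slab exactly.
counts = [[10, 14], [20, 8], [5, 17], [30, 2]]
print(component_volumes(counts, 1000.0))
```

    With consistent synthetic occupancies the solver recovers the group volumes exactly; with real simulation counts the residuals of the fit provide the test of the equal-average-volume assumption mentioned in the abstract.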

  9. The potential of coordinated reservoir operation for flood mitigation in large basins - A case study on the Bavarian Danube using coupled hydrological-hydrodynamic models

    NASA Astrophysics Data System (ADS)

    Seibert, S. P.; Skublics, D.; Ehret, U.

    2014-09-01

    The coordinated operation of reservoirs in large-scale river basins has great potential to improve flood mitigation. However, this requires large scale hydrological models to translate the effect of reservoir operation to downstream points of interest, in a quality sufficient for the iterative development of optimized operation strategies. And, of course, it requires reservoirs large enough to make a noticeable impact. In this paper, we present and discuss several methods dealing with these prerequisites for reservoir operation using the example of three major floods in the Bavarian Danube basin (45,000 km2) and nine reservoirs therein: We start by presenting an approach for multi-criteria evaluation of model performance during floods, including aspects of local sensitivity to simulation quality. Then we investigate the potential of joint hydrologic-2d-hydrodynamic modeling to improve model performance. Based on this, we evaluate upper limits of reservoir impact under idealized conditions (perfect knowledge of future rainfall) with two methods: Detailed simulations and statistical analysis of the reservoirs' specific retention volume. Finally, we investigate to what degree reservoir operation strategies optimized for local (downstream vicinity to the reservoir) and regional (at the Danube) points of interest are compatible. With respect to model evaluation, we found that the consideration of local sensitivities to simulation quality added valuable information not included in the other evaluation criteria (Nash-Sutcliffe efficiency and Peak timing). With respect to the second question, adding hydrodynamic models to the model chain did, contrary to our expectations, not improve simulations, despite the fact that under idealized conditions (using observed instead of simulated lateral inflow) the hydrodynamic models clearly outperformed the routing schemes of the hydrological models. 
Apparently, the advantages of hydrodynamic models could not be fully exploited when fed by output from hydrological models afflicted with systematic errors in volume and timing. This effect could potentially be reduced by joint calibration of the hydrological-hydrodynamic model chain. Finally, based on the combination of the simulation-based and statistical impact assessment, we identified one reservoir potentially useful for coordinated, regional flood mitigation for the Danube. While this finding is specific to our test basin, the more interesting and generally valid finding is that operation strategies optimized for local and regional flood mitigation are not necessarily mutually exclusive, sometimes they are identical, sometimes they can, due to temporal offsets, be pursued simultaneously.

  10. Simulation of Amorphous Silicon Anode in Lithium-Ion Batteries

    NASA Astrophysics Data System (ADS)

    Wang, Miao

    The energy density of the current generation of Li-ion batteries (LIBs) is only about 1% of that of gasoline. Improving the energy density of the rechargeable battery is critical for vehicle electrification. Employing high capacity electrode materials is a key factor in this endeavor. Silicon (Si) is one of the high capacity anode materials for LIBs. However, Si experiences large volume variation (up to 300%) during battery cycling, which affects the structural integrity of the battery and results in rapid capacity fading. It has been shown that the cycle life of the Si anode can be improved significantly through various novel electrode designs. So far, such work has been conducted through experiments. Numerical simulations have the potential for design optimization of LIBs, as demonstrated in multiphysics models for LIBs with graphite anodes. This research extends a previously developed microstructure-resolved multiphysics (MRM) battery model to LIBs with an amorphous Si (a-Si) anode. The MRM model considers the electrochemical reactions, Li transport in electrodes and electrolyte, Li insertion induced volume change, mechanical strains and stresses, material property evolution with lithiation, and the chemo-mechanical coupling. The model is solved using the finite element package COMSOL Multiphysics. The major challenges in this work are the large deformation of the Si, and the uncertainty in parameters and the coupling relation. To simulate the large deformation of Si, a large strain based formulation for the concentration induced volume expansion was used. The electrolyte was modeled as a fluid. A method to simulate the galvanostatic charge/discharge of a finite deformation electrode with a moving boundary was developed. Important model parameters were determined one by one by correlating the simulation to appropriate experiments. For example, the Li diffusivity in Si reported in the literature varies from 10^-13 to 10^-19 m^2/s. 
To estimate this parameter, an experiment on the two-phase lithiation of a-Si nanospheres, performed in situ in a transmission electron microscope, was simulated. The diffusivity was found to be on the order of 10^-17 m^2/s for the lithium-poor phase in the first lithiation and 10^-15 m^2/s for the lithium-rich phase and in subsequent cycles. The reaction rate constant and the apparent transfer coefficient were determined in a similar way using different experiments. In the literature, different forms of chemo-mechanical coupling theories have been proposed for Li diffusion in Si. The coupling relationship and parameters were often derived from one type of experiment even though the process is highly coupled. In this work, the chemo-mechanical coupling was investigated by simulations of two geometries: a thin film and a sphere. A strong asymmetric rate behavior between lithiation and delithiation has been observed in thin-film a-Si anodes but not in other geometries. The results reveal that the rate behavior is affected by the geometry and constraint of the electrode, the chemo-mechanical coupling, and the prior process. A substrate-constrained film has a relatively low surface/volume ratio and a constant surface area, so its lithiation has a strong tendency to be hindered by surface limitation. The chemo-mechanical coupling plays an important role in the specific rate behavior of a geometry. Finally, an MRM model was built for a half cell with a-Si nanowalls as the anode. The specific and volumetric capacities of the cell as a function of size, length/size ratio, spacing of the nanostructure, and the Li+ concentration in the electrolyte were investigated. The results show that factors reducing the concentration polarization can enhance the maximum achievable SOC of the cell. However, the cell with the highest SOC does not necessarily deliver the highest capacity.

  11. Implementation of a roughness element to trip transition in large-eddy simulation

    NASA Astrophysics Data System (ADS)

    Boudet, J.; Monier, J.-F.; Gao, F.

    2015-02-01

In aerodynamics, the laminar or turbulent regime of a boundary layer has a strong influence on friction or heat transfer. In practical applications, it is sometimes necessary to trip the transition to turbulence, and a common way is by use of a roughness element (e.g. a step) on the wall. The present paper is concerned with the numerical implementation of such a trip in large-eddy simulations. The study is carried out on a flat-plate boundary layer configuration, with Reynolds number Rex = 1.3 × 10^6. First, this work brings the opportunity to introduce a practical methodology to assess convergence in large-eddy simulations. Second, concerning the trip implementation, a volume source term is proposed and is shown to yield a smoother and faster transition than a grid step. Moreover, it is easier to implement and more adaptable. Finally, two subgrid-scale models are tested: the WALE model of Nicoud and Ducros (Flow Turbul. Combust., vol. 62, 1999) and the shear-improved Smagorinsky model of Lévêque et al. (J. Fluid Mech., vol. 570, 2007). Both models allow transition, but the former appears to yield a faster transition and a better prediction of friction in the turbulent regime.
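
    The volume source term idea can be sketched in a few lines: a body force that is nonzero only inside a small region of the domain, with a random component to seed transition. The forcing shape, amplitudes, and region below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

# Minimal sketch of a transition trip implemented as a volume source
# term: inside a narrow region [x0, x0 + width], add a body force with
# a steady component plus random perturbations that seed instability.
# Amplitude and extent are illustrative only.
def trip_source(x, x0=0.2, width=0.02, amp=0.05, rng=None):
    """Return a forcing array that is nonzero only inside the trip region."""
    rng = np.random.default_rng() if rng is None else rng
    mask = (x >= x0) & (x <= x0 + width)
    f = np.zeros_like(x)
    f[mask] = amp * (1.0 + 0.5 * rng.standard_normal(mask.sum()))
    return f

x = np.linspace(0.0, 1.0, 501)
u = np.ones_like(x)           # uniform streamwise velocity
dt = 1e-3
u = u + dt * trip_source(x)   # forcing applied only in the trip region
```

    Unlike a grid step, such a source term needs no local mesh refinement and its amplitude and extent can be tuned without regenerating the grid, which is the adaptability the abstract refers to.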

  12. Measuring global monopole velocities, one by one

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lopez-Eiguren, Asier; Urrestilla, Jon; Achúcarro, Ana, E-mail: asier.lopez@ehu.eus, E-mail: jon.urrestilla@ehu.eus, E-mail: achucar@lorentz.leidenuniv.nl

We present an estimation of the average velocity of a network of global monopoles in a cosmological setting using large numerical simulations. In order to obtain the value of the velocity, we improve some already known methods, and present a new one. This new method estimates individual global monopole velocities in a network, by means of detecting each monopole position in the lattice and following the path described by each one of them. Using our new estimate we can settle an open question previously posed in the literature: velocity-dependent one-scale (VOS) models for global monopoles predict two branches of scaling solutions, one with monopoles moving at subluminal speeds and one with monopoles moving at luminal speeds. Previous attempts to estimate monopole velocities had large uncertainties and were not able to settle that question. Our simulations find no evidence of a luminal branch. We also estimate the values of the parameters of the VOS model. With our new method we can also study the microphysics of the complicated dynamics of individual monopoles. Finally we use our large simulation volume to compare the results from the different estimator methods, as well as to assess the validity of the numerical approximations made.
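
    The core of a per-monopole velocity estimate is a finite difference between matched positions on a periodic lattice. A minimal sketch (the authors' defect-detection and path-following steps are not reproduced here; positions are assumed already matched between snapshots):

```python
import numpy as np

# Per-defect velocity from two snapshots of matched defect positions on
# a periodic box: take the minimal-image displacement and divide by the
# time step. A toy stand-in for the tracking idea, not the authors'
# detection algorithm.
def defect_velocities(pos0, pos1, dt, box):
    """pos0, pos1: (N, 3) arrays of matched positions; box: periodic size."""
    d = np.asarray(pos1, dtype=float) - np.asarray(pos0, dtype=float)
    d -= box * np.round(d / box)          # wrap to minimal image
    return np.linalg.norm(d, axis=1) / dt

# A defect crossing the periodic boundary moves 0.1, not 9.9:
print(defect_velocities([[0.0, 0.0, 0.0]], [[9.9, 0.0, 0.0]], 1.0, 10.0))
```

    Averaging these individual speeds over the network gives the mean velocity that enters the VOS model.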

  13. UH-60A Black Hawk engineering simulation program. Volume 1: Mathematical model

    NASA Technical Reports Server (NTRS)

    Howlett, J. J.

    1981-01-01

A nonlinear mathematical model of the UH-60A Black Hawk helicopter was developed. This mathematical model, which was based on the Sikorsky General Helicopter (Gen Hel) Flight Dynamics Simulation, provides NASA with an engineering simulation for performance and handling qualities evaluations. The mathematical model is a total-systems definition of the Black Hawk helicopter represented at a uniform level of sophistication considered necessary for handling qualities evaluations. The model is a total-force, large-angle representation in six rigid-body degrees of freedom. Rotor blade flapping, lagging, and hub rotational degrees of freedom are also represented. In addition to the basic helicopter modules, supportive modules were defined for the landing interface, power unit, ground effects, and gust penetration. Information defining the cockpit environment relevant to pilot-in-the-loop simulation is presented.

  14. State-space reduction and equivalence class sampling for a molecular self-assembly model.

    PubMed

    Packwood, Daniel M; Han, Patrick; Hitosugi, Taro

    2016-07-01

    Direct simulation of a model with a large state space will generate enormous volumes of data, much of which is not relevant to the questions under study. In this paper, we consider a molecular self-assembly model as a typical example of a large state-space model, and present a method for selectively retrieving 'target information' from this model. This method partitions the state space into equivalence classes, as identified by an appropriate equivalence relation. The set of equivalence classes H, which serves as a reduced state space, contains none of the superfluous information of the original model. After construction and characterization of a Markov chain with state space H, the target information is efficiently retrieved via Markov chain Monte Carlo sampling. This approach represents a new breed of simulation techniques which are highly optimized for studying molecular self-assembly and, moreover, serves as a valuable guideline for analysis of other large state-space models.
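
    The state-space-reduction idea can be illustrated with a toy equivalence relation: length-n occupation patterns lumped by their number of occupied sites, so the reduced space H is just {0, ..., n}, with each class carrying a binomial multiplicity. This invariant is purely illustrative and is not the self-assembly model of the paper:

```python
from itertools import product
from collections import Counter

# Toy illustration of state-space reduction by an equivalence relation:
# microstates are length-n binary occupation patterns, and two states
# are equivalent iff they contain the same number of occupied sites.
# The reduced state space H has only n + 1 classes, and the class
# weights C(n, k) carry all the multiplicity information the sampler
# needs, discarding the superfluous detail of individual patterns.
def reduce_state_space(n):
    classes = Counter(sum(state) for state in product((0, 1), repeat=n))
    return dict(classes)   # class label -> number of microstates

print(reduce_state_space(4))   # {0: 1, 1: 4, 2: 6, 3: 4, 4: 1}
```

    Here 2^n microstates collapse to n + 1 classes; a Markov chain Monte Carlo sampler over H with these weights then visits classes in proportion to their total probability, which is the mechanism the abstract describes.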

  15. The topology of large-scale structure. I - Topology and the random phase hypothesis. [galactic formation models

    NASA Technical Reports Server (NTRS)

    Weinberg, David H.; Gott, J. Richard, III; Melott, Adrian L.

    1987-01-01

    Many models for the formation of galaxies and large-scale structure assume a spectrum of random phase (Gaussian), small-amplitude density fluctuations as initial conditions. In such scenarios, the topology of the galaxy distribution on large scales relates directly to the topology of the initial density fluctuations. Here a quantitative measure of topology - the genus of contours in a smoothed density distribution - is described and applied to numerical simulations of galaxy clustering, to a variety of three-dimensional toy models, and to a volume-limited sample of the CfA redshift survey. For random phase distributions the genus of density contours exhibits a universal dependence on threshold density. The clustering simulations show that a smoothing length of 2-3 times the mass correlation length is sufficient to recover the topology of the initial fluctuations from the evolved galaxy distribution. Cold dark matter and white noise models retain a random phase topology at shorter smoothing lengths, but massive neutrino models develop a cellular topology.
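
    The "universal dependence on threshold density" for random phase fields is the well-known Gaussian genus curve, g(ν) ∝ (1 − ν²) exp(−ν²/2), where ν is the threshold in units of the standard deviation; only the amplitude depends on the power spectrum and smoothing length. A minimal sketch:

```python
import numpy as np

# Genus of density contours for a Gaussian (random-phase) field as a
# function of threshold nu (in standard deviations):
#   g(nu) = A * (1 - nu**2) * exp(-nu**2 / 2)
# The shape is universal: positive (sponge-like topology) for |nu| < 1,
# negative (isolated clusters or voids) for |nu| > 1, and symmetric in
# nu. The amplitude A is spectrum- and smoothing-dependent.
def gaussian_genus(nu, amplitude=1.0):
    nu = np.asarray(nu, dtype=float)
    return amplitude * (1.0 - nu**2) * np.exp(-nu**2 / 2.0)
```

    Departures of a measured genus curve from this shape, e.g. the shift toward a cellular topology mentioned for massive neutrino models, are the diagnostic of non-random-phase initial conditions.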

  16. Concepts for on-board satellite image registration. Volume 2: IAS prototype performance evaluation standard definition. [NEEDS Information Adaptive System

    NASA Technical Reports Server (NTRS)

    Daluge, D. R.; Ruedger, W. H.

    1981-01-01

    Problems encountered in testing onboard signal processing hardware designed to achieve radiometric and geometric correction of satellite imaging data are considered. These include obtaining representative image and ancillary data for simulation and the transfer and storage of a large quantity of image data at very high speed. The high resolution, high speed preprocessing of LANDSAT-D imagery is considered.

  17. Simulations of four-dimensional simplicial quantum gravity as dynamical triangulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agishtein, M.E.; Migdal, A.A.

    1992-04-20

In this paper, four-dimensional simplicial quantum gravity is simulated using the dynamical triangulation approach. The authors studied simplicial manifolds of spherical topology and found the critical line for the cosmological constant as a function of the gravitational one, separating the phases of open and closed Universe. When the bare cosmological constant approaches this line from above, the four-volume grows: the authors reached about 5 × 10^4 simplexes, which proved to be sufficient for the statistical limit of infinite volume. However, for the genuine continuum theory of gravity, the parameters of the lattice model should be further adjusted to reach the second-order phase transition point, where the correlation length grows to infinity. The authors varied the gravitational constant, and they found a first-order phase transition, similar to the one found in the three-dimensional model, except that in 4D the fluctuations are rather large at the transition point, so that it is close to a second-order phase transition. The average curvature in cutoff units is large and positive in one phase (gravity), and small and negative in the other (antigravity). The authors studied the fractal geometry of both phases, using the heavy particle propagator to define the geodesic map, as well as with the old approach using the shortest lattice paths.

  18. Generating equilateral random polygons in confinement

    NASA Astrophysics Data System (ADS)

    Diao, Y.; Ernst, C.; Montemayor, A.; Ziegler, U.

    2011-10-01

One challenging problem in biology is to understand the mechanism of DNA packing in a confined volume such as a cell. It is known that confined circular DNA is often knotted and hence the topology of the extracted (and relaxed) circular DNA can be used as a probe of the DNA packing mechanism. However, in order to properly estimate the topological properties of the confined circular DNA structures using mathematical models, it is necessary to generate large ensembles of simulated closed chains (i.e. polygons) of equal edge lengths that are confined in a volume such as a sphere of a certain fixed radius. Finding efficient algorithms that properly sample the space of such confined equilateral random polygons is a difficult problem. In this paper, we propose a method that generates confined equilateral random polygons based on their probability distribution. This method requires the creation of a large database initially. However, once the database has been created, a confined equilateral random polygon of length n can be generated in linear time in terms of n. The errors introduced by the method can be controlled and reduced by the refinement of the database. Furthermore, our numerical simulations indicate that these errors are unbiased and tend to cancel each other in a long polygon.
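
    One simple (if inefficient) way to sample confined equilateral polygons is a crankshaft Markov chain: rotate a random subchain about the chord joining two vertices, which preserves both closure and edge lengths, and reject moves that leave the sphere. This generic MCMC sketch is not the probability-database method proposed in the paper, and it illustrates why faster samplers are wanted:

```python
import numpy as np

# Crankshaft moves on an equilateral closed polygon confined to a
# sphere of radius R centered at the origin. Rotating the subchain
# between vertices i and j about the chord through them fixes both
# endpoints, so closure and all edge lengths are preserved exactly.
def regular_polygon(n):
    """Planar regular n-gon with unit edges (a valid starting state)."""
    theta = 2 * np.pi * np.arange(n) / n
    r = 1.0 / (2.0 * np.sin(np.pi / n))      # circumradius for unit edges
    return np.column_stack([r * np.cos(theta), r * np.sin(theta),
                            np.zeros(n)])

def crankshaft_step(verts, radius, rng):
    n = len(verts)
    i, j = sorted(rng.choice(n, size=2, replace=False))
    if j - i < 2:                 # no interior vertices to rotate
        return verts
    axis = verts[j] - verts[i]
    norm = np.linalg.norm(axis)
    if norm < 1e-12:
        return verts
    axis = axis / norm
    angle = rng.uniform(0.0, 2.0 * np.pi)
    sub = verts[i + 1:j] - verts[i]
    c, s = np.cos(angle), np.sin(angle)
    # Rodrigues rotation of the subchain about the chord direction
    rot = sub * c + np.cross(axis, sub) * s + np.outer(sub @ axis, axis) * (1 - c)
    cand = verts.copy()
    cand[i + 1:j] = verts[i] + rot
    if np.all(np.linalg.norm(cand, axis=1) <= radius):
        return cand               # accept: polygon still confined
    return verts                  # reject: move leaves the sphere
```

    In tight confinement most moves are rejected and mixing is slow, which motivates the paper's approach of generating polygons directly from their probability distribution via a precomputed database.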

  19. The motion of a train of vesicles in channel flow

    NASA Astrophysics Data System (ADS)

    Barakat, Joseph; Shaqfeh, Eric

    2017-11-01

The inertialess motion of a train of lipid-bilayer vesicles flowing through a channel is simulated using a 3D boundary integral equation method. Steady-state results are reported for vesicles positioned concentrically inside cylindrical channels of circular, square, and rectangular cross sections. The vesicle translational velocity U and excess channel pressure drop Δp+ depend strongly on the ratio of the vesicle radius to the hydraulic radius λ and the vesicle reduced volume υ. "Deflated vesicles" of lower reduced volume υ are more streamlined and translate with greater velocity U relative to the mean flow velocity V. Increasing the vesicle size (λ) increases the wall friction force and extra pressure drop Δp+, which in turn reduces the vesicle velocity U. Hydrodynamic interactions between vesicles in a periodic train are largely screened by the channel walls, in accordance with previous results for spheres and drops. The hydraulic resistance is compared across different cross sections, and a simple correction factor is proposed to unify the results. Nonlinear effects are observed when β - the ratio of membrane bending elasticity to viscous traction - is changed. The simulation results show excellent agreement with available experimental measurements as well as a previously reported "small-gap theory" valid for large values of λ. NSF CBET 1066263/1066334.
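
    The reduced volume υ compares the vesicle's volume to that of a sphere with the same surface area, υ = 6√π V / A^(3/2), so υ = 1 for a sphere and υ < 1 for the "deflated" shapes described above. A minimal sketch:

```python
import numpy as np

# Reduced volume of a closed surface with volume V and area A:
#   v = 6 * sqrt(pi) * V / A**1.5
# Normalized so that a sphere of any radius gives exactly 1; any other
# shape with the same area encloses less volume, hence v < 1.
def reduced_volume(V, A):
    return 6.0 * np.sqrt(np.pi) * V / A**1.5

R = 2.0
V_sphere = 4.0 / 3.0 * np.pi * R**3
A_sphere = 4.0 * np.pi * R**2
print(reduced_volume(V_sphere, A_sphere))   # 1.0 (up to rounding)
```

    The dimensionless pair (λ, υ) is all the geometry the steady-state results above depend on, which is why the correction factor across cross sections can be so simple.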

  20. Use of high-volume outdoor smog chamber photo-reactors for studying physical and chemical atmospheric aerosol formation and composition

    NASA Astrophysics Data System (ADS)

    Borrás, E.; Ródenas, M.; Vera, T.; Muñoz, A.

    2015-12-01

Atmospheric particulate matter has a large impact on climate, biosphere behaviour and human health. Its study is complex because a large number of species are present at low concentrations and evolve continuously in time, and their chemistry is not easily separated from meteorology and transport processes. Closed systems have been proposed to isolate specific reactions, pollutants or products while controlling the oxidizing environment. High-volume simulation chambers, such as the EUropean PHOtoREactor (EUPHORE), are an essential tool for simulating atmospheric photochemical reactions. This communication describes the latest results on the reactivity of prominent atmospheric pollutants and the subsequent particulate matter formation. Specific experiments focused on organic aerosols have been performed at the EUPHORE photo-reactor. The use of on-line instrumentation, supported by off-line techniques, has provided well-defined reaction profiles and physical properties, and up to 300 different species have been determined in particulate matter. The application fields include the degradation of anthropogenic and biogenic pollutants, and of pesticides, under several atmospheric conditions, studying their contribution to the formation of secondary organic aerosols (SOA). The studies performed at EUPHORE have improved mechanistic understanding of atmospheric degradation processes and knowledge of the chemical and physical properties of the atmospheric particulate matter formed during these processes.

  1. IMPACT OF VENTILATION FREQUENCY AND PARENCHYMAL STIFFNESS ON FLOW AND PRESSURE DISTRIBUTION IN A CANINE LUNG MODEL

    PubMed Central

    Amini, Reza; Kaczka, David W.

    2013-01-01

To determine the impact of ventilation frequency, lung volume, and parenchymal stiffness on ventilation distribution, we developed an anatomically-based computational model of the canine lung. Each lobe of the model consists of an asymmetric branching airway network subtended by terminal, viscoelastic acinar units. The model allows for empiric dependencies of airway segment dimensions and parenchymal stiffness on transpulmonary pressure. We simulated the effects of lung volume and parenchymal recoil on global lung impedance and ventilation distribution from 0.1 to 100 Hz, with mean transpulmonary pressures from 5 to 25 cmH2O. With increasing lung volume, the distribution of acinar flows narrowed and became more synchronous for frequencies below resonance. At higher frequencies, large variations in acinar flow were observed. Maximum acinar flow occurred at the first antiresonance frequency, where lung impedance achieved a local maximum. The distribution of acinar pressures became very heterogeneous and amplified relative to tracheal pressure at the resonant frequency. These data demonstrate the important interaction between frequency and lung tissue stiffness on the distribution of acinar flows and pressures. These simulations provide useful information for the optimization of frequency, lung volume, and mean airway pressure during conventional ventilation or high-frequency oscillatory ventilation (HFOV). Moreover, our model indicates that an optimal HFOV bandwidth exists between the resonant and antiresonant frequencies, for which interregional gas mixing is maximized. PMID:23872936
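
    The resonance behavior can be illustrated with a single-compartment impedance sketch, Z(f) = R + j(2πfI − E/(2πf)) with resistance R, inertance I, and elastance E: the reactance vanishes at f_res = √(E/I)/2π, and stiffer tissue (larger E) raises the resonant frequency. This one-compartment caricature is not the paper's anatomically based branching model:

```python
import numpy as np

# Single-compartment sketch of respiratory input impedance:
#   Z(f) = R + j * (2*pi*f*I - E / (2*pi*f))
# with resistance R, inertance I, and elastance E. The reactance
# crosses zero at the resonant frequency f_res = sqrt(E/I) / (2*pi),
# where |Z| is minimal; increasing E (stiffer parenchyma) pushes
# resonance to higher frequency.
def impedance(f, R, I, E):
    w = 2.0 * np.pi * np.asarray(f, dtype=float)
    return R + 1j * (w * I - E / w)

def resonant_frequency(I, E):
    return np.sqrt(E / I) / (2.0 * np.pi)
```

    Antiresonances (local maxima of |Z|, where the abstract places maximum acinar flow) require at least two coupled compartments, i.e. the heterogeneous network of the full model.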

  2. Automatic training and reliability estimation for 3D ASM applied to cardiac MRI segmentation

    NASA Astrophysics Data System (ADS)

    Tobon-Gomez, Catalina; Sukno, Federico M.; Butakoff, Constantine; Huguet, Marina; Frangi, Alejandro F.

    2012-07-01

Training active shape models requires collecting manual ground-truth meshes in a large image database. While shape information can be reused across multiple imaging modalities, intensity information needs to be imaging modality and protocol specific. In this context, this study has two main purposes: (1) to test the potential of using intensity models learned from MRI simulated datasets and (2) to test the potential of including a measure of reliability during the matching process to increase robustness. We used a population of 400 virtual subjects (XCAT phantom), and two clinical populations of 40 and 45 subjects. Virtual subjects were used to generate simulated datasets (MRISIM simulator). Intensity models were trained both on simulated and real datasets. The trained models were used to segment the left ventricle (LV) and right ventricle (RV) from real datasets. Segmentations were also obtained with and without reliability information. Performance was evaluated with point-to-surface and volume errors. Simulated intensity models obtained average accuracy comparable to inter-observer variability for LV segmentation. The inclusion of reliability information reduced volume errors in hypertrophic patients (EF errors from 17 ± 57% to 10 ± 18%; LV MASS errors from -27 ± 22 g to -14 ± 25 g), and in heart failure patients (EF errors from -8 ± 42% to -5 ± 14%). The RV model of the simulated images needs further improvement to better resemble image intensities around the myocardial edges. Both for real and simulated models, reliability information increased segmentation robustness without penalizing accuracy.

  3. Automatic training and reliability estimation for 3D ASM applied to cardiac MRI segmentation.

    PubMed

    Tobon-Gomez, Catalina; Sukno, Federico M; Butakoff, Constantine; Huguet, Marina; Frangi, Alejandro F

    2012-07-07

    Training active shape models requires collecting manual ground-truth meshes in a large image database. While shape information can be reused across multiple imaging modalities, intensity information needs to be imaging modality and protocol specific. In this context, this study has two main purposes: (1) to test the potential of using intensity models learned from MRI simulated datasets and (2) to test the potential of including a measure of reliability during the matching process to increase robustness. We used a population of 400 virtual subjects (XCAT phantom), and two clinical populations of 40 and 45 subjects. Virtual subjects were used to generate simulated datasets (MRISIM simulator). Intensity models were trained both on simulated and real datasets. The trained models were used to segment the left ventricle (LV) and right ventricle (RV) from real datasets. Segmentations were also obtained with and without reliability information. Performance was evaluated with point-to-surface and volume errors. Simulated intensity models obtained average accuracy comparable to inter-observer variability for LV segmentation. The inclusion of reliability information reduced volume errors in hypertrophic patients (EF errors from 17 ± 57% to 10 ± 18%; LV MASS errors from -27 ± 22 g to -14 ± 25 g), and in heart failure patients (EF errors from -8 ± 42% to -5 ± 14%). The RV model of the simulated images needs further improvement to better resemble image intensities around the myocardial edges. Both for real and simulated models, reliability information increased segmentation robustness without penalizing accuracy.

  4. Molecular dynamics simulation of three plastic additives' diffusion in polyethylene terephthalate.

    PubMed

    Li, Bo; Wang, Zhi-Wei; Lin, Qin-Bao; Hu, Chang-Ying

    2017-06-01

    Accurate diffusion coefficient data of additives in a polymer are of paramount importance for estimating the migration of the additives over time. This paper shows how this diffusion coefficient can be estimated for three plastic additives [2-(2'-hydroxy-5'-methylphenyl) (UV-P), 2,6-di-tert-butyl-4-methylphenol (BHT) and di-(2-ethylhexyl) phthalate (DEHP)] in polyethylene terephthalate (PET) using the molecular dynamics (MD) simulation method. MD simulations were performed at temperatures of 293-433 K. The diffusion coefficient was calculated through the Einstein relationship connecting the data of mean-square displacement at different times. Comparison of the diffusion coefficients simulated by the MD simulation technique, predicted by the Piringer model and experiments, showed that, except for a few samples, the MD-simulated values were in agreement with the experimental values within one order of magnitude. Furthermore, the diffusion process for additives is discussed in detail, and four factors - the interaction energy between additive molecules and PET, fractional free volume, molecular shape and size, and self-diffusion of the polymer - are proposed to illustrate the microscopic diffusion mechanism. The movement trajectories of additives in PET cell models suggested that the additive molecules oscillate slowly rather than hopping for a long time. Occasionally, when a sufficiently large hole was created adjacently, the molecule could undergo spatial motion by jumping into the free-volume hole and consequently start a continuous oscillation and hop. The results indicate that MD simulation is a useful approach for predicting the microstructure and diffusion coefficient of plastic additives, and help to estimate the migration level of additives from PET packaging.
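
    The Einstein relation used here reads, in 3D, MSD(t) → 6Dt at long times, so D follows from the slope of a linear fit over the diffusive regime. A minimal sketch with synthetic data (a real analysis would fit only the late-time linear portion of the MSD):

```python
import numpy as np

# Einstein relation in three dimensions: MSD(t) -> 6*D*t at long times,
# so the diffusion coefficient is one sixth of the slope of a linear
# fit of MSD versus time over the diffusive regime.
def diffusion_coefficient(t, msd):
    slope, _intercept = np.polyfit(t, msd, 1)
    return slope / 6.0

# Synthetic check: data generated with D = 2.5e-17 m^2/s (a value in
# the 10^-13 to 10^-19 range cited above) recovers the input D.
t = np.linspace(1e-9, 1e-6, 200)        # time, s
msd = 6.0 * 2.5e-17 * t                 # mean-square displacement, m^2
print(diffusion_coefficient(t, msd))    # recovers ~2.5e-17
```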

  5. Comparison of manual and automatic segmentation methods for brain structures in the presence of space-occupying lesions: a multi-expert study

    PubMed Central

    Deeley, M A; Chen, A; Datteri, R; Noble, J; Cmelak, A; Donnelly, E; Malcolm, A; Moretti, L; Jaboin, J; Niermann, K; Yang, Eddy S; Yu, David S; Yei, F; Koyama, T; Ding, G X; Dawant, B M

    2011-01-01

The purpose of this work was to characterize expert variation in segmentation of intracranial structures pertinent to radiation therapy, and to assess a registration-driven atlas-based segmentation algorithm in that context. Eight experts were recruited to segment the brainstem, optic chiasm, optic nerves, and eyes, of 20 patients who underwent therapy for large space-occupying tumors. Performance variability was assessed through three geometric measures: volume, Dice similarity coefficient, and Euclidean distance. In addition, two simulated ground truth segmentations were calculated via the simultaneous truth and performance level estimation (STAPLE) algorithm and a novel application of probability maps. The experts and automatic system were found to generate structures of similar volume, though the experts exhibited higher variation with respect to tubular structures. No difference was found between the mean Dice similarity coefficient (DSC) of the automatic and expert delineations as a group at a 5% significance level over all cases and organs. The larger structures of the brainstem and eyes exhibited mean DSC of approximately 0.8–0.9, whereas the tubular chiasm and nerves were lower, approximately 0.4–0.5. Similarly low DSCs have been reported previously without the context of several experts and patient volumes. This study, however, provides evidence that experts are similarly challenged. The average maximum distances (maximum inside, maximum outside) from a simulated ground truth ranged from (−4.3, +5.4) mm for the automatic system to (−3.9, +7.5) mm for the experts considered as a group. Over all the structures in a rank of true positive rates at a 2 mm threshold from the simulated ground truth, the automatic system ranked second of the nine raters. This work underscores the need for large-scale studies utilizing statistically robust numbers of patients and experts in evaluating quality of automatic algorithms. PMID:21725140

  6. Comparison of manual and automatic segmentation methods for brain structures in the presence of space-occupying lesions: a multi-expert study

    NASA Astrophysics Data System (ADS)

    Deeley, M. A.; Chen, A.; Datteri, R.; Noble, J. H.; Cmelak, A. J.; Donnelly, E. F.; Malcolm, A. W.; Moretti, L.; Jaboin, J.; Niermann, K.; Yang, Eddy S.; Yu, David S.; Yei, F.; Koyama, T.; Ding, G. X.; Dawant, B. M.

    2011-07-01

    The purpose of this work was to characterize expert variation in segmentation of intracranial structures pertinent to radiation therapy, and to assess a registration-driven atlas-based segmentation algorithm in that context. Eight experts were recruited to segment the brainstem, optic chiasm, optic nerves, and eyes, of 20 patients who underwent therapy for large space-occupying tumors. Performance variability was assessed through three geometric measures: volume, Dice similarity coefficient, and Euclidean distance. In addition, two simulated ground truth segmentations were calculated via the simultaneous truth and performance level estimation algorithm and a novel application of probability maps. The experts and automatic system were found to generate structures of similar volume, though the experts exhibited higher variation with respect to tubular structures. No difference was found between the mean Dice similarity coefficient (DSC) of the automatic and expert delineations as a group at a 5% significance level over all cases and organs. The larger structures of the brainstem and eyes exhibited mean DSC of approximately 0.8-0.9, whereas the tubular chiasm and nerves were lower, approximately 0.4-0.5. Similarly low DSCs have been reported previously without the context of several experts and patient volumes. This study, however, provides evidence that experts are similarly challenged. The average maximum distances (maximum inside, maximum outside) from a simulated ground truth ranged from (-4.3, +5.4) mm for the automatic system to (-3.9, +7.5) mm for the experts considered as a group. Over all the structures in a rank of true positive rates at a 2 mm threshold from the simulated ground truth, the automatic system ranked second of the nine raters. This work underscores the need for large scale studies utilizing statistically robust numbers of patients and experts in evaluating quality of automatic algorithms.
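
    The Dice similarity coefficient used in both records is DSC = 2|A ∩ B| / (|A| + |B|) for binary masks A and B; a fixed boundary error removes a much larger fraction of a thin tubular structure than of a bulky one, which is why the chiasm and nerves score around 0.4-0.5 while the brainstem and eyes reach 0.8-0.9. A minimal sketch:

```python
import numpy as np

# Dice similarity coefficient between two binary segmentation masks:
#   DSC = 2 * |A intersect B| / (|A| + |B|)
# 1.0 for perfect overlap, 0.0 for disjoint masks.
def dice(a, b):
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two 6x6 squares offset by one voxel: a one-voxel shift already costs
# a third of the overlap for this small structure.
a = np.zeros((10, 10), dtype=bool); a[2:8, 2:8] = True   # 36 voxels
b = np.zeros((10, 10), dtype=bool); b[3:9, 3:9] = True   # 36 voxels
print(dice(a, b))   # 2*25/72, about 0.694
```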

  7. New Methodology for Computing Subaerial Landslide-Tsunamis: Application to the 2015 Tyndall Glacier Landslide, Alaska

    NASA Astrophysics Data System (ADS)

    George, D. L.; Iverson, R. M.; Cannon, C. M.

    2016-12-01

Landslide-generated tsunamis pose significant hazards to coastal communities and infrastructure, but developing models to assess these hazards presents challenges beyond those confronted when modeling seismically generated tsunamis. We present a new methodology in which our depth-averaged two-phase model D-Claw (Proc. Roy. Soc. A, 2014, doi:10.1098/rspa.2013.0819 and doi:10.1098/rspa.2013.0820) is used to simulate all stages of landslide dynamics and subsequent tsunami generation and propagation. D-Claw was developed to simulate landslides and debris-flows, but if granular solids are absent, then the D-Claw equations reduce to the shallow-water equations commonly used to model tsunamis. Because the model describes the evolution of solid and fluid volume fractions, it treats both landslides and tsunamis as special cases of a more general class of phenomena, and the landslide and tsunami can be simulated as a single-layer continuum with spatially and temporally evolving solid-grain concentrations. This seamless approach accommodates wave generation via mass displacement and longitudinal momentum transfer, the dominant mechanisms producing impulse waves when large subaerial landslides impact relatively shallow bodies of water. To test our methodology, we used D-Claw to model a large subaerial landslide and resulting tsunami that occurred on October 17, 2015, in Taan Fjord near the terminus of Tyndall Glacier, Alaska. The estimated landslide volume derived from radiated long-period seismicity (C. Stark (2015), Abstract EP51D-08, AGU Fall Meeting) was about 70-80 million cubic meters. Guided by satellite imagery and this volume estimate, we inferred an approximate landslide basal slip surface, and we used material property values identical to those used in our previous modeling of the 2014 Oso, Washington, landslide. With these inputs the modeled tsunami inundation patterns on shorelines compare well with observations derived from satellite imagery.

  8. Visualizing the Big (and Large) Data from an HPC Resource

    NASA Astrophysics Data System (ADS)

    Sisneros, R.

    2015-10-01

    Supercomputers are built to endure painfully large simulations and contend with resulting outputs. These are characteristics that scientists are all too willing to test the limits of in their quest for science at scale. The data generated during a scientist's workflow through an HPC center (large data) is the primary target for analysis and visualization. However, the hardware itself is also capable of generating volumes of diagnostic data (big data); this presents compelling opportunities to deploy analogous analytic techniques. In this paper we will provide a survey of some of the many ways in which visualization and analysis may be crammed into the scientific workflow as well as utilized on machine-specific data.

  9. An aggregation-volume-bias Monte Carlo investigation on the condensation of a Lennard-Jones vapor below the triple point and crystal nucleation in cluster systems: an in-depth evaluation of the classical nucleation theory.

    PubMed

    Chen, Bin; Kim, Hyunmi; Keasler, Samuel J; Nellas, Ricky B

    2008-04-03

The aggregation-volume-bias Monte Carlo based simulation technique, which has led to our recent success in vapor-liquid nucleation research, was extended to the study of crystal nucleation processes. In contrast to conventional bulk-phase techniques, this method deals with crystal nucleation events in cluster systems. This approach was applied to the crystal nucleation of Lennard-Jonesium under a wide range of undercooling conditions from 35% to 13% below the triple point. It was found that crystal nucleation in these model clusters proceeds initially via a vapor-liquid like aggregation followed by the formation of crystals inside the aggregates. The separation of these two stages of nucleation is distinct except at deeper undercooling conditions where the crystal nucleation barrier was found to diminish. The simulation results obtained for these two nucleation steps are separately compared to the classical nucleation theory (CNT). For the vapor-liquid nucleation step, the CNT was shown to provide a reasonable description of the critical cluster size but overestimate the barrier heights, consistent with previous simulation studies. In contrast, for the crystal nucleation step, nearly perfect agreement with the barrier heights was found between the simulations and the CNT. For the critical cluster size, the comparison is more difficult as the simulation data were found to be sensitive to the definition of the solid cluster, but a stringent criterion and lower undercooling conditions generally lead to results closer to the CNT. Additional simulations at undercooling conditions of 40% or above indicate a nearly barrierless transition from the liquid to crystalline-like structure for sufficiently large clusters, which leads to further departure of the barrier height predicted by the CNT from the simulation data for the aggregation step.
This is consistent with the latest experimental results on argon that show an unusually large underestimation of the nucleation rate by the CNT toward deep undercooling conditions.
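The barrier comparisons above rest on the standard CNT construction: for a cluster free energy of the form ΔG(n) = -nΔμ + a n^(2/3), setting dΔG/dn = 0 gives the critical size and barrier height. A minimal sketch in reduced units (the values of Δμ and a here are illustrative, not the paper's fitted parameters):

```python
def cnt_barrier(delta_mu, a):
    """Critical size and barrier for dG(n) = -n*delta_mu + a*n**(2/3) (reduced units)."""
    n_star = (2.0 * a / (3.0 * delta_mu)) ** 3       # from d(dG)/dn = 0
    dg_star = 4.0 * a ** 3 / (27.0 * delta_mu ** 2)  # barrier height dG(n_star)
    return n_star, dg_star

# illustrative reduced parameters, not fitted to any simulation data
n_star, dg_star = cnt_barrier(delta_mu=1.0, a=3.0)
```

Deeper undercooling corresponds to a larger driving force Δμ, which shrinks both the critical cluster and the barrier, consistent with the near-barrierless behavior reported at 40% undercooling.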

  10. SU-E-T-480: Radiobiological Dose Comparison of Single Fraction SRS, Multi-Fraction SRT and Multi-Stage SRS of Large Target Volumes Using the Linear-Quadratic Formula

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, C; Hrycushko, B; Jiang, S

    2014-06-01

Purpose: To compare the radiobiological effect on large tumors and surrounding normal tissues from single fraction SRS, multi-fractionated SRT, and multi-staged SRS treatment. Methods: An anthropomorphic head phantom with a centrally located large volume target (18.2 cm{sup 3}) was scanned using a 16 slice large bore CT simulator. Scans were imported to the Multiplan treatment planning system where a total prescription dose of 20Gy was used for a single, three staged and three fractionated treatment. Cyber Knife treatment plans were inversely optimized for the target volume to achieve at least 95% coverage of the prescription dose. For the multistage plan, the target was segmented into three subtargets having similar volume and shape. Staged plans for individual subtargets were generated based on a planning technique where the beam MUs of the original plan on the total target volume are changed by weighting the MUs based on projected beam lengths within each subtarget. Dose matrices for each plan were exported in DICOM format and used to calculate equivalent dose distributions in 2Gy fractions using an alpha/beta ratio of 10 for the target and 3 for normal tissue. Results: The single fraction SRS, multi-stage and multi-fractionated SRT plans had an average 2Gy dose equivalent to the target of 62.89Gy, 37.91Gy and 33.68Gy, respectively. The normal tissue within the 12Gy physical dose region had an average 2Gy dose equivalent of 29.55Gy, 16.08Gy and 13.93Gy, respectively. Conclusion: The single fraction SRS plan had the largest predicted biological effect for the target and the surrounding normal tissue. The multi-stage treatment provided a more potent biological effect on the target than the multi-fraction SRT treatment, with a smaller normal tissue effect than the single-fraction SRS treatment.
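The 2 Gy dose equivalents above follow from the linear-quadratic formula, EQD2 = D(d + α/β)/(2 + α/β), where D is the total dose and d the dose per fraction. A minimal sketch using the abstract's 20 Gy prescription and α/β ratios (10 for target, 3 for normal tissue); these uniform-dose values differ from the reported voxel-wise plan averages:

```python
def eqd2(total_dose_gy, n_fractions, alpha_beta):
    """Equivalent dose in 2 Gy fractions via the linear-quadratic formula."""
    d = total_dose_gy / n_fractions  # dose per fraction
    return total_dose_gy * (d + alpha_beta) / (2.0 + alpha_beta)

# 20 Gy prescription: single fraction vs. three fractions, target alpha/beta = 10
single_target = eqd2(20.0, 1, 10.0)   # uniform-dose EQD2 in Gy
staged_target = eqd2(20.0, 3, 10.0)
single_normal = eqd2(20.0, 1, 3.0)    # normal tissue, alpha/beta = 3
```

The strong fractionation sensitivity of the low α/β normal tissue is what drives the sparing seen in the multi-fraction and multi-stage plans.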

  11. Measurement of blood loss during postpartum haemorrhage.

    PubMed

    Lilley, G; Burkett-St-Laurent, D; Precious, E; Bruynseels, D; Kaye, A; Sanders, J; Alikhan, R; Collins, P W; Hall, J E; Collis, R E

    2015-02-01

We set out to validate the accuracy of gravimetric quantification of blood loss during simulated major postpartum haemorrhage and to evaluate the technique in a consecutive cohort of women experiencing major postpartum haemorrhage. The study took place in a large UK delivery suite over a one-year period. All women who experienced major postpartum haemorrhage were eligible for inclusion. For the validation exercise, in a simulated postpartum haemorrhage scenario using known volumes of artificial blood, the accuracy of gravimetric measurement was compared with visual estimation made by delivery suite staff. In the clinical observation study, the blood volume lost during postpartum haemorrhage was measured gravimetrically according to our routine institutional protocol and was correlated with the fall in haemoglobin. The main outcome measure was the accuracy of gravimetric measurement of blood loss. Validation exercise: the mean percentage error of gravimetrically measured blood volume was 4.0±2.7%, compared with visually estimated blood volume with a mean percentage error of 34.7±32.1%. Clinical observation study: 356 out of 6187 deliveries were identified as having major postpartum haemorrhage. The correlation coefficient between measured blood loss and corrected fall in haemoglobin for all patients was 0.77; correlation was stronger (0.80) for postpartum haemorrhage >1500 mL, and similar during routine and out-of-hours working. The accuracy of the gravimetric method was confirmed in simulated postpartum haemorrhage. The clinical study shows that gravimetric measurement of blood loss is correlated with the fall in haemoglobin in postpartum haemorrhage where blood loss exceeds 1500 mL. The method is simple to perform, requires only basic equipment, and can be taught and used by all maternity services during major postpartum haemorrhage.

  12. The Outdoor Atmospheric Simulation Chamber of Orleans-France (HELIOS)

    NASA Astrophysics Data System (ADS)

    Mellouki, A.; Véronique, D.; Grosselin, B.; Peyroux, F.; Benoit, R.; Ren, Y.; Idir, M.

    2016-12-01

Atmospheric simulation chambers are among the most advanced tools for investigating atmospheric processes to derive the physico-chemical parameters required for air quality and climate models. Recently, the ICARE-CNRS at Orléans (France) has set up a new large outdoor simulation chamber, HELIOS. HELIOS is one of the most advanced simulation chambers in Europe. It is one of the largest outdoor chambers and is especially suited to process studies performed under realistic atmospheric conditions. HELIOS is a large hemispherical outdoor simulation chamber (volume of 90 m3) positioned on the top of the ICARE-CNRS building at Orléans (47°50'18.39N; 1°56'40.03E). The chamber is made of FEP film ensuring more than 90% solar light transmission. The chamber is protected against severe meteorological conditions by a moveable "box" which contains a series of Xenon lamps, enabling experiments to be conducted using artificial light. This special design makes HELIOS a unique platform where experiments can be made using both types of irradiation. HELIOS is dedicated mainly to the investigation of chemical processes under different conditions (sunlight, artificial light and dark). The platform allows conducting the same type of experiments under both natural and artificial light irradiation. The available large range of complementary and highly sensitive instruments allows investigating the radical chemistry, gas phase processes and aerosol formation under realistic conditions. The characteristics of HELIOS will be presented as well as the first series of experimental results obtained so far.

  13. Cooling rate effects in sodium silicate glasses: Bridging the gap between molecular dynamics simulations and experiments

    NASA Astrophysics Data System (ADS)

    Li, Xin; Song, Weiying; Yang, Kai; Krishnan, N. M. Anoop; Wang, Bu; Smedskjaer, Morten M.; Mauro, John C.; Sant, Gaurav; Balonis, Magdalena; Bauchy, Mathieu

    2017-08-01

Although molecular dynamics (MD) simulations are commonly used to predict the structure and properties of glasses, they are intrinsically limited to short time scales, necessitating the use of fast cooling rates. It is therefore challenging to compare results from MD simulations to experimental results for glasses cooled on typical laboratory time scales. Based on MD simulations of a sodium silicate glass with varying cooling rate (from 0.01 to 100 K/ps), we show here that thermal history primarily affects the medium-range order structure, while the short-range order is largely unaffected over the range of cooling rates simulated. This results in a decoupling between the enthalpy and volume relaxation functions, where the enthalpy quickly plateaus as the cooling rate decreases, whereas density exhibits a slower relaxation. Finally, we show that, using the proper extrapolation method, the outcomes of MD simulations can be meaningfully compared to experimental values when extrapolated to slower cooling rates.
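The extrapolation idea above is commonly implemented by fitting a glass property against the logarithm of the cooling rate and evaluating the fit at a laboratory rate. A minimal sketch with hypothetical density values (the paper's actual property data and extrapolation form may differ):

```python
import numpy as np

# hypothetical glass density vs. cooling rate (K/ps); illustrative values only
rates_K_per_ps = np.array([0.01, 0.1, 1.0, 10.0, 100.0])
density_g_cm3 = np.array([2.475, 2.465, 2.455, 2.445, 2.435])

# fit property vs. log10(cooling rate), then extrapolate to a laboratory rate
slope, intercept = np.polyfit(np.log10(rates_K_per_ps), density_g_cm3, 1)
lab_rate_K_per_ps = 1e-14  # ~10 K/s expressed in K/ps
density_lab = slope * np.log10(lab_rate_K_per_ps) + intercept
```

A linear fit in log(rate) is the simplest assumption; over twelve extrapolated decades any curvature in the true relaxation behavior would matter, which is why the choice of extrapolation method is emphasized in the abstract.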

  14. LES Modeling with Experimental Validation of a Compound Channel having Converging Floodplain

    NASA Astrophysics Data System (ADS)

    Mohanta, Abinash; Patra, K. C.

    2018-04-01

Computational fluid dynamics (CFD) is often used to predict flow structures in developing areas of a flow field for the determination of the velocity field, pressure, shear stresses, turbulence effects and others. A two-phase, three-dimensional CFD model along with the large eddy simulation (LES) model is used to solve the turbulence equations. This study aims to validate CFD simulations of free-surface (open channel) flow using the volume of fluid method against data observed in the hydraulics laboratory of the National Institute of Technology, Rourkela. The finite volume method with a dynamic sub-grid-scale model was used for a constant aspect ratio and convergence condition. The results show that the secondary flow and centrifugal force influence the flow pattern and show good agreement with experimental data. In this paper over-bank flows have been numerically simulated using LES in order to predict accurate open channel flow behavior. The LES results are shown to accurately predict the flow features, specifically the distribution of secondary circulations both for in-bank and over-bank channels at varying depth and width ratios in symmetrically converging floodplain compound sections.

  15. A quantitative approach to the topology of large-scale structure. [for galactic clustering computation

    NASA Technical Reports Server (NTRS)

    Gott, J. Richard, III; Weinberg, David H.; Melott, Adrian L.

    1987-01-01

A quantitative measure of the topology of large-scale structure, the genus of density contours in a smoothed density distribution, is described and applied. For random phase (Gaussian) density fields, the mean genus per unit volume exhibits a universal dependence on threshold density, with a normalizing factor that can be calculated from the power spectrum. If large-scale structure formed from the gravitational instability of small-amplitude density fluctuations, the topology observed today on suitable scales should follow the topology in the initial conditions. The technique is illustrated by applying it to simulations of galaxy clustering in a flat universe dominated by cold dark matter. The technique is also applied to a volume-limited sample of the CfA redshift survey and to a model in which galaxies reside on the surfaces of polyhedral 'bubbles'. The topology of the evolved mass distribution and 'biased' galaxy distribution in the cold dark matter models closely matches the topology of the density fluctuations in the initial conditions. The topology of the observational sample is consistent with the random phase, cold dark matter model.
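The universal dependence referred to above is the Gaussian random field genus curve, g(ν) ∝ (1 - ν²)exp(-ν²/2), where ν is the threshold density in units of the standard deviation and the amplitude is fixed by the power spectrum. A minimal sketch of this curve (the amplitude here is a placeholder, not the power-spectrum normalization):

```python
import math

def genus_density(nu, amplitude=1.0):
    """Genus per unit volume of a Gaussian random field at threshold nu (sigma units).

    amplitude stands in for the power-spectrum-dependent normalization.
    """
    return amplitude * (1.0 - nu ** 2) * math.exp(-nu ** 2 / 2.0)

# positive genus at the median (sponge-like topology),
# negative in the tails (isolated clusters or voids)
sponge = genus_density(0.0)
clusters = genus_density(2.0)
```

The sign change at |ν| = 1 is the diagnostic: a sponge-like density field has positive genus near the median threshold, while isolated clusters and voids in the tails give negative genus.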

  16. Partial volume correction using cortical surfaces

    NASA Astrophysics Data System (ADS)

    Blaasvær, Kamille R.; Haubro, Camilla D.; Eskildsen, Simon F.; Borghammer, Per; Otzen, Daniel; Ostergaard, Lasse R.

    2010-03-01

Partial volume effect (PVE) in positron emission tomography (PET) leads to inaccurate estimation of regional metabolic activities among neighbouring tissues with different tracer concentration. This may be one of the main limiting factors in the utilization of PET in clinical practice. Partial volume correction (PVC) methods have been widely studied to address this issue. MRI based PVC methods are well-established [1]. Their performance depends on the quality of the co-registration of the MR and PET datasets, on the correctness of the estimated point-spread function (PSF) of the PET scanner and largely on the performance of the segmentation method that divides the brain into brain tissue compartments [1, 2]. In the present study a method for PVC is suggested that utilizes cortical surfaces to obtain detailed anatomical information. The objectives are to improve the performance of PVC, facilitate a study of the relationship between metabolic activity in the cerebral cortex and cortical thicknesses, and to obtain an improved visualization of PET data. The gray matter metabolic activity after performing PVC was recovered by 99.7-99.8% in relation to the true activity when testing on simple simulated data with different PSFs, and by 97.9-100% when testing on simulated brain PET data at different cortical thicknesses. When studying the relationship between metabolic activities and anatomical structures it was shown on simulated brain PET data that it is important to correct for PVE in order to get the true relationship.
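A common MRI-based PVC scheme, consistent with the description above, blurs a tissue mask with the scanner PSF to obtain a recovery-coefficient map and divides it out of the observed signal. A 1-D sketch with hypothetical activities (uniform background 20, a cortex strip at 100) and an assumed Gaussian PSF; real pipelines work in 3-D with measured PSFs:

```python
import numpy as np

def gaussian_psf(size, fwhm):
    """Normalized 1-D Gaussian point-spread function."""
    sigma = fwhm / 2.355
    x = np.arange(size) - size // 2
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

# hypothetical 1-D profile: background activity 20, a 20-voxel "cortex" strip at 100
true = np.full(200, 20.0)
true[90:110] = 100.0
psf = gaussian_psf(31, fwhm=8.0)
observed = np.convolve(true, psf, mode="same")  # partial volume effect

# region-based correction: divide out the PSF-blurred region mask
mask = (true == 100.0).astype(float)
rc = np.convolve(mask, psf, mode="same")  # recovery-coefficient map
corrected = (observed - 20.0) / np.clip(rc, 1e-6, None) + 20.0
```

The correction is only meaningful inside the region where the recovery coefficient is appreciable; outside it the clipped division has no physical interpretation.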

  17. Role of ion hydration for the differential capacitance of an electric double layer.

    PubMed

    Caetano, Daniel L Z; Bossa, Guilherme V; de Oliveira, Vinicius M; Brown, Matthew A; de Carvalho, Sidney J; May, Sylvio

    2016-10-12

    The influence of soft, hydration-mediated ion-ion and ion-surface interactions on the differential capacitance of an electric double layer is investigated using Monte Carlo simulations and compared to various mean-field models. We focus on a planar electrode surface at physiological concentration of monovalent ions in a uniform dielectric background. Hydration-mediated interactions are modeled on the basis of Yukawa potentials that add to the Coulomb and excluded volume interactions between ions. We present a mean-field model that includes hydration-mediated anion-anion, anion-cation, and cation-cation interactions of arbitrary strengths. In addition, finite ion sizes are accounted for through excluded volume interactions, described either on the basis of the Carnahan-Starling equation of state or using a lattice gas model. Both our Monte Carlo simulations and mean-field approaches predict a characteristic double-peak (the so-called camel shape) of the differential capacitance; its decrease reflects the packing of the counterions near the electrode surface. The presence of hydration-mediated ion-surface repulsion causes a thin charge-depleted region close to the surface, which is reminiscent of a Stern layer. We analyze the interplay between excluded volume and hydration-mediated interactions on the differential capacitance and demonstrate that for small surface charge density our mean-field model based on the Carnahan-Starling equation is able to capture the Monte Carlo simulation results. In contrast, for large surface charge density the mean-field approach based on the lattice gas model is preferable.

  18. A web portal for hydrodynamical, cosmological simulations

    NASA Astrophysics Data System (ADS)

    Ragagnin, A.; Dolag, K.; Biffi, V.; Cadolle Bel, M.; Hammer, N. J.; Krukau, A.; Petkova, M.; Steinborn, D.

    2017-07-01

This article describes a data centre hosting a web portal for accessing and sharing the output of large, cosmological, hydro-dynamical simulations with a broad scientific community. It also allows users to receive related scientific data products by directly processing the raw simulation data on a remote computing cluster. The data centre has a multi-layer structure: a web portal, a job control layer, a computing cluster and an HPC storage system. The outer layer enables users to choose an object from the simulations. Objects can be selected by visually inspecting 2D maps of the simulation data, by performing complex compound queries, or graphically by plotting arbitrary combinations of properties. The user can then run analysis tools on the chosen object, which process the raw simulation data directly. The job control layer is responsible for handling and performing the analysis jobs, which are executed on a computing cluster. The innermost layer is formed by an HPC storage system which hosts the large, raw simulation data. The following services are available to users: (I) CLUSTERINSPECT visualizes properties of member galaxies of a selected galaxy cluster; (II) SIMCUT returns the raw data of a sub-volume around a selected object from a simulation, containing all the original, hydro-dynamical quantities; (III) SMAC creates idealized 2D maps of various physical quantities and observables of a selected object; (IV) PHOX generates virtual X-ray observations with specifications of various current and upcoming instruments.

  19. Technical Note: A minimally invasive experimental system for pCO2 manipulation in plankton cultures using passive gas exchange (atmospheric carbon control simulator)

    NASA Astrophysics Data System (ADS)

    Love, Brooke A.; Olson, M. Brady; Wuori, Tristen

    2017-05-01

    As research into the biotic effects of ocean acidification has increased, the methods for simulating these environmental changes in the laboratory have multiplied. Here we describe the atmospheric carbon control simulator (ACCS) for the maintenance of plankton under controlled pCO2 conditions, designed for species sensitive to the physical disturbance introduced by the bubbling of cultures and for studies involving trophic interaction. The system consists of gas mixing and equilibration components coupled with large-volume atmospheric simulation chambers. These chambers allow gas exchange to counteract the changes in carbonate chemistry induced by the metabolic activity of the organisms. The system is relatively low cost, very flexible, and when used in conjunction with semi-continuous culture methods, it increases the density of organisms kept under realistic conditions, increases the allowable time interval between dilutions, and/or decreases the metabolically driven change in carbonate chemistry during these intervals. It accommodates a large number of culture vessels, which facilitate multi-trophic level studies and allow the tracking of variable responses within and across plankton populations to ocean acidification. It also includes components that increase the reliability of gas mixing systems using mass flow controllers.
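The gas-mixing stage of such a system typically blends a CO2 span gas into CO2-free air with mass flow controllers, and the required span flow follows from a simple mass balance. A minimal sketch (the function name and flow values are illustrative assumptions, not the ACCS design):

```python
def span_flow(total_flow, target_ppm, span_ppm, diluent_ppm=0.0):
    """Span-gas flow so that blending with diluent hits target_ppm (mass balance)."""
    return total_flow * (target_ppm - diluent_ppm) / (span_ppm - diluent_ppm)

# e.g. 10 L/min total, blending a 10,000 ppm CO2 span gas into CO2-free air
f_span = span_flow(10.0, target_ppm=400.0, span_ppm=10000.0)
f_air = 10.0 - f_span
blend_ppm = (f_span * 10000.0 + f_air * 0.0) / 10.0  # check the mass balance
```

The same balance applied in reverse tells how far metabolic CO2 drawdown in a sealed vessel would drift before passive exchange with the chamber headspace restores the target level.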

  20. Improvement of mathematical models for simulation of vehicle handling : volume 7 : technical manual for the general simulation

    DOT National Transportation Integrated Search

    1980-03-01

    This volume is the technical manual for the general simulation. Mathematical modelling of the vehicle and of the human driver is presented in detail, as are differences between the APL simulation and the current one. Information on model validation a...

  1. Numerical Experiments on Advective Transport in Large Three-Dimensional Discrete Fracture Networks

    NASA Astrophysics Data System (ADS)

    Makedonska, N.; Painter, S. L.; Karra, S.; Gable, C. W.

    2013-12-01

Modeling of flow and solute transport in discrete fracture networks is an important approach for understanding the migration of contaminants in impermeable hard rocks such as granite, where fractures provide dominant flow and transport pathways. The discrete fracture network (DFN) model attempts to mimic discrete pathways for fluid flow through a fractured low-permeability rock mass, and may be combined with particle tracking simulations to address solute transport. However, experience has shown that it is challenging to obtain accurate transport results in three-dimensional DFNs because of the high computational burden and difficulty in constructing a high-quality unstructured computational mesh on simulated fractures. An integrated DFN meshing [1], flow, and particle tracking [2] simulation capability that enables accurate flow and particle tracking simulation on large DFNs has recently been developed. The new capability has been used in numerical experiments on advective transport in large DFNs with tens of thousands of fractures and millions of computational cells. The modeling procedure starts from the fracture network generation using a stochastic model derived from site data. A high-quality computational mesh is then generated [1]. Flow is then solved using the highly parallel PFLOTRAN [3] code. PFLOTRAN uses the finite volume approach, which is locally mass conserving and thus eliminates mass balance problems during particle tracking. The flow solver provides the scalar fluxes on each control volume face. From the obtained fluxes the Darcy velocity is reconstructed for each node in the network [4]. Velocities can then be continuously interpolated to any point in the domain of interest, thus enabling random walk particle tracking. In order to describe the flow field at fracture intersections, the control volume cells on intersections are split into four planar polygons, where each polygon corresponds to a piece of a fracture near the intersection line.
Thus, computational nodes lying on fracture intersections have four associated velocities, one on each side of the intersection in each fracture plane [2]. This information is used to route particles arriving at the fracture intersection to the appropriate downstream fracture segment. Verified for small DFNs, the new simulation capability allows accurate particle tracking on more realistic representations of fractured rock sites. In the current work we focus on travel time statistics and spatial dispersion and show numerical results in DFNs of different sizes, fracture densities, and transmissivity distributions. [1] Hyman J.D., Gable C.W., Painter S.L., Automated meshing of stochastically generated discrete fracture networks, Abstract H33G-1403, 2011 AGU, San Francisco, CA, 5-9 Dec. [2] N. Makedonska, S. L. Painter, T.-L. Hsieh, Q.M. Bui, and C. W. Gable., Development and verification of a new particle tracking capability for modeling radionuclide transport in discrete fracture networks, Abstract, 2013 IHLRWM, Albuquerque, NM, Apr. 28 - May 3. [3] Lichtner, P.C., Hammond, G.E., Bisht, G., Karra, S., Mills, R.T., and Kumar, J. (2013) PFLOTRAN User's Manual: A Massively Parallel Reactive Flow Code. [4] Painter S.L., Gable C.W., Kelkar S., Pathline tracing on fully unstructured control-volume grids, Computational Geosciences, 16 (4), 2012, 1125-1134.

  2. How predictable is the timing of a summer ice-free Arctic?

    NASA Astrophysics Data System (ADS)

    Jahn, Alexandra; Kay, Jennifer E.; Holland, Marika M.; Hall, David M.

    2016-09-01

Climate model simulations give a large range, of over 100 years, for predictions of when the Arctic could first become ice free in the summer, and many studies have attempted to narrow this uncertainty range. However, given the chaotic nature of the climate system, what amount of spread in the prediction of an ice-free summer Arctic is inevitable? Based on results from large ensemble simulations with the Community Earth System Model, we show that internal variability alone leads to a prediction uncertainty of about two decades, while scenario uncertainty between the strong (Representative Concentration Pathway (RCP) 8.5) and medium (RCP4.5) forcing scenarios adds at least another 5 years. Common metrics of the past and present mean sea ice state (such as ice extent, volume, and thickness) as well as global mean temperatures do not allow a reduction of the prediction uncertainty from internal variability.
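The prediction-uncertainty estimate above amounts to finding, for each ensemble member, the first year the September extent drops below the conventional 1 million km² "ice-free" threshold and taking the spread across members. A minimal sketch with a synthetic three-member ensemble (linear trends with offsets stand in for internal variability; no real model output is used):

```python
import numpy as np

def first_ice_free_year(years, extent_million_km2, threshold=1.0):
    """First year extent drops below the 1 million km^2 'ice-free' convention."""
    below = np.nonzero(extent_million_km2 < threshold)[0]
    return int(years[below[0]]) if below.size else None

# toy 3-member ensemble: identical forced trend, member offsets mimic variability
years = np.arange(2020, 2070)
members = [7.0 - 0.16 * (years - 2020) - offset for offset in (0.0, 0.5, 1.0)]
ice_free_years = [first_ice_free_year(years, m) for m in members]
spread = max(ice_free_years) - min(ice_free_years)
```

With realistic interannual noise the first-crossing year becomes much more scattered, which is how a two-decade irreducible spread arises from internal variability alone.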

  3. GCM Simulation of the Large-scale North American Monsoon Including Water Vapor Tracer Diagnostics

    NASA Technical Reports Server (NTRS)

    Bosilovich, Michael G.; Walker, Gregory; Schubert, Siegfried D.; Sud, Yogesh; Atlas, Robert M. (Technical Monitor)

    2001-01-01

The geographic sources of water for the large-scale North American monsoon in a GCM are diagnosed using passive constituent tracers of regional water sources (Water Vapor Tracers, WVT). The NASA Data Assimilation Office Finite Volume (FV) GCM was used to produce a 10-year simulation (1984 through 1993) including observed sea surface temperature. Regional and global WVT sources were defined to delineate the surface origin of water for precipitation in and around the North American Monsoon. The evolution of the mean annual cycle and the interannual variations of the monsoonal circulation will be discussed. Of special concern are the relative contributions of the local source (precipitation recycling) and remote sources of water vapor to the annual cycle and the interannual variation of warm season precipitation. The relationships between soil water, surface evaporation, precipitation and precipitation recycling will be evaluated.

  4. GCM Simulation of the Large-Scale North American Monsoon Including Water Vapor Tracer Diagnostics

    NASA Technical Reports Server (NTRS)

    Bosilovich, Michael G.; Walker, Gregory; Schubert, Siegfried D.; Sud, Yogesh; Atlas, Robert M. (Technical Monitor)

    2002-01-01

    The geographic sources of water for the large scale North American monsoon in a GCM (General Circulation Model) are diagnosed using passive constituent tracers of regional water sources (Water Vapor Tracers, WVT). The NASA Data Assimilation Office Finite Volume (FV) GCM was used to produce a 10-year simulation (1984 through 1993) including observed sea surface temperature. Regional and global WVT sources were defined to delineate the surface origin of water for precipitation in and around the North American Monsoon. The evolution of the mean annual cycle and the interannual variations of the monsoonal circulation will be discussed. Of special concern are the relative contributions of the local source (precipitation recycling) and remote sources of water vapor to the annual cycle and the interannual variation of monsoonal precipitation. The relationships between soil water, surface evaporation, precipitation and precipitation recycling will be evaluated.

  5. Diffusion and interactions of interstitials in hard-sphere interstitial solid solutions

    NASA Astrophysics Data System (ADS)

    van der Meer, Berend; Lathouwers, Emma; Smallenburg, Frank; Filion, Laura

    2017-12-01

    Using computer simulations, we study the dynamics and interactions of interstitial particles in hard-sphere interstitial solid solutions. We calculate the free-energy barriers associated with their diffusion for a range of size ratios and densities. By applying classical transition state theory to these free-energy barriers, we predict the diffusion coefficients, which we find to be in good agreement with diffusion coefficients as measured using event-driven molecular dynamics simulations. These results highlight that transition state theory can capture the interstitial dynamics in the hard-sphere model system. Additionally, we quantify the interactions between the interstitials. We find that, apart from excluded volume interactions, the interstitial-interstitial interactions are almost ideal in our system. Lastly, we show that the interstitial diffusivity can be inferred from the large-particle fluctuations alone, thus providing an empirical relationship between the large-particle fluctuations and the interstitial diffusivity.
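Transition state theory as applied above converts a free-energy barrier ΔF into a hopping rate ν exp(-ΔF/kT) and hence into a diffusion coefficient. A minimal sketch of this estimate for hops of length a on a d-dimensional lattice (the prefactor convention here is an assumption; the paper's exact expression may differ):

```python
import math

def tst_diffusion(attempt_freq, barrier_kT, hop_length, dim=3):
    """Diffusion coefficient from a TST hop rate nu*exp(-dF/kT), hops on a lattice."""
    hop_rate = attempt_freq * math.exp(-barrier_kT)  # barrier in units of kT
    return hop_rate * hop_length ** 2 / (2.0 * dim)

# raising the barrier by 1 kT slows diffusion by a factor of e
d_low = tst_diffusion(attempt_freq=1.0, barrier_kT=5.0, hop_length=1.0)
d_high = tst_diffusion(attempt_freq=1.0, barrier_kT=6.0, hop_length=1.0)
```

The exponential sensitivity to the barrier is why the computed free-energy profiles, rather than direct trajectories, dominate the accuracy of the predicted diffusion coefficients.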

  6. Volumetric Analysis of Alveolar Bone Defect Using Three-Dimensional-Printed Models Versus Computer-Aided Engineering.

    PubMed

    Du, Fengzhou; Li, Binghang; Yin, Ningbei; Cao, Yilin; Wang, Yongqian

    2017-03-01

Knowing the volume of a graft is essential in repairing alveolar bone defects. This study investigates two advanced preoperative volume measurement methods: three-dimensional (3D) printing and computer-aided engineering (CAE). Ten unilateral alveolar cleft patients were enrolled in this study. Their computed tomographic data were sent to 3D printing and CAE software. A simulated graft was used on the 3D-printed model, and the graft volume was measured by water displacement. The volume calculated by CAE software used a mirror-reverse technique. The authors compared the actual volumes of the simulated grafts with the CAE software-derived volumes. The average volume of the simulated bone grafts by 3D-printed models was 1.52 mL, higher than the mean volume of 1.47 mL calculated by CAE software. The difference between the 2 volumes ranged from -0.18 to 0.42 mL. The paired Student t test showed no statistically significant difference between the volumes derived from the 2 methods. This study demonstrated that the mirror-reverse technique by CAE software is as accurate as the simulated operation on 3D-printed models in unilateral alveolar cleft patients. These findings further validate the use of 3D printing and CAE techniques in alveolar defect repair.
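The statistical comparison above is a paired Student t test on the per-patient volume pairs; the statistic is the mean paired difference over its standard error. A minimal sketch (the volume lists below are hypothetical, not the study's data):

```python
import numpy as np

def paired_t(a, b):
    """Paired Student t statistic: mean difference over its standard error."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    return d.mean() / (d.std(ddof=1) / np.sqrt(d.size))

# hypothetical paired graft volumes (mL): 3D-printed model vs. CAE estimate
printed = [1.60, 1.38, 1.75, 1.42, 1.55, 1.30, 1.68, 1.47, 1.59, 1.46]
cae = [1.52, 1.40, 1.66, 1.50, 1.49, 1.33, 1.60, 1.45, 1.55, 1.58]
t_stat = paired_t(printed, cae)
```

With n = 10 pairs the statistic is compared against a t distribution with 9 degrees of freedom; a small |t| is what "no statistically significant difference" reports.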

  7. Shock-driven fluid-structure interaction for civil design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, Stephen L; Deiterding, Ralf

The multiphysics fluid-structure interaction simulation of shock-loaded structures requires the dynamic coupling of a shock-capturing flow solver to a solid mechanics solver for large deformations. The Virtual Test Facility combines a Cartesian embedded boundary approach with dynamic mesh adaptation in a generic software framework of flow solvers using hydrodynamic finite volume upwind schemes that are coupled to various explicit finite element solid dynamics solvers (Deiterding et al., 2006). This paper gives a brief overview of the computational approach and presents first simulations that utilize the general purpose solid dynamics code DYNA3D for complex 3D structures of interest in civil engineering. Results from simulations of a reinforced column, highway bridge, multistory building, and nuclear reactor building are presented.

  8. Application of Discrete Fracture Modeling and Upscaling Techniques to Complex Fractured Reservoirs

    NASA Astrophysics Data System (ADS)

    Karimi-Fard, M.; Lapene, A.; Pauget, L.

    2012-12-01

    During the last decade, an important effort has been made to improve data acquisition (seismic and borehole imaging) and workflow for reservoir characterization which has greatly benefited the description of fractured reservoirs. However, the geological models resulting from the interpretations need to be validated or calibrated against dynamic data. Flow modeling in fractured reservoirs remains a challenge due to the difficulty of representing mass transfers at different heterogeneity scales. The majority of the existing approaches are based on dual continuum representation where the fracture network and the matrix are represented separately and their interactions are modeled using transfer functions. These models are usually based on idealized representation of the fracture distribution which makes the integration of real data difficult. In recent years, due to increases in computer power, discrete fracture modeling techniques (DFM) are becoming popular. In these techniques the fractures are represented explicitly allowing the direct use of data. In this work we consider the DFM technique developed by Karimi-Fard et al. [1] which is based on an unstructured finite-volume discretization. The mass flux between two adjacent control-volumes is evaluated using an optimized two-point flux approximation. The result of the discretization is a list of control-volumes with the associated pore-volumes and positions, and a list of connections with the associated transmissibilities. Fracture intersections are simplified using a connectivity transformation which contributes considerably to the efficiency of the methodology. In addition, the method is designed for general purpose simulators and any connectivity based simulator can be used for flow simulations. The DFM technique is either used standalone or as part of an upscaling technique. 
The upscaling techniques are required for large reservoirs where the explicit representation of all fractures and faults is not possible. Karimi-Fard et al. [2] have developed an upscaling technique based on DFM representation. The original version of this technique was developed to construct a dual-porosity model from a discrete fracture description. This technique has been extended and generalized so it can be applied to a wide range of problems, from reservoirs with few or no fractures to highly fractured reservoirs. In this work, we present the application of these techniques to two three-dimensional fractured reservoirs constructed using real data. The first model contains more than 600 medium and large scale fractures. The fractures are not always connected, which requires a general modeling technique. The reservoir has 50 wells (injectors and producers) and water flooding simulations are performed. The second test case is a larger reservoir with sparsely distributed faults. Single-phase simulations are performed with 5 producing wells. [1] Karimi-Fard M., Durlofsky L.J., and Aziz K. 2004. An efficient discrete-fracture model applicable for general-purpose reservoir simulators. SPE Journal, 9(2): 227-236. [2] Karimi-Fard M., Gong B., and Durlofsky L.J. 2006. Generation of coarse-scale continuum flow models from detailed fracture characterizations. Water Resources Research, 42(10): W10423.
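The two-point flux approximation referenced above reduces, in its basic form, to a harmonic combination of the two half-transmissibilities k·A/d on either side of a control-volume face, so the discretization output is exactly the list of pore volumes and face transmissibilities described. A minimal sketch (variable names are illustrative):

```python
def half_trans(perm, area, dist):
    """Half-transmissibility k*A/d from a cell centre to the shared face."""
    return perm * area / dist

def face_transmissibility(t1, t2):
    """Two-point flux: harmonic combination of the two half-transmissibilities."""
    return t1 * t2 / (t1 + t2)

# two equal cells in series reduce to k*A over the centre-to-centre distance
t_face = face_transmissibility(half_trans(1.0, 1.0, 0.5), half_trans(1.0, 1.0, 0.5))
flux = t_face * (2.0 - 1.0)  # q = T * (p1 - p2)
```

Because the coarse model is again just connections and transmissibilities, any connectivity-based simulator can consume either the fine DFM or its upscaled counterpart unchanged.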

  9. Dry Volume Fracturing Simulation of Shale Gas Reservoir

    NASA Astrophysics Data System (ADS)

    Xu, Guixi; Wang, Shuzhong; Luo, Xiangrong; Jing, Zefeng

    2017-11-01

    Application of CO2 dry fracturing technology to shale gas reservoir development in China has advantages of no water consumption, little reservoir damage and promoting CH4 desorption. This paper uses Meyer simulation to study complex fracture network extension and the distribution characteristics of shale gas reservoirs in the CO2 dry volume fracturing process. The simulation results prove the validity of the modified CO2 dry fracturing fluid used in shale volume fracturing and provides a theoretical basis for the following study on interval optimization of the shale reservoir dry volume fracturing.

  10. Simulation of shear thickening in attractive colloidal suspensions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pednekar, Sidhant; Chun, Jaehun; Morris, Jeffrey F.

    2017-01-01

The influence of attractive forces between particles under conditions of large particle volume fraction is addressed using numerical simulations which account for hydrodynamic, Brownian, conservative and frictional contact forces. The focus is on conditions for which a significant increase in the apparent viscosity at small shear rates, and possibly the development of a yield stress, is observed. The high shear rate behavior of Brownian suspensions has been shown in recent work [R. Mari, R. Seto, J. F. Morris & M. M. Denn, PNAS, 2015, 112, 15326-15330] to be captured by the inclusion of pairwise forces of two forms, one a contact frictional interaction and the second a repulsive force common in stabilized colloidal dispersions. Under such conditions, shear thickening is observed when the shear stress is comparable to the sum of the Brownian stress and a characteristic stress based on the combination of the interparticle force with kT, the thermal energy. At sufficiently large volume fraction, this shear thickening can be very abrupt. Here it is shown that when attractive interactions are present with the noted forces, the shear thickening is obscured, as the viscosity shear thins with increasing shear rate, eventually descending from an infinite value (yield-stress conditions) to a plateau at large stress; this plateau is at the same level as the large-shear-rate viscosity found in the shear-thickened state without attractive forces. It is shown that this behavior is consistent with prior observations in shear thickening suspensions modified to be attractive through depletion flocculation [V. Gopalakrishnan & C. F. Zukoski, J. Rheol., 2004, 48, 1321-1344]. The contributions of the contact, attractive, and hydrodynamic forces to the bulk stress are presented, as are the contact networks found at different attractive strengths.

  11. Dynamic Antarctic ice sheet during the early to mid-Miocene

    PubMed Central

    DeConto, Robert M.; Pollard, David; Levy, Richard H.

    2016-01-01

    Geological data indicate that there were major variations in Antarctic ice sheet volume and extent during the early to mid-Miocene. Simulating such large-scale changes is problematic because of a strong hysteresis effect, which results in stability once the ice sheets have reached continental size. A relatively narrow range of atmospheric CO2 concentrations indicated by proxy records exacerbates this problem. Here, we are able to simulate large-scale variability of the early to mid-Miocene Antarctic ice sheet because of three developments in our modeling approach. (i) We use a climate–ice sheet coupling method utilizing a high-resolution atmospheric component to account for ice sheet–climate feedbacks. (ii) The ice sheet model includes recently proposed mechanisms for retreat into deep subglacial basins caused by ice-cliff failure and ice-shelf hydrofracture. (iii) We account for changes in the oxygen isotopic composition of the ice sheet by using isotope-enabled climate and ice sheet models. We compare our modeling results with ice-proximal records emerging from a sedimentological drill core from the Ross Sea (Andrill-2A) that is presented in a companion article. The variability in Antarctic ice volume that we simulate is equivalent to a seawater oxygen isotope signal of 0.52–0.66‰, or a sea level equivalent change of 30–36 m, for a range of atmospheric CO2 between 280 and 500 ppm and a changing astronomical configuration. This result represents a substantial advance in resolving the long-standing model data conflict of Miocene Antarctic ice sheet and sea level variability. PMID:26903645
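The ice-volume-to-sea-level conversion behind figures like the quoted 30-36 m is simple bookkeeping: melt the grounded ice, convert to a water volume via the density ratio, and spread it over the ocean surface. A rough sketch, not the authors' isotope-enabled model, assuming illustrative densities and a modern ocean area:

```python
RHO_ICE = 917.0        # kg/m^3, typical glacial ice (assumed value)
RHO_SEAWATER = 1028.0  # kg/m^3 (assumed value)
OCEAN_AREA = 3.61e14   # m^2, approximate modern ocean surface area

def sea_level_equivalent_m(ice_volume_m3):
    """Sea-level rise (m) if this grounded ice volume melted into the ocean.
    Ignores ocean-area change, isostasy, and gravitational effects."""
    water_volume = ice_volume_m3 * RHO_ICE / RHO_SEAWATER
    return water_volume / OCEAN_AREA

# An ice-volume change of ~1.2e16 m^3 corresponds to roughly 30 m of
# sea-level equivalent, the low end of the variability simulated here.
print(round(sea_level_equivalent_m(1.2e16), 1))
```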

  12. Local Multi-Channel RF Surface Coil versus Body RF Coil Transmission for Cardiac Magnetic Resonance at 3 Tesla: Which Configuration Is Winning the Game?

    PubMed

    Weinberger, Oliver; Winter, Lukas; Dieringer, Matthias A; Els, Antje; Oezerdem, Celal; Rieger, Jan; Kuehne, Andre; Cassara, Antonino M; Pfeiffer, Harald; Wetterling, Friedrich; Niendorf, Thoralf

    2016-01-01

The purpose of this study was to demonstrate the feasibility and efficiency of cardiac MR at 3 Tesla using local four-channel RF coil transmission and to benchmark it against large-volume body RF coil excitation. Electromagnetic field simulations are conducted to detail RF power deposition, transmission field uniformity and efficiency for local and body RF coil transmission. For both excitation regimes, transmission field maps are acquired in a human torso phantom. For each transmission regime, flip angle distributions and blood-myocardium contrast are examined in a volunteer study of 12 subjects. The feasibility of the local transceiver RF coil array for cardiac chamber quantification at 3 Tesla is demonstrated. Our simulations and experiments demonstrate that cardiac MR at 3 Tesla using four-channel surface RF coil transmission is competitive with the current clinical CMR practice of large-volume body RF coil transmission. The efficiency advantage of the 4TX/4RX setup facilitates shorter repetition times governed by local SAR limits versus body RF coil transmission at the whole-body SAR limit. No statistically significant difference was found for cardiac chamber quantification derived with body RF coil versus four-channel surface RF coil transmission. Our simulations also show that the body RF coil exceeds local SAR limits by a factor of ~2 when driven at the maximum applicable input power to reach the whole-body SAR limit. Pursuing local surface RF coil arrays for transmission in cardiac MR is a conceptually appealing alternative to body RF coil transmission, especially for patients with implants.

  13. YF-12 cooperative airframe/propulsion control system program, volume 1

    NASA Technical Reports Server (NTRS)

    Anderson, D. L.; Connolly, G. F.; Mauro, F. M.; Reukauf, P. J.; Marks, R. (Editor)

    1980-01-01

    Several YF-12C airplane analog control systems were converted to a digital system. Included were the air data computer, autopilot, inlet control system, and autothrottle systems. This conversion was performed to allow assessment of digital technology applications to supersonic cruise aircraft. The digital system was composed of a digital computer and specialized interface unit. A large scale mathematical simulation of the airplane was used for integration testing and software checkout.

  14. Exemplar for simulation challenges: Large-deformation micromechanics of Sylgard 184/glass microballoon syntactic foams.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Judith Alice; Long, Kevin Nicholas

    2018-05-01

Sylgard® 184/Glass Microballoon (GMB) potting material is currently used in many NW systems. Analysts need a macroscale constitutive model that can predict material behavior under complex loading and damage evolution. To address this need, ongoing modeling and experimental efforts have focused on the study of damage evolution in these materials. Micromechanical finite element simulations that resolve individual GMB and matrix components promote discovery and better understanding of the material behavior. With these simulations, we can study the role of the GMB volume fraction, time-dependent damage, behavior under confined vs. unconfined compression, and the effects of partial damage. These simulations are challenging and push the boundaries of capability even with the high performance computing tools available at Sandia. We summarize the major challenges and the current state of this modeling effort, as an exemplar of micromechanical modeling needs that can motivate advances in future computing efforts.

  15. Pairwise Interaction Extended Point-Particle (PIEP) model for multiphase jets and sedimenting particles

    NASA Astrophysics Data System (ADS)

    Liu, Kai; Balachandar, S.

    2017-11-01

We perform a series of Euler-Lagrange direct numerical simulations (DNS) for multiphase jets and sedimenting particles. The forces the flow exerts on the particles in these two-way coupled simulations are computed using the Basset-Boussinesq-Oseen (BBO) equations. These forces do not explicitly account for particle-particle interactions, even though such pairwise interactions, induced by the perturbations from neighboring particles, may be important, especially when the particle volume fraction is high. Such effects have been largely unaddressed in the literature. Here, we implement the Pairwise Interaction Extended Point-Particle (PIEP) model to simulate the effect of neighboring particle pairs. A simple collision model is also applied to avoid unphysical overlapping of solid spherical particles. The simulation results indicate that the PIEP model produces richer, more detailed motion of the dispersed phase (droplets and particles). Office of Naval Research (ONR) Multidisciplinary University Research Initiative (MURI) project N00014-16-1-2617.

  16. Progress on the Development of the hPIC Particle-in-Cell Code

    NASA Astrophysics Data System (ADS)

    Dart, Cameron; Hayes, Alyssa; Khaziev, Rinat; Marcinko, Stephen; Curreli, Davide; Laboratory of Computational Plasma Physics Team

    2017-10-01

Advancements were made in the development of the kinetic-kinetic electrostatic Particle-in-Cell code, hPIC, designed for large-scale simulation of the Plasma-Material Interface. hPIC achieved a weak scaling efficiency of 87% using the Algebraic Multigrid Solver BoomerAMG from the PETSc library on more than 64,000 cores of the Blue Waters supercomputer at the University of Illinois at Urbana-Champaign. The code successfully simulates two-stream instability and a volume of plasma over several square centimeters of surface extending out to the presheath in kinetic-kinetic mode. Results from a parametric study of the plasma sheath in strongly magnetized conditions will be presented, as well as a detailed analysis of the plasma sheath structure at grazing magnetic angles. The distribution function and its moments will be reported for plasma species in the simulation domain and at the material surface for plasma sheath simulations.
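The 87% weak scaling figure quoted above is a simple ratio: under weak scaling the problem size grows with the core count, so the ideal runtime is constant, and efficiency is the reference runtime divided by the measured runtime. A sketch with hypothetical timings (the actual hPIC timings are not given in the abstract):

```python
def weak_scaling_efficiency(t_ref, t_n):
    """Weak scaling: problem size grows with core count, so the ideal
    runtime is flat; efficiency is reference time over measured time."""
    return t_ref / t_n

# Hypothetical timings: 100 s at the reference core count, 115 s at scale.
eff = weak_scaling_efficiency(100.0, 115.0)
print(f"{eff:.0%}")  # 87%
```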

  17. Numerical simulation for the air entrainment of aerated flow with an improved multiphase SPH model

    NASA Astrophysics Data System (ADS)

    Wan, Hang; Li, Ran; Pu, Xunchi; Zhang, Hongwei; Feng, Jingjie

    2017-11-01

Aerated flow is a complex hydraulic phenomenon that exists widely in the field of environmental hydraulics. It is generally characterised by large deformation and violent fragmentation of the free surface. Compared to Euler methods (the volume of fluid (VOF) method or the rigid-lid hypothesis method), the existing single-phase Smoothed Particle Hydrodynamics (SPH) method has performed well in solving particle motion. A lack of research on interphase interaction and air concentration, however, has limited the application of the SPH model. In our study, an improved multiphase SPH model is presented to simulate aerated flows. A drag force is included in the momentum equation to ensure the accuracy of the air-particle slip velocity. Furthermore, a calculation method for air concentration is developed to analyse the air entrainment characteristics. Two case studies were used to simulate the hydraulic and air entrainment characteristics, and the simulation results agree well with the experimental data.
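As an illustration of how a drag force enters a particle momentum equation, the sketch below uses a Stokes (low Reynolds number) drag closure acting on the slip velocity. This particular closure and all numerical values are assumptions for the example; the abstract does not state which drag law the improved SPH model uses.

```python
def stokes_drag_accel(v_fluid, v_particle, d_p, rho_p, mu):
    """Acceleration from Stokes drag on a small particle (low Re):
    a = (v_fluid - v_particle) / tau, with response time
    tau = rho_p * d_p**2 / (18 * mu). Illustrative closure only."""
    tau = rho_p * d_p ** 2 / (18.0 * mu)  # particle response time (s)
    return (v_fluid - v_particle) / tau

# Hypothetical air bubble in water: 1 mm diameter, slip velocity 0.8 m/s.
a = stokes_drag_accel(v_fluid=1.0, v_particle=0.2,
                      d_p=1e-3, rho_p=1.2, mu=1.0e-3)
print(a)  # drag accelerates the particle toward the fluid velocity
```

The drag term drives the particle velocity toward the local fluid velocity, which is why including it controls the accuracy of the simulated slip velocity.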

  18. Evaluation of the scale dependent dynamic SGS model in the open source code caffa3d.MBRi in wall-bounded flows

    NASA Astrophysics Data System (ADS)

    Draper, Martin; Usera, Gabriel

    2015-04-01

The Scale Dependent Dynamic Model (SDDM) has been widely validated in large-eddy simulations using pseudo-spectral codes [1][2][3]. The scale dependency, in particular its power-law form, has also been demonstrated in a priori studies [4][5]. To the authors' knowledge there have been only a few attempts to use the SDDM in finite difference (FD) and finite volume (FV) codes [6][7], finding some improvements with the dynamic procedures (scale-independent or scale-dependent approach), but not showing the behavior of the scale-dependence parameter when using the SDDM. The aim of the present paper is to evaluate the SDDM in the open source code caffa3d.MBRi, an updated version of the code presented in [8]. caffa3d.MBRi is a FV code, second-order accurate, parallelized with MPI, in which the domain is divided into unstructured blocks of structured grids. To accomplish this, two cases are considered: flow between flat plates and flow over a rough surface with the presence of a model wind turbine, taking for the latter case the experimental data presented in [9]. In both cases the standard Smagorinsky Model (SM), the Scale Independent Dynamic Model (SIDM) and the SDDM are tested. As in [6][7], slight improvements are obtained with the SDDM. Nevertheless, the behavior of the scale-dependence parameter supports the generalization of the dynamic procedure proposed in the SDDM, particularly taking into account that no explicit filter is used (the implicit filter is unknown). [1] F. Porté-Agel, C. Meneveau, M.B. Parlange. "A scale-dependent dynamic model for large-eddy simulation: application to a neutral atmospheric boundary layer". Journal of Fluid Mechanics, 2000, 415, 261-284. [2] E. Bou-Zeid, C. Meneveau, M.B. Parlange. "A scale-dependent Lagrangian dynamic model for large eddy simulation of complex turbulent flows". Physics of Fluids, 2005, 17, 025105. [3] R. Stoll, F. Porté-Agel. "Dynamic subgrid-scale models for momentum and scalar fluxes in large-eddy simulations of neutrally stratified atmospheric boundary layers over heterogeneous terrain". Water Resources Research, 2006, 42, W01409. [4] J. Kleissl, M. Parlange, C. Meneveau. "Field experimental study of dynamic Smagorinsky models in the atmospheric surface layer". Journal of the Atmospheric Sciences, 2004, 61, 2296-2307. [5] E. Bou-Zeid, N. Vercauteren, M.B. Parlange, C. Meneveau. "Scale dependence of subgrid-scale model coefficients: an a priori study". Physics of Fluids, 2008, 20, 115106. [6] G. Kirkil, J. Mirocha, E. Bou-Zeid, F.K. Chow, B. Kosovic. "Implementation and evaluation of dynamic subfilter-scale stress models for large-eddy simulation using WRF". Monthly Weather Review, 2012, 140, 266-284. [7] S. Radhakrishnan, U. Piomelli. "Large-eddy simulation of oscillating boundary layers: model comparison and validation". Journal of Geophysical Research, 2008, 113, C02022. [8] G. Usera, A. Vernet, J.A. Ferré. "A parallel block-structured finite volume method for flows in complex geometry with sliding interfaces". Flow, Turbulence and Combustion, 2008, 81, 471-495. [9] Y.-T. Wu, F. Porté-Agel. "Large-eddy simulation of wind-turbine wakes: evaluation of turbine parametrisations". Boundary-Layer Meteorology, 2011, 138, 345-366.
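For reference, the static Smagorinsky model that these dynamic procedures generalize computes an eddy viscosity from the resolved strain rate as nu_t = (Cs*Delta)^2 * |S|. A minimal sketch with a fixed coefficient; the dynamic models discussed in the paper instead compute Cs from the resolved field, and the scale-dependent variant additionally lets it vary with filter scale:

```python
import numpy as np

def smagorinsky_nu_t(grad_u, delta, cs=0.16):
    """Static Smagorinsky eddy viscosity nu_t = (cs*delta)^2 * |S|,
    with |S| = sqrt(2 S_ij S_ij). grad_u is the 3x3 resolved
    velocity-gradient tensor; cs=0.16 is a commonly quoted value."""
    S = 0.5 * (grad_u + grad_u.T)           # resolved strain-rate tensor
    S_mag = np.sqrt(2.0 * np.sum(S * S))    # characteristic strain rate
    return (cs * delta) ** 2 * S_mag

# Simple shear du/dy = 1 s^-1 on a grid with filter width delta = 0.1 m.
grad_u = np.array([[0.0, 1.0, 0.0],
                   [0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])
nu_t = smagorinsky_nu_t(grad_u, delta=0.1)
print(nu_t)  # (0.16 * 0.1)^2 * 1 = 2.56e-4 m^2/s
```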

  19. Discrete-event simulation of a wide-area health care network.

    PubMed Central

    McDaniel, J G

    1995-01-01

OBJECTIVE: Predict the behavior and estimate the telecommunication cost of a wide-area message store-and-forward network for health care providers that uses the telephone system. DESIGN: A tool with which to perform large-scale discrete-event simulations was developed. Network models for star and mesh topologies were constructed to analyze the differences in performance and telecommunication cost. The distribution of nodes in the network models approximates the distribution of physicians, hospitals, medical labs, and insurers in the Province of Saskatchewan, Canada. Modeling parameters were based on measurements taken from a prototype telephone network and a survey conducted at two medical clinics. Simulation studies were conducted for both topologies. RESULTS: For either topology, the telecommunication cost of a network in Saskatchewan is projected to be less than $100 (Canadian) per month per node. The estimated telecommunication cost of the star topology is approximately half that of the mesh. Simulations predict that a mean end-to-end message delivery time of two hours or less is achievable at this cost. A doubling of the data volume results in an increase of less than 50% in the mean end-to-end message transfer time. CONCLUSION: The simulation models provided an estimate of network performance and telecommunication cost in a specific Canadian province. At the expected operating point, network performance appeared to be relatively insensitive to increases in data volume. Similar results might be anticipated in other rural states and provinces in North America where a telephone-based network is desired. PMID:7583646
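A store-and-forward network of this kind is a natural fit for discrete-event simulation: events sit in a time-ordered queue, and each message advances hop by hop. The toy model below, with a made-up fixed hop delay and a star topology routed through a single hub, only illustrates the event-loop mechanics, not the paper's calibrated models:

```python
import heapq

def simulate_star(submit_times, hop_delay=0.5):
    """Toy discrete-event model of a star store-and-forward network.
    Each message travels node -> hub -> node; each hop takes hop_delay
    hours. Returns the end-to-end delivery time of each message."""
    events = []  # min-heap of (time, msg_id, stage)
    for msg_id, t_submit in enumerate(submit_times):
        heapq.heappush(events, (t_submit, msg_id, "to_hub"))
    delivered = {}
    while events:
        t, msg_id, stage = heapq.heappop(events)
        if stage == "to_hub":
            # Message reaches the hub; schedule the hub-to-destination hop.
            heapq.heappush(events, (t + hop_delay, msg_id, "to_dest"))
        else:
            delivered[msg_id] = t + hop_delay
    return delivered

done = simulate_star([0.0, 0.25, 1.0])
print(done)  # {0: 1.0, 1: 1.25, 2: 2.0}
```

A mesh topology would replace the fixed two-hop route with per-pair routing; the event loop itself is unchanged, which is why one simulation tool can cover both topologies.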

  20. Hydrologic modeling of two glaciated watersheds in Northeast Pennsylvania

    USGS Publications Warehouse

    Srinivasan, M.S.; Hamlett, J.M.; Day, R.L.; Sams, J.I.; Petersen, G.W.

    1998-01-01

A hydrologic modeling study, using the Hydrologic Simulation Program - FORTRAN (HSPF), was conducted in two glaciated watersheds, Purdy Creek and Ariel Creek in northeastern Pennsylvania. Both watersheds have wetlands and poorly drained soils due to low hydraulic conductivity and presence of fragipans. The HSPF model was calibrated in the Purdy Creek watershed and verified in the Ariel Creek watershed for June 1992 to December 1993 period. In Purdy Creek, the total volume of observed streamflow during the entire simulation period was 13.36 x 106 m3 and the simulated streamflow volume was 13.82 x 106 m3 (5 percent difference). For the verification simulation in Ariel Creek, the difference between the total observed and simulated flow volumes was 17 percent. Simulated peak flow discharges were within two hours of the observed for 30 of 46 peak flow events (discharge greater than 0.1 m3/sec) in Purdy Creek and 27 of 53 events in Ariel Creek. For 22 of the 46 events in Purdy Creek and 24 of 53 in Ariel Creek, the differences between the observed and simulated peak discharge rates were less than 30 percent. These 22 events accounted for 63 percent of total volume of streamflow observed during the selected 46 peak flow events in Purdy Creek. In Ariel Creek, these 24 peak flow events accounted for 62 percent of the total flow observed during all peak flow events. Differences in observed and simulated peak flow rates and volumes (on a percent basis) were greater during the snowmelt runoff events and summer periods than for other times.
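The volume comparisons in this study reduce to a signed percent error between simulated and observed totals. A one-line sketch with hypothetical streamflow totals, not the study's data:

```python
def percent_volume_error(observed_m3, simulated_m3):
    """Signed percent error of simulated vs. observed streamflow volume."""
    return 100.0 * (simulated_m3 - observed_m3) / observed_m3

# Hypothetical totals: 10.0e6 m^3 observed, 10.5e6 m^3 simulated.
err = percent_volume_error(10.0e6, 10.5e6)
print(round(err, 1))  # 5.0
```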

  1. Glass Property Data and Models for Estimating High-Level Waste Glass Volume

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vienna, John D.; Fluegel, Alexander; Kim, Dong-Sang

    2009-10-05

This report describes recent efforts to develop glass property models that can be used to help estimate the volume of high-level waste (HLW) glass that will result from vitrification of Hanford tank waste. The compositions of acceptable and processable HLW glasses need to be optimized to minimize the waste-form volume and, hence, to save cost. A database of properties and associated compositions for simulated waste glasses was collected for developing property-composition models. This database, although not comprehensive, represents a large fraction of data on waste-glass compositions and properties that were available at the time of this report. Glass property-composition models were fit to subsets of the database for several key glass properties. These models apply to a significantly broader composition space than those previously published. These models should be considered for interim use in calculating properties of Hanford waste glasses.

  2. Evolution of Local Microstructures: Spatial Instabilities of Coarsening Clusters

    NASA Technical Reports Server (NTRS)

    Frazier, Donald O.

    1999-01-01

This work examines the diffusional growth of discrete phase particles dispersed within a matrix. Engineering materials are microstructurally heterogeneous, and the details of the microstructure determine how well that material performs in a given application. Critical to the development of designing multiphase microstructures with long-term stability is the process of Ostwald ripening. Ripening, or phase coarsening, is a diffusion-limited process which arises in polydisperse multiphase materials. Growth and dissolution occur because fluxes of solute, driven by chemical potential gradients at the interfaces of the dispersed phase material, depend on particle size. The kinetics of these processes are "competitive," dictating that larger particles grow at the expense of smaller ones, overall leading to an increase of the average particle size. The classical treatment of phase coarsening was done by Todes, Lifshitz, and Slyozov (TLS) in the limit of zero volume fraction, V(sub v), of the dispersed phase. Since the publication of the TLS theory there have been numerous investigations, many of which sought to describe the kinetic scaling behavior over a range of volume fractions. Some studies in the literature report that the relative increase in coarsening rate at low (but not zero) volume fractions compared to that predicted by TLS is proportional to V(sub v)(exp 1/2), whereas others suggest V(sub v)(exp 1/3). This issue has been resolved recently by simulation studies at low volume fractions in three dimensions by members of the Rensselaer/MSFC team. Our studies of ripening behavior using large-scale numerical simulations suggest that although there are different circumstances which can lead to either scaling law, the most important length scale at low volume fractions is the diffusional analog of the Debye screening length.
The numerical simulations we employed exploit the use of a recently developed "snapshot" technique, and identifies the nature of the coarsening dynamics at various volume fractions. Preliminary results of numerical and experimental investigations, focused on the growth of finite particle clusters, provide important insight into the nature of the transition between the two scaling regimes. The companion microgravity experiment centers on the growth within finite particle clusters, and follows the temporal dynamics driving microstructural evolution, using holography.

  3. The study on servo-control system in the large aperture telescope

    NASA Astrophysics Data System (ADS)

    Hu, Wei; Zhenchao, Zhang; Daxing, Wang

    2008-08-01

The servo tracking technique is one of the crucial technologies that must be solved in the research and manufacture of large and extremely large astronomical telescopes. Addressing the control characteristics of such telescopes, this paper designs a servo tracking control system for a large astronomical telescope. The system is organized as a master-slave distributed control system: the host computer sends steering instructions and receives the slave computer's operating status, while the slave computer executes the control algorithm and performs real-time control. The servo control uses a direct-drive motor and adopts DSP technology to implement the direct torque control algorithm. This design not only improves control system performance but also greatly reduces the volume and cost of the control system. The design scheme is shown to be reasonable by calculation and simulation, and the system can be applied to large astronomical telescopes.

  4. Computer Modeling to Evaluate the Impact of Technology Changes on Resident Procedural Volume.

    PubMed

    Grenda, Tyler R; Ballard, Tiffany N S; Obi, Andrea T; Pozehl, William; Seagull, F Jacob; Chen, Ryan; Cohn, Amy M; Daskin, Mark S; Reddy, Rishindra M

    2016-12-01

    As resident "index" procedures change in volume due to advances in technology or reliance on simulation, it may be difficult to ensure trainees meet case requirements. Training programs are in need of metrics to determine how many residents their institutional volume can support. As a case study of how such metrics can be applied, we evaluated a case distribution simulation model to examine program-level mediastinoscopy and endobronchial ultrasound (EBUS) volumes needed to train thoracic surgery residents. A computer model was created to simulate case distribution based on annual case volume, number of trainees, and rotation length. Single institutional case volume data (2011-2013) were applied, and 10 000 simulation years were run to predict the likelihood (95% confidence interval) of all residents (4 trainees) achieving board requirements for operative volume during a 2-year program. The mean annual mediastinoscopy volume was 43. In a simulation of pre-2012 board requirements (thoracic pathway, 25; cardiac pathway, 10), there was a 6% probability of all 4 residents meeting requirements. Under post-2012 requirements (thoracic, 15; cardiac, 10), however, the likelihood increased to 88%. When EBUS volume (mean 19 cases per year) was concurrently evaluated in the post-2012 era (thoracic, 10; cardiac, 0), the likelihood of all 4 residents meeting case requirements was only 23%. This model provides a metric to predict the probability of residents meeting case requirements in an era of changing volume by accounting for unpredictable and inequitable case distribution. It could be applied across operations, procedures, or disease diagnoses and may be particularly useful in developing resident curricula and schedules.
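The flavor of such a case-distribution model can be sketched with a simple Monte Carlo experiment: assign each case at random to a resident and estimate the chance that everyone reaches the requirement. The uniform-assignment rule here is a stand-in assumption; the paper's model accounts for rotation schedules and inequitable case distribution.

```python
import random

def prob_all_meet(annual_cases, n_residents, required,
                  years=2, trials=10_000, seed=1):
    """Monte Carlo estimate of the probability that every resident reaches
    `required` cases when each case is assigned uniformly at random.
    Simplified stand-in for the paper's case-distribution model."""
    rng = random.Random(seed)
    success = 0
    for _ in range(trials):
        counts = [0] * n_residents
        for _ in range(annual_cases * years):
            counts[rng.randrange(n_residents)] += 1
        if min(counts) >= required:
            success += 1
    return success / trials

# 43 mediastinoscopies/year, 4 residents, a 15-case requirement over 2 years
# (numbers taken from the abstract; the probability depends on the
# assignment rule, so this will not reproduce the paper's 88% exactly).
p = prob_all_meet(43, 4, 15)
print(p)
```

Even this toy version shows the core point: with a fixed requirement, the probability that all trainees qualify is highly sensitive to total volume and to how evenly cases happen to fall.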

  5. Competitive Adsorption and Ordered Packing of Counterions near Highly Charged Surfaces: From Mean-Field Theory to Monte Carlo Simulations

    PubMed Central

    Wen, Jiayi; Zhou, Shenggao; Xu, Zhenli; Li, Bo

    2013-01-01

    Competitive adsorption of counterions of multiple species to charged surfaces is studied by a size-effect included mean-field theory and Monte Carlo (MC) simulations. The mean-field electrostatic free-energy functional of ionic concentrations, constrained by Poisson’s equation, is numerically minimized by an augmented Lagrangian multiplier method. Unrestricted primitive models and canonical ensemble MC simulations with the Metropolis criterion are used to predict the ionic distributions around a charged surface. It is found that, for a low surface charge density, the adsorption of ions with a higher valence is preferable, agreeing with existing studies. For a highly charged surface, both of the mean-field theory and MC simulations demonstrate that the counterions bind tightly around the charged surface, resulting in a stratification of counterions of different species. The competition between mixed entropy and electrostatic energetics leads to a compromise that the ionic species with a higher valence-to-volume ratio has a larger probability to form the first layer of stratification. In particular, the MC simulations confirm the crucial role of ionic valence-to-volume ratios in the competitive adsorption to charged surfaces that had been previously predicted by the mean-field theory. The charge inversion for ionic systems with salt is predicted by the MC simulations but not by the mean-field theory. This work provides a better understanding of competitive adsorption of counterions to charged surfaces and calls for further studies on the ionic size effect with application to large-scale biomolecular modeling. PMID:22680474

  6. Competitive adsorption and ordered packing of counterions near highly charged surfaces: From mean-field theory to Monte Carlo simulations.

    PubMed

    Wen, Jiayi; Zhou, Shenggao; Xu, Zhenli; Li, Bo

    2012-04-01

    Competitive adsorption of counterions of multiple species to charged surfaces is studied by a size-effect-included mean-field theory and Monte Carlo (MC) simulations. The mean-field electrostatic free-energy functional of ionic concentrations, constrained by Poisson's equation, is numerically minimized by an augmented Lagrangian multiplier method. Unrestricted primitive models and canonical ensemble MC simulations with the Metropolis criterion are used to predict the ionic distributions around a charged surface. It is found that, for a low surface charge density, the adsorption of ions with a higher valence is preferable, agreeing with existing studies. For a highly charged surface, both the mean-field theory and the MC simulations demonstrate that the counterions bind tightly around the charged surface, resulting in a stratification of counterions of different species. The competition between mixed entropy and electrostatic energetics leads to a compromise that the ionic species with a higher valence-to-volume ratio has a larger probability to form the first layer of stratification. In particular, the MC simulations confirm the crucial role of ionic valence-to-volume ratios in the competitive adsorption to charged surfaces that had been previously predicted by the mean-field theory. The charge inversion for ionic systems with salt is predicted by the MC simulations but not by the mean-field theory. This work provides a better understanding of competitive adsorption of counterions to charged surfaces and calls for further studies on the ionic size effect with application to large-scale biomolecular modeling.
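The Metropolis criterion used in these canonical-ensemble MC simulations accepts a trial move with probability min(1, exp(-beta*dE)). A minimal generic sketch of that acceptance step (not the authors' code):

```python
import math
import random

def metropolis_accept(dE, beta, rng=random.random):
    """Canonical-ensemble Metropolis criterion: always accept moves that
    lower the energy; accept uphill moves with probability exp(-beta*dE)."""
    return dE <= 0.0 or rng() < math.exp(-beta * dE)

random.seed(0)
# Downhill moves are always accepted.
assert metropolis_accept(-1.0, beta=1.0)
# Uphill moves with beta*dE = 1 are accepted with Boltzmann probability.
accepts = sum(metropolis_accept(1.0, beta=1.0) for _ in range(20_000))
print(accepts / 20_000)  # close to exp(-1) ≈ 0.368
```

Sampling ionic configurations with this rule converges to the Boltzmann distribution, which is what lets the MC results serve as a benchmark for the mean-field predictions.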

  7. Study of Hydrokinetic Turbine Arrays with Large Eddy Simulation

    NASA Astrophysics Data System (ADS)

    Sale, Danny; Aliseda, Alberto

    2014-11-01

Marine renewable energy is advancing towards commercialization, including electrical power generation from ocean, river, and tidal currents. The focus of this work is to develop numerical simulations capable of predicting the power generation potential of hydrokinetic turbine arrays. This includes analysis of unsteady and averaged flow fields, turbulence statistics, and unsteady loadings on turbine rotors and support structures due to interaction with rotor wakes and ambient turbulence. The governing equations of large-eddy simulation (LES) are solved using a finite-volume method, and the presence of turbine blades is approximated by the actuator-line method, in which hydrodynamic forces are projected onto the flow field as a body force. The actuator-line approach captures helical wake formation including vortex shedding from individual blades, and the effects of drag and vorticity generation from the rough seabed surface are accounted for by wall-models. This LES framework was used to replicate a previous flume experiment consisting of three hydrokinetic turbines tested under various operating conditions and array layouts. Predictions of the power generation, velocity deficit and turbulence statistics in the wakes are compared between the LES and experimental datasets.

  8. REXOR 2 rotorcraft simulation model. Volume 1: Engineering documentation

    NASA Technical Reports Server (NTRS)

    Reaser, J. S.; Kretsinger, P. H.

    1978-01-01

    A rotorcraft nonlinear simulation called REXOR II, divided into three volumes, is described. The first volume is a development of rotorcraft mechanics and aerodynamics. The second is a development and explanation of the computer code required to implement the equations of motion. The third volume is a user's manual, and contains a description of code input/output as well as operating instructions.

  9. Airport Landside. Volume II. The Airport Landside Simulation Model (ALSIM) Description and Users Guide.

    DOT National Transportation Integrated Search

    1982-06-01

    This volume provides a general description of the Airport Landside Simulation Model. A summary of simulated passenger and vehicular processing through the landside is presented. Program operating characteristics and assumptions are documented and a c...

  10. ADHydro: A Large-scale High Resolution Multi-Physics Distributed Water Resources Model for Water Resource Simulations in a Parallel Computing Environment

    NASA Astrophysics Data System (ADS)

    lai, W.; Steinke, R. C.; Ogden, F. L.

    2013-12-01

    Physics-based watershed models are useful tools for hydrologic studies, water resources management and economic analyses in the contexts of climate, land-use, and water-use changes. This poster presents development of a physics-based, high-resolution, distributed water resources model suitable for simulating large watersheds in a massively parallel computing environment. Developing this model is one of the objectives of the NSF EPSCoR RII Track II CI-WATER project, which is joint between Wyoming and Utah. The model, which we call ADHydro, is aimed at simulating important processes in the Rocky Mountain west and includes rainfall and infiltration, snowfall and snowmelt in complex terrain, vegetation and evapotranspiration, soil heat flux and freezing, overland flow, channel flow, groundwater flow and water management. The ADHydro model uses the explicit finite volume method to solve PDEs for 2D overland flow and 2D saturated groundwater flow coupled to 1D channel flow. The model has a quasi-3D formulation that couples 2D overland flow and 2D saturated groundwater flow using the 1D Talbot-Ogden finite water-content infiltration and redistribution model. This eliminates difficulties in solving the highly nonlinear 3D Richards equation, while the finite volume Talbot-Ogden infiltration solution is computationally efficient, guaranteed to conserve mass, and allows simulation of the effect of near-surface groundwater tables on runoff generation. The process-level components of the model are being individually tested and validated. The model as a whole will be tested on the Green River basin in Wyoming and ultimately applied to the entire Upper Colorado River basin.
ADHydro development has necessitated development of tools for large-scale watershed modeling, including open-source workflow steps to extract hydromorphological information from GIS data, integrate hydrometeorological and water management forcing input, and post-processing and visualization of large output data sets. The ADHydro model will be coupled with relevant components of the NOAH-MP land surface scheme and the WRF mesoscale meteorological model. Model objectives include well documented Application Programming Interfaces (APIs) to facilitate modifications and additions by others. We will release the model as open-source in 2014 and begin establishing a users' community.

  11. Application of foam-extend on turbulent fluid-structure interaction

    NASA Astrophysics Data System (ADS)

    Rege, K.; Hjertager, B. H.

    2017-12-01

    Turbulent flow around flexible structures is likely to induce structural vibrations which may eventually lead to fatigue failure. In order to assess the fatigue life of these structures, it is necessary to take into account not only the action of the flow on the structure, but also the influence of the vibrating structure on the fluid flow. This is achieved by performing fluid-structure interaction (FSI) simulations. In this work, we have investigated the capability of an FSI toolkit for the finite volume computational fluid dynamics software foam-extend to simulate turbulence-induced vibrations of a flexible structure. A large-eddy simulation (LES) turbulence model was applied to a basic FSI problem of a flexible wall placed in a confined, turbulent flow. This problem was simulated for 2.32 seconds. This short simulation required over 200 computation hours, using 20 processor cores. It has thereby been shown that the simulation of FSI with LES is possible, but also computationally demanding. In order to make turbulent FSI simulations with foam-extend more applicable, more sophisticated turbulence models and/or faster FSI iteration schemes should be applied.

  12. Scalable subsurface inverse modeling of huge data sets with an application to tracer concentration breakthrough data from magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Lee, Jonghyun; Yoon, Hongkyu; Kitanidis, Peter K.; Werth, Charles J.; Valocchi, Albert J.

    2016-07-01

    Characterizing subsurface properties is crucial for reliable and cost-effective groundwater supply management and contaminant remediation. With recent advances in sensor technology, large volumes of hydrogeophysical and geochemical data can be obtained to achieve high-resolution images of subsurface properties. However, characterization with such a large amount of information requires prohibitive computational costs associated with "big data" processing and numerous large-scale numerical simulations. To tackle such difficulties, the principal component geostatistical approach (PCGA) has been proposed as a "Jacobian-free" inversion method that requires far fewer forward simulation runs per iteration than the number of unknown parameters and measurements needed in traditional inversion methods. PCGA can be conveniently linked to any multiphysics simulation software with independent parallel executions. In this paper, we extend PCGA to handle a large number of measurements (e.g., 10⁶ or more) by constructing a fast preconditioner whose computational cost scales linearly with the data size. For illustration, we characterize the heterogeneous hydraulic conductivity (K) distribution in a laboratory-scale 3-D sand box using about 6 million transient tracer concentration measurements obtained using magnetic resonance imaging. Since each individual observation has little information on the K distribution, the data were compressed by the zeroth temporal moment of breakthrough curves, which is equivalent to the mean travel time under the experimental setting. Only about 2000 forward simulations in total were required to obtain the best estimate with corresponding estimation uncertainty, and the estimated K field captured key patterns of the original packing design, showing the efficiency and effectiveness of the proposed method.
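
    The temporal-moment compression of a breakthrough curve described above can be sketched as follows; the pure-Python trapezoidal integrator and the synthetic curve in the usage note are illustrative:

    ```python
    def trapezoid(y, x):
        """Trapezoidal-rule integral of samples y over abscissae x."""
        return sum((y[i] + y[i + 1]) * (x[i + 1] - x[i]) / 2.0
                   for i in range(len(x) - 1))

    def temporal_moments(t, c):
        """Zeroth and first temporal moments of a breakthrough curve c(t);
        the normalized first moment m1/m0 is the mean travel time."""
        m0 = trapezoid(c, t)
        m1 = trapezoid([ti * ci for ti, ci in zip(t, c)], t)
        return m0, m1, m1 / m0
    ```

    For a triangular pulse centered at t = 5, the routine recovers a mean travel time of 5, so millions of concentration samples collapse to one number per curve.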

  13. Tri-FAST Hardware-in-the-Loop Simulation. Volume I. Tri-FAST Hardware-in-the-Loop Simulation at the Advanced Simulation Center

    DTIC Science & Technology

    1979-03-28

    Technical Report T-79-43. The purpose of this report is to document the Tri-FAST missile simulation development and the seeker hardware-in-the-loop testing at the Advanced Simulation Center. Keywords: Tri-FAST, hardware-in-the-loop, ACSL, RF target models, Advanced Simulation Center.

  14. A solution algorithm for fluid–particle flows across all flow regimes

    DOE PAGES

    Kong, Bo; Fox, Rodney O.

    2017-05-12

    Many fluid–particle flows occurring in nature and in technological applications exhibit large variations in the local particle volume fraction. For example, in circulating fluidized beds there are regions where the particles are close-packed as well as very dilute regions where particle–particle collisions are rare. Thus, in order to simulate such fluid–particle systems, it is necessary to design a flow solver that can accurately treat all flow regimes occurring simultaneously in the same flow domain. In this work, a solution algorithm is proposed for this purpose. The algorithm is based on splitting the free-transport flux solver dynamically and locally in the flow. In close-packed to moderately dense regions, a hydrodynamic solver is employed, while in dilute to very dilute regions a kinetic-based finite-volume solver is used in conjunction with quadrature-based moment methods. To illustrate the accuracy and robustness of the proposed solution algorithm, it is implemented in OpenFOAM for particle velocity moments up to second order, and applied to simulate gravity-driven, gas–particle flows exhibiting cluster-induced turbulence. By varying the average particle volume fraction in the flow domain, it is demonstrated that the flow solver can handle seamlessly all flow regimes present in fluid–particle flows.

  15. A solution algorithm for fluid-particle flows across all flow regimes

    NASA Astrophysics Data System (ADS)

    Kong, Bo; Fox, Rodney O.

    2017-09-01

    Many fluid-particle flows occurring in nature and in technological applications exhibit large variations in the local particle volume fraction. For example, in circulating fluidized beds there are regions where the particles are close-packed as well as very dilute regions where particle-particle collisions are rare. Thus, in order to simulate such fluid-particle systems, it is necessary to design a flow solver that can accurately treat all flow regimes occurring simultaneously in the same flow domain. In this work, a solution algorithm is proposed for this purpose. The algorithm is based on splitting the free-transport flux solver dynamically and locally in the flow. In close-packed to moderately dense regions, a hydrodynamic solver is employed, while in dilute to very dilute regions a kinetic-based finite-volume solver is used in conjunction with quadrature-based moment methods. To illustrate the accuracy and robustness of the proposed solution algorithm, it is implemented in OpenFOAM for particle velocity moments up to second order, and applied to simulate gravity-driven, gas-particle flows exhibiting cluster-induced turbulence. By varying the average particle volume fraction in the flow domain, it is demonstrated that the flow solver can handle seamlessly all flow regimes present in fluid-particle flows.
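
    The dynamic, local splitting of the free-transport flux can be illustrated with a per-cell selector; the threshold values and the linear blend in the intermediate band are assumptions for illustration, not the paper's actual criteria:

    ```python
    def select_flux_solver(alpha_p, alpha_dilute=0.01, alpha_dense=0.1):
        """Choose the free-transport flux treatment for a cell from its local
        particle volume fraction alpha_p (thresholds are illustrative)."""
        if alpha_p >= alpha_dense:
            return ("hydrodynamic", 1.0)   # close-packed to moderately dense
        if alpha_p <= alpha_dilute:
            return ("kinetic", 0.0)        # dilute: quadrature-based moments
        # intermediate band: blend the two fluxes, weight w toward hydrodynamic
        w = (alpha_p - alpha_dilute) / (alpha_dense - alpha_dilute)
        return ("blend", w)
    ```

    Evaluating this per cell, per time step is what makes the splitting both dynamic and local: neighboring cells may use different flux treatments within the same domain.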

  16. A solution algorithm for fluid–particle flows across all flow regimes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kong, Bo; Fox, Rodney O.

    Many fluid–particle flows occurring in nature and in technological applications exhibit large variations in the local particle volume fraction. For example, in circulating fluidized beds there are regions where the particles are close-packed as well as very dilute regions where particle–particle collisions are rare. Thus, in order to simulate such fluid–particle systems, it is necessary to design a flow solver that can accurately treat all flow regimes occurring simultaneously in the same flow domain. In this work, a solution algorithm is proposed for this purpose. The algorithm is based on splitting the free-transport flux solver dynamically and locally in the flow. In close-packed to moderately dense regions, a hydrodynamic solver is employed, while in dilute to very dilute regions a kinetic-based finite-volume solver is used in conjunction with quadrature-based moment methods. To illustrate the accuracy and robustness of the proposed solution algorithm, it is implemented in OpenFOAM for particle velocity moments up to second order, and applied to simulate gravity-driven, gas–particle flows exhibiting cluster-induced turbulence. By varying the average particle volume fraction in the flow domain, it is demonstrated that the flow solver can handle seamlessly all flow regimes present in fluid–particle flows.

  17. Voxel based parallel post processor for void nucleation and growth analysis of atomistic simulations of material fracture.

    PubMed

    Hemani, H; Warrier, M; Sakthivel, N; Chaturvedi, S

    2014-05-01

    Molecular dynamics (MD) simulations are used in the study of void nucleation and growth in crystals that are subjected to tensile deformation. These simulations are run for typically several hundred thousand time steps depending on the problem. We output the atom positions at a required frequency for post processing to determine the void nucleation, growth and coalescence due to tensile deformation. The simulation volume is broken up into voxels of size equal to the unit cell size of crystal. In this paper, we present the algorithm to identify the empty unit cells (voids), their connections (void size) and dynamic changes (growth and coalescence of voids) for MD simulations of large atomic systems (multi-million atoms). We discuss the parallel algorithms that were implemented and discuss their relative applicability in terms of their speedup and scalability. We also present the results on scalability of our algorithm when it is incorporated into MD software LAMMPS. Copyright © 2014 Elsevier Inc. All rights reserved.
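
    The empty-voxel identification and void-connection steps can be sketched with a breadth-first flood fill over face-adjacent (6-connected) voxels; the set-of-indices data layout is an assumption for illustration:

    ```python
    from collections import deque

    def find_voids(occupied, shape):
        """Sizes of connected components of empty voxels (6-connectivity).
        `occupied` is a set of (i, j, k) voxel indices that contain atoms."""
        nx, ny, nz = shape
        seen, sizes = set(), []
        for i in range(nx):
            for j in range(ny):
                for k in range(nz):
                    v = (i, j, k)
                    if v in occupied or v in seen:
                        continue
                    # breadth-first flood fill over face-adjacent empty voxels
                    queue = deque([v])
                    seen.add(v)
                    size = 0
                    while queue:
                        x, y, z = queue.popleft()
                        size += 1
                        for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                            w = (x + dx, y + dy, z + dz)
                            if (0 <= w[0] < nx and 0 <= w[1] < ny and 0 <= w[2] < nz
                                    and w not in occupied and w not in seen):
                                seen.add(w)
                                queue.append(w)
                    sizes.append(size)
        return sizes  # one entry per void, in voxels
    ```

    Tracking these component sizes frame by frame gives void nucleation, growth, and coalescence; a parallel version would partition the voxel grid and merge components across subdomain boundaries.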

  18. Simulated pressure denaturation thermodynamics of ubiquitin.

    PubMed

    Ploetz, Elizabeth A; Smith, Paul E

    2017-12-01

    Simulations of protein thermodynamics are generally difficult to perform and provide limited information. It is desirable to increase the degree of detail provided by simulation and thereby the potential insight into the thermodynamic properties of proteins. In this study, we outline how to analyze simulation trajectories to decompose conformation-specific, parameter free, thermodynamically defined protein volumes into residue-based contributions. The total volumes are obtained using established methods from Fluctuation Solution Theory, while the volume decomposition is new and is performed using a simple proximity method. Native and fully extended ubiquitin are used as the test conformations. Changes in the protein volumes are then followed as a function of pressure, allowing for conformation-specific protein compressibility values to also be obtained. Residue volume and compressibility values indicate significant contributions to protein denaturation thermodynamics from nonpolar and coil residues, together with a general negative compressibility exhibited by acidic residues. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Comparison of volume and surface area nonpolar solvation free energy terms for implicit solvent simulations.

    PubMed

    Lee, Michael S; Olson, Mark A

    2013-07-28

    Implicit solvent models for molecular dynamics simulations are often composed of polar and nonpolar terms. Typically, the nonpolar solvation free energy is approximated by the solvent-accessible-surface area times a constant factor. More sophisticated approaches incorporate an estimate of the attractive dispersion forces of the solvent and/or a solvent-accessible volume cavitation term. In this work, we confirm that a single volume-based nonpolar term most closely fits the dispersion and cavitation forces obtained from benchmark explicit solvent simulations of fixed protein conformations. Next, we incorporated the volume term into molecular dynamics simulations and find the term is not universally suitable for folding up small proteins. We surmise that while mean-field cavitation terms such as volume and SASA often tilt the energy landscape towards native-like folds, they also may sporadically introduce bottlenecks into the folding pathway that hinder the progression towards the native state.

  20. Comparison of volume and surface area nonpolar solvation free energy terms for implicit solvent simulations

    NASA Astrophysics Data System (ADS)

    Lee, Michael S.; Olson, Mark A.

    2013-07-01

    Implicit solvent models for molecular dynamics simulations are often composed of polar and nonpolar terms. Typically, the nonpolar solvation free energy is approximated by the solvent-accessible-surface area times a constant factor. More sophisticated approaches incorporate an estimate of the attractive dispersion forces of the solvent and/or a solvent-accessible volume cavitation term. In this work, we confirm that a single volume-based nonpolar term most closely fits the dispersion and cavitation forces obtained from benchmark explicit solvent simulations of fixed protein conformations. Next, we incorporated the volume term into molecular dynamics simulations and find the term is not universally suitable for folding up small proteins. We surmise that while mean-field cavitation terms such as volume and SASA often tilt the energy landscape towards native-like folds, they also may sporadically introduce bottlenecks into the folding pathway that hinder the progression towards the native state.

  1. High-Performance Reactive Particle Tracking with Adaptive Representation

    NASA Astrophysics Data System (ADS)

    Schmidt, M.; Benson, D. A.; Pankavich, S.

    2017-12-01

    Lagrangian particle tracking algorithms have been shown to be effective tools for modeling chemical reactions in imperfectly-mixed media. One disadvantage of these algorithms is the possible need to employ large numbers of particles in simulations, depending on the concentration covariance structure, and these large particle numbers can lead to long computation times. Two distinct approaches have recently arisen to overcome this. One method employs spatial kernels that are related to a specified, reduced particle number; however, over-wide kernels, dictated by a very low particle number, lead to an excess of reaction calculations and cause a reduction in performance. Another formulation involves hybrid particles that carry multiple species of reactant, wherein each particle is treated as its own well-mixed volume, obviating the need for large numbers of particles for each species but still requiring a fixed number of hybrid particles. Here, we combine these two approaches and demonstrate an improved method for simulating a given system in a computationally efficient manner. Additionally, the independent nature of transport and reaction calculations in this approach allows for significant gains via parallelization in an MPI or OpenMP context. For benchmarking, we choose a CO2 injection simulation with dissolution and precipitation of calcite and dolomite, allowing us to derive the proper treatment of interaction between solid and aqueous phases.

  2. A dual resolution measurement based Monte Carlo simulation technique for detailed dose analysis of small volume organs in the skull base region

    NASA Astrophysics Data System (ADS)

    Yeh, Chi-Yuan; Tung, Chuan-Jung; Chao, Tsi-Chain; Lin, Mu-Han; Lee, Chung-Chi

    2014-11-01

    The purpose of this study was to examine dose distribution of a skull base tumor and surrounding critical structures in response to high dose intensity-modulated radiosurgery (IMRS) with Monte Carlo (MC) simulation using a dual resolution sandwich phantom. The measurement-based Monte Carlo (MBMC) method (Lin et al., 2009) was adopted for the study. The major components of the MBMC technique involve (1) the BEAMnrc code for beam transport through the treatment head of a Varian 21EX linear accelerator, (2) the DOSXYZnrc code for patient dose simulation and (3) an EPID-measured efficiency map which describes non-uniform fluence distribution of the IMRS treatment beam. For the simulated case, five isocentric 6 MV photon beams were designed to deliver a total dose of 1200 cGy in two fractions to the skull base tumor. A sandwich phantom for the MBMC simulation was created based on the patient's CT scan of a skull base tumor [gross tumor volume (GTV) = 8.4 cm³] near the right 8th cranial nerve. The phantom, consisting of a 1.2-cm thick skull base region, had a voxel resolution of 0.05×0.05×0.1 cm³ and was sandwiched in between 0.05×0.05×0.3 cm³ slices of a head phantom. A coarser 0.2×0.2×0.3 cm³ single resolution (SR) phantom was also created for comparison with the sandwich phantom. A particle history of 3×10⁸ for each beam was used for simulations of both the SR and the sandwich phantoms to achieve a statistical uncertainty of <2%. Our study showed that the fraction of the planning target volume (PTV) receiving at least 95% of the prescribed dose (V_PTV95) was 96.9%, 96.7% and 99.9% for the TPS, SR, and sandwich phantom, respectively. The maximum and mean doses to large organs such as the PTV, brain stem, and parotid gland for the TPS, SR and sandwich MC simulations did not show any significant difference; however, significant dose differences were observed for very small structures like the right 8th cranial nerve, right cochlea, right malleus and right semicircular canal.
Dose volume histogram (DVH) analyses revealed much smoother DVH curves for the dual resolution sandwich phantom when compared to the SR phantom. In conclusion, MBMC simulations using a dual resolution sandwich phantom improved simulation spatial resolution for skull base IMRS therapy. More detailed dose analyses for small critical structures can be made available to help in clinical judgment.
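
    Coverage metrics such as V_PTV95 and the cumulative DVH curves discussed above can be computed from per-voxel dose samples; this sketch assumes the structure's doses are available as a flat list:

    ```python
    def coverage_fraction(doses, threshold):
        """Fraction of a structure's voxels receiving at least `threshold` dose;
        with threshold = 0.95 * prescription this is the V95 metric."""
        doses = list(doses)
        return sum(d >= threshold for d in doses) / len(doses)

    def cumulative_dvh(doses, dose_bins):
        """Cumulative dose-volume histogram: for each bin edge, the volume
        fraction receiving at least that dose."""
        return [coverage_fraction(doses, b) for b in dose_bins]
    ```

    Finer voxel grids, as in the sandwich phantom, simply contribute more dose samples per structure, which is why the resulting DVH curves are smoother.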

  3. Origin of the cosmic network in ΛCDM: Nature vs nurture

    NASA Astrophysics Data System (ADS)

    Shandarin, Sergei; Habib, Salman; Heitmann, Katrin

    2010-05-01

    The large-scale structure of the Universe, as traced by the distribution of galaxies, is now being revealed by large-volume cosmological surveys. The structure is characterized by galaxies distributed along filaments, the filaments connecting in turn to form a percolating network. Our objective here is to quantitatively specify the underlying mechanisms that drive the formation of the cosmic network: By combining percolation-based analyses with N-body simulations of gravitational structure formation, we elucidate how the network has its origin in the properties of the initial density field (nature) and how its contrast is then amplified by the nonlinear mapping induced by the gravitational instability (nurture).
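
    A percolation test of the kind used in such analyses can be sketched on a thresholded 2D density field; the grid representation, 4-connectivity, and left-to-right spanning criterion are illustrative choices:

    ```python
    from collections import deque

    def percolates(mask):
        """True if the True cells of a 2D boolean grid form a connected path
        from the left edge to the right edge (4-connectivity)."""
        rows, cols = len(mask), len(mask[0])
        seen = set()
        queue = deque((r, 0) for r in range(rows) if mask[r][0])
        seen.update(queue)
        while queue:
            r, c = queue.popleft()
            if c == cols - 1:
                return True   # a spanning cluster exists
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and mask[nr][nc] and (nr, nc) not in seen):
                    seen.add((nr, nc))
                    queue.append((nr, nc))
        return False
    ```

    Sweeping the density threshold that defines `mask` and recording when a spanning cluster first appears is the basic percolation statistic used to characterize the network.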

  4. Studying the properties and response of a large volume (946 cm³) LaBr3:Ce detector with γ-rays up to 22.5 MeV

    NASA Astrophysics Data System (ADS)

    Mazumdar, I.; Gothe, D. A.; Anil Kumar, G.; Yadav, N.; Chavan, P. B.; Patel, S. M.

    2013-03-01

    This paper presents the results of our measurements and detailed simulations using GEANT4 to investigate the performance of a large volume (946 cm³) cylindrical (3.5 in. diameter × 6 in. length) LaBr3:Ce detector. The properties of the detector have been studied using γ-rays from radioactive sources and an in-beam reaction, from a few hundred keV to 22.5 MeV. The salient features, which have been studied in depth, are the uniformity and internal activity of the crystal, the energy and timing resolutions, linearity of the response up to 22.5 MeV, and efficiencies. A highly linear response has been observed by extracting the energy signal from a lower dynode and operating the PMT at a low voltage. The detector is to be primarily used for measuring high-energy γ-ray spectra in Giant Dipole Resonance (GDR) decay studies.

  5. PBSM3D: A finite volume, scalar-transport blowing snow model for use with variable resolution meshes

    NASA Astrophysics Data System (ADS)

    Marsh, C.; Wayand, N. E.; Pomeroy, J. W.; Wheater, H. S.; Spiteri, R. J.

    2017-12-01

    Blowing snow redistribution results in heterogeneous snowcovers that are ubiquitous in cold, windswept environments. Capturing this spatial and temporal variability is important for melt and runoff simulations. Point scale blowing snow transport models are difficult to apply in fully distributed hydrological models due to landscape heterogeneity and complex wind fields. Many existing distributed snow transport models have empirical wind flow and/or simplified wind direction algorithms that perform poorly in calculating snow redistribution where there are divergent wind flows, sharp topography, and over large spatial extents. Herein, a steady-state scalar transport model is discretized using the finite volume method (FVM), using parameterizations from the Prairie Blowing Snow Model (PBSM). PBSM has been applied in hydrological response units and grids to prairie, arctic, glacier, and alpine terrain and shows a good capability to represent snow redistribution over complex terrain. The FVM discretization takes advantage of the variable resolution mesh in the Canadian Hydrological Model (CHM) to ensure efficient calculations over small and large spatial extents. Variable resolution unstructured meshes preserve surface heterogeneity but result in fewer computational elements versus high-resolution structured (raster) grids. Snowpack, soil moisture, and streamflow observations were used to evaluate CHM-modelled outputs in a sub-arctic and an alpine basin. Newly developed remotely sensed snowcover indices allowed for validation over large basins. CHM simulations of snow hydrology were improved by inclusion of the blowing snow model. The results demonstrate the key role of snow transport processes in creating pre-melt snowcover heterogeneity and therefore governing post-melt soil moisture and runoff generation dynamics.

  6. Initial parametric study of the flammability of plume releases in Hanford waste tanks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antoniak, Z.I.; Recknagle, K.P.

    This study comprised systematic analyses of waste tank headspace flammability following a plume-type gas release from the waste. First, critical parameters affecting plume flammability were selected, evaluated, and refined. As part of the evaluation, the effect of ventilation (breathing) air inflow on the convective flow field inside the tank headspace was assessed, and the magnitude of so-called "numerical diffusion" on numerical simulation accuracy was investigated. Both issues were concluded to be negligible influences on predicted flammable gas concentrations in the tank headspace. Previous validation of the TEMPEST code against experimental data is also discussed, with calculated results in good agreement with experimental data. Twelve plume release simulations were then run, using release volumes and flow rates thought to cover the range of actual release volumes and rates. The results indicate that most plume-type releases remain flammable only until the actual release ends. Only for very large releases representing a significant fraction of the volume necessary to make the entire mixed headspace flammable (many thousands of cubic feet) can flammable concentrations persist for several hours after the release ends. However, as in the smaller plumes, only a fraction of the total release volume is flammable at any one time. The transient evolution of several plume sizes is illustrated in a number of color contour plots that provide insight into plume mixing behavior.

  7. Multi-scale modeling of multi-component reactive transport in geothermal aquifers

    NASA Astrophysics Data System (ADS)

    Nick, Hamidreza M.; Raoof, Amir; Wolf, Karl-Heinz; Bruhn, David

    2014-05-01

    In deep geothermal systems heat and chemical stresses can cause physical alterations, which may have a significant effect on flow and reaction rates. As a consequence it will lead to changes in permeability and porosity of the formations due to mineral precipitation and dissolution. Large-scale modeling of reactive transport in such systems is still challenging. A large area of uncertainty is the way in which the pore-scale information controlling the flow and reaction will behave at a larger scale. A possible choice is to use constitutive relationships relating, for example the permeability and porosity evolutions to the change in the pore geometry. While determining such relationships through laboratory experiments may be limited, pore-network modeling provides an alternative solution. In this work, we introduce a new workflow in which a hybrid Finite-Element Finite-Volume method [1,2] and a pore network modeling approach [3] are employed. Using the pore-scale model, relevant constitutive relations are developed. These relations are then embedded in the continuum-scale model. This approach enables us to study non-isothermal reactive transport in porous media while accounting for micro-scale features under realistic conditions. The performance and applicability of the proposed model is explored for different flow and reaction regimes. References: 1. Matthäi, S.K., et al.: Simulation of solute transport through fractured rock: a higher-order accurate finite-element finite-volume method permitting large time steps. Transport in porous media 83.2 (2010): 289-318. 2. Nick, H.M., et al.: Reactive dispersive contaminant transport in coastal aquifers: Numerical simulation of a reactive Henry problem. Journal of contaminant hydrology 145 (2012), 90-104. 3. Raoof A., et al.: PoreFlow: A Complex pore-network model for simulation of reactive transport in variably saturated porous media, Computers & Geosciences, 61, (2013), 160-174.
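
    One common constitutive choice for the permeability-porosity coupling described above is a Kozeny-Carman style relation; the specific form below is a standard textbook expression used here for illustration, not necessarily the one derived from the cited pore-network model:

    ```python
    def kozeny_carman_update(k0, phi0, phi):
        """Kozeny-Carman style permeability update from a porosity change:
        k/k0 = (phi/phi0)**3 * ((1 - phi0) / (1 - phi))**2,
        where (k0, phi0) are reference permeability and porosity."""
        return k0 * (phi / phi0) ** 3 * ((1 - phi0) / (1 - phi)) ** 2
    ```

    In a continuum-scale reactive transport step, mineral precipitation reduces phi and the relation returns the correspondingly reduced permeability for the next flow solve.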

  8. Multibillion-atom Molecular Dynamics Simulations of Plasticity, Spall, and Ejecta

    NASA Astrophysics Data System (ADS)

    Germann, Timothy C.

    2007-06-01

    Modern supercomputing platforms, such as the IBM BlueGene/L at Lawrence Livermore National Laboratory and the Roadrunner hybrid supercomputer being built at Los Alamos National Laboratory, are enabling large-scale classical molecular dynamics simulations of phenomena that were unthinkable just a few years ago. Using either the embedded atom method (EAM) description of simple (close-packed) metals, or modified EAM (MEAM) models of more complex solids and alloys with mixed covalent and metallic character, simulations containing billions to trillions of atoms are now practical, reaching volumes in excess of a cubic micron. In order to obtain any new physical insights, however, it is equally important that the analysis of such systems be tractable. This is in fact possible, in large part due to our highly efficient parallel visualization code, which enables the rendering of atomic spheres, Eulerian cells, and other geometric objects in a matter of minutes, even for tens of thousands of processors and billions of atoms. After briefly describing the BlueGene/L and Roadrunner architectures, and the code optimization strategies that were employed, results obtained thus far on BlueGene/L will be reviewed, including: (1) shock compression and release of a defective EAM Cu sample, illustrating the plastic deformation accompanying void collapse as well as the subsequent void growth and linkup upon release; (2) solid-solid martensitic phase transition in shock-compressed MEAM Ga; and (3) Rayleigh-Taylor fluid instability modeled using large-scale direct simulation Monte Carlo (DSMC) simulations. I will also describe our initial experiences utilizing Cell Broadband Engine processors (developed for the Sony PlayStation 3), and planned simulation studies of ejecta and spall failure in polycrystalline metals that will be carried out when the full Petaflop Opteron/Cell Roadrunner supercomputer is assembled in mid-2008.

  9. Monte Carlo simulations of flexible polyanions complexing with whey proteins at their isoelectric point

    NASA Astrophysics Data System (ADS)

    de Vries, R.

    2004-02-01

    Electrostatic complexation of flexible polyanions with the whey proteins α-lactalbumin and β-lactoglobulin is studied using Monte Carlo simulations. The proteins are considered at their respective isoelectric points. Discrete charges on the model polyelectrolytes and proteins interact through Debye-Hückel potentials. Protein excluded volume is taken into account through a coarse-grained model of the protein shape. Consistent with experimental results, it is found that α-lactalbumin complexes much more strongly than β-lactoglobulin. For α-lactalbumin, strong complexation is due to localized binding to a single large positive "charge patch," whereas for β-lactoglobulin, weak complexation is due to diffuse binding to multiple smaller charge patches.
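
    The screened interaction between discrete charges can be sketched as follows; the Bjerrum-length default and the nm-based units are illustrative conventions, not the paper's parameters:

    ```python
    import math

    def debye_huckel_energy(q1, q2, r, kappa, lB=0.714):
        """Screened Coulomb (Debye-Hückel) pair interaction in units of kT:
        u = lB * q1 * q2 * exp(-kappa * r) / r,
        with charges q1, q2 in elementary charges, separation r in nm,
        inverse screening length kappa in 1/nm, and Bjerrum length lB in nm
        (about 0.714 nm for water at 298 K)."""
        return lB * q1 * q2 * math.exp(-kappa * r) / r
    ```

    Summing this pairwise term over all polyelectrolyte-bead and protein-charge pairs gives the electrostatic part of the MC energy; opposite charges yield negative (attractive) contributions, which is what drives binding to the positive charge patch.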

  10. Final Technical Report for ARRA Funding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rusack, Roger; Mans, Jeremiah; Poling, Ronald

    Final technical report of the University of Minnesota experimental high energy physics group for ARRA support. The Cryogenic Dark Matter Experiment (CDMS) used the funds received to construct a new passive shield to protect a high-purity germanium detector located in the Soudan mine in Northern Minnesota from cosmic rays. The BESIII and the CMS groups purchased computing hardware to assemble computer farms for data analysis and to generate large volumes of simulated data for comparison with the data collected.

  11. Equilibrium Wall Model Implementation in a Nodal Finite Element Flow Solver JENRE for Large Eddy Simulations

    DTIC Science & Technology

    2017-11-13

condition is applied to the inviscid and viscous fluxes on the wall to satisfy the surface physical condition, but a non-zero surface tangential...velocity profiles and turbulence quantities predicted by the current wall-model implementation agree well with available experimental data and...implementations. The volume and surface integrals based on the non-zero surface velocity in a cell adjacent to the wall show a good agreement with those

  12. Parametric FEM for geometric biomembranes

    NASA Astrophysics Data System (ADS)

    Bonito, Andrea; Nochetto, Ricardo H.; Sebastian Pauletti, M.

    2010-05-01

    We consider geometric biomembranes governed by an L2-gradient flow for bending energy subject to area and volume constraints (Helfrich model). We give a concise derivation of a novel vector formulation, based on shape differential calculus, and corresponding discretization via parametric FEM using quadratic isoparametric elements and a semi-implicit Euler method. We document the performance of the new parametric FEM with a number of simulations leading to dumbbell, red blood cell and toroidal equilibrium shapes while exhibiting large deformations.
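For reference, the constrained bending (Helfrich) energy driving the flow can be written as follows, with H the total mean curvature, κ the bending rigidity, and A0, V0 the prescribed area and volume (standard form; sign and normalization conventions vary between papers):

```latex
E[\Gamma] = \frac{\kappa}{2}\int_{\Gamma} H^{2}\,dA,
\qquad \text{subject to} \quad
\int_{\Gamma} dA = A_{0},
\qquad
\int_{\Omega} dV = V_{0},
```

with the membrane shape evolving along the constrained L2-gradient flow of E, the constraints being enforced via Lagrange multipliers.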

  13. Curriculum Development for Transfer Learning in Dynamic Multiagent Settings

    DTIC Science & Technology

    2016-06-01

(HFO) Half field offense [19] is a subtask of RoboCup simulated soccer in which a team of m offensive players try to score a goal against n defensive... players while playing on one half of a soccer field. The domain poses many challenges, including a large, continuous state and action space, coordi...case study. In RoboCup-2006: Robot Soccer World Cup X, volume 4434 of Lecture Notes in Artificial Intelligence, pages 72–85. Springer-Verlag, Berlin

  14. Mission and Objectives for the X-1 Advanced Radiation Source*

    NASA Astrophysics Data System (ADS)

    Rochau, Gary E.; Ramirez, Juan J.; Raglin, Paul S.

    1998-11-01

Sandia National Laboratories, PO Box 5800, MS-1178, Albuquerque, NM 87185. The X-1 Advanced Radiation Source represents the next step in providing the U.S. Department of Energy's Stockpile Stewardship Program with a high-energy, large-volume laboratory x-ray source for the Radiation Effects Science and Simulation, Inertial Confinement Fusion, and Weapon Physics Programs. Advances in fast pulsed power technology and in z-pinch hohlraums on Sandia National Laboratories' Z Accelerator provide a sufficient basis for pursuing the development of X-1. The X-1 plan follows a strategy based on scaling the 2 MJ x-ray output of Z via a 3-fold increase in z-pinch load current. The large-volume (>5 cm3), high-temperature (>150 eV), temporally long (>10 ns) hohlraums are unique outside of underground nuclear weapon testing. Analytical scaling arguments and hydrodynamic simulations indicate that these hohlraums at temperatures of 230-300 eV will ignite thermonuclear fuel and drive the reaction to a yield of 200 to 1,200 MJ in the laboratory. Non-ignition sources will provide cold x-ray environments (<15 keV) and high-yield fusion burn sources will provide high-fidelity warm x-ray environments (15 keV-80 keV). This paper will introduce the X-1 Advanced Radiation Source Facility Project and describe the project mission, objectives, and preliminary schedule.

  15. Efficient and robust compositional two-phase reservoir simulation in fractured media

    NASA Astrophysics Data System (ADS)

    Zidane, A.; Firoozabadi, A.

    2015-12-01

Compositional and compressible two-phase flow in fractured media has wide applications, including CO2 injection. Accurate simulations are currently based on the discrete fracture approach using the cross-flow equilibrium model. In this approach the fractures and a small part of the matrix blocks are combined to form a grid cell. The major drawback is low computational efficiency. In this work we use the discrete-fracture approach to model the fractures, where the fracture entities are described explicitly in the computational domain. We use the concept of cross-flow equilibrium in the fractures (FCFE). This allows using large matrix elements in the neighborhood of the fractures. We solve the fracture transport equations implicitly to overcome the Courant-Friedrichs-Lewy (CFL) condition in the small fracture elements. Our implicit approach is based on calculation of the derivative of the molar concentration of component i in phase α (cαi) with respect to the total molar concentration (ci) at constant volume V and temperature T. This contributes to a significant speed-up of the code. The hybrid mixed finite element method (MFE) is used to solve for the velocity in both the matrix and the fractures, coupled with the discontinuous Galerkin (DG) method to solve the species transport equations in the matrix and a finite volume (FV) discretization in the fractures. In large-scale problems the proposed approach is orders of magnitude faster than existing models.
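The motivation for treating fracture transport implicitly can be seen from the CFL restriction itself; the element sizes and velocity below are illustrative assumptions:

```python
def max_explicit_dt(dx_m, velocity_m_s, cfl=1.0):
    """Largest stable explicit time step under the CFL condition dt <= CFL*dx/v."""
    return cfl * dx_m / velocity_m_s

# A thin fracture element vs. a large matrix element at the same transport velocity:
dt_fracture = max_explicit_dt(1.0e-3, 1.0e-4)   # 1 mm fracture-scale cell
dt_matrix   = max_explicit_dt(10.0, 1.0e-4)     # 10 m matrix element
ratio = dt_matrix / dt_fracture                 # fracture cell is 10,000x stiffer
```

An explicit scheme would be forced to the fracture-scale step everywhere the two couple; solving the fracture equations implicitly removes that bound while the matrix keeps its large elements.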

  16. Offshore Storage Resource Assessment - Final Scientific/Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Savage, Bill; Ozgen, Chet

The DOE-developed volumetric equation for estimating Prospective Resources (CO2 storage) in oil and gas reservoirs was applied to each depleted field in the Federal GOM. This required assessment of the in-situ hydrocarbon fluid volumes for the fields under evaluation in order to apply the DOE equation. This project utilized public data from the U.S. Department of the Interior, Bureau of Ocean Energy Management (BOEM) Reserves database and from a well-reputed, large database (250,000+ wells) of GOM well and production data marketed by IHS, Inc. IHS-interpreted structure map files were also accessed for a limited number of fields. The databases were used along with geological and petrophysical software to identify depleted oil and gas fields in the Federal GOM region. BOEM arranged for access by the project team to proprietary reservoir-level maps under an NDA. Review of the BOEM Reserves database as of December 31, 2013 indicated that 675 fields in the region were depleted. NITEC identified and ranked these 675 fields, containing 3,514 individual reservoirs, based on BOEM's estimated OOIP or OGIP values available in the Reserves database. The estimated BOEM OOIP or OGIP values for five fields were validated by an independent evaluation using available petrophysical, geologic and engineering data in the databases. Once this validation was successfully completed, the BOEM ranked list was used to calculate the estimated CO2 storage volume for each field/reservoir using the DOE CO2 Resource Estimate Equation. This calculation assumed a range for the CO2 efficiency factor in the equation, as it was not known at that point in time. NITEC then utilized reservoir simulation to further enhance and refine the DOE-equation estimated range of CO2 storage volumes.
NITEC used a purpose-built, publicly available, 4-component, compositional reservoir simulator developed under funding from DOE (DE-FE0006015) to assess CO2-EOR and CO2 storage in 73 fields/461 reservoirs. This simulator was fast and easy to use and provided a valuable enhanced assessment and refinement of the estimated CO2 storage volume for each reservoir simulated. The user interface was expanded to allow calculation of a probability-based assessment of the CO2 storage volume based on typical uncertainties in operating conditions and reservoir properties during the CO2 injection period. This modeling of the CO2 storage estimates for the simulated reservoirs resulted in correlations applicable to all reservoir types (a refined DOE equation) which can be used for predictive purposes with available public data. Application of the correlations to the 675 depleted fields yielded a total CO2 storage capacity of 4,748 MM tons. The CO2 storage assessments were supplemented with simulation modeling of eleven (11) oil reservoirs that quantified the change in stored CO2 volume with the addition of CO2-EOR (Enhanced Oil Recovery) production. Application of CO2-EOR to oil reservoirs resulted in higher volumes of CO2 storage.
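The volumetric screening calculation described above can be sketched as follows. This follows the general DOE-NETL form for depleted oil and gas reservoirs (bulk rock volume × porosity × hydrocarbon saturation × CO2 density × efficiency factor); the report's exact equation, unit conventions and formation-volume-factor handling are not reproduced here, and all input values are invented for illustration:

```python
def co2_storage_mass_kg(area_m2, net_thickness_m, porosity,
                        water_saturation, rho_co2_kg_m3, efficiency):
    """Prospective CO2 storage mass for a depleted reservoir, volumetric style:
    the pore volume once occupied by hydrocarbons, times the in-situ CO2
    density, times an efficiency factor for the fraction CO2 can actually fill."""
    pore_volume = area_m2 * net_thickness_m * porosity
    hc_pore_volume = pore_volume * (1.0 - water_saturation)
    return hc_pore_volume * rho_co2_kg_m3 * efficiency

# Illustrative reservoir: 10 km^2 area, 20 m net pay, 25% porosity, Swi = 0.3,
# CO2 at ~700 kg/m^3 downhole, efficiency factor 0.5 (all assumed values).
mass_kg = co2_storage_mass_kg(1.0e7, 20.0, 0.25, 0.3, 700.0, 0.5)
mass_mt = mass_kg / 1.0e9    # megatonnes
```

The efficiency factor is exactly the quantity the report's simulator runs were used to refine; here it is just an assumed input.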

  17. Evaluating visibility of age spot and freckle based on simulated spectral reflectance distribution and facial color image

    NASA Astrophysics Data System (ADS)

    Hirose, Misa; Toyota, Saori; Tsumura, Norimichi

    2018-02-01

In this research, we evaluate the visibility of age spots and freckles as the blood volume changes, based on simulated spectral reflectance distributions and actual facial color images, and compare the results. First, we generate three types of spatial distribution of age spots and freckles in patch-like images based on the simulated spectral reflectance. The spectral reflectance is simulated using Monte Carlo simulation of light transport in multi-layered tissue. Next, we reconstruct the facial color image while changing the blood volume. We acquire the concentration distributions of the melanin, hemoglobin and shading components by applying independent component analysis to a facial color image. We reproduce images using the obtained melanin and shading concentrations and the changed hemoglobin concentration. Finally, we evaluate the visibility of the pigmentations using the simulated spectral reflectance distributions and the facial color images. For the simulated spectral reflectance distributions, we found that visibility decreases as the blood volume increases. For the actual facial color images, however, only a specific blood volume reduces the visibility of the pigmentations.
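The reconstruction step can be sketched in the optical-density domain, where skin colour is commonly modelled as a linear mix of melanin, hemoglobin and shading components; the mixing vectors and scaling factor below are invented for illustration, not the paper's estimated values:

```python
import numpy as np

# Assumed (illustrative) optical-density mixing vectors per RGB channel:
V_MELANIN    = np.array([0.50, 0.65, 0.80])
V_HEMOGLOBIN = np.array([0.30, 0.75, 0.60])

def reconstruct(c_mel, c_hem, shading, blood_scale=1.0):
    """Remix component concentration maps into an RGB image, scaling only the
    hemoglobin concentration to simulate a change in blood volume.
    c_mel, c_hem, shading: (H, W) maps; returns an (H, W, 3) image in [0, 1]."""
    od = (c_mel[..., None] * V_MELANIN
          + blood_scale * c_hem[..., None] * V_HEMOGLOBIN
          + shading[..., None])
    return np.exp(-od)        # Beer-Lambert-style mapping back to reflectance

rng = np.random.default_rng(0)
c_mel, c_hem, shade = (rng.random((8, 8)) * 0.3 for _ in range(3))
img_baseline   = reconstruct(c_mel, c_hem, shade, blood_scale=1.0)
img_more_blood = reconstruct(c_mel, c_hem, shade, blood_scale=2.0)
```

Doubling the hemoglobin concentration raises the optical density everywhere, so the reconstructed image darkens, which is the "changed blood volume" manipulation the visibility evaluation relies on.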

  18. Parameter studies on the energy balance closure problem using large-eddy simulation

    NASA Astrophysics Data System (ADS)

    De Roo, Frederik; Banerjee, Tirtha; Mauder, Matthias

    2017-04-01

The imbalance of the surface energy budget in eddy-covariance measurements is still a pending problem. A possible cause is the presence of land surface heterogeneity. Heterogeneities of the boundary-layer scale or larger are most effective in influencing the boundary-layer turbulence, and large-eddy simulations have shown that secondary circulations within the boundary layer can affect the surface energy budget. However, the precise influence of the surface characteristics on the energy imbalance and its partitioning is still unknown. To investigate the influence of surface variables on all the components of the flux budget under convective conditions, we set up a systematic parameter study by means of large-eddy simulation. For the study we use a virtual control volume approach, and we focus on idealized heterogeneity by considering spatially variable surface fluxes. The surface fluxes vary locally in intensity, and these patches have different length scales. The main focus lies on heterogeneities at the kilometer scale and one decade smaller. For each simulation, virtual measurement towers are positioned at functionally different positions. We discriminate between the locally homogeneous towers, located within land-use patches, and the more heterogeneous towers, and find, among other things, that the flux divergence and the advection are strongly linearly related within each class. Furthermore, we seek correlators for the energy balance ratio and the energy residual in the simulations. Besides the expected correlation with measurable atmospheric quantities such as the friction velocity, boundary-layer depth, and temperature and moisture gradients, we also found an unexpected correlation with the difference between sonic temperature and surface temperature. In additional simulations with a large number of virtual towers, we investigate higher-order correlations, which can be linked to secondary circulations.
In a companion presentation (EGU2017-2130) these correlations are investigated and confirmed with the help of micrometeorological measurements from the TERENO sites where the effects of landscape scale surface heterogeneities are deemed to be important.
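The two closure metrics examined above, the energy balance ratio and the residual, have standard definitions; the flux values below are illustrative:

```python
def closure_metrics(rn, g, h, le):
    """Energy balance ratio (H + LE) / (Rn - G) and residual Rn - G - H - LE
    for eddy-covariance fluxes in W m^-2 (standard definitions)."""
    ebr = (h + le) / (rn - g)
    residual = rn - g - h - le
    return ebr, residual

# A typical daytime convective case with ~20% imbalance (illustrative values):
ebr, res = closure_metrics(rn=500.0, g=50.0, h=150.0, le=210.0)
print(ebr, res)   # 0.8 and 90.0 W/m^2
```

An EBR below one with a positive residual is the usual signature of the closure problem; the parameter study asks which surface and atmospheric variables this shortfall correlates with.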

  19. Training Community Modeling and Simulation Business Plan, 2007 Edition. Volume 1: Review of Training Capabilities

    DTIC Science & Technology

    2009-02-01

Simulation Business Plan, 2007 Edition Volume I: Review of Training Capabilities. J.D. Fletcher, IDA; Frederick E. Hartman, IDA; Robert Halayko, Addx Corp...Steering Committee for the training community led by the Office of the Under Secretary of Defense (Personnel and Readiness), OUSD(P&R). The task was

  20. Cardiovascular simulator improvement: pressure versus volume loop assessment.

    PubMed

    Fonseca, Jeison; Andrade, Aron; Nicolosi, Denys E C; Biscegli, José F; Leme, Juliana; Legendre, Daniel; Bock, Eduardo; Lucchi, Julio Cesar

    2011-05-01

This article presents improvements to a physical cardiovascular simulator (PCS) system. The intraventricular pressure versus intraventricular volume (PxV) loop was obtained to evaluate the performance of a pulsatile chamber mimicking the human left ventricle. The PxV loop reflects heart contractility and is normally used to evaluate heart performance. In many heart diseases, the stroke volume decreases because of low heart contractility. This pathological situation must be simulated by the PCS in order to evaluate the assistance provided by a ventricular assist device (VAD). The PCS system is automatically controlled by a computer and is an auxiliary tool for the development of VAD control strategies. The PCS system is based on a Windkessel model in which lumped parameters are used for cardiovascular system analysis. Peripheral resistance, arterial compliance, and fluid inertance are simulated. The simulator has an actuator with a roller screw and a brushless direct-current motor, and the stroke volume is regulated by the actuator displacement. Internal pressure and volume measurements are monitored to obtain the PxV loop. Left-chamber internal pressure is obtained directly by a pressure transducer; internal volume, however, is obtained indirectly by a linear variable differential transformer, which senses the diaphragm displacement. Correlations between the internal volume and diaphragm position are made. LabVIEW integrates these signals and shows the pressure versus internal volume loop. The results obtained from the PCS system show PxV loops at different ventricular elastances, making it possible to simulate pathological situations. A preliminary test with a pulsatile VAD attached to the PCS system was made. © 2011, Copyright the Authors. Artificial Organs © 2011, International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
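The lumped-parameter (Windkessel) behaviour such a simulator reproduces can be sketched with a minimal two-element model; R, C and the inflow waveform below are illustrative, not the calibrated values of the PCS:

```python
import math

R = 1.0       # peripheral resistance, mmHg*s/mL (illustrative)
C = 1.5       # arterial compliance, mL/mmHg (illustrative)
T = 0.8       # cardiac period, s
DT = 1.0e-3

def inflow(t):
    """Pulsatile inflow: half-sine ejection for 0.3 s, zero in diastole (mL/s)."""
    phase = t % T
    return 300.0 * math.sin(math.pi * phase / 0.3) if phase < 0.3 else 0.0

# Two-element Windkessel: C * dP/dt = Q(t) - P/R, forward-Euler integration.
p = 80.0      # initial arterial pressure, mmHg
trace = []
for i in range(int(5 * T / DT)):      # five beats, enough to settle into a cycle
    t = i * DT
    p += DT * (inflow(t) - p / R) / C
    trace.append(p)

last_beat = trace[-int(T / DT):]
print(round(min(last_beat)), round(max(last_beat)))  # diastolic/systolic-like pair
```

Lowering the stroke volume (the inflow amplitude) shrinks the pressure swing, which is exactly the low-contractility pathology the PCS is meant to present to a VAD under test.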

  1. Mobile flow cytometer for mHealth.

    PubMed

    Balsam, Joshua; Bruck, Hugh Alan; Rasooly, Avraham

    2015-01-01

Flow cytometry is used for cell counting and analysis in numerous clinical and environmental applications. However, flow cytometry is not used in mHealth, mainly because current flow cytometers are large, expensive, power-intensive devices designed to operate in a laboratory. Their design results in a lack of portability and makes them unsuitable for mHealth applications. Another limitation of current technology is the low volumetric throughput rates, which are not suitable for rapid detection of rare cells. To address these limitations, we describe here a novel, low-cost, mobile flow cytometer based on wide-field imaging with a webcam for large-volume, high-throughput fluorescence detection of rare cells as a simulation of circulating tumor cell (CTC) detection. The mobile flow cytometer uses a commercially available webcam capable of 187 frames per second video capture at a resolution of 320 × 240 pixels. For fluorescence detection, a 1 W 450 nm blue laser is used for excitation of Syto-9 fluorescently stained cells detected at 535 nm. A wide-field flow cell was developed for large-volume analysis that allows the linear velocity of target cells to be lower than in the conventional hydrodynamic-focusing flow cells typically used in cytometry. The mobile flow cytometer was found to be capable of detecting low concentrations at flow rates of 500 μL/min, suitable for rare cell detection in large volumes. The simplicity and low cost of this device suggest that it may have potential clinical use for mHealth flow cytometry in resource-poor settings associated with global health.
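The reason a wide-field flow cell lowers the linear velocity at a fixed volumetric flow rate is just continuity, v = Q / A; the channel cross-sections below are illustrative assumptions, not the paper's exact geometry:

```python
# Linear velocity v = Q / A at a fixed volumetric flow rate: a wide-field flow
# cell slows the cells so a 187 fps webcam can still catch each one in frame.
Q = 500e-9 / 60.0                 # 500 uL/min expressed in m^3/s
a_wide   = 10e-3 * 0.5e-3         # assumed 10 mm x 0.5 mm wide-field channel, m^2
a_narrow = 0.2e-3 * 0.2e-3        # assumed 200 um x 200 um focusing channel, m^2
v_wide, v_narrow = Q / a_wide, Q / a_narrow
slowdown = v_narrow / v_wide      # ~125x lower linear velocity in the wide cell
print(f"{v_wide * 1e3:.2f} mm/s vs {v_narrow * 1e3:.0f} mm/s")
```

At millimetres per second a cell stays within the imaged field for many consecutive frames, which is what makes webcam-rate imaging of a 500 μL/min stream feasible.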

  2. Comparing centralised and decentralised anaerobic digestion of stillage from a large-scale bioethanol plant to animal feed production.

    PubMed

    Drosg, B; Wirthensohn, T; Konrad, G; Hornbachner, D; Resch, C; Wäger, F; Loderer, C; Waltenberger, R; Kirchmayr, R; Braun, R

    2008-01-01

A comparison of stillage treatment options for large-scale bioethanol plants was based on the data of an existing plant producing approximately 200,000 t/yr of bioethanol and 1,400,000 t/yr of stillage. Animal feed production--the state-of-the-art technology at the plant--was compared to anaerobic digestion. The latter was simulated in two different scenarios: digestion in small-scale biogas plants in the surrounding area versus digestion in a large-scale biogas plant at the bioethanol production site. Emphasis was placed on a holistic simulation balancing chemical parameters and calculating logistic algorithms to compare the efficiency of the stillage treatment solutions. For central anaerobic digestion, different digestate handling solutions were considered because of the large amount of digestate. For land application, a minimum of 36,000 ha of available agricultural area and 600,000 m3 of storage volume would be needed. Secondly, membrane purification of the digestate, consisting of a decanter, microfiltration, and reverse osmosis, was investigated. As a third option, aerobic wastewater treatment of the digestate was discussed. The final outcome was an economic evaluation of the three mentioned stillage treatment options, as a guide to stillage management for operators of large-scale bioethanol plants. Copyright IWA Publishing 2008.

  3. Effective charges and virial pressure of concentrated macroion solutions

    DOE PAGES

    Boon, Niels; Guerrero-García, Guillermo Ivan; van Roij, René; ...

    2015-07-13

The stability of colloidal suspensions is crucial in a wide variety of processes, including the fabrication of photonic materials and scaffolds for biological assemblies. The ionic strength of the electrolyte that suspends charged colloids is widely used to control the physical properties of colloidal suspensions. The extensively used two-body Derjaguin-Landau-Verwey-Overbeek (DLVO) approach allows for a quantitative analysis of the effective electrostatic forces between colloidal particles. DLVO relates the ionic double layers, which enclose the particles, to their effective electrostatic repulsion. Nevertheless, the double layer is distorted at high macroion volume fractions. Therefore, DLVO cannot describe the many-body effects that arise in concentrated suspensions. In this paper, we show that this problem can be largely resolved by identifying effective point charges for the macroions using cell theory. This extrapolated point charge (EPC) method assigns effective point charges in a consistent way, taking into account the excluded volume of highly charged macroions at any concentration, and thereby naturally accounting for high volume fractions in both salt-free and added-salt conditions. We provide an analytical expression for the effective pair potential and validate the EPC method by comparing molecular dynamics simulations of macroions and monovalent microions that interact via Coulombic potentials to simulations of macroions interacting via the derived EPC effective potential. The simulations reproduce the macroion-macroion spatial correlation and the virial pressure obtained with the EPC model. Finally, our findings provide a route to relate physical properties such as pressure in systems of screened Coulomb particles to experimental measurements.
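The effective pair interaction that such a method feeds into simulations is the standard screened-Coulomb (Yukawa/DLVO) form sketched below; how the effective charge is extrapolated from cell theory is the substance of the paper and is not reproduced here, so it enters only as an input, and all parameter values are illustrative:

```python
import math

def yukawa_kT(r, z_eff, kappa, radius, bjerrum):
    """Screened-Coulomb (DLVO/Yukawa) pair energy in kT between two macroions
    of the given radius carrying effective charge z_eff, at centre-centre
    distance r. All lengths share one unit (e.g. nm); kappa is the inverse
    screening length in that unit."""
    geometry = (math.exp(kappa * radius) / (1.0 + kappa * radius)) ** 2
    return bjerrum * z_eff ** 2 * geometry * math.exp(-kappa * r) / r

# Repulsion decays monotonically with distance (illustrative parameters):
u = [yukawa_kT(r, z_eff=100.0, kappa=0.1, radius=10.0, bjerrum=0.714)
     for r in (25.0, 50.0, 100.0)]
```

Replacing the bare charge by a concentration-dependent effective charge in this expression is what lets a cheap pair potential reproduce the correlations and pressure of the full macroion-plus-microion simulation.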

  4. Modeling a CO2 mineralization experiment of fractured peridotite from the Semail ophiolite/ Oman

    NASA Astrophysics Data System (ADS)

    Muller, Nadja; Zhang, Guoxiang; van Noort, Reinier; Spiers, Chris; Ten Grotenhuis, Saskia; Hoedeman, Gerco

    2010-05-01

Most geologic CO2 sequestration technologies focus on sedimentary rocks, where the carbon dioxide is stored in a fluid phase. A possible alternative is to trap it as a mineral in the subsurface (in situ) in basaltic or even (ultra)mafic rocks. Carbon dioxide in aqueous solution reacts with Mg-, Ca-, and Fe-bearing silicate minerals, precipitates as (Mg,Ca,Fe)CO3 (carbonate), and can thus be permanently sequestered. The cation donors are silicate minerals such as olivine and pyroxene, which are abundant in (ultra)mafic rocks such as peridotite. Investigations are underway to evaluate the sequestration potential of the Semail Ophiolite in Oman, utilizing the large volumes of partially serpentinized peridotite that are present. Key factors are the rate of mineralization due to dissolution of the peridotite and precipitation of carbonate, the extent of the natural and hydraulic fracture network, and the accessibility of the rock to reactive fluids. To quantify the influence of dissolution rates on the overall CO2 mineralization process, small fractured peridotite samples were exposed to supercritical CO2 and water in laboratory experiments. The samples were cored from a large rock sample as small cylinders, 1 cm in height and diameter, with a mass of ~2 g. Several experimental conditions were tested with different equipment, from a large-volume autoclave to a small-volume cold-seal vessel. The 650 ml autoclave contained 400-500 g of water and a sample under 10 MPa of partial CO2 pressure up to 150. The small capsules in the cold-seal vessel held 1-1.5 g of water and the sample under CO2 partial pressures from 15 MPa to 70 MPa and temperatures from 60 to 200°C. The samples remained for two weeks in the reaction vessels. In addition, bench acid-bath experiments in 150 ml vials were performed open to the atmosphere at 50-80°C and a pH of ~3.
The main observation was that the peridotite dissolved two orders of magnitude more slowly in the high-pressure, high-temperature cell of the cold-seal vessel than in comparative experiments in large-volume autoclaves and bench acid-bath vials under lower and atmospheric pressure conditions. We attributed this observation to the limited water availability in the cold-seal vessel, limiting the aqueous reactions of bicarbonate formation and magnesite precipitation. To test this hypothesis, one of the cold-seal vessel experiments at 20 MPa and 100°C was simulated with a reactive transport model, using TOUGHREACT. To simulate the actual experimental conditions, the model used a grid at the mm and 100s-of-μm scale and a fractured peridotite medium with serpentine filling the fractures. The simulation produced dissolution comparable to the experiment and showed an effective shutdown of the bicarbonation reaction within one day after the start of the experiment. If the conditions of limited water supply seen in our experiments are applicable in a field setting, we expect that dissolution may be limited by buffering of the pH and shutdown of bicarbonate formation. Under field conditions water and CO2 will only flow in hydraulically induced fractures and the natural fracture network that is filled with serpentine and some carbonate. The simulation results and their potential implications for field application will require further experimental investigation in the lab or field in the future.

  5. Ultrafast, sensitive and large-volume on-chip real-time PCR for the molecular diagnosis of bacterial and viral infections.

    PubMed

    Houssin, Timothée; Cramer, Jérémy; Grojsman, Rébecca; Bellahsene, Lyes; Colas, Guillaume; Moulet, Hélène; Minnella, Walter; Pannetier, Christophe; Leberre, Maël; Plecis, Adrien; Chen, Yong

    2016-04-21

To control future infectious disease outbreaks, like the 2014 Ebola epidemic, it is necessary to develop ultrafast molecular assays enabling rapid and sensitive diagnoses. To that end, several ultrafast real-time PCR systems have been previously developed, but they present issues that hinder their wide adoption, notably regarding their sensitivity and detection volume. An ultrafast, sensitive and large-volume real-time PCR system based on microfluidic thermalization is presented herein. The method is based on the circulation of pre-heated liquids in a microfluidic chip that thermalize the PCR chamber by diffusion and ultrafast flow switches. The system can achieve up to 30 real-time PCR cycles in around 2 minutes, which makes it, to the best of our knowledge, the fastest PCR thermalization system for regular sample volumes. After biochemical optimization, anthrax- and Ebola-simulating agents could be detected by real-time PCR in 7 minutes and by reverse-transcription real-time PCR in 7.5 minutes, respectively. These detections are respectively 6.4 and 7.2 times faster than with an off-the-shelf apparatus, while conserving real-time PCR sample volume, efficiency, selectivity and sensitivity. The high-speed thermalization also enabled us to perform sharp melting-curve analyses in only 20 s and to discriminate amplicons of different lengths by rapid real-time PCR. This real-time PCR microfluidic thermalization system is cost-effective and versatile, and can be further developed for point-of-care, multiplexed, ultrafast and highly sensitive molecular diagnoses of bacterial and viral diseases.

  6. Orthogonal recursive bisection data decomposition for high performance computing in cardiac model simulations: dependence on anatomical geometry.

    PubMed

    Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Keller, David U J; Seemann, Gunnar; Dossel, Olaf; Pitman, Michael C; Rice, John J

    2009-01-01

The orthogonal recursive bisection (ORB) algorithm can be used as a data decomposition strategy to distribute the large data set of a cardiac model across a distributed-memory supercomputer. It has been shown previously that good scaling results can be achieved using the ORB algorithm for data decomposition. However, the ORB algorithm depends on the distribution of the computational load of each element in the data set. In this work we investigated the dependence of data decomposition and load balancing on different rotations of the anatomical data set in order to optimize load balancing. The anatomical data set was given by both ventricles of the Visible Female data set at a 0.2 mm resolution. Fiber orientation was included. The data set was rotated by 90 degrees around the x, y and z axes, respectively. By either translating or simply taking the magnitude of the resulting negative coordinates, we were able to create 14 data sets of the same anatomy with different orientations and positions in the overall volume. Computational load ratios for non-tissue vs. tissue elements used in the data decomposition were 1:1, 1:2, 1:5, 1:10, 1:25, 1:38.85, 1:50 and 1:100, to investigate the effect of different load ratios on the data decomposition. The ten Tusscher et al. (2004) electrophysiological cell model was used in monodomain simulations of 1 ms simulation time to compare performance using the different data sets and orientations. The simulations were carried out for load ratios 1:10, 1:25 and 1:38.85 on a 512-processor partition of the IBM Blue Gene/L supercomputer. The results show that the data decomposition does depend on the orientation and position of the anatomy in the global volume. The difference in total run time between the data sets is 10 s for a simulation time of 1 ms. This yields a difference of about 28 h for a simulation of 10 s simulation time. However, given larger processor partitions, the difference in run time decreases and becomes less significant.
Depending on the processor partition size, future work will have to consider the orientation of the anatomy in the global volume for longer simulation runs.
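The decomposition strategy itself can be sketched in a few lines: split recursively along the longest axis at the weighted median, so each half carries roughly equal computational load. This toy version, using a 1:10 light-vs-heavy element weighting in the spirit of the study's load ratios, is our illustration, not the implementation used on Blue Gene/L:

```python
import numpy as np

def orb(points, weights, depth):
    """Orthogonal recursive bisection: return 2**depth index arrays, each
    grouping points of roughly equal total weight."""
    def split(ids, d):
        if d == 0:
            return [ids]
        box = points[ids]
        axis = int(np.argmax(box.max(axis=0) - box.min(axis=0)))  # longest axis
        order = ids[np.argsort(box[:, axis])]
        cum = np.cumsum(weights[order])
        cut = int(np.searchsorted(cum, cum[-1] / 2.0)) + 1        # weighted median
        return split(order[:cut], d - 1) + split(order[cut:], d - 1)
    return split(np.arange(len(points)), depth)

rng = np.random.default_rng(1)
pts = rng.random((1000, 3))
w = np.where(pts[:, 0] > 0.5, 10.0, 1.0)   # heavy "tissue" vs light "non-tissue"
parts = orb(pts, w, depth=3)               # 8 subdomains
loads = [w[p].sum() for p in parts]
```

Because the cuts follow the weighted median, rotating the point cloud changes where the axis-aligned cuts fall, which is exactly why the study found run time depends on the anatomy's orientation in the global volume.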

  7. Numerical investigations on flow dynamics of prismatic granular materials using the discrete element method

    NASA Astrophysics Data System (ADS)

    Hancock, W.; Weatherley, D.; Wruck, B.; Chitombo, G. P.

    2012-04-01

The flow dynamics of granular materials is of broad interest in both the geosciences (e.g. landslides, fault zone evolution, and breccia pipe formation) and many engineering disciplines (e.g. chemical engineering, food sciences, pharmaceuticals and materials science). At the interface between natural and human-induced granular media flow, current underground mass-mining methods are trending towards the induced failure and subsequent gravitational flow of large volumes of broken rock, a method known as cave mining. Cave mining relies upon the undercutting of a large ore body, inducement of fragmentation of the rock and subsequent extraction of ore from below, via hopper-like outlets. Design of such mines currently relies upon a simplified kinematic theory of granular flow in hoppers, known as the ellipsoid theory of mass movement. This theory assumes that the zone of moving material grows as an ellipsoid above the outlet of the silo. The boundary of the movement zone is a shear band, and internal to the movement zone the granular material is assumed to have a uniformly high bulk porosity compared with surrounding stagnant regions. There is, however, increasing anecdotal evidence and field measurements suggesting this theory fails to capture the full complexity of granular material flow within cave mines. Given the practical challenges obstructing direct measurement of movement both in laboratory experiments and in situ, the Discrete Element Method (DEM [1]) is a popular alternative for investigating granular media flow. Small-scale DEM studies (cf. [3] and references therein) have confirmed that movement within DEM silo flow models matches that predicted by ellipsoid theory, at least for mono-disperse granular material freely outflowing at a constant rate. A major drawback of these small-scale DEM studies is that the initial bulk porosity of the simulated granular material is significantly higher than that of broken, prismatic rock.
In this investigation, more realistic granular material geometries are simulated using the ESyS-Particle [2] DEM simulation software on cluster supercomputers. Individual grains of the granular material are represented as convex polyhedra. Initially the polyhedra are packed in a low-bulk-porosity configuration prior to commencing silo flow simulations. The resultant flow dynamics are markedly different from those predicted by ellipsoid theory. Initially, shearing occurs around the silo outlet; however, shear localization in a particular direction rapidly dominates the other directions, causing preferential movement in that direction. Within the shear band itself the granular material becomes highly dilated, but elsewhere the bulk porosity remains low. The low porosity within these regions promotes entrainment, whereby large volumes of granular material interlock and begin to rotate and translate as a single rigid body. In some cases, entrainment may result in complete overturning of a large volume of material. The consequences of preferential shear localization and, in particular, entrainment for granular media flow in cave mines and natural settings (such as breccia pipes) is a topic of ongoing research to be presented at the meeting.

  8. Large Scale Geologic Controls on Hydraulic Stimulation

    NASA Astrophysics Data System (ADS)

    McLennan, J. D.; Bhide, R.

    2014-12-01

    When simulating hydraulic fracturing, the analyst has historically prescribed a single planar fracture. Originally (in the 1950s through the 1970s) this was necessitated by computational restrictions. In the latter part of the twentieth century, hydraulic fracture simulation evolved to incorporate vertical propagation controlled by modulus, fluid loss, and the minimum principal stress. With improvements in software, computational capacity, and recognition that in-situ discontinuities are relevant, fully three-dimensional hydraulic simulation is now becoming possible. Advances in simulation capabilities enable coupling structural geologic data (three-dimensional representations of stresses, natural fractures, and stratigraphy) with decision-making processes for stimulation: volumes, rates, fluid types, completion zones. Without this interaction between simulation capabilities and geological information, exploitation of low-permeability formations may linger on the fringes of real economic viability. Comparative simulations have been undertaken in varying structural environments where the stress contrast and the frequency of natural discontinuities cause varying patterns of multiple, hydraulically generated or reactivated flow paths. Stress conditions and the nature of the discontinuities are selected as variables and are used to simulate how fracturing can vary in different structural regimes. The basis of the simulations is commercial distinct element software (Itasca Corporation's 3DEC).

  9. Imaging cerebral haemorrhage with magnetic induction tomography: numerical modelling.

    PubMed

    Zolgharni, M; Ledger, P D; Armitage, D W; Holder, D S; Griffiths, H

    2009-06-01

    Magnetic induction tomography (MIT) is a new electromagnetic imaging modality which has the potential to image changes in the electrical conductivity of the brain due to different pathologies. In this study the feasibility of detecting haemorrhagic cerebral stroke with a 16-channel MIT system operating at 10 MHz was investigated. The finite-element method, combined with a realistic, multi-layer head model comprising 12 different tissues, was used for the simulations in the commercial FE package Comsol Multiphysics. The eddy-current problem was solved and the MIT signals computed for strokes of different volumes occurring at different locations in the brain. The results revealed that a large, peripheral stroke (volume 49 cm³) produced phase changes that would be detectable with our currently achievable instrumentation phase noise level (17 m°) in 70 (27%) of the 256 exciter/sensor channel combinations. However, reconstructed images showed that a lower noise level than this, of 1 m°, was necessary to obtain good visualization of the strokes. The simulated MIT measurements were compared with those from an independent transmission-line-matrix model in order to give confidence in the results.
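The detectability criterion described (a channel's phase change exceeding the instrumentation noise floor) amounts to a simple count over exciter/sensor combinations. A minimal sketch, where the list of phase changes and the threshold are hypothetical inputs rather than the study's data:

```python
def detectable_channels(phase_changes_mdeg, noise_mdeg=17.0):
    """Count channel combinations whose stroke-induced phase change
    (in millidegrees) exceeds the phase-noise floor; return the count
    and the detectable fraction."""
    hits = sum(1 for p in phase_changes_mdeg if abs(p) > noise_mdeg)
    return hits, hits / len(phase_changes_mdeg)
```

Applied to the study's full 256-combination set, this count yielded 70 detectable channels (27%) at the 17 m° noise level.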

  10. Numerical Modeling of Gas and Water Flow in Shale Gas Formations with a Focus on the Fate of Hydraulic Fracturing Fluid.

    PubMed

    Edwards, Ryan W J; Doster, Florian; Celia, Michael A; Bandilla, Karl W

    2017-12-05

    Hydraulic fracturing in shale gas formations involves the injection of large volumes of aqueous fluid deep underground. Only a small proportion of the injected water volume is typically recovered, raising concerns that the remaining water may migrate upward and potentially contaminate groundwater aquifers. We implement a numerical model of two-phase water and gas flow in a shale gas formation to test the hypothesis that the remaining water is imbibed into the shale rock by capillary forces and retained there indefinitely. The model includes the essential physics of the system and uses the simplest justifiable geometrical structure. We apply the model to simulate wells from a specific well pad in the Horn River Basin, British Columbia, where there is sufficient available data to build and test the model. Our simulations match the water and gas production data from the wells remarkably closely and show that all the injected water can be accounted for within the shale system, with most imbibed into the shale rock matrix and retained there for the long term.

  11. The Numerical Simulation of the Shock Wave of Coal Gas Explosions in Gas Pipe*

    NASA Astrophysics Data System (ADS)

    Chen, Zhenxing; Hou, Kepeng; Chen, Longwei

    2018-03-01

    For problems involving large deformations and vortices, Eulerian and Lagrangian methods each have advantages and disadvantages. In this paper we adopt a diffuse-interface method (volume of fluid). The gas satisfies the conservation equations of mass, momentum, and energy. Based on explosion theory and three-dimensional fluid dynamics, using the unsteady, compressible, inviscid hydrodynamic equations and equations of state, this paper accounts for the effect of the pressure gradient on velocity, mass, and energy in the Lagrangian steps via the finite difference method. To minimize transport errors of material, energy, and volume on the finite difference mesh, material transport is also considered in the Eulerian steps. Programmed in Fortran PowerStation 4.0 and visualized with independently developed software, the numerical simulation of a gas explosion in a specific pipeline structure is carried out: the pressure changes at key points in the flow field are monitored, and the propagation of the gas-explosion shock wave in the pipeline is reproduced, from initial development through flame acceleration of the shock wave. This offers a useful reference for the investigation of coal gas explosion accidents and for safety precautions.

  12. Local Multi-Channel RF Surface Coil versus Body RF Coil Transmission for Cardiac Magnetic Resonance at 3 Tesla: Which Configuration Is Winning the Game?

    PubMed Central

    Winter, Lukas; Dieringer, Matthias A.; Els, Antje; Oezerdem, Celal; Rieger, Jan; Kuehne, Andre; Cassara, Antonino M.; Pfeiffer, Harald; Wetterling, Friedrich; Niendorf, Thoralf

    2016-01-01

    Introduction: The purpose of this study was to demonstrate the feasibility and efficiency of cardiac MR at 3 Tesla using local four-channel RF coil transmission and benchmark it against large volume body RF coil excitation. Methods: Electromagnetic field simulations are conducted to detail RF power deposition, transmission field uniformity and efficiency for local and body RF coil transmission. For both excitation regimes transmission field maps are acquired in a human torso phantom. For each transmission regime flip angle distributions and blood-myocardium contrast are examined in a volunteer study of 12 subjects. The feasibility of the local transceiver RF coil array for cardiac chamber quantification at 3 Tesla is demonstrated. Results: Our simulations and experiments demonstrate that cardiac MR at 3 Tesla using four-channel surface RF coil transmission is competitive versus current clinical CMR practice of large volume body RF coil transmission. The efficiency advantage of the 4TX/4RX setup facilitates shorter repetition times governed by local SAR limits versus body RF coil transmission at the whole-body SAR limit. No statistically significant difference was found for cardiac chamber quantification derived with body RF coil versus four-channel surface RF coil transmission. Our simulations also show that the body RF coil exceeds local SAR limits by a factor of ~2 when driven at the maximum applicable input power to reach the whole-body SAR limit. Conclusion: Pursuing local surface RF coil arrays for transmission in cardiac MR is a conceptually appealing alternative to body RF coil transmission, especially for patients with implants. PMID:27598923

  13. The effect of modified Blalock-Taussig shunt size and coarctation severity on coronary perfusion after the Norwood operation.

    PubMed

    Corsini, Chiara; Biglino, Giovanni; Schievano, Silvia; Hsia, Tain-Yen; Migliavacca, Francesco; Pennati, Giancarlo; Taylor, Andrew M

    2014-08-01

    The size of the modified Blalock-Taussig shunt and the additional presence of aortic coarctation can affect the hemodynamics of the Norwood physiology. Multiscale modeling was used to gather insight into the effects of these variables, in particular on coronary perfusion. A model was reconstructed from cardiac magnetic resonance imaging data of a representative patient, and then simplified with computer-aided design software. Changes were systematically imposed on the semi-idealized three-dimensional model, resulting in a family of nine models (3-, 3.5-, and 4-mm shunt diameter; 0%, 60%, and 90% coarctation severity). Each model was coupled to a lumped parameter network representing the remainder of the circulation to run multiscale simulations. Simulations were repeated including the effect of preserved cerebral perfusion. The concomitant presence of a large shunt and tight coarctation was detrimental in terms of coronary perfusion (13.4% maximal reduction, 1.07 versus 0.927 mL/s) and oxygen delivery (29% maximum reduction, 422 versus 300 mL·min⁻¹·m⁻²). A variation in the ratio of pulmonary to systemic blood flow from 0.9 to 1.6 also indicated a "stealing" phenomenon to the detriment of the coronary circulation. A difference could be further appreciated in the computational ventricular pressure-volume loops, with augmented systolic pressures and decreased stroke volumes for tighter coarctation. Accounting for constant cerebral perfusion did not produce substantially different results. Multiscale simulations performed in a parametric fashion revealed a reduction in coronary perfusion in the presence of a large modified Blalock-Taussig shunt and severe coarctation in Norwood patients. Copyright © 2014 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
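The quoted reductions are straightforward relative changes; a one-line sketch reproducing the abstract's two headline figures from the flow values it reports:

```python
def percent_reduction(baseline, value):
    """Relative reduction of `value` with respect to `baseline`, in percent."""
    return 100.0 * (baseline - value) / baseline

# Coronary flow drop quoted in the abstract: 1.07 -> 0.927 mL/s  (~13.4%)
flow_drop = percent_reduction(1.07, 0.927)
# Oxygen delivery drop: 422 -> 300 mL·min⁻¹·m⁻²  (~29%)
o2_drop = percent_reduction(422.0, 300.0)
```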

  14. Bioremediation of trace cobalt from simulated spent decontamination solutions of nuclear power reactors using E. coli expressing NiCoT genes.

    PubMed

    Raghu, G; Balaji, V; Venkateswaran, G; Rodrigue, A; Maruthi Mohan, P

    2008-12-01

    Removal of radioactive cobalt at trace levels (approximately nM) in the presence of a large excess (10⁶-fold) of corrosion product ions of complexed Fe, Cr, and Ni in spent chemical decontamination formulations (simulated effluent) of nuclear reactors is currently done by using synthetic organic ion exchangers. A large volume of solid waste is generated due to the nonspecific nature of ion sorption. Our earlier work using various fungi and bacteria, with the aim of nuclear waste volume reduction, realized up to 30% Co removal with specific capacities of up to 1 µg/g in 6-24 h. In the present study, using engineered Escherichia coli expressing NiCoT genes from Rhodopseudomonas palustris CGA009 (RP) and Novosphingobium aromaticivorans F-199 (NA), we report a significant increase in the specific capacity for Co removal (12 µg/g) in a 1-h exposure to simulated effluent. About 85% Co removal was achieved in a two-cycle treatment with the cloned bacteria. Expression of NiCoT genes in the E. coli knockout mutant of the NiCoT efflux gene (rcnA) was more efficient as compared to expression in wild-type E. coli MC4100, JM109 and BL21 (DE3) hosts. The viability of the E. coli strains in the formulation, as well as at different doses of gamma-ray exposure, and the effect of gamma dose on their cobalt removal capacity were determined. The potential application scheme of the above process of bioremediation of cobalt from nuclear power reactor chemical decontamination effluents is discussed.

  15. Piscivorous fish exhibit temperature-influenced binge feeding during an annual prey pulse.

    PubMed

    Furey, Nathan B; Hinch, Scott G; Mesa, Matthew G; Beauchamp, David A

    2016-09-01

    Understanding the limits of consumption is important for determining trophic influences on ecosystems and predator adaptations to inconsistent prey availability. Fishes have been observed to consume beyond what is sustainable (i.e. digested on a daily basis), but this phenomenon of hyperphagia (or binge-feeding) is largely overlooked. We expect hyperphagia to be a short-term (1-day) event that is facilitated by gut volume providing capacity to store consumed food during periods of high prey availability to be later digested. We define how temperature, body size and food availability influence the degree of binge-feeding by comparing field observations with laboratory experiments of bull trout (Salvelinus confluentus), a large freshwater piscivore that experiences highly variable prey pulses. We also simulated bull trout consumption and growth during salmon smolt outmigrations under two scenarios: 1) daily consumption being dependent upon bioenergetically sustainable rates and 2) daily consumption being dependent upon available gut volume (i.e. consumption is equal to gut volume when empty and otherwise 'topping off' based on sustainable digestion rates). One-day consumption by laboratory-held bull trout during the first day of feeding experiments after fasting exceeded bioenergetically sustainable rates by 12- to 87-fold at low temperatures (3 °C) and by ~1.3-fold at 20 °C. The degree of binge-feeding by bull trout in the field was slightly reduced but largely in agreement with laboratory estimates, especially when prey availability was extremely high [during a sockeye salmon (Oncorhynchus nerka) smolt outmigration and at a counting fence where smolts are funnelled into high densities]. Consumption by bull trout in other settings was lower and more variable, but still regularly hyperphagic.
Simulations demonstrated that the ability to binge-feed increased cumulative consumption (16-32%) and cumulative growth (19-110%) relative to feeding only at bioenergetically sustainable rates during the ~1-month smolt outmigration period. Our results indicate that the ability of predators to maximize short-term consumption when prey are available can be extreme and is limited primarily by gut volume, then mediated by temperature; thus, predator-prey relationships may be more dependent upon prey availability than traditional bioenergetic models suggest. Binge-feeding has important implications for the energy budgets of consumers as well as for acute predation impacts on prey. © 2016 The Authors. Journal of Animal Ecology © 2016 British Ecological Society.
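Scenario 2 above (intake limited by free gut capacity, with 'topping off' at the sustainable digestion rate) can be sketched as a simple daily loop. This is an illustrative reconstruction, not the authors' bioenergetics model; all parameter values below are hypothetical:

```python
def simulate_consumption(days, gut_volume, digestion_rate, prey_available):
    """Daily intake limited by free gut capacity ('topping off'):
    each day the fish digests at the sustainable rate, then fills
    whatever gut volume is free, up to the prey available that day.
    All quantities share one (hypothetical) mass unit."""
    gut = 0.0
    total = 0.0
    for prey in prey_available[:days]:
        gut = max(0.0, gut - digestion_rate)   # digestion since yesterday
        meal = min(prey, gut_volume - gut)     # binge up to free capacity
        gut += meal
        total += meal
    return total
```

With abundant prey, a gut volume of 10 and a sustainable rate of 2 per day, three days yield 14 units eaten (10 on the binge day, then 2/day of 'topping off'), versus 6 units at the sustainable rate alone.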

  16. Piscivorous fish exhibit temperature-influenced binge feeding during an annual prey pulse

    USGS Publications Warehouse

    Furey, Nathan B.; Hinch, Scott G.; Mesa, Matthew G.; Beauchamp, David A.

    2016-01-01

    Understanding the limits of consumption is important for determining trophic influences on ecosystems and predator adaptations to inconsistent prey availability. Fishes have been observed to consume beyond what is sustainable (i.e. digested on a daily basis), but this phenomenon of hyperphagia (or binge-feeding) is largely overlooked. We expect hyperphagia to be a short-term (1-day) event that is facilitated by gut volume providing capacity to store consumed food during periods of high prey availability to be later digested. We define how temperature, body size and food availability influence the degree of binge-feeding by comparing field observations with laboratory experiments of bull trout (Salvelinus confluentus), a large freshwater piscivore that experiences highly variable prey pulses. We also simulated bull trout consumption and growth during salmon smolt outmigrations under two scenarios: 1) daily consumption being dependent upon bioenergetically sustainable rates and 2) daily consumption being dependent upon available gut volume (i.e. consumption is equal to gut volume when empty and otherwise 'topping off' based on sustainable digestion rates). One-day consumption by laboratory-held bull trout during the first day of feeding experiments after fasting exceeded bioenergetically sustainable rates by 12- to 87-fold at low temperatures (3 °C) and by ~1.3-fold at 20 °C. The degree of binge-feeding by bull trout in the field was slightly reduced but largely in agreement with laboratory estimates, especially when prey availability was extremely high [during a sockeye salmon (Oncorhynchus nerka) smolt outmigration and at a counting fence where smolts are funnelled into high densities].
Consumption by bull trout in other settings was lower and more variable, but still regularly hyperphagic. Simulations demonstrated that the ability to binge-feed increased cumulative consumption (16–32%) and cumulative growth (19–110%) relative to feeding only at bioenergetically sustainable rates during the ~1-month smolt outmigration period. Our results indicate that the ability of predators to maximize short-term consumption when prey are available can be extreme and is limited primarily by gut volume, then mediated by temperature; thus, predator–prey relationships may be more dependent upon prey availability than traditional bioenergetic models suggest. Binge-feeding has important implications for the energy budgets of consumers as well as for acute predation impacts on prey.

  17. Matter power spectrum and the challenge of percent accuracy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneider, Aurel; Teyssier, Romain; Potter, Doug

    2016-04-01

    Future galaxy surveys require one percent precision in the theoretical knowledge of the power spectrum over a large range including very nonlinear scales. While this level of accuracy is easily obtained in the linear regime with perturbation theory, it represents a serious challenge for small scales where numerical simulations are required. In this paper we quantify the precision of present-day N-body methods, identifying the main potential error sources from the set-up of initial conditions to the measurement of the final power spectrum. We directly compare three widely used N-body codes, Ramses, Pkdgrav3, and Gadget3, which represent three main discretisation techniques: the particle-mesh method, the tree method, and a hybrid combination of the two. For standard run parameters, the codes agree to within one percent at k ≤ 1 h Mpc⁻¹ and to within three percent at k ≤ 10 h Mpc⁻¹. We also consider the bispectrum and show that the reduced bispectra agree at the sub-percent level for k ≤ 2 h Mpc⁻¹. In a second step, we quantify potential errors due to initial conditions, box size, and resolution using an extended suite of simulations performed with our fastest code, Pkdgrav3. We demonstrate that the simulation box size should not be smaller than L = 0.5 h⁻¹ Gpc to avoid systematic finite-volume effects (while much larger boxes are required to beat down the statistical sample variance). Furthermore, a maximum particle mass of M_p = 10⁹ h⁻¹ M⊙ is required to conservatively obtain one percent precision of the matter power spectrum. As a consequence, numerical simulations covering large survey volumes of upcoming missions such as DES, LSST, and Euclid will need more than a trillion particles to reproduce clustering properties at the targeted accuracy.
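The trillion-particle claim follows from sampling the mean matter density of a survey-sized box at the stated mass resolution. A sketch of the arithmetic, assuming Ωm = 0.3 and a survey-scale box of 4 h⁻¹ Gpc (both values are assumptions for illustration, not taken from the paper):

```python
RHO_CRIT = 2.775e11  # critical density in (h^-1 Msun) per (h^-1 Mpc)^3

def n_particles(box_mpc_h, particle_mass_msun_h, omega_m=0.3):
    """Particles needed to sample the mean matter density of a periodic
    box of side `box_mpc_h` (h^-1 Mpc) at mass resolution
    `particle_mass_msun_h` (h^-1 Msun)."""
    return omega_m * RHO_CRIT * box_mpc_h ** 3 / particle_mass_msun_h

# A 4 h^-1 Gpc box at the paper's M_p = 1e9 h^-1 Msun resolution limit
# comes out at several trillion particles.
n = n_particles(4000.0, 1e9)
```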

  18. Estimating the Dead Space Volume Between a Headform and N95 Filtering Facepiece Respirator Using Microsoft Kinect.

    PubMed

    Xu, Ming; Lei, Zhipeng; Yang, James

    2015-01-01

    N95 filtering facepiece respirator (FFR) dead space is an important factor for respirator design. The dead space refers to the cavity between the internal surface of the FFR and the wearer's facial surface. This article presents a novel method to estimate the dead space volume of FFRs, together with experimental validation. In this study, six FFRs and five headforms (small, medium, large, long/narrow, and short/wide) are used for various FFR and headform combinations. Microsoft Kinect Sensors (Microsoft Corporation, Redmond, WA) are used to scan the headforms without respirators and then scan the headforms with the FFRs donned. The FFR dead space is formed through geometric modeling software, and finally the volume is obtained through LS-DYNA (Livermore Software Technology Corporation, Livermore, CA). In the experimental validation, water is used to measure the dead space. The simulation and experimental dead space volumes are 107.5-167.5 mL and 98.4-165.7 mL, respectively. Linear regression analysis is conducted to correlate the results from the Kinect and water methods, yielding R² = 0.85.
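The reported correlation between Kinect-derived and water-displacement volumes is an ordinary least-squares R². A self-contained sketch of that statistic; the data passed in would be the paired volume measurements (the values in the test below are synthetic, not the study's):

```python
def r_squared(x, y):
    """Coefficient of determination of the least-squares line y ≈ a*x + b."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx                    # slope
    b = my - a * mx                  # intercept
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot
```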

  19. Three-dimensional long-period ground-motion simulations in the upper Mississippi embayment

    USGS Publications Warehouse

    Macpherson, K.A.; Woolery, E.W.; Wang, Z.; Liu, P.

    2010-01-01

    We employed a 3D velocity model and 3D wave propagation code to simulate long-period ground motions in the upper Mississippi embayment. This region is at risk from large earthquakes in the New Madrid seismic zone (NMSZ) and observational data are sparse, making simulation a valuable tool for predicting the effects of large events. We undertook these simulations to estimate the magnitude of shaking likely to occur and to investigate the influence of the 3D embayment structure and finite-fault mechanics on ground motions. There exist three primary fault zones in the NMSZ, each of which was likely associated with one of the main shocks of the 1811-12 earthquake triplet. For this study, three simulations have been conducted on each major segment, exploring the impact of different epicentral locations and rupture directions on ground motions. The full wave field up to a frequency of 0.5 Hz is computed on a 200 × 200 × 50 km volume using a staggered-grid finite-difference code. Peak horizontal velocity and bracketed durations were calculated at the free surface. The NMSZ simulations indicate that for the considered bandwidth, finite-fault mechanics such as fault proximity, directivity effect, and slip distribution exert the most control on ground motions. The 3D geologic structure of the upper Mississippi embayment also influences ground motion, with indications that amplification is induced by the sharp velocity contrast at the basin edge.
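The cost of such a staggered-grid computation is set by the spacing needed to resolve the shortest wavelength, dx ≈ v_min / (f_max × points-per-wavelength). A back-of-the-envelope sketch, assuming a hypothetical minimum S-wave velocity of 600 m/s and 5 points per wavelength (neither value is given in the abstract):

```python
def grid_points(domain_km, f_max_hz, v_min_ms, pts_per_wavelength=5):
    """Total finite-difference nodes for a rectangular volume, using the
    rule-of-thumb spacing dx = v_min / (f_max * pts_per_wavelength)."""
    dx_km = v_min_ms / (f_max_hz * pts_per_wavelength) / 1000.0
    counts = [int(d / dx_km) + 1 for d in domain_km]
    total = 1
    for n in counts:
        total *= n
    return total

# The abstract's 200 x 200 x 50 km volume at 0.5 Hz, with the assumed
# 600 m/s minimum velocity, needs on the order of 1e8 nodes.
n_nodes = grid_points((200.0, 200.0, 50.0), 0.5, 600.0)
```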

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daily, Michael D.; Olsen, Brett N.; Schlesinger, Paul H.

    In mammalian cells cholesterol is essential for membrane function, but in excess it can be cytotoxic. The cellular response to acute cholesterol loading involves biophysically based mechanisms that regulate cholesterol levels through modulation of the "activity", or accessibility, of cholesterol to extra-membrane acceptors. Experiments and united atom (UA) simulations show that at high concentrations of cholesterol, lipid bilayers thin significantly and cholesterol availability to external acceptors increases substantially. Such cholesterol activation is critical to its trafficking within cells. Here we aim to reduce the computational cost to enable simulation of large and complex systems involved in cholesterol regulation, such as those including oxysterols and cholesterol-sensing proteins. To accomplish this, we have modified the published MARTINI coarse-grained force field to improve its predictions of cholesterol-induced changes in both macroscopic and microscopic properties of membranes. Most notably, MARTINI fails to capture both the (macroscopic) area condensation and membrane thickening seen at less than 30% cholesterol and the thinning seen above 40% cholesterol. The thinning at high concentration is critical to cholesterol activation. Microscopic properties of interest include cholesterol-cholesterol radial distribution functions (RDFs), tilt angle, and accessible surface area. First, we develop an "angle-corrected" model wherein we modify the coarse-grained bond angle potentials based on atomistic simulations. This modification significantly improves prediction of macroscopic properties, most notably the thickening/thinning behavior, and also slightly improves microscopic property prediction relative to MARTINI. Second, we add to the angle correction a "volume correction" by also adjusting phospholipid bond lengths to achieve a more accurate volume per molecule.
The angle + volume correction substantially further improves the quantitative agreement of the macroscopic properties (area per molecule and thickness) with united atom simulations. However, this improvement also reduces the accuracy of microscopic predictions such as radial distribution functions and cholesterol tilt below that of either MARTINI or the angle-corrected model. Thus, while both of our force field corrections improve MARTINI, the combined angle and volume correction should be used for problems involving sterol effects on the overall structure of the membrane, while our angle-corrected model should be used in cases where the properties of individual lipid and sterol models are critically important.

  1. Experimental determination of the PTW 60019 microDiamond dosimeter active area and volume

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marinelli, Marco, E-mail: marco.marinelli@uniroma2

    Purpose: Small field output correction factors have been studied by several research groups for the PTW 60019 microDiamond (MD) dosimeter, by comparing the response of such a device with both reference dosimeters and Monte Carlo simulations. Generally good agreement is observed for field sizes down to about 1 cm. However, evident inconsistencies can be noticed when comparing some experimental results and Monte Carlo simulations obtained for smaller irradiation fields. This issue was tentatively attributed by some authors to unintentional large variations of the MD active surface area. The aim of the present study is a nondestructive experimental determination of the MD active surface area and active volume. Methods: Ten MD dosimeters, one MD prototype, and three synthetic diamond samples were investigated in the present work. 2D maps of the MD response were recorded under scanned soft x-ray microbeam irradiation, leading to an experimental determination of the device active surface area. Profiles of the device responses were measured as well. In order to evaluate the MD active volume, the thickness of the diamond sensing layer was independently evaluated by capacitance measurements and alpha particle detection experiments. The MD sensitivity, measured at the PTW calibration laboratory, was also used to calculate the device active volume thickness. Results: An average active surface area diameter of (2.19 ± 0.02) mm was evaluated by 2D maps and response profiles of all the MDs. Average active volume thicknesses of (1.01 ± 0.13) μm and (0.97 ± 0.14) μm were derived by capacitance and sensitivity measurements, respectively. The obtained results are well in agreement with the nominal values reported in the manufacturer's dosimeter specifications. A homogeneous response was observed over the whole device active area.
Apart from the contribution of the device active volume, no contributions from other housing components or encapsulation materials were observed in the 2D response maps. Conclusions: The obtained results demonstrate the high reproducibility of the MD fabrication process. The observed discrepancies among the output correction factors reported by several authors for MD response in very small fields are very unlikely to be ascribed to unintentional variations of the device active surface area and volume. It is the opinion of the authors that the role of volume averaging as well as of other perturbation effects should be separately investigated instead, both experimentally and by Monte Carlo simulations, in order to better clarify the behaviour of the MD response in very small fields.
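The capacitance route to the sensing-layer thickness is a parallel-plate estimate, d = ε₀ εr A / C, over the measured active-area disc. A sketch under that assumption, using the reported 2.19 mm diameter; the relative permittivity of diamond (≈5.7) and the example capacitance are assumptions for illustration, not values from the study:

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
EPS_R_DIAMOND = 5.7       # assumed relative permittivity of diamond

def thickness_from_capacitance(capacitance_f, diameter_m):
    """Parallel-plate estimate of the sensing-layer thickness:
    d = eps0 * eps_r * A / C, with A the active-area disc."""
    area = math.pi * (diameter_m / 2.0) ** 2
    return EPS0 * EPS_R_DIAMOND * area / capacitance_f
```

With the 2.19 mm diameter, a capacitance around 190 pF corresponds to a thickness of roughly 1 μm, the order of the values reported above.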

  2. F-16XL Hybrid Reynolds-Averaged Navier-Stokes/Large Eddy Simulation on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Abdol-Hamid, Khaled S.; Elmiligui, Alaa

    2015-01-01

    This study continues the Cranked Arrow Wing Aerodynamics Program, International (CAWAPI) investigation with the FUN3D and USM3D flow solvers. CAWAPI was established to study the F-16XL, because it provides a unique opportunity to fuse flight test, wind tunnel test, and simulation to understand the aerodynamic features of swept wings. The high-lift performance of the cranked-arrow wing planform is critical for recent and past supersonic transport design concepts. Simulations of the low-speed, high-angle-of-attack Flight Condition 25 are compared: Detached Eddy Simulation (DES), Modified Delayed Detached Eddy Simulation (MDDES), and the Spalart-Allmaras (SA) RANS model. Isosurfaces of Q criterion show the development of coherent primary and secondary vortices on the upper surface of the wing that spiral, burst, and commingle. SA produces higher pressure peaks nearer to the leading edge of the wing than flight test measurements. Mean DES and MDDES pressures better predict the flight test measurements, especially on the outer wing section. Vortices and vortex-vortex interactions impact unsteady surface pressures. USM3D showed many sharp tones in volume point spectra near the wing apex with low broadband noise, and FUN3D showed more broadband noise with weaker tones. Spectra of the volume points near the outer wing leading edge were primarily broadband for both codes. Without unsteady flight measurements, the flight pressure environment cannot be used to validate the simulations containing tonal or broadband spectra. Mean forces and moment are very similar between FUN3D models and between USM3D models. Spectra of the unsteady forces and moment are broadband with a few sharp peaks for USM3D.
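The Q criterion used for the isosurfaces is the standard second invariant of the velocity-gradient tensor, Q = ½(‖Ω‖² − ‖S‖²), positive where rotation dominates strain. A minimal sketch for a single grid point (illustrative, not the solvers' implementation):

```python
def q_criterion(grad_u):
    """Q = 0.5 * (||Omega||^2 - ||S||^2) for a 3x3 velocity-gradient
    tensor grad_u[i][j] = du_i/dx_j; positive Q marks vortex cores."""
    s2 = o2 = 0.0
    for i in range(3):
        for j in range(3):
            s = 0.5 * (grad_u[i][j] + grad_u[j][i])   # strain-rate part
            o = 0.5 * (grad_u[i][j] - grad_u[j][i])   # rotation-rate part
            s2 += s * s
            o2 += o * o
    return 0.5 * (o2 - s2)
```

Solid-body rotation gives Q > 0, while pure shear gives Q = 0, which is why Q isosurfaces isolate the coherent vortices described above.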

  3. A virtual simulator designed for collision prevention in proton therapy.

    PubMed

    Jung, Hyunuk; Kum, Oyeon; Han, Youngyih; Park, Hee Chul; Kim, Jin Sung; Choi, Doo Ho

    2015-10-01

    In proton therapy, collisions between the patient and nozzle can potentially occur because of the large nozzle structure and efforts to minimize the air gap. Thus, software was developed to predict such collisions between the nozzle and patient using virtual treatment simulation. Three-dimensional (3D) modeling of a gantry inner-floor, nozzle, and robotic couch was performed using SolidWorks based on the manufacturer's machine data. To obtain patient body information, a 3D scanner was utilized right before CT scanning. Using the acquired images, a 3D image of the patient's body contour was reconstructed. The accuracy of the image was confirmed against the CT image of a humanoid phantom. The machine components and the virtual patient were combined in the treatment-room coordinate system, resulting in a virtual simulator. The simulator reproduced the motion of its components, such as rotation and translation of the gantry, nozzle, and couch, in real scale. A collision, if any, was examined both in static and dynamic modes. The static mode assessed collisions only at fixed positions of the machine's components, while the dynamic mode operated any time a component was in motion. A collision was identified if any voxels of two components, e.g., the nozzle and the patient or couch, overlapped when calculating volume locations. The event and collision point were visualized, and collision volumes were reported. All components were successfully assembled, and the motions were accurately controlled. The 3D shape of the phantom agreed with CT images within a deviation of 2 mm. Collision situations were simulated within minutes, and the results were displayed and reported. The developed software will be useful in improving patient safety and clinical efficiency of proton therapy.
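The voxel-overlap test described can be sketched with plain sets of integer voxel indices: two components collide when their occupied-voxel sets intersect. This is an illustrative simplification of the software's check; the function names and voxel size are hypothetical:

```python
def voxelize(points, voxel_size):
    """Map a point cloud (x, y, z triples) to a set of integer voxel indices."""
    return {tuple(int(c // voxel_size) for c in p) for p in points}

def collides(voxels_a, voxels_b):
    """Two components collide when any voxel is occupied by both."""
    return not voxels_a.isdisjoint(voxels_b)
```

Set intersection makes the per-pose check fast, which matters in the dynamic mode where the test runs continuously while a component moves.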

  4. Dimension of ring polymers in bulk studied by Monte-Carlo simulation and self-consistent theory.

    PubMed

    Suzuki, Jiro; Takano, Atsushi; Deguchi, Tetsuo; Matsushita, Yushu

    2009-10-14

    We studied equilibrium conformations of ring polymers in melt over a wide range of segment numbers N of up to 4096 with Monte-Carlo simulation and obtained the N dependence of the radius of gyration of chains, Rg. The simulation model used is the bond fluctuation model (BFM), where polymer segments bear excluded volume; however, the excluded volume effect vanishes as N → ∞, and a linear polymer can be regarded as an ideal chain. Simulation for ring polymers in melt was performed, and the ν value in the relationship Rg ∝ N^ν decreases gradually with increasing N, finally reaching the limiting value, 1/3, in the range N ≥ 1536, i.e., Rg ∝ N^(1/3). We confirmed that the simulation result is consistent with that of the self-consistent theory including the topological effect and the osmotic pressure of ring polymers. Moreover, the averaged chain conformation of ring polymers in the equilibrium state was given in the BFM. In the small-N region, the segment density of each molecule near its center of mass decreases with increasing N. In the large-N region the decrease is suppressed, and the density is found to be kept constant without showing N dependence. This means that ring polymer molecules do not segregate from other molecules even if ring polymers in melt obey the relationship ν = 1/3. The considerably smaller dimensions of ring polymers at high molecular weight are due to their inherent lack of chain ends, and hence they have less-entangled conformations.
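The exponent ν in Rg ∝ N^ν is conventionally estimated as the slope of a log-log least-squares fit. A minimal sketch (the data in the usage below are synthetic, constructed to follow the N^(1/3) limit, not the paper's results):

```python
import math

def scaling_exponent(n_values, rg_values):
    """Least-squares slope of log(Rg) versus log(N), i.e. the exponent
    nu in Rg ∝ N**nu."""
    xs = [math.log(n) for n in n_values]
    ys = [math.log(r) for r in rg_values]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```

Fitting only the large-N range, as the study does for N ≥ 1536, isolates the limiting exponent from the gradual small-N crossover.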

  5. A new plant chamber facility PLUS coupled to the atmospheric simulation chamber SAPHIR

    NASA Astrophysics Data System (ADS)

    Hohaus, T.; Kuhn, U.; Andres, S.; Kaminski, M.; Rohrer, F.; Tillmann, R.; Wahner, A.; Wegener, R.; Yu, Z.; Kiendler-Scharr, A.

    2015-11-01

    A new PLant chamber Unit for Simulation (PLUS) for use with the atmosphere simulation chamber SAPHIR (Simulation of Atmospheric PHotochemistry In a large Reaction Chamber) has been built and characterized at Forschungszentrum Jülich GmbH, Germany. The PLUS chamber is an environmentally controlled flow-through plant chamber. Inside PLUS, the natural blend of biogenic emissions of trees is mixed with synthetic air and transferred to the SAPHIR chamber, where the atmospheric chemistry and the impact of biogenic volatile organic compounds (BVOC) can be studied in detail. In PLUS all important environmental parameters (e.g., temperature, PAR, soil RH) are well controlled. The gas exchange volume of 9.32 m3, which encloses the stem and the leaves of the plants, is constructed such that gases are exposed only to FEP Teflon film and other Teflon surfaces to minimize any potential losses of BVOCs in the chamber. Solar radiation is simulated using 15 LED panels with an emission strength of up to 800 μmol m-2 s-1. Results of the initial characterization experiments are presented in detail. Background concentrations, mixing inside the gas exchange volume, and the transfer rate of volatile organic compounds (VOC) through PLUS under different humidity conditions are explored. Typical plant characteristics such as light- and temperature-dependent BVOC emissions are studied using six Quercus ilex trees and compared to previous studies. Results of an initial ozonolysis experiment of BVOC emissions from Quercus ilex at typical atmospheric concentrations inside SAPHIR are presented to demonstrate a typical experimental setup and the utility of the newly added plant chamber.

  6. How well do we know the infaunal biomass of the continental shelf?

    NASA Astrophysics Data System (ADS)

    Powell, Eric N.; Mann, Roger

    2016-03-01

    Benthic infauna comprise a wide range of taxa of varying abundances and sizes, but large infaunal taxa are infrequently recorded in community surveys of the shelf benthos. These larger, but numerically rare, species may nonetheless contribute disproportionately to biomass. We examine the degree to which standard benthic sampling gear and survey design provide an adequate estimate of the biomass of large infauna, using the Atlantic surfclam, Spisula solidissima, on the continental shelf off the northeastern coast of the United States as a test organism. We develop a numerical model that simulates standard survey designs, gear types, and sampling densities to evaluate the effectiveness of vertically-dropped sampling gear (e.g., boxcores, grabs) for estimating the density of large species. Simulations of randomly distributed clams at a density of 0.5-1 m-2 within a 0.25-km2 domain show that lower sampling densities (1-5 samples per sampling event) resulted in highly inaccurate estimates of clam density, with the presence of clams detected in less than 25% of the sampling events. In all cases in which patchiness was present in the simulated clam population, surveys were prone to very large errors (survey availability events) unless a dense (e.g., 100-sample) sampling protocol was imposed. Thus, commercial quantities of surfclams could easily go completely undetected by any standard benthic community survey protocol using vertically-dropped gear. Without recourse to modern high-volume sampling gear capable of sampling many meters in a swath, such as hydraulic dredges, biomass of the continental shelf will be grievously underestimated if large infauna are present even at moderate densities.
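
    The detection problem above can be illustrated with a simple Monte Carlo sketch: for randomly (Poisson) distributed animals, the chance that a small-footprint grab survey detects any individual at all rises only slowly with sampling effort. The grab area and survey counts below are illustrative assumptions, not the paper's model parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def detection_rate(density_m2, grab_area_m2, n_grabs, n_surveys=10000):
    """Fraction of simulated surveys that find at least one animal,
    assuming randomly (Poisson) distributed individuals."""
    counts = rng.poisson(density_m2 * grab_area_m2, size=(n_surveys, n_grabs))
    return (counts.sum(axis=1) > 0).mean()

# Hypothetical 0.1 m^2 boxcore sampling clams at 0.5 m^-2:
p_few = detection_rate(0.5, 0.1, n_grabs=3)     # sparse protocol
p_many = detection_rate(0.5, 0.1, n_grabs=100)  # dense protocol
```

    With only a few grabs the expected catch per survey is well below one clam, so most surveys record none; a dense protocol detects the population almost every time, consistent with the simulation results reported above.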

  7. Heat transfer measurements for Stirling machine cylinders

    NASA Technical Reports Server (NTRS)

    Kornhauser, Alan A.; Kafka, B. C.; Finkbeiner, D. L.; Cantelmi, F. C.

    1994-01-01

    The primary purpose of this study was to measure the effects of inflow-produced turbulence on heat transfer in Stirling machine cylinders. A secondary purpose was to provide new experimental information on heat transfer in gas springs without inflow. The apparatus for the experiment consisted of a varying-volume piston-cylinder space connected to a fixed-volume space by an orifice. The orifice size could be varied to adjust the level of inflow-produced turbulence, or the orifice plate could be removed completely so as to merge the two spaces into a single gas spring space. Speed, cycle mean pressure, overall volume ratio, and varying-volume space clearance ratio could also be adjusted. Volume, pressure in both spaces, and local heat flux at two locations were measured. The pressure and volume measurements were used to calculate area-averaged heat flux, heat transfer hysteresis loss, and other heat transfer-related effects. Experiments in the one-space arrangement extended the range of previous gas spring tests to lower volume ratios and higher nondimensional speeds. The tests corroborated previous results and showed that analytic models for heat transfer and loss based on volume ratios approaching 1 were valid for volume ratios ranging from 1 to 2, a range covering most gas springs in Stirling machines. Data from experiments in the two-space arrangement were first analyzed by lumping the two spaces together and examining total loss and averaged heat transfer as a function of overall nondimensional parameters. Heat transfer and loss were found to be significantly increased by inflow-produced turbulence. These increases could be modeled by appropriate adjustment of empirical coefficients in an existing semi-analytic model. An attempt was made to use an inverse, parameter optimization procedure to find the heat transfer in each of the two spaces. 
This procedure was successful in retrieving this information from simulated pressure-volume data with artificially generated noise, but it failed with the actual experimental data. This is evidence that the models used in the parameter optimization procedure (and to generate the simulated data) were not correct. Data from the surface heat flux sensors indicated that the primary shortcoming of these models was that they assumed turbulence levels to be constant over the cycle. Sensor data in the varying volume space showed a large increase in heat flux, probably due to turbulence, during the expansion stroke.

  8. Phase-field simulations of coherent precipitate morphologies and coarsening kinetics

    NASA Astrophysics Data System (ADS)

    Vaithyanathan, Venugopalan

    2002-09-01

    The primary aim of this research is to enhance the fundamental understanding of coherent precipitation reactions in advanced metallic alloys. The emphasis is on a particular class of precipitation reactions which result in ordered intermetallic precipitates embedded in a disordered matrix. These precipitation reactions underlie the development of high-temperature Ni-base superalloys and ultra-light aluminum alloys. The phase-field approach, which has emerged as the method of choice for modeling microstructure evolution, is employed for this research, with the focus on factors that control the precipitate morphologies and coarsening kinetics, such as precipitate volume fractions and lattice mismatch between precipitates and matrix. Two types of alloy systems are considered. The first involves L1_2-ordered precipitates in a disordered cubic matrix, in an attempt to model the gamma' precipitates in Ni-base superalloys and delta' precipitates in Al-Li alloys. The effect of volume fraction on coarsening kinetics of gamma' precipitates was investigated using two-dimensional (2D) computer simulations. With increasing volume fraction, larger fractions of precipitates were found to have smaller aspect ratios in the late stages of coarsening, and the precipitate size distributions became wider and more positively skewed. The most interesting result was associated with the effect of volume fraction on the coarsening rate constant. The coarsening rate constant as a function of volume fraction, extracted from the cubic growth law of the average half-edge length, was found to exhibit three distinct regimes: anomalous behavior, a rate constant decreasing with volume fraction, at small volume fractions (≲20%); volume-fraction-independent behavior at intermediate volume fractions (~20-50%); and normal behavior, a rate constant increasing with volume fraction, at large volume fractions (≳50%). 
    The second alloy system considered was Al-Cu, with the focus on understanding precipitation of metastable tetragonal theta'-Al2Cu in a cubic Al solid solution matrix. In collaboration with Chris Wolverton at Ford Motor Company, a multiscale model was developed that involves a novel combination of first-principles atomistic calculations with a mesoscale phase-field microstructure model. Reliable energetics in the form of bulk free energy, interfacial energy, and parameters for calculating the elastic energy were obtained using accurate first-principles calculations. (Abstract shortened by UMI.)
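
    The coarsening rate constant discussed above is conventionally extracted from the cubic growth law, <R>^3 = <R0>^3 + K t. A minimal sketch of that extraction by linear least squares; the time series and the value K = 0.5 are synthetic assumptions for illustration:

```python
import numpy as np

def coarsening_rate(t, R):
    """Extract the rate constant K (and initial size R0) from the
    cubic growth law <R>**3 = <R0>**3 + K*t by fitting a line to R**3."""
    K, R0_cubed = np.polyfit(t, np.asarray(R) ** 3, 1)
    return K, R0_cubed ** (1.0 / 3.0)

# Synthetic coarsening trajectory with R0 = 2 and K = 0.5
t = np.array([0.0, 10.0, 20.0, 40.0])
R = (2.0 ** 3 + 0.5 * t) ** (1.0 / 3.0)
K, R0 = coarsening_rate(t, R)
```

    Repeating such a fit at each simulated volume fraction would yield the K-versus-volume-fraction curve whose three regimes are described above.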

  9. Reliable groundwater levels: failures and lessons learned from modeling and monitoring studies

    NASA Astrophysics Data System (ADS)

    Van Lanen, Henny A. J.

    2017-04-01

    Adequate management of groundwater resources requires an a priori assessment of the impacts of intended groundwater abstractions. Usually, groundwater flow modeling is used to simulate the influence of the planned abstraction on groundwater levels. Model performance is tested using observed groundwater levels. Where a multi-aquifer system occurs, groundwater levels in the different aquifers have to be monitored through observation wells with filters at different depths, i.e. above the impermeable clay layer (phreatic water level) and beneath it (artesian aquifer level). A reliable artesian level can only be measured if the space between the outer wall of the borehole (vertical narrow shaft) and the observation well is refilled with impermeable material at the correct depth (post-drilling phase) to prevent a vertical hydraulic connection between the artesian and phreatic aquifers. We encountered a case of improper refilling, which made it impossible to monitor reliable artesian aquifer levels. At the location of the artesian observation well, a freely overflowing spring was seen, which implied that water leakage from the artesian aquifer affected the artesian groundwater level. Careful checking of the monitoring sites in a study area is a prerequisite for using observations in model performance assessment. After model testing, the groundwater model is forced with proposed groundwater abstractions (sites, extraction rates). The abstracted groundwater volume is compensated by a reduction of groundwater flow to the drainage network, and the model simulates the associated groundwater tables. The drawdown of the groundwater level is calculated by comparing the simulated groundwater level with and without groundwater abstraction. In lowland areas, such as vast areas of the Netherlands, the groundwater model has to consider a variable drainage network, which means that small streams only carry water during the wet winter season and run dry during the summer. 
The main streams drain groundwater throughout the whole year. We simulated groundwater levels with a steady-state groundwater flow model with and without groundwater abstraction for the wet and dry seasons, i.e. considering a high (all streams included) and a low drainage density (only major streams), respectively. Groundwater drawdown maps for the wet and dry seasons were compiled. Stakeholders (farmers, ecologists) were very concerned about the large drawdowns. After discussions with the Water Supply Company and stakeholders, we realised that we had calculated unrealistically large drawdowns of the phreatic groundwater level for the dry season. We learnt that by applying a steady-state model we did not take into account the large volume of groundwater that is released from groundwater storage. The transient groundwater model that we then developed showed that the volume of groundwater released from storage per unit of time is significant and that the drawdown of the phreatic groundwater level by the end of the dry period is substantially smaller than the one simulated by the steady-state model. The results of the transient groundwater flow model agreed rather well with the pumping test that lasted the whole dry season.
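
    The lesson above, that ignoring storage release overstates dry-season drawdown, can be sketched with a lumped linear-reservoir model: pumping Q against a drainage resistance c and storage coefficient S approaches the steady-state drawdown c*Q only exponentially, with time constant c*S. All parameter values below are hypothetical and serve only to illustrate the steady-state versus transient contrast:

```python
import math

def drawdown(Q, c, S, t):
    """Transient drawdown of a lumped linear-reservoir aquifer pumped at
    rate Q for time t. c = drainage resistance, S = storage coefficient.
    The steady-state limit is c*Q; storage release delays reaching it."""
    return c * Q * (1.0 - math.exp(-t / (c * S)))

Q, c, S = 1000.0, 0.5, 0.2          # hypothetical units
s_steady = c * Q                    # what a steady-state model reports
s_dry_season = drawdown(Q, c, S, t=0.05)  # end of a (short) dry season
```

    When the dry season is short relative to c*S, the transient drawdown is far below the steady-state value, which is exactly the discrepancy the steady-state model produced in the study above.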

  10. Power-output regularization in global sound equalization.

    PubMed

    Stefanakis, Nick; Sarris, John; Cambourakis, George; Jacobsen, Finn

    2008-01-01

    The purpose of equalization in room acoustics is to compensate for the undesired modification that an enclosure introduces to signals such as audio or speech. In this work, equalization in a large part of the volume of a room is addressed. The multiple point method is employed with an acoustic power-output penalty term instead of the traditional quadratic source effort penalty term. Simulation results demonstrate that this technique gives a smoother decline of the reproduction performance away from the control points.
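
    The multiple point method with a quadratic penalty reduces to regularized least squares: the source strengths q minimize ||Z q - d||^2 + beta * q^H A q, where Z maps sources to control points, d is the desired response, and A encodes the penalty (the identity for the traditional source-effort term; an acoustic power matrix for the power-output term used here). A generic sketch with random matrices standing in for room transfer functions; all dimensions and values are assumptions:

```python
import numpy as np

def equalizer_weights(Z, d, A, beta):
    """Solve min_q ||Z q - d||^2 + beta * q^H A q via the normal
    equations (Z^H Z + beta*A) q = Z^H d."""
    ZH = Z.conj().T
    return np.linalg.solve(ZH @ Z + beta * A, ZH @ d)

rng = np.random.default_rng(1)
Z = rng.standard_normal((8, 3)) + 1j * rng.standard_normal((8, 3))
d = rng.standard_normal(8) + 1j * rng.standard_normal(8)
q = equalizer_weights(Z, d, np.eye(3), beta=0.1)   # effort penalty here
```

    Swapping the identity for a Hermitian power-output matrix changes how off-target radiation is penalized, which is what yields the smoother decline away from the control points reported above.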

  11. Segmentation of Unstructured Datasets

    NASA Technical Reports Server (NTRS)

    Bhat, Smitha

    1996-01-01

    Datasets generated by computer simulations and experiments in Computational Fluid Dynamics tend to be extremely large and complex. It is difficult to visualize these datasets using standard techniques like Volume Rendering and Ray Casting. Object Segmentation provides a technique to extract and quantify regions of interest within these massive datasets. This thesis explores basic algorithms to extract coherent amorphous regions from two-dimensional and three-dimensional scalar unstructured grids. The techniques are applied to datasets from Computational Fluid Dynamics and from Finite Element Analysis.

  12. A Marine Aerosol Reference Tank system as a breaking wave analogue

    NASA Astrophysics Data System (ADS)

    Stokes, M. D.; Deane, G. B.; Prather, K.; Bertram, T. H.; Ruppel, M. J.; Ryder, O. S.; Brady, J. M.; Zhao, D.

    2012-12-01

    In order to better understand the processes governing the production of marine aerosols, a repeatable, controlled method for their generation is required. The Marine Aerosol Reference Tank (MART) has been designed to closely approximate oceanic conditions by producing an evolving bubble plume and surface foam patch. The tank utilizes an intermittently plunging sheet of water and a large-volume tank reservoir to simulate turbulence, plume and foam formation, and is monitored volumetrically and acoustically to ensure the repeatability of conditions.

  13. Nuclearite Detection with the ANTARES Neutrino Telescope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pavalas, G. E.; Popa, V.

    We discuss the possibility of searching for cosmic ray nuclearites using large volume neutrino telescopes. We present a short review of the expected properties of nuclearites, focusing on their interaction with the Earth's atmosphere and sea water, and on the specific energy loss mechanisms that could make them detectable in undersea neutrino detectors. After a brief description of the ANTARES telescope, currently under deployment in the Mediterranean Sea, we discuss its sensitivity to down-going nuclearites and present some results from Monte Carlo simulations of such events.

  14. The role of porous matrix in water flow regulation within a karst unsaturated zone: an integrated hydrogeophysical approach

    NASA Astrophysics Data System (ADS)

    Carrière, Simon D.; Chalikakis, Konstantinos; Danquigny, Charles; Davi, Hendrik; Mazzilli, Naomi; Ollivier, Chloé; Emblanch, Christophe

    2016-11-01

    Some portions of the porous rock matrix in the karst unsaturated zone (UZ) can contain large volumes of water and play a major role in water flow regulation. The essential results of a local-scale study conducted in 2011 and 2012 above the Low Noise Underground Laboratory (LSBB - Laboratoire Souterrain à Bas Bruit) at Rustrel, southeastern France, are presented. Previous research revealed the geological structure and water-related features of the study site and illustrated the feasibility of specific hydrogeophysical measurements. In this study, the focus is on hydrodynamics at the seasonal and event timescales. Magnetic resonance sounding (MRS) measured a high water content (more than 10 %) in a large volume of rock. This large volume of water cannot be stored in the fractures and conduits of the UZ. MRS was also used to measure the seasonal variation of water stored in the karst UZ. A process-based model was developed to simulate the effect of vegetation on groundwater recharge dynamics. In addition, electrical resistivity tomography (ERT) monitoring was used to assess preferential water pathways during a rain event. This study demonstrates the major influence of water flow within the porous rock matrix on the hydrogeological functioning of the UZ at both the local (LSBB) and regional (Fontaine de Vaucluse) scales. By taking into account the role of the porous matrix in water flow regulation, these findings may significantly improve karst groundwater hydrodynamic modelling, exploitation, and sustainable management.

  15. Evaluation of Large Volume SrI2(Eu) Scintillator Detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sturm, B W; Cherepy, N J; Drury, O B

    2010-11-18

    There is an ever-increasing demand for gamma-ray detectors which can achieve good energy resolution, high detection efficiency, and room-temperature operation. We are working to address each of these requirements through the development of large volume SrI2(Eu) scintillator detectors. In this work, we have evaluated a variety of SrI2 crystals with volumes >10 cm3. The goal of this research was to examine the causes of energy resolution degradation for larger detectors and to determine what can be done to mitigate these effects. Testing both packaged and unpackaged detectors, we have consistently achieved better resolution with the packaged detectors. Using a collimated gamma-ray source, it was determined that the better energy resolution of the packaged detectors is correlated with better light collection uniformity. A number of packaged detectors were fabricated and tested, and the best spectroscopic performance was achieved for a 3% Eu-doped crystal with an energy resolution of 2.93% FWHM at 662 keV. Simulations of SrI2(Eu) crystals were also performed to better understand the light transport physics in scintillators and are reported. This study has important implications for the development of SrI2(Eu) detectors for national security purposes.
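
    The percent-FWHM figure of merit quoted above relates to the Gaussian photopeak width by the standard identity FWHM = 2*sqrt(2 ln 2)*sigma ≈ 2.355*sigma. A small sketch converting the reported 2.93% at 662 keV into an absolute peak width; only those two numbers come from the abstract, the rest is the standard conversion:

```python
import math

FWHM_PER_SIGMA = 2.0 * math.sqrt(2.0 * math.log(2.0))  # ~2.3548

def resolution_percent(sigma_kev, peak_kev):
    """Percent FWHM energy resolution of a Gaussian photopeak."""
    return 100.0 * FWHM_PER_SIGMA * sigma_kev / peak_kev

# 2.93% FWHM at the 662 keV 137Cs line corresponds to a Gaussian
# width (sigma) of roughly 8.2 keV:
sigma_kev = 0.0293 * 662.0 / FWHM_PER_SIGMA
```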

  16. Novel Numerical Approaches to Loop Quantum Cosmology

    NASA Astrophysics Data System (ADS)

    Diener, Peter

    2015-04-01

    Loop Quantum Gravity (LQG) is an (as yet incomplete) approach to the quantization of gravity. When applied to symmetry reduced cosmological spacetimes (Loop Quantum Cosmology or LQC) one of the predictions of the theory is that the Big Bang is replaced by a Big Bounce, i.e. a previously existing contracting universe underwent a bounce at finite volume before becoming our expanding universe. The evolution equations of LQC take the form of difference equations (with the discretization given by the theory) that in the large volume limit can be approximated by partial differential equations (PDEs). In this talk I will first discuss some of the unique challenges encountered when trying to numerically solve these difference equations. I will then present some of the novel approaches that have been employed to overcome the challenges. I will here focus primarily on the Chimera scheme that takes advantage of the fact that the LQC difference equations can be approximated by PDEs in the large volume limit. I will finally also briefly discuss some of the results that have been obtained using these numerical techniques by performing simulations in regions of parameter space that were previously unreachable. This work is supported by a grant from the John Templeton Foundation and by NSF grant PHYS1068743.

  17. Acoustically accessible window determination for ultrasound mediated treatment of glycogen storage disease type Ia patients

    NASA Astrophysics Data System (ADS)

    Wang, Shutao; Raju, Balasundar I.; Leyvi, Evgeniy; Weinstein, David A.; Seip, Ralf

    2012-10-01

    Glycogen storage disease type Ia (GSDIa) is caused by an inherited single-gene defect resulting in an impaired glycogen to glucose conversion pathway. Targeted ultrasound mediated delivery (USMD) of plasmid DNA (pDNA) to the liver in conjunction with microbubbles may provide a potential treatment for GSDIa patients. As the success of USMD treatments is largely dependent on the accessibility of the targeted tissue by the focused ultrasound beam, this study presents a quantitative approach to determine the acoustically accessible liver volume in GSDIa patients. Models of focused ultrasound beam profiles for transducers of varying apertures and focal lengths were applied to abdomen models reconstructed from suitable CT and MRI images. Transducer manipulations (simulating USMD treatment procedures) were implemented via transducer translations and rotations with the intent of targeting and exposing the entire liver to ultrasound. Results indicate that acoustically accessible liver volumes can be as large as 50% of the entire liver volume for GSDIa patients, and on average 3 times larger compared to a healthy adult group due to GSDIa patients' increased liver size. Detailed descriptions of the evaluation algorithm and the transducer and abdomen models are presented, together with implications for USMD treatments of GSDIa patients and transducer designs for USMD applications.

  18. Scalable subsurface inverse modeling of huge data sets with an application to tracer concentration breakthrough data from magnetic resonance imaging

    DOE PAGES

    Lee, Jonghyun; Yoon, Hongkyu; Kitanidis, Peter K.; ...

    2016-06-09

    Characterizing subsurface properties is crucial for reliable and cost-effective groundwater supply management and contaminant remediation. With recent advances in sensor technology, large volumes of hydro-geophysical and geochemical data can be obtained to achieve high-resolution images of subsurface properties. However, characterization with such a large amount of information requires prohibitive computational costs associated with "big data" processing and numerous large-scale numerical simulations. To tackle such difficulties, the Principal Component Geostatistical Approach (PCGA) has been proposed as a "Jacobian-free" inversion method that requires far fewer forward simulation runs per iteration than the number of unknown parameters and measurements needed in traditional inversion methods. PCGA can be conveniently linked to any multi-physics simulation software with independent parallel executions. In this paper, we extend PCGA to handle a large number of measurements (e.g., 10^6 or more) by constructing a fast preconditioner whose computational cost scales linearly with the data size. For illustration, we characterize the heterogeneous hydraulic conductivity (K) distribution in a laboratory-scale 3-D sand box using about 6 million transient tracer concentration measurements obtained using magnetic resonance imaging. Since each individual observation has little information on the K distribution, the data were compressed to the zeroth temporal moment of the breakthrough curves, which is equivalent to the mean travel time under the experimental setting. Moreover, only about 2,000 forward simulations in total were required to obtain the best estimate with corresponding estimation uncertainty, and the estimated K field captured key patterns of the original packing design, showing the efficiency and effectiveness of the proposed method.
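
    The compression step above, reducing each breakthrough curve to its zeroth temporal moment, is just a time integral per measurement location. A minimal sketch with a synthetic tracer pulse (the curve shape and grid are assumptions for illustration):

```python
import numpy as np

def zeroth_moment(t, c):
    """Zeroth temporal moment of a breakthrough curve c(t): the integral
    of c over t, computed here with the composite trapezoidal rule."""
    t, c = np.asarray(t), np.asarray(c)
    return float(np.sum(0.5 * (c[1:] + c[:-1]) * np.diff(t)))

t = np.linspace(0.0, 10.0, 1001)
c = np.exp(-((t - 4.0) ** 2) / 0.5)   # synthetic tracer pulse
m0 = zeroth_moment(t, c)              # ~ sqrt(0.5 * pi) for this pulse
```

    Applied voxel-by-voxel, this collapses millions of time series to one scalar each, which is what makes the subsequent inversion tractable.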

  20. A 3-D Finite-Volume Non-hydrostatic Icosahedral Model (NIM)

    NASA Astrophysics Data System (ADS)

    Lee, Jin

    2014-05-01

    The Nonhydrostatic Icosahedral Model (NIM) formulates the latest numerical innovations of a three-dimensional finite-volume dynamical core on the quasi-uniform icosahedral grid, suitable for ultra-high resolution simulations. NIM's modeling goal is to improve numerical accuracy for weather and climate simulations as well as to utilize state-of-the-art computing architectures, such as massively parallel CPUs and GPUs, to deliver routine high-resolution forecasts in a timely manner. NIM dynamical core innovations include: * a local coordinate system that remaps the spherical surface to a plane for numerical accuracy (Lee and MacDonald, 2009), * grid points in a table-driven horizontal loop that allows any horizontal point sequence (A. E. MacDonald, et al., 2010), * Flux-Corrected Transport formulated on finite-volume operators to maintain conservative, positive-definite transport (J.-L. Lee, et al., 2010), * icosahedral grid optimization (Wang and Lee, 2011), * all differentials evaluated as three-dimensional finite-volume integrals around the control volume. The three-dimensional finite-volume solver in NIM is designed to improve pressure gradient calculation and orographic precipitation over complex terrain. The NIM dynamical core has been successfully verified with various non-hydrostatic benchmark test cases, such as internal gravity waves and mountain waves, in the Dynamical Core Model Intercomparison Project (DCMIP). Physical parameterizations suitable for NWP are incorporated into the NIM dynamical core and successfully tested with multi-month aqua-planet simulations. Recently, NIM has started real-data simulations using GFS initial conditions. Results from the idealized tests as well as real-data simulations will be shown in the conference.

  1. Gas hydrate volume estimations on the South Shetland continental margin, Antarctic Peninsula

    USGS Publications Warehouse

    Jin, Y.K.; Lee, M.W.; Kim, Y.; Nam, S.H.; Kim, K.J.

    2003-01-01

    Multi-channel seismic data acquired on the South Shetland margin, northern Antarctic Peninsula, show that Bottom Simulating Reflectors (BSRs) are widespread in the area, implying large volumes of gas hydrates. In order to estimate the volume of gas hydrate in the area, interval velocities were determined using a 1-D velocity inversion method, and porosities were deduced from their relationship with sub-bottom depth for terrigenous sediments. Because data such as well logs are not available, we made two baseline models for the velocities and porosities of non-gas hydrate-bearing sediments in the area, considering the velocity jump observed at shallow sub-bottom depth due to the joint contributions of gas hydrate and a shallow unconformity. The difference between the results of the two models is not significant. The parameters used to estimate the total volume of gas hydrate in the study area were 145 km of total length of BSRs identified on seismic profiles, 350 m thickness and 15 km width of gas hydrate-bearing sediments, and 6.3% average volumetric gas hydrate concentration (based on the second baseline model). Assuming that gas hydrates exist only where BSRs are observed, the total volume of gas hydrates along the seismic profiles in the area is about 4.8 × 10^10 m3 (7.7 × 10^12 m3 of methane at standard temperature and pressure).
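
    The volume estimate reduces to simple arithmetic over the stated parameters; the check below reproduces the quoted totals. The gas-to-hydrate expansion factor of ~160 at STP is inferred from the ratio of the two quoted volumes, not stated explicitly in the abstract:

```python
length_m = 145e3       # total length of BSRs on the seismic profiles
width_m = 15e3         # width of gas hydrate-bearing sediments
thickness_m = 350.0    # thickness of gas hydrate-bearing sediments
concentration = 0.063  # average volumetric gas hydrate concentration

hydrate_m3 = length_m * width_m * thickness_m * concentration  # ~4.8e10
methane_m3 = hydrate_m3 * 160.0  # m3 CH4 at STP per m3 hydrate (inferred)
```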

  2. Simulation studies of vestibular macular afferent-discharge patterns using a new, quasi-3-D finite volume method

    NASA Technical Reports Server (NTRS)

    Ross, M. D.; Linton, S. W.; Parnas, B. R.

    2000-01-01

    A quasi-three-dimensional finite-volume numerical simulator was developed to study passive voltage spread in vestibular macular afferents. The method, borrowed from computational fluid dynamics, discretizes events transpiring in small volumes over time. The afferent simulated had three calyces with processes. The number of processes and synapses, and the direction and timing of synapse activation, were varied. Simultaneous synapse activation resulted in the shortest latency, while directional activation (proximal to distal and distal to proximal) yielded the most regular discharges. Color-coded visualizations showed that the simulator discretized events and demonstrated that discharge produced a distal spread of voltage from the spike initiator into the ending. The simulations indicate that directional input, morphology, and timing of synapse activation can affect discharge properties, as must the distal spread of voltage from the spike initiator. The finite volume method is general and can be applied to more complex neurons to explore discrete synaptic effects in four dimensions.
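
    The finite-volume idea, budgeting the fluxes into and out of each small volume at every time step, can be sketched in one dimension for passive voltage spread along a chain of compartments with membrane leak. This is an illustrative toy, not the paper's quasi-3-D simulator; all coefficients and the clamped-source setup are assumptions:

```python
import numpy as np

def passive_spread(n=50, steps=2000, dt=1e-3, D=0.5, g_leak=0.1):
    """Explicit finite-volume update for passive voltage spread along a
    1-D chain of n compartments with leak; voltage clamped at index 0."""
    V = np.zeros(n)
    V[0] = 1.0                      # clamped at the injection site
    for _ in range(steps):
        flux = D * np.diff(V)       # conservative inter-volume fluxes
        dV = np.zeros(n)
        dV[:-1] += flux             # flux into the left compartment...
        dV[1:] -= flux              # ...is flux out of the right one
        V += dt * (dV - g_leak * V)
        V[0] = 1.0
    return V

V = passive_spread()
```

    Because each flux is added to one compartment and subtracted from its neighbor, charge is conserved exactly up to the leak term, which is the property that makes the finite-volume discretization attractive for this problem.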

  3. Simulation of a multistage fractured horizontal well in a water-bearing tight fractured gas reservoir under non-Darcy flow

    NASA Astrophysics Data System (ADS)

    Zhang, Rui-Han; Zhang, Lie-Hui; Wang, Rui-He; Zhao, Yu-Long; Huang, Rui

    2018-06-01

    Reservoir development for unconventional resources such as tight gas reservoirs is in increasing demand due to the rapid decline of production in conventional reserves. Compared with conventional reservoirs, fluid flow in water-bearing tight gas reservoirs is subject to more nonlinear multiphase flow and gas slippage in nano/micro matrix pores because of the strong collisions between rock and gas molecules. Economic gas production from tight gas reservoirs depends on extensive application of water-based hydraulic fracturing of horizontal wells, associated with non-Darcy flow at high flow rates, geomechanical stress sensitivity of un-propped natural fractures, complex flow geometry, and multiscale heterogeneity. Efficiently and accurately predicting the production performance of a multistage fractured horizontal well (MFHW) is therefore challenging. In this paper, a novel multicontinuum, multimechanism, two-phase simulator is established based on unstructured meshes and the control volume finite element method to analyze the production performance of MFHWs. The multiple interacting continua model and the discrete fracture model are coupled to integrate the unstimulated fractured reservoir, induced fracture networks (stimulated reservoir volumes, SRVs), and irregular discrete hydraulic fractures. Several simulations and sensitivity analyses are performed with the developed simulator to determine the key factors affecting the production performance of MFHWs. Two widely applied fracturing models, classic hydraulic fracturing, which generates long double-wing fractures, and volumetric fracturing, which aims to create large SRVs, are compared to identify which makes better use of tight gas reserves.

  4. Prior video game utilization is associated with improved performance on a robotic skills simulator.

    PubMed

    Harbin, Andrew C; Nadhan, Kumar S; Mooney, James H; Yu, Daohai; Kaplan, Joshua; McGinley-Hence, Nora; Kim, Andrew; Gu, Yiming; Eun, Daniel D

    2017-09-01

    Laparoscopic surgery and robotic surgery, two forms of minimally invasive surgery (MIS), have recently experienced a large increase in utilization. Prior studies have shown that video game experience (VGE) may be associated with improved laparoscopic surgery skills; however, similar data supporting a link between VGE and proficiency on a robotic skills simulator (RSS) are lacking. The objective of our study is to determine whether volume or timing of VGE had any impact on RSS performance. Pre-clinical medical students completed a comprehensive questionnaire detailing previous VGE across several time periods. Seventy-five subjects were ultimately evaluated in 11 training exercises on the daVinci Si Skills Simulator. RSS skill was measured by overall score, time to completion, economy of motion, average instrument collision, and improvement in Ring Walk 3 score. Using the nonparametric tests and linear regression, these metrics were analyzed for systematic differences between non-users, light, and heavy video game users based on their volume of use in each of the following four time periods: past 3 months, past year, past 3 years, and high school. Univariate analyses revealed significant differences between heavy and non-users in all five performance metrics. These trends disappeared as the period of VGE went further back. Our study showed a positive association between video game experience and robotic skills simulator performance that is stronger for more recent periods of video game use. The findings may have important implications for the evolution of robotic surgery training.

  5. Efficiency study of a big volume well type NaI(Tl) detector by point and voluminous sources and Monte-Carlo simulation.

    PubMed

    Hansman, Jan; Mrdja, Dusan; Slivka, Jaroslav; Krmar, Miodrag; Bikit, Istvan

    2015-05-01

    The activity of environmental samples is usually measured by high-resolution HPGe gamma spectrometers. In this work a set-up with a 9 in. × 9 in. NaI well detector with 3 in. wall thickness and a 3 in. × 3 in. plug detector in a 15-cm-thick lead shielding is considered as an alternative (Hansman, 2014). In spite of its much poorer resolution, it requires shorter measurement times and may possibly give better detection limits. In order to determine the U-238, Th-232, and K-40 content in samples with this NaI(Tl) detector, the corresponding photopeak efficiencies must be known. These efficiencies can be found for a given source matrix and geometry by Geant4 simulation. We found a discrepancy between simulated and experimental efficiencies of 5-50%, which can be attributed mainly to effects of light collection within the detector volume, an effect that was not taken into account by the simulations. The influence of random coincidence summing on detection efficiency was negligible for radionuclide activities in the range 130-4000 Bq. This paper also describes how the detection efficiency depends on the position of the radioactive point source. To avoid large dead time, relatively weak Mn-54, Co-60 and Na-22 point sources of a few kBq were used. Results for single gamma lines and also for coincidence-summing gamma lines are presented.

  6. Multiple anatomy optimization of accumulated dose

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watkins, W. Tyler, E-mail: watkinswt@virginia.edu; Siebers, Jeffrey V.; Moore, Joseph A.

    Purpose: To investigate the potential advantages of multiple anatomy optimization (MAO) for lung cancer radiation therapy compared to the internal target volume (ITV) approach. Methods: MAO aims to optimize a single fluence to be delivered under free-breathing conditions such that the accumulated dose meets the plan objectives, where accumulated dose is defined as the sum of deformably mapped doses computed on each phase of a single four dimensional computed tomography (4DCT) dataset. Phantom and patient simulation studies were carried out to investigate potential advantages of MAO compared to ITV planning. Through simulated delivery of the ITV- and MAO-plans, target dose variations were also investigated. Results: By optimizing the accumulated dose, MAO shows the potential to ensure dose to the moving target meets plan objectives while simultaneously reducing dose to organs at risk (OARs) compared with ITV planning. While consistently superior to the ITV approach, MAO resulted in equivalent OAR dosimetry at planning objective dose levels to within 2% volume in 14/30 plans and to within 3% volume in 19/30 plans for each lung V20, esophagus V25, and heart V30. Despite large variations in per-fraction respiratory phase weights in simulated deliveries at high dose rates (e.g., treating 4/10 phases during single fraction beams) the cumulative clinical target volume (CTV) dose after 30 fractions and per-fraction dose were constant independent of planning technique. In one case considered, however, per-phase CTV dose varied from 74% to 117% of prescription implying the level of ITV-dose heterogeneity may not be appropriate with conventional, free-breathing delivery. Conclusions: MAO incorporates 4DCT information in an optimized dose distribution and can achieve a superior plan in terms of accumulated dose to the moving target and OAR sparing compared to ITV-plans. An appropriate level of dose heterogeneity in MAO plans must be further investigated.
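
    The accumulated dose defined above, a weighted sum of deformably mapped phase doses, can be sketched as follows. The deformable mapping onto the reference anatomy is assumed to have been applied already; the function name and phase weights are illustrative.

    ```python
    import numpy as np

    def accumulate_dose(phase_doses, phase_weights):
        """Accumulated dose: weighted sum over breathing phases of doses that
        were already deformably mapped onto a common reference anatomy."""
        phase_doses = np.asarray(phase_doses, dtype=float)
        w = np.asarray(phase_weights, dtype=float)
        w = w / w.sum()                          # normalize phase weights
        return np.tensordot(w, phase_doses, axes=1)

    # two hypothetical phases of a tiny dose profile on the reference grid
    phases = [[60.0, 58.0, 2.0], [58.0, 60.0, 2.0]]
    acc = accumulate_dose(phases, [0.5, 0.5])    # equals [59.0, 59.0, 2.0]
    ```

    In MAO this accumulated dose, rather than any single-phase dose, is what the fluence optimization evaluates against the plan objectives.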

  7. Multiline 3D beamforming using micro-beamformed datasets for pediatric transesophageal echocardiography

    NASA Astrophysics Data System (ADS)

    Bera, D.; Raghunathan, S. B.; Chen, C.; Chen, Z.; Pertijs, M. A. P.; Verweij, M. D.; Daeichin, V.; Vos, H. J.; van der Steen, A. F. W.; de Jong, N.; Bosch, J. G.

    2018-04-01

    Until now, no matrix transducer has been realized for 3D transesophageal echocardiography (TEE) in pediatric patients. In 3D TEE with a matrix transducer, the biggest challenges are to connect a large number of elements to a standard ultrasound system, and to achieve a high volume rate (>200 Hz). To address these issues, we have recently developed a prototype miniaturized matrix transducer for pediatric patients with micro-beamforming and a small central transmitter. In this paper we propose two multiline parallel 3D beamforming techniques (µBF25 and µBF169) using the micro-beamformed datasets from 25 and 169 transmit events to achieve volume rates of 300 Hz and 44 Hz, respectively. Both realizations use an angle-weighted combination of the neighboring overlapping sub-volumes to avoid artifacts due to sharp intensity changes introduced by parallel beamforming. In simulation, the image quality in terms of the width of the point spread function (PSF), lateral shift invariance and mean clutter level for volumes produced by µBF25 and µBF169 is similar to idealized beamforming using a conventional single-line acquisition with a fully-sampled matrix transducer (FS4k, 4225 transmit events). For completeness, we also investigated a 9-transmit scheme (3 × 3) that allows even higher frame rates, but found worse B-mode image quality with our probe. The simulations were experimentally verified by acquiring the µBF datasets from the prototype using a Verasonics V1 research ultrasound system. For both µBF169 and µBF25, the experimental PSFs were similar to the simulated PSFs, but in the experimental PSFs the clutter level was ~10 dB higher. Results indicate that the proposed multiline 3D beamforming techniques with the prototype matrix transducer are promising candidates for real-time pediatric 3D TEE.

  8. Design and optimization of hot-filling pasteurization conditions: Cupuaçu (Theobroma grandiflorum) fruit pulp case study.

    PubMed

    Silva, Filipa V M; Martins, Rui C; Silva, Cristina L M

    2003-01-01

    Cupuaçu (Theobroma grandiflorum) is an Amazonian tropical fruit with great economic potential. Pasteurization, by a hot-filling technique, was suggested for the preservation of this fruit pulp at room temperature. The process was implemented with local communities in Brazil. The process was modeled, and a computer program was written in Turbo Pascal. The relative importance among the pasteurization process variables (initial product temperature, heating rate, holding temperature and time, container volume and shape, cooling medium type and temperature) on the microbial target and quality was investigated by performing simulations according to a screening factorial design. Afterward, simulations of the different processing conditions were carried out. The holding temperature (T(F)) and time (t(hold)) affected the pasteurization value (P), and the container volume (V) largely influenced the quality parameters. The process was optimized for retail (1 L) and industrial (100 L) size containers, by maximizing volume-average quality in terms of color lightness and sensory "fresh notes" and minimizing volume-average total color difference and sensory "cooked notes". Equivalent processes were designed and simulated (P at 91 °C = 4.6 min on Alicyclobacillus acidoterrestris spores) and the final quality (color, flavor, and aroma attributes) was evaluated. Color was only slightly affected by the pasteurization processes, and few differences were observed between the six equivalent treatments designed (T(F) between 80 and 97 °C). T(F) ≥ 91 °C minimized "cooked notes" and maximized "fresh notes" of cupuaçu pulp aroma and flavor for the 1 L container. Concerning the 100 L size, the development of "cooked notes" can be minimized with T(F) ≥ 91 °C, but overall quality was greatly degraded as a result of the long cooling times. A more efficient method to speed up the cooling phase was recommended, especially for the industrial container size.
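
    The pasteurization value P referenced to 91 °C is conventionally the time integral of the lethal rate. A sketch follows, with a hypothetical z-value of 10 °C; the actual z for Alicyclobacillus acidoterrestris spores would come from the paper's inactivation kinetics.

    ```python
    import numpy as np

    def pasteurization_value(temps_c, times_min, t_ref_c=91.0, z_c=10.0):
        """P = integral over time of the lethal rate 10**((T - Tref)/z),
        evaluated with the trapezoidal rule on a sampled temperature history."""
        lr = 10.0 ** ((np.asarray(temps_c, dtype=float) - t_ref_c) / z_c)
        t = np.asarray(times_min, dtype=float)
        return float(np.sum(0.5 * (lr[1:] + lr[:-1]) * np.diff(t)))

    # holding at exactly 91 degC for 4.6 min yields P = 4.6 min
    t = np.linspace(0.0, 4.6, 50)
    p_hold = pasteurization_value(np.full_like(t, 91.0), t)
    ```

    In the full process model, the heating-up and cooling phases also contribute to P through the same integral, which is why slow cooling of the 100 L containers degrades quality without adding much lethality at low temperatures.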

  9. MO-F-CAMPUS-T-05: Correct Or Not to Correct for Rotational Patient Set-Up Errors in Stereotactic Radiosurgery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Briscoe, M; Ploquin, N; Voroney, JP

    2015-06-15

    Purpose: To quantify the effect of patient rotation in stereotactic radiation therapy and establish a threshold where rotational patient set-up errors have a significant impact on target coverage. Methods: To simulate rotational patient set-up errors, a Matlab code was created to rotate the patient dose distribution around the treatment isocentre, located centrally in the lesion, while keeping the structure contours in the original locations on the CT and MRI. Rotations of 1°, 3°, and 5° for each of the pitch, roll, and yaw, as well as simultaneous rotations of 1°, 3°, and 5° around all three axes were applied to two types of brain lesions: brain metastasis and acoustic neuroma. In order to analyze multiple tumour shapes, these plans included small spherical (metastasis), elliptical (acoustic neuroma), and large irregular (metastasis) tumour structures. Dose-volume histograms and planning target volumes were compared between the planned patient positions and those with simulated rotational set-up errors. The RTOG conformity index for patient rotation was also investigated. Results: Examining the tumour volumes that received 80% of the prescription dose in the planned and rotated patient positions showed decreases in prescription dose coverage of up to 2.3%. Conformity indices for treatments with simulated rotational errors showed decreases of up to 3% compared to the original plan. For irregular lesions, degradation of 1% of the target coverage can be seen for rotations as low as 3°. Conclusions: This data shows that for elliptical or spherical targets, rotational patient set-up errors less than 3° around any or all axes do not have a significant impact on the dose delivered to the target volume or the conformity index of the plan. However the same rotational errors would have an impact on plans for irregular tumours.
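
    The RTOG conformity index investigated above is the ratio of the prescription isodose volume to the target volume. A minimal sketch on a toy dose grid; voxel layout, dose values and the prescription level are hypothetical.

    ```python
    import numpy as np

    def rtog_conformity_index(dose, target_mask, rx_dose, voxel_volume=1.0):
        """RTOG CI = (volume enclosed by the prescription isodose) / (target
        volume); CI = 1 is ideal, CI > 1 indicates spill outside the target."""
        piv = (dose >= rx_dose).sum() * voxel_volume   # prescription isodose volume
        tv = target_mask.sum() * voxel_volume          # target volume
        return piv / tv

    # toy 1D example: six voxels at/above 20 Gy covering a five-voxel target
    dose = np.array([21, 22, 20, 20, 25, 20, 5, 3], dtype=float)
    target = np.array([1, 1, 1, 1, 1, 0, 0, 0], dtype=bool)
    ci = rtog_conformity_index(dose, target, rx_dose=20.0)   # 6/5 = 1.2
    ```

    Under a simulated rotation, the dose grid (not the contours) is resampled in the rotated frame and the same index is recomputed, which is how the up-to-3% CI decreases above would be obtained.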

  10. Influence of Gravity on Blood Volume and Flow Distribution

    NASA Technical Reports Server (NTRS)

    Pendergast, D.; Olszowka, A.; Bednarczyk, E.; Shykoff, B.; Farhi, L.

    1999-01-01

    In our previous experiments during NASA Shuttle flights SLS 1 and 2 (9-15 days) and EUROMIR flights (30-90 days) we observed that pulmonary blood flow (cardiac output) was elevated initially, and surprisingly remained elevated for the duration of the flights. Stroke volume increased initially and then decreased, but was still above 1 Gz values. As venous return was constant, the changes in SV were secondary to modulation of heart rate. Mean blood pressure was at or slightly below 1 Gz levels in space, indicating a decrease in total peripheral resistance. It has been suggested that plasma volume is reduced in space, however cardiac output/venous return do not return to 1 Gz levels over the duration of flight. In spite of the increased cardiac output, central venous pressure was not elevated in space. These data suggest that there is a change in the basic relationship between cardiac output and central venous pressure, a persistent "hyperperfusion" and a re-distribution of blood flow and volume during space flight. Increased pulmonary blood flow has been reported to increase diffusing capacity in space, presumably due to the improved homogeneity of ventilation and perfusion. Other studies have suggested that ventilation may be independent of gravity, and perfusion may not be gravity- dependent. No data for the distribution of pulmonary blood volume were available for flight or simulated microgravity. Recent studies have suggested that the pulmonary vascular tree is influenced by sympathetic tone in a manner similar to that of the systemic system. This implies that the pulmonary circulation is dilated during microgravity and that the distribution of blood flow and volume may be influenced more by vascular control than by gravity. The cerebral circulation is influenced by sympathetic tone similarly to that of the systemic and pulmonary circulations; however its effects are modulated by cerebral autoregulation. 
Thus it is difficult to predict if cerebral perfusion is increased and if there is edema in space. Anecdotal evidence suggests there may be cerebral edema early in flight. Cerebral artery velocity has been shown to be elevated in simulated microgravity. The elevated cerebral artery velocity during simulated microgravity may reflect vasoconstriction of the arteries and not increased cerebral blood flow. The purpose of our investigations was to evaluate the effects of alterations in simulated gravity (+/-), resulting in changes in cardiac output (+/-), and on the blood flow and volume distribution in the lung and brain of human subjects. The first hypothesis of these studies was that blood flow and volume would be affected by gravity, but their distribution in the lung would be independent of gravity and due to vasoactivity changing vascular resistance in lung vessels. The vasodilitation of the lung vasculature (lower resistance) along with increased "compliance" of the heart could account for the absence of increased central venous pressure in microgravity. Secondly, we postulate that cerebral blood velocity is increased in microgravity due to large artery vasoconstriction, but that cerebral blood flow would be reduced due to autoregulation.

  11. Random forest classification of large volume structures for visuo-haptic rendering in CT images

    NASA Astrophysics Data System (ADS)

    Mastmeyer, Andre; Fortmeier, Dirk; Handels, Heinz

    2016-03-01

    For patient-specific voxel-based visuo-haptic rendering of CT scans of the liver area, the fully automatic segmentation of large volume structures such as skin, soft tissue, lungs and intestine (risk structures) is important. Using a machine learning based approach, several existing segmentations from 10 segmented gold-standard patients are learned by random decision forests individually and collectively. The core of this paper is feature selection and the application of the learned classifiers to a new patient data set. In a leave-some-out cross-validation, the obtained full volume segmentations are compared to the gold-standard segmentations of the untrained patients. The proposed classifiers use a multi-dimensional feature space to estimate the hidden truth, instead of relying on clinical standard threshold- and connectivity-based methods. The results of our efficient whole-body section classification are multi-label maps of the considered tissues. For visuo-haptic simulation, other small volume structures would have to be segmented additionally. We also take a look at these structures (liver vessels). For an experimental leave-some-out study consisting of 10 patients, the proposed method performs much more efficiently than state-of-the-art methods. In two variants of leave-some-out experiments we obtain best mean DICE ratios of 0.79, 0.97, 0.63 and 0.83 for skin, soft tissue, hard bone and risk structures. Liver structures are segmented with DICE 0.93 for the liver, 0.43 for blood vessels and 0.39 for bile vessels.
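
    The DICE ratio used throughout the evaluation above is the standard overlap measure between a predicted and a gold-standard binary segmentation; a minimal sketch on made-up masks:

    ```python
    import numpy as np

    def dice(a, b):
        """DICE overlap ratio between two binary segmentation masks:
        2|A intersect B| / (|A| + |B|), in [0, 1]."""
        a = np.asarray(a, dtype=bool)
        b = np.asarray(b, dtype=bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    pred = np.array([1, 1, 1, 0, 0], dtype=bool)   # hypothetical prediction
    gold = np.array([1, 1, 0, 0, 0], dtype=bool)   # hypothetical gold standard
    d = dice(pred, gold)                           # 2*2 / (3+2) = 0.8
    ```

    The per-tissue scores quoted above (e.g. 0.97 for soft tissue vs 0.43 for blood vessels) illustrate how DICE penalizes thin, small-volume structures much more strongly than bulky ones.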

  12. Modeling Supernova Shocks with Intense Lasers.

    NASA Astrophysics Data System (ADS)

    Blue, Brent

    2006-04-01

    Large-scale directional outflows of supersonic plasma are ubiquitous phenomena in astrophysics, with specific application to supernovae. The traditional approach to understanding such phenomena is through theoretical analysis and numerical simulations. However, theoretical analysis might not capture all the relevant physics and numerical simulations have limited resolution and fail to scale correctly in Reynolds number and perhaps other key dimensionless parameters. Recent advances in high energy density physics using large inertial confinement fusion devices now allow controlled laboratory experiments on macroscopic volumes of plasma of direct relevance to astrophysics. This talk will present an overview of these facilities as well as results from current laboratory astrophysics experiments designed to study hydrodynamic jets and Rayleigh-Taylor mixing. This work is performed under the auspices of the U. S. DOE by Lawrence Livermore National Laboratory under Contract No. W-7405-ENG-48, Los Alamos National Laboratory under Contract No. W-7405-ENG-36, and the Laboratory for Laser Energetics under Contract No. DE-FC03-92SF19460.

  13. Turbulent thermal superstructures in Rayleigh-Bénard convection

    NASA Astrophysics Data System (ADS)

    Stevens, Richard J. A. M.; Blass, Alexander; Zhu, Xiaojue; Verzicco, Roberto; Lohse, Detlef

    2018-04-01

    We report the observation of superstructures, i.e., very large-scale and long-lived coherent structures in highly turbulent Rayleigh-Bénard convection up to Rayleigh number Ra = 10⁹. We perform direct numerical simulations in horizontally periodic domains with aspect ratios up to Γ = 128. In the considered Ra number regime the thermal superstructures have a horizontal extent of six to seven times the height of the domain and their size is independent of Ra. Many laboratory experiments and numerical simulations have focused on small aspect ratio cells in order to achieve the highest possible Ra. However, here we show that for very high Ra integral quantities such as the Nusselt number and volume-averaged Reynolds number only converge to the large aspect ratio limit around Γ ≈ 4, while horizontally averaged statistics such as standard deviation and kurtosis converge around Γ ≈ 8, the integral scale converges around Γ ≈ 32, and the peak position of the temperature variance and turbulent kinetic energy spectra only converge around Γ ≈ 64.

  14. Compaction of granular materials composed of deformable particles

    NASA Astrophysics Data System (ADS)

    Nguyen, Thanh Hai; Nezamabadi, Saeid; Delenne, Jean-Yves; Radjai, Farhang

    2017-06-01

    In soft particle materials such as metallic powders the particles can undergo large deformations without rupture. The large elastic or plastic deformations of the particles are expected to strongly affect the mechanical properties of these materials compared to the hard particle materials more often considered in research on granular materials. Herein, two numerical approaches are proposed for the simulation of soft granular systems: (i) an implicit formulation of the Material Point Method (MPM) combined with the Contact Dynamics (CD) method to deal with contact interactions, and (ii) the Bonded Particle Model (BPM), in which each deformable particle is modeled as an aggregate of rigid primary particles using the CD method. These two approaches allow us to simulate the compaction of an assembly of elastic or plastic particles. By analyzing the uniaxial compaction of 2D soft particle packings, we investigate the effects of particle shape change on the stress-strain relationship and volume change behavior as well as the evolution of the microstructure.

  15. Large-eddy simulation of turbulent cavitating flow in a micro channel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Egerer, Christian P., E-mail: christian.egerer@aer.mw.tum.de; Hickel, Stefan; Schmidt, Steffen J.

    2014-08-15

    Large-eddy simulations (LES) of cavitating flow of a Diesel-fuel-like fluid in a generic throttle geometry are presented. Two-phase regions are modeled by a parameter-free thermodynamic equilibrium mixture model, and compressibility of the liquid and the liquid-vapor mixture is taken into account. The Adaptive Local Deconvolution Method (ALDM), adapted for cavitating flows, is employed for discretizing the convective terms of the Navier-Stokes equations for the homogeneous mixture. ALDM is a finite-volume-based implicit LES approach that merges physically motivated turbulence modeling and numerical discretization. Validation of the numerical method is performed for a cavitating turbulent mixing layer. Comparisons with experimental data of the throttle flow at two different operating conditions are presented. The LES with the employed cavitation modeling predicts relevant flow and cavitation features accurately within the uncertainty range of the experiment. The turbulence structure of the flow is further analyzed with an emphasis on the interaction between cavitation and coherent motion, and on the statistically averaged-flow evolution.

  16. Large Area MEMS Based Ultrasound Device for Cancer Detection.

    PubMed

    Wodnicki, Robert; Thomenius, Kai; Hooi, Fong Ming; Sinha, Sumedha P; Carson, Paul L; Lin, Der-Song; Zhuang, Xuefeng; Khuri-Yakub, Pierre; Woychik, Charles

    2011-08-21

    We present image results obtained using a prototype ultrasound array which demonstrates the fundamental architecture for a large area MEMS based ultrasound device for detection of breast cancer. The prototype array consists of a tiling of capacitive Micro-Machined Ultrasound Transducers (cMUTs) which have been flip-chip attached to a rigid organic substrate. The pitch of the cMUT elements is 185 µm and the operating frequency is nominally 9 MHz. The spatial resolution of the new probe is comparable to production PZT probes; however, the sensitivity is reduced by conditions that should be correctable. Simulated opposed-view image registration and speed-of-sound volume reconstruction results for ultrasound in the mammographic geometry are also presented.

  17. Origin of the cosmic network in {Lambda}CDM: Nature vs nurture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shandarin, Sergei; Habib, Salman; Heitmann, Katrin

    The large-scale structure of the Universe, as traced by the distribution of galaxies, is now being revealed by large-volume cosmological surveys. The structure is characterized by galaxies distributed along filaments, the filaments connecting in turn to form a percolating network. Our objective here is to quantitatively specify the underlying mechanisms that drive the formation of the cosmic network: By combining percolation-based analyses with N-body simulations of gravitational structure formation, we elucidate how the network has its origin in the properties of the initial density field (nature) and how its contrast is then amplified by the nonlinear mapping induced by the gravitational instability (nurture).

  18. Note: Nonpolar solute partial molar volume response to attractive interactions with water.

    PubMed

    Williams, Steven M; Ashbaugh, Henry S

    2014-01-07

    The impact of attractive interactions on the partial molar volumes of methane-like solutes in water is characterized using molecular simulations. Attractions account for a significant 20% volume drop between a repulsive Weeks-Chandler-Andersen and full Lennard-Jones description of methane interactions. The response of the volume to interaction perturbations is characterized by linear fits to our simulations and a rigorous statistical thermodynamic expression for the derivative of the volume to increasing attractions. While a weak non-linear response is observed, an average effective slope accurately captures the volume decrease. This response, however, is anticipated to become more non-linear with increasing solute size.

  20. Evidence for X(3872) from DD* scattering on the lattice.

    PubMed

    Prelovsek, Sasa; Leskovec, Luka

    2013-11-08

    A candidate for the charmonium(like) state X(3872) is found 11 ± 7 MeV below the DD* threshold using a dynamical Nf = 2 lattice simulation with J^PC = 1^(++) and I = 0. This is the first lattice simulation that establishes a candidate for X(3872) in addition to the nearby scattering states DD* and J/ψω, which inevitably have to be present in dynamical QCD. We extract a large and negative DD* scattering length a0(DD*) = -1.7 ± 0.4 fm and the effective range r0(DD*) = 0.5 ± 0.1 fm, but their reliable determination will have to wait for a simulation on a larger volume. In the I = 1 channel, only the DD* and J/ψρ scattering states are found and no candidate for X(3872). This is in agreement with the interpretation that X(3872) is dominantly I = 0, while its small I = 1 component arises solely from isospin breaking and is therefore absent in our simulation with m_u = m_d.
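
    The quoted scattering length and effective range enter through the standard effective-range expansion; sign conventions vary between authors, and the following is a sketch in the convention where a bound state corresponds to a pole of the scattering amplitude at imaginary momentum k = iκ:

    ```latex
    k\cot\delta_0(k) \;=\; \frac{1}{a_0} + \frac{1}{2}\,r_0\,k^{2},
    \qquad
    \left. k\cot\delta_0(k) = ik \right|_{k = i\kappa}
    \;\Longrightarrow\;
    \frac{1}{a_0} - \frac{1}{2}\,r_0\,\kappa^{2} = -\kappa .
    ```

    Inserting a0 ≈ -1.7 fm and r0 ≈ 0.5 fm gives a binding momentum κ ≈ 0.7 fm⁻¹ and hence a binding energy B = κ²/(2μ_DD*) of roughly 10 MeV, consistent with the 11 ± 7 MeV below threshold quoted above.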

  1. A Comparison of Compressed Sensing and Sparse Recovery Algorithms Applied to Simulation Data

    DOE PAGES

    Fan, Ya Ju; Kamath, Chandrika

    2016-09-01

    The move toward exascale computing for scientific simulations is placing new demands on compression techniques. It is expected that the I/O system will not be able to support the volume of data that is expected to be written out. To enable quantitative analysis and scientific discovery, we are interested in techniques that compress high-dimensional simulation data and can provide perfect or near-perfect reconstruction. In this paper, we explore the use of compressed sensing (CS) techniques to reduce the size of the data before they are written out. Using large-scale simulation data, we investigate how the sufficient sparsity condition and the contrast in the data affect the quality of reconstruction and the degree of compression. Also, we provide suggestions for the practical implementation of CS techniques and compare them with other sparse recovery methods. Finally, our results show that despite longer times for reconstruction, compressed sensing techniques can provide near perfect reconstruction over a range of data with varying sparsity.
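
    As a concrete example of the kind of sparse recovery compared in the paper, here is a minimal orthogonal matching pursuit (OMP), one standard CS reconstruction algorithm; this is a sketch, not the authors' implementation, and the sensing matrix and sparsity level are made up.

    ```python
    import numpy as np

    def omp(A, y, k):
        """Orthogonal matching pursuit: greedily recover a k-sparse x
        from compressed measurements y = A @ x."""
        residual, support = y.copy(), []
        for _ in range(k):
            # pick the column most correlated with the current residual
            j = int(np.argmax(np.abs(A.T @ residual)))
            support.append(j)
            # least-squares fit on the selected support, then update residual
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x = np.zeros(A.shape[1])
        x[support] = coef
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 100)) / np.sqrt(50)   # random sensing matrix
    x_true = np.zeros(100)
    x_true[[5, 37, 80]] = [2.0, -1.5, 1.0]             # 3-sparse signal
    x_hat = omp(A, A @ x_true, k=3)
    ```

    The "sufficient sparsity condition" discussed above is exactly what makes such a reconstruction exact: with too few measurements relative to the sparsity, the greedy selection picks wrong columns and the reconstruction degrades.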

  3. Confidence intervals for differences between volumes under receiver operating characteristic surfaces (VUS) and generalized Youden indices (GYIs).

    PubMed

    Yin, Jingjing; Nakas, Christos T; Tian, Lili; Reiser, Benjamin

    2018-03-01

    This article explores both existing and new methods for the construction of confidence intervals for differences of indices of diagnostic accuracy of competing pairs of biomarkers in three-class classification problems and fills the methodological gaps for both parametric and non-parametric approaches in the receiver operating characteristic surface framework. The most widely used such indices are the volume under the receiver operating characteristic surface and the generalized Youden index. We describe implementation of all methods and offer insight regarding the appropriateness of their use through a large simulation study with different distributional and sample size scenarios. Methods are illustrated using data from the Alzheimer's Disease Neuroimaging Initiative study, where assessment of cognitive function naturally results in a three-class classification setting.
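
    The volume under the ROC surface has a simple nonparametric estimator: the fraction of one-marker-value-from-each-class triples that are correctly ordered. A sketch follows; ties are ignored here (they are often counted with weight 1/2), and the marker values are made up.

    ```python
    from itertools import product

    def vus(x, y, z):
        """Nonparametric volume under the ROC surface for a three-class
        problem: empirical P(X < Y < Z) over all cross-class triples."""
        hits = sum(xi < yi < zi for xi, yi, zi in product(x, y, z))
        return hits / (len(x) * len(y) * len(z))

    # hypothetical marker values for three ordered diagnostic groups
    healthy, mci, diseased = [1.0, 2.0], [3.0, 1.5], [4.0, 3.5]
    v = vus(healthy, mci, diseased)   # 6 correctly ordered triples / 8 = 0.75
    ```

    A VUS of 1/6 corresponds to a useless marker (random ordering of the three classes), which is why confidence intervals for differences in VUS between competing biomarkers are the natural comparison tool.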

  4. Monte Carlo simulations of flexible polyanions complexing with whey proteins at their isoelectric point.

    PubMed

    de Vries, R

    2004-02-15

    Electrostatic complexation of flexible polyanions with the whey proteins alpha-lactalbumin and beta-lactoglobulin is studied using Monte Carlo simulations. The proteins are considered at their respective isoelectric points. Discrete charges on the model polyelectrolytes and proteins interact through Debye-Hückel potentials. Protein excluded volume is taken into account through a coarse-grained model of the protein shape. Consistent with experimental results, it is found that alpha-lactalbumin complexes much more strongly than beta-lactoglobulin. For alpha-lactalbumin, strong complexation is due to localized binding to a single large positive "charge patch," whereas for beta-lactoglobulin, weak complexation is due to diffuse binding to multiple smaller charge patches.
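
    The screened pair interaction used in such simulations has the Debye-Hückel form. A sketch in reduced units, assuming the Bjerrum length of water at room temperature (~0.714 nm) and an illustrative inverse screening length; parameter names are not the paper's.

    ```python
    import numpy as np

    def debye_huckel_energy(q1, q2, r_nm, kappa_per_nm, bjerrum_nm=0.714):
        """Screened Coulomb (Debye-Hueckel) pair energy in units of kT for two
        point charges q1, q2 (in units of e) a distance r_nm apart."""
        return bjerrum_nm * q1 * q2 * np.exp(-kappa_per_nm * r_nm) / r_nm

    # a polyanion monomer near a positive protein charge-patch site: attractive,
    # and the attraction is weakened by salt screening
    u_screened = debye_huckel_energy(-1.0, +1.0, r_nm=1.0, kappa_per_nm=1.04)
    u_bare = debye_huckel_energy(-1.0, +1.0, r_nm=1.0, kappa_per_nm=0.0)
    ```

    Summing such terms over all discrete charge pairs is what makes a single concentrated "charge patch" (alpha-lactalbumin) bind a flexible polyanion much more strongly than the same net charge spread over several diffuse patches (beta-lactoglobulin).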

  5. Applicability of effective fragment potential version 2 - Molecular dynamics (EFP2-MD) simulations for predicting excess properties of mixed solvents

    NASA Astrophysics Data System (ADS)

    Kuroki, Nahoko; Mori, Hirotoshi

    2018-02-01

Effective fragment potential version 2 - molecular dynamics (EFP2-MD) simulations, where EFP2 is a polarizable force field based on ab initio electronic structure calculations, were applied to the water-methanol binary mixture. Comparing EFP2s defined with (aug-)cc-pVXZ (X = D, T) basis sets, it was found that large basis sets are necessary to generate EFP2s sufficiently accurate for predicting mixture properties. It was shown that EFP2-MD can predict the excess molar volume. Since the computational cost of EFP2-MD is far less than that of ab initio MD, the results presented herein demonstrate that EFP2-MD is promising for predicting physicochemical properties of novel mixed solvents.
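
The excess molar volume predicted here is the deviation of the mixture's molar volume from the ideal-mixing value, V_E = V_mix - (x1·V1 + x2·V2). A small illustration; the mixture value below is an assumed number, not a result from the paper:

```python
def excess_molar_volume(x1, v_mix, v1, v2):
    """Excess molar volume V_E = V_mix - (x1*V1 + x2*V2), the deviation
    of the mixture's molar volume from ideal mixing.
    x1: mole fraction of component 1; volumes in cm^3/mol."""
    x2 = 1.0 - x1
    return v_mix - (x1 * v1 + x2 * v2)

# Equimolar water-methanol: the ideal value would be
# 0.5*18.07 + 0.5*40.73 = 29.40 cm^3/mol, so an assumed measured
# 28.4 cm^3/mol gives a negative (contracting) excess volume.
print(excess_molar_volume(0.5, 28.4, 18.07, 40.73))  # -1.0 cm^3/mol
```

Water-methanol mixtures indeed contract on mixing (negative V_E), which is the property EFP2-MD is shown to reproduce.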

  6. Technical support package: Large, easily deployable structures. NASA Tech Briefs, Fall 1982, volume 7, no. 1

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Design and test data for packaging, deploying, and assembling structures for near term space platform systems, were provided by testing light type hardware in the Neutral Buoyancy Simulator. An optimum or near optimum structural configuration for varying degrees of deployment utilizing different levels of EVA and RMS was achieved. The design of joints and connectors and their lock/release mechanisms were refined to improve performance and operational convenience. The incorporation of utilities into structural modules to determine their effects on packaging and deployment was evaluated. By simulation tests, data was obtained for stowage, deployment, and assembly of the final structural system design to determine construction timelines, and evaluate system functioning and techniques.

  7. Studying Turbulence Using Numerical Simulation Databases - IX: Proceedings of the 2002 Summer Program

    NASA Technical Reports Server (NTRS)

    Bradshaw, Peter (Editor); Rogers, Michael M. (Technical Monitor)

    2002-01-01

    The ninth Summer Program of the Center for Turbulence Research was held during the period July 29th - August 23rd, 2002. The increase in number of participants, noted in the Preface to the Proceedings of the 2000 Program, continues: this year there were 50 participants from ten countries, and 30 hosts from Stanford and NASA-Ames. This Proceedings volume contains 32 papers that span a wide range of topics and an enormous range of physical scales. The papers have been divided into seven groups: Acoustics, RANS modeling, Combustion, Large-eddy simulation (LES), LES Numerics, Stratified Flows, and Fundamentals, In several cases, a paper could have fitted in more than one group so the classification is somewhat arbitrary.

  8. Low-Dissipation Advection Schemes Designed for Large Eddy Simulations of Hypersonic Propulsion Systems

    NASA Technical Reports Server (NTRS)

    White, Jeffrey A.; Baurle, Robert A.; Fisher, Travis C.; Quinlan, Jesse R.; Black, William S.

    2012-01-01

The 2nd-order upwind inviscid flux scheme implemented in the multi-block, structured grid, cell centered, finite volume, high-speed reacting flow code VULCAN has been modified to reduce numerical dissipation. This modification was motivated by the desire to improve the code's ability to perform large eddy simulations. The reduction in dissipation was accomplished through a hybridization of non-dissipative and dissipative discontinuity-capturing advection schemes that reduces numerical dissipation while maintaining the ability to capture shocks. A methodology for constructing hybrid advection schemes that blends non-dissipative fluxes, consisting of linear combinations of divergence and product rule forms discretized using 4th-order symmetric operators, with dissipative, 3rd- or 4th-order reconstruction-based upwind flux schemes was developed and implemented. A series of benchmark problems of increasing spatial and fluid dynamical complexity were utilized to examine the ability of the candidate schemes to resolve and propagate structures typical of turbulent flow, their discontinuity-capturing capability, and their robustness. A realistic geometry typical of a high-speed propulsion system flowpath was computed using the most promising of the examined schemes and was compared with available experimental data to demonstrate simulation fidelity.
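
The central idea of such hybridization can be shown on the simplest possible case: a blend of a symmetric (non-dissipative) flux and an upwind (dissipative) flux for scalar linear advection, weighted by a shock sensor. This is an illustrative low-order sketch only; VULCAN's actual scheme uses 4th-order symmetric operators and high-order reconstruction-based upwind fluxes:

```python
def hybrid_flux(uL, uR, a=1.0, sigma=0.0):
    """Blend of a non-dissipative central flux and a dissipative upwind
    flux for linear advection f(u) = a*u. sigma in [0, 1] plays the role
    of a shock sensor: 0 away from discontinuities (central, low
    dissipation), 1 at a captured shock (fully upwind)."""
    f_central = 0.5 * a * (uL + uR)                    # symmetric, no dissipation
    f_upwind  = f_central - 0.5 * abs(a) * (uR - uL)   # adds upwind dissipation
    return (1.0 - sigma) * f_central + sigma * f_upwind

print(hybrid_flux(1.0, 0.0, sigma=0.0))  # 0.5  (pure central)
print(hybrid_flux(1.0, 0.0, sigma=1.0))  # 1.0  (pure upwind for a > 0)
```

Away from shocks the sensor keeps sigma near zero, so turbulent structures see almost no artificial dissipation; near discontinuities sigma rises and the upwind term restores shock-capturing robustness.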

  9. Atomisation and droplet formation mechanisms in a model two-phase mixing layer

    NASA Astrophysics Data System (ADS)

    Zaleski, Stephane; Ling, Yue; Fuster, Daniel; Tryggvason, Gretar

    2017-11-01

We study atomization in a turbulent two-phase mixing layer inspired by the Grenoble air-water experiments. A planar gas jet of large velocity is emitted on top of a planar liquid jet of smaller velocity. The density and momentum ratios are both set at 20 in the numerical simulation in order to ease the computation. We use a Volume-Of-Fluid method with good parallelisation properties, implemented in our code http://parissimulator.sf.net. Our simulations show two distinct droplet formation mechanisms: one in which thin liquid sheets are punctured to form rapidly expanding holes, and another in which ligaments of irregular shape form and break up in a manner similar, but not identical, to jets undergoing Rayleigh-Plateau-Savart instabilities. Observed distributions of particle sizes are extracted for a sequence of ever more refined grids, the largest containing approximately eight billion points. Although their accuracy is limited at small sizes by the grid resolution and at large sizes by statistical effects, the distributions overlap in the central region. The observed distributions are much closer to log-normal distributions than to gamma distributions, as is also the case for experiments.
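
The log-normal versus gamma distinction matters mostly in the large-droplet tail: matched to the same mean and variance, the gamma density decays exponentially while the log-normal decays only like exp(-(log d)²), so the log-normal predicts far more large droplets. A sketch with assumed moments, not the paper's data:

```python
import math

def lognormal_pdf(d, mu, sigma):
    """Log-normal droplet-size density."""
    return math.exp(-(math.log(d) - mu) ** 2 / (2 * sigma ** 2)) \
        / (d * sigma * math.sqrt(2 * math.pi))

def gamma_pdf(d, k, theta):
    """Gamma droplet-size density with shape k and scale theta."""
    return d ** (k - 1) * math.exp(-d / theta) / (math.gamma(k) * theta ** k)

# Match both to the same mean m and variance v (method of moments),
# then compare their large-size tails.
m, v = 1.0, 0.5
sigma2 = math.log(1 + v / m ** 2)
mu = math.log(m) - sigma2 / 2
k, theta = m ** 2 / v, v / m
for d in (1.0, 5.0, 10.0):
    print(d, lognormal_pdf(d, mu, math.sqrt(sigma2)), gamma_pdf(d, k, theta))
```

At ten mean diameters the log-normal density is orders of magnitude above the gamma density, which is why tail statistics discriminate between the two candidate laws.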

  10. A low-dissipation monotonicity-preserving scheme for turbulent flows in hydraulic turbines

    NASA Astrophysics Data System (ADS)

    Yang, L.; Nadarajah, S.

    2016-11-01

    The objective of this work is to improve the inherent dissipation of the numerical schemes under the framework of a Reynolds-averaged Navier-Stokes (RANS) simulation. The governing equations are solved by the finite volume method with the k-ω SST turbulence model. Instead of the van Albada limiter, a novel eddy-preserving limiter is employed in the MUSCL reconstructions to minimize the dissipation of the vortex. The eddy-preserving procedure inactivates the van Albada limiter in the swirl plane and reduces the artificial dissipation to better preserve vortical flow structures. Steady and unsteady simulations of turbulent flows in a straight channel and a straight asymmetric diffuser are demonstrated. Profiles of velocity, Reynolds shear stress and turbulent kinetic energy are presented and compared against large eddy simulation (LES) and/or experimental data. Finally, comparisons are made to demonstrate the capability of the eddy-preserving limiter scheme.
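
The van Albada limiter referred to above has the closed form phi(r) = (r² + r)/(r² + 1), applied to the ratio r of consecutive solution gradients in the MUSCL reconstruction. The sketch below shows it together with the eddy-preserving idea of deactivating the limiter in the swirl plane; the in_swirl_plane flag is a schematic stand-in for the paper's actual detection logic:

```python
def van_albada(r):
    """van Albada slope limiter, phi(r) = (r^2 + r) / (r^2 + 1),
    where r is the ratio of consecutive solution gradients."""
    return (r * r + r) / (r * r + 1.0)

def eddy_preserving(r, in_swirl_plane):
    """Sketch of the eddy-preserving idea: use the unlimited slope
    (phi = 1) in the swirl plane so vortical structures are not
    smeared, and fall back to van Albada elsewhere. The swirl-plane
    test itself is an assumption of this sketch."""
    return 1.0 if in_swirl_plane else van_albada(r)

print(van_albada(1.0))  # 1.0: smooth data, full second-order slope
print(van_albada(0.0))  # 0.0: at an extremum, the slope is suppressed
```

Suppressing the limiter where the flow is smooth but swirling removes the artificial dissipation that would otherwise damp the resolved vortex cores.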

  11. Simulations of stretching a flexible polyelectrolyte with varying charge separation

    DOE PAGES

    Stevens, Mark J.; Saleh, Omar A.

    2016-07-22

We calculated the force-extension curves for a flexible polyelectrolyte chain with varying charge separations by performing Monte Carlo simulations of a 5000 bead chain using a screened Coulomb interaction. At all charge separations, the force-extension curves exhibit a Pincus-like scaling regime at intermediate forces and a logarithmic regime at large forces. As the charge separation increases, the Pincus regime shifts to a larger range of forces and the logarithmic regime starts at larger forces. We also found that the force-extension curve for the corresponding neutral chain has a logarithmic regime. Decreasing the diameter of the beads in the neutral chain simulations removed the logarithmic regime, and the force-extension curve tends to the freely jointed chain limit. In conclusion, this result shows that only excluded volume is required for the high-force logarithmic regime to occur.
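
The freely jointed chain (FJC) limit mentioned at the end has the classical closed form: relative extension equals the Langevin function of the reduced force. A short sketch in reduced units (Kuhn length b and kT set to 1 by default, an assumption for illustration):

```python
import math

def fjc_extension(force, b=1.0, kT=1.0):
    """Relative extension x/L of a freely jointed chain under force f:
    the Langevin function, x/L = coth(f*b/kT) - kT/(f*b), with Kuhn
    length b. The neutral-chain curves in the study approach this
    limit once excluded volume is removed."""
    u = force * b / kT
    return 1.0 / math.tanh(u) - 1.0 / u

print(fjc_extension(0.1))   # ~u/3 at low force (linear response)
print(fjc_extension(10.0))  # approaches full extension at high force
```

Deviations from this curve at high force, such as the logarithmic regime above, therefore signal physics beyond ideal freely jointed statistics, here the bead excluded volume.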

  12. The numerical simulation tool for the MAORY multiconjugate adaptive optics system

    NASA Astrophysics Data System (ADS)

    Arcidiacono, C.; Schreiber, L.; Bregoli, G.; Diolaiti, E.; Foppiani, I.; Agapito, G.; Puglisi, A.; Xompero, M.; Oberti, S.; Cosentino, G.; Lombini, M.; Butler, R. C.; Ciliegi, P.; Cortecchia, F.; Patti, M.; Esposito, S.; Feautrier, P.

    2016-07-01

The Multiconjugate Adaptive Optics RelaY (MAORY) is an Adaptive Optics module to be mounted on the ESO European Extremely Large Telescope (E-ELT). It is a hybrid Natural and Laser Guide Star system that will perform the correction of the atmospheric turbulence volume above the telescope, feeding the Multi-AO Imaging Camera for Deep Observations Near Infrared spectro-imager (MICADO). We developed an end-to-end Monte Carlo adaptive optics simulation tool to investigate the performance of MAORY and its calibration, acquisition, and operation strategies. MAORY will implement Multiconjugate Adaptive Optics combining Laser Guide Star (LGS) and Natural Guide Star (NGS) measurements. The simulation tool implements the various aspects of MAORY in an end-to-end fashion. The code has been developed using IDL and uses libraries in C++ and CUDA for efficiency improvements. Here we recall the code architecture, describe the modelled instrument components, and outline the control strategies implemented in the code.

  13. Fully Implicit Magneto-hydrodynamics Simulations of Coaxial Plasma Accelerators

    DOE PAGES

    Subramaniam, Vivek; Raja, Laxminarayan L.

    2017-01-05

The resistive Magneto-Hydrodynamic (MHD) model describes the behavior of a strongly ionized plasma in the presence of external electric and magnetic fields. We developed a fully implicit MHD simulation tool to solve the resistive MHD governing equations in the context of a cell-centered finite-volume scheme. The primary objective of this study is to use the fully implicit algorithm to obtain insights into the plasma acceleration and jet formation processes in coaxial plasma accelerators: electromagnetic acceleration devices that utilize self-induced magnetic fields to accelerate thermal plasmas to large velocities. We also carry out plasma-surface simulations in order to study the impact interactions when these high-velocity plasma jets impinge on target material surfaces. Scaling studies are carried out to establish some basic functional relationships between the target-stagnation conditions and the current discharged between the coaxial electrodes.

  14. Numerical simulation of transient hypervelocity flow in an expansion tube

    NASA Technical Reports Server (NTRS)

    Jacobs, P. A.

    1992-01-01

Several numerical simulations of the transient flow of helium in an expansion tube are presented in an effort to identify some of the basic mechanisms which cause the noisy test flows seen in experiments. The calculations were performed with an axisymmetric Navier-Stokes code based on a finite volume formulation and upwinding techniques. Although laminar flow and ideal bursting of the diaphragms were assumed, the simulations showed some of the important features seen in experiments. In particular, the discontinuity in tube diameter at the primary diaphragm station introduced a transverse perturbation to the expanding driver gas and this perturbation was seen to propagate into the test gas under some flow conditions. The disturbances seen in the test flow can be characterized as either small amplitude, low frequency noise possibly introduced during shock compression or large amplitude, high frequency noise associated with the passage of the reflected head of the unsteady expansion.

  15. Numerical simulation of transient hypervelocity flow in an expansion tube

    NASA Technical Reports Server (NTRS)

    Jacobs, P. A.

    1992-01-01

Several numerical simulations of the transient flow of helium in an expansion tube are presented. The aim of the exercise is to provide further information on the operational problems of the NASA Langley expansion tube. The calculations were performed with an axisymmetric Navier-Stokes code based on a finite-volume formulation and upwinding techniques. Although laminar flow and ideal bursting of the diaphragms were assumed, the simulations showed some of the important features seen in the experiments. In particular, the discontinuity in the tube diameter at the primary diaphragm station introduced a transverse perturbation to the expanding driver gas, and this perturbation was seen to propagate into the test gas under some flow conditions. The disturbances seen in the test flow can be characterized as either 'small-amplitude' noise possibly introduced during shock compression or 'large-amplitude' noise associated with the passage of the reflected head of the unsteady expansion.

  16. Lung volumes, pulmonary ventilation, and hypoxia following rapid decompression to 60,000 ft (18,288 m).

    PubMed

    Connolly, Desmond M; D'Oyly, Timothy J; McGown, Amanda S; Lee, Vivienne M

    2013-06-01

    Rapid decompressions (RD) to 60,000 ft (18,288 m) were undertaken by six subjects to provide evidence of satisfactory performance of a contemporary, partial pressure assembly life support system for the purposes of flight clearance. A total of 12 3-s RDs were conducted with subjects breathing 56% oxygen (balance nitrogen) at the base (simulated cabin) altitude of 22,500 ft (6858 m), switching to 100% oxygen under 72 mmHg (9.6 kPa) of positive pressure at the final (simulated aircraft) altitude. Respiratory pressures, flows, and gas compositions were monitored continuously throughout. All RDs were completed safely, but one subject experienced significant hypoxia during the minute at final altitude, associated with severe hemoglobin desaturation to a low of 53%. Accurate data on subjects' lung volumes were obtained and individual responses post-RD were reviewed in relation to patterns of pulmonary ventilation. The occurrence of severe hypoxia is explained by hypoventilation in conjunction with unusually large lung volumes (total lung capacity 10.18 L). Subjects' lung volumes and patterns of pulmonary ventilation are critical, but idiosyncratic, determinants of alveolar oxygenation and severity of hypoxia following RD to 60,000 ft (18,288 m). At such extreme altitudes even vaporization of water condensate in the oxygen mask may compromise oxygen delivery. An altitude ceiling of 60,000 ft (18,288 m) is the likely threshold for reliable protection using partial pressure assemblies and aircrew should be instructed to take two deep 'clearing' breaths immediately following RD at such extreme pressure breathing altitudes.
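
The role of hypoventilation in the hypoxia episode above can be illustrated with the standard alveolar gas equation, PAO2 = FiO2·(Pb - PH2O) - PaCO2/RQ. The sketch below uses assumed round numbers (ambient pressure near 54 mmHg at 60,000 ft, textbook PaCO2 and RQ), not the study's measured values:

```python
def alveolar_po2(fio2, pb, paco2=40.0, ph2o=47.0, rq=0.8):
    """Alveolar gas equation: PAO2 = FiO2*(Pb - PH2O) - PaCO2/RQ,
    pressures in mmHg. Default PaCO2, water vapor pressure, and
    respiratory quotient are textbook assumptions."""
    return fio2 * (pb - ph2o) - paco2 / rq

# Ambient pressure at 60,000 ft is roughly 54 mmHg; adding 72 mmHg of
# positive breathing pressure gives about 126 mmHg in the mask.
print(alveolar_po2(1.0, 54.0))         # ambient 100% O2: negative, no alveolar O2 possible
print(alveolar_po2(1.0, 54.0 + 72.0))  # with 72 mmHg positive pressure: positive PAO2
```

The margin is thin: any loss of delivered pressure (or of effective FiO2, e.g. from water condensate in the mask) pushes alveolar PO2 toward zero, consistent with 60,000 ft being the practical ceiling for partial pressure assemblies.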

  17. Compression techniques in tele-radiology

    NASA Astrophysics Data System (ADS)

    Lu, Tianyu; Xiong, Zixiang; Yun, David Y.

    1999-10-01

This paper describes a prototype telemedicine system for remote 3D radiation treatment planning. Because the application involves voluminous medical image data and image streams generated at interactive frame rates, deploying adjustable lossy-to-lossless compression techniques is essential to achieve acceptable performance over various kinds of communication networks. In particular, compression of the data substantially reduces transmission time and therefore allows large-scale radiation distribution simulation and interactive volume visualization using remote supercomputing resources in a timely fashion. The compression algorithms currently used in the software we developed are the JPEG and H.263 lossy methods and Lempel-Ziv (LZ77) lossless methods. Both objective and subjective assessments of the effect of lossy compression on the volume data were conducted. Favorable results were obtained, showing that substantial compression ratios are achievable within distortion tolerance. From our experience, we conclude that 30 dB (PSNR) is about the lower bound for acceptable quality when applying lossy compression to anatomy volume data (e.g. CT). For computer-simulated data, much higher PSNR (up to 100 dB) can be expected. This work not only introduces a novel approach for delivering medical services that will have significant impact on existing cooperative image-based services, but also provides a platform for physicians to assess the effects of lossy compression on the diagnostic and aesthetic appearance of medical imaging.
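
The PSNR quality measure used above is defined as 10·log10(MAX²/MSE) over the reconstructed samples. A minimal sketch with tiny made-up pixel sequences (not the paper's data):

```python
import math

def psnr(original, compressed, max_value=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length
    pixel sequences: PSNR = 10*log10(MAX^2 / MSE)."""
    mse = sum((a - b) ** 2 for a, b in zip(original, compressed)) / len(original)
    if mse == 0:
        return float("inf")  # lossless reconstruction
    return 10.0 * math.log10(max_value ** 2 / mse)

ct_slice = [10, 50, 200, 128]
lossy    = [12, 48, 201, 126]
print(psnr(ct_slice, lossy))  # compare against the ~30 dB acceptability bound
```

A reconstruction scoring above the 30 dB bound cited for anatomy volumes would be judged acceptable under the paper's criterion; lossless coding gives infinite PSNR.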

  18. Decision support system in an international-voice-services business company

    NASA Astrophysics Data System (ADS)

    Hadianti, R.; Uttunggadewa, S.; Syamsuddin, M.; Soewono, E.

    2017-01-01

We consider a problem faced by an international telecommunication services company in maximizing its profit from voice services by controlling cost and business partnership. Competitiveness in this industry is very high, so any efficiency gained from controlling cost and business partnership can help the company survive. The company trades voice traffic with a large number of business partners. There are four trading schemes that can be chosen by this company, namely flat rate, class tiering, volume commitment, and revenue capped. Each scheme has a specific characteristic on the rate and volume deal, where the last three schemes are regarded as strategic schemes to be offered to business partners to ensure incoming traffic volume for both parties. The company and each business partner need to choose an optimal agreement for a certain period of time that maximizes the company's profit. In this agreement, both parties agree to use a certain trading scheme, rate, and rate/volume/revenue deal. A decision support system is therefore needed to give comprehensive information to the sales officers who deal with the business partners. This paper discusses the mathematical model of the optimal decision for incoming traffic volume control, which is part of the analysis needed to build the decision support system. The model is built by first performing data analysis to see how elastic the incoming traffic volume is. Once the level of elasticity is obtained, we derive a mathematical model that can simulate the impact of any trading decision on the revenue of the company. The optimal decision is obtained from these simulation results. To evaluate the performance of the proposed method, we applied our decision model to historical data. A software tool incorporating our methodology is currently under construction.
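
The simulate-and-compare step can be sketched as evaluating each candidate scheme's profit at a forecast traffic volume and picking the best. All scheme names, rates, and fees below are illustrative assumptions, not the company's actual terms or the paper's model:

```python
def profit(scheme, volume, unit_cost):
    """Simulated profit of one hypothetical trading scheme at a given
    incoming traffic volume: margin on carried traffic minus any
    fixed commitment fee."""
    return (scheme["rate"] - unit_cost) * volume - scheme.get("fee", 0.0)

def best_scheme(schemes, volume, unit_cost):
    """Pick the scheme with the highest simulated profit."""
    return max(schemes, key=lambda s: profit(s, volume, unit_cost))

schemes = [
    {"name": "flat_rate",         "rate": 0.050, "fee": 0.0},
    {"name": "volume_commitment", "rate": 0.055, "fee": 4000.0},
]
print(best_scheme(schemes, volume=1_000_000, unit_cost=0.040)["name"])  # volume_commitment
print(best_scheme(schemes, volume=100_000, unit_cost=0.040)["name"])    # flat_rate
```

The optimum flips with volume: fixed-fee schemes only pay off above a break-even traffic level, which is why the elasticity of incoming volume drives the recommendation.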

  19. Simulating the water budget of a Prairie Potholes complex from LiDAR and hydrological models in North Dakota, USA

    USGS Publications Warehouse

    Huang, Shengli; Young, Claudia; Abdul-Aziz, Omar I.; Dahal, Devendra; Feng, Min; Liu, Shuguang

    2013-01-01

Hydrological processes of the wetland complex in the Prairie Pothole Region (PPR) are difficult to model, partly due to a lack of wetland morphology data. We used Light Detection And Ranging (LiDAR) data sets to derive wetland features; we then modelled rainfall, snowfall, snowmelt, runoff, evaporation, the “fill-and-spill” mechanism, shallow groundwater loss, and the effect of wet and dry conditions. For large wetlands with a volume greater than thousands of cubic metres (e.g. about 3000 m³), the modelled water volume agreed fairly well with observations; however, it did not succeed for small wetlands (e.g. volume less than 450 m³). Despite the failure for small wetlands, the modelled water area of the wetland complex coincided well with interpretation of aerial photographs, showing a linear regression with R² of around 0.80 and a mean average error of around 0.55 km². The next step is to improve the water budget modelling for small wetlands.
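
The "fill-and-spill" mechanism can be sketched as a simple bucket: net daily inflow fills the depression up to its LiDAR-derived capacity, and any excess spills downstream. A minimal sketch with assumed numbers, not the paper's full water-budget model:

```python
def fill_and_spill(inflows, capacity, initial=0.0):
    """Minimal daily water-budget bucket for one pothole wetland.
    inflows: net daily inflow (precip + runoff - evaporation, m^3/day);
    capacity: depression storage (m^3). Volume above capacity spills."""
    volume, spills = initial, []
    for net in inflows:
        volume = max(0.0, volume + net)      # storage cannot go negative
        spill = max(0.0, volume - capacity)  # excess leaves downstream
        volume -= spill
        spills.append(spill)
    return volume, spills

vol, spills = fill_and_spill([1000.0, 2500.0, -300.0, 500.0], capacity=3000.0)
print(vol, spills)  # 3000.0 [0.0, 500.0, 0.0, 200.0]
```

Because spill only occurs once the depression is full, errors in the derived capacity matter far more for small wetlands, consistent with the model's failure below a few hundred cubic metres.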

  20. FOSSIL2 energy policy model documentation: FOSSIL2 documentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    1980-10-01

This report discusses the structure, derivations, assumptions, and mathematical formulation of the FOSSIL2 model. Each major facet of the model - supply/demand interactions, industry financing, and production - has been designed to parallel closely the actual cause/effect relationships determining the behavior of the United States energy system. The data base for the FOSSIL2 program is large, as is appropriate for a system dynamics simulation model. When possible, all data were obtained from sources well known to experts in the energy field. Cost and resource estimates are based on DOE data whenever possible. This report presents the FOSSIL2 model at several levels. Volumes II and III of this report list the equations that comprise the FOSSIL2 model, along with variable definitions and a cross-reference list of the model variables. Volume II provides the model equations with each of their variables defined, while Volume III lists the equations, with a one-line definition for each, in a shorter, more readable format.

  1. Charged hadrons in local finite-volume QED+QCD with C⋆ boundary conditions

    NASA Astrophysics Data System (ADS)

    Lucini, B.; Patella, A.; Ramos, A.; Tantalo, N.

    2016-02-01

    In order to calculate QED corrections to hadronic physical quantities by means of lattice simulations, a coherent description of electrically-charged states in finite volume is needed. In the usual periodic setup, Gauss's law and large gauge transformations forbid the propagation of electrically-charged states. A possible solution to this problem, which does not violate the axioms of local quantum field theory, has been proposed by Wiese and Polley, and is based on the use of C⋆ boundary conditions. We present a thorough analysis of the properties and symmetries of QED in isolation and QED coupled to QCD, with C⋆ boundary conditions. In particular we learn that a certain class of electrically-charged states can be constructed in a fully consistent fashion without relying on gauge fixing and without peculiar complications. This class includes single particle states of most stable hadrons. We also calculate finite-volume corrections to the mass of stable charged particles and show that these are much smaller than in non-local formulations of QED.

  2. Study of solid-conversion gaseous detector based on GEM for high energy X-ray industrial CT.

    PubMed

    Zhou, Rifeng; Zhou, Yaling

    2014-01-01

General gaseous ionization detectors are not suitable for high energy X-ray industrial computed tomography (HEICT) because of their inherent limitations, especially low detection efficiency and large volume. The goal of this study was to investigate a new type of gaseous detector to solve these problems. The novel detector uses a metal foil as X-ray converter to improve the conversion efficiency, and a Gas Electron Multiplier (hereinafter "GEM") as electron amplifier to lessen its volume. The detection mechanism and signal formation of the detector are discussed in detail. The conversion efficiency was calculated using the EGSnrc Monte Carlo code, and the transport of photons and the secondary electron avalanche in the detector were simulated with the Maxwell and Garfield codes. The results indicate that this detector has higher conversion efficiency as well as a smaller volume. Theoretically, this kind of detector could be a perfect candidate for replacing conventional detectors in HEICT.

  3. Kinetic attractor phase diagrams of active nematic suspensions: the dilute regime.

    PubMed

    Forest, M Gregory; Wang, Qi; Zhou, Ruhai

    2015-08-28

    Large-scale simulations by the authors of the kinetic-hydrodynamic equations for active polar nematics revealed a variety of spatio-temporal attractors, including steady and unsteady, banded (1d) and cellular (2d) spatial patterns. These particle scale activation-induced attractors arise at dilute nanorod volume fractions where the passive equilibrium phase is isotropic, whereas all previous model simulations have focused on the semi-dilute, nematic equilibrium regime and mostly on low-moment orientation tensor and polarity vector models. Here we extend our previous results to complete attractor phase diagrams for active nematics, with and without an explicit polar potential, to map out novel spatial and dynamic transitions, and to identify some new attractors, over the parameter space of dilute nanorod volume fraction and nanorod activation strength. The particle-scale activation parameter corresponds experimentally to a tunable force dipole strength (so-called pushers with propulsion from the rod tail) generated by active rod macromolecules, e.g., catalysis with the solvent phase, ATP-induced propulsion, or light-activated propulsion. The simulations allow 2d spatial variations in all flow and orientational variables and full spherical orientational degrees of freedom; the attractors correspond to numerical integration of a coupled system of 125 nonlinear PDEs in 2d plus time. The phase diagrams with and without the polar interaction potential are remarkably similar, implying that polar interactions among the rodlike particles are not essential to long-range spatial and temporal correlations in flow, polarity, and nematic order. As a general rule, above a threshold, low volume fractions induce 1d banded patterns, whereas higher yet still dilute volume fractions yield 2d patterns. Again as a general rule, varying activation strength at fixed volume fraction induces novel dynamic transitions. 
First, stationary patterns saturate the instability of the isotropic state, consisting of discrete 1d banded or 2d cellular patterns depending on nanorod volume fraction. Increasing activation strength further induces a sequence of attractor bifurcations, including oscillations superimposed on the 1d and 2d stationary patterns, a uniform translational motion of 1d and 2d oscillating patterns, and periodic switching between 1d and 2d patterns. These results imply that active macromolecular suspensions are capable of long-range spatial and dynamic organization at isotropic equilibrium concentrations, provided particle-scale activation is sufficiently strong.

  4. Towards data warehousing and mining of protein unfolding simulation data.

    PubMed

    Berrar, Daniel; Stahl, Frederic; Silva, Candida; Rodrigues, J Rui; Brito, Rui M M; Dubitzky, Werner

    2005-10-01

The prediction of protein structure and the precise understanding of protein folding and unfolding processes remain one of the greatest challenges in structural biology and bioinformatics. Computer simulations based on molecular dynamics (MD) are at the forefront of the effort to gain a deeper understanding of these complex processes. Currently, these MD simulations are usually on the order of tens of nanoseconds, generate a large amount of conformational data and are computationally expensive. More and more groups run such simulations and generate a myriad of data, which raises new challenges in managing and analyzing these data. Because of the vast range of proteins researchers want to study and simulate, the computational effort needed to generate data, the large data volumes involved, and the different types of analyses scientists need to perform, it is desirable to provide a public repository allowing researchers to pool and share protein unfolding data. To adequately organize, manage, and analyze the data generated by unfolding simulation studies, we designed a data warehouse system that is embedded in a grid environment to facilitate the seamless sharing of available computer resources and thus enable many groups to share complex molecular dynamics simulations on a more regular basis. To gain insight into the conformational fluctuations and stability of the monomeric forms of the amyloidogenic protein transthyretin (TTR), molecular dynamics unfolding simulations of the monomer of human TTR have been conducted. Trajectory data and meta-data of the wild-type (WT) protein and the highly amyloidogenic variant L55P-TTR represent the test case for the data warehouse. Web and grid services, especially pre-defined data mining services that can run on or 'near' the data repository of the data warehouse, are likely to play a pivotal role in the analysis of molecular dynamics unfolding data.

  5. Evaluation of HIFU-induced lesion region using temperature threshold and equivalent thermal dose methods

    NASA Astrophysics Data System (ADS)

    Chang, Shihui; Xue, Fanfan; Zhou, Wenzheng; Zhang, Ji; Jian, Xiqi

    2017-03-01

Usually, numerical simulation is used to predict the acoustic field and temperature distribution of high intensity focused ultrasound (HIFU). In this paper, the simulated lesion volumes obtained by a temperature threshold (TRT) of 60 °C and an equivalent thermal dose (ETD) of 240 min were compared with results obtained from animal tissue experiments in vitro. In the simulation, the calculation model was established according to the in vitro tissue experiment, and the Finite Difference Time Domain (FDTD) method was used to calculate the acoustic field and temperature distribution in bovine liver via the Westervelt formula and the Pennes bio-heat transfer equation; the non-linear characteristics of the ultrasound were considered. In the experiment, fresh bovine liver was exposed for 8 s, 10 s, and 12 s under different power conditions (150 W, 170 W, 190 W, 210 W), and each exposure was repeated 6 times under the same dose. After the exposures, the liver was sliced and photographed every 0.2 mm, and the area of the lesion region in every photo was calculated. Each area value was then multiplied by 0.2 mm and summed to obtain an approximate volume of the lesion region. The comparison shows that the lesion volume calculated by TRT 60 °C in simulation was much closer to the lesion volume obtained in experiment; the volume of the region above 60 °C was larger than the experimental results, but the volume deviation did not exceed 10%. The volume of the lesion region calculated by ETD 240 min was larger than that calculated by TRT 60 °C in simulation, with volume deviations ranging from 4.9% to 23.7%.
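
The 240-min equivalent thermal dose is conventionally computed as cumulative equivalent minutes at 43 °C: CEM43 = Σ R^(43-T)·dt, with R = 0.5 for T ≥ 43 °C and R = 0.25 below. A minimal sketch relating it to the 60 °C threshold used above (the temperature history is an assumed example):

```python
def cem43(temps, dt_seconds):
    """Cumulative equivalent minutes at 43 C (the usual equivalent
    thermal dose): CEM43 = sum R^(43 - T) * dt, with R = 0.5 for
    T >= 43 C and R = 0.25 below. temps: per-step temperatures (C);
    dt_seconds: time step. Returns equivalent minutes at 43 C."""
    total = 0.0
    for t in temps:
        r = 0.5 if t >= 43.0 else 0.25
        total += (r ** (43.0 - t)) * (dt_seconds / 60.0)
    return total

# Ten seconds at a constant 60 C already far exceeds the 240-min
# lesion criterion, while body temperature accumulates almost none:
print(cem43([60.0] * 10, dt_seconds=1.0))
print(cem43([37.0] * 10, dt_seconds=1.0))
```

Because the dose doubles with every degree above 43 °C, tissue that briefly exceeds 60 °C clears the 240-min criterion almost instantly, which is why the ETD contour encloses a larger region than the 60 °C isotherm.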

  6. Technical Note: Is bulk electron density assignment appropriate for MRI-only based treatment planning for lung cancer?

    PubMed

    Prior, Phil; Chen, Xinfeng; Gore, Elizabeth; Johnstone, Candice; Li, X Allen

    2017-07-01

    MRI-based treatment planning in radiation therapy (RT) is prohibitive, in part, due to the lack of electron density (ED) information within the image. The dosimetric differences between MRI- and CT-based planning for intensity modulated RT (IMRT) of lung cancer were investigated to assess the appropriateness of bulk ED assignment. Planning CTs acquired for six representative lung cancer patients were used to generate bulk ED IMRT plans. To avoid the effect of anatomic differences between CT and MRI, "simulated MRI-based plans" were generated by forcing the relative ED (rED) to water on CT-delineated structures using organ specific values from the ICRU Report 46 and using the mean rED value of the internal target volume (ITV) from the planning CT. The "simulated MRI-based plans" were generated using a research planning system (Monaco v5.09.07a, Elekta, AB) and employing Monte Carlo dose calculation. The following dose-volume-parameters (DVPs) were collected from both the "simulated MRI-based plans" and the original planning CT: D 95 , the dose delivered to 95% of the ITV & planning target volume (PTV), D 5 and V 5 , the volume of normal lung irradiated ≥5 Gy. The percent point difference and relative dose difference were used for comparison with the CT based plan for V 5 and D 95 respectively. A total of five plans per patient were generated; three with the ITV rED (rED ITV ) = 1.06, 1.0 and the mean value from the planning CT while the lung rED (rED lung ) was fixed at the ICRU value of 0.26 and two with rED lung = 0.1 and 0.5 while the rED ITV was fixed to the mean value from the planning CT. Noticeable differences in the ITV and PTV DVPs were observed. Variations of the normal lung V 5 can be as large as 9.6%. In some instances, varying the rED ITV between rED mean and 1.06 resulted in D 95 increases ranging from 3.9% to 6.3%. Bulk rED assignment on normal lung affected the DVPs of the ITV and PTV by 4.0-9.8% and 0.3-19.6% respectively. 
Dose volume histograms were presented for representative cases where the variations in the DVPs were found to be very large or very small. The commonly used bulk rED assignment in MRI-only based planning may not be appropriate for lung cancer. A voxel based method, e.g., synthetic CT generated from MRI data, is likely required for dosimetrically accurate MR-based planning for lung cancer. © 2017 American Association of Physicists in Medicine.
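The two comparison metrics named above can be sketched in a few lines; the plan values below are hypothetical illustrations, not numbers from the study:

```python
def percent_point_diff(v5_plan, v5_ct):
    """V5 values are already percentages of lung volume, so bulk-ED and
    CT-based plans are compared by simple subtraction (percentage points)."""
    return v5_plan - v5_ct

def relative_dose_diff(d95_plan, d95_ct):
    """D95 values are doses, so plans are compared relative to the
    CT-based reference (percent)."""
    return 100.0 * (d95_plan - d95_ct) / d95_ct

# Hypothetical plan values:
v5_shift = percent_point_diff(34.6, 25.0)    # V5 shift in percentage points
d95_shift = relative_dose_diff(62.4, 60.0)   # D95 shift relative to CT, percent
```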

  7. A Coupled Approach with Stochastic Rainfall-Runoff Simulation and Hydraulic Modeling for Extreme Flood Estimation on Large Watersheds

    NASA Astrophysics Data System (ADS)

    Paquet, E.

    2015-12-01

The SCHADEX method aims at estimating the distribution of peak and daily discharges up to extreme quantiles. It couples a precipitation probabilistic model based on weather patterns with a stochastic rainfall-runoff simulation process using a conceptual lumped model, which allows an exhaustive set of hydrological conditions and watershed responses to intense rainfall events to be explored. Since 2006 it has been widely applied in France to about one hundred watersheds for dam spillway design, and also abroad (Norway, Canada and central Europe, among others). However, its application to large watersheds (above 10 000 km²) faces some significant issues, the most important being the spatial heterogeneity of rainfall and hydrological processes and the flood peak damping due to hydraulic effects (flood plains, natural or man-made embankments). This led to the development of an extreme flood simulation framework for large and heterogeneous watersheds, based on the SCHADEX method. Its main features are: division of the large (or main) watershed into several smaller sub-watersheds, where the spatial homogeneity of the hydro-meteorological processes can reasonably be assumed and where the hydraulic effects can be neglected; identification of pilot watersheds where discharge data are available and rainfall-runoff models can thus be calibrated, which then act as parameter donors to ungauged watersheds; spatially coherent stochastic simulations for all the sub-watersheds at the daily time step; selection of simulated events for a given return period (according to the distribution of runoff volumes at the scale of the main watershed); generation of the complete hourly hydrographs at each of the sub-watershed outlets; and routing to the main outlet with hydraulic 1D or 2D models. The presentation will be illustrated with the case study of the Isère watershed (9981 km²), a French snow-driven watershed. 
The main novelties of this method will be underlined, as well as its perspectives and future improvements.
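The event-selection step, picking simulated events whose runoff volume matches a target return period, can be sketched as follows (the function and tolerance are illustrative, not part of SCHADEX itself):

```python
def select_events(volumes, return_period_years, tol=0.01):
    """Pick simulated events whose runoff volume lies within a relative
    tolerance of the empirical quantile for the given return period.
    'volumes' holds one simulated annual-maximum runoff volume per
    synthetic year of the stochastic simulation."""
    p = 1.0 - 1.0 / return_period_years          # non-exceedance probability
    ranked = sorted(volumes)
    target = ranked[int(p * (len(ranked) - 1))]  # empirical quantile
    return [v for v in volumes if abs(v - target) <= tol * target]

# 1000 synthetic years with volumes 1..1000: the 100-year event sits near 990
events = select_events(list(range(1, 1001)), 100)
```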

  8. Quantitative risk assessment integrated with process simulator for a new technology of methanol production plant using recycled CO₂.

    PubMed

    Di Domenico, Julia; Vaz, Carlos André; de Souza, Maurício Bezerra

    2014-06-15

Process simulators can support quantitative risk assessment (QRA) by reducing the expert time and the large volumes of data required, and their use is mandatory in the case of a future plant. This work illustrates the advantages of this association by integrating UNISIM DESIGN simulation and QRA to investigate the acceptability of a new technology for a methanol production plant in a region. The simulated process was based on the hydrogenation of chemically sequestered carbon dioxide, demanding stringent operational conditions (high pressures and temperatures) and involving the production of hazardous materials. The estimation of the consequences was performed using the PHAST software, version 6.51. QRA results were expressed in terms of individual and social risks. Compared to existing tolerance levels, the risks were considered tolerable at nominal operating conditions of the plant. The use of the simulator in association with the QRA also allowed the risk to be tested at new operating conditions in order to delimit safe regions for the plant. Copyright © 2014 Elsevier B.V. All rights reserved.
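As a sketch of how individual risk is typically aggregated in a QRA (the scenario numbers are hypothetical; the study obtained its consequence terms from PHAST):

```python
def individual_risk(scenarios):
    """Location-specific individual risk: the sum over accident scenarios
    of outcome frequency (per year) times the probability of fatality at
    that location, as obtained from consequence modelling."""
    return sum(freq * p_fatality for freq, p_fatality in scenarios)

# Two hypothetical scenarios: (frequency per year, fatality probability)
risk = individual_risk([(1e-4, 0.1), (5e-6, 0.8)])   # per year
```

The resulting value is then compared against the tolerance levels mentioned in the abstract.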

  9. Determining erosion relevant soil characteristics with a small-scale rainfall simulator

    NASA Astrophysics Data System (ADS)

    Schindewolf, M.; Schmidt, J.

    2009-04-01

The use of soil erosion models is of great importance in soil and water conservation. Routine application of these models on the regional scale is not least limited by their high parameter demands. Although the EROSION 3D simulation model operates with a comparably low number of parameters, some of the model input variables can only be determined by rainfall simulation experiments. The existing database of EROSION 3D was created in the mid-1990s based on large-scale rainfall simulation experiments on 22 m × 2 m experimental plots. Up to now this database does not cover all soil and field conditions adequately. Therefore a new campaign of experiments is essential to produce additional information, especially with respect to the effects of new soil management practices (e.g. long-term conservation tillage, no-tillage). The rainfall simulator used in the actual campaign consists of 30 identical modules, which are equipped with oscillating rainfall nozzles. Veejet 80/100 nozzles (Spraying Systems Co., Wheaton, IL) are used in order to ensure the best possible comparability to natural rainfall with respect to raindrop size distribution and momentum transfer. Central objectives for the small-scale rainfall simulator are efficient application and the provision of results comparable to large-scale rainfall simulation experiments. A crucial problem in using the small-scale simulator is the restriction to rather small volume rates of surface runoff. Under these conditions soil detachment is governed by raindrop impact, so the impact of surface runoff on particle detachment cannot be reproduced adequately by a small-scale rainfall simulator. With this problem in mind, this paper presents an enhanced small-scale simulator which allows a virtual multiplication of the plot length by feeding additional sediment-loaded water to the plot from upstream. It is thus possible to overcome the plot length limitation of 3 m while reproducing nearly the same flow conditions as in rainfall experiments on standard plots. The simulator has been extensively applied to plots of different soil types, crop types and management systems. The comparison with existing data sets obtained by large-scale rainfall simulations shows that the results can adequately be reproduced by the applied combination of small-scale rainfall simulator and sediment-loaded water influx.

  10. A new multicompartmental reaction-diffusion modeling method links transient membrane attachment of E. coli MinE to E-ring formation.

    PubMed

    Arjunan, Satya Nanda Vel; Tomita, Masaru

    2010-03-01

Many important cellular processes are regulated by reaction-diffusion (RD) of molecules that takes place both in the cytoplasm and on the membrane. To model and analyze such multicompartmental processes, we developed a lattice-based Monte Carlo method, Spatiocyte, that supports RD in volume and surface compartments at single-molecule resolution. Stochasticity in RD and the excluded volume effect brought about by intracellular molecular crowding, both of which can significantly affect RD and thus cellular processes, are also supported. We verified the method by comparing simulation results of diffusion, irreversible and reversible reactions with the predicted analytical and best available numerical solutions. Moreover, to directly compare the localization patterns of molecules in fluorescence microscopy images with simulation, we devised a visualization method that mimics the microphotography process by showing the trajectory of simulated molecules averaged according to the camera exposure time. In the rod-shaped bacterium Escherichia coli, the division site is suppressed at the cell poles by periodic pole-to-pole oscillations of the Min proteins (MinC, MinD and MinE) arising from carefully orchestrated RD in both cytoplasm and membrane compartments. Using Spatiocyte we could model and reproduce the in vivo MinDE localization dynamics by accounting for the previously reported properties of MinE. Our results suggest that the MinE ring, which is essential in preventing polar septation, is largely composed of MinE that remains transiently attached to the membrane independently, after being recruited by MinD. Overall, Spatiocyte allows simulation and visualization of complex spatial and reaction-diffusion-mediated cellular processes in volumes and surfaces. As we showed, it can potentially provide mechanistic insights otherwise difficult to obtain experimentally. 
The online version of this article (doi:10.1007/s11693-009-9047-2) contains supplementary material, which is available to authorized users.
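A minimal illustration of the lattice-based idea with excluded volume (a 1-D toy model for intuition only, not Spatiocyte's actual hexagonal close-packed lattice or reaction machinery): each molecule attempts one random hop per step, and hops onto occupied sites are rejected.

```python
import random

def diffuse(lattice_size, n_molecules, steps, seed=1):
    """1-D periodic lattice diffusion with excluded volume: a molecule's
    attempted +/-1 hop fails when the target site is already occupied,
    so molecule number and site exclusivity are conserved."""
    rng = random.Random(seed)
    positions = rng.sample(range(lattice_size), n_molecules)
    occupied = set(positions)
    for _ in range(steps):
        for i, pos in enumerate(positions):
            new = (pos + rng.choice((-1, 1))) % lattice_size
            if new not in occupied:      # excluded volume: reject occupied target
                occupied.remove(pos)
                occupied.add(new)
                positions[i] = new
    return positions

final = diffuse(100, 10, 500)   # ten molecules, still on ten distinct sites
```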

  11. The Effect of Total Tumor Volume on the Biologically Effective Dose to Tumor and Kidneys for 177Lu-Labeled PSMA Peptides.

    PubMed

    Begum, Nusrat J; Thieme, Anne; Eberhardt, Nina; Tauber, Robert; D'Alessandria, Calogero; Beer, Ambros J; Glatting, Gerhard; Eiber, Matthias; Kletting, Peter

    2018-06-01

The aim of this work was to simulate the effect of prostate-specific membrane antigen (PSMA)-positive total tumor volume (TTV) on the biologically effective doses (BEDs) to tumors and organs at risk in patients with metastatic castration-resistant prostate cancer who are undergoing 177Lu-PSMA radioligand therapy. Methods: A physiologically based pharmacokinetic model was fitted to the data of 13 patients treated with 177Lu-PSMA I&T (a PSMA inhibitor for imaging and therapy). The tumor, kidney, and salivary gland BEDs were simulated for TTVs of 0.1-10 L. The activity and peptide amounts leading to an optimal tumor-to-kidneys BED ratio were also investigated. Results: When the TTV was increased from 0.3 to 3 L, the simulated BEDs to tumors, kidneys, parotid glands, and submandibular glands decreased from 22 ± 15 to 11.0 ± 6.0 Gy1.49, 6.5 ± 2.3 to 3.7 ± 1.4 Gy2.5, 11.0 ± 2.7 to 6.4 ± 1.9 Gy4.5, and 10.9 ± 2.7 to 6.3 ± 1.9 Gy4.5, respectively (where the subscripts denote that an α/β of 1.49, 2.5, or 4.5 Gy was used to calculate the BED). The BED to the red marrow increased from 0.17 ± 0.05 to 0.32 ± 0.11 Gy15. For patients with a TTV of more than 0.3 L, the optimal amount of peptide was 273 ± 136 nmol and the optimal activity was 10.4 ± 4.4 GBq. Conclusion: This simulation study suggests that in patients with large PSMA-positive tumor volumes, higher activities and peptide amounts can be safely administered to maximize tumor BEDs without exceeding the tolerable BED to the organs at risk. © 2018 by the Society of Nuclear Medicine and Molecular Imaging.
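The Gy subscripts above are the α/β values entering the biologically effective dose. As a simplified sketch of the underlying linear-quadratic relation (the fractionated-delivery form; actual radioligand dosimetry additionally accounts for dose rate and repair kinetics, and the numbers here are hypothetical):

```python
def bed(total_dose_gy, dose_per_fraction_gy, alpha_beta_gy):
    """Linear-quadratic biologically effective dose:
    BED = D * (1 + d / (alpha/beta)).
    A low alpha/beta (e.g. 1.49 Gy for tumor here) makes the BED more
    sensitive to the dose per fraction than a high alpha/beta does."""
    return total_dose_gy * (1.0 + dose_per_fraction_gy / alpha_beta_gy)

# Hypothetical delivery: 20 Gy total in 2 Gy fractions
tumor_bed = bed(20.0, 2.0, 1.49)     # alpha/beta for tumor
gland_bed = bed(20.0, 2.0, 4.5)      # alpha/beta for salivary glands
```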

  12. Traffic analysis toolbox volume XIII : integrated corridor management analysis, modeling, and simulation guide

    DOT National Transportation Integrated Search

    2017-02-01

    As part of the Federal Highway Administration (FHWA) Traffic Analysis Toolbox (Volume XIII), this guide was designed to help corridor stakeholders implement the Integrated Corridor Management (ICM) Analysis, Modeling, and Simulation (AMS) methodology...

  14. Recovery Act: Oxy-Combustion Technology Development for Industrial-Scale Boiler Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levasseur, Armand

    2014-04-30

Alstom Power Inc. (Alstom), under U.S. DOE/NETL Cooperative Agreement No. DE-NT0005290, is conducting a development program to generate detailed technical information needed for application of oxy-combustion technology. The program is designed to provide the necessary information and understanding for the next step of large-scale commercial demonstration of oxy-combustion in tangentially fired boilers and to accelerate the commercialization of this technology. The main project objectives include: • Design and develop an innovative oxyfuel system for existing tangentially fired boiler units that minimizes overall capital investment and operating costs. • Evaluate performance of oxyfuel tangentially fired boiler systems in pilot-scale tests at Alstom’s 15 MWth tangentially fired Boiler Simulation Facility (BSF). • Address technical gaps for the design of oxyfuel commercial utility boilers by focused testing and improvement of engineering and simulation tools. • Develop the design, performance and costs for a demonstration-scale oxyfuel boiler and auxiliary systems. • Develop the design and costs for both industrial and utility commercial-scale reference oxyfuel boilers and auxiliary systems that are optimized for overall plant performance and cost. • Define key design considerations and develop general guidelines for application of results to utility and different industrial applications. The project was initiated in October 2008 and the scope extended in 2010 under an ARRA award. The project completion date was April 30, 2014. Central to the project is 15 MWth testing in the BSF, which provided in-depth understanding of oxy-combustion under boiler conditions, detailed data for improvement of design tools, and key information for application to commercial-scale oxy-fired boiler design. 
Eight comprehensive 15 MWth oxy-fired test campaigns were performed with different coals, providing detailed data on combustion, emissions, and thermal behavior over a matrix of fuels, oxy-process variables and boiler design parameters. Significant improvement of CFD modeling tools and validation against 15 MWth experimental data has been completed. Oxy-boiler demonstration and large reference designs have been developed, supported with the information and knowledge gained from the 15 MWth testing. The results from the 15 MWth testing in the BSF and complementary bench-scale testing are addressed in this volume (Volume II) of the final report. The results of the modeling efforts (Volume III) and the oxy-boiler design efforts (Volume IV) are reported in separate volumes.

  15. The big fat LARS - a LArge Reservoir Simulator for hydrate formation and gas production

    NASA Astrophysics Data System (ADS)

    Beeskow-Strauch, Bettina; Spangenberg, Erik; Schicks, Judith M.; Giese, Ronny; Luzi-Helbing, Manja; Priegnitz, Mike; Klump, Jens; Thaler, Jan; Abendroth, Sven

    2013-04-01

Simulating natural scenarios on the lab scale is a common technique to gain insight into geological processes with moderate effort and expense. Due to the remote occurrence of gas hydrates, their behavior in sedimentary deposits is largely investigated with experimental setups in the laboratory. In the framework of the submarine gas hydrate research project (SUGAR) a large reservoir simulator (LARS) with an internal volume of 425 liters has been designed, built and tested. To our knowledge this is presently a world-wide unique setup. Because of its large volume it is suitable for pilot-plant-scale tests on hydrate behavior in sediments. That includes not only the option of systematic tests on gas hydrate formation in various sedimentary settings but also the possibility to mimic scenarios for hydrate decomposition and subsequent natural gas extraction. Based on these experimental results, various numerical simulations can be realized. Here, we present the design and the experimental setup of LARS. The prerequisites for the simulation of a natural gas hydrate reservoir are porous sediments, methane, water, low temperature and high pressure. The reservoir is supplied with methane-saturated and pre-cooled water. For its preparation an external gas-water mixing stage is available. The methane-loaded water is continuously flushed into LARS as a finely dispersed fluid via spargers located at the bottom and top. LARS is equipped with a mantle cooling system and can be kept at a chosen set temperature. The temperature distribution is monitored at 14 representative locations throughout the reservoir by Pt100 sensors. The required pressures are maintained using syringe pump stands. A tomographic system, consisting of a 375-electrode configuration, is attached to the mantle for the monitoring of hydrate distribution throughout the entire reservoir volume. 
Two sets of tubular polydimethylsiloxane membranes are applied to determine the gas-water ratio within the reservoir, using the effect of permeability differences between gaseous and dissolved methane (Zimmer et al., 2011). Gas hydrate is formed using a confining pressure of 12-15 MPa and a fluid pressure of 8-11 MPa at a set temperature of 275 K. The duration of the formation process depends on the required hydrate saturation and is usually in the range of several weeks. The subsequent decomposition experiments aim at testing innovative production scenarios such as the application of a borehole tool for thermal stimulation of the hydrate via catalytic oxidation of methane within an autothermal catalytic reactor (Schicks et al., 2011). Furthermore, experiments on hydrate decomposition via pressure reduction are performed to mimic realistic scenarios such as those found during the production test in Mallik (Yasuda and Dallimore, 2007). In the near future it is planned to scale up existing results on CH4-CO2 exchange efficiency (e.g. Beeskow-Strauch and Schicks, 2012) by feeding CO2 to the hydrate reservoir. Because they yield high-resolution spatial and temporal data, all experiments are well suited as a basis for numerical modeling. References: Schicks, J. M., Spangenberg, E., Giese, R., Steinhauer, B., Klump, J., Luzi, M., 2011. Energies, 4, 1, 151-172. Zimmer, M., Erzinger, J., Kujawa, C., 2011. Int. J. of Greenhouse Gas Control, 5, 4, 995-1001. Yasuda, M., Dallimore, S. J., 2007. Jpn. Assoc. Pet. Technol., 72, 603-607. Beeskow-Strauch, B., Schicks, J.M., 2012. Energies, 5, 420-437.

  16. A survey of electric and hybrid vehicles simulation programs. Volume 2: Questionnaire responses

    NASA Technical Reports Server (NTRS)

    Bevan, J.; Heimburger, D. A.; Metcalfe, M. A.

    1978-01-01

    The data received in a survey conducted within the United States to determine the extent of development and capabilities of automotive performance simulation programs suitable for electric and hybrid vehicle studies are presented. The survey was conducted for the Department of Energy by NASA's Jet Propulsion Laboratory. Volume 1 of this report summarizes and discusses the results contained in Volume 2.

  17. Developing and Testing Simulated Occupational Experiences for Distributive Education Students in Rural Communities: Volume III: Training Plans: Final Report.

    ERIC Educational Resources Information Center

    Virginia Polytechnic Inst. and State Univ., Blacksburg.

    Volume 3 of a three-volume final report presents prototype job training plans developed as part of a research project which pilot-tested a distributive education program for rural schools utilizing a retail store simulation plan. The plans are for 15 entry-level and 15 career-level jobs in seven categories of distributive business (department…

  18. Hierarchical Theoretical Methods for Understanding and Predicting Anisotropic Thermal Transport Release in Rocket Propellant Formulations

    DTIC Science & Technology

    2016-12-08

    mesoscopic models of interfaces and interphases, and microstructure-resolved representative volume element simulations. Atomic simulations were...careful prediction of the pressure-volume-temperature equation of state, pressure- and temperature-dependent crystal and liquid thermal and transport

  19. Action-based Dynamical Modeling for the Milky Way Disk: The Influence of Spiral Arms

    NASA Astrophysics Data System (ADS)

    Trick, Wilma H.; Bovy, Jo; D'Onghia, Elena; Rix, Hans-Walter

    2017-04-01

RoadMapping is a dynamical modeling machinery developed to constrain the Milky Way’s (MW) gravitational potential by simultaneously fitting an axisymmetric parametrized potential and an action-based orbit distribution function (DF) to discrete 6D phase-space measurements of stars in the Galactic disk. In this work, we demonstrate RoadMapping's robustness in the presence of spiral arms by modeling data drawn from an N-body simulation snapshot of a disk-dominated galaxy of MW mass with strong spiral arms (but no bar), exploring survey volumes with radii 500 pc ≤ r_max ≤ 5 kpc. The potential constraints are very robust, even though we use a simple action-based DF, the quasi-isothermal DF. The best-fit RoadMapping model always recovers the correct gravitational forces where most of the stars that entered the analysis are located, even for small volumes. For data from large survey volumes, RoadMapping finds axisymmetric models that average well over the spiral arms. Unsurprisingly, the models are slightly biased by the excess of stars in the spiral arms. Gravitational potential models derived from survey volumes with at least r_max = 3 kpc can be reliably extrapolated to larger volumes. However, a large radial survey extent, r_max ~ 5 kpc, is needed to correctly recover the halo scale length. In general, the recovery and extrapolability of potentials inferred from data sets that were drawn from inter-arm regions appear to be better than those of data sets drawn from spiral arms. Our analysis implies that building axisymmetric models for the Galaxy with upcoming Gaia data will lead to sensible and robust approximations of the MW’s potential.

  20. SU-E-T-546: Use of Implant Volume for Quality Assurance of Low Dose Rate Brachytherapy Treatment Plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilkinson, D; Kolar, M

Purpose: To analyze the application of implant volume (V100) data as a method for a global check of low dose rate (LDR) brachytherapy plans. Methods: Treatment plans for 335 consecutive patients undergoing permanent seed implants for prostate cancer and for 113 patients treated with plaque therapy for ocular melanoma were analyzed. Plaques used were 54 COMS (10 to 20 mm, notched and regular) and 59 Eye Physics EP917s with variable loading. Plots of treatment time × implanted activity per unit dose versus V100^0.667 were made. V100 values were obtained using dose volume histograms calculated by the treatment planning systems (VariSeed 8.02 and Plaque Simulator 5.4). Four different physicists were involved in planning the prostate seed cases; two physicists for the eye plaques. Results: Since the time and dose for the prostate cases did not vary, a plot of implanted activity vs V100^0.667 was made. A linear fit with no intercept had an r² = 0.978; more than 94% of the actual activities fell within 5% of the activities calculated from the linear fit. The greatest deviations were in cases where the implant volumes were large (> 100 cc). Both the COMS and EP917 plaque linear fits were good (r² = 0.967 and 0.957); the largest deviations were seen for large volumes. Conclusions: The method outlined here is effective for checking planning consistency and quality assurance of two types of LDR brachytherapy treatment plans (temporary and permanent). A spreadsheet for the calculations enables a quick check of the plan in situations where time is short (e.g. OR-based prostate planning)
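The consistency check described above amounts to a least-squares fit with the intercept forced through the origin, plus a tolerance count; a minimal sketch with illustrative data (not the study's):

```python
def slope_through_origin(x, y):
    """Least-squares slope m for y = m*x with zero intercept:
    m = sum(x_i * y_i) / sum(x_i^2)."""
    return sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)

def fraction_within(x, y, m, tol=0.05):
    """Fraction of actual values within +/-tol of the fitted value,
    mirroring the '94% within 5%' consistency criterion."""
    ok = sum(1 for a, b in zip(x, y) if abs(b - m * a) <= tol * m * a)
    return ok / len(x)

# In the paper's setting, x would be V100**0.667 per implant and
# y the implanted activity; these three points are made up.
m = slope_through_origin([1.0, 2.0, 3.0], [2.1, 3.9, 6.0])
```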

  1. Potential of lattice Boltzmann to model droplets on chemically stripe-patterned substrates

    NASA Astrophysics Data System (ADS)

    Patrick Jansen, H.; Sotthewes, K.; Zandvliet, Harold J. W.; Kooij, E. Stefan

    2016-01-01

    Lattice Boltzmann modelling (LBM) has recently been applied to a range of different wetting situations. Here we demonstrate its potential in representing complex kinetic effects encountered in droplets on chemically stripe-patterned surfaces. An ultimate example of the power of LBM is provided by comparing simulations and experiments of impacting droplets with varying Weber numbers. Also, the shape evolution of droplets is discussed in relation to their final shape. The latter can then be compared to Surface Evolver (SE) results, since under the proper boundary conditions both approaches should yield the same configuration in a static state. During droplet growth in LBM simulations, achieved by increasing the density within the droplet, the contact line initially advances in the direction parallel to the stripes, therewith increasing its aspect ratio. Once the volume becomes too large the droplet starts wetting additional stripes, leading to a lower aspect ratio. The maximum aspect ratio is shown to be a function of the width ratio of the hydrophobic and hydrophilic stripes and also their absolute widths. In the limit of sufficiently large stripe widths the aspect ratio is solely dependent on the relative stripe widths. The maximum droplet aspect ratio in the LBM simulations is compared to SE simulations and results are shown to be in good agreement. Additionally, we also show the ability of LBM to investigate single stripe wetting, enabling determination of the maximum aspect ratio that can be achieved in the limit of negligible hydrophobic stripe width, under the constraint that the stripe widths are large enough such that they are not easily crossed.

  2. General relativistic screening in cosmological simulations

    NASA Astrophysics Data System (ADS)

    Hahn, Oliver; Paranjape, Aseem

    2016-10-01

We revisit the issue of interpreting the results of large volume cosmological simulations in the context of large-scale general relativistic effects. We look for simple modifications to the nonlinear evolution of the gravitational potential ψ that lead on large scales to the correct, fully relativistic description of density perturbations in the Newtonian gauge. We note that the relativistic constraint equation for ψ can be cast as a diffusion equation, with a diffusion length scale determined by the expansion of the Universe. Exploiting the weak time evolution of ψ in all regimes of interest, this equation can be further accurately approximated as a Helmholtz equation, with an effective relativistic "screening" scale ℓ related to the Hubble radius. We demonstrate that it is thus possible to carry out N-body simulations in the Newtonian gauge by replacing Poisson's equation with this Helmholtz equation, involving a trivial change in the Green's function kernel. Our results also motivate a simple, approximate (but very accurate) gauge transformation, δ_N(k) ≈ δ_sim(k) × (k² + ℓ⁻²)/k², to convert the density field δ_sim of standard collisionless N-body simulations (initialized in the comoving synchronous gauge) into the Newtonian gauge density δ_N at arbitrary times. A similar conversion can also be written in terms of particle positions. Our results can be interpreted in terms of a Jeans stability criterion induced by the expansion of the Universe. The appearance of the screening scale ℓ in the evolution of ψ, in particular, leads to a natural resolution of the "Jeans swindle" in the presence of superhorizon modes.
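The quoted gauge conversion acts mode by mode in Fourier space; a minimal sketch (symbols as in the abstract, with ℓ written as l, and units chosen arbitrarily for illustration):

```python
def newtonian_gauge_mode(delta_sim, k, l):
    """Convert one Fourier mode of the synchronous-gauge N-body density
    into the Newtonian gauge: delta_N(k) = delta_sim(k) * (k^2 + l^-2) / k^2.
    For k much greater than 1/l (sub-horizon modes) the correction factor
    approaches unity, so only near- and super-horizon modes are modified."""
    return delta_sim * (k**2 + l**-2) / k**2

boosted = newtonian_gauge_mode(1.0, 1.0, 1.0)      # mode at the screening scale
unchanged = newtonian_gauge_mode(1.0, 100.0, 1.0)  # deep sub-horizon mode
```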

  3. Ceres’ impact craters: probes of near-surface internal structure and composition

    NASA Astrophysics Data System (ADS)

    Bland, Michael T.; Raymond, Carol; Park, Ryan; Schenk, Paul; McCord, Tom; Reddy, Vishnu; King, Scott; Sykes, Mark; Russell, Chris

    2015-11-01

Dawn Framing Camera images of Ceres have revealed the existence of a heavily cratered surface. Shape models derived from these images indicate that most (though not all) large craters are quite deep: up to 6 km for craters larger than 100 km in diameter. The retention of deep craters is not consistent with a simple differentiated internal structure consisting of an outer layer composed solely of pure water ice (covered with a rocky lag) overlying a rocky core. Here we use finite element simulations to show that, for Ceres’ relatively warm surface temperatures, the timescale required to completely flatten a crater 60 km in diameter (or greater) is less than 100 Myr, assuming a relatively pure outer ice layer (for ice grain sizes ≤ 1 cm). Preserving substantial topography requires that the viscosity of Ceres’ outermost layer (25-50 km thick) is substantially greater than that of pure water ice. A factor of ten increase in viscosity can be achieved by assuming the layer is a 50/50 ice-rock mixture by volume; however, our simulations show that such an increase is insufficient to prevent substantial relaxation over timescales of 1 Gyr. Only particulate volume fractions greater than 50% provide an increase in viscosity sufficient to prevent large-scale, rapid relaxation. Such volume fractions suggest an outer layer composed of frozen soil/regolith (i.e., more rock than ice by volume), a very salt-rich layer, or both. Notably, while most basins appear quite deep, a few relatively shallow basins have been observed (e.g., Coniraya), suggesting that relaxation may be occurring over very long timescales (e.g., 4 Ga), that Ceres’ interior is compositionally and spatially heterogeneous, and/or that temporal evolution of the interior structure and composition has occurred. If these shallow basins are in fact the result of relaxation, this places an upper limit on the viscosity of Ceres’ outermost interior layer, implying that at least some low-viscosity material is present and likely eliminating the possibility of a purely rocky (homogeneous, low density, high porosity) interior.

  4. LYα FOREST TOMOGRAPHY FROM BACKGROUND GALAXIES: THE FIRST MEGAPARSEC-RESOLUTION LARGE-SCALE STRUCTURE MAP AT z > 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Khee-Gan; Hennawi, Joseph F.; Eilers, Anna-Christina

    2014-11-01

We present the first observations of foreground Lyα forest absorption from high-redshift galaxies, targeting 24 star-forming galaxies (SFGs) with z ∼ 2.3-2.8 within a 5' × 14' region of the COSMOS field. The transverse sightline separation is ∼2 h⁻¹ Mpc comoving, allowing us to create a tomographic reconstruction of the three-dimensional (3D) Lyα forest absorption field over the redshift range 2.20 ≤ z ≤ 2.45. The resulting map covers 6 h⁻¹ Mpc × 14 h⁻¹ Mpc in the transverse plane and 230 h⁻¹ Mpc along the line of sight with a spatial resolution of ≈3.5 h⁻¹ Mpc, and is the first high-fidelity map of large-scale structure on ∼Mpc scales at z > 2. Our map reveals significant structures with ≳ 10 h⁻¹ Mpc extent, including several spanning the entire transverse breadth, providing qualitative evidence for the filamentary structures predicted to exist in the high-redshift cosmic web. Simulated reconstructions with the same sightline sampling, spectral resolution, and signal-to-noise ratio recover the salient structures present in the underlying 3D absorption fields. Using data from other surveys, we identified 18 galaxies with known redshifts coeval with our map volume, enabling a direct comparison with our tomographic map. This shows that galaxies preferentially occupy high-density regions, in qualitative agreement with the same comparison applied to simulations. Our results establish the feasibility of the CLAMATO survey, which aims to obtain Lyα forest spectra for ∼1000 SFGs over ∼1 deg² of the COSMOS field, in order to map out the intergalactic medium large-scale structure at ⟨z⟩ ∼ 2.3 over a large volume (100 h⁻¹ Mpc)³.

  5. Radiotherapy volume delineation using 18F-FDG-PET/CT modifies gross node volume in patients with oesophageal cancer.

    PubMed

    Jimenez-Jimenez, E; Mateos, P; Aymar, N; Roncero, R; Ortiz, I; Gimenez, M; Pardo, J; Salinas, J; Sabater, S

    2018-05-02

    Evidence supporting the use of 18F-FDG-PET/CT in the segmentation process of oesophageal cancer for radiotherapy planning is limited. Our aim was to compare the volumes and tumour lengths defined by fused PET/CT vs. CT simulation. Twenty-nine patients were analyzed. All patients underwent a single PET/CT simulation scan. Two separate GTVs were defined: one based on CT data alone and another based on fused PET/CT data. Volume sizes for both data sets were compared and the spatial overlap was assessed by the Dice similarity coefficient (DSC). The gross tumour volume (GTVtumour) and maximum tumour diameter were greater by PET/CT, and length of primary tumour was greater by CT, but differences were not statistically significant. However, the gross node volume (GTVnode) was significantly greater by PET/CT. The DSC analysis showed excellent agreement for GTVtumour, 0.72, but was very low for GTVnode, 0.25. Our study shows that the volume definition by PET/CT and CT data differs. CT simulation, without taking into account PET/CT information, might leave cancer-involved nodes out of the radiotherapy-delineated volumes.
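The Dice similarity coefficient used above has a simple closed form, DSC = 2|A∩B| / (|A| + |B|); a minimal sketch for binary delineation masks (our own illustration, not the authors' planning software; the convention DSC = 1 for two empty masks is an assumption):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    total = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / total if total else 1.0

# Two toy delineations: 8 voxels vs. 4 voxels, 4 of them shared.
gtv_ct = np.zeros((4, 4), dtype=int); gtv_ct[:2, :] = 1
gtv_pet = np.zeros((4, 4), dtype=int); gtv_pet[1, :] = 1
overlap = dice(gtv_ct, gtv_pet)   # 2*4 / (8+4) = 2/3
```

Values near 1 (as for GTVtumour, 0.72) indicate good spatial agreement; values near 0 (as for GTVnode, 0.25) indicate that the two modalities delineate largely different regions.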

  6. Simulation of streamflow and water quality in the Leon Creek watershed, Bexar County, Texas, 1997-2004

    USGS Publications Warehouse

    Ockerman, Darwin J.; Roussel, Meghan C.

    2009-01-01

The U.S. Geological Survey, in cooperation with the U.S. Army Corps of Engineers and the San Antonio River Authority, configured, calibrated, and tested a Hydrological Simulation Program-FORTRAN watershed model for the approximately 238-square-mile Leon Creek watershed in Bexar County, Texas, and used the model to simulate streamflow and water quality (focusing on loads and yields of selected constituents). Streamflow in the model was calibrated and tested with available data from five U.S. Geological Survey streamflow-gaging stations for 1997-2004. Simulated streamflow volumes closely matched measured streamflow volumes at all streamflow-gaging stations. Total simulated streamflow volumes were within 10 percent of measured values. Streamflow volumes are greatly influenced by large storms. Two months that included major floods accounted for about 50 percent of all the streamflow measured at the most downstream gaging station during 1997-2004. Water-quality properties and constituents (water temperature, dissolved oxygen, suspended sediment, dissolved ammonia nitrogen, dissolved nitrate nitrogen, and dissolved and total lead and zinc) in the model were calibrated using available data from 13 sites in and near the Leon Creek watershed for varying periods of record during 1992-2005. Average simulated daily mean water temperature and dissolved oxygen at the most downstream gaging station during 1997-2000 were within 1 percent of average measured daily mean water temperature and dissolved oxygen. Simulated suspended-sediment load at the most downstream gaging station during 2001-04 (excluding July 2002 because of major storms) was 77,700 tons compared with 74,600 tons estimated from a streamflow-load regression relation (coefficient of determination = 0.869). Simulated concentrations of dissolved ammonia nitrogen and dissolved nitrate nitrogen closely matched measured concentrations after calibration.
At the most downstream gaging station, average simulated monthly mean concentrations of dissolved ammonia and nitrate during 1997-2004 were 0.03 and 0.37 milligram per liter, respectively. For the most downstream station, the measured and simulated concentrations of dissolved and total lead and zinc for stormflows during 1993-97 after calibration do not match particularly closely. For base-flow conditions during 1997-2004 at the most downstream station, the match between simulated and measured concentrations is better. For example, median simulated concentration of total lead (for 2,041 days) was 0.96 microgram per liter, and median measured concentration (for nine samples) of total lead was 1.0 microgram per liter. To demonstrate an application of the Leon Creek watershed model, streamflow constituent loads and yields for suspended sediment, dissolved nitrate nitrogen, and total lead were simulated at the mouth of Leon Creek (outlet of the watershed) for 1997-2004. The average suspended-sediment load was 51,800 tons per year. The average suspended-sediment yield was 0.34 ton per acre per year. The average load of dissolved nitrate at the outlet of the watershed was 802 tons per year. The corresponding yield was 10.5 pounds per acre per year. The average load of lead at the outlet was 3,900 pounds per year. The average lead yield was 0.026 pound per acre per year. The degree to which available rainfall data represent actual rainfall is potentially the most serious source of measurement error associated with the Leon Creek model. Major storms contribute most of the streamflow loads for certain constituents. For example, the three largest stormflows contributed about 64 percent of the entire suspended-sediment load at the most downstream station during 1997-2004.
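The reported yields follow from dividing annual loads by the watershed area; a quick check of the arithmetic (all values from the abstract, conversions only):

```python
# All numbers below are taken directly from the abstract; this only checks
# the unit conversions (1 square mile = 640 acres, 1 ton = 2,000 pounds).
ACRES_PER_SQUARE_MILE = 640
area_acres = 238 * ACRES_PER_SQUARE_MILE              # ~152,320 acres

sediment_yield = 51_800 / area_acres                  # tons/acre/yr -> ~0.34
nitrate_yield = 802 * 2_000 / area_acres              # lb/acre/yr   -> ~10.5
lead_yield = 3_900 / area_acres                       # lb/acre/yr   -> ~0.026
```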

  7. Toward an in-situ analytics and diagnostics framework for earth system models

    NASA Astrophysics Data System (ADS)

    Anantharaj, Valentine; Wolf, Matthew; Rasch, Philip; Klasky, Scott; Williams, Dean; Jacob, Rob; Ma, Po-Lun; Kuo, Kwo-Sen

    2017-04-01

The development roadmaps for many earth system models (ESMs) aim for a globally cloud-resolving model targeting the pre-exascale and exascale systems of the future. The ESMs will also incorporate more complex physics, chemistry and biology, thereby vastly increasing the fidelity of the information content simulated by the model. We will then be faced with an unprecedented volume of simulation output that would need to be processed and analyzed concurrently in order to derive valuable scientific results. We are already at this threshold with our current generation of ESMs at higher-resolution simulations. Currently, the nominal I/O throughput in the Community Earth System Model (CESM) via the Parallel IO (PIO) library is around 100 MB/s. If we look at the high-frequency I/O requirements, it would require an additional 1 GB / simulated hour, translating to roughly 4 mins wallclock / simulated-day => 24.33 wallclock hours / simulated-model-year => 1,752,000 core-hours of charge per simulated-model-year on the Titan supercomputer at the Oak Ridge Leadership Computing Facility. There is also a pending need for 3X more volume of simulation output. Meanwhile, many ESMs use instrument simulators to run forward models to compare model simulations against satellite and ground-based instruments, such as radars and radiometers. The CFMIP Observation Simulator Package (COSP) is used in CESM as well as the Accelerated Climate Model for Energy (ACME), one of the ESMs specifically targeting current and emerging leadership-class computing platforms. These simulators can be computationally expensive, accounting for as much as 30% of the computational cost. Hence the data are often written to output files that are then used for offline calculations. Again, the I/O bottleneck becomes a limitation.
Detection and attribution studies also use large volumes of data for pattern recognition and feature extraction to analyze weather and climate phenomena such as tropical cyclones, atmospheric rivers, blizzards, etc. It is evident that ESMs need an in-situ framework to decouple the diagnostics and analytics from the prognostics and physics computations of the models, so that the diagnostic computations can be performed concurrently without limiting model throughput. We are designing a science-driven online analytics framework for earth system models. Our approach is to adopt several data workflow technologies, such as the Adaptable IO System (ADIOS), being developed under the U.S. Exascale Computing Project (ECP), and integrate them to allow extreme-performance I/O, in situ workflow integration, and science-driven analytics and visualization, all in an easy-to-use computational framework. This will allow science teams to write data 100-1000 times faster and to move seamlessly from post-processing the output for validation and verification purposes to performing these calculations in situ. We envision a near-term future where earth system models like ACME and CESM will have to address not only the volume of the data but also its velocity. Earth system models of the future, as they incorporate more complex physics at higher resolutions in the exascale era, will be able to analyze more simulation content without compromising targeted model throughput.
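The I/O cost chain quoted in the abstract can be checked directly; the implied number of charged cores is our own inference, not a figure stated in the text:

```python
# Figures from the abstract: 4 wallclock minutes of I/O per simulated day
# and 1,752,000 core-hours charged per simulated model year on Titan.
minutes_per_sim_day = 4
hours_per_sim_year = minutes_per_sim_day * 365 / 60        # ~24.33 wallclock h
core_hours_per_sim_year = 1_752_000
# Implied concurrent cores charged (inference, not stated in the abstract):
implied_cores = core_hours_per_sim_year / hours_per_sim_year
```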

  8. A Concept for a High-Energy Gamma-ray Polarimeter

    NASA Technical Reports Server (NTRS)

    Bloser, P. F.; Hunter, S. D.; Depaola, G. O.; Longo, F.

    2003-01-01

    We present a concept for an imaging gamma-ray polarimeter operating from approx. 50 MeV to approx. 1 GeV. Such an instrument would be valuable for the study of high-energy pulsars, active galactic nuclei, supernova remnants, and gamma-ray bursts. The concept makes use of pixelized gas micro-well detectors, under development at Goddard Space Flight Center, to record the electron-positron tracks from pair-production events in a large gas volume. Pixelized micro-well detectors have the potential to form large-volume 3-D track imagers with approx. 100 micron (rms) position resolution at moderate cost. The combination of high spatial resolution and a continuous low-density gas medium permits many thousands of measurements per radiation length, allowing the particle tracks to be imaged accurately before multiple scattering masks their original directions. The polarization of the incoming radiation may then be determined from the azimuthal distribution of the electron-positron pairs. We have performed Geant4 simulations of these processes to estimate the polarization sensitivity as a function of instrument parameters and event selection criteria.
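Polarization is recovered from the azimuthal distribution of the pair planes, which follows N(φ) ∝ 1 + A cos 2(φ − φ0); a toy Monte Carlo estimator of the modulation amplitude (our own sketch with an assumed amplitude, not a Geant4 result):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_azimuth(n, amplitude, phi0=0.0):
    """Rejection-sample pair azimuths from N(phi) ∝ 1 + A cos 2(phi - phi0)."""
    out = []
    while len(out) < n:
        phi = rng.uniform(0.0, 2 * np.pi, n)              # uniform proposals
        accept = rng.uniform(0.0, 1 + amplitude, n) < 1 + amplitude * np.cos(
            2 * (phi - phi0))                              # envelope 1 + A
        out.extend(phi[accept])
    return np.array(out[:n])

phi = sample_azimuth(200_000, amplitude=0.3)
# For phi0 = 0, E[cos 2phi] = A/2, so 2*mean(cos 2phi) estimates A:
A_hat = 2 * np.mean(np.cos(2 * phi))
```

In practice the measured amplitude is diluted by multiple scattering, which is why fine track sampling in a low-density gas matters for this concept.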

  9. Numerical investigation of performance of vane-type propellant management device by VOF methods

    NASA Astrophysics Data System (ADS)

    Liu, J. T.; Zhou, C.; Wu, Y. L.; Zhuang, B. T.; Li, Y.

    2015-01-01

The orbital propellant management performance of a vane-type tank is critical to the propellant system and determines the lifetime of the satellite. The propellant in the tank is extruded by helium gas. To study the two-phase distribution in the vane-type surface tension tank and the capability of the vane-type propellant management device (PMD), a large-volume vane-type surface tension tank is analysed using 3-D unsteady numerical simulations. VOF methods are used to locate the interface between the two phases. Performances of the propellant acquisition vanes and the propellant refillable reservoir in the tank are investigated. The flow conductivity of the propellant acquisition vanes and the liquid storage capacity of the propellant refillable reservoir are affected by the magnitude of gravity and the volume of propellant in the tank. To avoid the large resistance caused by surface tension during outflow through a small hole, the vanes in a propellant refillable reservoir should be designed with suitable space.

  10. Storm Water Management Model Reference Manual Volume II ...

    EPA Pesticide Factsheets

SWMM is a dynamic rainfall-runoff simulation model used for single-event or long-term (continuous) simulation of runoff quantity and quality from primarily urban areas. The runoff component of SWMM operates on a collection of subcatchment areas that receive precipitation and generate runoff and pollutant loads. The routing portion of SWMM transports this runoff through a system of pipes, channels, storage/treatment devices, pumps, and regulators. SWMM tracks the quantity and quality of runoff generated within each subcatchment, and the flow rate, flow depth, and quality of water in each pipe and channel during a simulation period comprising multiple time steps. The reference manual for this edition of SWMM comprises three volumes. Volume I describes SWMM’s hydrologic models, Volume II its hydraulic models, and Volume III its water quality and low impact development models. This document provides the underlying mathematics for the hydraulic calculations of the Storm Water Management Model (SWMM).

  11. Material point method modeling in oil and gas reservoirs

    DOEpatents

    Vanderheyden, William Brian; Zhang, Duan

    2016-06-28

A computer system and method of simulating the behavior of an oil and gas reservoir, including changes in the margins of frangible solids. A system of equations, including state equations such as momentum and conservation laws such as mass conservation and volume-fraction continuity, is defined and discretized for at least two phases in a modeled volume, one of which corresponds to frangible material. A material point method technique numerically solves the system of discretized equations to derive fluid flow at each of a plurality of mesh nodes in the modeled volume, and the velocity at each of a plurality of particles representing the frangible material in the modeled volume. A time-splitting technique improves the computational efficiency of the simulation while maintaining accuracy on the deformation scale. The method can be applied to derive accurate upscaled model equations for larger-volume-scale simulations.

  12. Relation Between the Cell Volume and the Cell Cycle Dynamics in Mammalian cell

    NASA Astrophysics Data System (ADS)

    Magno, A. C. G.; Oliveira, I. L.; Hauck, J. V. S.

    2016-08-01

The main goal of this work is to add and analyze an equation representing cell volume in the dynamical model of the mammalian cell cycle proposed by Gérard and Goldbeter (2011) [1]. Cell division occurs when the cyclin B/Cdk1 complex is totally degraded (Tyson and Novak, 2011) [2] and reaches a minimum value. At this point, the cell divides into two newborn daughter cells, each containing half of the cytoplasmic content of the mother cell. The equations of our base model are valid only if the cell volume, where the reactions occur, is constant. If the cell volume is not constant, that is, if the rate of change of its volume with respect to time is explicitly taken into account in the mathematical model, then the equations of the original model are no longer valid. Therefore, all equations were modified, using the mass conservation principle, to account for a volume that changes with time. Through this approach, the cell volume affects all model variables. Two different dynamic simulation methods were employed: deterministic and stochastic. In the stochastic simulation, the volume affects every model parameter with molar units, whereas in the deterministic one it is incorporated into the differential equations. In deterministic simulation, the biochemical species may be expressed in concentration units, while in stochastic simulation such species must be converted to numbers of molecules, which are directly proportional to the cell volume. To understand the influence of the new equation, a stability analysis was performed, elucidating how the growth factor impacts the stability of the model's limit cycles. In conclusion, a more precise model, in comparison to the base model, was created for the cell cycle, as it now takes into consideration cell volume variation.
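The deterministic/stochastic distinction hinges on converting concentrations to molecule numbers, which scale with cell volume; a minimal sketch of that conversion and of the halving at division (the concentration and volume values are illustrative, not model parameters):

```python
AVOGADRO = 6.02214076e23  # molecules per mole

def molecules_from_concentration(conc_uM, volume_pL):
    """Convert a micromolar concentration in a cell of given volume (pL)
    to an absolute molecule count, as needed for stochastic simulation."""
    conc_M = conc_uM * 1e-6        # mol/L
    volume_L = volume_pL * 1e-12   # L
    return conc_M * volume_L * AVOGADRO

# At division each daughter receives half the mother's cytoplasmic content,
# so counts halve while concentrations are initially unchanged:
n_mother = molecules_from_concentration(1.0, 2.0)   # 1 µM in a 2 pL cell
n_daughter = n_mother / 2
```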

  13. New Probe of Departures from General Relativity Using Minkowski Functionals.

    PubMed

    Fang, Wenjuan; Li, Baojiu; Zhao, Gong-Bo

    2017-05-05

    The morphological properties of the large scale structure of the Universe can be fully described by four Minkowski functionals (MFs), which provide important complementary information to other statistical observables such as the widely used 2-point statistics in configuration and Fourier spaces. In this work, for the first time, we present the differences in the morphology of the large scale structure caused by modifications to general relativity (to address the cosmic acceleration problem), by measuring the MFs from N-body simulations of modified gravity and general relativity. We find strong statistical power when using the MFs to constrain modified theories of gravity: with a galaxy survey that has survey volume ∼0.125(h^{-1}  Gpc)^{3} and galaxy number density ∼1/(h^{-1}  Mpc)^{3}, the two normal-branch Dvali-Gabadadze-Porrati models and the F5 f(R) model that we simulated can be discriminated from the ΛCDM model at a significance level ≳5σ with an individual MF measurement. Therefore, the MF of the large scale structure is potentially a powerful probe of gravity, and its application to real data deserves active exploration.

  14. Large-eddy simulation of human-induced contaminant transport in room compartments.

    PubMed

    Choi, J-I; Edwards, J R

    2012-02-01

    A large-eddy simulation is used to investigate contaminant transport owing to complex human and door motions and vent-system activity in room compartments where a contaminated and clean room are connected by a vestibule. Human and door motions are simulated with an immersed boundary procedure. We demonstrate the details of contaminant transport owing to human- and door-motion-induced wake development during a short-duration event involving the movement of a person (or persons) from a contaminated room, through a vestibule, into a clean room. Parametric studies that capture the effects of human walking pattern, door operation, over-pressure level, and vestibule size are systematically conducted. A faster walking speed results in less mass transport from the contaminated room into the clean room. The net effect of increasing the volume of the vestibule is to reduce the contaminant transport. The results show that swinging-door motion is the dominant transport mechanism and that human-induced wake motion enhances compartment-to-compartment transport. The effect of human activity on contaminant transport may be important in design and operation of clean or isolation rooms in chemical or pharmaceutical industries and intensive care units for airborne infectious disease control in a hospital. The present simulations demonstrate details of contaminant transport in such indoor environments during human motion events and show that simulation-based sensitivity analysis can be utilized for the diagnosis of contaminant infiltration and for better environmental protection. © 2011 John Wiley & Sons A/S.

  15. Stochastic dynamics of penetrable rods in one dimension: occupied volume and spatial order.

    PubMed

    Craven, Galen T; Popov, Alexander V; Hernandez, Rigoberto

    2013-06-28

    The occupied volume of a penetrable hard rod (HR) system in one dimension is probed through the use of molecular dynamics simulations. In these dynamical simulations, collisions between penetrable rods are governed by a stochastic penetration algorithm (SPA), which allows for rods to either interpenetrate with a probability δ, or collide elastically otherwise. The limiting values of this parameter, δ = 0 and δ = 1, correspond to the HR and the ideal limits, respectively. At intermediate values, 0 < δ < 1, mixing of mutually exclusive and independent events is observed, making prediction of the occupied volume nontrivial. At high hard core volume fractions φ0, the occupied volume expression derived by Rikvold and Stell [J. Chem. Phys. 82, 1014 (1985)] for permeable systems does not accurately predict the occupied volume measured from the SPA simulations. Multi-body effects contribute significantly to the pair correlation function g2(r) and the simplification by Rikvold and Stell that g2(r) = δ in the penetrative region is observed to be inaccurate for the SPA model. We find that an integral over the penetrative region of g2(r) is the principal quantity that describes the particle overlap ratios corresponding to the observed penetration probabilities. Analytic formulas are developed to predict the occupied volume of mixed systems and agreement is observed between these theoretical predictions and the results measured from simulation.
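The SPA collision rule itself is simple; a minimal sketch for equal-mass rods in 1D (our own illustration of the rule as described, not the authors' simulation code):

```python
import random

def spa_collision(v1, v2, delta):
    """Stochastic penetration algorithm for two equal-mass penetrable rods
    meeting in 1D: with probability delta they interpenetrate (keep their
    velocities); otherwise they collide elastically, which for equal
    masses means the rods exchange velocities."""
    if random.random() < delta:
        return v1, v2              # penetration event
    return v2, v1                  # elastic collision

# delta = 0 recovers hard rods (always elastic); delta = 1, the ideal gas.
```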

  16. SU-E-QI-11: Measurement of Renal Pyruvate-To-Lactate Exchange with Hyperpolarized 13C MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adamson, E; Johnson, K; Fain, S

Purpose: Previous work [1] modeling the metabolic flux between hyperpolarized [1-^{13}C]pyruvate and [1-^{13}C]lactate in magnetic resonance spectroscopic imaging (MRSI) experiments failed to account for vascular signal artifacts. Here, we investigate a method to minimize the vascular signal and its impact on the fidelity of metabolic modeling. Methods: MRSI was simulated for renal metabolism in MATLAB both with and without bipolar gradients. The resulting data were fit to a two-site exchange model [1], and the effects of vascular partial volume artifacts on kinetic modeling were assessed. Bipolar gradients were then incorporated into a gradient echo sequence to validate the simulations experimentally. The degree of diffusion weighting (b = 32 s/mm^{2}) was determined empirically from ^{1}H imaging of murine renal vascular signal. The method was then tested in vivo using MRSI with bipolar gradients following injection of hyperpolarized [1-^{13}C]pyruvate (∼80 mM at 20% polarization). Results: In simulations, vascular signal contaminated the renal metabolic signal at resolutions as high as 2 × 2 mm^{2} due to partial volume effects. The apparent exchange rate from pyruvate to lactate (k_p) was underestimated in the presence of these artifacts due to contaminating pyruvate signal. Incorporation of bipolar gradients suppressed vascular signal and improved the accuracy of k_p estimation. Experimentally, the in vivo results supported the ability of bipolar gradients to suppress vascular signal. The in vivo exchange rate increased, as predicted in simulations, from k_p = 0.012 s^{-1} to k_p = 0.020 s^{-1} after vascular signal suppression. Conclusion: We have demonstrated the limited accuracy of the two-site exchange model in the presence of vascular partial volume artifacts. The addition of bipolar gradients suppressed vascular signal and improved model accuracy in simulations. Bipolar gradients largely affected k_p estimation in vivo. 
Currently, slow-flowing spins in small vessels and capillaries are only partially suppressed, so further improvement is possible. Funding support: Seed Grant from the Radiological Society of North America, GE Healthcare, University of Wisconsin Graduate School.
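The two-site exchange model referenced in [1] can be sketched as a pair of ODEs with unidirectional exchange k_p and T1 relaxation of each pool; the relaxation rates and time grid below are assumptions for illustration, not the study's parameters:

```python
import numpy as np

def simulate_two_site(kp, r1p=1 / 30, r1l=1 / 25, p0=1.0, dt=0.1, t_max=60.0):
    """Forward-Euler integration of a simple two-site exchange model:
    pyruvate P feeds lactate L at unidirectional rate kp, and each pool
    relaxes with assumed rates r1p, r1l (illustrative values)."""
    n = int(t_max / dt)
    P = np.empty(n); L = np.empty(n)
    P[0], L[0] = p0, 0.0
    for i in range(n - 1):
        P[i + 1] = P[i] + dt * (-(kp + r1p) * P[i])
        L[i + 1] = L[i] + dt * (kp * P[i] - r1l * L[i])
    return P, L

P, L = simulate_two_site(kp=0.020)
# On noiseless curves, dL/dt + r1l*L = kp*P, so kp can be read back out:
dLdt = np.diff(L) / 0.1
kp_hat = np.mean((dLdt + (1 / 25) * L[:-1]) / P[:-1])
```

Contaminating vascular pyruvate signal inflates the apparent P in the denominator of such an estimate, which is consistent with the underestimation of k_p described in the abstract.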

  17. Accuracy of Monte Carlo photon transport simulation in characterizing brachytherapy dosimeter energy-response artefacts.

    PubMed

    Das, R K; Li, Z; Perera, H; Williamson, J F

    1996-06-01

Practical dosimeters in brachytherapy, such as thermoluminescent dosimeters (TLDs) and diodes, are usually calibrated against low-energy megavoltage beams. To measure absolute dose rate near a brachytherapy source, it is necessary to establish the energy response of the detector relative to that at the calibration energy. The purpose of this paper is to assess the accuracy of Monte Carlo photon transport (MCPT) simulation in modelling the absolute detector response as a function of detector geometry and photon energy. We have exposed two different sizes of TLD-100 (LiF chips) and p-type silicon diode detectors to calibrated 60Co, HDR source (192Ir) and superficial x-ray beams. For the Scanditronix electron-field diode, the relative detector response, defined as the measured detector reading per measured unit of air kerma, varied from 38.46 V cGy-1 (40 kVp beam) to 6.22 V cGy-1 (60Co beam). Similarly, for the large and small chips the same quantity varied from 2.08-3.02 nC cGy-1 and 0.171-0.244 nC cGy-1, respectively. Monte Carlo simulation was used to calculate the absorbed dose to the active volume of the detector per unit air kerma. If the Monte Carlo simulation is accurate, then the absolute detector response, defined as the measured detector reading per unit dose absorbed by the active detector volume (with the absorbed dose calculated by Monte Carlo simulation), should be constant. For the diode, the absolute response is 5.86 +/- 0.15 (V cGy-1). For TLDs of size 3 x 3 x 1 mm3 the absolute response is 2.47 +/- 0.07 (nC cGy-1) and for TLDs of 1 x 1 x 1 mm3 it is 0.201 +/- 0.008 (nC cGy-1). From the above results we conclude that the detector reading (for TLDs and diodes) is directly proportional to the dose absorbed by the active volume of the detector and is independent of beam quality.

  18. Enhanced gas absorption in the ionic liquid 1-n-hexyl-3-methylimidazolium bis(trifluoromethylsulfonyl)amide ([hmim][Tf2N]) confined in silica slit pores: a molecular simulation study.

    PubMed

    Shi, Wei; Luebke, David R

    2013-05-07

    Two-dimensional NPxyT and isostress-osmotic (N2PxyTf1) Monte Carlo simulations were used to compute the density and gas absorption properties of the ionic liquid (IL) 1-n-hexyl-3-methylimidazolium bis(trifluoromethylsulfonyl)amide ([hmim][Tf2N]) confined in silica slit pores (25-45 Å). Self-diffusivity values for both gas and IL were calculated from NVE molecular dynamics simulations using both smooth and atomistic potential models for silica. The simulations showed that the molar volume of [hmim][Tf2N] confined in 25-45-Å silica slit pores is 12-31% larger than that of the bulk IL at 313-573 K and 1 bar. The amounts of CO2, H2, and N2 absorbed in the confined IL are 1.1-3 times larger than those in the bulk IL because of the larger molar volume of the confined IL compared to the bulk IL. The CO2, N2, and H2 molecules are generally absorbed close to the silica wall where the IL density is very low. This arrangement causes the self-diffusivities of these gases in the confined IL to be 2-8 times larger than those in the bulk IL at 298-573 K. The solubilities of water in the confined and bulk ILs are similar, which is likely due to strong water interactions with [hmim][Tf2N] through hydrogen bonding, so that the molar volume of the confined IL plays a less important role in determining the H2O solubility. Water molecules are largely absorbed in the IL-rich region rather than close to the silica wall. The self-diffusivities of water correlate with those of the confined IL. The confined IL exhibits self-diffusivities larger than those of the bulk IL at lower temperatures, but smaller than those of the bulk IL at higher temperatures. The findings from our simulations are consistent with available experimental data for similar confined IL systems.

  19. Propagation characteristics of pulverized coal and gas two-phase flow during an outburst.

    PubMed

    Zhou, Aitao; Wang, Kai; Fan, Lingpeng; Tao, Bo

    2017-01-01

Coal and gas outbursts are dynamic failures that can eject thousands of tons of pulverized coal, as well as considerable volumes of gas, into a limited working space within a short period. The two-phase flow of gas and pulverized coal that occurs during an outburst can lead to fatalities and destroy underground equipment. This article examines the interaction mechanism between pulverized coal and gas flow. Based on the role of gas expansion energy in the development stage of outbursts, a numerical simulation method is proposed for investigating the propagation characteristics of the two-phase flow. This simulation method was verified by a shock tube experiment involving pulverized coal and gas flow. The experimental and simulated results both demonstrate that the instantaneous ejection of pulverized coal and gas flow can form outburst shock waves. These are attenuated along the propagation direction, and the volume fraction of pulverized coal in the two-phase flow has a significant influence on attenuation of the outburst shock wave. Overall, the pulverized coal flow impedes the gas flow, dissipating a large fraction of the initial energy and blocking the propagation of the gas flow. Comparison of numerical results for different roadway types shows that the attenuation effect of T-type roadways is strongest. During propagation, shock-wave reflection and diffraction interact within the complex roadway geometries.

  20. Propagation characteristics of pulverized coal and gas two-phase flow during an outburst

    PubMed Central

    Zhou, Aitao; Wang, Kai; Fan, Lingpeng; Tao, Bo

    2017-01-01

Coal and gas outbursts are dynamic failures that can eject thousands of tons of pulverized coal, as well as considerable volumes of gas, into a limited working space within a short period. The two-phase flow of gas and pulverized coal that occurs during an outburst can lead to fatalities and destroy underground equipment. This article examines the interaction mechanism between pulverized coal and gas flow. Based on the role of gas expansion energy in the development stage of outbursts, a numerical simulation method is proposed for investigating the propagation characteristics of the two-phase flow. This simulation method was verified by a shock tube experiment involving pulverized coal and gas flow. The experimental and simulated results both demonstrate that the instantaneous ejection of pulverized coal and gas flow can form outburst shock waves. These are attenuated along the propagation direction, and the volume fraction of pulverized coal in the two-phase flow has a significant influence on attenuation of the outburst shock wave. Overall, the pulverized coal flow impedes the gas flow, dissipating a large fraction of the initial energy and blocking the propagation of the gas flow. Comparison of numerical results for different roadway types shows that the attenuation effect of T-type roadways is strongest. During propagation, shock-wave reflection and diffraction interact within the complex roadway geometries. PMID:28727738

  1. Adaptive finite volume methods with well-balanced Riemann solvers for modeling floods in rugged terrain: Application to the Malpasset dam-break flood (France, 1959)

    USGS Publications Warehouse

    George, D.L.

    2011-01-01

The simulation of advancing flood waves over rugged topography, by solving the shallow-water equations with well-balanced high-resolution finite volume methods and block-structured dynamic adaptive mesh refinement (AMR), is described and validated in this paper. The efficiency of block-structured AMR makes large-scale problems tractable, and allows the use of accurate and stable methods developed for solving general hyperbolic problems on quadrilateral grids. Features indicative of flooding in rugged terrain, such as advancing wet-dry fronts and non-stationary steady states due to balanced source terms from variable topography, present unique challenges and require modifications such as special Riemann solvers. A well-balanced Riemann solver for inundation and general (non-stationary) flow over topography is tested in this context. The difficulties of modeling floods in rugged terrain, and the rationale for and efficacy of using AMR and well-balanced methods, are presented. The algorithms are validated by simulating the Malpasset dam-break flood (France, 1959), which has served as a benchmark problem previously. Historical field data, laboratory model data and other numerical simulation results (computed on static fitted meshes) are shown for comparison. The methods are implemented in GEOCLAW, a subset of the open-source CLAWPACK software. All the software is freely available. Published in 2010 by John Wiley & Sons, Ltd.
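Well-balancing can be illustrated in 1D with hydrostatic reconstruction (Audusse-type) and a Rusanov flux, which preserves the "lake at rest" steady state over variable topography to machine precision; this is a minimal first-order sketch, far simpler than the GEOCLAW implementation:

```python
import numpy as np

g = 9.81  # gravitational acceleration, m/s^2

def hydrostatic_step(h, hu, b, dx, dt):
    """One first-order finite-volume step for the 1D shallow-water equations
    over topography b, using hydrostatic reconstruction with a Rusanov flux.
    Boundary cells are held fixed."""
    def phys_flux(h_, hu_):
        u_ = np.where(h_ > 0, hu_ / np.maximum(h_, 1e-12), 0.0)
        return np.array([hu_, hu_ * u_ + 0.5 * g * h_ * h_])

    u = np.where(h > 0, hu / np.maximum(h, 1e-12), 0.0)
    # Reconstructed depths at interface i+1/2 (left/right states):
    bmax = np.maximum(b[:-1], b[1:])
    hL = np.maximum(0.0, h[:-1] + b[:-1] - bmax)
    hR = np.maximum(0.0, h[1:] + b[1:] - bmax)
    uL, uR = u[:-1], u[1:]
    FL, FR = phys_flux(hL, hL * uL), phys_flux(hR, hR * uR)
    c = np.maximum(np.abs(uL) + np.sqrt(g * hL), np.abs(uR) + np.sqrt(g * hR))
    F = 0.5 * (FL + FR) - 0.5 * c * np.array([hR - hL, hR * uR - hL * uL])

    # Topography corrections that make "lake at rest" an exact steady state:
    corr_R = 0.5 * g * (h[1:-1] ** 2 - hL[1:] ** 2)    # cell i, interface i+1/2
    corr_L = 0.5 * g * (h[1:-1] ** 2 - hR[:-1] ** 2)   # cell i, interface i-1/2

    hn, hun = h.copy(), hu.copy()
    hn[1:-1] -= dt / dx * (F[0][1:] - F[0][:-1])
    hun[1:-1] -= dt / dx * ((F[1][1:] + corr_R) - (F[1][:-1] + corr_L))
    return hn, hun
```

Initializing a still pond (h + b constant, hu = 0) over a bottom bump and stepping this scheme leaves the surface flat to round-off, which is exactly the property a naive source-term discretization fails to deliver.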

  2. The strength and dislocation microstructure evolution in superalloy microcrystals

    NASA Astrophysics Data System (ADS)

    Hussein, Ahmed M.; Rao, Satish I.; Uchic, Michael D.; Parthasarathay, Triplicane A.; El-Awady, Jaafar A.

    2017-02-01

In this work, the evolution of the dislocation microstructure in single-crystal two-phase superalloy microcrystals under monotonic loading has been studied using the three-dimensional discrete dislocation dynamics (DDD) method. The DDD framework has been extended to properly handle the collective behavior of dislocations and their interactions with large collections of arbitrarily shaped precipitates. Few constraints are imposed on the initial distribution of the dislocations or the precipitates, and the extended DDD framework can support experimentally obtained precipitate geometries. Full tracking of the creation and destruction of anti-phase boundaries (APBs) is accounted for. The effects of the precipitate volume fraction, APB energy, precipitate size, and crystal size on the deformation of superalloy microcrystals have been quantified. Correlations between the precipitate microstructure and the dominant deformation features, such as dislocation looping versus precipitate shearing, are also discussed. It is shown that the mechanical strength is independent of the crystal size, increases linearly with the precipitate volume fraction, follows a near square-root relationship with the APB energy, and follows an inverse square-root relationship with the precipitate size. Finally, simulations starting with dislocation pair sources show a flow strength that is about one half of that predicted by simulations starting with single dislocation sources. The method developed can be used, with minimal extensions, to simulate dislocation microstructure evolution in general multiphase materials.
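The reported scaling trends (strength independent of crystal size, linear in precipitate volume fraction, near square-root in APB energy, inverse square-root in precipitate size) can be collected into a single hypothetical scaling relation; the prefactor and the exact combined functional form below are illustrative, not fitted values from the study:

```python
# Hypothetical scaling relation combining the trends reported above.
# The prefactor C and the multiplicative combination are illustrative
# assumptions; only the individual exponents come from the abstract.
import math

def microcrystal_strength(f, gamma_apb, d, C=1.0):
    """Strength estimate: linear in volume fraction f, ~sqrt(APB energy
    gamma_apb), ~1/sqrt(precipitate size d); crystal size does not enter."""
    return C * f * math.sqrt(gamma_apb / d)
```

Doubling the volume fraction doubles the estimate, quadrupling the APB energy doubles it, and quadrupling the precipitate size halves it, matching the stated trends.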

  3. Effect of particle volume fraction on the settling velocity of volcanic ash particles: insights from joint experimental and numerical simulations

    PubMed Central

    Del Bello, Elisabetta; Taddeucci, Jacopo; de’ Michieli Vitturi, Mattia; Scarlato, Piergiorgio; Andronico, Daniele; Scollo, Simona; Kueppers, Ulrich; Ricci, Tullio

    2017-01-01

Most of the current ash transport and dispersion models neglect particle-fluid (two-way) and particle-fluid plus particle-particle (four-way) reciprocal interactions during particle fallout from volcanic plumes. These interactions, a function of particle concentration in the plume, could play an important role, explaining, for example, discrepancies between observed and modelled ash deposits. Aiming at a more accurate prediction of volcanic ash dispersal and sedimentation, the settling of ash particles at particle volume fractions (ϕp) ranging from 10−7 to 10−3 was performed in laboratory experiments and reproduced by numerical simulations that take into account first the two-way and then the four-way coupling. Results show that the velocity of particles settling together can exceed the velocity of particles settling individually by up to 4 times for ϕp ~ 10−3. Comparisons between experimental and simulation results reveal that, during the sedimentation process, the settling velocity is largely enhanced by particle-fluid interactions but partly hindered by particle-particle interactions with increasing ϕp. Combining the experimental and numerical results, we provide an empirical model allowing correction of the settling velocity of particles of any size, density, and shape, as a function of ϕp. These corrections will impact volcanic plume modelling results as well as remote sensing retrieval techniques for plume parameters. PMID:28045056

  4. Effect of particle volume fraction on the settling velocity of volcanic ash particles: insights from joint experimental and numerical simulations.

    PubMed

    Del Bello, Elisabetta; Taddeucci, Jacopo; De' Michieli Vitturi, Mattia; Scarlato, Piergiorgio; Andronico, Daniele; Scollo, Simona; Kueppers, Ulrich; Ricci, Tullio

    2017-01-03

Most of the current ash transport and dispersion models neglect particle-fluid (two-way) and particle-fluid plus particle-particle (four-way) reciprocal interactions during particle fallout from volcanic plumes. These interactions, a function of particle concentration in the plume, could play an important role, explaining, for example, discrepancies between observed and modelled ash deposits. Aiming at a more accurate prediction of volcanic ash dispersal and sedimentation, the settling of ash particles at particle volume fractions (ϕp) ranging from 10−7 to 10−3 was performed in laboratory experiments and reproduced by numerical simulations that take into account first the two-way and then the four-way coupling. Results show that the velocity of particles settling together can exceed the velocity of particles settling individually by up to 4 times for ϕp ~ 10−3. Comparisons between experimental and simulation results reveal that, during the sedimentation process, the settling velocity is largely enhanced by particle-fluid interactions but partly hindered by particle-particle interactions with increasing ϕp. Combining the experimental and numerical results, we provide an empirical model allowing correction of the settling velocity of particles of any size, density, and shape, as a function of ϕp. These corrections will impact volcanic plume modelling results as well as remote sensing retrieval techniques for plume parameters.
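As a rough illustration of the kind of correction described in this record, the sketch below multiplies a Stokes terminal velocity by a generic concentration-dependent factor; the power-law form and its coefficients are placeholders, not the empirical model fitted in the study:

```python
# Stokes terminal velocity of a small sphere, plus a generic
# concentration-dependent correction factor. The (1 + a*phi**n) form and
# the values of a and n are illustrative placeholders, NOT the paper's
# fitted empirical model.
def stokes_velocity(d, rho_p, rho_f=1.2, mu=1.8e-5, g=9.81):
    """Terminal velocity (m/s) of a sphere of diameter d (m), particle
    density rho_p (kg/m^3), in air of density rho_f and viscosity mu."""
    return (rho_p - rho_f) * g * d * d / (18.0 * mu)

def corrected_velocity(v_stokes, phi, a=150.0, n=0.8):
    """Apply a hypothetical collective-settling enhancement factor for the
    two-way-coupling regime; a and n are made-up illustrative values."""
    return v_stokes * (1.0 + a * phi ** n)
```

At phi = 0 the correction is the identity, and any positive volume fraction enhances the velocity, mimicking the reported collective-settling trend.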

  5. First Principles Simulations of P-V-T Unreacted Equation of State of LLM-105

    NASA Astrophysics Data System (ADS)

    Manaa, Riad; Kuo, I.-Feng; Fried, Laurence

    2015-03-01

Equations of state (EOS) of unreacted energetic materials extending to high-pressure and high-temperature regimes are of particular interest since they provide fundamental information about the associated thermodynamic properties of these materials at extreme conditions. Very often, experimental and computational studies focus only on determining a pressure-volume relationship at ambient to moderate temperatures. Adding elevated-temperature data to construct a P-V-T EOS is highly desirable to extend the range of materials properties. Atomic-scale molecular dynamics simulations are particularly suited for such a construct since EOSs are the manifestation of the underlying atomic interactions. In this work, we report dispersion-corrected density functional theory calculations of the unreacted equation of state of the energetic material 2,6-diamino-3,5-dinitropyrazine-1-oxide (LLM-105). We performed large-scale constant-volume and constant-temperature molecular dynamics simulations for pressures ranging from ambient to 35 GPa, and temperatures ranging from 300 K to 1000 K. These calculations allowed us to construct an unreacted P-V-T EOS and obtain the bulk modulus for each P-V isotherm. We also report the thermal expansion coefficient of LLM-105 in the temperature range of this study. This work performed under the auspices of the U.S. Department of Energy Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
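A P-V isotherm is typically reduced to a bulk modulus by fitting an analytic equation of state; the third-order Birch-Murnaghan form below is a standard choice for this step. Whether it is the exact form used for LLM-105 is an assumption, and the parameter values in the check are illustrative:

```python
# Third-order Birch-Murnaghan isothermal EOS, a standard form for
# extracting a bulk modulus K0 and its pressure derivative K0p from
# P-V data along an isotherm. Parameter values here are illustrative,
# not the LLM-105 fit.
def birch_murnaghan(V, V0, K0, K0p):
    """Pressure (same units as K0) at volume V, given the zero-pressure
    volume V0, bulk modulus K0, and pressure derivative K0p."""
    eta = (V0 / V) ** (1.0 / 3.0)
    return 1.5 * K0 * (eta**7 - eta**5) * (
        1.0 + 0.75 * (K0p - 4.0) * (eta**2 - 1.0))
```

By construction P(V0) = 0 and the slope at V0 satisfies -V0 dP/dV = K0, which is how the fitted curve yields the bulk modulus for each isotherm.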

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jung, Hyunuk; Kum, Oyeon; Han, Youngyih, E-mail: youngyih@skku.edu

Purpose: In proton therapy, collisions between the patient and nozzle potentially occur because of the large nozzle structure and efforts to minimize the air gap. Thus, software was developed to predict such collisions between the nozzle and patient using treatment virtual simulation. Methods: Three-dimensional (3D) modeling of a gantry inner-floor, nozzle, and robotic-couch was performed using SolidWorks based on the manufacturer’s machine data. To obtain patient body information, a 3D-scanner was utilized right before CT scanning. Using the acquired images, a 3D-image of the patient’s body contour was reconstructed. The accuracy of the image was confirmed against the CT image of a humanoid phantom. The machine components and the virtual patient were combined on the treatment-room coordinate system, resulting in a virtual simulator. The simulator simulated the motion of its components such as rotation and translation of the gantry, nozzle, and couch in real scale. A collision, if any, was examined both in static and dynamic modes. The static mode assessed collisions only at fixed positions of the machine’s components, while the dynamic mode operated any time a component was in motion. A collision was identified if any voxels of two components, e.g., the nozzle and the patient or couch, overlapped when calculating volume locations. The event and collision point were visualized, and collision volumes were reported. Results: All components were successfully assembled, and the motions were accurately controlled. The 3D-shape of the phantom agreed with CT images within a deviation of 2 mm. Collision situations were simulated within minutes, and the results were displayed and reported. Conclusions: The developed software will be useful in improving patient safety and clinical efficiency of proton therapy.
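The voxel-overlap test described can be sketched as follows; the component names, the axis-aligned-box simplification, and the 10 mm voxel pitch are illustrative assumptions, not details of the published software:

```python
# Minimal sketch of a voxel-overlap collision test: each component
# (nozzle, couch, patient surface) is rasterized into a set of occupied
# voxel indices on a shared treatment-room grid, and a collision is any
# non-empty intersection of those sets.
import itertools
import math

def voxelize_box(lo, hi, voxel=10.0):
    """Occupied voxel indices of an axis-aligned box (coordinates in mm)."""
    ranges = [range(int(math.floor(l / voxel)), int(math.ceil(h / voxel)))
              for l, h in zip(lo, hi)]
    return set(itertools.product(*ranges))

def collision(voxels_a, voxels_b):
    """Overlapping voxels of two components; an empty set means no collision."""
    return voxels_a & voxels_b
```

Returning the overlapping voxels themselves (rather than a boolean) supports the reported behavior of visualizing the collision point and reporting collision volumes.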

  7. Performance analysis of the FDTD method applied to holographic volume gratings: Multi-core CPU versus GPU computing

    NASA Astrophysics Data System (ADS)

    Francés, J.; Bleda, S.; Neipp, C.; Márquez, A.; Pascual, I.; Beléndez, A.

    2013-03-01

The finite-difference time-domain (FDTD) method allows electromagnetic field distribution analysis as a function of time and space. The method is applied to analyze holographic volume gratings (HVGs) for the near-field distribution at optical wavelengths. Usually, this application requires the simulation of wide areas, which implies more memory and processing time. In this work, we propose a specific implementation of the FDTD method including several add-ons for a precise simulation of optical diffractive elements. Values in the near-field region are computed considering the illumination of the grating by means of a plane wave for different angles of incidence and including absorbing boundaries as well. We compare the results obtained by FDTD with those obtained using a matrix method (MM) applied to diffraction gratings. In addition, we have developed two optimized versions of the algorithm, for both CPU and GPU, in order to analyze the improvement of using the new NVIDIA Fermi GPU architecture versus a highly tuned multi-core CPU as a function of the simulation size. In particular, the optimized CPU implementation takes advantage of the arithmetic and data transfer streaming SIMD (single instruction multiple data) extensions (SSE) included explicitly in the code and also of multi-threading by means of OpenMP directives. A good agreement between the results obtained using both FDTD and MM methods is obtained, thus validating our methodology. Moreover, the performance of the GPU is compared to the SSE+OpenMP CPU implementation, and it is quantitatively determined that a highly optimized CPU program can be competitive for a wider range of simulation sizes, whereas GPU computing becomes more powerful for large-scale simulations.
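The core of the FDTD method is a leapfrog update of electric and magnetic fields on a staggered (Yee) grid. A minimal one-dimensional version in normalized units, with a soft Gaussian source — an illustration of the update structure, not the authors' 2D HVG code — looks like:

```python
# One-dimensional Yee-grid FDTD update in vacuum with normalized field
# units. With Courant number S = c*dt/dx = 1 the 1D scheme is stable and
# exactly dispersionless. The grid size, step count, and source are
# illustrative choices.
import math

def fdtd_1d(nz=200, nt=150, src=50):
    ez = [0.0] * nz   # electric field samples
    hy = [0.0] * nz   # magnetic field samples, staggered half a cell
    for n in range(nt):
        for k in range(nz - 1):            # update H from the curl of E
            hy[k] += ez[k + 1] - ez[k]
        for k in range(1, nz):             # update E from the curl of H
            ez[k] += hy[k] - hy[k - 1]
        ez[src] += math.exp(-((n - 30.0) / 10.0) ** 2)  # soft Gaussian source
    return ez
```

The left boundary cell is never updated, so it behaves as a perfect electric conductor; absorbing boundaries, as used in the paper, would replace that with a termination that suppresses reflections.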

  8. A new plant chamber facility, PLUS, coupled to the atmosphere simulation chamber SAPHIR

    NASA Astrophysics Data System (ADS)

    Hohaus, T.; Kuhn, U.; Andres, S.; Kaminski, M.; Rohrer, F.; Tillmann, R.; Wahner, A.; Wegener, R.; Yu, Z.; Kiendler-Scharr, A.

    2016-03-01

A new PLant chamber Unit for Simulation (PLUS) for use with the atmosphere simulation chamber SAPHIR (Simulation of Atmospheric PHotochemistry In a large Reaction Chamber) has been built and characterized at the Forschungszentrum Jülich GmbH, Germany. The PLUS chamber is an environmentally controlled flow-through plant chamber. Inside PLUS the natural blend of biogenic emissions of trees is mixed with synthetic air and transferred to the SAPHIR chamber, where the atmospheric chemistry and the impact of biogenic volatile organic compounds (BVOCs) can be studied in detail. In PLUS all important environmental parameters (e.g., temperature, photosynthetically active radiation (PAR), soil relative humidity (RH)) are well controlled. The gas exchange volume of 9.32 m3 which encloses the stem and the leaves of the plants is constructed such that gases are exposed to only fluorinated ethylene propylene (FEP) Teflon film and other Teflon surfaces to minimize any potential losses of BVOCs in the chamber. Solar radiation is simulated using 15 light-emitting diode (LED) panels, which have an emission strength up to 800 µmol m-2 s-1. Results of the initial characterization experiments are presented in detail. Background concentrations, mixing inside the gas exchange volume, and transfer rate of volatile organic compounds (VOCs) through PLUS under different humidity conditions are explored. Typical plant characteristics such as light- and temperature-dependent BVOC emissions are studied using six Quercus ilex trees and compared to previous studies. Results of an initial ozonolysis experiment of BVOC emissions from Quercus ilex at typical atmospheric concentrations inside SAPHIR are presented to demonstrate a typical experimental setup and the utility of the newly added plant chamber.

  9. Osmosis-Based Pressure Generation: Dynamics and Application

    PubMed Central

    Li, Suyi; Billeh, Yazan N.; Wang, K. W.; Mayer, Michael

    2014-01-01

    This paper describes osmotically-driven pressure generation in a membrane-bound compartment while taking into account volume expansion, solute dilution, surface area to volume ratio, membrane hydraulic permeability, and changes in osmotic gradient, bulk modulus, and degree of membrane fouling. The emphasis lies on the dynamics of pressure generation; these dynamics have not previously been described in detail. Experimental results are compared to and supported by numerical simulations, which we make accessible as an open source tool. This approach reveals unintuitive results about the quantitative dependence of the speed of pressure generation on the relevant and interdependent parameters that will be encountered in most osmotically-driven pressure generators. For instance, restricting the volume expansion of a compartment allows it to generate its first 5 kPa of pressure seven times faster than without a restraint. In addition, this dynamics study shows that plants are near-ideal osmotic pressure generators, as they are composed of many small compartments with large surface area to volume ratios and strong cell wall reinforcements. Finally, we demonstrate two applications of an osmosis-based pressure generator: actuation of a soft robot and continuous volume delivery over long periods of time. Both applications do not need an external power source but rather take advantage of the energy released upon watering the pressure generators. PMID:24614529
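The dynamics described (water influx against a growing back-pressure, solute dilution, membrane permeability, and compartment stiffness) can be sketched as a simple forward-Euler integration; all parameter values and the linear-elastic pressure law below are illustrative placeholders, not the paper's measured values or its published model:

```python
# Forward-Euler sketch of osmotic pressure build-up in a closed
# compartment: water influx dV/dt = Lp*A*(dpi - P), wall pressure from an
# effective bulk modulus P = B*(V - V0)/V0, and solute dilution
# dpi = pi0*V0/V. All parameter values are illustrative placeholders.
def pressurize(pi0=500e3, B=50e6, Lp=1e-12, A_per_V=1e3, V0=1e-6,
               dt=0.1, t_end=2000.0):
    V, t, trace = V0, 0.0, []
    A = A_per_V * V0                  # area from surface-area-to-volume ratio
    while t < t_end:
        P = B * (V - V0) / V0         # elastic pressure of the compartment
        dpi = pi0 * V0 / V            # osmotic gradient, diluted by influx
        V += Lp * A * (dpi - P) * dt  # water flux across the membrane
        t += dt
        trace.append(P)
    return trace
```

The pressure rises monotonically toward (slightly below) the initial osmotic gradient; making the compartment stiffer (larger B) or increasing the surface-area-to-volume ratio shortens the rise time, consistent with the trends the abstract reports.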

  10. Quantifying Density Fluctuations in Volumes of All Shapes and Sizes Using Indirect Umbrella Sampling

    NASA Astrophysics Data System (ADS)

    Patel, Amish J.; Varilly, Patrick; Chandler, David; Garde, Shekhar

    2011-10-01

Water density fluctuations are an important statistical mechanical observable and are related to many-body correlations, as well as hydrophobic hydration and interactions. Local water density fluctuations at a solid-water surface have also been proposed as a measure of its hydrophobicity. These fluctuations can be quantified by calculating the probability, Pv(N), of observing N waters in a probe volume of interest v. When v is large, calculating Pv(N) using molecular dynamics simulations is challenging, as the probability of observing very few waters is exponentially small, and the standard procedure for overcoming this problem (umbrella sampling in N) leads to undesirable impulsive forces. Patel et al. (J. Phys. Chem. B 114:1632, 2010) have recently developed an indirect umbrella sampling (INDUS) method, that samples a coarse-grained particle number to obtain Pv(N) in cuboidal volumes. Here, we present and demonstrate an extension of that approach to volumes of other basic shapes, like spheres and cylinders, as well as to collections of such volumes. We further describe the implementation of INDUS in the NPT ensemble and calculate Pv(N) distributions over a broad range of pressures. Our method may be of particular interest in characterizing the hydrophobicity of interfaces of proteins, nanotubes and related systems.
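The observable itself, before any biased sampling, is just a histogram of particle counts in the probe volume; INDUS adds umbrella sampling on a coarse-grained count to reach the exponentially rare low-N tail. A direct (unbiased) sketch for a spherical probe volume, with made-up uniform coordinates standing in for simulation frames:

```python
# Direct estimate of Pv(N): histogram the number of particles inside a
# spherical probe volume over many configurations. INDUS itself biases a
# coarse-grained particle number to sample the rare-N tail; this sketch
# only shows the unbiased observable.
import random
from collections import Counter

def count_in_sphere(coords, center, radius):
    r2 = radius * radius
    cx, cy, cz = center
    return sum(1 for (x, y, z) in coords
               if (x - cx)**2 + (y - cy)**2 + (z - cz)**2 <= r2)

def pv_of_n(frames, center, radius):
    """Normalized histogram {N: P(N)} over a list of coordinate frames."""
    counts = Counter(count_in_sphere(f, center, radius) for f in frames)
    total = sum(counts.values())
    return {n: c / total for n, c in sorted(counts.items())}
```

For uniform ideal-gas-like coordinates the mean of the distribution is simply the particle density times the sphere volume; interacting water shows the non-Gaussian low-N tail the method targets.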

  11. Introducing the Illustris Project: simulating the coevolution of dark and visible matter in the Universe

    NASA Astrophysics Data System (ADS)

    Vogelsberger, Mark; Genel, Shy; Springel, Volker; Torrey, Paul; Sijacki, Debora; Xu, Dandan; Snyder, Greg; Nelson, Dylan; Hernquist, Lars

    2014-10-01

We introduce the Illustris Project, a series of large-scale hydrodynamical simulations of galaxy formation. The highest resolution simulation, Illustris-1, covers a volume of (106.5 Mpc)^3, has a dark matter mass resolution of 6.26 × 10^6 M⊙, and an initial baryonic matter mass resolution of 1.26 × 10^6 M⊙. At z = 0 gravitational forces are softened on scales of 710 pc, and the smallest hydrodynamical gas cells have an extent of 48 pc. We follow the dynamical evolution of 2 × 1820^3 resolution elements and in addition passively evolve 1820^3 Monte Carlo tracer particles, reaching a total particle count of more than 18 billion. The galaxy formation model includes: primordial and metal-line cooling with self-shielding corrections, stellar evolution, stellar feedback, gas recycling, chemical enrichment, supermassive black hole growth, and feedback from active galactic nuclei. Here we describe the simulation suite, and contrast basic predictions of our model for the present-day galaxy population with observations of the local universe. At z = 0 our simulation volume contains about 40 000 well-resolved galaxies covering a diverse range of morphologies and colours including early-type, late-type and irregular galaxies. The simulation reproduces reasonably well the cosmic star formation rate density, the galaxy luminosity function, and baryon conversion efficiency at z = 0. It also qualitatively captures the impact of galaxy environment on the red fractions of galaxies. The internal velocity structure of selected well-resolved disc galaxies obeys the stellar and baryonic Tully-Fisher relation together with flat circular velocity curves. In the well-resolved regime, the simulation reproduces the observed mix of early-type and late-type galaxies. Our model predicts a halo mass dependent impact of baryonic effects on the halo mass function and the masses of haloes caused by feedback from supernovae and active galactic nuclei.

  12. Finite volume analysis of temperature effects induced by active MRI implants: 2. Defects on active MRI implants causing hot spots.

    PubMed

    Busch, Martin H J; Vollmann, Wolfgang; Grönemeyer, Dietrich H W

    2006-05-26

Active magnetic resonance imaging implants, for example stents, stent grafts or vena cava filters, are constructed as wireless inductively coupled transmit and receive coils. They are built as a resonator tuned to the Larmor frequency of a magnetic resonance system. The resonator can be added to or incorporated within the implant. This technology can counteract the shielding caused by eddy currents inside the metallic implant structure. This may allow diagnostic information to be obtained from the implant lumen (in-stent stenosis or thrombosis, for example). The electromagnetic rf-pulses applied during magnetic resonance imaging induce a current in the circuit path of the resonator. A partial rupture of the circuit path provoked by material fatigue, or a broken wire with touching surfaces, can set up a relatively high resistance over a very short distance, which may behave as a point-like power source, a hot spot, inside the body part in which the resonator is implanted. This local power loss inside a small volume can reach 1/4 of the total power loss of the intact resonating circuit, which itself is proportional to the product of the resonator volume and the quality factor and depends as well on the orientation of the resonator with respect to the main magnetic field and on the imaging sequence the resonator is exposed to. First, an analytical solution of a hot spot for thermal equilibrium is described. This analytical solution with a definite hot spot power loss represents the worst-case scenario for thermal equilibrium inside a homogeneous medium without cooling effects. Starting from these worst-case assumptions, additional, more realistic conditions that may make the results less critical are considered in a numerical simulation. The analytical solution as well as the numerical simulations use the experimental experience of the maximum hot spot power loss of implanted resonators with a definite volume during magnetic resonance imaging investigations.
The finite volume analysis calculates the time-developing temperature maps for the model of a broken linear metallic wire embedded in tissue. Half of the total hot spot power loss is assumed to diffuse into both wire parts at the location of a defect. The energy is distributed from there by heat conduction. Additionally, the effect of blood perfusion and blood flow is accounted for in some simulations, because the simultaneous appearance of all worst-case conditions, especially the absence of blood perfusion and blood flow near the hot spot, is very unlikely for vessel implants. The analytical solution as worst-case scenario, as well as the finite volume analysis for near-worst-case situations, show non-negligible volumes with critical temperature increases for part of the modeled hot spot situations. MR investigations with a high rf-pulse density lasting less than a minute can establish volumes of several cubic millimeters with temperature increases high enough to start cell destruction. Longer exposure times can involve volumes larger than 100 mm3. Even temperature increases in the range of thermal ablation are reached for substantial volumes. MR sequence exposure time and hot spot power loss are the primary factors influencing the volume with critical temperature increases. Wire radius and wire material, as well as the physiological parameters blood perfusion and blood flow inside larger vessels, reduce the volume with critical temperature increases, but do not exclude a volume with critical tissue heating for resonators with a large product of resonator volume and quality factor. The worst-case scenario assumes thermal equilibrium for a hot spot embedded in homogeneous tissue without any cooling due to blood perfusion or flow. The finite volume analysis can calculate the results for conditions near to, but not exactly at, the worst case. For both cases a substantial volume can reach a critical temperature increase in a short time.
The analytical solution, as absolute worst case, points out that resonators with a small product of inductance volume and quality factor (Q V(ind) < 2 cm3) are definitely safe. Stents for coronary vessels or resonators used as tracking devices for interventional procedures therefore have no risk of high temperature increases. The finite volume analysis shows that even conditions not close to the worst case reach physiologically critical temperature increases for implants with a large product of inductance volume and quality factor (Q V(ind) > 10 cm3). Such resonators exclude patients from exactly the MRI investigation these devices are made for.
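The steady-state point-source conduction result ΔT(r) = P/(4πkr) matches the no-perfusion, homogeneous-medium idealization described above; whether the paper's analytical solution takes exactly this form is an assumption, and the tissue conductivity and temperature threshold below are illustrative values:

```python
# Steady-state temperature rise around a point-like power source embedded
# in homogeneous tissue with no perfusion: dT(r) = P / (4*pi*k*r).
# Conductivity k and the critical threshold are illustrative values.
import math

def temperature_rise(P, r, k=0.5):
    """Temperature increase (K) at distance r (m) from a hot spot of
    power P (W) in a medium of thermal conductivity k (W/m/K)."""
    return P / (4.0 * math.pi * k * r)

def critical_radius(P, dT_crit=5.0, k=0.5):
    """Radius inside which the steady-state rise exceeds dT_crit (K)."""
    return P / (4.0 * math.pi * k * dT_crit)
```

With these placeholder numbers, a 0.1 W hot spot exceeds a 5 K rise out to a few millimeters, i.e., a heated volume on the order of cubic millimeters to over 100 mm3, the scale the abstract discusses.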

  13. Finite volume analysis of temperature effects induced by active MRI implants: 2. Defects on active MRI implants causing hot spots

    PubMed Central

    Busch, Martin HJ; Vollmann, Wolfgang; Grönemeyer, Dietrich HW

    2006-01-01

Background Active magnetic resonance imaging implants, for example stents, stent grafts or vena cava filters, are constructed as wireless inductively coupled transmit and receive coils. They are built as a resonator tuned to the Larmor frequency of a magnetic resonance system. The resonator can be added to or incorporated within the implant. This technology can counteract the shielding caused by eddy currents inside the metallic implant structure. This may allow diagnostic information to be obtained from the implant lumen (in-stent stenosis or thrombosis, for example). The electromagnetic rf-pulses applied during magnetic resonance imaging induce a current in the circuit path of the resonator. A partial rupture of the circuit path provoked by material fatigue, or a broken wire with touching surfaces, can set up a relatively high resistance over a very short distance, which may behave as a point-like power source, a hot spot, inside the body part in which the resonator is implanted. This local power loss inside a small volume can reach ¼ of the total power loss of the intact resonating circuit, which itself is proportional to the product of the resonator volume and the quality factor and depends as well on the orientation of the resonator with respect to the main magnetic field and on the imaging sequence the resonator is exposed to. Methods First, an analytical solution of a hot spot for thermal equilibrium is described. This analytical solution with a definite hot spot power loss represents the worst-case scenario for thermal equilibrium inside a homogeneous medium without cooling effects. Starting from these worst-case assumptions, additional, more realistic conditions that may make the results less critical are considered in a numerical simulation. The analytical solution as well as the numerical simulations use the experimental experience of the maximum hot spot power loss of implanted resonators with a definite volume during magnetic resonance imaging investigations.
The finite volume analysis calculates the time-developing temperature maps for the model of a broken linear metallic wire embedded in tissue. Half of the total hot spot power loss is assumed to diffuse into both wire parts at the location of a defect. The energy is distributed from there by heat conduction. Additionally, the effect of blood perfusion and blood flow is accounted for in some simulations, because the simultaneous appearance of all worst-case conditions, especially the absence of blood perfusion and blood flow near the hot spot, is very unlikely for vessel implants. Results The analytical solution as worst-case scenario, as well as the finite volume analysis for near-worst-case situations, show non-negligible volumes with critical temperature increases for part of the modeled hot spot situations. MR investigations with a high rf-pulse density lasting less than a minute can establish volumes of several cubic millimeters with temperature increases high enough to start cell destruction. Longer exposure times can involve volumes larger than 100 mm3. Even temperature increases in the range of thermal ablation are reached for substantial volumes. MR sequence exposure time and hot spot power loss are the primary factors influencing the volume with critical temperature increases. Wire radius and wire material, as well as the physiological parameters blood perfusion and blood flow inside larger vessels, reduce the volume with critical temperature increases, but do not exclude a volume with critical tissue heating for resonators with a large product of resonator volume and quality factor. Conclusion The worst-case scenario assumes thermal equilibrium for a hot spot embedded in homogeneous tissue without any cooling due to blood perfusion or flow. The finite volume analysis can calculate the results for conditions near to, but not exactly at, the worst case. For both cases a substantial volume can reach a critical temperature increase in a short time.
The analytical solution, as absolute worst case, points out that resonators with a small product of inductance volume and quality factor (Q Vind < 2 cm3) are definitely safe. Stents for coronary vessels or resonators used as tracking devices for interventional procedures therefore have no risk of high temperature increases. The finite volume analysis shows that even conditions not close to the worst case reach physiologically critical temperature increases for implants with a large product of inductance volume and quality factor (Q Vind > 10 cm3). Such resonators exclude patients from exactly the MRI investigation these devices are made for. PMID:16729878

  14. Hydrodynamic Simulations and Tomographic Reconstructions of the Intergalactic Medium

    NASA Astrophysics Data System (ADS)

    Stark, Casey William

    The Intergalactic Medium (IGM) is the dominant reservoir of matter in the Universe from which the cosmic web and galaxies form. The structure and physical state of the IGM provides insight into the cosmological model of the Universe, the origin and timeline of the reionization of the Universe, as well as being an essential ingredient in our understanding of galaxy formation and evolution. Our primary handle on this information is a signal known as the Lyman-alpha forest (or Ly-alpha forest) -- the collection of absorption features in high-redshift sources due to intervening neutral hydrogen, which scatters HI Ly-alpha photons out of the line of sight. The Ly-alpha forest flux traces density fluctuations at high redshift and at moderate overdensities, making it an excellent tool for mapping large-scale structure and constraining cosmological parameters. Although the computational methodology for simulating the Ly-alpha forest has existed for over a decade, we are just now approaching the scale of computing power required to simultaneously capture large cosmological scales and the scales of the smallest absorption systems. My thesis focuses on using simulations at the edge of modern computing to produce precise predictions of the statistics of the Ly-alpha forest and to better understand the structure of the IGM. In the first part of my thesis, I review the state of hydrodynamic simulations of the IGM, including pitfalls of the existing under-resolved simulations. Our group developed a new cosmological hydrodynamics code to tackle the computational challenge, and I developed a distributed analysis framework to compute flux statistics from our simulations. I present flux statistics derived from a suite of our large hydrodynamic simulations and demonstrate convergence to the per cent level. I also compare flux statistics derived from simulations using different discretizations and hydrodynamic schemes (Eulerian finite volume vs. 
smoothed particle hydrodynamics) and discuss differences in their convergence behavior, their overall agreement, and the implications for cosmological constraints. In the second part of my thesis, I present a tomographic reconstruction method that allows us to make 3D maps of the IGM with Mpc resolution. In order to make reconstructions of large surveys computationally feasible, I developed a new Wiener Filter application with an algorithm specialized to our problem, which significantly reduces the space and time complexity compared to previous implementations. I explore two scientific applications of the maps: finding protoclusters by searching the maps for large, contiguous regions of low flux and finding cosmic voids by searching the maps for regions of high flux. Using a large N-body simulation, I identify and characterize both protoclusters and voids at z = 2.5, in the middle of the redshift range being mapped by ongoing surveys. I provide simple methods for identifying protocluster and void candidates in the tomographic flux maps, and then test them on mock surveys and reconstructions. I present forecasts for sample purity and completeness and other scientific applications of these large, high-redshift objects.
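In a basis where the signal and noise covariances are diagonal (e.g., independent Fourier modes), the Wiener filter m = S(S+N)⁻¹d reduces to an elementwise shrinkage of each data mode. The sketch below is a one-dimensional caricature of that map-making step, not the specialized algorithm developed in the thesis:

```python
# Wiener filtering when signal and noise covariances are simultaneously
# diagonal: each data mode d_i is shrunk by s_i / (s_i + n_i), the scalar
# form of m = S (S + N)^{-1} d. Illustrative sketch only.
def wiener_filter(data, signal_var, noise_var):
    """Return the filtered modes; high signal-to-noise modes pass nearly
    unchanged, noise-dominated modes are suppressed toward zero."""
    return [s / (s + n) * d for d, s, n in zip(data, signal_var, noise_var)]
```

The general tomographic problem is harder precisely because the covariances are not simultaneously diagonal in any convenient basis, which is what motivates a specialized solver with reduced space and time complexity.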

  15. Building Simulation Modelers are we big-data ready?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanyal, Jibonananda; New, Joshua Ryan

    Recent advances in computing and sensor technologies have pushed the amount of data we collect or generate to limits previously unheard of. Sub-minute resolution data from dozens of channels is becoming increasingly common and is expected to increase with the prevalence of non-intrusive load monitoring. Experts are running larger building simulation experiments and are faced with an increasingly complex data set to analyze and derive meaningful insight from. This paper focuses on the data management challenges that building modeling experts may face in data collected from a large array of sensors, or generated from running a large number of building energy/performance simulations. The paper highlights the technical difficulties that were encountered and overcome in order to run 3.5 million EnergyPlus simulations on supercomputers and generate over 200 TB of simulation output. This extreme case involved development of technologies and insights that will be beneficial to modelers in the immediate future. The paper discusses different database technologies (including relational databases, columnar storage, and schema-less Hadoop) in order to contrast the advantages and disadvantages of employing each for storage of EnergyPlus output. Scalability, analysis requirements, and the adaptability of these database technologies are discussed. Additionally, unique attributes of EnergyPlus output are highlighted which make data entry non-trivial for multiple simulations. Practical experience regarding cost-effective strategies for big-data storage is provided. The paper also discusses network performance issues when transferring large amounts of data across a network to different computing devices. Practical issues involving lag, bandwidth, and methods for synchronizing or transferring logical portions of the data are presented. A cornerstone of big data is its use for analytics; data is useless unless information can be meaningfully derived from it. In addition to technical aspects of managing big data, the paper details design of experiments in anticipation of large volumes of data. The cost of re-reading output into an analysis program is elaborated, and analysis techniques that perform analysis in situ with the simulations as they run are discussed. The paper concludes with an example and elaboration of the tipping point where it becomes more expensive to store the output than to re-run a set of simulations.
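    The storage-versus-re-run tipping point discussed above can be sketched with a simple cost comparison. All cost figures below (price per TB-month of storage, CPU-hours per simulation, price per CPU-hour) are hypothetical placeholders, not values from the paper.

```python
# Sketch of the tipping point where storing simulation output becomes
# more expensive than re-running the simulations. All prices are
# hypothetical placeholders.

def storage_cost(tb, months, usd_per_tb_month=10.0):
    """Cumulative cost of keeping `tb` terabytes for `months` months."""
    return tb * months * usd_per_tb_month

def rerun_cost(n_sims, cpu_hours_per_sim=2.0, usd_per_cpu_hour=0.05):
    """One-off cost of re-running the full simulation set."""
    return n_sims * cpu_hours_per_sim * usd_per_cpu_hour

def tipping_point_months(tb, n_sims):
    """First month at which keeping the output costs more than re-running."""
    month = 1
    while storage_cost(tb, month) <= rerun_cost(n_sims):
        month += 1
    return month

# e.g. 200 TB of output from 3.5 million simulations, as in the paper
months = tipping_point_months(200, 3_500_000)
```

    Under these made-up prices, storing 200 TB becomes more expensive than re-running the 3.5 million simulations after roughly 15 years; real prices would shift the crossover substantially.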

  16. Transverse slot antennas for high field MRI

    PubMed Central

    Lattanzi, Riccardo; Lakshmanan, Karthik; Brown, Ryan; Deniz, Cem M.; Sodickson, Daniel K.; Collins, Christopher M.

    2018-01-01

    Purpose: Introduce a novel coil design using an electrically long, transversely oriented slot in a conductive sheet. Theory and Methods: Theoretical considerations, numerical simulations, and experimental measurements are presented for transverse slot antennas as compared with electric dipole antennas. Results: Simulations show improved central and average transmit and receive efficiency, as well as larger coverage in the transverse plane, for a single slot as compared to a single dipole element. Experiments on a body phantom confirm the simulation results for a slot antenna relative to a dipole, demonstrating a large region of relatively high sensitivity and homogeneity. Images in a human subject also show a large imaging volume for a single slot and a six-slot antenna array. High central transmit efficiency was observed for slot arrays relative to dipole arrays. Conclusion: Transverse slots can exhibit improved sensitivity and a larger field of view compared with traditional conductive dipoles. Simulations and experiments indicate high potential for slot antennas in high field MRI. Magn Reson Med 80:1233–1242, 2018. © 2018 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. This is an open access article under the terms of the Creative Commons Attribution NonCommercial License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited and is not used for commercial purposes. PMID:29388250

  17. Galaxy-halo alignments in the Horizon-AGN cosmological hydrodynamical simulation

    NASA Astrophysics Data System (ADS)

    Chisari, N. E.; Koukoufilippas, N.; Jindal, A.; Peirani, S.; Beckmann, R. S.; Codis, S.; Devriendt, J.; Miller, L.; Dubois, Y.; Laigle, C.; Slyz, A.; Pichon, C.

    2017-11-01

    Intrinsic alignments of galaxies are a significant astrophysical systematic affecting cosmological constraints from weak gravitational lensing. Obtaining numerical predictions from hydrodynamical simulations of expected survey volumes is expensive, and a cheaper alternative relies on populating large dark-matter-only simulations with accurate models of alignments calibrated on smaller hydrodynamical runs. This requires connecting the shapes and orientations of galaxies to those of dark matter haloes and to the large-scale structure. In this paper, we characterize galaxy-halo alignments in the Horizon-AGN cosmological hydrodynamical simulation. We compare the shapes and orientations of galaxies in the redshift range of 0 < z < 3 to those of their embedding dark matter haloes, and to the matching haloes of a twin dark-matter-only run with identical initial conditions. We find that galaxy ellipticities, in general, cannot be predicted directly from halo ellipticities. The mean misalignment angle between the minor axis of a galaxy and its embedding halo is a function of halo mass, with residuals arising from the dependence of alignment on galaxy type, but not on environment. Haloes are much more strongly aligned among themselves than galaxies, and they decrease their alignment towards low redshift. Galaxy alignments compete with this effect, as galaxies tend to increase their alignment with haloes towards low redshift. We discuss the implications of these results for current halo models of intrinsic alignments and suggest several avenues for improvement.

  18. A Universe of ultradiffuse galaxies: theoretical predictions from ΛCDM simulations

    NASA Astrophysics Data System (ADS)

    Rong, Yu; Guo, Qi; Gao, Liang; Liao, Shihong; Xie, Lizhi; Puzia, Thomas H.; Sun, Shuangpeng; Pan, Jun

    2017-10-01

    A particular population of galaxies has drawn much interest recently: galaxies as faint as typical dwarf galaxies but with sizes as large as those of L* galaxies, the so-called ultradiffuse galaxies (UDGs). The lack of tidal features of UDGs in dense environments suggests that their host haloes are perhaps as massive as that of the Milky Way. On the other hand, galaxy formation efficiency should be much higher in haloes of such masses. Here, we use the model galaxy catalogue generated by populating two large simulations, the Millennium-II cosmological simulation and the Phoenix simulations of nine big clusters, with a semi-analytic galaxy formation model. This model reproduces remarkably well the observed properties of UDGs in nearby clusters, including their abundance, profile, colour, and morphology. We search for UDG candidates using public data and find two UDG candidates in our Local Group and 23 in our Local Volume, in excellent agreement with the model predictions. We demonstrate that UDGs are genuine dwarf galaxies, formed in haloes of ~10^10 M⊙. It is the combination of the late formation times and high spins of the host haloes that results in the spatially extended nature of this particular population. The lack of tidal disruption features of UDGs in clusters can also be explained by their late infall times.

  19. First Order Statistics of Speckle around a Scatterer Volume Density Edge and Edge Detection in Ultrasound Images.

    NASA Astrophysics Data System (ADS)

    Li, Yue

    1990-01-01

    Ultrasonic imaging plays an important role in medical imaging. But the images exhibit a granular structure, commonly known as speckle. The speckle tends to mask the presence of low-contrast lesions and reduces the ability of a human observer to resolve fine details. Our interest in this research is to examine the problem of edge detection and come up with methods for improving the visualization of organ boundaries and tissue inhomogeneity edges. An edge in an image can be formed either by acoustic impedance change or by scatterer volume density change (or both). The echo produced from these two kinds of edges has different properties. In this work, it has been proved that the echo from a scatterer volume density edge is the Hilbert transform of the echo from a rough impedance boundary (except for a constant) under certain conditions. This result can be used for choosing the correct signal to transmit to optimize the performance of edge detectors and characterizing an edge. The signal to noise ratio of the echo produced by a scatterer volume density edge is also obtained. It is found that: (1) By transmitting a signal with high bandwidth ratio and low center frequency, one can obtain a higher signal to noise ratio. (2) For large area edges, the farther the transducer is from the edge, the larger is the signal to noise ratio. But for small area edges, the nearer the transducer is to the edge, the larger is the signal to noise ratio. These results enable us to maximize the signal to noise ratio by adjusting these parameters. (3) The signal to noise ratio is not only related to the ratio of scatterer volume densities at the edge, but also related to the absolute value of scatterer volume densities. Some of these results have been proved through simulation and experiment. Different edge detection methods have been used to detect simulated scatterer volume density edges to compare their performance. 
A so-called interlaced array method has been developed for speckle reduction in the images formed by synthetic aperture focussing technique, and experiments have been done to evaluate its performance.
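    The Hilbert-transform relation stated above can be illustrated numerically. The sketch below uses an FFT-based discrete Hilbert transform and an arbitrary Gaussian-modulated pulse standing in for the impedance-boundary echo; the sampling rate and centre frequency are illustrative choices, not values from the thesis.

```python
import numpy as np

def hilbert_transform(x):
    """Discrete Hilbert transform via the FFT (numpy-only sketch)."""
    X = np.fft.fft(x)
    H = -1j * np.sign(np.fft.fftfreq(len(x)))  # -i * sign(frequency) filter
    return np.real(np.fft.ifft(X * H))

fs = 50e6                          # sampling rate, arbitrary
t = np.arange(0, 4e-6, 1 / fs)
f0 = 5e6                           # pulse centre frequency, arbitrary
# Gaussian-modulated pulse standing in for the impedance-boundary echo
impedance_echo = (np.exp(-((t - 2e-6) ** 2) / (2 * (0.2e-6) ** 2))
                  * np.cos(2 * np.pi * f0 * t))
# Per the relation above, the density-edge echo is its Hilbert transform
density_edge_echo = hilbert_transform(impedance_echo)
# The two echoes share an envelope, differing only in phase
envelope = np.sqrt(impedance_echo ** 2 + density_edge_echo ** 2)
```

    Applying the transform twice returns the negated original signal (for a zero-mean pulse), which is a quick self-check on the implementation.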

  20. A computer simulation of free-volume distributions and related structural properties in a model lipid bilayer.

    PubMed Central

    Xiang, T X

    1993-01-01

    A novel combined approach of molecular dynamics (MD) and Monte Carlo simulations is developed to calculate various free-volume distributions as a function of position in a lipid bilayer membrane at 323 K. The model bilayer consists of 2 x 100 chain molecules, with each chain molecule having 15 carbon segments and one head group and subject to forces restricting bond stretching, bending, and torsional motions. At a surface density of 30 Å²/chain molecule, the probability density of finding effective free volume available to spherical permeants displays a distribution with two exponential components. Both pre-exponential factors, p1 and p2, remain roughly constant in the highly ordered chain region, with average values of 0.012 and 0.00039 Å⁻³, respectively, and increase to 0.049 and 0.0067 Å⁻³ at the mid-plane. The first characteristic cavity size, V1, is only weakly dependent on position in the bilayer interior, with an average value of 3.4 Å³, while the second characteristic cavity size, V2, varies more dramatically, from a plateau value of 12.9 Å³ in the highly ordered chain region to 9.0 Å³ in the center of the bilayer. The mean cavity shape is described in terms of a probability distribution for the angle at which the test permeant is in contact with one of the chain segments in the bilayer without overlapping any of the others. The results show that (a) free volume is elongated in the highly ordered chain region, with its long axis normal to the bilayer interface, approaching spherical symmetry in the center of the bilayer, and (b) small free volume is more elongated than large free volume. The order and conformational structures relevant to the free-volume distributions are also examined. It is found that both overall and internal motions have comparable contributions to local disorder and couple strongly with each other, and that the occurrence of kink defects has a higher probability than predicted from an independent-transition model. PMID:8241390
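    The two-component exponential form reported above can be written down directly. The sketch below evaluates it with the ordered-chain-region values quoted in the abstract (pre-exponential factors in Å⁻³, cavity sizes in Å³); the evaluation grid is an arbitrary choice.

```python
import numpy as np

# Two-component exponential free-volume distribution, using the
# ordered-chain-region values quoted in the abstract.
P1, P2 = 0.012, 0.00039   # pre-exponential factors, 1/Angstrom^3
V1, V2 = 3.4, 12.9        # characteristic cavity sizes, Angstrom^3

def free_volume_density(v):
    """Probability density of an effective free volume v (Angstrom^3)."""
    return P1 * np.exp(-v / V1) + P2 * np.exp(-v / V2)

v = np.linspace(0.0, 50.0, 501)
p = free_volume_density(v)
```

    With these values the second, larger-cavity component dominates the tail of the distribution for cavities beyond roughly 16 ų.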

  1. Reevaluation of tephra volumes for the 1982 eruption of El Chichón volcano, Mexico

    NASA Astrophysics Data System (ADS)

    Nathenson, M.; Fierstein, J.

    2012-12-01

    Reevaluation of tephra volumes for the 1982 eruption of El Chichón volcano, Mexico. Manuel Nathenson and Judy Fierstein, U.S. Geological Survey, 345 Middlefield Road MS-910, Menlo Park, CA 94025. In a recent numerical simulation of tephra transport and deposition for the 1982 eruption, Bonasia et al. (2012) used masses for the tephra layers (A-1, B, and C) based on the volume data of Carey and Sigurdsson (1986), calculated by the methodology of Rose et al. (1973). For reasons that are not clear, using the same methodology we obtained volumes for layers A-1 and B much smaller than those previously reported. For example, for layer A-1, Carey and Sigurdsson (1986) reported a volume of 0.60 km3, whereas we obtain a volume of 0.23 km3. Moreover, applying the more recent methodology of tephra-volume calculation (Pyle, 1989; Fierstein and Nathenson, 1992) and using the isopach maps in Carey and Sigurdsson (1986), we calculate a total tephra volume of 0.52 km3 (A-1, 0.135; B, 0.125; and C, 0.26 km3). In contrast, Carey and Sigurdsson (1986) report a much larger total volume of 2.19 km3. Such disagreement not only reflects the differing methodologies; we propose that the volumes calculated with the methodology of Pyle and of Fierstein and Nathenson, which fits straight lines on a plot of log thickness versus square root of area, better represent the actual fall deposits. After measuring the areas of the isomass contours for the HAZMAP and FALL3D simulations in Bonasia et al. (2012), we applied the Pyle and Fierstein-Nathenson methodology to calculate the tephra masses deposited on the ground. The masses from five of the simulations range from 70% to 110% of those reported by Carey and Sigurdsson (1986), whereas that for layer B in the HAZMAP calculation is 160%. In the Bonasia et al. (2012) study, the mass erupted by the volcano is a critical input used in the simulation to produce an ash cloud that deposits tephra on the ground. Masses on the ground (as calculated by us) for five of the simulations range from 20% to 46% of the masses used as simulation inputs, whereas that for layer B in the HAZMAP calculation is 74%. It is not clear why the percentages are so variable, nor why the output volumes are such small percentages of the input erupted mass. From our volume calculations, the masses on the ground from the simulations are factors of 2.3 to 10 times what was actually deposited. Given this finding from our reevaluation of volumes, the simulations appear to overestimate the hazards from eruptions of the sizes that occurred at El Chichón. Bonasia, R., A. Costa, A. Folch, G. Macedonio, and L. Capra, (2012), Numerical simulation of tephra transport and deposition of the 1982 El Chichón eruption and implications for hazard assessment, J. Volc. Geotherm. Res., 231-232, 39-49. Carey, S., and H. Sigurdsson, (1986), The 1982 eruptions of El Chichon volcano, Mexico: Observations and numerical modelling of tephra-fall distribution, Bull. Volcanol., 48, 127-141. Fierstein, J., and M. Nathenson, (1992), Another look at the calculation of fallout tephra volumes, Bull. Volcanol., 54, 156-167. Pyle, D.M., (1989), The thickness, volume and grainsize of tephra fall deposits, Bull. Volcanol., 51, 1-15. Rose, W.I., Jr., S. Bonis, R.E. Stoiber, M. Keller, and T. Bickford, (1973), Studies of volcanic ash from two recent Central American eruptions, Bull. Volcanol., 37, 338-364.
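    The Pyle / Fierstein and Nathenson calculation referred to above amounts to fitting a straight line on a plot of log thickness versus the square root of isopach area and integrating the resulting exponential thinning law, T(A) = T0·exp(-k·√A), which gives V = 2·T0/k². The sketch below uses made-up isopach data, not the El Chichón measurements.

```python
import numpy as np

# Sketch of the exponential-thinning tephra volume calculation:
# fit ln(thickness) vs sqrt(area), then V = 2 * T0 / k**2.
# The isopach data below are illustrative, not El Chichon data.

def tephra_volume(areas_km2, thicknesses_m):
    x = np.sqrt(np.asarray(areas_km2, dtype=float))            # sqrt(area), km
    lnT = np.log(np.asarray(thicknesses_m, dtype=float) / 1000.0)  # thickness, km
    slope, intercept = np.polyfit(x, lnT, 1)   # straight line on log plot
    k, T0 = -slope, np.exp(intercept)
    return 2.0 * T0 / k ** 2                   # volume, km^3

areas = [10.0, 100.0, 1000.0]   # hypothetical isopach areas, km^2
thick = [1.0, 0.5, 0.05]        # hypothetical thicknesses, m
volume_km3 = tephra_volume(areas, thick)
```

    For these made-up numbers the fitted volume comes out at about 0.25 km3; for exactly exponential data the routine recovers 2·T0/k² to machine precision.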

  2. Cross-flow turbines: physical and numerical model studies towards improved array simulations

    NASA Astrophysics Data System (ADS)

    Wosnik, M.; Bachant, P.

    2015-12-01

    Cross-flow, or vertical-axis, turbines show potential in marine hydrokinetic (MHK) and wind energy applications. As turbine designs mature, the research focus is shifting from individual devices towards improving turbine array layouts for maximizing overall power output, i.e., minimizing wake interference for axial-flow turbines, or taking advantage of constructive wake interaction for cross-flow turbines. Numerical simulations are generally better suited to explore the turbine array design parameter space, as physical model studies of large arrays at large model scale would be expensive. However, since the computing power available today is not sufficient to simulate the flow in and around large arrays of turbines with fully resolved turbine geometries, the turbines' interaction with the energy resource needs to be parameterized, or modeled. Most models in use today, e.g., the actuator disk, are not able to predict the unique wake structure generated by cross-flow turbines. Experiments were carried out using a high-resolution turbine test bed in a large cross-section tow tank, designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. To improve parameterization in array simulations, an actuator line model (ALM) was developed to provide a computationally feasible method for simulating full turbine arrays inside Navier-Stokes models. The ALM predicts turbine loading with the blade element method combined with sub-models for dynamic stall and flow curvature. The open-source software is written as an extension library for the OpenFOAM CFD package, which allows the ALM body force to be applied to its standard RANS and LES solvers. Turbine forcing is also applied to volume of fluid (VOF) models, e.g., for predicting free surface effects on submerged MHK devices. An additional sub-model is considered for injecting turbulence model scalar quantities based on actuator line element loading. Results are presented for the simulation of performance and wake dynamics of axial- and cross-flow turbines and compared with experiments and body-fitted mesh, blade-resolving CFD. Supported by NSF-CBET grant 1150797.

  3. SQUEEZE-E: The Optimal Solution for Molecular Simulations with Periodic Boundary Conditions.

    PubMed

    Wassenaar, Tsjerk A; de Vries, Sjoerd; Bonvin, Alexandre M J J; Bekker, Henk

    2012-10-09

    In molecular simulations of macromolecules, it is desirable to limit the amount of solvent in the system to avoid spending computational resources on uninteresting solvent-solvent interactions. As a consequence, periodic boundary conditions are commonly used, with a simulation box chosen as small as possible for a given minimal distance between images. Here, we describe how such a simulation cell can be set up for ensembles, taking into account a priori available or estimable information regarding conformational flexibility. Doing so ensures that any conformation present in the input ensemble will satisfy the distance criterion during the simulation, which helps avoid periodicity artifacts due to conformational changes. The method introduces three new approaches in computational geometry: (1) the derivation of an optimal packing of ensembles, for which the mathematical framework is described; (2) a new method for approximating the α-hull and the contact body for single bodies and ensembles, which is orders of magnitude faster than existing routines, allowing the calculation of packings of large ensembles and/or large bodies; and (3) a routine for searching a combination of three vectors on a discretized contact body forming a reduced base for a lattice with minimal cell volume. The new algorithms reduce the time required to calculate packings of single bodies from minutes or hours to seconds. The use and efficacy of the method is demonstrated for ensembles obtained from NMR, MD simulations, and elastic network modeling. An implementation is available online at http://haddock.chem.uu.nl/services/SQUEEZE/ and as an option for running simulations through the weNMR GRID MD server at http://haddock.science.uu.nl/enmr/services/GROMACS/main.php.

  4. Computer Program (HEVSIM) for Heavy Duty Vehicle Fuel Economy and Performance Simulation. Volume III.

    DOT National Transportation Integrated Search

    1981-09-01

    Volume III is the third and last volume of a three volume document describing the computer program HEVSIM. This volume includes appendices which list the HEVSIM program, sample part data, some typical outputs and updated nomenclature.

  5. Drizzle formation in stratocumulus clouds: Effects of turbulent mixing

    DOE PAGES

    Magaritz-Ronen, L.; Pinsky, M.; Khain, A.

    2016-02-17

    The mechanism of drizzle formation in shallow stratocumulus clouds and the effect of turbulent mixing on this process are investigated. A Lagrangian–Eulerian model of the cloud-topped boundary layer is used to simulate the cloud measured during flight RF07 of the DYCOMS-II field experiment. The model contains ~2000 air parcels that are advected in a turbulence-like velocity field. In the model, all microphysical processes are described for each Lagrangian air volume, and turbulent mixing between the parcels is also taken into account. It was found that the first large drops form in air volumes that are closest to adiabatic and characterized by high humidity, extended residence near cloud top, and maximum values of liquid water content, allowing the formation of drops as a result of efficient collisions. The first large drops form near cloud top and initiate drizzle formation in the cloud. Drizzle develops only when turbulent mixing of parcels is included in the model. Without mixing, the cloud structure is extremely inhomogeneous and the few large drops that do form in the cloud evaporate during their sedimentation. Lastly, it was found that turbulent mixing can delay the process of drizzle initiation but is essential for the further development of drizzle in the cloud.

  6. Drizzle formation in stratocumulus clouds: Effects of turbulent mixing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Magaritz-Ronen, L.; Pinsky, M.; Khain, A.

    The mechanism of drizzle formation in shallow stratocumulus clouds and the effect of turbulent mixing on this process are investigated. A Lagrangian–Eulerian model of the cloud-topped boundary layer is used to simulate the cloud measured during flight RF07 of the DYCOMS-II field experiment. The model contains ~2000 air parcels that are advected in a turbulence-like velocity field. In the model, all microphysical processes are described for each Lagrangian air volume, and turbulent mixing between the parcels is also taken into account. It was found that the first large drops form in air volumes that are closest to adiabatic and characterized by high humidity, extended residence near cloud top, and maximum values of liquid water content, allowing the formation of drops as a result of efficient collisions. The first large drops form near cloud top and initiate drizzle formation in the cloud. Drizzle develops only when turbulent mixing of parcels is included in the model. Without mixing, the cloud structure is extremely inhomogeneous and the few large drops that do form in the cloud evaporate during their sedimentation. Lastly, it was found that turbulent mixing can delay the process of drizzle initiation but is essential for the further development of drizzle in the cloud.

  7. Combined Recipe for Clinical Target Volume and Planning Target Volume Margins

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stroom, Joep, E-mail: joep.stroom@fundacaochampalimaud.pt; Gilhuijs, Kenneth; Vieira, Sandra

    2014-03-01

    Purpose: To develop a combined recipe for clinical target volume (CTV) and planning target volume (PTV) margins. Methods and Materials: A widely accepted PTV margin recipe is M_geo = aΣ_geo + bσ_geo, with Σ_geo and σ_geo standard deviations (SDs) representing systematic and random geometric uncertainties, respectively. On the basis of histopathology data of breast and lung tumors, we suggest describing the distribution of microscopic islets around the gross tumor volume (GTV) by a half-Gaussian with SD Σ_micro, yielding as a possible CTV margin recipe: M_micro = f(N_i) × Σ_micro, with N_i the average number of microscopic islets per patient. To determine f(N_i), a computer model was developed that simulated radiation therapy of a spherical GTV with an isotropic distribution of microscopic disease in a large group of virtual patients. The minimal margin that yielded D_min < 95% in at most 10% of patients was calculated for various Σ_micro and N_i. Because Σ_micro is independent of Σ_geo, we propose they should be added quadratically, yielding the combined GTV-to-PTV margin recipe: M_GTV-PTV = √([aΣ_geo]² + [f(N_i)Σ_micro]²) + bσ_geo. This was validated by the computer model through numerous simultaneous simulations of microscopic and geometric uncertainties. Results: The margin factor f(N_i) in a relevant range of Σ_micro and N_i can be given by f(N_i) = 1.4 + 0.8log(N_i). Filling in the other factors found in our simulations (a = 2.1 and b = 0.8) yields the combined recipe: M_GTV-PTV = √((2.1Σ_geo)² + ([1.4 + 0.8log(N_i)] × Σ_micro)²) + 0.8σ_geo. The average margin difference between the simultaneous simulations and the above recipe was 0.2 ± 0.8 mm (1 SD). Calculating M_geo and M_micro separately and adding them linearly overestimated PTVs by on average 5 mm. Margin recipes based on tumor control probability (TCP) instead of D_min criteria yielded similar results. Conclusions: A general recipe for GTV-to-PTV margins is proposed, which shows that CTV and PTV margins should be added in quadrature instead of linearly.
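    As a worked illustration, the final recipe can be coded directly. The sketch below assumes the logarithm in the islet factor is base-10 and that all inputs share one length unit (e.g. mm); it is a transcription of the quoted formula, not the authors' software.

```python
import math

def gtv_to_ptv_margin(big_sigma_geo, sigma_geo, big_sigma_micro, n_islets):
    """Combined GTV-to-PTV margin recipe (all lengths in one unit, e.g. mm):
    sqrt((2.1*Sigma_geo)^2 + ((1.4 + 0.8*log10(N_i))*Sigma_micro)^2) + 0.8*sigma_geo
    """
    geo = 2.1 * big_sigma_geo                               # systematic geometric term
    micro = (1.4 + 0.8 * math.log10(n_islets)) * big_sigma_micro  # microscopic term
    return math.sqrt(geo ** 2 + micro ** 2) + 0.8 * sigma_geo     # random term added linearly
```

    With no microscopic spread (Σ_micro = 0) the function reduces to the familiar geometric recipe 2.1Σ + 0.8σ, and for Σ_micro > 0 the quadratic combination stays below the linear sum of the separate margins, reflecting the paper's conclusion.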

  8. Direct numerical simulation of variable surface tension flows using a Volume-of-Fluid method

    NASA Astrophysics Data System (ADS)

    Seric, Ivana; Afkhami, Shahriar; Kondic, Lou

    2018-01-01

    We develop a general methodology for the inclusion of a variable surface tension coefficient into a Volume-of-Fluid based Navier-Stokes solver. This new numerical model provides a robust and accurate method for computing the surface gradients directly by finding the tangent directions on the interface using height functions. The implementation is applicable to both temperature- and concentration-dependent surface tension coefficients, along with setups involving a large jump in the temperature between the fluid and its surroundings, as well as situations where the concentration should be strictly confined to the fluid domain, such as the mixing of fluids with different surface tension coefficients. We demonstrate the applicability of our method to the thermocapillary migration of bubbles and the coalescence of drops with different surface tension coefficients.

  9. An analysis of the nucleon spectrum from lattice partially-quenched QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Armour, W.; Allton, C. R.; Leinweber, Derek B.

    2010-09-01

    The chiral extrapolation of the nucleon mass, Mn, is investigated using data coming from 2-flavour partially-quenched lattice simulations. The leading one-loop corrections to the nucleon mass are derived for partially-quenched QCD. A large sample of lattice results from the CP-PACS Collaboration is analysed, with explicit corrections for finite lattice spacing artifacts. The extrapolation is studied using finite-range regularised chiral perturbation theory. The analysis also provides a quantitative estimate of the leading finite-volume corrections. It is found that the discretisation, finite-volume and partial-quenching effects can all be very well described in this framework, producing an extrapolated value of Mn in agreement with experiment. This procedure is also compared with extrapolations based on polynomial forms, where the results are less encouraging.

  10. Jammed elastic shells - a 3D experimental soft frictionless granular system

    NASA Astrophysics Data System (ADS)

    Jose, Jissy; Blab, Gerhard A.; van Blaaderen, Alfons; Imhof, Arnout

    2015-03-01

    We present a new experimental system of monodisperse, soft, frictionless, fluorescently labelled elastic shells for the characterization of structure, universal scaling laws and force networks in 3D jammed matter. The interesting fact about these elastic shells is that they can reversibly deform and therefore serve as sensors of local stress in jammed matter. Similar to other soft particles, like emulsion droplets and bubbles in foam, the shells can be packed to volume fractions close to unity, which allows us to characterize the contact force distribution and universal scaling laws as a function of volume fraction, and to compare them with theoretical predictions and numerical simulations. However, our shells, unlike other soft particles, deform rather differently at large stresses. They deform without conserving their inner volume, by forming dimples at contact regions. At each contact one of the shells buckled with a dimple and the other remained spherical, closely resembling overlapping spheres. We conducted 3D quantitative analysis using confocal microscopy and image analysis routines specially developed for these particles. In addition, we analysed the randomness of the process of dimpling, which was found to be volume-fraction dependent.

  11. Surface-initiated phase transition in solid hydrogen under the high-pressure compression

    NASA Astrophysics Data System (ADS)

    Lei, Haile; Lin, Wei; Wang, Kai; Li, Xibo

    2018-03-01

    Large-scale molecular dynamics simulations have been performed to understand the microscopic mechanism governing the phase transition of solid hydrogen under high-pressure compression. The results demonstrate that the face-centered-cubic-to-hexagonal-close-packed phase transition is initiated at the surfaces, at a much lower pressure than in the volume, and then extends gradually from the surface into the volume of the solid hydrogen. The infrared spectra from the surface are shown to exhibit a different pressure-dependent behavior from those of the volume during high-pressure compression. It is thus deduced that the weakening intramolecular H-H bonds are always accompanied by hardening surface phonons, through strengthening of the intermolecular H2-H2 coupling at the surfaces relative to the counterparts in the volume at high pressures. This is the opposite of conventional atomic crystals, in which surface phonons soften. High-pressure compression is further predicted to force atoms or molecules to spray out of the surface to relieve the pressure. These results provide a glimpse of the structural properties of solid hydrogen at the early stage of high-pressure compression.

  12. A Marine Aerosol Reference Tank system as a breaking wave analogue for the production of foam and sea-spray aerosols

    NASA Astrophysics Data System (ADS)

    Stokes, M. D.; Deane, G. B.; Prather, K.; Bertram, T. H.; Ruppel, M. J.; Ryder, O. S.; Brady, J. M.; Zhao, D.

    2013-04-01

    In order to better understand the processes governing the production of marine aerosols, a repeatable, controlled method for their generation is required. The Marine Aerosol Reference Tank (MART) has been designed to closely approximate oceanic conditions by producing an evolving bubble plume and surface foam patch. The tank utilizes an intermittently plunging sheet of water and a large-volume reservoir to simulate turbulence, plume, and foam formation, and the water flow is monitored volumetrically and acoustically to ensure the repeatability of conditions.

  13. Research on influence factor about the dynamic characteristic of armored vehicle hydraulic-driven fan system

    NASA Astrophysics Data System (ADS)

    Chao, Zhiqiang; Mao, Feiyue; Liu, Xiangbo; Li, Huaying; Han, Shousong

    2017-01-01

    In view of the high power demand of armored vehicle cooling systems and the need for precise fan-speed control and energy saving, this paper describes the basic composition and principle of the hydraulic-driven fan system and establishes a mathematical model of the system. Through simulation analysis of different parameters, such as motor displacement and the working volume of the fan system, the influence of performance parameters on the dynamic characteristics of the hydraulic-driven fan system is obtained, which can provide theoretical guidance for system design optimization.

  14. Application of evolutionary games to modeling carcinogenesis.

    PubMed

    Swierniak, Andrzej; Krzeslak, Michal

    2013-06-01

    We review a quite large volume of literature concerning mathematical modelling of processes related to carcinogenesis and the growth of cancer cell populations based on the theory of evolutionary games. This review, although partly idiosyncratic, covers such major areas of cancer-related phenomena as production of cytotoxins, avoidance of apoptosis, production of growth factors, motility and invasion, and intra- and extracellular signaling. We discuss the results of other authors and append to them some additional results of our own simulations dealing with the possible dynamics and/or spatial distribution of the processes discussed.

  15. LACIE performance predictor final operational capability program description, volume 3

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The requirements and processing logic for the LACIE Error Model program (LEM) are described. This program is an integral part of the Large Area Crop Inventory Experiment (LACIE) system. LEM is that portion of the LPP (LACIE Performance Predictor) which simulates the sample segment classification, strata yield estimation, and production aggregation. LEM controls repetitive Monte Carlo trials based on input error distributions to obtain statistical estimates of the wheat area, yield, and production at different levels of aggregation. LEM interfaces with the rest of the LPP through a set of data files.
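The repetitive Monte Carlo structure described above can be sketched as follows; the strata values, error magnitudes, and trial count are hypothetical stand-ins for illustration, not LACIE data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-stratum inputs: wheat area (ha) and yield (t/ha), with
# relative error standard deviations for the estimation steps.
strata = [
    {"area": 120_000.0, "yield": 2.1, "area_err": 0.08, "yield_err": 0.12},
    {"area": 340_000.0, "yield": 1.8, "area_err": 0.06, "yield_err": 0.10},
    {"area": 210_000.0, "yield": 2.4, "area_err": 0.09, "yield_err": 0.11},
]

def one_trial():
    """One Monte Carlo trial: perturb each stratum's area and yield by its
    input error distribution, then aggregate production = area * yield."""
    total_area = total_prod = 0.0
    for s in strata:
        a = s["area"] * rng.normal(1.0, s["area_err"])
        y = s["yield"] * rng.normal(1.0, s["yield_err"])
        total_area += a
        total_prod += a * y
    return total_area, total_prod

trials = np.array([one_trial() for _ in range(5000)])
area_mean, prod_mean = trials.mean(axis=0)
area_cv = trials[:, 0].std() / area_mean   # coefficient of variation
print(f"area = {area_mean:,.0f} ha (CV {area_cv:.1%})")
print(f"production = {prod_mean:,.0f} t")
```

Repeating the trial many times yields not just the aggregate estimates but their spread, which is the point of the error-model approach.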

  16. Large area MEMS based ultrasound device for cancer detection

    NASA Astrophysics Data System (ADS)

    Wodnicki, Robert; Thomenius, Kai; Ming Hooi, Fong; Sinha, Sumedha P.; Carson, Paul L.; Lin, Der-Song; Zhuang, Xuefeng; Khuri-Yakub, Pierre; Woychik, Charles

    2011-08-01

We present image results obtained using a prototype ultrasound array that demonstrates the fundamental architecture for a large area MEMS based ultrasound device for detection of breast cancer. The prototype array consists of a tiling of capacitive Micromachined Ultrasound Transducers (cMUTs) that have been flip-chip attached to a rigid organic substrate. The pitch of the cMUT elements is 185 μm and the operating frequency is nominally 9 MHz. The spatial resolution of the new probe is comparable to that of production PZT probes; however, the sensitivity is reduced by conditions that should be correctable. Simulated opposed-view image registration and speed-of-sound volume reconstruction results for ultrasound in the mammographic geometry are also presented.

  17. Parallel Visualization of Large-Scale Aerodynamics Calculations: A Case Study on the Cray T3E

    NASA Technical Reports Server (NTRS)

    Ma, Kwan-Liu; Crockett, Thomas W.

    1999-01-01

    This paper reports the performance of a parallel volume rendering algorithm for visualizing a large-scale, unstructured-grid dataset produced by a three-dimensional aerodynamics simulation. This dataset, containing over 18 million tetrahedra, allows us to extend our performance results to a problem which is more than 30 times larger than the one we examined previously. This high resolution dataset also allows us to see fine, three-dimensional features in the flow field. All our tests were performed on the Silicon Graphics Inc. (SGI)/Cray T3E operated by NASA's Goddard Space Flight Center. Using 511 processors, a rendering rate of almost 9 million tetrahedra/second was achieved with a parallel overhead of 26%.

  18. Risk for intracranial pressure increase related to enclosed air in post-craniotomy patients during air ambulance transport: a retrospective cohort study with simulation.

    PubMed

    Brändström, Helge; Sundelin, Anna; Hoseason, Daniela; Sundström, Nina; Birgander, Richard; Johansson, Göran; Winsö, Ola; Koskinen, Lars-Owe; Haney, Michael

    2017-05-12

Post-craniotomy intracranial air can be present in patients scheduled for air ambulance transport to their home hospital. We aimed to assess risk for in-flight intracranial pressure (ICP) increases related to observed intracranial air volumes, hypothetical sea level pre-transport ICP, and different potential flight levels and cabin pressures. A cohort of consecutive subdural hematoma evacuation patients from one University Medical Centre was assessed with post-operative intracranial air volume measurements by computed tomography. Intracranial pressure changes related to estimated intracranial air volume effects of changing atmospheric pressure (simulating flight and cabin pressure changes up to 8000 ft) were simulated using an established model for intracranial pressure and volume relations. Approximately one third of the cohort had post-operative intracranial air. Of these, approximately one third had intracranial air volumes less than 11 ml. The simulation estimated that the expected changes in intracranial pressure during 'flight' would not result in intracranial hypertension. For intracranial air volumes above 11 ml, the simulation suggested that intracranial hypertension could develop in flight related to cabin pressure drop. Depending on the pre-flight intracranial pressure and air volume, this could occur quite early during the ascent phase of the flight profile. These findings support the need for radiographic verification of the presence or absence of intracranial air after craniotomy for patients planned for long-distance air transport. Very small amounts of air are clinically inconsequential. Otherwise, air transport with maintained ground-level cabin pressure should be a priority for these patients.
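The pressure-driven expansion of trapped air that underlies this simulation can be illustrated with Boyle's law and the standard atmosphere; the authors' intracranial pressure-volume model is not reproduced here, and the numbers below are illustrative only:

```python
P_SEA = 101.325  # kPa, standard sea-level pressure

def cabin_pressure_kpa(alt_ft):
    """ISA barometric formula (troposphere) evaluated at cabin altitude."""
    h_m = alt_ft * 0.3048
    return P_SEA * (1.0 - 2.25577e-5 * h_m) ** 5.25588

def expanded_air_volume_ml(v0_ml, alt_ft):
    """Boyle's law: an enclosed air pocket expands as cabin pressure drops
    (assuming constant temperature and a freely expanding pocket)."""
    return v0_ml * P_SEA / cabin_pressure_kpa(alt_ft)

v0 = 11.0  # ml, the threshold volume discussed in the study
v8000 = expanded_air_volume_ml(v0, 8000)
print(f"{v0:.0f} ml at sea level -> {v8000:.1f} ml at 8000 ft cabin altitude")
```

At an 8000 ft cabin altitude the ambient pressure falls to roughly 75 kPa, so an 11 ml pocket would tend toward roughly 15 ml; whether that produces intracranial hypertension depends on the patient's pressure-volume compliance, which is what the study's model addresses.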

  19. Rapid and quantitative chemical exchange saturation transfer (CEST) imaging with magnetic resonance fingerprinting (MRF).

    PubMed

    Cohen, Ouri; Huang, Shuning; McMahon, Michael T; Rosen, Matthew S; Farrar, Christian T

    2018-05-13

To develop a fast magnetic resonance fingerprinting (MRF) method for quantitative chemical exchange saturation transfer (CEST) imaging. We implemented a CEST-MRF method to quantify the chemical exchange rate and volume fraction of the Nα-amine protons of L-arginine (L-Arg) phantoms and the amide and semi-solid exchangeable protons of in vivo rat brain tissue. L-Arg phantoms were made with different concentrations (25-100 mM) and pH (pH 4-6). The MRF acquisition schedule varied the saturation power randomly for 30 iterations (phantom: 0-6 μT; in vivo: 0-4 μT) with a total acquisition time of ≤2 min. The signal trajectories were pattern-matched to a large dictionary of signal trajectories simulated using the Bloch-McConnell equations for different combinations of exchange rate, exchangeable proton volume fraction, and water T1 and T2 relaxation times. The chemical exchange rates of the Nα-amine protons of L-Arg were significantly (P < 0.0001) correlated with the rates measured with the quantitation of exchange using saturation power method. Similarly, the L-Arg concentrations determined using MRF were significantly (P < 0.0001) correlated with the known concentrations. The pH dependence of the exchange rate was well fit (R² = 0.9186) by a base-catalyzed exchange model. The amide proton exchange rate measured in rat brain cortex (34.8 ± 11.7 Hz) was in good agreement with that measured previously with the water exchange spectroscopy method (28.6 ± 7.4 Hz). The semi-solid proton volume fraction was elevated in white (12.2 ± 1.7%) compared to gray (8.1 ± 1.1%) matter brain regions, in agreement with previous magnetization transfer studies. CEST-MRF provides a method for fast, quantitative CEST imaging. © 2018 International Society for Magnetic Resonance in Medicine.
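The dictionary pattern-matching step can be sketched with a toy signal model; `toy_trajectory` below is a hypothetical stand-in for the Bloch-McConnell simulation, and the parameter grids are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameter grid: one dictionary entry per (exchange rate,
# exchangeable proton volume fraction) pair; 30 saturation-power steps.
rates = np.linspace(50.0, 500.0, 46)       # Hz, 10 Hz spacing
fracs = np.linspace(0.001, 0.01, 10)       # volume fraction, 0.001 spacing
params = np.array([(k, f) for k in rates for f in fracs])

def toy_trajectory(k, f, n=30):
    """Stand-in for a Bloch-McConnell simulation: any map from parameters
    to a length-n signal trajectory suffices to illustrate the matching."""
    t = np.arange(n)
    return np.exp(-f * k * t / 50.0) + 0.1 * np.cos(t * k / 500.0)

D = np.array([toy_trajectory(k, f) for k, f in params])
D_norm = D / np.linalg.norm(D, axis=1, keepdims=True)

def match(signal):
    """Dot-product pattern matching: return the parameters of the dictionary
    entry whose normalized trajectory best correlates with the signal."""
    s = signal / np.linalg.norm(signal)
    return params[np.argmax(D_norm @ s)]

measured = toy_trajectory(250.0, 0.004) + rng.normal(0.0, 0.005, 30)
k_est, f_est = match(measured)
print(f"estimated exchange rate {k_est:.0f} Hz, volume fraction {f_est:.4f}")
```

The matching is a single matrix-vector product per voxel, which is what makes MRF-style quantification fast once the dictionary has been precomputed.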

  20. Individual treatment of hotel and restaurant waste water in rural areas.

    PubMed

    Van Hulle, S W H; Ghyselbrecht, N; Vermeiren, T J L; Depuydt, V; Boeckaert, C

    2012-01-01

About 25 hotels, restaurants and pubs in the rural community of Heuvelland are situated in the area designated for individual water treatment. In order to meet the legislation by the end of 2015, each business needs to install an individual waste water treatment system (IWTS). To study this situation, three catering businesses were selected for further research. The aim of the study was to quantify the effluent quality and to assess IWTS performance for these catering businesses. First of all, the influence of discharging untreated waste water on the receiving surface water was examined. The results showed a decrease in water quality after the discharge point at every business. With the collected data, simulations with the software WEST were performed. With this software, two types of IWTSs with different (buffer) volumes were modelled and tested for each catering business. The first type is a completely mixed activated sludge reactor and the second type is a submerged aerobic fixed-bed reactor. The results of these simulations demonstrate that purification with an IWTS is possible if the capacity is sufficiently large, an adequate buffer volume is installed, and regular maintenance is performed.

  1. Quasi-automatic 3D finite element model generation for individual single-rooted teeth and periodontal ligament.

    PubMed

    Clement, R; Schneider, J; Brambs, H-J; Wunderlich, A; Geiger, M; Sander, F G

    2004-02-01

The paper demonstrates how to generate an individual 3D volume model of a human single-rooted tooth using an automatic workflow. It can be implemented into finite element simulation. In several computational steps, computed tomography data of patients are used to obtain the global coordinates of the tooth's surface. First, the large set of geometric data is processed with several self-developed algorithms to achieve a significant reduction, the most important task being to preserve the geometrical information of the real tooth. The second main part includes the creation of the volume model for tooth and periodontal ligament (PDL). This is realized with a continuous free-form surface of the tooth based on the remaining points. Generating such irregular objects for numerical use in biomechanical research normally requires enormous manual effort and time. The finite element mesh of the tooth, consisting of hexahedral elements, is composed of different materials: dentin, PDL and surrounding alveolar bone. It is capable of simulating tooth movement in a finite element analysis and may give valuable information for a clinical approach without the restrictions of tetrahedral elements. The mesh generator of the FE software ANSYS executed the mesh process for hexahedral elements successfully.

  2. Numerical Investigation of the Macroscopic Mechanical Behavior of NiTi-Hybrid Composites Subjected to Static Load-Unload-Reload Path

    NASA Astrophysics Data System (ADS)

    Taheri-Behrooz, Fathollah; Kiani, Ali

    2017-04-01

Shape memory alloys (SMAs) are shape memory materials that recover large deformations and return to their primary shape when the temperature is raised. In the current research, the effect of embedding SMA wires on the macroscopic mechanical behavior of glass-epoxy composites is investigated through finite element simulations. A perfect interface between the SMA wires and the host composite is assumed. Effects of various parameters such as SMA wire volume fraction, SMA wire pre-strain, and temperature are investigated during loading-unloading and reloading steps by employing ANSYS software. In order to quantify the extent of induced compressive stress in the host composite and residual tensile stress in the SMA wires, a theoretical approach is presented. Finally, it was shown that smart structures fabricated using composite layers and pre-strained SMA wires exhibited overall stiffness reduction at both ambient and elevated temperatures, which increased with SMA volume fraction. Also, the induced compressive stress on the host composite was increased remarkably using 4% pre-strained SMA wires at elevated temperature. Results obtained by FE simulations were in good correlation with rule-of-mixture predictions and available experimental data in the literature.

  3. Synthetic calibration of a Rainfall-Runoff Model

    USGS Publications Warehouse

    Thompson, David B.; Westphal, Jerome A.; ,

    1990-01-01

A method for synthetically calibrating storm-mode parameters for the U.S. Geological Survey's Precipitation-Runoff Modeling System is described. Synthetic calibration is accomplished by adjusting storm-mode parameters to minimize deviations between the pseudo-probability distributions represented by regional regression equations and actual frequency distributions fitted to model-generated peak discharge and runoff volume. Results of modeling storm hydrographs using synthetic and analytic storm-mode parameters are presented. Comparisons are made between model results from both parameter sets and between model results and observed hydrographs. Although mean storm runoff is reproducible to within about 26 percent of the observed mean storm runoff for five or six parameter sets, runoff from individual storms is subject to large disparities. Predicted storm runoff volume ranged from 2 percent to 217 percent of commensurate observed values. Furthermore, simulation of peak discharges was poor. Predicted peak discharges from individual storm events ranged from 2 percent to 229 percent of commensurate observed values. The model was incapable of satisfactorily executing storm-mode simulations for the study watersheds. This result is not considered a particular fault of the model, but instead is indicative of deficiencies in similar conceptual models.

  4. Evaluation of simulation-based scatter correction for 3-D PET cardiac imaging

    NASA Astrophysics Data System (ADS)

    Watson, C. C.; Newport, D.; Casey, M. E.; deKemp, R. A.; Beanlands, R. S.; Schmand, M.

    1997-02-01

    Quantitative imaging of the human thorax poses one of the most difficult challenges for three-dimensional (3-D) (septaless) positron emission tomography (PET), due to the strong attenuation of the annihilation radiation and the large contribution of scattered photons to the data. In [/sup 18/F] fluorodeoxyglucose (FDG) studies of the heart with the patient's arms in the field of view, the contribution of scattered events can exceed 50% of the total detected coincidences. Accurate correction for this scatter component is necessary for meaningful quantitative image analysis and tracer kinetic modeling. For this reason, the authors have implemented a single-scatter simulation technique for scatter correction in positron volume imaging. Here, they describe this algorithm and present scatter correction results from human and chest phantom studies.

  5. On the zero-crossing of the three-gluon Green's function from lattice simulations

    NASA Astrophysics Data System (ADS)

    Athenodorou, Andreas; Boucaud, Philippe; de Soto, Feliciano; Rodríguez-Quintero, José; Zafeiropoulos, Savvas

    2018-03-01

We report on some efforts recently made in order to gain a better understanding of some IR properties of the 3-point gluon Green's function by exploiting results from large-volume quenched lattice simulations. These lattice results have been obtained by using both the tree-level Symanzik and the standard Wilson action, with the aim of assessing the possible impact of effects presumably resulting from a particular choice for the discretization of the action. The main resulting feature is the existence of a negative logarithmic divergence at zero momentum, which pulls the 3-gluon form factors down at low momenta and, consequently, yields a zero-crossing at a given deep IR momentum. The results can be correctly explained by analyzing the relevant Dyson-Schwinger equations and appropriate truncation schemes.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, Mark J.; Saleh, Omar A.

We calculated the force-extension curves for a flexible polyelectrolyte chain with varying charge separations by performing Monte Carlo simulations of a 5000-bead chain using a screened Coulomb interaction. At all charge separations, the force-extension curves exhibit a Pincus-like scaling regime at intermediate forces and a logarithmic regime at large forces. As the charge separation increases, the Pincus regime shifts to a larger range of forces and the logarithmic regime starts at larger forces. We also found that the force-extension curve for the corresponding neutral chain has a logarithmic regime. Decreasing the diameter of the bead in the neutral chain simulations removed the logarithmic regime, and the force-extension curve tends to the freely jointed chain limit. In conclusion, this result shows that only excluded volume is required for the high-force logarithmic regime to occur.
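For reference, the ideal freely jointed chain limit mentioned above (no excluded volume, no charge) has the closed-form extension law x/L = coth(Fb/kT) − kT/(Fb); a minimal sketch, with an assumed 1 nm segment length:

```python
import numpy as np

def fjc_extension(force_pN, b_nm=1.0, T=300.0):
    """Ideal freely jointed chain: fractional extension x/L as a function
    of force, x/L = coth(Fb/kT) - kT/(Fb). No excluded volume, no charge."""
    kT = 1.380649e-23 * T                       # J
    u = force_pN * 1e-12 * b_nm * 1e-9 / kT     # dimensionless Fb/kT
    return 1.0 / np.tanh(u) - 1.0 / u

forces = np.array([0.1, 1.0, 10.0, 100.0])      # pN
for f, x in zip(forces, fjc_extension(forces)):
    print(f"F = {f:6.1f} pN -> x/L = {x:.3f}")
```

At low force the FJC is linear (x/L ≈ Fb/3kT) and at high force it saturates as 1 − kT/(Fb); the logarithmic high-force regime reported in the paper is precisely what this ideal law does not produce without excluded volume.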

  7. Exploring Model Assumptions Through Three Dimensional Mixing Simulations Using a High-order Hydro Option in the Ares Code

    NASA Astrophysics Data System (ADS)

    White, Justin; Olson, Britton; Morgan, Brandon; McFarland, Jacob; Lawrence Livermore National Laboratory Team; University of Missouri-Columbia Team

    2015-11-01

This work presents results from a large eddy simulation of a high Reynolds number Rayleigh-Taylor instability and Richtmyer-Meshkov instability. A tenth-order compact differencing scheme on a fixed Eulerian mesh is utilized within the Ares code developed at Lawrence Livermore National Laboratory (LLNL). We explore the self-similar limit of the mixing layer growth in order to evaluate the k-L-a Reynolds Averaged Navier Stokes (RANS) model (Morgan and Wickett, Phys. Rev. E, 2015). Furthermore, profiles of turbulent kinetic energy, turbulent length scale, mass flux velocity, and density-specific-volume correlation are extracted in order to aid the creation of a high-fidelity LES data set for RANS modeling. Prepared by LLNL under Contract DE-AC52-07NA27344.

  8. Relative contributions of set-asides and tree retention to the long-term availability of key forest biodiversity structures at the landscape scale.

    PubMed

    Roberge, Jean-Michel; Lämås, Tomas; Lundmark, Tomas; Ranius, Thomas; Felton, Adam; Nordin, Annika

    2015-05-01

    Over previous decades new environmental measures have been implemented in forestry. In Fennoscandia, forest management practices were modified to set aside conservation areas and to retain trees at final felling. In this study we simulated the long-term effects of set-aside establishment and tree retention practices on the future availability of large trees and dead wood, two forest structures of documented importance to biodiversity conservation. Using a forest decision support system (Heureka), we projected the amounts of these structures over 200 years in two managed north Swedish landscapes, under management scenarios with and without set-asides and tree retention. In line with common best practice, we simulated set-asides covering 5% of the productive area with priority to older stands, as well as ∼5% green-tree retention (solitary trees and forest patches) including high-stump creation at final felling. We found that only tree retention contributed to substantial increases in the future density of large (DBH ≥35 cm) deciduous trees, while both measures made significant contributions to the availability of large conifers. It took more than half a century to observe stronger increases in the densities of large deciduous trees as an effect of tree retention. The mean landscape-scale volumes of hard dead wood fluctuated widely, but the conservation measures yielded values which were, on average over the entire simulation period, about 2.5 times as high as for scenarios without these measures. While the density of large conifers increased with time in the landscape initially dominated by younger forest, best practice conservation measures did not avert a long-term decrease in large conifer density in the landscape initially comprised of more old forest. Our results highlight the needs to adopt a long temporal perspective and to consider initial landscape conditions when evaluating the large-scale effects of conservation measures on forest biodiversity. 

  9. Railroads and the Environment : Estimation of Fuel Consumption in Rail Transportation : Volume 3. Comparison of Computer Simulations with Field Measurements

    DOT National Transportation Integrated Search

    1978-09-01

    This report documents comparisons between extensive rail freight service measurements (previously presented in Volume II) and simulations of the same operations using a sophisticated train performance calculator computer program. The comparisons cove...

  10. Distribution and radiative forcing of Asian dust and anthropogenic aerosols from East Asia simulated by SPRINTARS

    NASA Astrophysics Data System (ADS)

    Takemura, T.; Nakajima, T.; Uno, I.

    2002-12-01

A three-dimensional aerosol transport-radiation model, SPRINTARS (Spectral Radiation-Transport Model for Aerosol Species), has been developed based on an atmospheric general circulation model of the Center for Climate System Research, University of Tokyo/National Institute for Environmental Studies, Japan to research the effects of aerosols on the climate system and atmospheric environment. SPRINTARS successfully simulates the long-range transport of the large-scale Asian dust storms from East Asia to North America by crossing the North Pacific Ocean in springtime 2001 and 2002. It is found from the calculated dust optical thickness that 10 to 20% of Asian dust around Japan reached North America. The simulation also reveals the importance of anthropogenic aerosols, which are carbonaceous and sulfate aerosols emitted from the industrialized areas in the East Asian continent, to air turbidity during the large-scale Asian dust storms. The simulated results are compared with a large volume of observation data on aerosol characteristics over East Asia in the spring of 2001, acquired by the intensive observation campaigns of ACE-Asia (Asian Pacific Regional Aerosol Characterization Experiment) and APEX (Asian Atmospheric Particulate Environmental Change Studies). The comparisons are carried out not only for aerosol concentrations but also for aerosol optical properties, such as optical thickness, Angstrom exponent (a size index calculated from the log-slope of the optical thickness between two wavelengths), and single scattering albedo. The consistency of the Angstrom exponent between the simulation and observations indicates that the ratio of anthropogenic aerosols to Asian dust is reasonably simulated, which supports the simulation's suggestion that anthropogenic aerosols contribute substantially to air turbidity during the large-scale Asian dust storms. SPRINTARS simultaneously calculates the aerosol direct and indirect radiative forcings.
The direct radiative forcing of Asian dust at the tropopause is negative over the ocean but positive over deserts, snow, and sea ice under clear-sky conditions. The simulation also shows that it depends not only on aerosol mass concentrations but also on the vertical profiles of aerosols and cloud water.

  11. Isolating Added Mass Load Components of CPAS Main Clusters

    NASA Technical Reports Server (NTRS)

    Ray, Eric S.

    2017-01-01

The current simulation for the Capsule Parachute Assembly System (CPAS) lacks fidelity in representing added mass for the 116 ft Do ringsail Main parachute. The availability of 3-D models of inflating Main canopies allowed for better estimation of the enclosed air volume as a function of time. This was combined with trajectory state information to estimate the components making up measured axial loads. A proof-of-concept for an alternate simulation algorithm was developed based on enclosed volume as the primary independent variable rather than drag area growth. Databases of volume growth and parachute drag area vs. volume were developed for several flight tests. Other state information was read directly from test data, rather than numerically propagated. The resulting simulated peak loads were close in timing and magnitude to the measured loads data. However, results are very sensitive to data curve fitting and may not be suitable for Monte Carlo simulations. It was assumed that apparent mass was either negligible or a small fraction of enclosed mass, with little difference in results.

  12. Perturbative expansions from Monte Carlo simulations at weak coupling: Wilson loops and the static-quark self-energy

    NASA Astrophysics Data System (ADS)

    Trottier, H. D.; Shakespeare, N. H.; Lepage, G. P.; MacKenzie, P. B.

    2002-05-01

Perturbative coefficients for Wilson loops and the static-quark self-energy are extracted from Monte Carlo simulations at weak coupling. The lattice volumes and couplings are chosen to ensure that the lattice momenta are all perturbative. Twisted boundary conditions are used to eliminate the effects of lattice zero modes and to suppress nonperturbative finite-volume effects due to Z(3) phases. Simulations of the Wilson gluon action are done with both periodic and twisted boundary conditions, and over a wide range of lattice volumes (from 3⁴ to 16⁴) and couplings (from β~9 to β~60). A high precision comparison is made between the simulation data and results from finite-volume lattice perturbation theory. The Monte Carlo results are shown to be in excellent agreement with perturbation theory through second order. New results for third-order coefficients for a number of Wilson loops and the static-quark self-energy are reported.
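The extraction of perturbative coefficients from weak-coupling Monte Carlo data can be sketched as a least-squares fit with no constant term (since W → 1 as the coupling vanishes); the data below are synthetic and the coefficient values are illustrative, not the paper's results:

```python
import numpy as np

rng = np.random.default_rng(1)
true = (4.0, -1.2, 0.35)                      # illustrative coefficients
alpha = 3.0 / np.linspace(9.0, 60.0, 12)      # alpha ~ const/beta at weak coupling
lnW = -(true[0] * alpha + true[1] * alpha**2 + true[2] * alpha**3)
lnW += rng.normal(0.0, 1e-5, alpha.size)      # tiny Monte Carlo noise

# Design matrix [alpha, alpha^2, alpha^3] with the constant column dropped,
# then solve -ln W = c1*alpha + c2*alpha^2 + c3*alpha^3 by least squares.
A = np.vander(alpha, 4, increasing=True)[:, 1:]
c1, c2, c3 = np.linalg.lstsq(A, -lnW, rcond=None)[0]
print(f"c1 = {c1:.3f}, c2 = {c2:.3f}, c3 = {c3:.3f}")
```

As in the paper, the reliability of the third-order coefficient hinges on how small the statistical noise is relative to the α³ contribution over the sampled coupling range.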

  13. Computer Simulation of Spatial Arrangement and Connectivity of Particles in Three-Dimensional Microstructure: Application to Model Electrical Conductivity of Polymer Matrix Composite

    NASA Technical Reports Server (NTRS)

    Louis, P.; Gokhale, A. M.

    1996-01-01

    Computer simulation is a powerful tool for analyzing the geometry of three-dimensional microstructure. A computer simulation model is developed to represent the three-dimensional microstructure of a two-phase particulate composite where particles may be in contact with one another but do not overlap significantly. The model is used to quantify the "connectedness" of the particulate phase of a polymer matrix composite containing hollow carbon particles in a dielectric polymer resin matrix. The simulations are utilized to estimate the morphological percolation volume fraction for electrical conduction, and the effective volume fraction of the particles that actually take part in the electrical conduction. The calculated values of the effective volume fraction are used as an input for a self-consistent physical model for electrical conductivity. The predicted values of electrical conductivity are in very good agreement with the corresponding experimental data on a series of specimens having different particulate volume fraction.
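The "connectedness" analysis can be sketched with a union-find pass over randomly placed particles; unlike the paper's model, this toy version places particles independently and allows arbitrary overlap, and all sizes and counts are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

def find(parent, i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]   # path halving
        i = parent[i]
    return i

def union(parent, a, b):
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[rb] = ra

def largest_cluster_fraction(n, radius, box=1.0):
    """Place n equal spheres uniformly in a cube and merge every pair whose
    centers lie within one diameter (the spheres touch or overlap); return
    the fraction of particles in the largest connected cluster."""
    pts = rng.uniform(0.0, box, (n, 3))
    parent = list(range(n))
    d2 = (2.0 * radius) ** 2
    for i in range(n - 1):
        close = np.flatnonzero(np.sum((pts[i + 1:] - pts[i]) ** 2, axis=1) <= d2)
        for j in close:
            union(parent, i, int(j) + i + 1)
    roots = [find(parent, i) for i in range(n)]
    _, counts = np.unique(roots, return_counts=True)
    return counts.max() / n

# Connectivity rises sharply with particle number (i.e. volume fraction),
# which is the qualitative signature of a percolation threshold.
for n in (100, 400, 800):
    print(f"n={n:4d}  largest cluster fraction: "
          f"{largest_cluster_fraction(n, 0.06):.2f}")
```

The fraction of particles belonging to the spanning cluster is one way to estimate the "effective volume fraction" that actually participates in conduction.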

  14. Integrating surrogate models into subsurface simulation framework allows computation of complex reactive transport scenarios

    NASA Astrophysics Data System (ADS)

    De Lucia, Marco; Kempka, Thomas; Jatnieks, Janis; Kühn, Michael

    2017-04-01

Reactive transport simulations - where geochemical reactions are coupled with hydrodynamic transport of reactants - are extremely time consuming and suffer from significant numerical issues. Given the high uncertainties inherently associated with the geochemical models, which also constitute the major computational bottleneck, such requirements may seem inappropriate and probably constitute the main limitation for their wide application. A promising way to ease and speed up such coupled simulations is achievable by employing statistical surrogates instead of "full-physics" geochemical models [1]. Data-driven surrogates are reduced models obtained on a set of pre-calculated "full physics" simulations, capturing their principal features while being extremely fast to compute. Model reduction of course comes at the price of a precision loss; however, this appears justified in the presence of large uncertainties regarding the parametrization of geochemical processes. This contribution illustrates the integration of surrogates into the flexible simulation framework currently being developed by the authors' research group [2]. The high level language of choice for obtaining and dealing with surrogate models is R, which profits from state-of-the-art methods for statistical analysis of large simulation ensembles. A stand-alone advective mass transport module was furthermore developed in order to add such capability to any multiphase finite volume hydrodynamic simulator within the simulation framework. We present 2D and 3D case studies benchmarking the performance of surrogates and "full physics" chemistry in scenarios pertaining to the assessment of geological subsurface utilization. [1] Jatnieks, J., De Lucia, M., Dransch, D., Sips, M.: "Data-driven surrogate model approach for improving the performance of reactive transport simulations.", Energy Procedia 97, 2016, p. 447-453.
[2] Kempka, T., Nakaten, B., De Lucia, M., Nakaten, N., Otto, C., Pohl, M., Chabab [Tillner], E., Kühn, M.: "Flexible Simulation Framework to Couple Processes in Complex 3D Models for Subsurface Utilization Assessment.", Energy Procedia, 97, 2016 p. 494-501.
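The surrogate idea can be sketched in a few lines; `full_physics` below is a hypothetical stand-in for an expensive geochemical solver (the authors use R-based statistical surrogates), and the interpolant is just one simple choice of reduced model:

```python
import numpy as np

def full_physics(c):
    """Hypothetical stand-in for an expensive geochemical calculation:
    a smooth nonlinear response (e.g. equilibrium concentration vs. input)."""
    return np.exp(-2.0 * c) * np.sin(3.0 * c) + 0.5 * c

# Design of experiments: run the expensive model once on a coarse grid...
train_x = np.linspace(0.0, 2.0, 25)
train_y = full_physics(train_x)

# ...and use a cheap interpolant as the data-driven surrogate; the coupled
# transport code then queries the surrogate at every cell and time step.
def surrogate(c):
    return np.interp(c, train_x, train_y)

test_x = np.linspace(0.0, 2.0, 1000)
err = np.max(np.abs(surrogate(test_x) - full_physics(test_x)))
print(f"max surrogate error over the test grid: {err:.4f}")
```

The trade-off the abstract describes is visible here: the surrogate is orders of magnitude cheaper per call, at the cost of an approximation error bounded by the training design.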

  15. Response of a comprehensive climate model to a broad range of external forcings: relevance for deep ocean ventilation and the development of late Cenozoic ice ages

    NASA Astrophysics Data System (ADS)

    Galbraith, Eric; de Lavergne, Casimir

    2018-03-01

Over the past few million years, the Earth descended from the relatively warm and stable climate of the Pliocene into the increasingly dramatic ice age cycles of the Pleistocene. The influences of orbital forcing and atmospheric CO2 on land-based ice sheets have long been considered as the key drivers of the ice ages, but less attention has been paid to their direct influences on the circulation of the deep ocean. Here we provide a broad view on the influences of CO2, orbital forcing and ice sheet size according to a comprehensive Earth system model, by integrating the model to equilibrium under 40 different combinations of the three external forcings. We find that the volume contribution of Antarctic (AABW) vs. North Atlantic (NADW) waters to the deep ocean varies widely among the simulations, and can be predicted from the difference between the surface densities at AABW and NADW deep water formation sites. Minima of both the AABW-NADW density difference and the AABW volume occur near interglacial CO2 (270-400 ppm). At low CO2, abundant formation and northward export of sea ice in the Southern Ocean contributes to very salty and dense Antarctic waters that dominate the global deep ocean. Furthermore, when the Earth is cold, low obliquity (i.e. a reduced tilt of Earth's rotational axis) enhances the Antarctic water volume by expanding sea ice further. At high CO2, AABW dominance is favoured due to relatively warm subpolar North Atlantic waters, with more dependence on precession. Meanwhile, a large Laurentide ice sheet steers atmospheric circulation so as to strengthen the Atlantic Meridional Overturning Circulation, but cools the Southern Ocean remotely, enhancing Antarctic sea ice export and leading to very salty and expanded AABW. Together, these results suggest that a 'sweet spot' of low CO2, low obliquity and relatively small ice sheets would have poised the AMOC for interruption, promoting Dansgaard-Oeschger-type abrupt change.
The deep ocean temperature and salinity simulated under the most representative `glacial' state agree very well with reconstructions from the Last Glacial Maximum (LGM), which lends confidence in the ability of the model to estimate large-scale changes in water-mass geometry. The model also simulates a circulation-driven increase of preformed radiocarbon reservoir age, which could explain most of the reconstructed LGM-preindustrial ocean radiocarbon change. However, the radiocarbon content of the simulated glacial ocean is still higher than reconstructed for the LGM, and the model does not reproduce reconstructed LGM deep ocean oxygen depletions. These ventilation-related disagreements probably reflect unresolved physical aspects of ventilation and ecosystem processes, but also raise the possibility that the LGM ocean circulation was not in equilibrium. Finally, the simulations display an increased sensitivity of both surface air temperature and AABW volume to orbital forcing under low CO2. We suggest that this enhanced orbital sensitivity contributed to the development of the ice age cycles by amplifying the responses of climate and the carbon cycle to orbital forcing, following a gradual downward trend of CO2.

  16. Two-way coupled SPH and particle level set fluid simulation.

    PubMed

    Losasso, Frank; Talton, Jerry; Kwatra, Nipun; Fedkiw, Ronald

    2008-01-01

    Grid-based methods have difficulty resolving features on or below the scale of the underlying grid. Although adaptive methods (e.g. RLE, octrees) can alleviate this to some degree, separate techniques are still required for simulating small-scale phenomena such as spray and foam, especially since these more diffuse materials typically behave quite differently from their denser counterparts. In this paper, we propose a two-way coupled simulation framework that uses the particle level set method to efficiently model dense liquid volumes and a smoothed particle hydrodynamics (SPH) method to simulate diffuse regions such as sprays. Our novel SPH method allows us to simulate both dense and diffuse water volumes, fully incorporates the particles that are automatically generated by the particle level set method in under-resolved regions, and allows for two-way mixing between dense SPH volumes and grid-based liquid representations.
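    As a minimal illustration of the SPH side of such a coupling (not the authors' full method), the standard kernel-weighted density summation underlying any SPH solver can be sketched as follows. The poly6 kernel and the brute-force O(n²) neighbour loop are textbook simplifications; production codes use spatial hashing for neighbour search.

```python
import math

def poly6_kernel(r, h):
    """Standard 3D poly6 smoothing kernel; compactly supported, zero for r >= h."""
    if r >= h:
        return 0.0
    return 315.0 / (64.0 * math.pi * h**9) * (h * h - r * r) ** 3

def sph_density(positions, masses, h):
    """Estimate density at each particle as a kernel-weighted sum of neighbour masses."""
    n = len(positions)
    rho = [0.0] * n
    for i in range(n):
        xi, yi, zi = positions[i]
        for j in range(n):  # O(n^2) for clarity; real codes use a spatial hash grid
            xj, yj, zj = positions[j]
            r = math.sqrt((xi - xj) ** 2 + (yi - yj) ** 2 + (zi - zj) ** 2)
            rho[i] += masses[j] * poly6_kernel(r, h)
    return rho
```

    In a two-way coupled scheme like the one described, densities estimated this way for the diffuse SPH particles would be exchanged with the grid-based level set representation at each step.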

  17. A 2D modeling approach for fluid propagation during FE-forming simulation of continuously reinforced composites in wet compression moulding

    NASA Astrophysics Data System (ADS)

    Poppe, Christian; Dörr, Dominik; Henning, Frank; Kärger, Luise

    2018-05-01

    Wet compression moulding (WCM) provides large-scale production potential for continuously fiber reinforced components as a promising alternative to resin transfer moulding (RTM). Lower cycle times are possible due to parallelization of the process steps draping, infiltration and curing during moulding (viscous draping). Experimental and theoretical investigations indicate a strong mutual dependency between the physical mechanisms that occur during draping and mould filling (fluid-structure interaction). Thus, key process parameters, like fiber orientation, fiber volume fraction, cavity pressure and the amount and viscosity of the resin, are physically coupled. To enable time and cost efficient product and process development throughout all design stages, accurate process simulation tools are desirable. Separate draping and mould filling simulation models, as appropriate for the sequential RTM process, cannot be applied to the WCM process due to the above outlined physical couplings. Within this study, a two-dimensional Darcy-Propagation-Element (DPE-2D) based on a finite element formulation with additional control volumes (FE/CV) is presented, verified and applied to forming simulation of a generic geometry, as a first step towards a fluid-structure-interaction model taking into account simultaneous resin infiltration and draping. The model is implemented in the commercial FE solver Abaqus by means of several user subroutines considering simultaneous draping and 2D-infiltration mechanisms. Darcy's equation is solved with respect to the local fiber orientation. Furthermore, the material model can access the local fluid domain properties to update the mechanical forming material parameters, which enables further investigations on the coupled physical mechanisms.
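    The core of such a Darcy-based infiltration step, Darcy's law u = -(K/μ)∇p with the permeability tensor rotated into the local fiber orientation, can be sketched as below. This is a hedged illustration, not the DPE-2D implementation: the function name and all numbers are assumptions, and a real FE/CV element would assemble this per integration point.

```python
import numpy as np

def darcy_velocity(K_principal, theta, grad_p, mu):
    """Darcy's law u = -(K / mu) @ grad(p) in 2D, with the principal
    permeabilities (K1 along fibers, K2 transverse) rotated by the
    local fiber orientation angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s],
                  [s,  c]])
    K = R @ np.diag(K_principal) @ R.T  # permeability tensor in global axes
    return -(K @ grad_p) / mu

# Illustrative values: fibers aligned with x, pressure gradient of 1e5 Pa/m in x,
# resin viscosity 0.1 Pa.s, anisotropic permeability (flow easier along fibers)
u = darcy_velocity((1e-10, 1e-11), 0.0, np.array([1e5, 0.0]), 0.1)
```

    With fibers at a nonzero angle to the pressure gradient, the off-diagonal terms of the rotated tensor deflect the flow toward the fiber direction, which is why the local orientation from the draping solution must feed the infiltration solution.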

  18. Health effects associated with energy conservation measures in commercial buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stenner, R.D.; Baechler, M.C.

    Indoor air quality can be impacted by hundreds of different chemicals. More than 900 different organic compounds alone have been identified in indoor air. Health effects that could arise from exposure to individual pollutants or mixtures of pollutants cover the full range of acute and chronic effects, ranging from largely reversible responses, such as rashes and irritations, to irreversible toxic and carcinogenic effects. These indoor contaminants are emitted from a large variety of materials and substances that are widespread components of everyday life. Pacific Northwest Laboratory conducted a search of the peer-reviewed literature on health effects associated with indoor air contaminants for the Bonneville Power Administration to aid the agency in the preparation of environmental documents. Results are reported in two volumes. Volume 1 summarizes the results of the search of the peer-reviewed literature on health effects associated with a selected list of indoor air contaminants. In addition, the report discusses potential health effects of polychlorinated biphenyls and chlorofluorocarbons. All references to the literature reviewed are found in this document, Volume 2. Volume 2 provides detailed information from the literature reviewed, summarizes potential health effects, reports health hazard ratings, and discusses quantitative estimates of carcinogenic risk in humans and animals. Contaminants discussed in this report are those that: have been measured in the indoor air of a public building; have been measured at significant concentrations in test situations simulating indoor air quality (as presented in the referenced literature); and have a significant hazard rating. 38 refs., 7 figs., 23 tabs.

  19. Under conditions of large geometric miss, tumor control probability can be higher for static gantry intensity-modulated radiation therapy compared to volume-modulated arc therapy for prostate cancer.

    PubMed

    Balderson, Michael; Brown, Derek; Johnson, Patricia; Kirkby, Charles

    2016-01-01

    The purpose of this work was to compare static gantry intensity-modulated radiation therapy (IMRT) with volume-modulated arc therapy (VMAT) in terms of tumor control probability (TCP) under scenarios involving large geometric misses, i.e., those beyond what are accounted for when margin expansion is determined. Using a planning approach typical for these treatments, a linear-quadratic-based model for TCP was used to compare mean TCP values for a population of patients who experience a geometric miss (i.e., systematic and random shifts of the clinical target volume within the planning target dose distribution). A Monte Carlo approach was used to account for the different biological sensitivities of a population of patients. Interestingly, for errors consisting of coplanar systematic target volume offsets and three-dimensional random offsets, static gantry IMRT appears to offer an advantage over VMAT in that larger shift errors are tolerated for the same mean TCP. For example, under the conditions simulated, erroneous systematic shifts of 15 mm directly between or directly into static gantry IMRT fields result in mean TCP values between 96% and 98%, whereas the same errors on VMAT plans result in mean TCP values between 45% and 74%. Random geometric shifts of the target volume were characterized using normal distributions in each Cartesian dimension. When the standard deviations were doubled from those values assumed in the derivation of the treatment margins, our model showed a 7% drop in mean TCP for the static gantry IMRT plans but a 20% drop in TCP for the VMAT plans. Although adding a margin for error to a clinical target volume is perhaps the best approach to account for expected geometric misses, this work suggests that static gantry IMRT may offer a treatment that is more tolerant to geometric miss errors than VMAT. Copyright © 2016 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.
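    The Monte Carlo approach described can be sketched in one dimension: a Poisson TCP model on top of linear-quadratic cell survival, with the target randomly shifted inside an idealized flat dose profile. All numbers below (clonogen count, α, β, fractionation, field and shift widths) are illustrative assumptions, not the paper's planning parameters.

```python
import math
import random

def lq_survival(D, alpha=0.15, beta=0.05, n_frac=39):
    """Linear-quadratic surviving fraction after n_frac fractions of total dose D (Gy)."""
    d = D / n_frac
    return math.exp(-n_frac * (alpha * d + beta * d * d))

def mean_tcp(n0_clonogens=1e6, D=78.0, field_half=1.0, target_half=0.5,
             sigma_shift=0.3, n_trials=5000, seed=0):
    """Monte Carlo mean TCP for a 1D target randomly shifted within a flat
    dose profile: cells inside the field get full dose, cells shifted
    outside get none, and TCP = exp(-surviving clonogens) (Poisson model)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        dx = rng.gauss(0.0, sigma_shift)
        lo, hi = dx - target_half, dx + target_half
        covered = max(0.0, min(hi, field_half) - max(lo, -field_half))
        frac_in = covered / (2 * target_half)
        surviving = n0_clonogens * (frac_in * lq_survival(D) + (1.0 - frac_in))
        total += math.exp(-surviving)
    return total / n_trials
```

    Even this toy model reproduces the qualitative behaviour in the abstract: TCP is near its nominal value while shifts stay within the dose plateau, then collapses once part of the target routinely falls outside it, and wider shift distributions degrade the mean TCP.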

  20. Scalar excursions in large-eddy simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matheou, Georgios; Dimotakis, Paul E.

    Here, the range of values of scalar fields in turbulent flows is bounded by their boundary values, for passive scalars, and by a combination of boundary values, reaction rates, phase changes, etc., for active scalars. The current investigation focuses on the local conservation of passive scalar concentration fields and the ability of the large-eddy simulation (LES) method to observe the boundedness of passive scalar concentrations. In practice, as a result of numerical artifacts, this fundamental constraint is often violated, with scalars exhibiting unphysical excursions. The present study characterizes passive-scalar excursions in LES of a shear flow and examines methods for diagnosis and assessment of the problem. The analysis of scalar-excursion statistics supports the main hypothesis of the current study: that unphysical scalar excursions in LES result from dispersive errors of the convection-term discretization when the subgrid-scale (SGS) model provides insufficient dissipation to produce a sufficiently smooth scalar field. In the LES runs three parameters are varied: the discretization of the convection terms, the SGS model, and grid resolution. Unphysical scalar excursions decrease as the order of accuracy of non-dissipative schemes is increased, but the improvement rate decreases with increasing order of accuracy. Two SGS models are examined, the stretched-vortex model and a constant-coefficient Smagorinsky model. Scalar excursions strongly depend on the SGS model; they are significantly reduced when the characteristic SGS scale is set to double the grid spacing in runs with the stretched-vortex model. The maximum excursion and the volume fraction of excursions outside boundary values show opposite trends with respect to resolution: the maximum unphysical excursion increases as resolution increases, whereas the volume fraction decreases. The reason for the increase in the maximum excursion is statistical and traceable to the number of grid points (sample size), which increases with resolution. In contrast, the volume fraction of unphysical excursions decreases with resolution because the SGS models explored perform better at higher grid resolution.
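    The two excursion diagnostics discussed above, the maximum excursion beyond the admissible bounds and the volume fraction of out-of-bounds cells, are straightforward to compute on a stored scalar field. A minimal sketch, assuming a passive scalar bounded in [0, 1] on a uniform grid (function and variable names are illustrative):

```python
import numpy as np

def excursion_stats(scalar, lo=0.0, hi=1.0):
    """Diagnose unphysical scalar excursions in a simulated field:
    the largest overshoot/undershoot beyond [lo, hi], and the fraction
    of grid cells lying outside the bounds (volume fraction on a uniform grid)."""
    over = np.maximum(scalar - hi, 0.0)    # overshoot above hi, zero elsewhere
    under = np.maximum(lo - scalar, 0.0)   # undershoot below lo, zero elsewhere
    max_excursion = max(over.max(), under.max())
    volume_fraction = np.mean((scalar > hi) | (scalar < lo))
    return max_excursion, volume_fraction

# Example: a small field with a few dispersive over- and undershoots
field = np.array([[0.20, 1.05, 0.70],
                  [-0.02, 0.50, 0.90],
                  [0.30, 0.40, 0.60]])
max_exc, vol_frac = excursion_stats(field)  # ~0.05 and 2 of 9 cells
```

    The opposite resolution trends reported in the abstract follow naturally from these two definitions: the maximum is an extreme-value statistic that grows with sample size (grid points), while the mean-based volume fraction shrinks as the SGS model performs better.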
