Science.gov

Sample records for large volume simulations

  1. Simulating cosmic reionization: how large a volume is large enough?

    NASA Astrophysics Data System (ADS)

    Iliev, Ilian T.; Mellema, Garrelt; Ahn, Kyungjin; Shapiro, Paul R.; Mao, Yi; Pen, Ue-Li

    2014-03-01

    We present the largest-volume (425 Mpc h^-1 = 607 Mpc on a side) full radiative transfer simulation of cosmic reionization to date. We show that there is significant additional power in density fluctuations at very large scales. We systematically investigate the effects this additional power has on the progress, duration and features of reionization and on selected reionization observables. We find that a comoving volume of ~100 Mpc h^-1 per side is sufficient for deriving a convergent mean reionization history, but that the reionization patchiness is significantly underestimated. We use jackknife splitting to quantify the convergence of reionization properties with simulation volume. We find that sub-volumes of ~100 Mpc h^-1 per side or larger yield convergent reionization histories, except for the earliest times, but smaller volumes of ~50 Mpc h^-1 or less are not well converged at any redshift. Reionization history milestones show significant scatter between the sub-volumes, as high as Δz ~ 1 for ~50 Mpc h^-1 volumes. If we only consider mean-density sub-regions the scatter decreases, but remains at Δz ~ 0.1-0.2 for the different size sub-volumes. Consequently, many potential reionization observables like the 21-cm rms, 21-cm PDF skewness and kurtosis all show good convergence for volumes of ~200 Mpc h^-1, but retain considerable scatter for smaller volumes. In contrast, the three-dimensional 21-cm power spectra at large scales (k < 0.25 h Mpc^-1) do not fully converge for any sub-volume size. These additional large-scale fluctuations significantly enhance the 21-cm fluctuations, which should improve the prospects of detection considerably, given the lower foregrounds and greater interferometer sensitivity at higher frequencies.
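
    To make the sub-volume convergence test concrete, here is a minimal sketch (a toy version, not the paper's pipeline) that splits a cubic simulation box into sub-volumes and returns the per-sub-volume means whose scatter serves as the convergence diagnostic; the random stand-in field and all sizes are illustrative.

    ```python
    import numpy as np

    def subvolume_means(field, n_split):
        """Split a cubic box into n_split^3 sub-volumes and return the
        mean of `field` in each one; the scatter of these means mimics
        the sub-volume convergence test described in the abstract."""
        s = field.shape[0] // n_split
        means = []
        for i in range(n_split):
            for j in range(n_split):
                for k in range(n_split):
                    sub = field[i*s:(i+1)*s, j*s:(j+1)*s, k*s:(k+1)*s]
                    means.append(sub.mean())
        return np.array(means)

    # Random stand-in for an ionization-fraction box (illustrative only)
    rng = np.random.default_rng(0)
    box = rng.random((256, 256, 256))
    m = subvolume_means(box, 4)   # 4^3 sub-volumes, ~106 Mpc/h each for a 425 Mpc/h box
    print(m.mean(), m.std())      # scatter across sub-volumes
    ```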

  2. Determination of the large scale volume weighted halo velocity bias in simulations

    NASA Astrophysics Data System (ADS)

    Zheng, Yi; Zhang, Pengjie; Jing, Yipeng

    2015-06-01

    A profound assumption in peculiar velocity cosmology is b_v = 1 at sufficiently large scales, where b_v is the volume-weighted halo (galaxy) velocity bias with respect to the matter velocity field. However, this fundamental assumption has not been robustly verified in numerical simulations. Furthermore, it is challenged by structure formation theory (Bardeen, Bond, Kaiser and Szalay, Astrophys. J. 304, 15 (1986); Desjacques and Sheth, Phys. Rev. D 81, 023526 (2010)), which predicts the existence of velocity bias (at least for proto-halos) due to the fact that halos reside in special regions (local density peaks). The major obstacle to measuring the volume-weighted velocity from N-body simulations is an unphysical sampling artifact. It is entangled in the measured velocity statistics and becomes significant for sparse populations. With recently improved understanding of the sampling artifact (Zhang, Zheng and Jing, 2015, PRD; Zheng, Zhang and Jing, 2015, PRD), for the first time we are able to appropriately correct this sampling artifact and then robustly measure the volume-weighted halo velocity bias. (1) We verify b_v = 1 within 2% model uncertainty at k ≲ 0.1 h/Mpc and z = 0-2 for halos of mass ~10^12-10^13 h^-1 M⊙ and, therefore, consolidate a foundation for peculiar velocity cosmology. (2) We also find statistically significant signs of b_v ≠ 1 at k ≳ 0.1 h/Mpc. Unfortunately, whether this is real or caused by a residual sampling artifact requires further investigation. Nevertheless, cosmology based on velocity data at k ≳ 0.1 h/Mpc should be careful with this potential velocity bias.
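
    Schematically, the velocity bias can be estimated as the ratio of the halo-matter velocity cross power to the matter velocity auto power as a function of k. The sketch below (an illustration of the definition only) does this for gridded velocity component fields and deliberately ignores the sampling artifact that the paper corrects for.

    ```python
    import numpy as np

    def velocity_bias(v_halo, v_matter, nbins=16):
        """Schematic b_v(k) = P_hm(k) / P_mm(k) from two gridded velocity
        component fields on the same cubic grid. No artifact correction."""
        n = v_halo.shape[0]
        vh, vm = np.fft.rfftn(v_halo), np.fft.rfftn(v_matter)
        p_hm, p_mm = (vh * vm.conj()).real, np.abs(vm) ** 2
        k1, kz = np.fft.fftfreq(n), np.fft.rfftfreq(n)   # |k| in grid units
        kk = np.sqrt(k1[:, None, None]**2 + k1[None, :, None]**2 + kz**2)
        edges = np.linspace(0.0, kk.max() + 1e-12, nbins + 1)
        idx = np.digitize(kk.ravel(), edges) - 1
        num = np.bincount(idx, weights=p_hm.ravel(), minlength=nbins)
        den = np.bincount(idx, weights=p_mm.ravel(), minlength=nbins)
        return 0.5 * (edges[:-1] + edges[1:]), num[:nbins] / np.maximum(den[:nbins], 1e-30)

    # Toy check: a nearly unbiased "halo" velocity field should give b_v ~ 1
    rng = np.random.default_rng(0)
    vm = rng.standard_normal((64, 64, 64))
    vh = vm + 0.05 * rng.standard_normal((64, 64, 64))
    k, bv = velocity_bias(vh, vm)
    print(bv[:4])
    ```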

  3. Measurements of Elastic and Inelastic Properties under Simulated Earth's Mantle Conditions in Large Volume Apparatus

    NASA Astrophysics Data System (ADS)

    Mueller, H. J.

    2012-12-01

    The interpretation of highly resolved seismic data from Earth's deep interior requires measurements of the physical properties of Earth's materials under experimentally simulated mantle conditions. More than a decade ago, seismic tomography clearly showed that subducted crustal material can reach the core-mantle boundary under specific circumstances. That means there is no longer room for the assumption that deep mantle rocks might be much less complex than the deep crustal rocks known from exhumation processes. In light of this, geophysical high-pressure research faces the challenge of increasing pressure and sample volume at the same time, in order to perform in situ experiments with representative complex samples. High-performance multi-anvil devices using novel materials are the most promising technique for this exciting task. Recent large volume presses provide sample volumes 3 to 7 orders of magnitude bigger than diamond anvil cells, far beyond transition zone conditions. The sample size of several cubic millimeters allows elastic wave frequencies in the low to medium MHz range. Together with the small, and even adjustable, temperature gradients over the whole sample, this technique in principle makes anisotropy and grain-boundary effects in complex systems accessible to measurements of elastic and inelastic properties. Measurements of both elastic wave velocities are also unrestricted for opaque and encapsulated samples. The application of triple-mode transducers and the data-transfer-function technique for ultrasonic interferometry reduces the time for saving the data during the experiment to about a minute or less. That makes real transient measurements under non-equilibrium conditions possible. A further benefit is that both elastic wave velocities are measured exactly simultaneously. Ultrasonic interferometry necessarily requires in situ measurement of sample deformation by X-radiography. Time-resolved X-radiography makes in situ falling-sphere viscosimetry and even the

  4. Large scale traffic simulations

    SciTech Connect

    Nagel, K.; Barrett, C.L.; Rickert, M.

    1997-04-01

    Large scale microscopic (i.e. vehicle-based) traffic simulations pose high demands on computational speed in at least two application areas: (i) real-time traffic forecasting, and (ii) long-term planning applications (where repeated "looping" between the microsimulation and the simulated planning of individual persons' behavior is necessary). As a rough number, a real-time simulation of an area such as Los Angeles (ca. 1 million travellers) will need a computational speed much higher than 1 million "particle" (= vehicle) updates per second. This paper reviews how this problem is approached in different projects and how these approaches depend both on the specific questions and on the prospective user community. The approaches range from highly parallel and vectorizable, single-bit implementations on parallel supercomputers for statistical physics questions, via more realistic implementations on coupled workstations, to more complicated driving dynamics implemented again on parallel supercomputers. 45 refs., 9 figs., 1 tab.
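
    The single-bit cellular-automaton implementations mentioned above are in the spirit of the Nagel-Schreckenberg model; a minimal vectorized update (illustrative parameters, not one of the project codes) looks like this:

    ```python
    import numpy as np

    def nasch_step(pos, vel, road_len, rng, vmax=5, p_slow=0.3):
        """One parallel Nagel-Schreckenberg update on a ring road:
        accelerate, brake to the gap ahead, randomize, move. Each call
        performs one "particle" (vehicle) update per car."""
        order = np.argsort(pos)                          # cars sorted along the road
        pos, vel = pos[order], vel[order]
        gaps = (np.roll(pos, -1) - pos - 1) % road_len   # empty cells to car ahead
        vel = np.minimum(vel + 1, vmax)                  # 1) accelerate
        vel = np.minimum(vel, gaps)                      # 2) brake, avoid collision
        slow = rng.random(len(vel)) < p_slow             # 3) random slowdown
        vel = np.where(slow, np.maximum(vel - 1, 0), vel)
        return (pos + vel) % road_len, vel               # 4) move

    rng = np.random.default_rng(1)
    pos = rng.choice(10_000, size=2_000, replace=False)  # 2000 cars on 10000 cells
    vel = np.zeros(2_000, dtype=int)
    for _ in range(100):                                 # 100 x 2000 vehicle updates
        pos, vel = nasch_step(pos, vel, 10_000, rng)
    ```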

  5. Applied large eddy simulation.

    PubMed

    Tucker, Paul G; Lardeau, Sylvain

    2009-07-28

    Large eddy simulation (LES) is now seen more and more as a viable alternative to current industrial practice, usually based on problem-specific Reynolds-averaged Navier-Stokes (RANS) methods. Access to detailed flow physics is attractive to industry, especially in an environment in which computer modelling is bound to play an ever increasing role. However, the improvement in accuracy and flow detail comes at substantial cost. This has so far prevented wider industrial use of LES. The purpose of the applied LES discussion meeting was to address questions regarding what is achievable and what is not, given the current technology and knowledge, for an industrial practitioner who is interested in using LES. The use of LES was explored in an application-centred context between diverse fields. The general flow-governing equation form was explored along with various LES models. The errors occurring in LES were analysed. Also, the hybridization of RANS and LES was considered. The importance of modelling relative to boundary conditions, problem definition and other more mundane aspects was examined. It was to an extent concluded that for LES to make the most rapid industrial impact, pragmatic hybrid use of LES, implicit LES and RANS elements will probably be needed. In addition, further highly industry-sector-specific model parametrizations will be required, with clear thought given to the key target design parameter(s). The combination of good numerical modelling expertise, a sound understanding of turbulence, along with artistry, pragmatism and the use of recent developments in computer science should dramatically add impetus to the industrial uptake of LES. In the light of the numerous technical challenges that remain, it appears that for some time to come LES will have echoes of the high levels of technical knowledge required for safe use of RANS, but with much greater fidelity.

  6. A new development of the dynamic procedure in large-eddy simulation based on a Finite Volume integral approach. Application to stratified turbulence

    NASA Astrophysics Data System (ADS)

    Denaro, Filippo Maria; de Stefano, Giuliano

    2011-10-01

    A Finite Volume-based large-eddy simulation method is proposed along with a suitable extension of the dynamic modelling procedure that takes into account the integral formulation of the governing filtered equations. The misleading interpretation of FV in some of the literature is also addressed. Then, the classical Germano identity is congruently rewritten in such a way that the determination of the modelling parameters does not require any arbitrary averaging procedure and thus retains a fully local character. The numerical modelling of stratified turbulence is the specific problem considered in this study, as an archetype of simple geophysical flows. The original scaling formulation of the dynamic sub-grid scale model proposed by Wong and Lilly (Phys. Fluids 6(6), 1994) is suitably extended to the present integral formulation. This approach is preferred to traditional ones since the eddy coefficients can be computed independently, avoiding the addition of unjustified buoyancy production terms in the constitutive equations. Simple scaling arguments allow us to avoid the equilibrium hypothesis, according to which the dissipation rate should equal the sub-grid scale energy production. A careful a priori analysis of the relevance of the test filter shape as well as the filter-to-grid ratio is reported. Large-eddy simulation results are a posteriori compared with a reference pseudo-spectral direct numerical solution that is suitably post-filtered in order to allow a meaningful comparison. In particular, the spectral distribution of kinetic and thermal energy as well as the viscosity and diffusivity sub-grid scale profiles are illustrated. The good performance of the proposed method, in terms of both the evolution of global quantities and statistics, is very promising for the future development and application of the method.
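
    For readers unfamiliar with the dynamic procedure, the sketch below computes a pointwise dynamic Smagorinsky-type coefficient from the Germano identity for a 1D periodic field, with no spatial averaging, in the spirit of the local determination described above. It is a toy in a differential setting; the paper's Finite Volume integral formulation is not reproduced, and the filter, field and constants are illustrative.

    ```python
    import numpy as np

    def box_filter(u, w):
        """Periodic top-hat (test) filter of odd width w."""
        kernel = np.ones(w) / w
        return np.convolve(np.tile(u, 3), kernel, mode="same")[len(u):2 * len(u)]

    def dynamic_coefficient(u, dx, ratio=2):
        """Pointwise dynamic coefficient C(x) = L(x) / M(x) from a
        Germano-identity-like relation, evaluated locally (no averaging)."""
        w = 2 * ratio + 1
        uf = box_filter(u, w)                      # test-filtered velocity
        L = box_filter(u * u, w) - uf * uf         # resolved (Leonard-type) stress
        du, duf = np.gradient(u, dx), np.gradient(uf, dx)
        m_grid = box_filter(np.abs(du) * du, w)    # grid-level model term, test-filtered
        m_test = ratio**2 * np.abs(duf) * duf      # test-level model term
        M = 2.0 * dx**2 * (m_test - m_grid)
        M = np.where(np.abs(M) < 1e-12, np.inf, M) # guard the division
        return L / M                               # local coefficient, cell by cell

    x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
    u = np.sin(x) + 0.2 * np.sin(17 * x + 1.0)
    print(dynamic_coefficient(u, x[1] - x[0])[:5])
    ```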

  7. Large Eddy Simulation of Bubbly Flow and Slag Layer Behavior in Ladle with Discrete Phase Model (DPM)-Volume of Fluid (VOF) Coupled Model

    NASA Astrophysics Data System (ADS)

    Li, Linmin; Liu, Zhongqiu; Cao, Maoxue; Li, Baokuan

    2015-07-01

    In the ladle metallurgy process, the bubble movement and slag layer behavior are very important to the refining process and steel quality. For the bubble-liquid flow, bubble movement plays a significant role in the phase structure and causes the unsteady complex turbulent flow pattern. Capturing this behavior is one of the most crucial shortcomings of current two-fluid models. In the current work, a one-third scale water model is established to investigate the bubble movement and the slag open-eye formation. A new mathematical model using the large eddy simulation (LES) is developed for the bubble-liquid-slag-air four-phase flow in the ladle. The Eulerian volume of fluid (VOF) model is used for tracking the liquid-slag-air free surfaces and the Lagrangian discrete phase model (DPM) is used for describing the bubble movement. The turbulent liquid flow is induced by bubble-liquid interactions and is solved by LES. The process of a bubble leaving the liquid and entering the air is modeled using a user-defined function. The results show that the present LES-DPM-VOF coupled model is good at predicting the unsteady bubble movement, slag eye formation, interface fluctuation, and slag entrainment.

  8. Large volume manufacture of dymalloy

    SciTech Connect

    1998-06-22

    The purpose of this research was to test the commercial viability and manufacturing feasibility of Dymalloy, a high-thermal-conductivity composite material. Dymalloy was developed as part of a CRADA with Sun Microsystems, a potential end user of Dymalloy as a substrate for MCMs (multi-chip modules). Sun had no desire to be involved in the manufacture of this material. The goal of this small business CRADA with Spectra Mat was to establish the high-volume commercial manufacturing source for Dymalloy required by an end user such as Sun Microsystems. The difference between the fabrication technique developed during the CRADA and this proposed work related to the mechanical technique of coating the diamond powder. Mechanical parts for the high-volume diamond powder coating process existed; however, they needed to be installed in an existing coating system for evaluation. Sputtering systems similar to the one required for this project were available at LLNL. Once the diamond powder was coated, both LLNL and Spectra Mat could make and test the Dymalloy composites. Spectra Mat manufactured Dymalloy composites in order to evaluate and establish a reasonable cost estimate based on their existing processing capabilities. This information was used by Spectra Mat to define the market and the cost-competitive products that could be commercialized from this new substrate material.

  9. LARGE BUILDING HVAC SIMULATION

    EPA Science Inventory

    The report discusses the monitoring and collection of data relating to indoor pressures and radon concentrations under several test conditions in a large school building in Bartow, Florida. The Florida Solar Energy Center (FSEC) used an integrated computational software, FSEC 3.0...

  10. A Large number of fast cosmological simulations

    NASA Astrophysics Data System (ADS)

    Koda, Jun; Kazin, E.; Blake, C.

    2014-01-01

    Mock galaxy catalogs are essential tools for analyzing large-scale structure data. Many independent realizations of mock catalogs are necessary to evaluate the uncertainties in the measurements. We perform 3600 cosmological simulations for the WiggleZ Dark Energy Survey to obtain new, improved Baryon Acoustic Oscillation (BAO) cosmic distance measurements using the density field "reconstruction" technique. We use 1296^3 particles in a periodic box of 600 Mpc/h on a side, which is the minimum requirement given the survey volume and observed galaxies. In order to perform such a large number of simulations, we developed a parallel code using the COmoving Lagrangian Acceleration (COLA) method, which can simulate cosmological large-scale structure reasonably well with only 10 time steps. Our simulation is more than 100 times faster than conventional N-body simulations; one COLA simulation takes only 15 minutes on 216 computing cores. We have completed the 3600 simulations with a reasonable computation time of 200k core-hours. We also present the results of the revised WiggleZ BAO distance measurement, which are significantly improved by the reconstruction technique.
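
    The quoted computational cost can be checked directly from the stated figures (15 minutes per run on 216 cores, 3600 runs):

    ```python
    # Core-hour check for the COLA campaign described above
    runs, minutes_per_run, cores = 3600, 15, 216
    core_hours = runs * (minutes_per_run / 60) * cores
    print(core_hours)   # 194400, consistent with the quoted ~200k core-hours
    ```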

  11. Large volume flow-through scintillating detector

    DOEpatents

    Gritzo, Russ E.; Fowler, Malcolm M.

    1995-01-01

    A large-volume, flow-through radiation detector for use in large air flow situations, such as incinerator stacks or building air systems, comprises a plurality of flat plates made of a scintillating material arranged parallel to the air flow. Each scintillating plate has an attached light guide that transfers light generated inside the plate to an associated photomultiplier tube. The outputs of the photomultiplier tubes are connected to electronics that record any radiation and provide an alarm if appropriate for the application.

  12. Mesoscale Ocean Large Eddy Simulations

    NASA Astrophysics Data System (ADS)

    Pearson, Brodie; Fox-Kemper, Baylor; Bachman, Scott; Bryan, Frank

    2015-11-01

    The highest resolution global climate models (GCMs) can now resolve the largest scales of mesoscale dynamics in the ocean. This has the potential to increase the fidelity of GCMs. However, the effects of the smallest, unresolved, scales of mesoscale dynamics must still be parametrized. One such family of parametrizations is mesoscale ocean large eddy simulations (MOLES), but the effects of including MOLES in a GCM are not well understood. In this presentation, several MOLES schemes are implemented in a mesoscale-resolving GCM (CESM), and the resulting flow is compared with that produced by more traditional sub-grid parametrizations. Large eddy simulation (LES) is used to simulate flows where the largest scales of turbulent motion are resolved, but the smallest scales are not resolved. LES has traditionally been used to study 3D turbulence, but recently it has also been applied to idealized 2D and quasi-geostrophic (QG) turbulence. The MOLES presented here are based on 2D and QG LES schemes.

  13. Safety considerations in large-volume lipoplasty.

    PubMed

    Giese, S Y

    2001-11-01

    Proper patient selection, diligent fluid management, and attention to body temperature are important safety considerations in large-volume lipoplasty (LVL). Complications related to fluid overload, lidocaine toxicity, coagulopathies, and lengthy combined surgical procedures are preventable and not directly linked to LVL technique. Benefits as well as morbidity and mortality from LVL can be weighed against risk factors such as obesity, a prediabetic condition, and/or adverse effects of weight-loss medications. The author describes how she incorporates safeguards into her LVL procedures. (Aesthetic Surg J 2001;21:545-548.).

  14. Large mode-volume, large beta, photonic crystal laser resonator

    SciTech Connect

    Dezfouli, Mohsen Kamandar; Dignam, Marc M.

    2014-12-15

    We propose an optical resonator formed from the coupling of 13 L2 defects in a triangular-lattice photonic crystal slab. Using a tight-binding formalism, we optimized the coupled-defect cavity design to obtain a resonator with predicted single-mode operation, a mode volume five times that of an L2-cavity mode, and a beta factor of 0.39. The results are confirmed using finite-difference time-domain simulations. This resonator is very promising for use as a single-mode photonic crystal vertical-cavity surface-emitting laser with high saturation output power compared to a laser consisting of one of the single-defect cavities.

  15. Temporal Large-Eddy Simulation

    NASA Technical Reports Server (NTRS)

    Pruett, C. D.; Thomas, B. C.

    2004-01-01

    In 1999, Stolz and Adams unveiled a subgrid-scale model for LES based upon approximately inverting (defiltering) the spatial grid-filter operator, termed the approximate deconvolution model (ADM). Subsequently, the utility and accuracy of the ADM were demonstrated in a posteriori analyses of flows as diverse as incompressible plane-channel flow and supersonic compression-ramp flow. In a prelude to the current paper, a parameterized temporal ADM (TADM) was developed and demonstrated in both a priori and a posteriori analyses for forced, viscous Burgers flow. The development of a time-filtered variant of the ADM was motivated primarily by the desire for a unifying theoretical and computational context to encompass direct numerical simulation (DNS), large-eddy simulation (LES), and Reynolds-averaged Navier-Stokes (RANS) simulation. The resultant methodology was termed temporal LES (TLES). To permit exploration of the parameter space, however, previous analyses of the TADM were restricted to Burgers flow, and it has remained to demonstrate the TADM and TLES methodology for three-dimensional flow. For several reasons, plane-channel flow presents an ideal test case for the TADM. Among these reasons, channel flow is anisotropic, yet it lends itself to highly efficient and accurate spectral numerical methods. Moreover, channel flow has been investigated extensively by DNS, and a highly accurate database of Moser et al. exists. In the present paper, we develop a fully anisotropic TADM model and demonstrate its utility in simulating incompressible plane-channel flow at nominal values of Re_tau = 180 and Re_tau = 590 by the TLES method. The TADM model is shown to perform nearly as well as the ADM at equivalent resolution, thereby establishing TLES as a viable alternative to LES. Moreover, as the current model is suboptimal in some respects, there is considerable room to improve TLES.
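
    The core of the ADM is the truncated van Cittert deconvolution series, u* = sum_{k=0..N} (I - G)^k applied to the filtered field; the TADM applies the same idea to a temporal filter. A minimal spatial sketch follows, where the three-point filter G and the test field are illustrative, not the filters used in the paper.

    ```python
    import numpy as np

    def adm_deconvolve(u_bar, filt, n_terms=5):
        """Approximate deconvolution: estimate the unfiltered field from
        its filtered version via u* = sum_{k=0}^{N} (I - G)^k u_bar,
        where `filt` is any linear filter G acting on an array."""
        u_star = np.zeros_like(u_bar)
        term = u_bar.copy()
        for _ in range(n_terms + 1):
            u_star += term
            term = term - filt(term)    # apply (I - G) once more
        return u_star

    # Simple periodic three-point filter as G (illustrative)
    G = lambda u: 0.25 * np.roll(u, 1) + 0.5 * u + 0.25 * np.roll(u, -1)
    x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
    u = np.sin(x) + 0.3 * np.sin(8 * x)
    err_filtered = np.abs(G(u) - u).max()
    err_deconv = np.abs(adm_deconvolve(G(u), G) - u).max()
    print(err_filtered, err_deconv)     # deconvolution recovers most of the loss
    ```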

  16. LARGE volume string compactifications at finite temperature

    SciTech Connect

    Anguelova, Lilia; Calò, Vincenzo; Cicoli, Michele

    2009-10-01

    We present a detailed study of the finite-temperature behaviour of the LARGE Volume type IIB flux compactifications. We show that certain moduli can thermalise at high temperatures. Despite that, their contribution to the finite-temperature effective potential is always negligible and the latter has a runaway behaviour. We compute the maximal temperature T_max, above which the internal space decompactifies, as well as the temperature T_*, that is reached after the decay of the heaviest moduli. The natural constraint T_* < T_max implies a lower bound on the allowed values of the internal volume V. We find that this restriction rules out a significant range of values corresponding to smaller volumes of the order V ~ 10^4 l_s^6, which lead to standard GUT theories. Instead, the bound favours values of the order V ~ 10^15 l_s^6, which lead to TeV scale SUSY desirable for solving the hierarchy problem. Moreover, our result favours low-energy inflationary scenarios with density perturbations generated by a field, which is not the inflaton. In such a scenario, one could achieve both inflation and TeV-scale SUSY, although gravity waves would not be observable. Finally, we pose a two-fold challenge for the solution of the cosmological moduli problem. First, we show that the heavy moduli decay before they can begin to dominate the energy density of the Universe. Hence they are not able to dilute any unwanted relics. And second, we argue that, in order to obtain thermal inflation in the closed string moduli sector, one needs to go beyond the present EFT description.

  17. SUSY's Ladder: reframing sequestering at Large Volume

    NASA Astrophysics Data System (ADS)

    Reece, Matthew; Xue, Wei

    2016-04-01

    Theories with approximate no-scale structure, such as the Large Volume Scenario, have a distinctive hierarchy of multiple mass scales in between TeV gaugino masses and the Planck scale, which we call SUSY's Ladder. This is a particular realization of Split Supersymmetry in which the same small parameter suppresses gaugino masses relative to scalar soft masses, scalar soft masses relative to the gravitino mass, and the UV cutoff or string scale relative to the Planck scale. This scenario has many phenomenologically interesting properties, and can avoid dangers including the gravitino problem, flavor problems, and the moduli-induced LSP problem that plague other supersymmetric theories. We study SUSY's Ladder using a superspace formalism that makes the mysterious cancellations in previous computations manifest. This opens the possibility of a consistent effective field theory understanding of the phenomenology of these scenarios, based on power-counting in the small ratio of string to Planck scales. We also show that four-dimensional theories with approximate no-scale structure enforced by a single volume modulus arise only from two special higher-dimensional theories: five-dimensional supergravity and ten-dimensional type IIB supergravity. This gives a phenomenological argument in favor of ten-dimensional ultraviolet physics, which is different from standard arguments based on the consistency of superstring theory.

  18. Large area pulsed solar simulator

    NASA Technical Reports Server (NTRS)

    Kruer, Mark A. (Inventor)

    1999-01-01

    An advanced solar simulator illuminates the surface of a very large solar array, such as one twenty feet by twenty feet in area, from a distance of about twenty-six feet with an essentially uniform intensity field of pulsed light of an intensity of one AM0, enabling the solar array to be efficiently tested with light that emulates the sun. Light modifiers sculpt a portion of the light generated by an electrically powered, high-power xenon lamp and, together with direct light from the lamp, provide uniform-intensity illumination throughout the solar array, compensating for the square-law and cosine-law reduction in direct light intensity, particularly at the corner locations of the array. At any location within the array the sum of the direct light and reflected light is essentially constant.
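
    The square-law and cosine-law shortfall that the light modifiers compensate is easy to quantify: for a point-like lamp, illuminance on the array plane falls off as cos(theta)/r^2 = d/r^3. A quick check with the stated geometry (20 ft array, roughly 26 ft throw):

    ```python
    import math

    d = 26.0                     # lamp-to-array distance, ft
    x = math.hypot(10.0, 10.0)   # center-to-corner offset of a 20 ft square, ft
    r = math.hypot(d, x)         # lamp-to-corner distance, ft
    print((d / r) ** 3)          # ~0.68: corners receive ~32% less direct light,
                                 # which the sculpted reflected light must make up
    ```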

  19. Large Eddy Simulations in Astrophysics

    NASA Astrophysics Data System (ADS)

    Schmidt, Wolfram

    2015-12-01

    In this review, the methodology of large eddy simulations (LES) is introduced and applications in astrophysics are discussed. As the theoretical framework, the scale decomposition of the dynamical equations for neutral fluids by means of spatial filtering is explained. For cosmological applications, the filtered equations in comoving coordinates are also presented. To obtain a closed set of equations that can be evolved in LES, several subgrid-scale models for the interactions between numerically resolved and unresolved scales are discussed, in particular the subgrid-scale turbulence energy equation model. It is then shown how model coefficients can be calculated, either by dynamic procedures or, a priori, from high-resolution data. For astrophysical applications, adaptive mesh refinement is often indispensable. It is shown that the subgrid-scale turbulence energy model allows for a particularly elegant and physically well-motivated way of preserving momentum and energy conservation in adaptive mesh refinement (AMR) simulations. Moreover, the notion of shear-improved models for inhomogeneous and non-stationary turbulence is introduced. Finally, applications of LES to turbulent combustion in thermonuclear supernovae, star formation and feedback in galaxies, and cosmological structure formation are reviewed.

  20. Radiation from Large Gas Volumes and Heat Exchange in Steam Boiler Furnaces

    SciTech Connect

    Makarov, A. N.

    2015-09-15

    Radiation from large cylindrical gas volumes is studied as a means of simulating the flare in steam boiler furnaces. Calculations of heat exchange in a furnace by the zonal method and by simulation of the flare with cylindrical gas volumes are described. The latter method is more accurate and yields more reliable information on heat transfer processes taking place in furnaces.

  1. Efficiency calibration and coincidence summing correction for a large volume (946 cm^3) LaBr3(Ce) detector: GEANT4 simulations and experimental measurements.

    PubMed

    Dhibar, M; Mankad, D; Mazumdar, I; Kumar, G Anil

    2016-12-01

    The paper describes studies of the efficiency calibration and coincidence summing correction for a 3.5″×6″ cylindrical LaBr3(Ce) detector. GEANT4 simulations were made with point sources, namely ^60Co, ^94Nb, ^24Na, ^46Sc and ^22Na. The simulated efficiencies, extracted using ^60Co, ^94Nb, ^24Na and ^46Sc, which emit coincident gamma rays with the same decay intensities, were corrected for coincidence summing by applying the method proposed by Vidmar et al. (2003). The method was applied, for the first time, to correcting the simulated efficiencies extracted using ^22Na, which emits coincident gamma rays with different decay intensities. The measured results obtained using ^60Co and ^22Na were found to be in good agreement with simulated results.
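
    The bookkeeping behind a full-energy-peak efficiency calibration reduces to a few lines; the sketch below shows only the standard formula with a lumped summing-correction factor (the paper obtains that correction with the method of Vidmar et al., which is not reproduced here), and all numbers in the example are hypothetical.

    ```python
    def fep_efficiency(net_peak_counts, activity_bq, live_time_s,
                       gamma_intensity, summing_correction=1.0):
        """Full-energy-peak efficiency eps = C * N / (A * t * I_gamma),
        where C > 1 compensates summing-out losses in the peak."""
        return summing_correction * net_peak_counts / (
            activity_bq * live_time_s * gamma_intensity)

    # Hypothetical ^60Co 1332 keV measurement with a 5% summing-out loss
    eps = fep_efficiency(1.2e5, activity_bq=3.0e4, live_time_s=600,
                         gamma_intensity=0.9998, summing_correction=1.05)
    print(eps)
    ```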

  2. Large-eddy simulation of propeller noise

    NASA Astrophysics Data System (ADS)

    Keller, Jacob; Mahesh, Krishnan

    2016-11-01

    We will discuss our ongoing work towards developing the capability to predict far field sound from the large-eddy simulation of propellers. A porous-surface Ffowcs Williams-Hawkings (FW-H) acoustic analogy, with a dynamic endcapping method (Nitzkorski and Mahesh, 2014), is developed for unstructured grids in a rotating frame of reference. The FW-H surface is generated automatically using Delaunay triangulation and is representative of the underlying volume mesh. The approach is validated for tonal trailing edge sound from a NACA 0012 airfoil. LES of the flow around a propeller at design advance ratio is compared to experiment and good agreement is obtained. Results for the emitted far field sound will be discussed. This work is supported by ONR.

  3. Finite volume hydromechanical simulation in porous media

    PubMed Central

    Nordbotten, Jan Martin

    2014-01-01

    Cell-centered finite volume methods are prevailing in numerical simulation of flow in porous media. However, due to the lack of cell-centered finite volume methods for mechanics, coupled flow and deformation is usually treated either by coupled finite-volume-finite element discretizations, or within a finite element setting. The former approach is unfavorable as it introduces two separate grid structures, while the latter approach loses the advantages of finite volume methods for the flow equation. Recently, we proposed a cell-centered finite volume method for elasticity. Herein, we explore the applicability of this novel method to provide a compatible finite volume discretization for coupled hydromechanic flows in porous media. We detail in particular the issue of coupling terms, and show how this is naturally handled. Furthermore, we observe how the cell-centered finite volume framework naturally allows for modeling fractured and fracturing porous media through internal boundary conditions. We support the discussion with a set of numerical examples: the convergence properties of the coupled scheme are first investigated; second, we illustrate the practical applicability of the method both for fractured and heterogeneous media. PMID:25574061

  4. Finite volume hydromechanical simulation in porous media.

    PubMed

    Nordbotten, Jan Martin

    2014-05-01

    Cell-centered finite volume methods are prevailing in numerical simulation of flow in porous media. However, due to the lack of cell-centered finite volume methods for mechanics, coupled flow and deformation is usually treated either by coupled finite-volume-finite element discretizations, or within a finite element setting. The former approach is unfavorable as it introduces two separate grid structures, while the latter approach loses the advantages of finite volume methods for the flow equation. Recently, we proposed a cell-centered finite volume method for elasticity. Herein, we explore the applicability of this novel method to provide a compatible finite volume discretization for coupled hydromechanic flows in porous media. We detail in particular the issue of coupling terms, and show how this is naturally handled. Furthermore, we observe how the cell-centered finite volume framework naturally allows for modeling fractured and fracturing porous media through internal boundary conditions. We support the discussion with a set of numerical examples: the convergence properties of the coupled scheme are first investigated; second, we illustrate the practical applicability of the method both for fractured and heterogeneous media.
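
    Since both records above concern the cell-centered finite volume treatment of the flow equation, here is a minimal, self-contained sketch of a 1D cell-centered finite volume step with a two-point flux approximation for a diffusion (pressure) equation; the elasticity half of the coupled scheme is not shown, and all parameters are illustrative.

    ```python
    import numpy as np

    def fv_diffusion_step(p, k_face, dx, dt):
        """One explicit cell-centered finite volume step for dp/dt = d/dx(k dp/dx).
        p: cell-centered pressures; k_face: conductivity on interior faces.
        Outer boundaries are no-flow."""
        flux = -k_face * (p[1:] - p[:-1]) / dx   # two-point flux on each interior face
        div = np.zeros_like(p)
        div[:-1] += flux                         # flux leaving a cell through its right face
        div[1:] -= flux                          # same flux entering the right-hand neighbor
        return p - dt / dx * div

    # Pressure pulse diffusing through 100 cells (dt chosen for stability)
    p = np.zeros(100); p[50] = 1.0
    for _ in range(500):
        p = fv_diffusion_step(p, k_face=np.ones(99), dx=1.0, dt=0.4)
    print(p.sum())   # total "mass" is conserved by construction
    ```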

  5. Scalar excursions in large-eddy simulations

    NASA Astrophysics Data System (ADS)

    Matheou, Georgios; Dimotakis, Paul E.

    2016-12-01

    The range of values of scalar fields in turbulent flows is bounded by their boundary values, for passive scalars, and by a combination of boundary values, reaction rates, phase changes, etc., for active scalars. The current investigation focuses on the local conservation of passive scalar concentration fields and the ability of the large-eddy simulation (LES) method to observe the boundedness of passive scalar concentrations. In practice, as a result of numerical artifacts, this fundamental constraint is often violated, with scalars exhibiting unphysical excursions. The present study characterizes passive-scalar excursions in LES of a shear flow and examines methods for diagnosis and assessment of the problem. The analysis of scalar-excursion statistics supports the main hypothesis of the current study: that unphysical scalar excursions in LES result from dispersive errors of the convection-term discretization when the subgrid-scale (SGS) model provides insufficient dissipation to produce a sufficiently smooth scalar field. In the LES runs three parameters are varied: the discretization of the convection terms, the SGS model, and grid resolution. Unphysical scalar excursions decrease as the order of accuracy of non-dissipative schemes is increased, but the improvement rate decreases with increasing order of accuracy. Two SGS models are examined, the stretched-vortex and a constant-coefficient Smagorinsky. Scalar excursions strongly depend on the SGS model. The excursions are significantly reduced when the characteristic SGS scale is set to double the grid spacing in runs with the stretched-vortex model. The maximum excursion and the volume fraction of excursions outside boundary values show opposite trends with respect to resolution. The maximum unphysical excursion increases as resolution increases, whereas the volume fraction decreases. The reason for the increase in the maximum excursion is statistical and traceable to the number of grid points (sample size
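
    The two excursion diagnostics quoted above can be reimplemented schematically in a few lines (random stand-in field; not the study's solver output):

    ```python
    import numpy as np

    def excursion_stats(scalar, lo=0.0, hi=1.0):
        """Maximum unphysical excursion beyond the physical bounds and the
        volume fraction of cells lying outside [lo, hi]."""
        below = np.maximum(lo - scalar, 0.0)
        above = np.maximum(scalar - hi, 0.0)
        max_excursion = max(below.max(), above.max())
        volume_fraction = np.mean((scalar < lo) | (scalar > hi))
        return max_excursion, volume_fraction

    rng = np.random.default_rng(2)
    c = rng.normal(0.5, 0.2, size=(64, 64, 64))   # stand-in passive scalar
    print(excursion_stats(c))
    ```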

  6. Large-volume sampling and preconcentration for trace explosives detection.

    SciTech Connect

    Linker, Kevin Lane

    2004-05-01

    A trace explosives detection system typically contains three subsystems: sample collection, preconcentration, and detection. Sample collection of trace explosives (vapor and particulate) through large volumes of airflow helps reduce sampling time while increasing the amount of dilute sample collected. Preconcentration of the collected sample before introduction into the detector improves the sensitivity of the detector because of the increase in sample concentration. By combining large-volume sample collection and preconcentration, an improvement in the detection of explosives is possible. Large-volume sampling and preconcentration is presented using a systems level approach. In addition, the engineering of large-volume sampling and preconcentration for the trace detection of explosives is explained.

  7. Large space systems technology, 1980, volume 1

    NASA Technical Reports Server (NTRS)

    Kopriver, F., III (Compiler)

    1981-01-01

    The technological and developmental efforts in support of large space systems technology are described. Three major areas of interest are emphasized: (1) technology pertinent to large antenna systems; (2) technology related to large platform systems; and (3) activities that support both antenna and platform systems.

  8. The persistence of the large volumes in black holes

    NASA Astrophysics Data System (ADS)

    Ong, Yen Chin

    2015-08-01

    Classically, black holes admit maximal interior volumes that grow asymptotically linearly in time. We show that such volumes remain large when Hawking evaporation is taken into account. Even if a charged black hole approaches the extremal limit during this evolution, its volume continues to grow; although an exactly extremal black hole does not have a "large interior". We clarify this point and discuss the implications of our results to the information loss and firewall paradoxes.

  9. Large Eddy Simulation of a Turbulent Jet

    NASA Technical Reports Server (NTRS)

    Webb, A. T.; Mansour, Nagi N.

    2001-01-01

    Here we present the results of a Large Eddy Simulation of a non-buoyant jet issuing from a circular orifice in a wall, and developing in neutral surroundings. The effects of the subgrid scales on the large eddies have been modeled with the dynamic large eddy simulation model applied to the fully 3D domain in spherical coordinates. The simulation captures the unsteady motions of the large-scales within the jet as well as the laminar motions in the entrainment region surrounding the jet. The computed time-averaged statistics (mean velocity, concentration, and turbulence parameters) compare well with laboratory data without invoking an empirical entrainment coefficient as employed by line integral models. The use of the large eddy simulation technique allows examination of unsteady and inhomogeneous features such as the evolution of eddies and the details of the entrainment process.

  10. Technologies for imaging neural activity in large volumes

    PubMed Central

    Ji, Na; Freeman, Jeremy; Smith, Spencer L.

    2017-01-01

    Neural circuitry has evolved to form distributed networks that act dynamically across large volumes. Because it collects data from individual planes, conventional microscopy cannot sample circuitry across large volumes at the temporal resolution relevant to neural circuit function and behaviors. Here, we review emerging technologies for rapid volume imaging of neural circuitry. We focus on two critical challenges: the inertia of optical systems, which limits image speed, and aberrations, which restrict the image volume. Optical sampling time must be long enough to ensure high-fidelity measurements, but optimized sampling strategies and point spread function engineering can facilitate rapid volume imaging of neural activity within this constraint. We also discuss new computational strategies for the processing and analysis of volume imaging data of increasing size and complexity. Together, optical and computational advances are providing a broader view of neural circuit dynamics, and help elucidate how brain regions work in concert to support behavior. PMID:27571194

  11. Large-Eddy Simulation and Multigrid Methods

    SciTech Connect

    Falgout, R. D.; Naegle, S.; Wittum, G.

    2001-06-18

    A method to simulate turbulent flows with Large-Eddy Simulation on unstructured grids is presented. Two kinds of dynamic models are used to model the unresolved scales of motion and are compared with each other on different grids. The behavior of the models is shown, and the use of adaptive grid refinement is additionally investigated. The parallelization aspect is also addressed.

  12. Sparticle spectra from Large-Volume String Compactifications

    SciTech Connect

    Conlon, Joseph P.

    2007-11-20

    Large-volume models are a promising approach to stabilising moduli and generating the weak hierarchy through TeV-supersymmetry. I describe the pattern of sparticle mass spectra that arises in these models.

  13. Coherent motility measurements of biological objects in a large volume

    NASA Astrophysics Data System (ADS)

    Ebersberger, J.; Weigelt, G.; Li, Yajun

    1986-05-01

    We have performed space-time intensity cross-correlation measurements of boiling image plane speckle interferograms to investigate the motility of a large number of small biological objects. Experiments were carried out with Artemia salina at various water temperatures. The advantage of this method is that many objects in a large volume can be measured simultaneously.

  14. Large Interface Simulation in Multiphase Flow Phenomena

    SciTech Connect

    Henriques, Aparicio; Coste, Pierre; Pigny, Sylvain; Magnaudet, Jacques

    2006-07-01

    This paper attempts to represent multiphase, multi-scale flow, filling the gap between Direct Numerical Simulation (DNS) and averaged approaches. We present a kind of Large Interface (LI) simulation formalism, obtained after a filtering process on the local instantaneous conservation equations of the two-fluid model, which distinguishes between small-scale and large-scale contributions. The LI surface tension force is also taken into account. Small-scale dynamics call for modelling, and large scales for simulation. Together with this formalism, a criterion to recognize LIs is developed. It is used in an interface recognition algorithm which is qualified on a sloshing case and a bubble oscillation under zero gravity. The method is applied to a bubble rising in a pool and collapsing at a free surface, and to a square-base basin experiment where splashing and sloshing at the free surface are the main break-up phenomena. (authors)

  15. Indian LSSC (Large Space Simulation Chamber) facility

    NASA Technical Reports Server (NTRS)

    Brar, A. S.; Prasadarao, V. S.; Gambhir, R. D.; Chandramouli, M.

    1988-01-01

    The Indian Space Agency has undertaken a major project to acquire in-house capability for thermal and vacuum testing of large satellites. This Large Space Simulation Chamber (LSSC) facility will be located in Bangalore and is to be operational in 1989. The facility is capable of providing 4 meter diameter solar simulation with provision to expand to 4.5 meter diameter at a later date. With such provisions as controlled variations of shroud temperatures and availability of infrared equipment as alternative sources of thermal radiation, this facility will be amongst the finest anywhere. The major design concept and major aspects of the LSSC facility are presented here.

  16. New Large Volume Press Beamlines at the Canadian Light Source

    NASA Astrophysics Data System (ADS)

    Mueller, H. J.; Hormes, J.; Lauterjung, J.; Secco, R.; Hallin, E.

    2013-12-01

    The Canadian Light Source, the German Research Centre for Geosciences and Western University recently agreed to establish two new large volume press beamlines at the Canadian Light Source. As the first step, a 250-ton DIA-LVP will be installed at the IDEAS beamline in 2014. Further development is associated with the construction of a superconducting wiggler beamline at the Brockhouse sector. A 1750-ton DIA-LVP will be installed there about 2 years later. Until the completion of this wiggler beamline, the big press will be used for offline high-pressure, high-temperature experiments under simulated Earth's mantle conditions. In addition to X-ray diffraction, all up-to-date high-pressure techniques, such as ultrasonic interferometry, deformation analysis by X-radiography, X-ray densitometry, falling-sphere viscosimetry, multi-staging, etc., will be available at both beamlines. After the required commissioning, the beamlines will be open to the worldwide user community from the geosciences, general materials science, physics, chemistry, biology, etc., based on the evaluation and ranking of submitted user proposals by an international review panel.

  17. Experimental Simulations of Large-Scale Collisions

    NASA Technical Reports Server (NTRS)

    Housen, Kevin R.

    2002-01-01

    This report summarizes research on the effects of target porosity on the mechanics of impact cratering. Impact experiments conducted on a centrifuge provide direct simulations of large-scale cratering on porous asteroids. The experiments show that large craters in porous materials form mostly by compaction, with essentially no deposition of material into the ejecta blanket that is a signature of cratering in less-porous materials. The ratio of ejecta mass to crater mass is shown to decrease with increasing crater size or target porosity. These results are consistent with the observation that large closely-packed craters on asteroid Mathilde appear to have formed without degradation to earlier craters.

  18. REXOR 2 rotorcraft simulation model. Volume 1: Engineering documentation

    NASA Technical Reports Server (NTRS)

    Reaser, J. S.; Kretsinger, P. H.

    1978-01-01

    REXOR II, a rotorcraft nonlinear simulation, is described in three volumes. The first volume covers the development of the rotorcraft mechanics and aerodynamics. The second develops and explains the computer code required to implement the equations of motion. The third volume is a user's manual and contains a description of the code input/output as well as operating instructions.

  19. Two-phase flows simulation in closed volume

    NASA Astrophysics Data System (ADS)

    Fedorov, A. V.; Lavruk, S. A.

    2016-10-01

    In this paper, the gas flow field is considered in model volumes that correspond to real experimental ones. In the simulations, the flow fields in the volumes were determined, the flow fields in the different volumes were matched, and the velocity values along the plate that models a fuel tank element were compared.

  20. Large-Volume High-Pressure Mineral Physics in Japan

    NASA Astrophysics Data System (ADS)

    Liebermann, Robert C.; Prewitt, Charles T.; Weidner, Donald J.

    American high-pressure research with large sample volumes developed rapidly in the 1950s during the race to produce synthetic diamonds. At that time the piston cylinder, girdle (or belt), and tetrahedral anvil devices were invented. However, this development essentially stopped in the late 1950s, and while the diamond anvil cell has been used extensively in the United States with spectacular success for high-pressure experiments in small sample volumes, most of the significant technological advances in large-volume devices have taken place in Japan. Over the past 25 years, these technical advances have enabled a fourfold increase in pressure, with many important investigations of the chemical and physical properties of materials synthesized at high temperatures and pressures that cannot be duplicated with any apparatus currently available in the United States.

  1. Large eddy simulation in the ocean

    NASA Astrophysics Data System (ADS)

    Scotti, Alberto

    2010-12-01

    Large eddy simulation (LES) is a relative newcomer to oceanography. In this review, both applications of traditional LES to oceanic flows and new oceanic LES still in an early stage of development are discussed. The survey covers LES applied to boundary layer flows, traditionally an area where LES has provided considerable insight into the physics of the flow, as well as more innovative applications, where new SGS closure schemes need to be developed. The merging of LES with large-scale models is also briefly reviewed.

  2. Large Eddy Simulation of Turbulent Combustion

    DTIC Science & Technology

    2006-03-15

    ...Application to an HCCI Engine. Proceedings of the 4th Joint Meeting of the U.S. Sections of the Combustion Institute, 2005. [34] K. Fieweger... LARGE EDDY SIMULATION OF TURBULENT COMBUSTION. Principal Investigator: Heinz Pitsch, Flow Physics and Computation, Department of Mechanical Engineering... burners and engines found in modern, industrially relevant equipment. In the course of this transition of LES from a scientifically interesting method...

  3. Large discharge-volume, silent discharge spark plug

    DOEpatents

    Kang, Michael

    1995-01-01

    A large discharge-volume spark plug for providing self-limiting microdischarges. The apparatus includes a generally spark-plug-shaped arrangement of a pair of electrodes, where either of the two coaxial electrodes is substantially shielded by a dielectric barrier from a direct discharge from the other electrode; the unshielded electrode and the dielectric barrier form an annular volume in which self-terminating microdischarges occur when alternating high voltage is applied to the center electrode. The large area over which the discharges occur, and the large number of possible discharges within the period of an engine cycle, make the present silent discharge plasma spark plug suitable for use as an ignition source for engines. In situations where a single discharge is effective in igniting the combustible gases, a conventional single-polarity, single-pulse spark plug voltage supply may be used.

  4. Population generation for large-scale simulation

    NASA Astrophysics Data System (ADS)

    Hannon, Andrew C.; King, Gary; Morrison, Clayton; Galstyan, Aram; Cohen, Paul

    2005-05-01

    Computer simulation is used to research phenomena ranging from the structure of the space-time continuum to population genetics and future combat [1-3]. Multi-agent simulations in particular are now commonplace in many fields [4, 5]. By modeling populations whose complex behavior emerges from individual interactions, these simulations help to answer questions about effects where closed form solutions are difficult to solve or impossible to derive [6]. To be useful, simulations must accurately model the relevant aspects of the underlying domain. In multi-agent simulation, this means that the modeling must include both the agents and their relationships. Typically, each agent can be modeled as a set of attributes drawn from various distributions (e.g., height, morale, intelligence and so forth). Though these can interact - for example, agent height is related to agent weight - they are usually independent. Modeling relations between agents, on the other hand, adds a new layer of complexity, and tools from graph theory and social network analysis are finding increasing application [7, 8]. Recognizing the role and proper use of these techniques, however, remains the subject of ongoing research. We recently encountered these complexities while building large scale social simulations [9-11]. One of these, the Hats Simulator, is designed to be a lightweight proxy for intelligence analysis problems. Hats models a "society in a box" consisting of many simple agents, called hats. Hats gets its name from the classic spaghetti western, in which the heroes and villains are known by the color of the hats they wear. The Hats society also has its heroes and villains, but the challenge is to identify which color hat they should be wearing based on how they behave. There are three types of hats: benign hats, known terrorists, and covert terrorists. Covert terrorists look just like benign hats but act like terrorists. Population structure can make covert hat identification significantly more
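
    A toy version of the population-generation step described above, with per-agent attributes drawn from distributions (including one deliberate height-weight correlation) and a sparse random relationship graph; attribute names and parameters are illustrative, not those of the Hats Simulator.

    ```python
    import numpy as np

    def generate_population(n, rng=None):
        """Generate n agents with correlated attributes and a sparse,
        symmetric "knows" relation (~4 contacts per agent)."""
        rng = rng or np.random.default_rng()
        height = rng.normal(170, 10, n)                      # cm
        weight = 0.9 * (height - 100) + rng.normal(0, 6, n)  # correlated attribute
        morale = rng.uniform(0, 1, n)
        knows = rng.random((n, n)) < 4.0 / n                 # random contacts
        knows = np.triu(knows, 1)
        knows = knows | knows.T                              # symmetric, no self-links
        return {"height": height, "weight": weight, "morale": morale, "knows": knows}

    pop = generate_population(1000)
    print(pop["knows"].sum(axis=1).mean())   # average degree, ~4
    ```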

  5. Statistical Ensemble of Large Eddy Simulations

    NASA Technical Reports Server (NTRS)

    Carati, Daniele; Rogers, Michael M.; Wray, Alan A.; Mansour, Nagi N. (Technical Monitor)

    2001-01-01

    A statistical ensemble of large eddy simulations (LES) is run simultaneously for the same flow. The information provided by the different large scale velocity fields is used to propose an ensemble averaged version of the dynamic model. This produces local model parameters that only depend on the statistical properties of the flow. An important property of the ensemble averaged dynamic procedure is that it does not require any spatial averaging and can thus be used in fully inhomogeneous flows. Also, the ensemble of LES's provides statistics of the large scale velocity that can be used for building new models for the subgrid-scale stress tensor. The ensemble averaged dynamic procedure has been implemented with various models for three flows: decaying isotropic turbulence, forced isotropic turbulence, and the time developing plane wake. It is found that the results are almost independent of the number of LES's in the statistical ensemble provided that the ensemble contains at least 16 realizations.

  6. Large volume leukapheresis: Efficacy and safety of processing the patient's total blood volume six times.

    PubMed

    Bojanic, Ines; Dubravcic, Klara; Batinic, Drago; Cepulic, Branka Golubic; Mazic, Sanja; Hren, Darko; Nemet, Damir; Labar, Boris

    2011-04-01

    Large-volume leukapheresis (LVL) differs from standard leukapheresis in its increased blood flow and altered anticoagulation regimen. An open issue is to what degree a further increase in processed blood volume is reasonable in terms of higher yields and safety. In 30 LVL procedures performed in patients with hematologic malignancies, 6 total blood volumes were processed. LVL resulted in a higher CD34+ cell yield without a change in graft quality. Although a marked platelet decrease can be expected, LVL is safe and can be recommended as the standard procedure for patients who mobilize low numbers of CD34+ cells and when high numbers of CD34+ cells are required.

  7. Concentration of Enteroviruses from Large Volumes of Water

    PubMed Central

    Sobsey, Mark D.; Wallis, Craig; Henderson, Marilyn; Melnick, Joseph L.

    1973-01-01

    An improved method for concentrating viruses from large volumes of clean waters is described. It was found that, by acidification, viruses in large volumes of water could be efficiently adsorbed to epoxy-fiber-glass and nitrocellulose filters in the absence of exogenously added salts. Based upon this finding, a modified version of our previously described virus concentration system was developed for virus monitoring of clean waters. In this procedure the water being tested is acidified by injection of N HCl prior to passage through a virus adsorber consisting of a fiber-glass cartridge depth filter and an epoxy-fiber-glass membrane filter in series. The adsorbed viruses are then eluted with a 1-liter volume of pH 11.5 eluent and reconcentrated by adsorption to and elution from a small epoxy-fiber-glass filter series. With this method, small quantities of poliovirus in 100-gallon (378.5-liter) volumes of tapwater were concentrated nearly 40,000-fold with an average virus recovery efficiency of 77%. PMID: 16349972
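
    A quick order-of-magnitude check of the quoted figures (100 gallons = 378.5 liters, ~40,000-fold concentration, 77% mean recovery), assuming the fold factor is defined as the ratio of input to final volume:

    ```python
    v_in_ml = 378.5e3                 # 100 gallons in mL
    fold, recovery = 40_000, 0.77
    v_out_ml = v_in_ml / fold         # final concentrate volume
    effective_fold = fold * recovery  # titer gain after recovery losses
    print(v_out_ml, effective_fold)   # ~9.5 mL and ~30,800x effective concentration
    ```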

  8. Numerical simulation of large fabric filter

    NASA Astrophysics Data System (ADS)

    Sedláček, Jan; Kovařík, Petr

    2012-04-01

    Fabric filters are used in a wide range of industrial technologies for cleaning incoming or exhaust gases. To achieve maximal efficiency of the discrete phase separation and a long lifetime of the filter hoses, it is necessary to ensure a uniform load on the filter surface and to avoid impacts of heavy particles with high velocities on the filter hoses. The paper deals with numerical simulation of the two-phase flow field in a large fabric filter. The filter is composed of six chambers with approx. 1600 filter hoses in total. The model was simplified to one half of the filter, and the filter hose walls were substituted by porous zones. The model settings were based on experimental data, especially on the filter pressure drop. Unsteady simulations with different turbulence models were performed. The flow field and particle trajectories were analyzed, and the results were compared with experimental observations.

  9. Large Eddy Simulation of turbulent shear flows

    NASA Technical Reports Server (NTRS)

    Moin, P.; Mansour, N. N.; Reynolds, W. C.; Ferziger, J. H.

    1979-01-01

    The conceptual foundation underlying Large Eddy Simulation (LES) is summarized, and the numerical methods developed for simulation of the time-developing turbulent mixing layer and turbulent plane Poiseuille flow are discussed. Computational results show that the average Reynolds stress profile nearly attains the equilibrium shape which balances the downstream mean pressure gradient in the regions away from the walls. In the vicinity of the walls, viscous stresses are shown to be significant; together with the Reynolds stresses, these stresses balance the mean pressure gradient. It is stressed that the subgrid scale contribution to the total Reynolds stress is significant only in the vicinity of the walls. The continued development of LES is urged.

  10. Large Scale Quantum Simulations of Nuclear Pasta

    NASA Astrophysics Data System (ADS)

    Fattoyev, Farrukh J.; Horowitz, Charles J.; Schuetrumpf, Bastian

    2016-03-01

    Complex and exotic nuclear geometries collectively referred to as "nuclear pasta" are expected to naturally exist in the crust of neutron stars and in supernovae matter. Using a set of self-consistent microscopic nuclear energy density functionals, we present the first results of large scale quantum simulations of pasta phases at baryon densities 0.03 < ρ < 0.10 fm^-3 and proton fractions 0.05 ... These simulations, in particular, allow us to also study the role and impact of the nuclear symmetry energy on these pasta configurations. This work is supported in part by DOE Grants DE-FG02-87ER40365 (Indiana University) and DE-SC0008808 (NUCLEI SciDAC Collaboration).

  11. Large eddy simulations in 2030 and beyond

    PubMed Central

    Piomelli, U

    2014-01-01

    Since its introduction, in the early 1970s, large eddy simulations (LES) have advanced considerably, and their application is transitioning from the academic environment to industry. Several landmark developments can be identified over the past 40 years, such as the wall-resolved simulations of wall-bounded flows, the development of advanced models for the unresolved scales that adapt to the local flow conditions and the hybridization of LES with the solution of the Reynolds-averaged Navier–Stokes equations. Thanks to these advancements, LES is now in widespread use in the academic community and is an option available in most commercial flow-solvers. This paper will try to predict what algorithmic and modelling advancements are needed to make it even more robust and inexpensive, and which areas show the most promise. PMID:25024415

  13. Large volume multiple-path nuclear pumped laser

    NASA Technical Reports Server (NTRS)

    Hohl, F.; Deyoung, R. J. (Inventor)

    1981-01-01

    Large volumes of gas are excited by using internal high reflectance mirrors that are arranged so that the optical path crosses back and forth through the excited gaseous medium. By adjusting the external dielectric mirrors of the laser, the number of paths through the laser cavity can be varied. Output powers were obtained that are substantially higher than the output powers of previous nuclear laser systems.

  14. Colloquium: Large scale simulations on GPU clusters

    NASA Astrophysics Data System (ADS)

    Bernaschi, Massimo; Bisson, Mauro; Fatica, Massimiliano

    2015-06-01

    Graphics processing units (GPU) are currently used as a cost-effective platform for computer simulations and big-data processing. Large scale applications require that multiple GPUs work together, but the efficiency obtained with clusters of GPUs is, at times, suboptimal because GPU features are not fully exploited. We describe how it is possible to achieve excellent efficiency for applications in statistical mechanics, particle dynamics and network analysis by using suitable memory access patterns and mechanisms such as CUDA streams, together with profiling tools. Similar concepts and techniques may also be applied to other problems, such as the solution of partial differential equations.

  15. Developing large eddy simulation for turbomachinery applications.

    PubMed

    Eastwood, Simon J; Tucker, Paul G; Xia, Hao; Klostermeier, Christian

    2009-07-28

    For jets, large eddy resolving simulations are compared for a range of numerical schemes with no subgrid scale (SGS) model and for a range of SGS models with the same scheme. There is little variation in results for the different SGS models, and it is shown that, for schemes which tend towards having dissipative elements, the SGS model can be abandoned, giving what can be termed numerical large eddy simulation (NLES). More complex geometries are investigated, including coaxial and chevron nozzle jets. A near-wall Reynolds-averaged Navier-Stokes (RANS) model is used to cover over streak-like structures that cannot be resolved. Compressor and turbine flows are also successfully computed using a similar NLES-RANS strategy. Upstream of the compressor leading edge, the RANS layer is helpful in preventing premature separation. Capturing the correct flow over the turbine is particularly challenging, but nonetheless the RANS layer is helpful. In relation to the SGS model, for the flows considered, evidence suggests issues such as inflow conditions, problem definition and transition are more influential.

  16. The Large Area Pulsed Solar Simulator (LAPSS)

    NASA Technical Reports Server (NTRS)

    Mueller, R. L.

    1994-01-01

    The Large Area Pulsed Solar Simulator (LAPSS) has been installed at JPL. It is primarily intended to be used to illuminate and measure the electrical performance of photovoltaic devices. The simulator, originally manufactured by Spectrolab, Sylmar, CA, occupies an area measuring about 3 m wide x 12 m long. The data acquisition and data processing subsystems have been modernized. Tests on the LAPSS performance resulted in better than plus or minus 2 percent uniformity of irradiance at the test plane and better than plus or minus 0.3 percent measurement repeatability after warm-up. Glass absorption filters reduce the ultraviolet light emitted from the xenon flash lamps. This results in a close match to three different standard airmass zero and airmass 1.5 spectral irradiances. The 2-ms light pulse prevents heating of the device under test, resulting in more reliable temperature measurements. Overall, excellent electrical performance measurements have been made of many different types and sizes of photovoltaic devices. Since the original printing of this publication, in 1993, the LAPSS has been operational and new capabilities have been added. This revision includes a new section relating to the installation of a method to measure the I-V curve of a solar cell or array exhibiting a large effective capacitance. Another new section has been added relating to new capabilities for plotting single and multiple I-V curves, and for archiving the I-V data and test parameters. Finally, a section has been added regarding the data acquisition electronics calibration.

  17. Parallel Rendering of Large Time-Varying Volume Data

    NASA Technical Reports Server (NTRS)

    Garbutt, Alexander E.

    2005-01-01

    Interactive visualization of large time-varying 3D volume datasets has been and still is a great challenge to the modern computational world. It stretches the limits of the memory capacity, the disk space, the network bandwidth and the CPU speed of a conventional computer. In this SURF project, we propose to develop a parallel volume rendering program on SGI's Prism, a cluster computer equipped with state-of-the-art graphics hardware. The proposed program combines both parallel computing and hardware rendering in order to achieve an interactive rendering rate. We use 3D texture mapping and a hardware shader to implement 3D volume rendering on each workstation. We use SGI's VisServer to enable remote rendering using Prism's graphics hardware. And last, we will integrate this new program with ParVox, a parallel distributed visualization system developed at JPL. At the end of the project, we will demonstrate remote interactive visualization using this new hardware volume renderer on JPL's Prism system using a time-varying dataset from selected JPL applications.

  18. EMBEDDING REALISTIC SURVEYS IN SIMULATIONS THROUGH VOLUME REMAPPING

    SciTech Connect

    Carlson, Jordan; White, Martin

    2010-10-15

    Connecting cosmological simulations to real-world observational programs is often complicated by a mismatch in geometry: while surveys often cover highly irregular cosmological volumes, simulations are customarily performed in a periodic cube. We describe a technique to remap this cube into elongated box-like shapes that are more useful for many applications. The remappings are one-to-one, volume-preserving, keep local structures intact, and involve minimal computational overhead.
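
    For illustration, a minimal brute-force sketch of such a remapping in Python/NumPy (not the authors' released code): the rows of an integer matrix with unit determinant define the elongated box, Gram-Schmidt on those rows gives its side lengths, and each point of the periodic unit cube is moved by the unique integer translation that lands it inside the box. The small translation search range is an assumption that holds for modest integer matrices.

      import numpy as np
      from itertools import product

      def remap_unit_cube(points, u, search=2):
          # u: 3x3 integer matrix with det(u) = +1; its rows define the box.
          u = np.asarray(u, dtype=float)
          assert round(np.linalg.det(u)) == 1, "rows must be a unimodular basis"
          # Gram-Schmidt on the rows; the box side lengths multiply to 1,
          # so the remapping is volume-preserving.
          e = np.empty((3, 3))
          e[0] = u[0]
          e[1] = u[1] - (u[1] @ e[0]) / (e[0] @ e[0]) * e[0]
          e[2] = (u[2] - (u[2] @ e[0]) / (e[0] @ e[0]) * e[0]
                       - (u[2] @ e[1]) / (e[1] @ e[1]) * e[1])
          L = np.linalg.norm(e, axis=1)
          ehat = e / L[:, None]
          shifts = np.array(list(product(range(-search, search + 1), repeat=3)))
          out = np.empty((len(points), 3))
          for i, p in enumerate(points):
              for n in shifts:
                  q = ehat @ (np.asarray(p, dtype=float) + n)  # box-frame coords
                  if np.all(q >= 0.0) and np.all(q < L):
                      out[i] = q  # the unique periodic copy inside the box
                      break
          return out, L

    For example, u = [[1, 1, 0], [0, 1, 0], [0, 0, 1]] turns the unit cube into a box of side lengths sqrt(2) x 1/sqrt(2) x 1.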

  19. Acute leg volume changes in weightlessness and its simulation

    NASA Technical Reports Server (NTRS)

    Thornton, William E.; Uri, John J.; Hedge, Vickie; Coleman, Eugen; Moore, Thomas P.

    1992-01-01

    Leg volume changes were studied in six subjects during 150 min of horizontal, 6 deg head-down tilt, and supine immersion. Results were compared to previously obtained space flight data. It is found that, at equivalent study times, the magnitude of the leg volume changes during the simulations was less than one half that seen during space flight. Relative and absolute losses from the upper leg were greater during space flight, while relative losses were greater from the lower leg during simulations.

  20. Large Eddy Simulation of Cirrus Clouds

    NASA Technical Reports Server (NTRS)

    Wu, Ting; Cotton, William R.

    1999-01-01

    The Regional Atmospheric Modeling System (RAMS) with mesoscale interactive nested-grids and a Large-Eddy Simulation (LES) version of RAMS, coupled to two-moment microphysics and a new two-stream radiative code, were used to investigate the dynamic, microphysical, and radiative aspects of the November 26, 1991 cirrus event. Wu (1998) describes the results of that research in full detail and is enclosed as Appendix 1. The mesoscale nested grid simulation successfully reproduced the large scale circulation as compared to the Mesoscale Analysis and Prediction System's (MAPS) analyses and other observations. Three cloud bands which match nicely with the three cloud lines identified in an observational study (Mace et al., 1995) are predicted on Grid #2 of the nested grids, even though the mesoscale simulation predicts a larger west-east cloud width than what was observed. Large-eddy simulations (LES) were performed to study the dynamical, microphysical, and radiative processes in the 26 November 1991 FIRE II cirrus event. The LES model is based on the RAMS version 3b developed at Colorado State University. It includes a new radiation scheme developed by Harrington (1997) and a new subgrid scale model developed by Kosovic (1996). The LES model simulated a single cloud layer for Case 1 and a two-layer cloud structure for Case 2. The simulations demonstrated that latent heat release can play a significant role in the formation and development of cirrus clouds. For the thin cirrus in Case 1, the latent heat release was insufficient for the cirrus clouds to become positively buoyant. However, in some special cases such as Case 2, positively buoyant cells can be embedded within the cirrus layers. These cells were so active that the rising updraft induced its own pressure perturbations that affected the cloud evolution. Vertical profiles of the total radiative and latent heating rates indicated that for well developed, deep, and active cirrus clouds, radiative cooling and latent

  1. Large-eddy simulations with wall models

    NASA Technical Reports Server (NTRS)

    Cabot, W.

    1995-01-01

    The near-wall viscous and buffer regions of wall-bounded flows generally require a large expenditure of computational resources to be resolved adequately, even in large-eddy simulation (LES). Often as much as 50% of the grid points in a computational domain are devoted to these regions. The dense grids that this implies also generally require small time steps for numerical stability and/or accuracy. It is commonly assumed that the inner wall layers are near equilibrium, so that the standard logarithmic law can be applied as the boundary condition for the wall stress well away from the wall, for example, in the logarithmic region, obviating the need to expend large amounts of grid points and computational time in this region. This approach is commonly employed in LES of planetary boundary layers, and it has also been used for some simple engineering flows. In order to calculate accurately a wall-bounded flow with coarse wall resolution, one requires the wall stress as a boundary condition. The goal of this work is to determine the extent to which equilibrium and boundary layer assumptions are valid in the near-wall regions, to develop models for the inner layer based on such assumptions, and to test these modeling ideas in some relatively simple flows with different pressure gradients, such as channel flow and flow over a backward-facing step. Ultimately, models that perform adequately in these situations will be applied to more complex flow configurations, such as an airfoil.
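
    In its simplest form, the equilibrium assumption above amounts to inverting the logarithmic law for the friction velocity and feeding the resulting stress back to the LES as a boundary condition. A minimal sketch in Python, assuming standard constants (kappa = 0.41, B = 5.2) and a first off-wall point inside the log layer (y+ greater than about 30):

      import numpy as np

      def wall_stress_from_loglaw(u, y, nu, kappa=0.41, B=5.2, iters=25):
          # Solve u/u_tau = (1/kappa)*ln(y*u_tau/nu) + B for the friction
          # velocity u_tau by fixed-point iteration, starting from a crude
          # laminar guess; returns the kinematic wall stress tau_w/rho.
          u_tau = np.sqrt(nu * u / y)
          for _ in range(iters):
              u_tau = u / (np.log(y * u_tau / nu) / kappa + B)
          return u_tau**2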

  2. Improvement of surgical simulation using dynamic volume rendering.

    PubMed

    Radetzky, A; Schröcker, F; Auer, L M

    2000-01-01

    In recent years, considerable effort has been devoted to developing surgical simulators for computer-assisted training. However, most of these simulators use simple models of human anatomy, which are created manually using modeling software. Medical experts, though, need to perform the training directly on the patient's complex anatomy, which can be obtained, for example, from digital imaging datasets (CT, MR). A common technique to display these datasets is volume rendering. However, even with high-end hardware, only static models can be handled interactively. In surgical simulators a dynamic component is also needed, because tissues must be deformed and partially removed. With the combination of spring-mass models, which are improved by neuro-fuzzy systems, and the recently developed OpenGL Volumizer, surgical simulation using real-time deformable (or dynamic) volume rendering became possible. As an application example, the simulator ROBOSIM for minimally invasive neurosurgery is presented.

  3. Autonomic Closure for Large Eddy Simulation

    NASA Astrophysics Data System (ADS)

    King, Ryan; Hamlington, Peter; Dahm, Werner J. A.

    2015-11-01

    A new autonomic subgrid-scale closure has been developed for large eddy simulation (LES). The approach poses a supervised learning problem that captures nonlinear, nonlocal, and nonequilibrium turbulence effects without specifying a predefined turbulence model. By solving a regularized optimization problem on test filter scale quantities, the autonomic approach identifies a nonparametric function that represents the best local relation between subgrid stresses and resolved state variables. The optimized function is then applied at the grid scale to determine unknown LES subgrid stresses by invoking scale similarity in the inertial range. A priori tests of the autonomic approach on homogeneous isotropic turbulence show that the new approach is amenable to powerful optimization and machine learning methods and is successful for a wide range of filter scales in the inertial range. In these a priori tests, the autonomic closure substantially improves upon the dynamic Smagorinsky model in capturing the instantaneous, statistical, and energy transfer properties of the subgrid stress field.
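
    The flavor of such a closure can be caricatured as a regularized least-squares fit; the sketch below in Python/NumPy is a schematic stand-in for the approach, not the authors' formulation. It learns a linear map from resolved-state features (whose construction is left abstract here) to subgrid stresses at the test-filter scale; by scale similarity the same map would then be evaluated on grid-scale features.

      import numpy as np

      def fit_local_closure(features, stresses, lam=1e-3):
          # Ridge-regularized least squares on test-filter-scale samples:
          # minimize ||features @ w - stresses||^2 + lam * ||w||^2.
          # features: (n_samples, n_features); stresses: (n_samples, 6).
          A, b = features, stresses
          w = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)
          return w

      # Applied at the grid scale (schematic): tau_grid = features_grid @ w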

  4. Large eddy simulation of cavitating flows

    NASA Astrophysics Data System (ADS)

    Gnanaskandan, Aswin; Mahesh, Krishnan

    2014-11-01

    Large eddy simulation on unstructured grids is used to study hydrodynamic cavitation. The multiphase medium is represented using a homogeneous equilibrium model that assumes thermal equilibrium between the liquid and the vapor phase. Surface tension effects are ignored and the governing equations are the compressible Navier-Stokes equations for the liquid/vapor mixture, along with a transport equation for the vapor mass fraction. A characteristic-based filtering scheme is developed to handle shocks and material discontinuities in non-ideal gases and mixtures. A TVD filter is applied as a corrector step in a predictor-corrector approach, with the predictor scheme being non-dissipative and symmetric. The method is validated for canonical one-dimensional flows and leading edge cavitation over a hydrofoil, and applied to study sheet-to-cloud cavitation over a wedge. This work is supported by the Office of Naval Research.

  5. Large eddy simulation applications in gas turbines.

    PubMed

    Menzies, Kevin

    2009-07-28

    The gas turbine presents significant challenges to any computational fluid dynamics technique. The combination of a wide range of flow phenomena with complex geometry is difficult to model in the context of Reynolds-averaged Navier-Stokes (RANS) solvers. We review the potential for large eddy simulation (LES) in modelling the flow in the different components of the gas turbine during a practical engineering design cycle. We show that while LES has demonstrated considerable promise for reliable prediction of many flows in the engine that are difficult for RANS, it is not a panacea, and considerable application challenges remain. However, for many flows, especially those dominated by shear layer mixing such as in combustion chambers and exhausts, LES has demonstrated a clear superiority over RANS for moderately complex geometries, although at significantly higher cost, which will remain an issue in making the calculations relevant within the design cycle.

  6. The Simulation of a Jumbo Jet Transport Aircraft. Volume 2: Modeling Data

    NASA Technical Reports Server (NTRS)

    Hanke, C. R.; Nordwall, D. R.

    1970-01-01

    The manned simulation of a large transport aircraft is described. Aircraft and systems data necessary to implement the mathematical model described in Volume I, together with a discussion of how these data are used in the model, are presented. The results of the real-time computations in the NASA Ames Research Center Flight Simulator for Advanced Aircraft are shown and compared to flight test data and to the results obtained in a training simulator known to be satisfactory.

  7. Large eddy simulation of trailing edge noise

    NASA Astrophysics Data System (ADS)

    Keller, Jacob; Nitzkorski, Zane; Mahesh, Krishnan

    2015-11-01

    Noise generation is an important engineering constraint on many marine vehicles. A significant portion of the noise comes from propellers and rotors, specifically due to flow interactions at the trailing edge. Large eddy simulation is used to investigate the noise produced by a turbulent 45 degree beveled trailing edge and a NACA 0012 airfoil. A porous surface Ffowcs-Williams and Hawkings acoustic analogy is combined with a dynamic endcapping method to compute the sound. This methodology allows the impact of incident flow noise versus the total noise to be assessed. LES results for the 45 degree beveled trailing edge are compared to experiment at M = 0.1 and Re_c = 1.9×10^6. The effect of boundary layer thickness on sound production is investigated by computing with both the experimental boundary layer thickness and a thinner boundary layer. Direct numerical simulation results for the NACA 0012 are compared to available data at M = 0.4 and Re_c = 5.0×10^4 for both the hydrodynamic field and the acoustic field. Sound intensities and directivities are investigated and compared. Finally, some of the physical mechanisms of far-field noise generation, common to the two configurations, are discussed. Supported by the Office of Naval Research.

  8. Geometric Measures of Large Biomolecules: Surface, Volume and Pockets

    PubMed Central

    Mach, Paul; Koehl, Patrice

    2011-01-01

    Geometry plays a major role in our attempt to understand the activity of large molecules. For example, surface area and volume are used to quantify the interactions between these molecules and the water surrounding them in implicit solvent models. In addition, the detection of pockets serves as a starting point for predictive studies of biomolecule-ligand interactions. The alpha shape theory provides an exact and robust method for computing these geometric measures. Several implementations of this theory are currently available. We show however that these implementations fail on very large macromolecular systems. We show that these difficulties are not theoretical; rather, they are related to the architecture of current computers that rely on the use of cache memory to speed up calculation. By rewriting the algorithms that implement the different steps of the alpha shape theory such that we enforce locality, we show that we can remediate these cache problems; the corresponding code, UnionBall, has an apparent O(n) behavior over a large range of values of n (up to tens of millions), where n is the number of atoms. As an example, it takes 136 seconds with UnionBall to compute the contribution of each atom to the surface area and volume of a viral capsid with more than five million atoms on a commodity PC. UnionBall includes functions for computing the surface area and volume of the intersection of two, three and four spheres that are fully detailed in an appendix. UnionBall is available as OpenSource software. PMID:21823134
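
    Of the primitive operations mentioned at the end of the abstract, the two-sphere case has a compact closed form. A sketch in Python of the standard lens-volume formula (for illustration only; not UnionBall's implementation):

      import math

      def two_sphere_intersection_volume(r1, r2, d):
          # Volume of the lens where spheres of radii r1 and r2 overlap,
          # with centers a distance d apart.
          if d >= r1 + r2:
              return 0.0  # disjoint spheres
          if d <= abs(r1 - r2):
              return 4.0 / 3.0 * math.pi * min(r1, r2)**3  # containment
          return (math.pi * (r1 + r2 - d)**2 *
                  (d*d + 2.0*d*(r1 + r2) - 3.0*(r1 - r2)**2)) / (12.0*d)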

  9. Electrolyte and plasma enzyme analyses during large-volume liposuction.

    PubMed

    Lipschitz, Avron H; Kenkel, Jeffrey M; Luby, Maureen; Sorokin, Evan; Rohrich, Rod J; Brown, Spencer A

    2004-09-01

    Substantial fluid shifts occur during liposuction as wetting solution is infiltrated subcutaneously and fat is evacuated, causing potential electrolyte imbalances. In the porcine model for large-volume liposuction, plasma aspartate aminotransferase and alanine transaminase levels were elevated following liposuction. These results raised concerns for possible mechanical injury and/or lidocaine-induced hepatocellular toxicity in a clinical setting. The first objective of this human model study was to explore the effect of the liposuction procedure on electrolyte balance. The second objective was to determine whether elevated plasma aminotransferase levels were observed subsequent to large-volume liposuction. Five female volunteers underwent three-stage, ultrasound-assisted liposuction. Blood samples were collected perioperatively. Plasma levels of sodium, potassium, venous carbon dioxide, blood urea nitrogen, chloride, and creatinine were determined. Liver function analyte levels were measured, including albumin, total protein, aspartate aminotransferase, alanine transaminase, alkaline phosphatase, gamma-glutamyl transpeptidase, and total bilirubin. To further define intracellular enzyme release, creatine kinase levels were measured. Mild hyponatremia was evident postoperatively (134 to 136 mmol/liter) in four patients. Hypokalemia was evident intraoperatively in all subjects (mean ± SEM; 3.3 ± 0.16 mmol/liter; range, 3.0 to 3.4 mmol/liter). Hypoalbuminemia and hypoproteinemia were observed throughout the study (baseline: 2.9 ± 0.2 g/dl; range, 2.6 to 3.5 g/dl), decreasing 10 to 40 percent 24 hours postoperatively (2.0 ± 0.2 g/dl; range, 1.7 to 2.1 g/dl). Aspartate aminotransferase, alanine transaminase, and creatine kinase levels were significantly elevated after the procedure (190 ± 47.1 U/liter, 50 ± 7.7 U/liter, and 11,219 ± 2556.7 U/liter, respectively) (p < 0.01). Release of antidiuretic hormone and even mildly hypotonic intravenous fluid

  10. Large-scale Intelligent Transportation Systems simulation

    SciTech Connect

    Ewing, T.; Canfield, T.; Hannebutte, U.; Levine, D.; Tentner, A.

    1995-06-01

    A prototype computer system has been developed which defines a high-level architecture for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS) capable of running on massively parallel computers and distributed (networked) computer systems. The prototype includes the modelling of instrumented "smart" vehicles with in-vehicle navigation units capable of optimal route planning and Traffic Management Centers (TMC). The TMC has probe vehicle tracking capabilities (display position and attributes of instrumented vehicles), and can provide 2-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces to support human-factors studies. The prototype has been developed on a distributed system of networked UNIX computers but is designed to run on ANL's IBM SP-X parallel computer system for large scale problems. A novel feature of our design is that vehicles will be represented by autonomous computer processes, each with a behavior model which performs independent route selection and reacts to external traffic events much like real vehicles. With this approach, one will be able to take advantage of emerging massively parallel processor (MPP) systems.

  11. Sensitivity technologies for large scale simulation.

    SciTech Connect

    Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias; Wilcox, Lucas C.; Hill, Judith C.; Ghattas, Omar; Berggren, Martin Olof; Akcelik, Volkan; Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard

    2005-01-01

    Sensitivity analysis is critically important to numerous analysis algorithms, including large scale optimization, uncertainty quantification, reduced order modeling, and error estimation. Our research focused on developing tools, algorithms and standard interfaces to facilitate the implementation of sensitivity-type analysis into existing code; equally important, the work focused on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time domain decomposition algorithms and two-level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint-based a posteriori error estimation. A mathematical overview is provided of direct sensitivities and adjoint methods for both steady state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source inversion problem for steady state internal flows subject to convection diffusion. Real time performance is achieved using novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct initial conditions of a contamination event in an external flow. We demonstrate an adjoint based transient solution. In addition, we investigated time domain decomposition algorithms in an attempt to improve the efficiency of transient simulations. Because derivative calculations are at the root of sensitivity calculations, we have developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations. The hybrid automatic differentiation method was applied to a first

  12. Large eddy simulations of laminar separation bubble

    NASA Astrophysics Data System (ADS)

    Cadieux, Francois

    The flow over blades and airfoils at moderate angles of attack and Reynolds numbers ranging from ten thousand to a few hundred thousand undergoes separation due to the adverse pressure gradient generated by surface curvature. In many cases, the separated shear layer then transitions to turbulence and reattaches, closing off a recirculation region -- the laminar separation bubble. To avoid body-fitted mesh generation problems and numerical issues, an equivalent problem for flow over a flat plate is formulated by imposing boundary conditions that lead to a pressure distribution and Reynolds number that are similar to those on airfoils. Spalart & Strelets (2000) tested a number of Reynolds-averaged Navier-Stokes (RANS) turbulence models for a laminar separation bubble flow over a flat plate. Although results with the Spalart-Allmaras turbulence model were encouraging, none of the turbulence models tested reliably recovered time-averaged direct numerical simulation (DNS) results. The purpose of this work is to assess whether large eddy simulation (LES) can more accurately and reliably recover DNS results using drastically reduced resolution -- on the order of 1% of DNS resolution, which is commonly achievable for LES of turbulent channel flows. LES of a laminar separation bubble flow over a flat plate are performed using a compressible sixth-order finite-difference code and two incompressible pseudo-spectral Navier-Stokes solvers at resolutions corresponding to approximately 3% and 1% of the chosen DNS benchmark by Spalart & Strelets (2000). The finite-difference solver is found to be dissipative due to the use of a stability-enhancing filter. Its numerical dissipation is quantified and found to be comparable to the average eddy viscosity of the dynamic Smagorinsky model, making it difficult to separate the effects of filtering versus those of explicit subgrid-scale modeling. The negligible numerical dissipation of the pseudo-spectral solvers allows an unambiguous

  13. Volumetric leak detection in large underground storage tanks. Volume 1

    SciTech Connect

    Starr, J.W.; Wise, R.F.; Maresca, J.W.

    1991-08-01

    A set of experiments was conducted to determine whether volumetric leak detection systems presently used to test underground storage tanks (USTs) up to 38,000 L (10,000 gal) in capacity could meet EPA's regulatory standards for tank tightness and automatic tank gauging systems when used to test tanks up to 190,000 L (50,000 gal) in capacity. The experiments, conducted on two partially filled 190,000-L (50,000-gal) USTs at Griffiss Air Force Base in upstate New York during late August 1990, showed that a system's performance in large tanks depends primarily on the accuracy of the temperature compensation, which is inversely proportional to the volume of product in the tank. Errors in temperature compensation that were negligible in tests in small tanks were important in large tanks. The experiments further suggest that a multiple-test strategy is also required.
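
    The scaling behind that conclusion is simple: a uniform temperature drift produces an apparent volume change proportional to the volume of product, so a compensation error that is negligible in a 38,000-L tank is five times larger in a 190,000-L tank. A sketch in Python, assuming a typical volumetric expansion coefficient for gasoline:

      def apparent_volume_change(volume_l, delta_t_c, beta_per_c=9.5e-4):
          # dV = V * beta * dT; beta is an assumed typical value for
          # gasoline. The same uncompensated temperature drift grows
          # linearly with the volume of product in the tank.
          return volume_l * beta_per_c * delta_t_c

      # A 0.01 C drift: about 0.36 L apparent change in a 38,000-L tank,
      # but about 1.8 L in a 190,000-L tank.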

  14. Large Eddy Simulation of Powered Fontan Hemodynamics

    PubMed Central

    Delorme, Y.; Anupindi, K.; Kerlo, A.E.; Shetty, D.; Rodefeld, M.; Chen, J.; Frankel, S.

    2012-01-01

    Children born with univentricular heart disease typically must undergo three open heart surgeries within the first 2–3 years of life to eventually establish the Fontan circulation. In that case the single working ventricle pumps oxygenated blood to the body and blood returns to the lungs flowing passively through the Total Cavopulmonary Connection (TCPC) rather than being actively pumped by a subpulmonary ventricle. The TCPC is a direct surgical connection between the superior and inferior vena cava and the left and right pulmonary arteries. We have postulated that a mechanical pump inserted into this circulation providing a 3–5 mmHg pressure augmentation will reestablish bi-ventricular physiology serving as a bridge-to-recovery, bridge-to-transplant or destination therapy as a “biventricular Fontan” circulation. The Viscous Impeller Pump (VIP) has been proposed by our group as such an assist device. It is situated in the center of the 4-way TCPC intersection and spins pulling blood from the vena cavae and pushing it into the pulmonary arteries. We hypothesized that Large Eddy Simulation (LES) using high-order numerical methods is needed to capture unsteady powered and unpowered Fontan hemodynamics. Inclusion of a mechanical pump into the CFD further complicates matters due to the need to account for rotating machinery. In this study, we focus on predictions from an in-house high-order LES code (WenoHemo™) for unpowered and VIP-powered idealized TCPC hemodynamics with quantitative comparisons to Stereoscopic Particle Imaging Velocimetry (SPIV) measurements. Results are presented for both instantaneous flow structures and statistical data. Simulations show good qualitative and quantitative agreement with measured data. PMID:23177085

  16. Analysis of errors occurring in large eddy simulation.

    PubMed

    Geurts, Bernard J

    2009-07-28

    We analyse the effect of second- and fourth-order accurate central finite-volume discretizations on the outcome of large eddy simulations of homogeneous, isotropic, decaying turbulence at an initial Taylor-Reynolds number Re_λ = 100. We determine the implicit filter that is induced by the spatial discretization and show that a higher order discretization also induces a higher order filter, i.e. a low-pass filter that keeps a wider range of flow scales virtually unchanged. The effectiveness of the implicit filtering is correlated with the optimal refinement strategy as observed in an error-landscape analysis based on Smagorinsky's subfilter model. As a point of reference, a finite-volume method that is second-order accurate for both the convective and the viscous fluxes in the Navier-Stokes equations is used. We observe that changing to a fourth-order accurate convective discretization leads to a higher value of the Smagorinsky coefficient C_S required to achieve minimal total error at given resolution. Conversely, changing only the viscous flux discretization to fourth-order accuracy implies that optimal simulation results are obtained at lower values of C_S. Finally, a fully fourth-order discretization yields an optimal C_S that is slightly lower than the reference fully second-order method.
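
    For reference, the Smagorinsky model referred to above computes the subfilter (eddy) viscosity from the resolved strain rate, with C_S as the coefficient whose optimal value the error landscape tracks. A minimal sketch in Python/NumPy:

      import numpy as np

      def smagorinsky_eddy_viscosity(S, delta, Cs=0.17):
          # nu_t = (Cs * delta)^2 * |S|, where |S| = sqrt(2 S_ij S_ij) is
          # computed from resolved strain-rate tensors S of shape
          # (..., 3, 3) and delta is the filter width.
          S_mag = np.sqrt(2.0 * np.einsum('...ij,...ij->...', S, S))
          return (Cs * delta)**2 * S_mag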

  17. Flight Simulation Model Exchange. Volume 2; Appendices

    NASA Technical Reports Server (NTRS)

    Murri, Daniel G.; Jackson, E. Bruce

    2011-01-01

    The NASA Engineering and Safety Center Review Board sponsored an assessment of the draft Standard, Flight Dynamics Model Exchange Standard, BSR/ANSI-S-119-201x (S-119) that was conducted by simulation and guidance, navigation, and control engineers from several NASA Centers. The assessment team reviewed the conventions and formats spelled out in the draft Standard and the actual implementation of two example aerodynamic models (a subsonic F-16 and the HL-20 lifting body) encoded in the Extensible Markup Language grammar. During the implementation, the team kept records of lessons learned and provided feedback to the American Institute of Aeronautics and Astronautics Modeling and Simulation Technical Committee representative. This document contains the appendices to the main report.

  18. Flight Simulation Model Exchange. Volume 1

    NASA Technical Reports Server (NTRS)

    Murri, Daniel G.; Jackson, E. Bruce

    2011-01-01

    The NASA Engineering and Safety Center Review Board sponsored an assessment of the draft Standard, Flight Dynamics Model Exchange Standard, BSR/ANSI-S-119-201x (S-119) that was conducted by simulation and guidance, navigation, and control engineers from several NASA Centers. The assessment team reviewed the conventions and formats spelled out in the draft Standard and the actual implementation of two example aerodynamic models (a subsonic F-16 and the HL-20 lifting body) encoded in the Extensible Markup Language grammar. During the implementation, the team kept records of lessons learned and provided feedback to the American Institute of Aeronautics and Astronautics Modeling and Simulation Technical Committee representative. This document contains the results of the assessment.

  19. Ultra-rapid formation of large volumes of evolved magma

    NASA Astrophysics Data System (ADS)

    Michaut, C.; Jaupart, C.

    2006-10-01

    We discuss evidence for, and evaluate the consequences of, the growth of magma reservoirs by small increments of thin (≈1-2 m) sills. For such thin units, cooling proceeds faster than the nucleation and growth of crystals, which only allows a small amount of crystallization and leads to the formation of large quantities of glass. The heat balance equation for kinetically controlled crystallization is solved numerically for a range of sill thicknesses, magma injection rates and crustal emplacement depths. Successive injections lead to the accumulation of poorly crystallized chilled magma with the properties of a solid. Temperatures increase gradually with each injection until they become large enough to allow a late phase of crystal nucleation and growth. Crystallization and latent heat release work in a positive feedback loop, leading to catastrophic heating of the magma pile, typically by 200 °C in a few decades. Large volumes of evolved melt are made available in a short time. The time to the catastrophic heating event varies as Q^-2, where Q is the average magma injection rate, and takes values in the range of 10^5-10^6 yr for typical geological magma production rates. With this mechanism, storage of large quantities of magma beneath an active volcanic center may escape detection by seismic methods.

  20. Large Eddy Simulation of Transitional Boundary Layer

    NASA Astrophysics Data System (ADS)

    Sayadi, Taraneh; Moin, Parviz

    2009-11-01

    A sixth order compact finite difference code is employed to investigate compressible Large Eddy Simulation (LES) of subharmonic transition of a spatially developing zero pressure gradient boundary layer at Ma = 0.2. The computational domain extends from Re_x = 10^5, where laminar blowing and suction excites the most unstable fundamental and subharmonic modes, to the fully turbulent stage at Re_x = 10.1×10^5. Numerical sponges are used in the neighborhood of external boundaries to provide non-reflective conditions. Our interest lies in the performance of the dynamic subgrid scale (SGS) model [1] in the transition process. It is observed that in early stages of transition the eddy viscosity is much smaller than the physical viscosity. As a result the amplitudes of selected harmonics are in very good agreement with the experimental data [2]. The model's contribution gradually increases during the last stages of the transition process, and the dynamic eddy viscosity becomes fully active and dominant in the turbulent region. Consistent with this trend, the skin friction coefficient versus Re_x diverges from its laminar profile and converges to the turbulent profile after an overshoot. 1. Moin, P., et al., Phys. Fluids A, 3(11), 2746-2757, 1991. 2. Kachanov, Yu. S., et al., JFM, 138, 209-247, 1983.

  1. Turbulence topologies predicted using large eddy simulations

    NASA Astrophysics Data System (ADS)

    Wang, Bing-Chen; Bergstrom, Donald J.; Yin, Jing; Yee, Eugene

    In this paper, turbulence topologies related to the invariants of the resolved velocity gradient and strain rate tensors are studied based on large eddy simulation. The numerical results presented in the paper were obtained using two dynamic models, namely, the conventional dynamic model of Lilly and a recently developed dynamic nonlinear subgrid scale (SGS) model. In contrast to most of the previous research investigations which have mainly focused on isotropic turbulence, the present study examines the influence of near-wall anisotropy on the flow topologies. The SGS effect on the so-called SGS dissipation of the discriminant is examined and it is shown that the SGS stress contributes to the deviation of the flow topology of real turbulence from that of the ideal restricted Euler flow. The turbulence kinetic energy (TKE) transfer between the resolved and subgrid scales of motion is studied, and the forward and backward scatters of TKE are quantified in the invariant phase plane. Some interesting phenomenological results have also been obtained, including a wing-shaped contour pattern for the density of the resolved enstrophy generation and the near-wall dissipation shift of the peak location (mode) in the joint probability density function of the invariants of the resolved strain rate tensor. The newly observed turbulence phenomenologies are believed to be important and an effort has been made to explain them on an analytical basis.
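
    The invariants in question are inexpensive to evaluate from simulation data. A minimal sketch in Python/NumPy for the incompressible case (tr A = 0), where the (Q, R) plane and the discriminant classify the local flow topology:

      import numpy as np

      def velocity_gradient_invariants(A):
          # Second and third invariants of velocity gradient tensors A of
          # shape (..., 3, 3): Q = -tr(A A)/2 and R = -det(A). The
          # discriminant D = 27 R^2 / 4 + Q^3 is positive for focal
          # (vortical) topologies and negative for node/saddle ones.
          Q = -0.5 * np.einsum('...ij,...ji->...', A, A)
          R = -np.linalg.det(A)
          D = 6.75 * R**2 + Q**3
          return Q, R, D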

  2. GPU-Based 3D Cone-Beam CT Image Reconstruction for Large Data Volume

    PubMed Central

    Zhao, Xing; Hu, Jing-jing; Zhang, Peng

    2009-01-01

    Currently, 3D cone-beam CT image reconstruction speed is still a severe limitation for clinical application. The computational power of modern graphics processing units (GPUs) has been harnessed to provide impressive acceleration of 3D volume image reconstruction. For extra large data volume exceeding the physical graphic memory of GPU, a straightforward compromise is to divide data volume into blocks. Different from the conventional Octree partition method, a new partition scheme is proposed in this paper. This method divides both projection data and reconstructed image volume into subsets according to geometric symmetries in circular cone-beam projection layout, and a fast reconstruction for large data volume can be implemented by packing the subsets of projection data into the RGBA channels of GPU, performing the reconstruction chunk by chunk and combining the individual results in the end. The method is evaluated by reconstructing 3D images from computer-simulation data and real micro-CT data. Our results indicate that the GPU implementation can maintain original precision and speed up the reconstruction process by 110–120 times for circular cone-beam scan, as compared to traditional CPU implementation. PMID:19730744
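
    The chunking strategy itself is independent of the particular backprojector. A schematic sketch in Python/NumPy, where backproject is a hypothetical stand-in for a GPU cone-beam backprojection kernel; the paper's symmetry-based packing of projection subsets into RGBA channels is not reproduced here:

      import numpy as np

      def reconstruct_in_chunks(projections, vol_shape, backproject, n_chunks=8):
          # Split the output volume along z into sub-volumes small enough
          # for GPU memory, reconstruct each independently, and stitch
          # the results together at the end.
          nz = vol_shape[0]
          volume = np.empty(vol_shape, dtype=np.float32)
          edges = np.linspace(0, nz, n_chunks + 1, dtype=int)
          for z0, z1 in zip(edges[:-1], edges[1:]):
              volume[z0:z1] = backproject(projections, z0, z1)
          return volume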

  3. Improved HF Data Network Simulator. Volume 1

    DTIC Science & Technology

    1993-07-01

    flares - may cause HF blackouts, as can large terrestrial events such as volcanic eruptions and atomic explosions. The ionosphere exhibits a remarkable...of the earth interacts with the solar wind, causing rapid changes in the ionosphere that are made visible in part by the aurora borealis. The effects...backscatter - unpredictable changes in refraction from sporadic-E and F layers - excess path delays caused by non-great-circle modes propagating via

  4. Cardiovascular simulator improvement: pressure versus volume loop assessment.

    PubMed

    Fonseca, Jeison; Andrade, Aron; Nicolosi, Denys E C; Biscegli, José F; Leme, Juliana; Legendre, Daniel; Bock, Eduardo; Lucchi, Julio Cesar

    2011-05-01

    This article presents improvements to a physical cardiovascular simulator (PCS) system. The intraventricular pressure versus intraventricular volume (PxV) loop was obtained to evaluate the performance of a pulsatile chamber mimicking the human left ventricle. The PxV loop shows heart contractility and is normally used to evaluate heart performance. In many heart diseases, the stroke volume decreases because of low heart contractility. This pathological situation must be simulated by the PCS in order to evaluate the assistance provided by a ventricular assist device (VAD). The PCS system is automatically controlled by a computer and is an auxiliary tool for the development of VAD control strategies. The PCS system follows a Windkessel model in which lumped parameters are used for cardiovascular system analysis. Peripheral resistance, arterial compliance, and fluid inertance are simulated. The simulator has an actuator with a roller screw and brushless direct current motor, and the stroke volume is regulated by the actuator displacement. Internal pressure and volume measurements are monitored to obtain the PxV loop. Left chamber internal pressure is obtained directly by a pressure transducer; internal volume, however, is obtained indirectly by a linear variable differential transformer, which senses the diaphragm displacement. Correlations between the internal volume and diaphragm position are made. LabVIEW integrates these signals and shows the pressure versus internal volume loop. The results obtained from the PCS system show PxV loops at different ventricle elastances, making possible the simulation of pathological situations. A preliminary test with a pulsatile VAD attached to the PCS system was made.
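
    The lumped-parameter idea is easy to illustrate with the simplest two-element Windkessel (the PCS described above also includes inertance). A sketch in Python/NumPy with illustrative, arbitrary parameter values:

      import numpy as np

      def windkessel_pressure(Q, dt, R=1.0, C=1.3, P0=80.0):
          # Two-element Windkessel, C * dP/dt = Q(t) - P/R, integrated
          # with forward Euler. Q is the inflow waveform from the
          # pulsatile chamber; R and C are lumped peripheral resistance
          # and compliance (values here are placeholders).
          P = np.empty(len(Q))
          P[0] = P0
          for i in range(1, len(Q)):
              P[i] = P[i - 1] + dt * (Q[i - 1] - P[i - 1] / R) / C
          return P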

  5. SUSY’s Ladder: Reframing sequestering at Large Volume

    DOE PAGES

    Reece, Matthew; Xue, Wei

    2016-04-07

    Theories with approximate no-scale structure, such as the Large Volume Scenario, have a distinctive hierarchy of multiple mass scales in between TeV gaugino masses and the Planck scale, which we call SUSY's Ladder. This is a particular realization of Split Supersymmetry in which the same small parameter suppresses gaugino masses relative to scalar soft masses, scalar soft masses relative to the gravitino mass, and the UV cutoff or string scale relative to the Planck scale. This scenario has many phenomenologically interesting properties, and can avoid dangers including the gravitino problem, flavor problems, and the moduli-induced LSP problem that plague other supersymmetric theories. We study SUSY's Ladder using a superspace formalism that makes the mysterious cancelations in previous computations manifest. This opens the possibility of a consistent effective field theory understanding of the phenomenology of these scenarios, based on power-counting in the small ratio of string to Planck scales. We also show that four-dimensional theories with approximate no-scale structure enforced by a single volume modulus arise only from two special higher-dimensional theories: five-dimensional supergravity and ten-dimensional type IIB supergravity. As a result, this gives a phenomenological argument in favor of ten-dimensional ultraviolet physics which is different from standard arguments based on the consistency of superstring theory.

  7. Large volume water sprays for dispersing warm fogs

    NASA Astrophysics Data System (ADS)

    Keller, V. W.; Anderson, B. J.; Burns, R. A.; Lala, G. G.; Meyer, M. B.

    A new method for dispersing warm fogs, which impede visibility and alter schedules, is described. The method uses large volume recycled water sprays to create curtains of falling drops through which the fog is processed by the ambient wind and spray-induced air flow; the fog droplets are removed by coalescence/rainout. The efficiency of this fog droplet removal process depends on the size spectra of the spray drops, and the optimum spray drop size is calculated to be 0.3-1.0 mm in diameter. Water spray tests were conducted to determine the drop size spectra and temperature response of sprays produced by commercially available fire-fighting nozzles, and nozzle array tests were used to study air flow patterns and the thermal properties of the overall system. The initial test data reveal that the fog-dispersal procedure is effective.

  8. High density three-dimensional localization microscopy across large volumes

    PubMed Central

    Legant, Wesley R.; Shao, Lin; Grimm, Jonathan B.; Brown, Timothy A.; Milkie, Daniel E.; Avants, Brian B.; Lavis, Luke D.; Betzig, Eric

    2016-01-01

    Extending three-dimensional (3D) single molecule localization microscopy away from the coverslip and into thicker specimens will greatly broaden its biological utility. However, localizing molecules in 3D with high precision in such samples, while simultaneously achieving the extreme labeling densities required for high resolution of densely crowded structures, is challenging due to the limitations both of conventional imaging modalities and of conventional labeling techniques. Here, we combine lattice light sheet microscopy with newly developed, freely diffusing, cell permeable chemical probes with targeted affinity towards either DNA, intracellular membranes, or the plasma membrane. We use this combination to perform high localization precision, ultra-high labeling density, multicolor localization microscopy in samples up to 20 microns thick, including dividing cells and the neuromast organ of a zebrafish embryo. We also demonstrate super-resolution correlative imaging with protein specific photoactivatable fluorophores, providing a mutually compatible, single platform alternative to correlative light-electron microscopy over large volumes. PMID:26950745

  9. Large space telescope, phase A. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The Phase A study of the Large Space Telescope (LST) is reported. The study defines an LST concept based on the broad mission guidelines provided by the Office of Space Science (OSS), the scientific requirements developed by OSS with the scientific community, and an understanding of long range NASA planning current at the time the study was performed. The LST is an unmanned astronomical observatory facility, consisting of an optical telescope assembly (OTA), scientific instrument package (SIP), and a support systems module (SSM). The report consists of five volumes; it describes the constraints and trade-off analyses that were performed to arrive at a reference design for each system and for the overall LST configuration. A low-cost design approach was followed in the Phase A study. This resulted in the use of standard spacecraft hardware, the provision for maintenance at the black box level, growth potential in systems designs, and the sharing of shuttle maintenance flights with other payloads.

  10. Multisystem organ failure after large volume injection of castor oil.

    PubMed

    Smith, Silas W; Graber, Nathan M; Johnson, Rudolph C; Barr, John R; Hoffman, Robert S; Nelson, Lewis S

    2009-01-01

    We report a case of multisystem organ failure after large volume subcutaneous injection of castor oil for cosmetic enhancement. An unlicensed practitioner injected 500 mL of castor oil bilaterally to the hips and buttocks of a 28-year-old male to female transsexual. Immediate local pain and erythema were followed by abdominal and chest pain, emesis, headache, hematuria, jaundice, and tinnitus. She presented to an emergency department 12 hours postinjection. Persistently hemolyzed blood samples complicated preliminary laboratory analysis. She rapidly deteriorated despite treatment and developed fever, tachycardia, hemolysis, thrombocytopenia, hepatitis, respiratory distress, and anuric renal failure. An infectious diseases evaluation was negative. After intensive supportive care, including mechanical ventilation and hemodialysis, she was discharged 11 days later, requiring dialysis for an additional 1.5 months. Castor oil absorption was inferred from recovery of the Ricinus communis biomarker, ricinine, in the patient's urine (41 ng/mL). Clinicians should anticipate multiple complications after unapproved methods of cosmetic enhancement.

  11. Striped Bass, Morone saxatilis, egg incubation in large volume jars

    USGS Publications Warehouse

    Harper, C.J.; Wrege, B.M.; Jeffery, Isely J.

    2010-01-01

    The standard McDonald jar was compared with a large volume jar for striped bass, Morone saxatilis, egg incubation. The McDonald jar measured 16 cm in diameter by 45 cm in height and had a volume of 6 L. The experimental jar measured 0.4 m in diameter by 1.3 m in height and had a volume of 200 L. The hypothesis is that there is no difference in percent survival of fry hatched in experimental jars compared with McDonald jars. Striped bass brood fish were collected from the Coosa River and spawned using the dry spawn method of fertilization. Four McDonald jars were stocked with approximately 150 g of eggs each. Post-hatch survival was estimated at 48, 96, and 144 h. Stocking rates resulted in an average egg loading rate (±1 SE) in McDonald jars of 21.9 ± 0.03 eggs/mL and in experimental jars of 10.9 ± 0.57 eggs/mL. The major finding of this study was that average fry survival was 37.3 ± 4.49% for McDonald jars and 34.2 ± 3.80% for experimental jars. Although survival in experimental jars was slightly less than in McDonald jars, the effect of container volume on survival to 48 h (F = 6.57; df = 1,5; P > 0.05), 96 h (F = 0.02; df = 1,4; P > 0.89), and 144 h (F = 3.50; df = 1,4; P > 0.13) was not statistically significant. Mean survival between replicates ranged from 14.7 to 60.1% in McDonald jars and from 10.1 to 54.4% in experimental jars. No effect of initial stocking rate on survival (t = 0.06; df = 10; P > 0.95) was detected. Experimental jars allowed for incubation of a greater number of eggs in less than half the floor space of McDonald jars. As hatchery production is often limited by space or water supply, experimental jars offer an alternative to extending spawning activities, thereby reducing labor and operations cost. As survival was similar to McDonald jars, the experimental jar is suitable for striped bass egg incubation. © Copyright by the World Aquaculture Society 2010.

  12. Large eddy simulation of soot evolution in an aircraft combustor

    NASA Astrophysics Data System (ADS)

    Mueller, Michael E.; Pitsch, Heinz

    2013-11-01

    An integrated kinetics-based Large Eddy Simulation (LES) approach for soot evolution in turbulent reacting flows is applied to the simulation of a Pratt & Whitney aircraft gas turbine combustor, and the results are analyzed to provide insights into the complex interactions of the hydrodynamics, mixing, chemistry, and soot. The integrated approach includes detailed models for soot, combustion, and the unresolved interactions between soot, chemistry, and turbulence. The soot model is based on the Hybrid Method of Moments and detailed descriptions of soot aggregates and the various physical and chemical processes governing their evolution. The detailed kinetics of jet fuel oxidation and soot precursor formation is described with the Radiation Flamelet/Progress Variable model, which has been modified to account for the removal of soot precursors from the gas-phase. The unclosed filtered quantities in the soot and combustion models, such as source terms, are closed with a novel presumed subfilter PDF approach that accounts for the high subfilter spatial intermittency of soot. For the combustor simulation, the integrated approach is combined with a Lagrangian parcel method for the liquid spray and state-of-the-art unstructured LES technology for complex geometries. Two overall fuel-to-air ratios are simulated to evaluate the ability of the model to make not only absolute predictions but also quantitative predictions of trends. The Pratt & Whitney combustor is a Rich-Quench-Lean combustor in which combustion first occurs in a fuel-rich primary zone characterized by a large recirculation zone. Dilution air is then added downstream of the recirculation zone, and combustion continues in a fuel-lean secondary zone. The simulations show that large quantities of soot are formed in the fuel-rich recirculation zone, and, furthermore, the overall fuel-to-air ratio dictates both the dominant soot growth process and the location of maximum soot volume fraction. At the higher fuel

  13. Volume visualization of multiple alignment of large genomic DNA

    SciTech Connect

    Shah, Nameeta; Dillard, Scott E.; Weber, Gunther H.; Hamann, Bernd

    2005-07-25

    Genomes of hundreds of species have been sequenced to date, and many more are being sequenced. As more and more sequence data sets become available, and as the challenge of comparing these massive "billion basepair DNA sequences" becomes substantial, so does the need for more powerful tools supporting the exploration of these data sets. Similarity score data used to compare aligned DNA sequences is inherently one-dimensional. One-dimensional (1D) representations of these data sets do not effectively utilize screen real estate. As a result, tools using 1D representations are incapable of providing an informative overview of extremely large data sets. We present a technique to arrange 1D data in 3D space to allow us to apply state-of-the-art interactive volume visualization techniques for data exploration. We demonstrate our technique using multi-million-basepair-long aligned DNA sequence data and compare it with traditional 1D line plots. The results show that our technique is superior in providing an overview of entire data sets. Our technique, coupled with 1D line plots, results in effective multi-resolution visualization of very large aligned sequence data sets.
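
    The core data transformation described here, packing a long 1D score track into a 3D grid for a volume renderer, can be sketched briefly; the scan-line ordering below is the simplest possible layout, and the paper's arrangement may differ:

    ```python
    import numpy as np

    def scores_to_volume(scores, shape=(64, 64, 64)):
        """Pad a 1D similarity-score array and reshape it into a 3D volume."""
        vol = np.full(int(np.prod(shape)), np.nan, dtype=np.float32)
        n = min(len(scores), vol.size)
        vol[:n] = scores[:n]
        return vol.reshape(shape)

    scores = np.random.rand(200_000)   # stand-in for per-basepair similarity scores
    volume = scores_to_volume(scores)  # (64, 64, 64), ready for a volume renderer
    ```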

  14. Computer simulation of preflight blood volume reduction as a countermeasure to fluid shifts in space flight

    NASA Technical Reports Server (NTRS)

    Simanonok, K. E.; Srinivasan, R.; Charles, J. B.

    1992-01-01

    Fluid shifts in weightlessness may cause a central volume expansion, activating reflexes to reduce the blood volume. Computer simulation was used to test the hypothesis that preadaptation of the blood volume prior to exposure to weightlessness could counteract the central volume expansion due to fluid shifts and thereby attenuate the circulatory and renal responses resulting in large losses of fluid from body water compartments. The Guyton Model of Fluid, Electrolyte, and Circulatory Regulation was modified to simulate the six-degree head-down tilt that is frequently used as an experimental analog of weightlessness in bedrest studies. Simulation results show that preadaptation of the blood volume by a procedure resembling a blood donation immediately before head-down bedrest is beneficial in damping the physiologic responses to fluid shifts and reducing body fluid losses. After ten hours of head-down tilt, blood volume after preadaptation is higher than control for 20 to 30 days of bedrest. Preadaptation also produces potentially beneficial higher extracellular volume and total body water for 20 to 30 days of bedrest.
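
    As a toy illustration only: the reflex regulation that the Guyton model resolves in great physiological detail can be caricatured as a blood volume relaxing toward a set point. Everything below, including all parameters, is our invented stand-in, not the Guyton model:

    ```python
    import numpy as np

    def simulate_volume(v0, v_set=5.0, tau=24.0, t_end=240.0, dt=0.1):
        """Forward-Euler integration of dV/dt = -(V - V_set)/tau (litres, hours)."""
        t = np.arange(0.0, t_end, dt)
        v = np.empty_like(t)
        v[0] = v0
        for i in range(1, t.size):
            v[i] = v[i - 1] - dt * (v[i - 1] - v_set) / tau
        return t, v

    t, v_shifted = simulate_volume(v0=5.5)      # central volume expansion
    t, v_preadapted = simulate_volume(v0=5.0)   # volume reduced before exposure
    ```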

  15. Monte Carlo Simulations for Dosimetry in Prostate Radiotherapy with Different Intravesical Volumes and Planning Target Volume Margins

    PubMed Central

    Lv, Wei; Yu, Dong; He, Hengda; Liu, Qian

    2016-01-01

    In prostate radiotherapy, the influence of bladder volume variation on the dose absorbed by the target volume and organs at risk is significant and difficult to predict. In addition, the resolution of a typical medical image is insufficient for visualizing the bladder wall, which makes it more difficult to precisely evaluate the dose to the bladder wall. This simulation study aimed to quantitatively investigate the relationship between the dose received by organs at risk and the intravesical volume in prostate radiotherapy. The high-resolution Visible Chinese Human phantom and the finite element method were used to construct 10 pelvic models with specific intravesical volumes ranging from 100 ml to 700 ml to represent bladders of patients with different bladder filling capacities during radiotherapy. This series of models was utilized in six-field coplanar 3D conformal radiotherapy simulations with different planning target volume (PTV) margins. Each organ’s absorbed dose was calculated using the Monte Carlo method. The obtained bladder wall displacements during bladder filling were consistent with reported clinical measurements. The radiotherapy simulation revealed a linear relationship between the dose to non-targeted organs and the intravesical volume and indicated that a 10-mm PTV margin for a large bladder and a 5-mm PTV margin for a small bladder reduce the effective dose to the bladder wall to similar degrees. However, larger bladders were associated with evident protection of the intestines. Detailed dosimetry results can be used by radiation oncologists to create more accurate, individual water preload protocols according to the patient’s anatomy and bladder capacity. PMID:27441944
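
    The linear dose-volume relationship reported here can be summarized by an ordinary least-squares line. A minimal sketch; the volume-dose pairs below are placeholders, not the study's dosimetry values:

    ```python
    import numpy as np

    intravesical_ml = np.array([100, 200, 300, 400, 500, 600, 700])
    organ_dose = np.array([60.0, 52.0, 46.0, 41.0, 37.0, 34.0, 31.0])  # hypothetical

    slope, intercept = np.polyfit(intravesical_ml, organ_dose, deg=1)
    print(f"dose ≈ {slope:.3f} * volume + {intercept:.1f}")
    ```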

  16. Large-scale mass distribution in the Illustris simulation

    NASA Astrophysics Data System (ADS)

    Haider, M.; Steinhauser, D.; Vogelsberger, M.; Genel, S.; Springel, V.; Torrey, P.; Hernquist, L.

    2016-04-01

    Observations at low redshifts thus far fail to account for all of the baryons expected in the Universe according to cosmological constraints. A large fraction of the baryons presumably resides in a thin and warm-hot medium between the galaxies, where they are difficult to observe due to their low densities and high temperatures. Cosmological simulations of structure formation can be used to verify this picture and provide quantitative predictions for the distribution of mass in different large-scale structure components. Here we study the distribution of baryons and dark matter at different epochs using data from the Illustris simulation. We identify regions of different dark matter density with the primary constituents of large-scale structure, allowing us to measure mass and volume of haloes, filaments and voids. At redshift zero, we find that 49 per cent of the dark matter and 23 per cent of the baryons are within haloes more massive than the resolution limit of 2 × 108 M⊙. The filaments of the cosmic web host a further 45 per cent of the dark matter and 46 per cent of the baryons. The remaining 31 per cent of the baryons reside in voids. The majority of these baryons have been transported there through active galactic nuclei feedback. We note that the feedback model of Illustris is too strong for heavy haloes; it is therefore likely that we are overestimating this amount. Categorizing the baryons according to their density and temperature, we find that 17.8 per cent of them are in a condensed state, 21.6 per cent are present as cold, diffuse gas, and 53.9 per cent are found in the state of a warm-hot intergalactic medium.
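
    The density-temperature phase split used in the final sentence can be sketched in a few lines; the thresholds below are typical of the literature and are assumed for illustration rather than taken from the paper:

    ```python
    import numpy as np

    def classify_baryons(delta, temperature):
        """Boolean masks for condensed, cold diffuse, and WHIM gas.

        delta       -- gas overdensity rho/<rho>
        temperature -- gas temperature in K
        """
        condensed = (delta > 1e3) & (temperature < 1e5)
        diffuse = (delta <= 1e3) & (temperature < 1e5)
        whim = (temperature >= 1e5) & (temperature < 1e7)
        return condensed, diffuse, whim

    delta = 10 ** np.random.uniform(-2, 5, 100_000)   # mock gas cells
    temp = 10 ** np.random.uniform(3, 8, 100_000)
    for name, mask in zip(("condensed", "diffuse", "WHIM"),
                          classify_baryons(delta, temp)):
        print(name, mask.mean())
    ```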

  17. Parallel runway requirement analysis study. Volume 2: Simulation manual

    NASA Technical Reports Server (NTRS)

    Ebrahimi, Yaghoob S.; Chun, Ken S.

    1993-01-01

    This document is a user manual for operating the PLAND_BLUNDER (PLB) simulation program. This simulation is based on two aircraft approaching parallel runways independently and using parallel Instrument Landing System (ILS) equipment during Instrument Meteorological Conditions (IMC). If an aircraft should deviate from its assigned localizer course toward the opposite runway, this constitutes a blunder which could endanger the aircraft on the adjacent path. The worst case scenario would be if the blundering aircraft were unable to recover and continued toward the adjacent runway. PLAND_BLUNDER is a Monte Carlo-type simulation which models the events and aircraft positioning during such a blunder situation. The model simulates two aircraft performing parallel ILS approaches using Instrument Flight Rules (IFR) or visual procedures. PLB uses a simple movement model and control law in three dimensions (X, Y, Z). The parameters of the simulation inputs and outputs are defined in this document along with a sample of the statistical analysis. This document is the second volume of a two-volume set. Volume 1 is a description of the application of the PLB to the analysis of close parallel runway operations.
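
    In the same spirit as PLAND_BLUNDER, a Monte Carlo over blunder geometries can be sketched as follows; every distribution and threshold here is a hypothetical placeholder, not a value from the manual:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    runway_sep_ft = 3_400.0                             # lateral path spacing
    blunder_angle = np.deg2rad(rng.uniform(10, 30, n))  # deviation toward other path
    speed_fps = rng.normal(250.0, 20.0, n)              # ground speed
    reaction_s = rng.lognormal(1.5, 0.4, n)             # detection + response delay

    # Lateral closure before a recovery manoeuvre begins
    closure_ft = speed_fps * np.sin(blunder_angle) * reaction_s
    violations = closure_ft > (runway_sep_ft - 500.0)   # 500 ft miss-distance buffer
    print("P(violation) ≈", violations.mean())
    ```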

  18. Large-volume flux closure during plasmoid-mediated reconnection in coaxial helicity injection

    SciTech Connect

    Ebrahimi, F.; Raman, R.

    2016-03-23

    A large-volume flux closure during transient coaxial helicity injection (CHI) in NSTX-U is demonstrated through resistive magnetohydrodynamics (MHD) simulations. Several major improvements, including the improved positioning of the divertor poloidal field coils, are projected to improve the CHI start-up phase in NSTX-U. Simulations in the NSTX-U configuration with constant-in-time coil currents show that with strong flux shaping the injected open field lines (injector flux) rapidly reconnect and form a large volume of closed flux surfaces. This is achieved by driving parallel current in the injector flux coil and oppositely directed currents in the flux shaping coils to form a narrow injector flux footprint and push the injector flux into the vessel. As the helicity and plasma are injected into the device, the oppositely directed field lines in the injector region are forced to reconnect through a local Sweet-Parker type reconnection, or to spontaneously reconnect when the elongated current sheet becomes MHD unstable to form plasmoids. In these simulations, for the first time, it is found that the closed flux is over 70% of the initial injector flux used to initiate the discharge. Furthermore, these results could work well for the application of transient CHI in devices that employ superconducting coils to generate and sustain the plasma equilibrium.

  19. Large-volume flux closure during plasmoid-mediated reconnection in coaxial helicity injection

    DOE PAGES

    Ebrahimi, F.; Raman, R.

    2016-03-23

    A large-volume flux closure during transient coaxial helicity injection (CHI) in NSTX-U is demonstrated through resistive magnetohydrodynamics (MHD) simulations. Several major improvements, including the improved positioning of the divertor poloidal field coils, are projected to improve the CHI start-up phase in NSTX-U. Simulations in the NSTX-U configuration with constant-in-time coil currents show that with strong flux shaping the injected open field lines (injector flux) rapidly reconnect and form a large volume of closed flux surfaces. This is achieved by driving parallel current in the injector flux coil and oppositely directed currents in the flux shaping coils to form a narrow injector flux footprint and push the injector flux into the vessel. As the helicity and plasma are injected into the device, the oppositely directed field lines in the injector region are forced to reconnect through a local Sweet-Parker type reconnection, or to spontaneously reconnect when the elongated current sheet becomes MHD unstable to form plasmoids. In these simulations, for the first time, it is found that the closed flux is over 70% of the initial injector flux used to initiate the discharge. Furthermore, these results could work well for the application of transient CHI in devices that employ superconducting coils to generate and sustain the plasma equilibrium.

  20. Testing large volume water treatment and crude oil ...

    EPA Pesticide Factsheets

    EPA’s Homeland Security Research Program (HSRP) partnered with the Idaho National Laboratory (INL) to build the Water Security Test Bed (WSTB) at the INL test site outside of Idaho Falls, Idaho. The WSTB was built using an 8-inch (20 cm) diameter cement-mortar lined drinking water pipe that had previously been taken out of service. The pipe was exhumed from the INL grounds and oriented in the shape of a small drinking water distribution system. Effluent from the pipe is captured in a lagoon. The WSTB can support drinking water distribution system research on a variety of drinking water treatment topics including biofilms, water quality, sensors, and homeland security related contaminants. Because the WSTB is constructed of real drinking water distribution system pipes, research can be conducted under conditions similar to those in a real drinking water system. In 2014, the WSTB pipe was experimentally contaminated with Bacillus globigii spores, a non-pathogenic surrogate for the pathogenic B. anthracis, and then decontaminated using chlorine dioxide. In 2015, the WSTB was used to perform the following experiments: four mobile disinfection technologies were tested for their ability to disinfect large volumes of biologically contaminated “dirty” water from the WSTB, with B. globigii spores acting as the biological contaminant. The four technologies evaluated included: (1) Hayward Saline C™ 6.0 Chlorination System, (2) Advanced Oxidation Process (A

  1. Study of Hydrokinetic Turbine Arrays with Large Eddy Simulation

    NASA Astrophysics Data System (ADS)

    Sale, Danny; Aliseda, Alberto

    2014-11-01

    Marine renewable energy is advancing towards commercialization, including electrical power generation from ocean, river, and tidal currents. The focus of this work is to develop numerical simulations capable of predicting the power generation potential of hydrokinetic turbine arrays; this includes analysis of unsteady and averaged flow fields, turbulence statistics, and unsteady loadings on turbine rotors and support structures due to interaction with rotor wakes and ambient turbulence. The governing equations of large-eddy simulation (LES) are solved using a finite-volume method, and the presence of turbine blades is approximated by the actuator-line method, in which hydrodynamic forces are projected onto the flow field as a body force. The actuator-line approach captures helical wake formation, including vortex shedding from individual blades, and the effects of drag and vorticity generation from the rough seabed surface are accounted for by wall models. This LES framework was used to replicate a previous flume experiment consisting of three hydrokinetic turbines tested under various operating conditions and array layouts. Predictions of the power generation, velocity deficit and turbulence statistics in the wakes are compared between the LES and experimental datasets.
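
    The actuator-line force projection used here is commonly implemented by smearing the point forces with a Gaussian kernel. A minimal sketch of that standard formulation; the grid, actuator points, and kernel width eps are illustrative:

    ```python
    import numpy as np

    def project_forces(grid_xyz, pts, forces, eps):
        """Body-force field from actuator points via a Gaussian kernel.

        grid_xyz -- (N, 3) cell-centre coordinates
        pts      -- (M, 3) actuator point positions
        forces   -- (M, 3) force vectors at those points
        eps      -- kernel width (same units as the coordinates)
        """
        body = np.zeros_like(grid_xyz)
        norm = 1.0 / (eps**3 * np.pi**1.5)   # 3D Gaussian normalization
        for p, f in zip(pts, forces):
            r2 = np.sum((grid_xyz - p) ** 2, axis=1)
            body += np.outer(norm * np.exp(-r2 / eps**2), f)
        return body
    ```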

  2. Progress in the Variational Multiscale Formulation of Large Eddy Simulation

    NASA Astrophysics Data System (ADS)

    Wang, Zhen; Oberai, Assad

    2007-11-01

    In the variational multiscale (VMS) formulation of large eddy simulation, subgrid models are introduced in the variational (or weak) formulation of the Navier-Stokes equations, and a priori scale separation is accomplished using projection operators to create coarse and fine scales. This separation also leads to two sets of evolution equations: one for the coarse scales and another for the fine scales. The coarse scale equations are solved numerically, while the fine scale equations are solved analytically to obtain an expression for the fine scales in terms of the coarse scales and hence achieve closure. To date, the VMS formulation has led to accurate results in the simulation of canonical turbulent flow problems. It has been implemented using spectral, finite element and finite volume methods. In this talk, for the incompressible Navier-Stokes equations, we will present some new ideas for modeling the fine scales within the context of the VMS formulation and discuss their impact on the coarse scale solution. We will present a simple residual-based approximation for the fine scales that accurately models the cross-stress term and demonstrate that when this term is combined with an eddy viscosity model for the Reynolds stress, a new mixed model is obtained. The application of these ideas will be illustrated through some simple numerical examples.
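
    Schematically, the residual-based fine-scale approximation described here has the following shape; this is our simplified paraphrase, not the authors' exact formulation:

    ```latex
    % Fine scales estimated from the coarse-scale residual (schematic):
    u' \approx -\tau\,\mathcal{R}(\bar{u}),
    \qquad
    \mathcal{R}(\bar{u}) = \partial_t \bar{u} + \bar{u}\cdot\nabla\bar{u}
        + \nabla\bar{p} - \nu\nabla^{2}\bar{u}.
    ```

    Substituting this u' into the cross-stress term, and adding an eddy-viscosity term for the Reynolds stress, yields the mixed model referred to above.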

  3. Large Eddy Simulation of Supersonic Inlet Flows

    DTIC Science & Technology

    1998-04-01

    [Report documentation fragment; recoverable information: authors Parviz Moin and Sanjiva K. Lele, Stanford University, Mechanical Engineering, Flow Physics & Computation Division, Stanford, CA 94305-3030.]

  4. The UPSCALE project: a large simulation campaign

    NASA Astrophysics Data System (ADS)

    Mizielinski, Matthew; Roberts, Malcolm; Vidale, Pier Luigi; Schiemann, Reinhard; Demory, Marie-Estelle; Strachan, Jane

    2014-05-01

    The development of a traceable hierarchy of HadGEM3 global climate models, based upon the Met Office Unified Model, at resolutions from 135 km to 25 km, now allows the impact of resolution on the mean state, variability and extremes of climate to be studied in a robust fashion. In 2011 we successfully obtained a single-year grant of 144 million core hours of supercomputing time from the PRACE organization to run ensembles of 27-year atmosphere-only (HadGEM3-A GA3.0) climate simulations at 25 km resolution, as used in present global weather forecasting, on HERMIT at HLRS. Through 2012 the UPSCALE project (UK on PRACE: weather-resolving Simulations of Climate for globAL Environmental risk) ran over 650 years of simulation at resolutions of 25 km (N512), 60 km (N216) and 135 km (N96) to look at the value of high-resolution climate models in the study of both present climate and a potential future climate scenario based on RCP8.5. Over 400 TB of data was produced using HERMIT, with additional simulations run on HECToR (UK supercomputer) and MONSooN (Met Office NERC Supercomputing Node). The data generated was transferred to the JASMIN super-data cluster, hosted by STFC CEDA in the UK, where analysis facilities are allowing rapid scientific exploitation of the data set. Many groups across the UK and Europe are already taking advantage of these facilities and we welcome approaches from other interested scientists. This presentation will briefly cover the following points: purpose and requirements of the UPSCALE project and facilities used; technical implementation and hurdles (model porting and optimisation, automation, numerical failures, data transfer); ensemble specification; and current analysis projects and access to the data set. A full description of UPSCALE and the data set generated has been submitted to Geoscientific Model Development, with overview information available from http://proj.badc.rl.ac.uk/upscale .

  5. Large-Eddy Simulation of Wind-Plant Aerodynamics: Preprint

    SciTech Connect

    Churchfield, M. J.; Lee, S.; Moriarty, P. J.; Martinez, L. A.; Leonardi, S.; Vijayakumar, G.; Brasseur, J. G.

    2012-01-01

    In this work, we present results of a large-eddy simulation of the 48 multi-megawatt turbines composing the Lillgrund wind plant. Turbulent inflow wind is created by performing an atmospheric boundary layer precursor simulation and turbines are modeled using a rotating, variable-speed actuator line representation. The motivation for this work is that few others have done wind plant large-eddy simulations with a substantial number of turbines, and the methods for carrying out the simulations are varied. We wish to draw upon the strengths of the existing simulations and our growing atmospheric large-eddy simulation capability to create a sound methodology for performing this type of simulation. We have used the OpenFOAM CFD toolbox to create our solver.

  6. Floating substructure flexibility of large-volume 10MW offshore wind turbine platforms in dynamic calculations

    NASA Astrophysics Data System (ADS)

    Borg, Michael; Melchior Hansen, Anders; Bredmose, Henrik

    2016-09-01

    Designing floating substructures for the next generation of 10MW and larger wind turbines has introduced new challenges in capturing relevant physical effects in dynamic simulation tools. In achieving technically and economically optimal floating substructures, structural flexibility may increase to the extent that it becomes relevant to include in dynamic simulations, in addition to the standard rigid-body substructure modes, which are typically described through linear radiation-diffraction theory. This paper describes a method for the inclusion of substructural flexibility in aero-hydro-servo-elastic dynamic simulations for large-volume substructures, including wave-structure interactions, to form the basis of deriving sectional loads and stresses within the substructure. The method is applied to a case study to illustrate the implementation and relevance. It is found that the flexible mode is significantly excited in an extreme event, indicating an increase in predicted substructure internal loads.

  7. Feasibility of large volume tumor ablation using multiple-mode strategy with fast scanning method: A numerical study

    NASA Astrophysics Data System (ADS)

    Wu, Hao; Shen, Guofeng; Qiao, Shan; Chen, Yazhu

    2017-03-01

    Sonication with a fast scanning method can generate homogeneous lesions without complex planning. But when the target region is large, switching the focus too fast reduces heat accumulation, so the margin of the region may not be ablated. Furthermore, a high blood perfusion rate reduces the maximum volume that can be ablated. The fast scanning method may therefore not be applicable to large-volume tumors. To expand the therapy scope, this study combines the fast scanning method with a multiple-mode strategy. Through simulation and experiment, the feasibility of this new strategy is evaluated and analyzed.

  8. An Ultrascalable Solution to Large-scale Neural Tissue Simulation

    PubMed Central

    Kozloski, James; Wagner, John

    2011-01-01

    Neural tissue simulation extends requirements and constraints of previous neuronal and neural circuit simulation methods, creating a tissue coordinate system. We have developed a novel tissue volume decomposition, and a hybrid branched cable equation solver. The decomposition divides the simulation into regular tissue blocks and distributes them on a parallel multithreaded machine. The solver computes neurons that have been divided arbitrarily across blocks. We demonstrate thread, strong, and weak scaling of our approach on a machine with more than 4000 nodes and up to four threads per node. Scaling synapses to physiological numbers had little effect on performance, since our decomposition approach generates synapses that are almost always computed locally. The largest simulation included in our scaling results comprised 1 million neurons, 1 billion compartments, and 10 billion conductance-based synapses and gap junctions. We discuss the implications of our ultrascalable Neural Tissue Simulator, and with our results estimate requirements for a simulation at the scale of a human brain. PMID:21954383

  9. Large-Scale Hybrid Dynamic Simulation Employing Field Measurements

    SciTech Connect

    Huang, Zhenyu; Guttromson, Ross T.; Hauer, John F.

    2004-06-30

    Simulation and measurements are two primary ways for power engineers to gain understanding of system behaviors and thus accomplish tasks in system planning and operation. Many well-developed simulation tools are available in today's market. On the other hand, large amounts of measured data can be obtained from traditional SCADA systems and the currently fast-growing phasor networks. However, simulation and measurement are still two separate worlds. There is a need to combine the advantages of simulation and measurements. In view of this, this paper proposes the concept of hybrid dynamic simulation, which opens up traditional simulation by providing entry points for measurements. A method is presented to implement hybrid simulation with PSLF/PSDS. Test studies show the validity of the proposed hybrid simulation method. Applications of such hybrid simulation include system event playback, model validation, and software validation.

  10. Vermont Yankee simulator qualification: large-break LOCA

    SciTech Connect

    Loomis, J.N.; Fernandez, R.T.

    1987-01-01

    Yankee Atomic Electric Company (YAEC) has developed simulator benchmark capabilities for the Seabrook, Maine Yankee, and Vermont Yankee Nuclear Power Station (VYNPS) simulators. The goal is to establish that each simulator has a satisfactory real-time response for different scenarios that will enhance operator training. Vermont Yankee purchased a full-scope plant simulator for the VYNPS, a single-unit boiling water reactor with a Mark-I containment. The following seven benchmark cases were selected by YAEC and VYNPC to supplement the Simulator Acceptance Test Program: (1) control rod swap; (2) partial reactor scram; (3) recirculation pump trip; (4) main steam isolation valve (MSIV) closure without scram; (5) main steamline break; (6) small-break loss-of-coolant accident (LOCA); and (7) large-break LOCA. Five simulator benchmark sessions have been completed. Each session identified simulator capabilities and limitations that needed correction. This paper discusses results from the latest large-break LOCA case.

  11. Large-Eddy Simulation of Wind-Plant Aerodynamics

    SciTech Connect

    Churchfield, M. J.; Lee, S.; Moriarty, P. J.; Martinez, L. A.; Leonardi, S.; Vijayakumar, G.; Brasseur, J. G.

    2012-01-01

    In this work, we present results of a large-eddy simulation of the 48 multi-megawatt turbines composing the Lillgrund wind plant. Turbulent inflow wind is created by performing an atmospheric boundary layer precursor simulation, and turbines are modeled using a rotating, variable-speed actuator line representation. The motivation for this work is that few others have done large-eddy simulations of wind plants with a substantial number of turbines, and the methods for carrying out the simulations are varied. We wish to draw upon the strengths of the existing simulations and our growing atmospheric large-eddy simulation capability to create a sound methodology for performing this type of simulation. We used the OpenFOAM CFD toolbox to create our solver. The simulated time-averaged power production of the turbines in the plant agrees well with field observations, except for the sixth turbine and beyond in each wind-aligned column. The power produced by each of those turbines is overpredicted by 25-40%. A direct comparison between simulated and field data is difficult because we simulate one wind direction with a speed and turbulence intensity characteristic of Lillgrund, but the field observations were taken over a year of varying conditions. The simulation shows a significant 60-70% decrease in the performance of the turbines behind the front row in this plant, which has a spacing of 4.3 rotor diameters in this direction. The overall plant efficiency is well predicted. This work shows the importance of using local grid refinement to simultaneously capture the meter-scale details of the turbine wake and the kilometer-scale turbulent atmospheric structures. Although this work illustrates the power of large-eddy simulation in producing a time-accurate solution, it required about one million processor-hours, showing the significant cost of large-eddy simulation.

  12. Numerical simulations of volume holographic imaging system resolution characteristics

    NASA Astrophysics Data System (ADS)

    Sun, Yajun; Jiang, Zhuqing; Liu, Shaojie; Tao, Shiquan

    2009-05-01

    The Bragg selectivity of volume holographic gratings helps a volume holographic imaging (VHI) system to optically segment the object space. In this paper, properties of point-source diffraction imaging in terms of the point-spread function (PSF) are investigated, and the characteristics of depth and lateral resolution in a VHI system are numerically simulated. The results show that the observed diffracted field changes markedly with displacement in the z direction, and is nearly unchanged with displacement in the x and y directions. The dependence of the diffracted imaging field on the z-displacement provides a way to obtain 3-D images by VHI.

  13. Development of large volume double ring penning plasma discharge source for efficient light emissions.

    PubMed

    Prakash, Ram; Vyas, Gheesa Lal; Jain, Jalaj; Prajapati, Jitendra; Pal, Udit Narayan; Chowdhuri, Malay Bikas; Manchanda, Ranjana

    2012-12-01

    In this paper, the development of large volume double ring Penning plasma discharge source for efficient light emissions is reported. The developed Penning discharge source consists of two cylindrical end cathodes of stainless steel having radius 6 cm and a gap 5.5 cm between them, which are fitted in the top and bottom flanges of the vacuum chamber. Two stainless steel anode rings with thickness 0.4 cm and inner diameters 6.45 cm having separation 2 cm are kept at the discharge centre. Neodymium (Nd2Fe14B) permanent magnets are physically inserted behind the cathodes for producing nearly uniform magnetic field of ~0.1 T at the center. Experiments and simulations have been performed for single and double anode ring configurations using helium gas discharge, which infer that double ring configuration gives better light emissions in the large volume Penning plasma discharge arrangement. The optical emission spectroscopy measurements are used to complement the observations. The spectral line-ratio technique is utilized to determine the electron plasma density. The estimated electron plasma density in double ring plasma configuration is ~2 × 10^11 cm^-3, which is around one order of magnitude larger than that of single ring arrangement.

  14. Development of large volume double ring penning plasma discharge source for efficient light emissions

    SciTech Connect

    Prakash, Ram; Vyas, Gheesa Lal; Jain, Jalaj; Prajapati, Jitendra; Pal, Udit Narayan; Chowdhuri, Malay Bikas; Manchanda, Ranjana

    2012-12-15

    In this paper, the development of large volume double ring Penning plasma discharge source for efficient light emissions is reported. The developed Penning discharge source consists of two cylindrical end cathodes of stainless steel having radius 6 cm and a gap 5.5 cm between them, which are fitted in the top and bottom flanges of the vacuum chamber. Two stainless steel anode rings with thickness 0.4 cm and inner diameters 6.45 cm having separation 2 cm are kept at the discharge centre. Neodymium (Nd2Fe14B) permanent magnets are physically inserted behind the cathodes for producing nearly uniform magnetic field of ~0.1 T at the center. Experiments and simulations have been performed for single and double anode ring configurations using helium gas discharge, which infer that double ring configuration gives better light emissions in the large volume Penning plasma discharge arrangement. The optical emission spectroscopy measurements are used to complement the observations. The spectral line-ratio technique is utilized to determine the electron plasma density. The estimated electron plasma density in double ring plasma configuration is ~2 × 10^11 cm^-3, which is around one order of magnitude larger than that of single ring arrangement.

  15. Exact-Differential Large-Scale Traffic Simulation

    SciTech Connect

    Hanai, Masatoshi; Suzumura, Toyotaro; Theodoropoulos, Georgios; Perumalla, Kalyan S

    2015-01-01

    Analyzing large-scale traffic by simulation requires repeated execution with various patterns of scenarios or parameters. Such repeated execution introduces substantial redundancy, because the change from a prior scenario to a later scenario is very minor in most cases, for example, blocking only one road or changing the speed limit of several roads. In this paper, we propose a new redundancy reduction technique, called exact-differential simulation, which simulates only the changed scenarios in later executions while keeping exactly the same results as whole re-simulation. The paper consists of two main efforts: (i) a key idea and algorithm for exact-differential simulation, and (ii) a method to build large-scale traffic simulation on top of exact-differential simulation. In experiments with a Tokyo traffic simulation, exact-differential simulation shows a 7.26-fold improvement in elapsed time on average, and a 2.26-fold improvement even in the worst case, compared with whole simulation.

  16. Large Scale Simulations of the Kinetic Ising Model

    NASA Astrophysics Data System (ADS)

    Münkel, Christian

    We present Monte Carlo simulation results for the dynamical critical exponent z of the two- and three-dimensional kinetic Ising model. The z-values were calculated from the magnetization relaxation from an ordered state into the equilibrium state at Tc for very large systems with up to 169984^2 and 3072^3 spins. To our knowledge, these are the largest Ising systems simulated to date. We also report the successful simulation of very large lattices on a massively parallel MIMD computer with high speedups of approximately 1000 and an efficiency of about 0.93.
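
    A desktop-scale sketch of the measurement described above: Glauber dynamics on a small 2D lattice started fully ordered at Tc, recording the magnetization relaxation from which z is extracted (the study's lattices were vastly larger):

    ```python
    import numpy as np

    L = 128
    Tc = 2.0 / np.log(1.0 + np.sqrt(2.0))   # exact 2D Ising critical temperature
    rng = np.random.default_rng(1)
    spins = np.ones((L, L), dtype=np.int8)  # fully ordered start

    def sweep(s, T):
        """One Monte Carlo sweep of single-spin-flip Glauber dynamics."""
        for _ in range(s.size):
            i, j = rng.integers(L), rng.integers(L)
            nb = (s[(i + 1) % L, j] + s[(i - 1) % L, j]
                  + s[i, (j + 1) % L] + s[i, (j - 1) % L])
            dE = 2.0 * s[i, j] * nb          # energy change if flipped (J = 1)
            if rng.random() < 1.0 / (1.0 + np.exp(dE / T)):  # Glauber rate
                s[i, j] = -s[i, j]

    for t in range(1, 51):
        sweep(spins, Tc)
        if t % 10 == 0:
            print(t, abs(spins.mean()))      # M(t) decays as t^(-beta/(nu*z)) at Tc
    ```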

  17. Climate Simulations with an Isentropic Finite Volume Dynamical Core

    SciTech Connect

    Chen, Chih-Chieh; Rasch, Philip J.

    2012-04-15

    This paper discusses the impact of changing the vertical coordinate from a hybrid pressure to a hybrid-isentropic coordinate within the finite volume dynamical core of the Community Atmosphere Model (CAM). Results from a 20-year climate simulation using the new model coordinate configuration are compared to control simulations produced by the Eulerian spectral and FV dynamical cores of CAM, which both use a pressure-based (σ-p) coordinate. The same physical parameterization package is employed in all three dynamical cores. The isentropic modeling framework significantly alters the simulated climatology and has several desirable features. The revised model produces a better representation of heat transport processes in the atmosphere, leading to much improved atmospheric temperatures. We show that the isentropic model is very effective in reducing the long-standing cold temperature bias in the upper troposphere and lower stratosphere, a deficiency shared among most climate models. The warmer upper troposphere and stratosphere seen in the isentropic model reduces the global coverage of high clouds, which is in better agreement with observations. The isentropic model also shows improvements in the simulated wintertime mean sea-level pressure field in the northern hemisphere.

  18. Large volume liquid helium relief device verification apparatus for the alpha magnetic spectrometer

    NASA Astrophysics Data System (ADS)

    Klimas, Richard John; McIntyre, P.; Colvin, John; Zeigler, John; Van Sciver, Steven; Ting, Samuel

    2012-06-01

    Here we present details of an experiment for verifying the liquid helium vessel relief device for the Alpha Magnetic Spectrometer-02 (AMS-02). The relief device utilizes a series of rupture discs designed to open in the event of a vacuum failure of the AMS-02 cryogenic system. A failure of this type is classified as a catastrophic loss of insulating vacuum accident. This apparatus differs from other approaches due to the size of the test volumes used. The verification apparatus consists of a 250 liter vessel used for the test quantity of liquid helium that is located inside a vacuum insulated vessel. A large diameter valve is suddenly opened to simulate the loss of insulating vacuum in a repeatable manner. Pressure and temperature vs. time data are presented and discussed in the context of the AMS-02 hardware configuration.

  19. New material model for simulating large impacts on rocky bodies

    NASA Astrophysics Data System (ADS)

    Tonge, A.; Barnouin, O.; Ramesh, K.

    2014-07-01

    Large impact craters on an asteroid can provide insights into its internal structure. These craters can expose material from the interior of the body at the impact site [e.g., 1]; additionally, the impact sends stress waves throughout the body, which interrogate the asteroid's interior. Through a complex interplay of processes, such impacts can result in a variety of motions, the consequence of which may appear as lineaments that are exposed over all or portions of the asteroid's surface [e.g., 2,3]. While analytic, scaling, and heuristic arguments can provide some insight into general phenomena on asteroids, interpreting the results of a specific impact event, or series of events, on a specific asteroid geometry generally necessitates the use of computational approaches that can solve for the stress and displacement history resulting from an impact event. These computational approaches require a constitutive model for the material, which relates the deformation history of a small material volume to the average force on the boundary of that material volume. In this work, we present a new material model that is suitable for simulating the failure of rocky materials during impact events. This material model is similar to the model discussed in [4]. The new material model incorporates dynamic sub-scale crack interactions through a micro-mechanics-based damage model, thermodynamic effects through the use of a Mie-Gruneisen equation of state, and granular flow of the fully damaged material. The granular flow model includes dilatation resulting from the mutual interaction of small fragments of material (grains) as they are forced to slide and roll over each other and includes a P-α type porosity model to account for compaction of the granular material in a subsequent impact event. The micro-mechanics-based damage model provides a direct connection between the flaw (crack) distribution in the material and the rate-dependent strength. By connecting the rate

  20. Modification of a very large thermal-vacuum test chamber for ionosphere and plasmasphere simulation

    NASA Technical Reports Server (NTRS)

    Pearson, O. L.

    1978-01-01

    No large-volume chamber existed which could simulate the ion and electron environment of near-earth space. A very large thermal-vacuum chamber was modified to provide for the manipulation of the test volume magnetic field and for the generation and monitoring of plasma. Plasma densities of 1 million particles per cu cm were generated in the chamber where a variable magnetic flux density of up to 0.00015 T (1.5 gauss) was produced. Plasma temperature, density, composition, and visual effects were monitored, and plasma containment and control were investigated. Initial operation of the modified chamber demonstrated a capability satisfactory for a wide variety of experiments and hardware tests which require an interaction with the plasma environment. Potential for improving the quality of the simulation exists.

  1. Large Eddy Simulation of Pollen Transport in the Atmospheric Boundary Layer

    NASA Astrophysics Data System (ADS)

    Chamecki, Marcelo; Meneveau, Charles; Parlange, Marc B.

    2007-11-01

    The development of genetically modified crops and questions about cross-pollination and contamination of natural plant populations have heightened the importance of understanding wind dispersion of airborne pollen. The main objective of this work is to simulate the dispersal of pollen grains in the atmospheric surface layer using large eddy simulation. Pollen concentrations are simulated by an advection-diffusion equation including gravitational settling. Of great importance is the specification of the bottom boundary conditions characterizing the pollen source over the canopy and the deposition process everywhere else. The velocity field is discretized using a pseudospectral approach. However, the application of the same discretization scheme to the pollen equation generates unphysical solutions (i.e., negative concentrations). The finite-volume bounded scheme SMART is therefore used for the pollen equation. A conservative interpolation scheme to determine the velocity field on the finite volume surfaces was developed. The implementation is validated against field experiments of point source and area field releases of pollen.
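
    A one-dimensional finite-volume caricature of the pollen balance (advection with settling plus diffusion) illustrates why a bounded scheme matters; first-order upwind is used below for brevity where the paper uses the bounded SMART scheme:

    ```python
    import numpy as np

    nz, dz, dt = 100, 1.0, 0.1
    K, ws = 1.0, -0.02              # eddy diffusivity, settling velocity (downward)
    c = np.zeros(nz)
    c[50] = 1.0 / dz                # point release

    for _ in range(2000):
        flux = np.zeros(nz + 1)     # faces; boundaries stay closed (zero flux)
        # first-order upwind advective flux (with settling) at interior faces
        flux[1:nz] = np.where(ws > 0, ws * c[:-1], ws * c[1:])
        # diffusive flux
        flux[1:nz] -= K * (c[1:] - c[:-1]) / dz
        c -= dt * (flux[1:] - flux[:-1]) / dz   # conservative update
    assert c.min() >= -1e-12        # boundedness: no negative concentrations
    ```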

  2. Large-volume leukaphereses may be more efficient than standard-volume leukaphereses for collection of peripheral blood progenitor cells.

    PubMed

    Passos-Coelho, J L; Machado, M A; Lúcio, P; Leal-Da-Costa, F; Silva, M R; Parreira, A

    1997-10-01

    To overcome the need for multiple leukaphereses to collect enough PBPC for autologous transplantation, large-volume leukaphereses (LVL) are used to process multiple blood volumes per session. We compared the efficiency of CD34+ cell collection by LVL (n = 63; median blood volumes processed 11.1) with that of standard-volume leukaphereses (SVL) (n = 38; median blood volumes processed 1.9). To achieve this in patients with different peripheral blood concentrations of CD34+ cells, we analyzed the ratio of CD34+ cells collected per unit of blood volume processed, divided by the number of CD34+ cells in total blood volume at the beginning of apheresis. For LVL, 30% (9%-323%) of circulating CD34+ cells were collected per blood volume compared with 42% (7%-144%) for SVL (p = 0.02). However, in LVL patients, peripheral blood CD34+ cells/L decreased a median of 54% during LVL (similar data for SVL not available). The number of CD34+ cells collected per blood volume processed after 4 and 8 blood volumes and at the end of LVL were 0.32 (0.01-2.05), 0.24 (0.01-1.68), and 0.22 (0.01-2.40) × 10^6 CD34+ cells/kg, respectively (p = 0.0007), despite the 54% decrease in peripheral blood CD34+ cells/L throughout LVL. A median 66% decrease in the platelet count was also observed during LVL. Thus, LVL may be more efficient than SVL for PBPC collection, allowing, in most patients, the collection in one LVL of sufficient PBPC to support autologous transplantation.

  3. Large space telescope, phase A. Volume 3: Optical telescope assembly

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The development and characteristics of the optical telescope assembly for the Large Space Telescope are discussed. The systems considerations are based on mission-related parameters and optical equipment requirements. Information is included on: (1) structural design and analysis, (2) thermal design, (3) stabilization and control, (4) alignment, focus, and figure control, (5) electronic subsystem, and (6) scientific instrument design.

  4. RADON DIAGNOSTIC MEASUREMENT GUIDANCE FOR LARGE BUILDINGS - VOLUME 2. APPENDICES

    EPA Science Inventory

    The report discusses the development of radon diagnostic procedures and mitigation strategies applicable to a variety of large non-residential buildings commonly found in Florida. The investigations document and evaluate the nature of radon occurrence and entry mechanisms for rad...

  5. Large space telescope, phase A. Volume 4: Scientific instrument package

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The design and characteristics of the scientific instrument package for the Large Space Telescope are discussed. The subjects include: (1) general scientific objectives, (2) package system analysis, (3) scientific instrumentation, (4) imaging photoelectric sensors, (5) environmental considerations, and (6) reliability and maintainability.

  6. Large space telescope, phase A. Volume 5: Support systems module

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The development and characteristics of the support systems module for the Large Space Telescope are discussed. The following systems are described: (1) thermal control, (2) electrical, (3) communication and data handling, (4) attitude control system, and (5) structural features. Analyses of maintainability and reliability considerations are included.

  7. Constrained Large Eddy Simulation of Separated Turbulent Flows

    NASA Astrophysics Data System (ADS)

    Xia, Zhenhua; Shi, Yipeng; Wang, Jianchun; Xiao, Zuoli; Yang, Yantao; Chen, Shiyi

    2011-11-01

    Constrained Large-Eddy Simulation (CLES) has recently been proposed to simulate turbulent flows with massive separation. Different from traditional large eddy simulation (LES) and hybrid RANS/LES approaches, CLES simulates the whole flow domain by large eddy simulation while enforcing a RANS Reynolds stress constraint on the subgrid-scale (SGS) stress models in the near-wall region. Algebraic eddy-viscosity models and the one-equation Spalart-Allmaras (S-A) model have been used to constrain the Reynolds stress. The CLES approach is validated a posteriori through simulation of flow past a circular cylinder and periodic hill flow at high Reynolds numbers. The simulation results are compared with those from RANS, DES, DDES and other available hybrid RANS/LES methods. It is shown that the capability of the CLES method in predicting separated flows is comparable to that of DES. Detailed discussions are also presented about the effects of the RANS models used as constraints in the near-wall layers. Our results demonstrate that the CLES method is a promising alternative for engineering applications.
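
    Our schematic reading of the constraint (paraphrased from the CLES literature; the exact form used by the authors may differ): the mean SGS stress in the constrained region is required to make up the difference between the prescribed RANS Reynolds stress and the resolved turbulent stress:

    ```latex
    \langle \tau_{ij}^{\mathrm{SGS}} \rangle
      = R_{ij}^{\mathrm{RANS}}
      - \left( \langle \bar{u}_i \bar{u}_j \rangle
             - \langle \bar{u}_i \rangle \langle \bar{u}_j \rangle \right).
    ```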

  8. REIONIZATION ON LARGE SCALES. I. A PARAMETRIC MODEL CONSTRUCTED FROM RADIATION-HYDRODYNAMIC SIMULATIONS

    SciTech Connect

    Battaglia, N.; Trac, H.; Cen, R.; Loeb, A.

    2013-10-20

    We present a new method for modeling inhomogeneous cosmic reionization on large scales. Utilizing high-resolution radiation-hydrodynamic simulations with 2048^3 dark matter particles, 2048^3 gas cells, and 17 billion adaptive rays in an L = 100 Mpc h^-1 box, we show that the density and reionization redshift fields are highly correlated on large scales (≳1 Mpc h^-1). This correlation can be statistically represented by a scale-dependent linear bias. We construct a parametric function for the bias, which is then used to filter any large-scale density field to derive the corresponding spatially varying reionization redshift field. The parametric model has three free parameters that can be reduced to one free parameter when we fit the two bias parameters to simulation results. We can differentiate degenerate combinations of the bias parameters by combining results for the global ionization histories and correlation length between ionized regions. Unlike previous semi-analytic models, the evolution of the reionization redshift field in our model is directly compared cell by cell against simulations and performs well in all tests. Our model maps the high-resolution, intermediate-volume radiation-hydrodynamic simulations onto lower-resolution, larger-volume N-body simulations (≳2 Gpc h^-1) in order to make mock observations and theoretical predictions.

  9. A finite volume model simulation for the Broughton Archipelago, Canada

    NASA Astrophysics Data System (ADS)

    Foreman, M. G. G.; Czajko, P.; Stucchi, D. J.; Guo, M.

    A finite volume circulation model is applied to the Broughton Archipelago region of British Columbia, Canada and used to simulate the three-dimensional velocity, temperature, and salinity fields that are required by a companion model for sea lice behaviour, development, and transport. The absence of a high resolution atmospheric model necessitated the installation of nine weather stations throughout the region and the development of a simple data assimilation technique that accounts for topographic steering in interpolating/extrapolating the measured winds to the entire model domain. The circulation model is run for the period of March 13-April 3, 2008 and correlation coefficients between observed and model currents, comparisons between model and observed tidal harmonics, and root mean square differences between observed and model temperatures and salinities all showed generally good agreement. The importance of wind forcing in the near-surface circulation, differences between this simulation and one computed with another model, the effects of bathymetric smoothing on channel velocities, further improvements necessary for this model to accurately simulate conditions in May and June, and the implication of near-surface current patterns at a critical location in the 'migration corridor' of wild juvenile salmon, are also discussed.

  10. Real-time visualization of large volume datasets on standard PC hardware.

    PubMed

    Xie, Kai; Yang, Jie; Zhu, Y M

    2008-05-01

    In medical applications, interactive three-dimensional visualization of large volume datasets is a challenging task. One of the major challenges in graphics processing unit (GPU)-based volume rendering algorithms is the limited size of texture memory imposed by current GPU architecture. We attempt to overcome this limitation by rendering only visible parts of large CT datasets. In this paper, we present an efficient, high-quality volume rendering algorithm using GPUs for rendering large CT datasets at interactive frame rates on standard PC hardware. We subdivide the volume dataset into uniformly sized blocks and take advantage of combinations of early ray termination, empty-space skipping and visibility culling to accelerate the whole rendering process and render only the visible parts of the volume data. We have implemented our volume rendering algorithm for a large volume dataset of 512 x 304 x 1878 dimensions (visible female), and achieved real-time performance (i.e., 3-4 frames per second) on a Pentium 4 2.4 GHz PC equipped with an NVIDIA GeForce 6600 graphics card (256 MB video memory). This method can be used as a 3D visualization tool of large CT datasets for doctors or radiologists.
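
    The two accelerations named in this abstract can be sketched for a toy CPU ray caster; this illustrates the idea only and is not the authors' GPU implementation:

    ```python
    import numpy as np

    def cast_ray(volume, block_max, origin, direction, step=1.0,
                 block=16, tau=0.02, eps=1e-3):
        """Front-to-back compositing with both accelerations."""
        color, alpha, t = 0.0, 0.0, 0.0
        while alpha < 1.0 - eps:                       # early ray termination
            idx = np.floor(origin + t * direction).astype(int)
            if np.any(idx < 0) or np.any(idx >= volume.shape):
                break                                  # left the volume
            if block_max[tuple(idx // block)] < eps:   # empty-space skipping
                t += block * step                      # jump roughly one block
                continue
            a = 1.0 - np.exp(-tau * volume[tuple(idx)] * step)
            color += (1.0 - alpha) * a * volume[tuple(idx)]  # emission = density
            alpha += (1.0 - alpha) * a
            t += step
        return color, alpha

    vol = np.random.rand(64, 64, 64).astype(np.float32)
    # per-block maxima for empty-space skipping (16^3 blocks)
    bmax = vol.reshape(4, 16, 4, 16, 4, 16).max(axis=(1, 3, 5))
    print(cast_ray(vol, bmax, np.array([0.0, 32.0, 32.0]), np.array([1.0, 0.0, 0.0])))
    ```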

  11. Large-scale simulations of complex physical systems

    NASA Astrophysics Data System (ADS)

    Belić, A.

    2007-04-01

    Scientific computing has become a tool as vital as experimentation and theory for dealing with scientific challenges of the twenty-first century. Large scale simulations and modelling serve as heuristic tools in a broad problem-solving process. High-performance computing facilities make possible the first step in this process - a view of new and previously inaccessible domains in science and the building up of intuition regarding the new phenomenology. The final goal of this process is to translate this newly found intuition into better algorithms and new analytical results. In this presentation we give an outline of the research themes pursued at the Scientific Computing Laboratory of the Institute of Physics in Belgrade regarding large-scale simulations of complex classical and quantum physical systems, and present recent results obtained in the large-scale simulations of granular materials and path integrals.

  12. Evaluation of Large Volume SrI2(Eu) Scintillator Detectors

    SciTech Connect

    Sturm, B W; Cherepy, N J; Drury, O B; Thelin, P A; Fisher, S E; Magyar, A F; Payne, S A; Burger, A; Boatner, L A; Ramey, J O; Shah, K S; Hawrami, R

    2010-11-18

    There is an ever increasing demand for gamma-ray detectors which can achieve good energy resolution, high detection efficiency, and room-temperature operation. We are working to address each of these requirements through the development of large volume SrI2(Eu) scintillator detectors. In this work, we have evaluated a variety of SrI2 crystals with volumes >10 cm^3. The goal of this research was to examine the causes of energy resolution degradation for larger detectors and to determine what can be done to mitigate these effects. Testing both packaged and unpackaged detectors, we have consistently achieved better resolution with the packaged detectors. Using a collimated gamma-ray source, it was determined that better energy resolution for the packaged detectors is correlated with better light collection uniformity. A number of packaged detectors were fabricated and tested and the best spectroscopic performance was achieved for a 3% Eu doped crystal with an energy resolution of 2.93% FWHM at 662 keV. Simulations of SrI2(Eu) crystals were also performed to better understand the light transport physics in scintillators and are reported. This study has important implications for the development of SrI2(Eu) detectors for national security purposes.

  13. Computation and volume rendering of large-scale EOF coherent modes in rotating turbulent flow data

    NASA Astrophysics Data System (ADS)

    Ostrouchov, G.; Pugmire, D.; Rosenberg, D. L.; Chen, W.; Pouquet, A.

    2013-12-01

    The computation of empirical orthogonal functions (EOF) is used to extract major coherent modes of variability in spatio-temporal data. We explore the computation of EOFs in three spatial dimensions over time and present the result with volume rendering software. To accomplish this, we use an HPC extension of the R language, pbdR (see r-pbd.org), that we embed in the VisIt visualization system. VisIt provides parallel data reader capability as well as the volume rendering ability to present the computed EOFs. The data we consider derive from direct numerical simulation on a grid of 2048^3 points of rapidly rotating turbulent flows that are forced at intermediate scales. Injection of energy at these scales at small Rossby number (~0.04) leads to a direct cascade of energy to small scales, and an inverse cascade to large scales. We will use pbdR to examine the spatio-temporal interactions and ergodicity of waves and turbulent eddies in these flows.
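
    On data that fit in one node's memory, the EOF computation reduces to an SVD of the mean-removed data matrix; a minimal dense sketch of what pbdR distributes across processors (shapes are illustrative):

    ```python
    import numpy as np

    def compute_eofs(data, n_modes=3):
        """data: (n_times, nx, ny, nz). Returns EOFs, PC time series, variance fractions."""
        nt = data.shape[0]
        X = data.reshape(nt, -1)
        X = X - X.mean(axis=0)                 # remove the temporal mean at each point
        U, S, Vt = np.linalg.svd(X, full_matrices=False)
        variance_frac = S**2 / np.sum(S**2)
        eofs = Vt[:n_modes].reshape((n_modes,) + data.shape[1:])
        pcs = U[:, :n_modes] * S[:n_modes]     # principal-component time series
        return eofs, pcs, variance_frac[:n_modes]

    data = np.random.rand(20, 16, 16, 16)      # (time, x, y, z) toy field
    eofs, pcs, var = compute_eofs(data)
    print(eofs.shape, pcs.shape, var)          # each EOF is a 3D field to volume-render
    ```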

  14. Controlled multibody dynamics simulation for large space structures

    NASA Technical Reports Server (NTRS)

    Housner, J. M.; Wu, S. C.; Chang, C. W.

    1989-01-01

    The multibody dynamics discipline and dynamic simulation in control-structure interaction (CSI) design are discussed. The use, capabilities, and architecture of the Large Angle Transient Dynamics (LATDYN) code as a simulation tool are explained. A generic joint body with various types of hinge connections; finite element and element coordinate systems; results of a flexible beam spin-up on a plane; mini-mast deployment; space crane and robotic slewing manipulations; a potential CSI test article; and multibody benchmark experiments are also described.

  15. High Fidelity Simulations of Large-Scale Wireless Networks

    SciTech Connect

    Onunkwo, Uzoma; Benz, Zachary

    2015-11-01

    The worldwide proliferation of wirelessly connected devices continues to accelerate. There are tens of billions of wireless links across the planet, with an additional explosion of new wireless usage anticipated as the Internet of Things develops. Wireless technologies do not only provide convenience for mobile applications, but are also extremely cost-effective to deploy. Thus, this trend towards wireless connectivity will only continue, and Sandia must develop the necessary simulation technology to proactively analyze the associated emerging vulnerabilities. Wireless networks are marked by mobility and proximity-based connectivity. The de facto standard for exploratory studies of wireless networks is discrete event simulation (DES). However, the simulation of large-scale wireless networks is extremely difficult due to prohibitively large turnaround times. A path forward is to expedite simulations with parallel discrete event simulation (PDES) techniques. The mobility and distance-based connectivity associated with wireless simulations, however, typically doom PDES approaches to poor scaling (e.g., the OPNET and ns-3 simulators). We propose a PDES-based tool aimed at reducing the communication overhead between processors. The proposed solution will use light-weight processes to dynamically distribute computation workload while mitigating the communication overhead associated with synchronizations. This work is vital to the analytics and validation capabilities of simulation and emulation at Sandia. We have years of experience in Sandia’s simulation and emulation projects (e.g., MINIMEGA and FIREWHEEL). Sandia’s current highly regarded capabilities in large-scale emulation have focused on wired networks, where two assumptions prevent scalable wireless studies: (a) the connections between objects are mostly static, and (b) the nodes have fixed locations.

  16. Large Eddy Simulations and Turbulence Modeling for Film Cooling

    NASA Technical Reports Server (NTRS)

    Acharya, Sumanta

    1999-01-01

    The objective of the research is to perform Direct Numerical Simulations (DNS) and Large Eddy Simulations (LES) of the film cooling process, and to evaluate and improve advanced forms of two-equation turbulence models for turbine blade surface flow analysis. The DNS/LES were used to resolve the large eddies within the flow field near the coolant jet location. The work involved code development and application of the developed codes to film cooling problems. Five different codes were developed and utilized to perform this research. This report presents a summary of the development of the codes and their applications to analyze the turbulence properties at locations near coolant injection holes.

  17. Applications of large eddy simulation methods to gyrokinetic turbulence

    SciTech Connect

Bañón Navarro, A.; Happel, T.; Teaca, B.; Jenko, F.; Hammett, G. W.; Collaboration: ASDEX Upgrade Team

    2014-03-15

The large eddy simulation (LES) approach—solving numerically the large scales of a turbulent system and accounting for the small-scale influence through a model—is applied to nonlinear gyrokinetic systems that are driven by a number of different microinstabilities. Comparisons between modeled, lower-resolution, and higher-resolution simulations are performed for an experimentally measurable quantity, the electron density fluctuation spectrum. Moreover, the validation and applicability of LES is demonstrated through a series of diagnostics based on the free energetics of the system.

  18. Simulating Longitudinal Brain MRIs with Known Volume Changes and Realistic Variations in Image Intensity

    PubMed Central

    Khanal, Bishesh; Ayache, Nicholas; Pennec, Xavier

    2017-01-01

This paper presents a simulator tool that can simulate large databases of visually realistic longitudinal MRIs with known volume changes. The simulator is based on a previously proposed biophysical model of brain deformation due to atrophy in Alzheimer's disease (AD). In this work, we propose a novel way of reproducing realistic intensity variation in longitudinal brain MRIs, which is inspired by an approach used for the generation of synthetic cardiac sequence images. This approach combines a deformation field obtained from the biophysical model with a deformation field obtained by a non-rigid registration of two images. The combined deformation field is then used to simulate a new image with specified atrophy from the first image, but with the intensity characteristics of the second image. This makes it possible to generate the realistic variations present in real longitudinal time-series of images, such as the independence of noise between two acquisitions and the potential presence of variable acquisition artifacts. Various options available in the simulator software are briefly explained in this paper. In addition, the software is released as an open-source repository. The availability of the software allows researchers to produce tailored databases of images with ground-truth volume changes; we believe this will help in developing more robust brain morphometry tools. Additionally, we believe that the scientific community can also use the software to further experiment with the proposed model, and add more complex models of brain deformation and atrophy generation. PMID:28381986
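
    As a 2D toy analogue of the combination step (hypothetical fields; the released simulator operates on 3D MRIs and is not reproduced here), the model-derived and registration-derived displacement fields can be composed and applied with standard interpolation:

      import numpy as np
      from scipy.ndimage import map_coordinates

      shape = (64, 64)
      yy, xx = np.mgrid[0:shape[0], 0:shape[1]].astype(float)
      img = np.exp(-((yy - 32)**2 + (xx - 32)**2) / 100.0)   # stand-in "MRI"

      u_model = np.stack([0.5 * np.ones(shape), np.zeros(shape)])  # atrophy model
      u_reg   = np.stack([np.zeros(shape), 0.8 * np.ones(shape)])  # registration

      # Compose the two fields: u(x) = u_reg(x + u_model(x)) + u_model(x)
      coords = np.stack([yy + u_model[0], xx + u_model[1]])
      u_reg_warped = np.stack([map_coordinates(u_reg[c], coords, order=1)
                               for c in range(2)])
      u_comb = u_reg_warped + u_model

      # Pull-back warp of the baseline image with the combined field
      warped = map_coordinates(img, np.stack([yy + u_comb[0], xx + u_comb[1]]),
                               order=1)
      print(warped.shape)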

  19. Random forest classification of large volume structures for visuo-haptic rendering in CT images

    NASA Astrophysics Data System (ADS)

    Mastmeyer, Andre; Fortmeier, Dirk; Handels, Heinz

    2016-03-01

For patient-specific voxel-based visuo-haptic rendering of CT scans of the liver area, the fully automatic segmentation of large volume structures such as skin, soft tissue, lungs and intestine (risk structures) is important. Using a machine-learning-based approach, several existing segmentations from 10 segmented gold-standard patients are learned by random decision forests, individually and collectively. The core of this paper is feature selection and the application of the learned classifiers to a new patient data set. In a leave-some-out cross-validation, the obtained full volume segmentations are compared to the gold-standard segmentations of the untrained patients. The proposed classifiers use a multi-dimensional feature space to estimate the hidden truth, instead of relying on clinical standard threshold- and connectivity-based methods. The results of our efficient whole-body section classification are multi-label maps of the considered tissues. For visuo-haptic simulation, other small volume structures would have to be segmented additionally; we also take a look at these structures (liver vessels). For an experimental leave-some-out study consisting of 10 patients, the proposed method performs much more efficiently than state-of-the-art methods. In two variants of leave-some-out experiments we obtain best mean DICE ratios of 0.79, 0.97, 0.63 and 0.83 for skin, soft tissue, hard bone and risk structures. Liver structures are segmented with DICE 0.93 for the liver, 0.43 for blood vessels and 0.39 for bile vessels.
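
    A minimal sketch of the classify-then-score pipeline on synthetic data (the paper's actual image features and labels are not reproduced here):

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(0)
      # Hypothetical per-voxel feature vectors (e.g. intensity, smoothed
      # intensity, gradient magnitude, position) and binary tissue labels.
      X_train = rng.normal(size=(5000, 4))
      y_train = (X_train[:, 0] > 0).astype(int)
      X_test = rng.normal(size=(2000, 4))
      y_test = (X_test[:, 0] > 0).astype(int)

      clf = RandomForestClassifier(n_estimators=100, random_state=0)
      clf.fit(X_train, y_train)
      pred = clf.predict(X_test)

      def dice(a, b):
          # Dice = 2|A ∩ B| / (|A| + |B|) for binary masks
          return 2.0 * np.sum((a == 1) & (b == 1)) / (np.sum(a == 1) + np.sum(b == 1))

      print("DICE:", dice(pred, y_test))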

  20. Toward Improved Support for Loosely Coupled Large Scale Simulation Workflows

    SciTech Connect

    Boehm, Swen; Elwasif, Wael R; Naughton, III, Thomas J; Vallee, Geoffroy R

    2014-01-01

High-performance computing (HPC) workloads are increasingly leveraging loosely coupled large scale simulations. Unfortunately, most large-scale HPC platforms, including Cray/ALPS environments, are designed for the execution of long-running jobs based on coarse-grained launch capabilities (e.g., one MPI rank per core on all allocated compute nodes). This assumption limits capability-class workload campaigns that require large numbers of discrete or loosely coupled simulations, and for which time-to-solution is an untenable pacing issue. This paper describes the challenges related to the support of fine-grained launch capabilities that are necessary for the execution of loosely coupled large scale simulations on Cray/ALPS platforms. More precisely, we present the details of an enhanced runtime system to support this use case, and report on initial results from early testing on systems at Oak Ridge National Laboratory.

  1. An instrument for collecting discrete large-volume water samples suitable for ecological studies of microorganisms

    NASA Astrophysics Data System (ADS)

    Wommack, K. Eric; Williamson, Shannon J.; Sundbergh, Arthur; Helton, Rebekah R.; Glazer, Brian T.; Portune, Kevin; Craig Cary, S.

    2004-11-01

Microbiological investigations utilizing molecular genetic approaches to characterize microbial communities can require large volume water samples, tens to hundreds of liters. The requirement for large volume samples can be especially challenging in deep-sea hydrothermal vent environments of the oceanic ridge system. By and large, studies of these environments rely on deep submergence vehicles. However, collection of large volume (>100 L) water samples adjacent to the benthos is not feasible due to weight considerations. To address the technical difficulty of collecting large volume water samples from hydrothermal diffuse flow environments, a semi-autonomous large-volume water sampler (LVWS) was designed. The LVWS is capable of reliably collecting and bringing to the surface 120 L water samples from diffuse flow environments. Microscopy, molecular genetic and chemical analyses of water samples taken from 9°N East Pacific Rise are shown to demonstrate the utility of the LVWS for studies of near-benthos environments. To our knowledge this is the first report of virioplankton abundance within diffuse-flow waters of a deep-sea hydrothermal vent environment. Because of its simple design and relatively low cost, the LVWS should be applicable to a variety of studies which require large-volume water samples collected immediately adjacent to the benthos.

  2. Large-volume protein crystal growth for neutron macromolecular crystallography

    SciTech Connect

    Ng, Joseph D.; Baird, James K.; Coates, Leighton; Garcia-Ruiz, Juan M.; Hodge, Teresa A.; Huang, Sijay

    2015-03-30

    Neutron macromolecular crystallography (NMC) is the prevailing method for the accurate determination of the positions of H atoms in macromolecules. As neutron sources are becoming more available to general users, finding means to optimize the growth of protein crystals to sizes suitable for NMC is extremely important. Historically, much has been learned about growing crystals for X-ray diffraction. However, owing to new-generation synchrotron X-ray facilities and sensitive detectors, protein crystal sizes as small as in the nano-range have become adequate for structure determination, lessening the necessity to grow large crystals. Here, some of the approaches, techniques and considerations for the growth of crystals to significant dimensions that are now relevant to NMC are revisited. We report that these include experimental strategies utilizing solubility diagrams, ripening effects, classical crystallization techniques, microgravity and theoretical considerations.

  3. Large-volume protein crystal growth for neutron macromolecular crystallography

    DOE PAGES

    Ng, Joseph D.; Baird, James K.; Coates, Leighton; ...

    2015-03-30

Neutron macromolecular crystallography (NMC) is the prevailing method for the accurate determination of the positions of H atoms in macromolecules. As neutron sources are becoming more available to general users, finding means to optimize the growth of protein crystals to sizes suitable for NMC is extremely important. Historically, much has been learned about growing crystals for X-ray diffraction. However, owing to new-generation synchrotron X-ray facilities and sensitive detectors, protein crystal sizes as small as in the nano-range have become adequate for structure determination, lessening the necessity to grow large crystals. Here, some of the approaches, techniques and considerations for the growth of crystals to significant dimensions that are now relevant to NMC are revisited. We report that these include experimental strategies utilizing solubility diagrams, ripening effects, classical crystallization techniques, microgravity and theoretical considerations.

  4. Computational fluid dynamics simulations of particle deposition in large-scale, multigenerational lung models.

    PubMed

    Walters, D Keith; Luke, William H

    2011-01-01

    Computational fluid dynamics (CFD) has emerged as a useful tool for the prediction of airflow and particle transport within the human lung airway. Several published studies have demonstrated the use of Eulerian finite-volume CFD simulations coupled with Lagrangian particle tracking methods to determine local and regional particle deposition rates in small subsections of the bronchopulmonary tree. However, the simulation of particle transport and deposition in large-scale models encompassing more than a few generations is less common, due in part to the sheer size and complexity of the human lung airway. Highly resolved, fully coupled flowfield solution and particle tracking in the entire lung, for example, is currently an intractable problem and will remain so for the foreseeable future. This paper adopts a previously reported methodology for simulating large-scale regions of the lung airway (Walters, D. K., and Luke, W. H., 2010, "A Method for Three-Dimensional Navier-Stokes Simulations of Large-Scale Regions of the Human Lung Airway," ASME J. Fluids Eng., 132(5), p. 051101), which was shown to produce results similar to fully resolved geometries using approximate, reduced geometry models. The methodology is extended here to particle transport and deposition simulations. Lagrangian particle tracking simulations are performed in combination with Eulerian simulations of the airflow in an idealized representation of the human lung airway tree. Results using the reduced models are compared with those using the fully resolved models for an eight-generation region of the conducting zone. The agreement between fully resolved and reduced geometry simulations indicates that the new method can provide an accurate alternative for large-scale CFD simulations while potentially reducing the computational cost of these simulations by several orders of magnitude.
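
    As a minimal sketch of the Lagrangian side of such a calculation (illustrative only; the paper couples particle tracking to full Eulerian CFD solutions), one can integrate Stokes drag and gravity for a particle in a prescribed air velocity field; all values below are hypothetical:

      import numpy as np

      # One-way-coupled Lagrangian tracking: the particle relaxes toward the
      # local air velocity over the response time tau_p and settles under
      # gravity; deposition is flagged when a (hypothetical) wall is crossed.
      def track(u_air, tau_p=1e-4, g=9.81, dt=1e-5, steps=2000):
          x = np.zeros(3)                   # position [m]
          v = np.zeros(3)                   # particle velocity [m/s]
          for _ in range(steps):
              # Stokes drag: dv/dt = (u_air - v)/tau_p - g e_z
              v += dt * ((u_air(x) - v) / tau_p - np.array([0.0, 0.0, g]))
              x += dt * v
              if abs(x[1]) > 5e-4:          # hypothetical airway wall at 0.5 mm
                  return x, True            # deposited
          return x, False

      pos, deposited = track(lambda x: np.array([1.0, 0.05, 0.0]))
      print(pos, deposited)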

  5. Science and engineering of large scale socio-technical simulations.

    SciTech Connect

    Barrett, C. L.; Eubank, S. G.; Marathe, M. V.; Mortveit, H. S.; Reidys, C. M.

    2001-01-01

Computer simulation is a computational approach whereby global system properties are produced as dynamics by direct computation of interactions among representations of local system elements. A mathematical theory of simulation consists of an account of the formal properties of sequential evaluation and composition of interdependent local mappings. When certain local mappings and their interdependencies can be related to particular real world objects and interdependencies, it is common to compute the interactions to derive a symbolic model of the global system made up of the corresponding interdependent objects. The formal mathematical and computational account of the simulation provides a particular kind of theoretical explanation of the global system properties and, therefore, insight into how to engineer a complex system to exhibit those properties. This paper considers the mathematical foundations and engineering principles necessary for building large scale simulations of socio-technical systems. Examples of such systems are urban regional transportation systems, the national electrical power markets and grids, the world-wide Internet, vaccine design and deployment, theater war, etc. These systems are composed of large numbers of interacting human, physical and technological components. Some components adapt and learn, exhibit perception, interpretation, reasoning, deception, cooperation and noncooperation, and have economic motives as well as the usual physical properties of interaction. The systems themselves are large and the behavior of sociotechnical systems is tremendously complex. The state of affairs for these kinds of systems is characterized by very little satisfactory formal theory, a good deal of very specialized knowledge of subsystems, and a dependence on experience-based practitioners' art. However, these systems are vital and require policy, control, design, implementation and investment. Thus there is motivation to improve the ability to

  6. Large-Scale Simulation of Nuclear Reactors: Issues and Perspectives

    SciTech Connect

    Merzari, Elia; Obabko, Aleks; Fischer, Paul; Halford, Noah; Walker, Justin; Siegel, Andrew; Yu, Yiqi

    2015-01-01

    Numerical simulation has been an intrinsic part of nuclear engineering research since its inception. In recent years a transition is occurring toward predictive, first-principle-based tools such as computational fluid dynamics. Even with the advent of petascale computing, however, such tools still have significant limitations. In the present work some of these issues, and in particular the presence of massive multiscale separation, are discussed, as well as some of the research conducted to mitigate them. Petascale simulations at high fidelity (large eddy simulation/direct numerical simulation) were conducted with the massively parallel spectral element code Nek5000 on a series of representative problems. These simulations shed light on the requirements of several types of simulation: (1) axial flow around fuel rods, with particular attention to wall effects; (2) natural convection in the primary vessel; and (3) flow in a rod bundle in the presence of spacing devices. The focus of the work presented here is on the lessons learned and the requirements to perform these simulations at exascale. Additional physical insight gained from these simulations is also emphasized.

  7. Time simulation of flutter with large stiffness changes

    NASA Technical Reports Server (NTRS)

    Karpel, Mordechay; Wieseman, Carol D.

    1992-01-01

Time simulation of flutter, involving large local structural changes, is formulated with a state-space model that is based on a relatively small number of generalized coordinates. Free-free vibration modes are first calculated for a nominal finite-element model with relatively large fictitious masses located at the area of structural changes. A low-frequency subset of these modes is then transformed into a set of structural modal coordinates with which the entire simulation is performed. These generalized coordinates and the associated oscillatory aerodynamic force coefficient matrices are used to construct an efficient time-domain, state-space model for a basic aeroelastic case. The time simulation can then be performed by simply changing the mass, stiffness, and damping coupling terms when structural changes occur. It is shown that the size of the aeroelastic model required for time simulation with large structural changes at a few a priori known locations is similar to that required for direct analysis of a single structural case. The method is applied to the simulation of an aeroelastic wind-tunnel model. The diverging oscillations are followed by the activation of a tip-ballast decoupling mechanism that stabilizes the system but may cause significant transient overshoots.

  8. Toward the large-eddy simulation of compressible turbulent flows

    NASA Technical Reports Server (NTRS)

    Erlebacher, G.; Hussaini, M. Y.; Speziale, C. G.; Zang, T. A.

    1990-01-01

New subgrid-scale models for the large-eddy simulation of compressible turbulent flows are developed and tested based on the Favre-filtered equations of motion for an ideal gas. A compressible generalization of the linear combination of the Smagorinsky model and scale-similarity model, in terms of Favre-filtered fields, is obtained for the subgrid-scale stress tensor. An analogous thermal linear combination model is also developed for the subgrid-scale heat flux vector. The two dimensionless constants associated with these subgrid-scale models are obtained by correlating with the results of direct numerical simulations of compressible isotropic turbulence performed on a 96^3 grid using Fourier collocation methods. Extensive comparisons between the direct and modeled subgrid-scale fields are provided in order to validate the models. A large-eddy simulation of the decay of compressible isotropic turbulence (conducted on a coarse 32^3 grid) is shown to yield results that are in excellent agreement with the fine-grid direct simulation. Future applications of these compressible subgrid-scale models to the large-eddy simulation of more complex supersonic flows are discussed briefly.
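
    Schematically, and written here in incompressible notation for brevity (the paper works with Favre-filtered compressible fields), a linear-combination (mixed) model of this type sums a scale-similarity term and a Smagorinsky eddy-viscosity term:

      \tau_{ij} \;\approx\; \underbrace{\left( \overline{\bar u_i \bar u_j} - \bar{\bar u}_i\, \bar{\bar u}_j \right)}_{\text{scale similarity}}
      \;-\; \underbrace{2\, C_s\, \bar\Delta^2\, |\bar S|\, \bar S_{ij}}_{\text{Smagorinsky}},
      \qquad
      |\bar S| = \left( 2\, \bar S_{ij}\, \bar S_{ij} \right)^{1/2},

    where the dimensionless constants (two in the compressible formulation above) are the ones calibrated against the 96^3 DNS data.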

  9. Large eddy simulation of the atmosphere on various scales.

    PubMed

    Cullen, M J P; Brown, A R

    2009-07-28

    Numerical simulations of the atmosphere are routinely carried out on various scales for purposes ranging from weather forecasts for local areas a few hours ahead to forecasts of climate change over periods of hundreds of years. Almost without exception, these forecasts are made with space/time-averaged versions of the governing Navier-Stokes equations and laws of thermodynamics, together with additional terms representing internal and boundary forcing. The calculations are a form of large eddy modelling, because the subgrid-scale processes have to be modelled. In the global atmospheric models used for long-term predictions, the primary method is implicit large eddy modelling, using discretization to perform the averaging, supplemented by specialized subgrid models, where there is organized small-scale activity, such as in the lower boundary layer and near active convection. Smaller scale models used for local or short-range forecasts can use a much smaller averaging scale. This allows some of the specialized subgrid models to be dropped in favour of direct simulations. In research mode, the same models can be run as a conventional large eddy simulation only a few orders of magnitude away from a direct simulation. These simulations can then be used in the development of the subgrid models for coarser resolution models.

  10. NASA's Large-Eddy Simulation Research for Jet Noise Applications

    NASA Technical Reports Server (NTRS)

    DeBonis, James R.

    2009-01-01

    Research into large-eddy simulation (LES) for application to jet noise is described. The LES efforts include in-house code development and application at NASA Glenn along with NASA Research Announcement sponsored work at Stanford University and Florida State University. Details of the computational methods used and sample results for jet flows are provided.

  11. Mind the gap: a guideline for large eddy simulation.

    PubMed

    George, William K; Tutkun, Murat

    2009-07-28

    This paper briefly reviews some of the fundamental ideas of turbulence as they relate to large eddy simulation (LES). Of special interest is how our thinking about the so-called 'spectral gap' has evolved over the past decade, and what this evolution implies for LES applications.

  12. Large scale simulations of the great 1906 San Francisco earthquake

    NASA Astrophysics Data System (ADS)

    Nilsson, S.; Petersson, A.; Rodgers, A.; Sjogreen, B.; McCandless, K.

    2006-12-01

As part of a multi-institutional simulation effort, we present large scale computations of the ground motion during the great 1906 San Francisco earthquake using a new finite difference code called WPP. The material data base for northern California provided by USGS, together with the rupture model by Song et al., is demonstrated to lead to a reasonable match with historical data. In our simulations, the computational domain covered 550 km by 250 km of northern California down to 40 km depth, so a 125 m grid size corresponds to about 2.2 billion grid points. To accommodate these large grids, the simulations were run on 512-1024 processors on one of the supercomputers at Lawrence Livermore National Lab. A wavelet compression algorithm enabled storage of time-dependent volumetric data. Nevertheless, the first 45 seconds of the earthquake still generated 1.2 TByte of disk space, and the 3-D post processing was done in parallel.
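
    As a toy illustration of wavelet-threshold compression of volumetric data (the production codec used in the study is not reproduced here), using the PyWavelets package:

      import numpy as np
      import pywt

      vol = np.random.rand(32, 32, 32)                 # stand-in wavefield block
      coeffs = pywt.wavedecn(vol, 'db2', level=2)      # 3-D wavelet decomposition
      arr, slices = pywt.coeffs_to_array(coeffs)

      thr = 0.1 * np.abs(arr).max()                    # keep only large coefficients
      arr_thr = pywt.threshold(arr, thr, mode='hard')
      print("fraction kept:", np.count_nonzero(arr_thr) / arr.size)

      rec = pywt.waverecn(
          pywt.array_to_coeffs(arr_thr, slices, output_format='wavedecn'), 'db2')
      print("max reconstruction error:", np.abs(rec - vol).max())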

  13. Toward the large-eddy simulations of compressible turbulent flows

    NASA Technical Reports Server (NTRS)

    Erlebacher, G.; Hussaini, M. Y.; Speziale, C. G.; Zang, T. A.

    1987-01-01

New subgrid-scale models for the large-eddy simulation of compressible turbulent flows are developed based on the Favre-filtered equations of motion for an ideal gas. A compressible generalization of the linear combination of the Smagorinsky model and scale-similarity model (in terms of Favre-filtered fields) is obtained for the subgrid-scale stress tensor. An analogous thermal linear combination model is also developed for the subgrid-scale heat flux vector. The three dimensionless constants associated with these subgrid-scale models are obtained by correlating with the results of direct numerical simulations of compressible isotropic turbulence performed on a 96^3 grid using Fourier collocation methods. Extensive comparisons between the direct and modeled subgrid-scale fields are provided in order to validate the models. Future applications of these compressible subgrid-scale models to the large-eddy simulation of supersonic aerodynamic flows are discussed briefly.

  14. Sand tank experiment of a large volume biodiesel spill

    NASA Astrophysics Data System (ADS)

    Scully, K.; Mayer, K. U.

    2015-12-01

Although petroleum hydrocarbon releases in the subsurface have been well studied, the impacts of subsurface releases of highly degradable alternative fuels, including biodiesel, are not as well understood. One concern is the generation of CH4, which may lead to explosive conditions in underground structures. In addition, the biodegradation of biodiesel consumes O2 that would otherwise be available for the degradation of petroleum hydrocarbons that may be present at a site. Until now, biodiesel biodegradation in the vadose zone has not been examined in detail, despite being critical to understanding the full impact of a release. This research involves a detailed study of a laboratory release of 80 L of biodiesel applied at the surface into a large sand tank to examine the progress of biodegradation reactions. The experiment will monitor the onset and temporal evolution of CH4 generation to provide guidance for site monitoring needs following a biodiesel release to the subsurface. Three CO2 and CH4 flux chambers have been deployed for long-term monitoring of gas emissions. CO2 fluxes have increased in all chambers over the 126 days since the start of the experiment. The highest CO2 effluxes are found directly above the spill and have increased from < 0.5 μmol m-2 s-1 to ~3.8 μmol m-2 s-1, indicating an increase in microbial activity. There were no measurable CH4 fluxes 126 days into the experiment. Sensors were emplaced to continuously measure O2, CO2, moisture content, matric potential, EC, and temperature. In response to the release, CO2 levels have increased across all sensors, from an average value of 0.1% to 0.6% 126 days after the start of the experiment, indicating the rapid onset of biodegradation. The highest CO2 values observed from samples taken in the gas ports were 2.5%. Average O2 concentrations have decreased from 21% to 17% 126 days after the start of the experiment. O2 levels in the bottom central region of the sand tank declined to approximately 12%.

  15. Design and Analysis of A Beacon-Less Routing Protocol for Large Volume Content Dissemination in Vehicular Ad Hoc Networks.

    PubMed

    Hu, Miao; Zhong, Zhangdui; Ni, Minming; Baiocchi, Andrea

    2016-11-01

Large volume content dissemination is pursued by the growing number of high quality applications for Vehicular Ad hoc NETworks (VANETs), e.g., the live road surveillance service and the video-based overtaking assistant service. For the highly dynamic vehicular network topology, beacon-less routing protocols have been proven to be efficient in achieving a balance between system performance and control overhead. However, to the authors' best knowledge, the routing design for large volume content has not been well considered in previous work, and it introduces new challenges, e.g., an enhanced connectivity requirement for a radio link. In this paper, a link Lifetime-aware Beacon-less Routing Protocol (LBRP) is designed for large volume content delivery in VANETs. Each vehicle makes the forwarding decision based on the message header information and its current state, including speed and position information. A semi-Markov process analytical model is proposed to evaluate the expected delay in constructing one routing path for LBRP. Simulations show that the proposed LBRP scheme outperforms traditional dissemination protocols in providing a low end-to-end delay. The analytical model is shown to exhibit a good match with Monte Carlo simulations on the delay estimation as well.
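
    As a toy illustration of checking an analytic delay expectation against Monte Carlo, in the same spirit as the paper's validation (this is not the LBRP semi-Markov model; path length and rate are hypothetical):

      import random

      # A path of H hops, each hop waiting an exponential forwarding delay.
      H, rate, runs = 5, 2.0, 100_000
      mc = sum(sum(random.expovariate(rate) for _ in range(H))
               for _ in range(runs)) / runs
      analytic = H / rate          # expectation of a sum of H exponentials
      print(mc, analytic)          # the two should agree closely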

  16. Design and Analysis of A Beacon-Less Routing Protocol for Large Volume Content Dissemination in Vehicular Ad Hoc Networks

    PubMed Central

    Hu, Miao; Zhong, Zhangdui; Ni, Minming; Baiocchi, Andrea

    2016-01-01

Large volume content dissemination is pursued by the growing number of high quality applications for Vehicular Ad hoc NETworks (VANETs), e.g., the live road surveillance service and the video-based overtaking assistant service. For the highly dynamic vehicular network topology, beacon-less routing protocols have been proven to be efficient in achieving a balance between system performance and control overhead. However, to the authors' best knowledge, the routing design for large volume content has not been well considered in previous work, and it introduces new challenges, e.g., an enhanced connectivity requirement for a radio link. In this paper, a link Lifetime-aware Beacon-less Routing Protocol (LBRP) is designed for large volume content delivery in VANETs. Each vehicle makes the forwarding decision based on the message header information and its current state, including speed and position information. A semi-Markov process analytical model is proposed to evaluate the expected delay in constructing one routing path for LBRP. Simulations show that the proposed LBRP scheme outperforms traditional dissemination protocols in providing a low end-to-end delay. The analytical model is shown to exhibit a good match with Monte Carlo simulations on the delay estimation as well. PMID:27809285

  17. Mathematical simulation of power conditioning systems. Volume 1: Simulation of elementary units. Report on simulation methodology

    NASA Technical Reports Server (NTRS)

    Prajous, R.; Mazankine, J.; Ippolito, J. C.

    1978-01-01

Methods and algorithms used for the simulation of elementary power conditioning units (buck, boost, and buck-boost converters, as well as shunt PWM) are described. Definitions are given of similar converters and reduced parameters. The various parts of the simulation to be carried out are dealt with: local stability, corrective networks, measurement of input-output impedance, and global stability. A simulation example is given.
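
    As a minimal sketch of what simulating one such elementary unit involves (hypothetical component values; the report's algorithms for stability and impedance analysis are not reproduced), the averaged model of a buck converter can be integrated directly:

      # Averaged buck model: L di/dt = D*Vin - v,  C dv/dt = i - v/R,
      # integrated with explicit Euler.
      Vin, D = 12.0, 0.5              # input voltage [V], duty cycle
      L, C, R = 100e-6, 100e-6, 10.0  # inductance, capacitance, load
      dt = 1e-6                       # time step [s]
      i, v = 0.0, 0.0
      for _ in range(20000):
          di = (D * Vin - v) / L
          dv = (i - v / R) / C
          i += dt * di
          v += dt * dv
      print("steady-state output ≈", v)   # should approach D*Vin = 6 V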

  18. Large Variations in Ice Volume During the Middle Eocene "Doubthouse"

    NASA Astrophysics Data System (ADS)

    Dawber, C. F.; Tripati, A. K.

    2008-12-01

The onset of glacial conditions in the Cenozoic is widely held to have begun ~34 million years ago, coincident with the Eocene-Oligocene boundary [1]. Warm, high-pCO2 'greenhouse' intervals such as the Eocene are generally thought to be ice-free [2]. Yet the sequence stratigraphic record supports the occurrence of high-frequency sea-level change of tens of meters in the Middle and Late Eocene [3], and large calcite and seawater δ18O excursions (~0.5-1.0 permil) have been reported in foraminifera from open ocean sediments [4]. As a result, the Middle Eocene is often considered the intermediary "doubthouse". The extent of continental ice during the 'doubthouse' is controversial, with estimates of glacioeustatic sea level fall ranging from 30 to 125 m [2,3,5]. We present a new δ18Osw reconstruction for Ocean Drilling Program (ODP) Site 1209 in the tropical Pacific Ocean. It is the first continuous high-resolution record for an open-ocean site that is not directly influenced by changes in the carbonate compensation depth, which enables us to circumvent many of the limitations of existing records. Our record shows increases of 0.8 ± 0.2 (1 s.e.) permil and 1.1 ± 0.2 permil at ~44-45 and ~42-41 Ma respectively, which suggests glacioeustatic sea level variations of ~90 m during the Middle Eocene. Modelling studies have shown that fully glaciating Antarctica during the Eocene should drive a change in seawater δ18O (δ18Osw) of 0.45 permil and lower sea level by ~55 m [6]. Our results therefore support significant ice storage in both the Northern and Southern Hemisphere during the Middle Eocene 'doubthouse'. [1] Miller, K. G., et al., 1990, Eocene-Oligocene sea-level changes in the New Jersey coastal plain linked to the deep-sea record, Geological Society of America Bulletin 102, 331-339. [2] Pagani, M., et al., 2005, Marked decline in atmospheric carbon dioxide concentrations during the Paleogene, Science 309 (5734), 600-603. [3] Browning, J., Miller, K., and Pak, D., 1996, Global implications
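
    As a rough consistency check of the numbers quoted above (assuming the modelled Antarctic scaling of 0.45 permil per ~55 m of sea level applies linearly):

      \Delta \mathrm{SL} \;\approx\; \Delta\delta^{18}O_{sw}\,\times\,\frac{55\ \mathrm{m}}{0.45\ \text{permil}}
      \quad\Rightarrow\quad
      0.8\ \text{permil} \times \frac{55}{0.45} \;\approx\; 98\ \mathrm{m},

    which is of the same order as the ~90 m glacioeustatic variations inferred in the abstract.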

  19. Statistics of LES Simulations of Large Wind Farms

    NASA Astrophysics Data System (ADS)

    Juhl Andersen, Søren; Nørkær Sørensen, Jens; Mikkelsen, Robert; Ivanell, Stefan

    2016-09-01

Numerous large eddy simulations of large wind farms are performed using the actuator line method, which has been fully coupled to the aero-elastic code Flex5. The higher-order moments of the flow field inside large wind farms are examined in order to determine a representative reference velocity. The statistical moments appear to collapse, and hence the turbulence inside large wind farms can potentially be scaled accordingly. The thrust coefficient is estimated by two different reference velocities and the generic CT expression by Frandsen. A reference velocity derived from the power production is shown to give very good agreement and furthermore enables a very good estimate of the thrust force using only the steady CT-curve, even for very short time samples. Finally, the effective turbulence inside large wind farms and the equivalent loads are examined.
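
    For reference, the thrust coefficient discussed above follows the standard actuator-disc definition (the paper's contribution concerns which reference velocity to insert into it):

      C_T \;=\; \frac{T}{\tfrac{1}{2}\,\rho\,A\,U_{\mathrm{ref}}^{2}},

    with T the rotor thrust, ρ the air density, A the rotor disc area, and U_ref the chosen reference velocity.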

  20. Large-Eddy Simulations of Flows in Complex Terrain

    NASA Astrophysics Data System (ADS)

    Kosovic, B.; Lundquist, K. A.

    2011-12-01

Large-eddy simulation as a methodology for numerical simulation of turbulent flows was first developed to study turbulent flows in the atmosphere by Lilly (1967). The first LES were carried out by Deardorff (1970), who used these simulations to study atmospheric boundary layers. Ever since, LES has been extensively used to study canonical atmospheric boundary layers, in most cases flat-plate boundary layers under the assumption of horizontal homogeneity. Carefully designed LES of canonical convective, neutrally stratified and, more recently, stably stratified atmospheric boundary layers have contributed significantly to the development of a better understanding of these flows and their parameterizations in large-scale models. These simulations were often carried out using codes specifically designed and developed for large-eddy simulation of horizontally homogeneous flows with periodic lateral boundary conditions. Recent developments in multi-scale numerical simulations of atmospheric flows enable numerical weather prediction (NWP) codes such as ARPS (Chow and Street, 2009), COAMPS (Golaz et al., 2009) and the Weather Research and Forecasting (WRF) model to be used nearly seamlessly across a wide range of atmospheric scales, from synoptic down to turbulent scales in atmospheric boundary layers. Before we can with confidence carry out multi-scale simulations of atmospheric flows, NWP codes must be validated for accurate performance in simulating flows over complex or inhomogeneous terrain. We therefore carry out validation of WRF-LES for simulations of flows over complex terrain using data from the Askervein Hill (Taylor and Teunissen, 1985, 1987) and METCRAX (Whiteman et al., 2008) field experiments. WRF's nesting capability is employed with a one-way nested inner domain that includes complex terrain representation, while the coarser outer nest is used to spin up fully developed atmospheric boundary layer turbulence and thus represent accurately the inflow to the inner domain. LES of a

  1. Maestro: An Orchestration Framework for Large-Scale WSN Simulations

    PubMed Central

    Riliskis, Laurynas; Osipov, Evgeny

    2014-01-01

Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation. PMID:24647123

  2. Maestro: an orchestration framework for large-scale WSN simulations.

    PubMed

    Riliskis, Laurynas; Osipov, Evgeny

    2014-03-18

Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation.

  3. Domain nesting for multi-scale large eddy simulation

    NASA Astrophysics Data System (ADS)

    Fuka, Vladimir; Xie, Zheng-Tong

    2016-04-01

The need to simulate city-scale areas (O(10 km)) with high resolution within street canyons in certain areas of interest necessitates different grid resolutions in different parts of the simulated area. General-purpose computational fluid dynamics codes typically employ unstructured refined grids, while mesoscale meteorological models more often employ nesting of computational domains. ELMM is a large eddy simulation model for the atmospheric boundary layer. It employs orthogonal uniform grids, and for this reason domain nesting was chosen as the approach for simulations across multiple scales. Domains are implemented as sets of MPI processes which communicate with each other as in a normal non-nested run, but also with processes from another (outer/inner) domain. It should be stressed that the solution of time-steps in the outer and in the inner domain must be synchronized, so that the processes do not have to wait for the completion of their boundary conditions. This can be achieved by assigning an appropriate number of CPUs to each domain, thereby gaining high efficiency. When nesting is applied for large eddy simulation, the inner domain receives inflow boundary conditions which lack the turbulent motions not represented by the outer grid. ELMM remedies this by optionally adding turbulent fluctuations to the inflow using the efficient method of Xie and Castro (2008). The spatial scale of these fluctuations is in the subgrid-scale of the outer grid and their intensity is estimated from the subgrid turbulent kinetic energy in the outer grid.
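
    The time-correlation step at the heart of digital-filter inflow generators in the spirit of Xie and Castro (2008) can be sketched as an AR(1) recursion; the real method's spatial filtering, exact correlation constants, and Reynolds-stress scaling are omitted, and all sizes here are hypothetical:

      import numpy as np

      ny, nz = 32, 32          # inflow plane resolution
      dt, T = 0.01, 0.1        # time step and target correlation time scale
      a = np.exp(-dt / T)      # exponential autocorrelation over one step
      b = np.sqrt(1.0 - a * a) # keeps the variance fixed at 1

      u_prime = np.zeros((ny, nz))
      for step in range(1000):
          u_prime = a * u_prime + b * np.random.normal(size=(ny, nz))
      # u_prime would be scaled by the target rms and added to the mean inflow
      print(u_prime.std())     # ≈ 1 by construction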

  4. Publicly Releasing a Large Simulation Dataset with NDS Labs

    NASA Astrophysics Data System (ADS)

    Goldbaum, Nathan

    2016-03-01

Optimally, all publicly funded research should be accompanied by the tools, code, and data necessary to fully reproduce the analysis performed in journal articles describing the research. This ideal can be difficult to attain, particularly when dealing with large (>10 TB) simulation datasets. In this lightning talk, we describe the process of publicly releasing a large simulation dataset to accompany the submission of a journal article. The simulation was performed using Enzo, an open source, community-developed N-body/hydrodynamics code, and was analyzed using a wide range of community-developed tools in the scientific Python ecosystem. Although the simulation was performed and analyzed using an ecosystem of sustainably developed tools, we enable sustainable science using our data by making it publicly available. Combining the data release with the NDS Labs infrastructure allows a substantial amount of added value, including web-based access to analysis and visualization using the yt analysis package through an IPython notebook interface. In addition, we are able to accompany the paper submission to the arXiv preprint server with links to the raw simulation data as well as interactive real-time data visualizations that readers can explore on their own or share with colleagues during journal club discussions. It is our hope that the value added by these services will substantially increase the impact and readership of the paper.

  5. Finecasting for renewable energy with large-eddy simulation

    NASA Astrophysics Data System (ADS)

    Jonker, Harmen; Verzijlbergh, Remco

    2016-04-01

    We present results of a single, continuous Large-Eddy Simulation of actual weather conditions during the timespan of a full year, made possible through recent computational developments (Schalkwijk et al, MWR, 2015). The simulation is coupled to a regional weather model in order to provide an LES dataset that is representative of the daily weather of the year 2012 around Cabauw, the Netherlands. This location is chosen such that LES results can be compared with both the regional weather model and observations from the Cabauw observational supersite. The run was made possible by porting our Large-Eddy Simulation program to run completely on the GPU (Schalkwijk et al, BAMS, 2012). GPU adaptation allows us to reach much improved time-to-solution ratios (i.e. simulation speedup versus real time). As a result, one can perform runs with a much longer timespan than previously feasible. The dataset resulting from the LES run provides many avenues for further study. First, it can provide a more statistical approach to boundary-layer turbulence than the more common case-studies by simulating a diverse but representative set of situations, as well as the transition between situations. This has advantages in designing and evaluating parameterizations. In addition, we discuss the opportunities of high-resolution forecasts for the renewable energy sector, e.g. wind and solar energy production.

  6. Toward large eddy simulation of turbulent flow over an airfoil

    NASA Technical Reports Server (NTRS)

    Choi, Haecheon

    1993-01-01

The flow field over an airfoil contains several distinct flow characteristics, e.g. laminar, transitional, and turbulent boundary layer flow, flow separation, unstable free shear layers, and a wake. This diversity of flow regimes taxes the presently available Reynolds-averaged turbulence models. Such models are generally tuned to predict a particular flow regime, and adjustments are necessary for the prediction of a different flow regime. Similar difficulties are likely to emerge when the large eddy simulation technique is applied with the widely used Smagorinsky model. This model has not been successful in correctly representing different turbulent flow fields with a single universal constant and has an incorrect near-wall behavior. Germano et al. (1991) and Ghosal, Lund & Moin have developed a new subgrid-scale model, the dynamic model, which is very promising in alleviating many of the persistent inadequacies of the Smagorinsky model: the model coefficient is computed dynamically as the calculation progresses rather than input a priori. The model has been remarkably successful in the prediction of several turbulent and transitional flows. We plan to simulate turbulent flow over a '2D' airfoil using the large eddy simulation technique. Our primary objective is to assess the performance of the newly developed dynamic subgrid-scale model for computation of complex flows about aircraft components and to compare the results with those obtained using the Reynolds-averaged approach and experiments. The present computation represents the first application of large eddy simulation to a flow of aeronautical interest and a key demonstration of the capabilities of the large eddy simulation technique.
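
    For context, the dynamic procedure computes the model coefficient from the resolved field via the Germano identity; schematically (hats denote test-filtering, α is the test-to-grid filter-width ratio, and signs and factors vary between references):

      L_{ij} = \widehat{\bar u_i \bar u_j} - \hat{\bar u}_i\, \hat{\bar u}_j,
      \qquad
      M_{ij} = 2\,\bar\Delta^2 \left( \widehat{|\bar S|\,\bar S_{ij}} - \alpha^2\, |\hat{\bar S}|\,\hat{\bar S}_{ij} \right),
      \qquad
      C = -\,\frac{\langle L_{ij} M_{ij} \rangle}{\langle M_{ij} M_{ij} \rangle},

    so that C is recomputed locally as the calculation progresses rather than fixed a priori.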

  7. Large-eddy simulation of trans- and supercritical injection

    NASA Astrophysics Data System (ADS)

    Müller, H.; Niedermeier, C. A.; Jarczyk, M.; Pfitzner, M.; Hickel, S.; Adams, N. A.

    2016-07-01

    In a joint effort to develop a robust numerical tool for the simulation of injection, mixing, and combustion in liquid rocket engines at high pressure, a real-gas thermodynamics model has been implemented into two computational fluid dynamics (CFD) codes, the density-based INCA and a pressure-based version of OpenFOAM. As a part of the validation process, both codes have been used to perform large-eddy simulations (LES) of trans- and supercritical nitrogen injection. Despite the different code architecture and the different subgrid scale turbulence modeling strategy, both codes yield similar results. The agreement with the available experimental data is good.

  8. Model consistency in large eddy simulation of turbulent channel flows

    NASA Technical Reports Server (NTRS)

    Piomelli, Ugo; Ferziger, Joel H.; Moin, Parviz

    1988-01-01

    Combinations of filters and subgrid scale stress models for large eddy simulation of the Navier-Stokes equations are examined by a priori tests and numerical simulations. The structure of the subgrid scales is found to depend strongly on the type of filter used, and consistency between model and filter is essential to ensure accurate results. The implementation of consistent combinations of filter and model gives more accurate turbulence statistics than those obtained in previous investigations in which the models were chosen independently from the filter. Results and limitations of the a priori test are discussed. The effect of grid refinement is also examined.

  9. Laminar flow transition: A large-eddy simulation approach

    NASA Technical Reports Server (NTRS)

    Biringen, S.

    1982-01-01

    A vectorized, semi-implicit code was developed for the solution of the time-dependent, three dimensional equations of motion in plane Poiseuille flow by the large-eddy simulation technique. The code is tested by comparing results with those obtained from the solutions of the Orr-Sommerfeld equation. Comparisons indicate that finite-differences employed along the cross-stream direction act as an implicit filter. This removes the necessity of explicit filtering along this direction (where a nonhomogeneous mesh is used) for the simulation of laminar flow transition into turbulence in which small scale turbulence will be accounted for by a subgrid scale turbulence model.

  10. Simulation of large-scale rule-based models

    SciTech Connect

    Hlavacek, William S; Monnie, Michael I; Colvin, Joshua; Faseder, James

    2008-01-01

Interactions of molecules, such as signaling proteins, with multiple binding sites and/or multiple sites of post-translational covalent modification can be modeled using reaction rules. Rules comprehensively, but implicitly, define the individual chemical species and reactions that molecular interactions can potentially generate. Although rules can be automatically processed to define a biochemical reaction network, the network implied by a set of rules is often too large to generate completely or to simulate using conventional procedures. To address this problem, we present DYNSTOC, a general-purpose tool for simulating rule-based models. DYNSTOC implements a null-event algorithm for simulating chemical reactions in a homogeneous reaction compartment. The simulation method does not require that a reaction network be specified explicitly in advance, but rather takes advantage of the availability of the reaction rules in a rule-based specification of a network to determine whether a randomly selected set of molecular components participates in a reaction during a time step. DYNSTOC reads reaction rules written in the BioNetGen language (BNGL), which is useful for modeling protein-protein interactions involved in signal transduction. The method of DYNSTOC is closely related to that of STOCHSIM. DYNSTOC differs from STOCHSIM by allowing for model specification in terms of BNGL, which extends the range of protein complexes that can be considered in a model. DYNSTOC enables the simulation of rule-based models that cannot be simulated by conventional methods. We demonstrate the ability of DYNSTOC to simulate models accounting for multisite phosphorylation and multivalent binding processes that are characterized by large numbers of reactions. DYNSTOC is free for non-commercial use. The C source code, supporting documentation and example input files are available at .
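
    A toy version of the null-event idea (hypothetical rule and rates; not DYNSTOC's actual data structures or algorithm details) looks like this:

      import random

      # Each step picks a molecule at random and fires a rule only if it
      # applies and a probability test passes; otherwise a "null event"
      # occurs and only time advances.
      A = [{"phos": False} for _ in range(1000)]   # molecules with one site
      p_phos, dt, t = 0.01, 1e-3, 0.0
      for _ in range(200_000):
          m = random.choice(A)
          if not m["phos"] and random.random() < p_phos:
              m["phos"] = True                     # rule fires
          # else: null event -- state unchanged
          t += dt
      print(sum(m["phos"] for m in A), "phosphorylated at t =", t)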

  11. Simulating the large-scale structure of HI intensity maps

    SciTech Connect

Seehars, Sebastian; Paranjape, Aseem; Witzemann, Amadeus; Refregier, Alexandre; Amara, Adam; Akeret, Joel

    2016-03-01

Intensity mapping of neutral hydrogen (HI) is a promising observational probe of cosmology and large-scale structure. We present wide-field simulations of HI intensity maps based on N-body simulations of a 2.6 Gpc/h box with 2048^3 particles (particle mass 1.6 × 10^11 M_⊙/h). Using a conditional mass function to populate the simulated dark matter density field with halos below the mass resolution of the simulation (10^8 M_⊙/h < M_halo < 10^13 M_⊙/h), we assign HI to those halos according to a phenomenological halo-to-HI mass relation. The simulations span a redshift range of 0.35 ≲ z ≲ 0.9 in redshift bins of width Δz ≈ 0.05 and cover a quarter of the sky at an angular resolution of about 7'. We use the simulated intensity maps to study the impact of non-linear effects and redshift space distortions on the angular clustering of HI. Focusing on the autocorrelations of the maps, we apply and compare several estimators for the angular power spectrum and its covariance. We verify that these estimators agree with analytic predictions on large scales and study the validity of approximations based on Gaussian random fields, particularly in the context of the covariance. We discuss how our results and the simulated maps can be useful for planning and interpreting future HI intensity mapping surveys.
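
    Schematically, the halo-to-HI assignment step might look as follows; the power-law form, parameter values, and cutoffs here are hypothetical placeholders, not the relation used in the paper:

      import numpy as np

      def m_hi(m_halo, a=0.2, alpha=0.6, m_min=1e8, m_max=1e13):
          # Assign HI mass to halos inside a mass window, zero outside.
          return np.where((m_halo > m_min) & (m_halo < m_max),
                          a * m_halo**alpha * m_min**(1.0 - alpha), 0.0)

      halos = 10**np.random.uniform(8, 13, size=100_000)  # halo masses [M_⊙/h]
      print("total HI mass:", m_hi(halos).sum())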

  12. Center-stabilized Yang-Mills Theory:Confinement and Large N Volume Independence

    SciTech Connect

Unsal, Mithat; Yaffe, Laurence G.

    2008-03-21

We examine a double trace deformation of SU(N) Yang-Mills theory which, for large N and large volume, is equivalent to unmodified Yang-Mills theory up to O(1/N^2) corrections. In contrast to the unmodified theory, large N volume independence is valid in the deformed theory down to arbitrarily small volumes. The double trace deformation prevents the spontaneous breaking of center symmetry which would otherwise disrupt large N volume independence in small volumes. For small values of N, if the theory is formulated on R^3 × S^1 with a sufficiently small compactification size L, then an analytic treatment of the non-perturbative dynamics of the deformed theory is possible. In this regime, we show that the deformed Yang-Mills theory has a mass gap and exhibits linear confinement. Increasing the circumference L or the number of colors N decreases the separation of scales on which the analytic treatment relies. However, there are no order parameters which distinguish the small and large radius regimes. Consequently, for small N the deformed theory provides a novel example of a locally four-dimensional pure gauge theory in which one has analytic control over confinement, while for large N it provides a simple fully reduced model for Yang-Mills theory. The construction is easily generalized to QCD and other QCD-like theories.
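
    Schematically, and up to normalization conventions that should be checked against the paper, the deformation adds to the Yang-Mills action a sum of double-trace terms built from the Wilson line Ω(x) winding around the S^1:

      \Delta S \;=\; \int_{\mathbb{R}^3} d^3x\; \frac{1}{L^3} \sum_{n=1}^{\lfloor N/2 \rfloor} a_n \left| \operatorname{tr}\,\Omega^n(x) \right|^2, \qquad a_n > 0,

    so that center-broken configurations (those with large |tr Ω^n|) are energetically penalized and center symmetry is preserved at small L.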

  13. Two-fluid biasing simulations of the large plasma device

    NASA Astrophysics Data System (ADS)

    Fisher, Dustin M.; Rogers, Barrett N.

    2017-02-01

External biasing of the Large Plasma Device (LAPD) and its impact on plasma flows and turbulence are explored for the first time in 3D simulations using the Global Braginskii Solver code. Without external biasing, the LAPD plasma spontaneously rotates in the ion diamagnetic direction. The application of a positive bias increases the plasma rotation in the simulations, which show the emergence of a coherent Kelvin-Helmholtz (KH) mode outside of the cathode edge with poloidal mode number m ≃ 6. Negative biasing reduces the rotation in the simulations, which exhibit KH turbulence modestly weaker than, but otherwise similar to, that in unbiased simulations. Biasing either way, but especially positively, forces the plasma potential inside the cathode edge to a spatially constant, KH-stable profile, leading to a more quiescent core plasma than in the unbiased case. A moderate increase in plasma confinement and an associated steepening of the profiles are seen in the biasing runs. The simulations thus show that the application of external biasing can improve confinement while also driving a Kelvin-Helmholtz instability. Ion-neutral collisions have only a weak effect in the biased or unbiased simulations.

  14. Process control of large-scale finite element simulation software

    SciTech Connect

    Spence, P.A.; Weingarten, L.I.; Schroder, K.; Tung, D.M.; Sheaffer, D.A.

    1996-02-01

    We have developed a methodology for coupling large-scale numerical codes with process control algorithms. Closed-loop simulations were demonstrated using the Sandia-developed finite element thermal code TACO and the commercially available finite element thermal-mechanical code ABAQUS. This new capability enables us to use computational simulations for designing and prototyping advanced process-control systems. By testing control algorithms on simulators before building and testing hardware, enormous time and cost savings can be realized. The need for a closed-loop simulation capability was demonstrated in a detailed design study of a rapid-thermal-processing reactor under development by CVC Products Inc. Using a thermal model of the RTP system as a surrogate for the actual hardware, we were able to generate response data needed for controller design. We then evaluated the performance of both the controller design and the hardware design by using the controller to drive the finite element model. The controlled simulations provided data on wafer temperature uniformity as a function of ramp rate, temperature sensor locations, and controller gain. This information, which is critical to reactor design, cannot be obtained from typical open-loop simulations.
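
    A minimal closed-loop sketch of the idea (a PI controller driving a first-order thermal surrogate of the wafer; all gains, constants, and the setpoint ramp are hypothetical, and this is not the TACO/ABAQUS coupling):

      tau, gain = 5.0, 2.0          # plant time constant and steady-state gain
      kp, ki = 4.0, 1.5             # controller gains
      dt, T, integ = 0.01, 25.0, 0.0
      t_set = lambda t: min(100.0 + 20.0 * t, 400.0)   # ramp-and-hold setpoint

      for step in range(3000):
          t = step * dt
          err = t_set(t) - T
          integ += err * dt
          u = kp * err + ki * integ                       # control power
          T += dt * (-(T - 25.0) / tau + gain * u / tau)  # plant update
      print("final T:", T, "target:", t_set(3000 * dt))

    The same loop structure applies when the one-line plant update is replaced by a call into a finite element solver, which is precisely what makes closed-loop simulation expensive but valuable for controller design.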

  15. Plasma volume losses during simulated weightlessness in women

    SciTech Connect

    Drew, H.; Fortney, S.; La France, N.; Wagner, H.N. Jr.

    1985-05-01

Six healthy women not using oral contraceptives underwent two 11-day intervals of complete bedrest (BR), with the BR periods separated by 4 weeks of ambulatory control. Change in plasma volume (PV) was monitored during BR to test the hypothesis that these women would show a smaller decrease in PV than that reported in similarly stressed men, due to the water-retaining effects of the female hormones. Bedrest periods were timed to coincide with opposing stages of the menstrual cycle in each woman. The menstrual cycle was divided into 4 separate stages: early follicular, ovulatory, early luteal, and late luteal phases. The percent decrease of PV was consistent for each woman who began BR while in stage 1, 3 or 4 of the menstrual cycle. However, the women who began in stage 2 showed a transient attenuation in PV loss. Overall, PV changes seen in women during BR were similar to those reported for men. The water-retaining effects of menstrual hormones were evident only during the high-estrogen ovulatory stage. The authors conclude that the protective effects of menstrual hormones on PV losses during simulated weightless conditions are only small and transient.

  16. Large Eddy Simulation of Cryogenic Injection Processes at Supercritical Pressure

    NASA Technical Reports Server (NTRS)

    Oefelein, Joseph C.; Garcia, Roberto (Technical Monitor)

    2002-01-01

    This paper highlights results from the first of a series of hierarchical simulations aimed at assessing the modeling requirements for application of the large eddy simulation technique to cryogenic injection and combustion processes in liquid rocket engines. The focus is on liquid-oxygen-hydrogen coaxial injectors at a condition where the liquid-oxygen is injected at a subcritical temperature into a supercritical environment. For this situation a diffusion dominated mode of combustion occurs in the presence of exceedingly large thermophysical property gradients. Though continuous, these gradients approach the behavior of a contact discontinuity. Significant real gas effects and transport anomalies coexist locally in colder regions of the flow, with ideal gas and transport characteristics occurring within the flame zone. The current focal point is on the interfacial region between the liquid-oxygen core and the coaxial hydrogen jet where the flame anchors itself.

  17. Large-eddy simulation using the finite element method

    SciTech Connect

    McCallen, R.C.; Gresho, P.M.; Leone, J.M. Jr.; Kollmann, W.

    1993-10-01

    In a large-eddy simulation (LES) of turbulent flows, the large-scale motion is calculated explicitly, while the small-scale motion is modeled (i.e., approximated with semi-empirical relations). Typically, finite difference or spectral numerical schemes are used to generate an LES; the use of finite element methods (FEM) has been far less prominent. In this study, we demonstrate that FEM in combination with LES provides a viable tool for the study of turbulent, separating channel flows, specifically the flow over a two-dimensional backward-facing step. The combination of these methodologies brings together the advantages of each: LES provides a high degree of accuracy with a minimum of empiricism for turbulence modeling, and FEM provides a robust way to simulate flow in very complex domains of practical interest. Such a combination should prove very valuable to the engineering community.

  18. Lightweight computational steering of very large scale molecular dynamics simulations

    SciTech Connect

    Beazley, D.M.; Lomdahl, P.S.

    1996-09-01

    We present a computational steering approach for controlling, analyzing, and visualizing very large scale molecular dynamics simulations involving tens to hundreds of millions of atoms. Our approach relies on extensible scripting languages and an easy-to-use tool for building extensions and modules. The system is extremely easy to modify, works with existing C code, is memory efficient, and can be used from inexpensive workstations and networks. We demonstrate how we have used this system to manipulate data from production MD simulations involving as many as 104 million atoms running on the CM-5 and Cray T3D. We also show how this approach can be used to build systems that integrate common scripting languages (including Tcl/Tk, Perl, and Python), simulation code, user extensions, and commercial data analysis packages.
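
    A hypothetical sketch of the wrapping idea, using Python's ctypes in place of the extension-building tool described above; the shared library name and the md_* entry points are invented placeholders, not part of the authors' system.

        import ctypes

        # hypothetical shared library built from the existing C simulation code
        md = ctypes.CDLL("./libmdcore.so")
        md.md_step.argtypes = [ctypes.c_int]
        md.md_get_temperature.restype = ctypes.c_double
        md.md_set_timestep.argtypes = [ctypes.c_double]

        for block in range(100):
            md.md_step(1000)                       # advance 1000 timesteps
            T = md.md_get_temperature()            # query the running simulation
            print(f"block {block}: T = {T:.1f} K")
            if T > 400.0:                          # steer: reduce dt if overheating
                md.md_set_timestep(0.5e-15)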

  19. Time-Domain Filtering for Spatial Large-Eddy Simulation

    NASA Technical Reports Server (NTRS)

    Pruett, C. David

    1997-01-01

    An approach to large-eddy simulation (LES) is developed whose subgrid-scale model incorporates filtering in the time domain, in contrast to conventional approaches, which exploit spatial filtering. The method is demonstrated in the simulation of a heated, compressible, axisymmetric jet, and results are compared with those obtained from fully resolved direct numerical simulation. The present approach was, in fact, motivated by the jet-flow problem and the desire to manipulate the flow by localized (point) sources for the purposes of noise suppression. Time-domain filtering appears to be more consistent with the modeling of point sources; moreover, time-domain filtering may resolve some fundamental inconsistencies associated with conventional space-filtered LES approaches.
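
    The contrast with spatial filtering can be made concrete with a causal exponential time filter, one simple realization of time-domain filtering (the paper's actual filter kernel may differ); the signal below is synthetic.

        import numpy as np

        def time_filter(u, dt, delta):
            """Causal exponential filter: d(ubar)/dt = (u - ubar) / delta."""
            ubar = np.empty_like(u)
            ubar[0] = u[0]
            a = dt / delta
            for i in range(1, len(u)):
                ubar[i] = ubar[i - 1] + a * (u[i] - ubar[i - 1])
            return ubar

        t = np.linspace(0.0, 10.0, 2001)
        u = np.sin(t) + 0.3 * np.sin(40.0 * t)      # resolved motion + fast fluctuation
        ubar = time_filter(u, dt=t[1] - t[0], delta=0.5)
        usub = u - ubar                             # sub-filter (modeled) component
        print("rms of sub-filter part:", np.sqrt(np.mean(usub**2)))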

  20. Large Eddy Simulations of Severe Convection Induced Turbulence

    NASA Technical Reports Server (NTRS)

    Ahmad, Nash'at; Proctor, Fred

    2011-01-01

    Convective storms can pose a serious risk to aviation operations since they are often accompanied by turbulence, heavy rain, hail, icing, lightning, strong winds, and poor visibility. They can cause major delays in air traffic due to the re-routing of flights and the disruption of operations at airports in the vicinity of the storm system. In this study, the Terminal Area Simulation System is used to simulate five different convective events ranging from a mesoscale convective complex to isolated storms. The occurrence of convection-induced turbulence is analyzed from these simulations. The validation of model results against radar data and other observations is reported, and an aircraft-centric turbulence hazard metric calculated for each case is discussed. The turbulence analysis showed that large pockets of significant turbulence hazard can be found in regions of low radar reflectivity. Moderate and severe turbulence was often found in building cumulus turrets and overshooting tops.

  1. Large eddy simulation of a wing-body junction flow

    NASA Astrophysics Data System (ADS)

    Ryu, Sungmin; Emory, Michael; Campos, Alejandro; Duraisamy, Karthik; Iaccarino, Gianluca

    2014-11-01

    We present numerical simulations of the wing-body junction flow experimentally investigated by Devenport & Simpson (1990). Wall-junction flows are common in engineering applications, but the relevant flow physics close to the corner region is not well understood. Moreover, the performance of turbulence models for the body-junction case is not well characterized. Motivated by these gaps, we have numerically investigated the case with both Reynolds-averaged Navier-Stokes (RANS) and large eddy simulation (LES) approaches. The Vreman model, applied for the LES, and the SST k-ω model, for the RANS simulation, are validated with a focus on the ability to predict turbulence statistics near the junction region. A sensitivity study of the form of the Vreman model will also be presented. This work is funded under NASA Cooperative Agreement NNX11AI41A (Technical Monitor Dr. Stephen Woodruff)

  2. Large-Eddy Simulations of Dust Devils and Convective Vortices

    NASA Astrophysics Data System (ADS)

    Spiga, Aymeric; Barth, Erika; Gu, Zhaolin; Hoffmann, Fabian; Ito, Junshi; Jemmett-Smith, Bradley; Klose, Martina; Nishizawa, Seiya; Raasch, Siegfried; Rafkin, Scot; Takemi, Tetsuya; Tyler, Daniel; Wei, Wei

    2016-11-01

    In this review, we address the use of numerical computations called Large-Eddy Simulations (LES) to study dust devils, and the more general class of atmospheric phenomena they belong to (convective vortices). We describe the main elements of the LES methodology. We review the properties, statistics, and variability of dust devils and convective vortices resolved by LES in both terrestrial and Martian environments. The current challenges faced by modelers using LES for dust devils are also discussed in detail.

  3. Cosmological fluid mechanics with adaptively refined large eddy simulations

    NASA Astrophysics Data System (ADS)

    Schmidt, W.; Almgren, A. S.; Braun, H.; Engels, J. F.; Niemeyer, J. C.; Schulz, J.; Mekuria, R. R.; Aspden, A. J.; Bell, J. B.

    2014-06-01

    We investigate turbulence generated by cosmological structure formation by means of large eddy simulations using adaptive mesh refinement. In contrast to the widely used implicit large eddy simulations, which resolve a limited range of length-scales and treat the effect of turbulent velocity fluctuations below the grid scale solely by numerical dissipation, we apply a subgrid-scale model for the numerically unresolved fraction of the turbulence energy. For simulations with adaptive mesh refinement, we utilize a new methodology that allows us to adjust the scale-dependent energy variables in such a way that the sum of resolved and unresolved energies is globally conserved. We test our approach in simulations of randomly forced turbulence, a gravitationally bound cloud in a wind, and the Santa Barbara cluster. To treat inhomogeneous turbulence, we introduce an adaptive Kalman filtering technique that separates turbulent velocity fluctuations on resolved length-scales from the non-turbulent bulk flow. From the magnitude of the fluctuating component and the subgrid-scale turbulence energy, a total turbulent velocity dispersion of several hundred km s⁻¹ is obtained for the Santa Barbara cluster, while the low-density gas outside the accretion shocks is nearly devoid of turbulence. The energy flux through the turbulent cascade and the dissipation rate predicted by the subgrid-scale model correspond to dynamical time-scales around 5 Gyr, independent of numerical resolution.
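
    A scalar analogue of the fluctuation-separation step may help fix ideas: the sketch below uses a textbook random-walk Kalman filter to split a noisy velocity record into a slowly varying bulk part and a fluctuating residual. The process and observation variances are assumed values, and the authors' adaptive scheme is certainly more elaborate.

        import numpy as np

        rng = np.random.default_rng(1)
        t = np.linspace(0.0, 20.0, 4000)
        bulk = 100.0 * np.sin(0.2 * t)                     # smooth bulk flow (arbitrary units)
        u = bulk + 30.0 * rng.standard_normal(t.size)      # plus "turbulent" fluctuations

        q, r = 0.05, 900.0         # process / observation variances (assumed)
        x, p = u[0], 1.0
        mean_est = np.empty_like(u)
        for i, obs in enumerate(u):
            p += q                                 # predict (random-walk state model)
            gain = p / (p + r)                     # Kalman gain
            x += gain * (obs - x)                  # update toward the observation
            p *= 1.0 - gain
            mean_est[i] = x

        fluct = u - mean_est                       # resolved fluctuating component
        print("estimated velocity dispersion:", fluct.std())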

  4. High Speed Networking and Large-scale Simulation in Geodynamics

    NASA Technical Reports Server (NTRS)

    Kuang, Weijia; Gary, Patrick; Seablom, Michael; Truszkowski, Walt; Odubiyi, Jide; Jiang, Weiyuan; Liu, Dong

    2004-01-01

    Large-scale numerical simulation has been one of the most important approaches for understanding global geodynamical processes. In this approach, peta-scale floating point operations (pflops) are often required to carry out a single physically-meaningful numerical experiment. For example, to model convective flow in the Earth's core and the generation of the geomagnetic field (the geodynamo), simulating one magnetic free-decay time (approximately 15000 years) with a modest resolution of 150 in each of the three spatial dimensions would require approximately 0.2 pflops. If such a numerical model is used to predict geomagnetic secular variation over decades and longer, with e.g. an ensemble Kalman filter assimilation approach, approximately 30 (and perhaps more) independent simulations of similar scale would be needed for one data assimilation analysis. Obviously, such a simulation would require an enormous computing resource that exceeds the capacity of any single facility currently at our disposal. One solution is to utilize a very fast network (e.g. 10 Gb/s optical networks) and available middleware (e.g. the Globus Toolkit) to allocate available but often heterogeneous resources for such large-scale computing efforts. At NASA GSFC, we are experimenting with such an approach by networking several clusters for geomagnetic data assimilation research. We shall present our initial testing results in the meeting.
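
    The cost accounting in the abstract can be made explicit with a few lines of arithmetic; the sustained throughput assumed below is illustrative, not a figure from the abstract.

        ops_per_run = 0.2e15      # 0.2 peta floating-point operations per experiment
        runs = 30                 # ensemble members per assimilation analysis
        total_ops = ops_per_run * runs
        sustained = 1.0e13        # assumed sustained throughput, 10 Tflop/s (illustrative)
        hours = total_ops / sustained / 3600.0
        print(f"total work: {total_ops:.1e} ops, about {hours:.2f} h at 10 Tflop/s")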

  5. Simulation requirements for the Large Deployable Reflector (LDR)

    NASA Technical Reports Server (NTRS)

    Soosaar, K.

    1984-01-01

    Simulation tools for the large deployable reflector (LDR) are discussed. These tools are often of the transfer-function variety. However, transfer functions are inadequate to represent time-varying systems, or multiple control systems with overlapping bandwidths characterized by multi-input, multi-output features. Frequency domain approaches are useful design tools, but a full-up simulation is needed. Because the high-frequency, multi-degree-of-freedom components encountered would demand a dedicated computer, non-real-time simulation is preferred. Large numerical analysis software programs are useful only to receive inputs and provide outputs to the next block, and should be kept out of the direct loop of the simulation. The following blocks make up the simulation. The thermal model block is a classical, non-steady-state heat transfer program. The quasistatic block deals with problems associated with rigid body control of reflector segments. The steady state block assembles data into equations of motion and dynamics. A differential raytrace is obtained to establish the change in wave aberrations. The observation scene is described. The focal plane module converts the photon intensity impinging on it into electron streams or into permanent film records.

  6. Optimal simulations of ultrasonic fields produced by large thermal therapy arrays using the angular spectrum approach.

    PubMed

    Zeng, Xiaozheng; McGough, Robert J

    2009-05-01

    The angular spectrum approach is evaluated for the simulation of focused ultrasound fields produced by large thermal therapy arrays. For an input pressure or normal particle velocity distribution in a plane, the angular spectrum approach rapidly computes the output pressure field in a three-dimensional volume. To determine the optimal combination of simulation parameters for angular spectrum calculations, the effect of the size, location, and the numerical accuracy of the input plane on the computed output pressure is evaluated. Simulation results demonstrate that angular spectrum calculations performed with an input pressure plane are more accurate than calculations with an input velocity plane. Results also indicate that when the input pressure plane is slightly larger than the array aperture and is located approximately one wavelength from the array, angular spectrum simulations have very small numerical errors for two-dimensional planar arrays. Furthermore, the root mean squared error from angular spectrum simulations asymptotically approaches a nonzero lower limit as the error in the input plane decreases. Overall, the angular spectrum approach is an accurate and robust method for thermal therapy simulations of large ultrasound phased arrays when the input pressure plane is computed with the fast nearfield method and an optimal combination of input parameters.
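
    The core of the method is a single forward and inverse 2-D FFT per propagation step. The sketch below implements the generic band-limited angular spectrum propagator for a monochromatic pressure plane; the piston source, grid spacing, and propagation distance are illustrative choices, not the paper's configuration.

        import numpy as np

        def angular_spectrum(p0, dx, z, k):
            """Propagate a complex pressure plane p0 (n x n, spacing dx) a distance z."""
            n = p0.shape[0]
            kx = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
            kt2 = kx[:, None] ** 2 + kx[None, :] ** 2      # transverse wavenumber^2
            kz = np.sqrt(np.maximum(k ** 2 - kt2, 0.0))    # evanescent part dropped
            return np.fft.ifft2(np.fft.fft2(p0) * np.exp(1j * kz * z))

        f, c = 1.0e6, 1500.0                    # 1 MHz source in water
        k = 2.0 * np.pi * f / c
        dx = (c / f) / 4.0                      # lambda/4 sampling
        n = 256
        x = (np.arange(n) - n / 2) * dx
        X, Y = np.meshgrid(x, x)
        p0 = ((X**2 + Y**2) < (5.0e-3) ** 2).astype(complex)   # 5 mm radius piston
        p = angular_spectrum(p0, dx, z=30.0e-3, k=k)
        print("peak |p| at z = 30 mm:", np.abs(p).max())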

  7. A high resolution finite volume method for efficient parallel simulation of casting processes on unstructured meshes

    SciTech Connect

    Kothe, D.B.; Turner, J.A.; Mosso, S.J.; Ferrell, R.C.

    1997-03-01

    We discuss selected aspects of a new parallel three-dimensional (3-D) computational tool for the unstructured mesh simulation of Los Alamos National Laboratory (LANL) casting processes. This tool, known as Telluride, draws upon robust, high resolution finite volume solutions of metal alloy mass, momentum, and enthalpy conservation equations to model the filling, cooling, and solidification of LANL castings. We briefly describe the current Telluride physical models and solution methods, then detail our parallelization strategy as implemented with Fortran 90 (F90). This strategy has yielded straightforward and efficient parallelization on distributed and shared memory architectures, aided in large part by the new parallel libraries JTpack90 for Krylov-subspace iterative solution methods and PGSLib for efficient gather/scatter operations. We illustrate our methodology and current capabilities with source code examples and parallel efficiency results for a LANL casting simulation.

  8. Evaluation of Cloud, Grid and HPC resources for big volume and variety of RCM simulations

    NASA Astrophysics Data System (ADS)

    Blanco, Carlos; Cofino, Antonio S.; Fernández, Valvanuz; Fernández, Jesús

    2016-04-01

    Cloud, Grid and High Performance Computing have changed the accessibility and availability of computing resources for Earth Science research communities, especially for the Regional Climate Model (RCM) community. These paradigms are modifying the way RCM applications are executed. By using these technologies, the number, variety and complexity of experiments and resources used by RCM simulations are increasing substantially. But although computational capacity is increasing, the traditional applications and tools used by the community are not adequate to manage this large volume and variety of experiments and computing resources. In this contribution, we evaluate the challenges of executing RCMs on Grid, Cloud and HPC resources and how to tackle them. For this purpose, the WRF model is used as a well-known representative application for RCM simulations. Grid and Cloud infrastructures provided by EGI's VOs (esr, earth.vo.ibergrid and fedcloud.egi.eu) are evaluated, as well as HPC resources from the PRACE infrastructure and institutional clusters. As a solution to these challenges we use the WRF4G framework, which provides a good basis for managing a large volume and variety of computing resources for climate simulation experiments. This work is partially funded by the "Programa de Personal Investigador en Formación Predoctoral" from Universidad de Cantabria, co-funded by the Regional Government of Cantabria.

  9. Direct Molecular Simulation of Gradient-Driven Diffusion of Large Molecules using Constant Pressure

    SciTech Connect

    Heffelfinger, G.S.; Thompson, A.P.

    1998-12-23

    Dual control volume grand canonical molecular dynamics (DCV-GCMD) is a boundary-driven non-equilibrium molecular dynamics technique for simulating gradient-driven diffusion in multi-component systems. Two control volumes are established at opposite ends of the simulation box. Constant temperature and chemical potential of diffusing species are imposed in the control volumes. This results in stable chemical potential gradients and steady-state diffusion fluxes in the region between the control volumes. We present results and detailed analysis for a new constant-pressure variant of the DCV-GCMD method in which one of the diffusing species for which a steady-state diffusion flux exists does not have to be inserted or deleted. Constant temperature, pressure and chemical potential of all diffusing species except one are imposed in the control volumes. The constant-pressure method can be applied to situations in which insertion and deletion of large molecules would be prohibitively difficult. As an example, we used the method to simulate diffusion in a binary mixture of spherical particles with a 2:1 size ratio. Steady-state diffusion fluxes of both diffusing species were established. The constant-pressure diffusion coefficients agreed closely with the results of the standard constant-volume calculations. In addition, we show how the concentration, chemical potential and flux profiles can be used to calculate local binary and Maxwell-Stefan diffusion coefficients. In the case of the 2:1 size ratio mixture, we found that the binary diffusion coefficients were asymmetric and composition dependent, whereas the Maxwell-Stefan diffusion coefficients changed very little with composition and were symmetric. This last result verified that the Gibbs-Duhem relation was satisfied locally, thus validating the assumption of local equilibrium.
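
    The grand canonical moves that hold the chemical potential fixed inside each control volume reduce to two standard acceptance probabilities, sketched below in reduced units; the thermal de Broglie volume and the trial energies are placeholders, and the constant-pressure variant's extra bookkeeping is not shown.

        import math

        def acc_insert(mu, dU, N, V, beta, lam3):
            """Acceptance probability for inserting particle N+1 with energy change dU."""
            return min(1.0, V / (lam3 * (N + 1)) * math.exp(beta * (mu - dU)))

        def acc_delete(mu, dU, N, V, beta, lam3):
            """Acceptance probability for deleting a particle (dU = energy change)."""
            return min(1.0, lam3 * N / V * math.exp(-beta * mu - beta * dU))

        # illustrative numbers in reduced units
        print(acc_insert(mu=-3.0, dU=-0.5, N=100, V=500.0, beta=1.0, lam3=1.0))
        print(acc_delete(mu=-3.0, dU=0.5, N=100, V=500.0, beta=1.0, lam3=1.0))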

  10. Exposing earth surface process model simulations to a large audience

    NASA Astrophysics Data System (ADS)

    Overeem, I.; Kettner, A. J.; Borkowski, L.; Russell, E. L.; Peddicord, H.

    2015-12-01

    The Community Surface Dynamics Modeling System (CSDMS) represents a diverse group of >1300 scientists who develop and apply numerical models to better understand the Earth's surface. CSDMS has a mandate to make the public more aware of model capabilities and therefore started sharing state-of-the-art surface process modeling results with large audiences. One platform to reach audiences outside the science community is through museum displays on 'Science on a Sphere' (SOS). Developed by NOAA, SOS is a giant globe, linked with computers and multiple projectors, that can display data and animations on a sphere. CSDMS has developed and contributed model simulation datasets for the SOS system since 2014, including hydrological processes, coastal processes, and human interactions with the environment. Model simulations of a hydrological and sediment transport model (WBM-SED) illustrate global river discharge patterns. WAVEWATCH III simulations have been specifically processed to show the impacts of hurricanes on ocean waves, with focus on hurricane Katrina and superstorm Sandy. A large world dataset of dams built over the last two centuries gives an impression of the profound influence of humans on water management. Given the exposure of SOS, CSDMS aims to contribute at least 2 model datasets a year, and will soon provide displays of global river sediment fluxes and changes of the sea-ice-free season along the Arctic coast. Over 100 facilities worldwide show these numerical model displays to an estimated 33 million people every year. Dataset storyboards and teacher follow-up materials associated with the simulations are developed to address common core science K-12 standards. CSDMS dataset documentation aims to make people aware of the fact that they are looking at numerical model results, that the underlying models have inherent assumptions and simplifications, and that limitations are known. CSDMS contributions aim to familiarize large audiences with the use of numerical models.

  11. Large-scale simulations of layered double hydroxide nanocomposite materials

    NASA Astrophysics Data System (ADS)

    Thyveetil, Mary-Ann

    Layered double hydroxides (LDHs) have the ability to intercalate a multitude of anionic species. Atomistic simulation techniques such as molecular dynamics have provided considerable insight into the behaviour of these materials. We review these techniques and recent algorithmic advances which considerably improve the performance of MD applications. In particular, we discuss how the advent of high performance computing and computational grids has allowed us to explore large scale models with considerable ease. Our simulations have been heavily reliant on computational resources on the UK's NGS (National Grid Service), the US TeraGrid and the Distributed European Infrastructure for Supercomputing Applications (DEISA). In order to utilise computational grids we rely on grid middleware to launch, computationally steer and visualise our simulations. We have integrated the RealityGrid steering library into the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS), which has enabled us to perform remote computational steering and visualisation of molecular dynamics simulations on grid infrastructures. We also use the Application Hosting Environment (AHE) to launch simulations on remote supercomputing resources, and we show that data transfer rates between local clusters and supercomputing resources can be considerably enhanced by using optically switched networks. We perform large scale molecular dynamics simulations of MgAl-LDHs intercalated with either chloride ions or a mixture of DNA and chloride ions. The systems exhibit undulatory modes, which are suppressed in smaller scale simulations, caused by the collective thermal motion of atoms in the LDH layers. Thermal undulations provide elastic properties of the system including the bending modulus, Young's moduli and Poisson's ratios. To explore the interaction between LDHs and DNA, we use molecular dynamics techniques to perform simulations of double stranded, linear and plasmid DNA.

  12. Nuclear Engine System Simulation (NESS). Volume 1: Program user's guide

    NASA Technical Reports Server (NTRS)

    Pelaccio, Dennis G.; Scheil, Christine M.; Petrosky, Lyman J.

    1993-01-01

    A Nuclear Thermal Propulsion (NTP) engine system design analysis tool is required to support current and future Space Exploration Initiative (SEI) propulsion and vehicle design studies. Currently available NTP engine design models are those developed during the NERVA program in the 1960s and early 1970s, and are either highly specific to that design or are modifications of current liquid propulsion system design models. To date, NTP engine design models based on liquid-engine heritage lack integrated treatment of key NTP engine design features in the areas of reactor, shielding, multi-propellant capability, and multi-redundant pump feed fuel systems. Additionally, since the SEI effort is in the initial development stage, a robust, verified NTP analysis design tool could be of great use to the community. This effort developed an NTP engine system design analysis program (tool), known as the Nuclear Engine System Simulation (NESS) program, to support ongoing and future engine system and stage design study efforts. In this effort, Science Applications International Corporation's (SAIC) NTP version of the Expanded Liquid Engine Simulation (ELES) program was modified extensively to include Westinghouse Electric Corporation's near-term solid-core reactor design model. The ELES program has extensive capability to conduct preliminary system design analysis of liquid rocket systems and vehicles. The program is modular in nature and is versatile in terms of modeling state-of-the-art component and system options as discussed. The Westinghouse reactor design model, which was integrated into the NESS program, is based on the near-term solid-core ENABLER NTP reactor design concept. This program is now capable of accurately modeling (characterizing) a complete near-term solid-core NTP engine system in great detail, for a number of design options, in an efficient manner. The following discussion summarizes the overall analysis methodology, key assumptions, and capabilities associated with the NESS program.

  13. Micro Blowing Simulations Using a Coupled Finite-Volume Lattice-Boltzmann LES Approach

    NASA Technical Reports Server (NTRS)

    Menon, S.; Feiz, H.

    1990-01-01

    Three-dimensional large-eddy simulations (LES) of single and multiple jet-in-cross-flow (JICF) configurations are conducted using the 19-bit Lattice Boltzmann Equation (LBE) method coupled with a conventional finite-volume (FV) scheme. In this coupled LBE-FV approach, the LBE-LES is employed to simulate the flow inside the jet nozzles while the FV-LES is used to simulate the crossflow. The key application of this technique is the study of the micro-blowing technique (MBT) for drag control, similar to the recent experiments at NASA/GRC. It is necessary to resolve the flow inside the micro-blowing and suction holes with high resolution without being restricted by the FV time-step limit. The coupled LBE-FV-LES approach achieves these objectives in a computationally efficient manner. A single jet in crossflow case is used for validation purposes, and the results are compared with experimental data and a full LBE-LES simulation. Good agreement with the data is obtained. Subsequently, MBT over a flat plate with a porosity of 25% is simulated using 9 jets in a compressible cross flow at a Mach number of 0.4. It is shown that MBT suppresses the near-wall vortices and reduces the skin friction by up to 50 percent. This is in good agreement with experimental data.
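
    The LBE side of such a coupling follows the usual stream-and-collide cycle. The sketch below shows a BGK update on a two-dimensional D2Q9 lattice rather than the paper's 19-velocity 3-D model, since the algorithmic skeleton is the same; the domain is periodic and uncoupled from any finite-volume solver, and all parameters are illustrative.

        import numpy as np

        w = np.array([4/9] + [1/9]*4 + [1/36]*4)               # D2Q9 weights
        c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],
                      [1,1],[-1,1],[-1,-1],[1,-1]])            # lattice velocities
        nx = ny = 64
        tau = 0.6                                              # BGK relaxation time
        f = np.ones((9, nx, ny)) * w[:, None, None]            # fluid at rest, rho = 1

        def equilibrium(rho, ux, uy):
            cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
            u2 = ux**2 + uy**2
            return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*u2)

        for step in range(100):
            for i in range(9):                                 # streaming
                f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
            rho = f.sum(axis=0)                                # macroscopic moments
            ux = (c[:, 0, None, None] * f).sum(axis=0) / rho
            uy = (c[:, 1, None, None] * f).sum(axis=0) / rho
            f += (equilibrium(rho, ux, uy) - f) / tau          # BGK collision

        print("mass conserved:", np.isclose(rho.sum(), nx * ny))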

  14. Parallel continuous simulated tempering and its applications in large-scale molecular simulations

    SciTech Connect

    Zang, Tianwu; Yu, Linglin; Zhang, Chong; Ma, Jianpeng

    2014-07-28

    In this paper, we introduce a parallel continuous simulated tempering (PCST) method for enhanced sampling in studying large complex systems. It mainly inherits the continuous simulated tempering (CST) method from our previous studies [C. Zhang and J. Ma, J. Chem. Phys. 130, 194112 (2009); C. Zhang and J. Ma, J. Chem. Phys. 132, 244101 (2010)], while adopting the spirit of parallel tempering (PT), or the replica exchange method, by employing multiple copies with different temperature distributions. Differing from conventional PT methods, despite the large stride of the total temperature range, the PCST method requires very few copies of simulations, typically 2-3 copies, yet it is still capable of maintaining a high rate of exchange between neighboring copies. Furthermore, in the PCST method, the size of the system does not dramatically affect the number of copies needed because the exchange rate is independent of the total potential energy, thus providing an enormous advantage over conventional PT methods in studying very large systems. The sampling efficiency of PCST was tested in the two-dimensional Ising model, a Lennard-Jones liquid, and an all-atom folding simulation of a small globular protein, Trp-cage, in explicit solvent. The results demonstrate that the PCST method significantly improves sampling efficiency compared with other methods and is particularly effective in simulating systems with long relaxation or correlation times. We expect the PCST method to be a good alternative to parallel tempering methods in simulating large systems such as phase transitions and the dynamics of macromolecules in explicit solvent.
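
    For contrast with PCST, the conventional parallel-tempering swap the abstract refers to accepts an exchange between replicas i and j with the standard Metropolis probability below, which depends on the total potential energies and therefore degrades for large systems; the numbers in the example are invented.

        import math, random

        def pt_swap(beta_i, u_i, beta_j, u_j):
            """Metropolis criterion for swapping replicas i and j in standard PT."""
            delta = (beta_i - beta_j) * (u_j - u_i)
            return random.random() < min(1.0, math.exp(-delta))

        # energies are extensive, so for large systems delta grows and the
        # acceptance collapses unless many closely spaced temperatures are used
        print(pt_swap(beta_i=1.00, u_i=-5000.0, beta_j=0.95, u_j=-4990.0))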

  15. Production of large resonant plasma volumes in microwave electron cyclotron resonance ion sources

    DOEpatents

    Alton, G.D.

    1998-11-24

    Microwave injection methods are disclosed for enhancing the performance of existing electron cyclotron resonance (ECR) ion sources. The methods are based on the use of high-power diverse frequency microwaves, including variable-frequency, multiple-discrete-frequency, and broadband microwaves. The methods effect large resonant "volume" ECR regions in the ion sources. The creation of these large ECR plasma volumes permits coupling of more microwave power into the plasma, resulting in the heating of a much larger electron population to higher energies, the effect of which is to produce higher charge state distributions and much higher intensities within a particular charge state than possible in present ECR ion sources. 5 figs.

  16. Production of large resonant plasma volumes in microwave electron cyclotron resonance ion sources

    DOEpatents

    Alton, Gerald D.

    1998-01-01

    Microwave injection methods for enhancing the performance of existing electron cyclotron resonance (ECR) ion sources. The methods are based on the use of high-power diverse frequency microwaves, including variable-frequency, multiple-discrete-frequency, and broadband microwaves. The methods effect large resonant "volume" ECR regions in the ion sources. The creation of these large ECR plasma volumes permits coupling of more microwave power into the plasma, resulting in the heating of a much larger electron population to higher energies, the effect of which is to produce higher charge state distributions and much higher intensities within a particular charge state than possible in present ECR ion sources.

  17. Tool Support for Parametric Analysis of Large Software Simulation Systems

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Gundy-Burlet, Karen; Pasareanu, Corina; Menzies, Tim; Barrett, Tony

    2008-01-01

    The analysis of large and complex parameterized software systems, e.g., systems simulation in aerospace, is very complicated and time-consuming due to the large parameter space, and the complex, highly coupled nonlinear nature of the different system components. Thus, such systems are generally validated only in regions local to anticipated operating points rather than through characterization of the entire feasible operational envelope of the system. We have addressed the factors deterring such an analysis with a tool to support envelope assessment: we utilize a combination of advanced Monte Carlo generation with n-factor combinatorial parameter variations to limit the number of cases, but still explore important interactions in the parameter space in a systematic fashion. Additional test-cases, automatically generated from models (e.g., UML, Simulink, Stateflow) improve the coverage. The distributed test runs of the software system produce vast amounts of data, making manual analysis impossible. Our tool automatically analyzes the generated data through a combination of unsupervised Bayesian clustering techniques (AutoBayes) and supervised learning of critical parameter ranges using the treatment learner TAR3. The tool has been developed around the Trick simulation environment, which is widely used within NASA. We will present this tool with a GN&C (Guidance, Navigation and Control) simulation of a small satellite system.
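
    One simple reading of combining Monte Carlo generation with n-factor combinatorial variation (here n = 2) is sketched below: every value combination of every pair of parameters is covered exactly, while the remaining parameters are drawn randomly. The parameter names and levels are invented; the tool's actual generator is not described in the abstract.

        import itertools, random

        params = {                      # invented parameters and levels
            "gain_p":      [0.5, 1.0, 2.0],
            "gain_i":      [0.0, 0.1, 0.2],
            "latency_ms":  [10, 50, 250],
            "mass_kg":     [80, 100, 120],
            "sensor_bias": [-0.1, 0.0, 0.1],
        }

        def two_factor_cases(params, seed=0):
            """Cover all value pairs for every parameter pair; fill the rest randomly."""
            rng = random.Random(seed)
            names = list(params)
            for p1, p2 in itertools.combinations(names, 2):
                for v1, v2 in itertools.product(params[p1], params[p2]):
                    case = {n: rng.choice(v) for n, v in params.items()}
                    case[p1], case[p2] = v1, v2        # force the covered pair
                    yield case

        cases = list(two_factor_cases(params))
        exhaustive = len(list(itertools.product(*params.values())))
        print(f"{len(cases)} two-factor cases versus {exhaustive} exhaustive combinations")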

  18. Large-scale lattice-Boltzmann simulations over lambda networks

    NASA Astrophysics Data System (ADS)

    Saksena, R.; Coveney, P. V.; Pinning, R.; Booth, S.

    Amphiphilic molecules are of immense industrial importance, mainly due to their tendency to align at interfaces in a solution of immiscible species, e.g., oil and water, thereby reducing surface tension. Depending on the concentration of amphiphiles in the solution, they may assemble into a variety of morphologies, such as lamellae, micelles, sponge and cubic bicontinuous structures exhibiting non-trivial rheological properties. The main objective of this work is to study the rheological properties of very large, defect-containing gyroidal systems (of up to 1024³ lattice sites) using the lattice-Boltzmann method. Memory requirements for the simulation of such large lattices exceed that available to us on most supercomputers, and so we use MPICH-G2/MPIg to investigate geographically distributed domain decomposition simulations across HPCx in the UK and TeraGrid in the US. Use of MPICH-G2/MPIg requires the port-forwarder to work with the grid middleware on HPCx. Data from the simulations is streamed to a high performance visualisation resource at UCL (London) for rendering and visualisation. (Presented at "Lighting the Blue Touchpaper for UK e-Science", the closing conference of the ESLEA Project, 26-28 March 2007, The George Hotel, Edinburgh, UK.)

  19. Large meteoroid's impact damage: review of available impact hazard simulators

    NASA Astrophysics Data System (ADS)

    Moreno-Ibáñez, M.; Gritsevich, M.; Trigo-Rodríguez, J. M.

    2016-01-01

    The damage caused by meter-sized meteoroids encountering the Earth is expected to be severe. Meter-sized objects in heliocentric orbits can release energies higher than 10⁸ J, either in the upper atmosphere through an energetic airblast or, if they reach the surface, through an impact that may create a crater, provoke an earthquake or trigger a tsunami. A limited variety of cases has been observed in the recent past (e.g. Tunguska, Carancas or Chelyabinsk). Hence, our knowledge has to be constrained with the help of theoretical studies and numerical simulations. There are several simulation programs which aim to forecast the impact consequences of such events. We have tested them using the recent case of the Chelyabinsk superbolide. In particular, Chelyabinsk belongs to the ten- to hundred-meter-sized objects which constitute the main source of risk to Earth, given the current difficulty in detecting them in advance. Furthermore, it was a well-documented case, thus allowing us to properly check the accuracy of the studied simulators. As we present, these open simulators provide a first approximation of the impact consequences. However, all of them fail to accurately determine the caused damage. We explain the observed discrepancies between the observed and simulated consequences with the following consideration: the large number of unknown properties of the potential impacting meteoroid, the atmospheric conditions, the flight dynamics and the uncertainty in the impact point itself hinder any modelling task. This difficulty can be partially overcome by reducing the number of unknowns using dimensional analysis and scaling laws. Although the description of the physical processes associated with atmospheric entry could still be further improved, we conclude that such an approach would significantly improve the efficiency of the simulators.

  20. Large breast compressions: Observations and evaluation of simulations

    SciTech Connect

    Tanner, Christine; White, Mark; Guarino, Salvatore; Hall-Craggs, Margaret A.; Douek, Michael; Hawkes, David J.

    2011-02-15

    Purpose: Several methods have been proposed to simulate large breast compressions such as those occurring during x-ray mammography. However, the evaluation of these methods against real data is rare. The aim of this study is to learn more about the deformation behavior of breasts and to assess a simulation method. Methods: Magnetic resonance (MR) images of 11 breasts before and after applying a relatively large in vivo compression in the medial direction were acquired. Nonrigid registration was employed to study the deformation behavior. Optimal material properties for finite element modeling were determined and their prediction performance was assessed. The realism of simulated compressions was evaluated by comparing the breast shapes on simulated and real mammograms. Results: Following image registration, 19 breast compressions from 8 women were studied. An anisotropic deformation behavior, with a reduced elongation in the anterior-posterior direction and an increased stretch in the inferior-superior direction, was observed. Using finite element simulations, the performance of isotropic and transverse isotropic material models in predicting the displacement of internal landmarks was compared. Isotropic materials reduced the mean displacement error of the landmarks from 23.3 to 4.7 mm, on average, after optimizing material properties with respect to breast surface alignment and image similarity. Statistically significantly smaller errors were achieved with transverse isotropic materials (4.1 mm, P=0.0045). Homogeneous material models performed substantially worse (transverse isotropic: 5.5 mm; isotropic: 6.7 mm). Of the parameters varied, the amount of anisotropy had the greatest influence on the results. Optimal material properties varied less when grouped by patient rather than by compression magnitude (mean: 0.72 vs 1.44). Employing these optimal materials for simulating mammograms from ten MR breast images of a different cohort resulted in more realistic breast shapes.

  1. Large Eddy Simulation of a Cavitating Multiphase Flow for Liquid Injection

    NASA Astrophysics Data System (ADS)

    Cailloux, M.; Helie, J.; Reveillon, J.; Demoulin, F. X.

    2015-12-01

    This paper presents a numerical method for modelling a compressible multiphase flow that involves phase transition between liquid and vapour in the context of gasoline injection. A discontinuous compressible two-fluid mixture based on a Volume of Fluid (VOF) implementation is employed to represent the phases of liquid, vapour and air. The mass transfer between phases is modelled by standard models such as Kunz or Schnerr-Sauer, but including the presence of air in the gas phase. Turbulence is modelled using a Large Eddy Simulation (LES) approach to capture unsteady behaviour and coherent structures. The modelling approach ultimately matches experimental data favourably concerning the effect of cavitation on the atomisation process.

  2. Inviscid Wall-Modeled Large Eddy Simulations for Improved Efficiency

    NASA Astrophysics Data System (ADS)

    Aikens, Kurt; Craft, Kyle; Redman, Andrew

    2015-11-01

    The accuracy of an inviscid flow assumption for wall-modeled large eddy simulations (LES) is examined because of its ability to reduce simulation costs. This assumption is not generally applicable for wall-bounded flows due to the high velocity gradients found near walls. In wall-modeled LES, however, neither the viscous near-wall region nor the viscous length scales in the outer flow are resolved. Therefore, the viscous terms in the Navier-Stokes equations have little impact on the resolved flowfield. Zero pressure gradient flat plate boundary layer results are presented for both viscous and inviscid simulations using a wall model developed previously. The results are very similar and compare favorably to those from another wall model methodology and to experimental data. Furthermore, the inviscid assumption reduces simulation costs by about 25% and 39% for supersonic and subsonic flows, respectively. Future research directions are discussed, as are preliminary efforts to extend the wall model to include the effects of unresolved wall roughness. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575. Computational resources on TACC Stampede were provided under XSEDE allocation ENG150001.

  3. Numerical methods for large eddy simulation of acoustic combustion instabilities

    NASA Astrophysics Data System (ADS)

    Wall, Clifton T.

    Acoustic combustion instabilities occur when interaction between the combustion process and acoustic modes in a combustor results in periodic oscillations in pressure, velocity, and heat release. If sufficiently large in amplitude, these instabilities can cause operational difficulties or the failure of combustor hardware. In many situations, the dominant instability is the result of the interaction between a low frequency acoustic mode of the combustor and the large scale hydrodynamics. Large eddy simulation (LES), therefore, is a promising tool for the prediction of these instabilities, since both the low frequency acoustic modes and the large scale hydrodynamics are well resolved in LES. Problems with the tractability of such simulations arise, however, due to the difficulty of solving the compressible Navier-Stokes equations efficiently at low Mach number and due to the large number of acoustic periods that are often required for such instabilities to reach limit cycles. An implicit numerical method for the solution of the compressible Navier-Stokes equations has been developed which avoids the acoustic CFL restriction, allowing for significant efficiency gains at low Mach number while still resolving the low frequency acoustic modes of interest. In the limit of a uniform grid, the numerical method causes no artificial damping of acoustic waves. New, non-reflecting boundary conditions have also been developed for use with the characteristic-based approach of Poinsot and Lele (1992). The new boundary conditions are implemented in a manner which allows for significant reduction of the computational domain of an LES by eliminating the need to perform LES in regions where one-dimensional acoustics significantly affect the instability but details of the hydrodynamics do not. These new numerical techniques have been demonstrated in an LES of an experimental combustor. The new techniques are shown to be an efficient means of performing LES of acoustic combustion instabilities.

  4. Hydrothermal fluid flow and deformation in large calderas: Inferences from numerical simulations

    USGS Publications Warehouse

    Hurwitz, S.; Christiansen, L.B.; Hsieh, P.A.

    2007-01-01

    Inflation and deflation of large calderas is traditionally interpreted as being induced by volume change of a discrete source embedded in an elastic or viscoelastic half-space, though it has also been suggested that hydrothermal fluids may play a role. To test the latter hypothesis, we carry out numerical simulations of hydrothermal fluid flow and poroelastic deformation in calderas by coupling two numerical codes: (1) TOUGH2 [Pruess et al., 1999], which simulates flow in porous or fractured media, and (2) BIOT2 [Hsieh, 1996], which simulates fluid flow and deformation in a linearly elastic porous medium. In the simulations, high-temperature water (350 °C) is injected at variable rates into a cylinder (radius 50 km, height 3-5 km). A sensitivity analysis indicates that small differences in the values of permeability and its anisotropy, the depth and rate of hydrothermal injection, and the values of the shear modulus may lead to significant variations in the magnitude, rate, and geometry of ground surface displacement, or uplift. Some of the simulated uplift rates are similar to observed uplift rates in large calderas, suggesting that the injection of aqueous fluids into the shallow crust may explain some of the deformation observed in calderas.

  5. Large-eddy simulation of turbulent circular jet flows

    SciTech Connect

    Jones, S. C.; Sotiropoulos, F.; Sale, M. J.

    2002-07-01

    This report presents a numerical method for carrying out large-eddy simulations (LES) of turbulent free shear flows and an application of the method to simulate the flow generated by a nozzle discharging into a stagnant reservoir. The objective of the study was to elucidate the complex features of the instantaneous flow field to help interpret the results of recent biological experiments in which live fish were exposed to the jet shear zone. The fish-jet experiments were conducted at the Pacific Northwest National Laboratory (PNNL) under the auspices of the U.S. Department of Energy’s Advanced Hydropower Turbine Systems program. The experiments were designed to establish critical thresholds of shear and turbulence-induced loads to guide the development of innovative, fish-friendly hydropower turbine designs.

  6. Large eddy simulation of sheet to cloud cavitation

    NASA Astrophysics Data System (ADS)

    Bhatt, Mrugank; Mahesh, Krishnan

    2016-11-01

    Large eddy simulation is used to study sheet to cloud cavitation. A homogeneous mixture model is employed to represent the multiphase mixture of water and water vapor. A novel predictor-corrector method is used to numerically solve the compressible Navier-Stokes equations for the liquid/vapor mixture along with a transport equation for the vapor mass fraction. The algorithm is implemented on an unstructured grid and a parallel platform, with a fully coupled implicit time advancement of both viscous and advection terms. Simulation of sheet to cloud cavitation over a wedge at a Reynolds number Re = 200,000 and cavitation number σ = 2.1 is performed. A propagating condensation shock similar to the one observed in the experiments of Harish et al. is observed in the computed flow field. Results will be presented and the flow physics will be discussed. This work is supported by the Office of Naval Research.

  7. Implicit large eddy simulation of shock-driven material mixing.

    PubMed

    Grinstein, F F; Gowardhan, A A; Ristorcelli, J R

    2013-11-28

    Under-resolved computer simulations are typically unavoidable in practical turbulent flow applications exhibiting extreme geometrical complexity and a broad range of length and time scales. An important unsettled issue is whether filtered-out and subgrid spatial scales can significantly alter the evolution of resolved larger scales of motion and practical flow integral measures. Predictability issues in implicit large eddy simulation of under-resolved mixing of material scalars driven by under-resolved velocity fields and initial conditions are discussed in the context of shock-driven turbulent mixing. The particular focus is on effects of resolved spectral content and interfacial morphology of initial conditions on transitional and late-time turbulent mixing in the fundamental planar shock-tube configuration.

  8. Molecular Dynamics Simulations from SNL's Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS)

    DOE Data Explorer

    Plimpton, Steve; Thompson, Aidan; Crozier, Paul

    LAMMPS (http://lammps.sandia.gov/index.html) stands for Large-scale Atomic/Molecular Massively Parallel Simulator and is a code that can be used to model atoms or, as the LAMMPS website says, to act as a parallel particle simulator at the atomic, meso, or continuum scale. This Sandia-based website provides a long list of animations from large simulations. These were created using different visualization packages to read LAMMPS output, and each one provides the name of the PI and a brief description of the work done or the visualization package used. See also the static images produced from simulations at http://lammps.sandia.gov/pictures.html. The foundation paper for LAMMPS is: S. Plimpton, Fast Parallel Algorithms for Short-Range Molecular Dynamics, J Comp Phys, 117, 1-19 (1995), but the website also lists other papers describing contributions to LAMMPS over the years.

  9. Mechanistic simulation of normal-tissue damage in radiotherapy—implications for dose-volume analyses

    NASA Astrophysics Data System (ADS)

    Rutkowska, Eva; Baker, Colin; Nahum, Alan

    2010-04-01

    A radiobiologically based 3D model of normal tissue has been developed in which complications are generated when the tissue is 'irradiated'. The aim is to provide insight into the connection between dose-distribution characteristics, different organ architectures and complication rates beyond that obtainable with simple DVH-based analytical NTCP models. In this model the organ consists of a large number of functional subunits (FSUs), populated by stem cells which are killed according to the LQ model. A complication is triggered if the density of FSUs in any 'critical functioning volume' (CFV) falls below some threshold. The (fractional) CFV determines the organ architecture and can be varied continuously from small (series-like behaviour) to large (parallel-like). A key feature of the model is its ability to account for the spatial dependence of dose distributions. Simulations were carried out to investigate correlations between dose-volume parameters and the incidence of 'complications' using different pseudo-clinical dose distributions. Correlations between dose-volume parameters and outcome depended on characteristics of the dose distributions and on organ architecture. As anticipated, the mean dose and V20 correlated most strongly with outcome for a parallel organ, and the maximum dose for a serial organ. Interestingly, better correlation was obtained between the 3D computer model and the LKB model with dose distributions typical for serial organs than with those typical for parallel organs. This work links the results of dose-volume analyses to dataset characteristics typical for serial and parallel organs, and it may help investigators interpret the results from clinical studies.
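
    A toy version of the described model fits in a few lines: an organ as a grid of FSUs, linear-quadratic (LQ) cell kill, and a complication scored when the surviving-FSU fraction falls below the critical threshold. All parameter values below are invented for illustration, and the spatial CFV search is collapsed to a whole-organ check.

        import numpy as np

        rng = np.random.default_rng(0)
        alpha, beta = 0.15, 0.05        # LQ parameters (Gy^-1, Gy^-2), invented
        n0 = 100                        # clonogenic cells per FSU
        nx = 50                         # organ modeled as an nx-by-nx grid of FSUs
        n_frac = 30                     # fractions

        # simple left-to-right dose gradient, 10-60 Gy total across the organ
        dose = np.tile(np.linspace(10.0, 60.0, nx), (nx, 1))
        d = dose / n_frac               # dose per fraction

        surv_frac = np.exp(-n_frac * (alpha * d + beta * d**2))   # LQ survival
        survivors = rng.binomial(n0, surv_frac)                   # stochastic cell kill
        fsu_alive = survivors > 0       # an FSU survives while any clonogen remains

        cfv = 0.3                       # critical functioning volume (fraction), invented
        complication = fsu_alive.mean() < cfv
        print(f"surviving FSU fraction {fsu_alive.mean():.2f} -> complication: {complication}")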

  10. Large Eddy Simulation of Vertical Axis Wind Turbine Wakes

    NASA Astrophysics Data System (ADS)

    Shamsoddin, Sina; Porté-Agel, Fernando

    2014-05-01

    In this study, large-eddy simulation (LES) is combined with a turbine model to investigate the wake behind a vertical-axis wind turbine (VAWT) in a three-dimensional turbulent flow. Two methods are used to model the subgrid-scale (SGS) stresses: (a) the Smagorinsky model, and (b) the modulated gradient model. To parameterize the effects of the VAWT on the flow, two VAWT models are developed: (a) the actuator surface model (ASM), in which the time-averaged turbine-induced forces are distributed on a surface swept by the turbine blades, i.e. the actuator surface, and (b) the actuator line model (ALM), in which the instantaneous blade forces are only spatially distributed on lines representing the blades, i.e. the actuator lines. This is the first time that LES is applied and validated for simulation of VAWT wakes by using either the ASM or the ALM techniques. In both models, blade-element theory is used to calculate the lift and drag forces on the blades. The results are compared with flow measurements in the wake of a model straight-bladed VAWT, carried out in the Institut de Mécanique Statistique de la Turbulence (IMST) water channel. Different combinations of SGS models with VAWT models are studied, and a fairly good overall agreement between simulation results and measurement data is observed. In general, the ALM is found to better capture the unsteady-periodic nature of the wake and shows a better agreement with the experimental data compared with the ASM. The modulated gradient model is also found to be a more reliable SGS stress modeling technique, compared with the Smagorinsky model, and it yields reasonable predictions of the mean flow and turbulence characteristics of a VAWT wake using its theoretically-determined model coefficient. Keywords: Vertical-axis wind turbines (VAWTs); VAWT wake; Large-eddy simulation; Actuator surface model; Actuator line model; Smagorinsky model; Modulated gradient model

  11. SimGen: A General Simulation Method for Large Systems.

    PubMed

    Taylor, William R

    2017-02-03

    SimGen is a stand-alone computer program that reads a script of commands to represent complex macromolecules, including proteins and nucleic acids, in a structural hierarchy that can then be viewed using an integral graphical viewer or animated through a high-level application programming interface in C++. Structural levels in the hierarchy range from α-carbon or phosphate backbones through secondary structure to domains, molecules, and multimers with each level represented in an identical data structure that can be manipulated using the application programming interface. Unlike most coarse-grained simulation approaches, the higher-level objects represented in SimGen can be soft, allowing the lower-level objects that they contain to interact directly. The default motion simulated by SimGen is a Brownian-like diffusion that can be set to occur across all levels of representation in the hierarchy. Links can also be defined between objects, which, when combined with large high-level random movements, result in an effective search strategy for constraint satisfaction, including structure prediction from predicted pairwise distances. The implementation of SimGen makes use of the hierarchic data structure to avoid unnecessary calculation, especially for collision detection, allowing it to be simultaneously run and viewed on a laptop computer while simulating large systems of over 20,000 objects. It has been used previously to model complex molecular interactions including the motion of a myosin-V dimer "walking" on an actin fibre, RNA stem-loop packing, and the simulation of cell motion and aggregation. Several extensions to this original functionality are described.

  12. Rapid estimate of solid volume in large tuff cores using a gas pycnometer

    SciTech Connect

    Thies, C.; Geddis, A.M.; Guzman, A.G.

    1996-09-01

    A thermally insulated, rigid-volume gas pycnometer system has been developed. The pycnometer chambers have been machined from solid PVC cylinders. Two chambers confine dry high-purity helium at different pressures. A thick-walled design ensures minimal heat exchange with the surrounding environment and a constant volume system while expansion takes place between the chambers. The internal energy of the gas is assumed constant over the expansion. The ideal gas law is used to estimate the volume of solid material sealed in one of the chambers. Temperature is monitored continuously and incorporated into the calculation of solid volume. Temperature variation between measurements is less than 0.1 °C. The data are used to compute grain density for oven-dried Apache Leap tuff core samples. The measured volume of solid and the sample bulk volume are used to estimate porosity and bulk density. Intrinsic permeability was estimated from the porosity and measured pore surface area and is compared to in-situ measurements by the air permeability method. The gas pycnometer accommodates large core samples (0.25 m length × 0.11 m diameter) and can measure solid volumes greater than 2.20 cm³ with less than 1% error.
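
    The working equation implied by the ideal-gas-law step is easy to state: if gas at P1 in the reference chamber (volume V1) expands into the sample chamber (volume V2 containing solid volume Vs) initially at P2, and equilibrates at Pf, then mass balance at constant temperature gives P1 V1 + P2 (V2 - Vs) = Pf (V1 + V2 - Vs). A sketch with invented numbers (not the Apache Leap data):

        def solid_volume(v1, v2, p1, p2, pf):
            """Solve P1*V1 + P2*(V2 - Vs) = Pf*(V1 + V2 - Vs) for Vs."""
            return v2 + v1 * (pf - p1) / (pf - p2)

        # invented example: two 1000 cm^3 chambers, helium at 200 and 100 kPa,
        # equilibrating at 160 kPa after the valve opens
        vs = solid_volume(v1=1000.0, v2=1000.0, p1=200.0, p2=100.0, pf=160.0)
        print(f"solid volume: {vs:.1f} cm^3")     # -> 333.3 cm^3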

  13. Large-volume diamond cells for neutron diffraction above 90 GPa

    SciTech Connect

    Boehler, Reinhard; Guthrie, Malcolm; Molaison, Jamie J; Moreira Dos Santos, Antonio F; Sinogeikin, Stanislav; Machida, Shinichi; Pradhan, Neelam; Tulk, Christopher A

    2013-01-01

    Quantitative high pressure neutron-diffraction measurements have traditionally required large sample volumes of at least 25 mm³ due to limited neutron flux. Therefore, pressures in these experiments have been limited to below 25 GPa. In comparison, for X-ray diffraction, sample volumes in conventional diamond cells for pressures up to 100 GPa have been less than 1×10⁻⁴ mm³. Here, we report a new design of strongly supported conical diamond anvils for neutron diffraction that has reached 94 GPa with a sample volume of 2×10⁻² mm³, a 100-fold increase. This sample volume is sufficient to measure full neutron-diffraction patterns of D₂O ice to this pressure at the high flux Spallation Neutrons and Pressure beamline at the Oak Ridge National Laboratory. This provides an almost fourfold extension of the previous pressure regime for such measurements.

  14. Contrail Formation in Aircraft Wakes Using Large-Eddy Simulations

    NASA Technical Reports Server (NTRS)

    Paoli, R.; Helie, J.; Poinsot, T. J.; Ghosal, S.

    2002-01-01

    In this work we analyze the issue of the formation of condensation trails ("contrails") in the near-field of an aircraft wake. The basic configuration consists of an exhaust engine jet interacting with a wing-tip trailing vortex. The procedure adopted relies on a mixed Eulerian/Lagrangian two-phase flow approach; a simple micro-physics model for ice growth has been used to couple the ice and vapor phases. Large eddy simulations have been carried out at a realistic flight Reynolds number to evaluate the effects of turbulent mixing and wake vortex dynamics on ice-growth characteristics and vapor thermodynamic properties.

  15. Large eddy simulation of the flow in a transpired channel

    NASA Technical Reports Server (NTRS)

    Piomelli, Ugo; Moin, Parviz; Ferziger, Joel

    1989-01-01

    The flow in a transpired channel has been computed by large eddy simulation. The numerical results compare very well with experimental data. Blowing decreases the wall shear stress and enhances turbulent fluctuations, while suction has the opposite effect. The wall layer thickness normalized by the local wall shear velocity and kinematic viscosity increases on the blowing side of the channel and decreases on the suction side. Suction causes more rapid decay of the spectra, larger mean streak spacing and higher two-point correlations. On the blowing side, the wall layer structures lie at a steeper angle to the wall, whereas on the suction side this angle is shallower.

  16. Large-eddy simulation of a turbulent mixing layer

    NASA Technical Reports Server (NTRS)

    Mansour, N. N.; Ferziger, J. H.; Reynolds, W. C.

    1978-01-01

    The three dimensional, time dependent (incompressible) vorticity equations were used to simulate numerically the decay of isotropic box turbulence and time developing mixing layers. The vorticity equations were spatially filtered to define the large scale turbulence field, and the subgrid scale turbulence was modeled. A general method was developed to show numerical conservation of momentum, vorticity, and energy. The terms that arise from filtering the equations were treated (for both periodic boundary conditions and no stress boundary conditions) in a fast and accurate way by using fast Fourier transforms. Use of vorticity as the principal variable is shown to produce results equivalent to those obtained by use of the primitive variable equations.

  17. On integrating large eddy simulation and laboratory turbulent flow experiments.

    PubMed

    Grinstein, Fernando F

    2009-07-28

    Critical issues involved in large eddy simulation (LES) experiments relate to the treatment of unresolved subgrid scale flow features and required initial and boundary condition supergrid scale modelling. The inherently intrusive nature of both LES and laboratory experiments is noted in this context. Flow characterization issues become very challenging in validation and computational laboratory studies, where potential sources of discrepancies between predictions and measurements need to be clearly evaluated and controlled. A special focus of the discussion is devoted to turbulent initial condition issues.

  18. Parallel finite element simulation of large ram-air parachutes

    NASA Astrophysics Data System (ADS)

    Kalro, V.; Aliabadi, S.; Garrard, W.; Tezduyar, T.; Mittal, S.; Stein, K.

    1997-06-01

    In the near future, large ram-air parachutes are expected to provide the capability of delivering 21 ton payloads from altitudes as high as 25,000 ft. In development and test and evaluation of these parachutes the size of the parachute needed and the deployment stages involved make high-performance computing (HPC) simulations a desirable alternative to costly airdrop tests. Although computational simulations based on realistic, 3D, time-dependent models will continue to be a major computational challenge, advanced finite element simulation techniques recently developed for this purpose and the execution of these techniques on HPC platforms are significant steps toward meeting this challenge. In this paper, two approaches for analysis of the inflation and gliding of ram-air parachutes are presented. In one of the approaches the point mass flight mechanics equations are solved with the time-varying drag and lift areas obtained from empirical data. This approach is limited to parachutes with configurations similar to those for which data are available. The other approach is 3D finite element computations based on the Navier-Stokes equations governing the airflow around the parachute canopy and Newton's law of motion governing the 3D dynamics of the canopy, with the forces acting on the canopy calculated from the simulated flow field. At the earlier stages of canopy inflation the parachute is modelled as an expanding box, whereas at the later stages, as it expands, the box transforms to a parafoil and glides. These finite element computations are carried out on the massively parallel supercomputers CRAY T3D and Thinking Machines CM-5, typically with millions of coupled, non-linear finite element equations solved simultaneously at every time step or pseudo-time step of the simulation.
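
    The first approach can be sketched compactly: integrate the point-mass equations with time-varying drag and lift areas. In the sketch below the linear CdS(t) and ClS(t) ramps, the payload mass, and the air density are placeholder assumptions standing in for the empirical data the paper uses.

    ```python
    import numpy as np

    # Minimal sketch of point-mass flight mechanics with time-varying drag
    # and lift areas CdS(t), ClS(t). The ramps and constants are placeholders.

    g, rho, m = 9.81, 1.225, 9500.0  # gravity, air density, mass [kg] (assumed)

    def CdS(t):  # drag area ramps up as the canopy inflates (placeholder)
        return np.interp(t, [0.0, 10.0], [5.0, 300.0])

    def ClS(t):  # lift area once the parafoil takes shape (placeholder)
        return np.interp(t, [0.0, 10.0], [0.0, 600.0])

    def step(state, t, dt):
        vx, vz = state                    # horizontal and downward velocity
        V = np.hypot(vx, vz)
        q = 0.5 * rho * V**2
        # drag opposes velocity; lift acts normal to it in the vertical plane
        ax = (-q * CdS(t) * vx / V + q * ClS(t) * vz / V) / m
        az = g + (-q * CdS(t) * vz / V - q * ClS(t) * vx / V) / m
        return vx + ax * dt, vz + az * dt

    state, t, dt = (0.1, 1.0), 0.0, 0.01
    while t < 60.0:
        state, t = step(state, t, dt), t + dt
    print("steady glide velocity (vx, vz):", state)
    ```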

  19. Synthetic turbulence, fractal interpolation, and large-eddy simulation.

    PubMed

    Basu, Sukanta; Foufoula-Georgiou, Efi; Porté-Agel, Fernando

    2004-08-01

    Fractal interpolation has been proposed in the literature as an efficient way to construct closure models for the numerical solution of coarse-grained Navier-Stokes equations. It is based on synthetically generating a scale-invariant subgrid-scale field and analytically evaluating its effects on large resolved scales. In this paper, we propose an extension of previous work by developing a multiaffine fractal interpolation scheme and demonstrate that it preserves not only the fractal dimension but also the higher-order structure functions and the non-Gaussian probability density function of the velocity increments. Extensive a priori analyses of atmospheric boundary layer measurements further reveal that this multiaffine closure model has the potential for satisfactory performance in large-eddy simulations. The pertinence of this newly proposed methodology in the case of passive scalars is also discussed.
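
    As a rough illustration of the idea (not the authors' multiaffine scheme), subgrid detail can be synthesized by iterative midpoint insertion with scale-local stretching factors; a fixed magnitude |d| = 2^(-1/3) mimics Kolmogorov scaling of the increments, whereas the multiaffine construction draws the factors from level-dependent distributions to match higher-order structure functions.

    ```python
    import numpy as np

    # Minimal sketch of fractal interpolation by stochastic midpoint insertion.
    # The fixed stretching magnitude |d| = 2**(-1/3) is a monoaffine stand-in
    # for the paper's multiaffine scheme.

    rng = np.random.default_rng(0)

    def fractal_interpolate(u, n_levels=6, d_mag=2.0 ** (-1.0 / 3.0)):
        """Refine a coarse 1-D signal u by stochastic midpoint displacement."""
        u = np.asarray(u, dtype=float)
        for _ in range(n_levels):
            mid = 0.5 * (u[:-1] + u[1:])
            signs = rng.choice([-1.0, 1.0], size=mid.size)
            mid += signs * d_mag * (u[1:] - u[:-1]) / 2.0  # scale-local stretch
            refined = np.empty(u.size + mid.size)
            refined[0::2], refined[1::2] = u, mid
            u = refined
        return u

    coarse = np.array([0.0, 1.0, -0.5, 0.3])   # resolved-scale samples
    fine = fractal_interpolate(coarse)          # synthetic subgrid detail
    ```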

  20. Mechanically Cooled Large-Volume Germanium Detector Systems for Nuclear Explosion Monitoring DOENA27323-1

    SciTech Connect

    Hull, E.L.

    2006-07-28

    Compact, maintenance-free mechanical cooling systems are being developed to operate large volume germanium detectors for field applications. To accomplish this we are utilizing a newly available generation of Stirling-cycle mechanical coolers to operate the very largest volume germanium detectors with no maintenance. The user will be able to leave these systems unplugged on the shelf until needed. The flip of a switch will bring a system to life in ~1 hour for measurements. The maintenance-free operating lifetime of these detector systems will exceed 5 years. These features are necessary for remote, long-duration, liquid-nitrogen-free deployment of large-volume germanium gamma-ray detector systems for Nuclear Explosion Monitoring. The Radionuclide Aerosol Sampler/Analyzer (RASA) will greatly benefit from the availability of such detectors by eliminating the need for liquid nitrogen at RASA sites while still allowing the very largest available germanium detectors to be reliably utilized.

  1. High Energy Performance Tests of Large Volume LaBr{sub 3}:Ce Detector

    SciTech Connect

    Naqvi, A.A.; Gondal, M.A.; Khiari, F.Z.; Dastageer, M.A.; Maslehuddin, M.M.; Al-Amoudi, O.S.B.

    2015-07-01

    High energy prompt gamma ray tests of a large volume cylindrical 100 mm x 100 mm (height x diameter) LaBr{sub 3}:Ce detector were carried out using a portable neutron generator-based Prompt Gamma Neutron Activation Analysis (PGNAA) setup. In this study prompt gamma-ray yields were measured from water samples contaminated with toxic elements such as nickel, chromium and mercury compounds, with gamma ray energies up to 10 MeV. The experimental yields of prompt gamma-rays from the toxic elements were compared with the results of Monte Carlo calculations. In spite of its higher intrinsic background due to its larger volume, an excellent agreement between the experimental and calculated yields of high energy gamma-rays from Ni, Cr and Hg samples has been achieved for the large volume LaBr{sub 3}:Ce detector. (authors)

  2. Microstructure from simulated Brownian suspension flows at large shear rate

    NASA Astrophysics Data System (ADS)

    Morris, Jeffrey F.; Katyal, Bhavana

    2002-06-01

    Pair microstructure of concentrated Brownian suspensions in simple-shear flow is studied by sampling of configurations from dynamic simulations by the Stokesian Dynamics technique. Simulated motions are three dimensional with periodic boundary conditions to mimic an infinitely extended suspension. Hydrodynamic interactions through Newtonian fluid and Brownian motion are the only physical influences upon the motion of the monodisperse hard-sphere particles. The dimensionless parameters characterizing the suspension are the particle volume fraction and Péclet number, defined, respectively, as φ = (4π/3)na³ with n the number density and a the sphere radius, and Pe = 6πηγ̇a³/kT with η the fluid viscosity, γ̇ the shear rate, and kT the thermal energy. The majority of the results reported are from simulations at Pe = 1000; results of simulations at Pe = 1, 25, and 100 are also reported for φ = 0.3 and φ = 0.45. The pair structure is characterized by the pair distribution function, g(r) = P₁|₁(r)/n, where P₁|₁(r) is the conditional probability of finding a pair at a separation vector r. The structure under strong shearing exhibits an accumulation of pair probability at contact, and angular distortion (from spherical symmetry at Pe = 0), with both effects increasing with Pe. Flow simulations were performed at Pe = 1000 for eight volume fractions in the range 0.2 ⩽ φ ⩽ 0.585. For φ = 0.2-0.3, the pair structure at contact, g(|r| = 2) ≡ g(2), is found to exhibit a single region of strong correlation, g(2) ≫ 1, at points around the axis of compression, with a particle-deficient wake in the extensional zones. A qualitative change in microstructure is observed between φ = 0.3 and φ = 0.37. For φ ⩾ 0.37, the maximum g(2) lies at points in the shear plane nearly on the x axis of the bulk simple shear flow U_x = γ̇y, while at smaller φ, the maximum g(2) lies near the compressional axis; long-range string ordering is not observed. For φ = 0.3 and φ = 0.45, g(2) ~ Pe^0.7 for 1 ⩽ Pe ⩽ 1000, a
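
    A radially averaged version of the pair distribution function used above can be estimated directly from a periodic particle configuration. The sketch below uses the minimum-image convention and a random configuration for illustration; the paper additionally resolves the angular structure of g(r).

    ```python
    import numpy as np

    # Minimal sketch of estimating the (radially averaged) pair distribution
    # function g(r) from a periodic particle configuration.

    def pair_distribution(pos, box, n_bins=100):
        N = len(pos)
        r_max = box / 2.0
        dists = []
        for i in range(N - 1):
            d = pos[i + 1:] - pos[i]
            d -= box * np.round(d / box)          # minimum-image convention
            dists.append(np.sqrt((d**2).sum(axis=1)))
        dists = np.concatenate(dists)
        hist, edges = np.histogram(dists[dists < r_max],
                                   bins=n_bins, range=(0.0, r_max))
        r = 0.5 * (edges[:-1] + edges[1:])
        shell = 4.0 * np.pi * r**2 * np.diff(edges)  # spherical shell volumes
        n = N / box**3                               # number density
        return r, hist / (shell * n * N / 2.0)       # g(r) -> 1 for ideal gas

    rng = np.random.default_rng(1)
    r, g = pair_distribution(rng.uniform(0.0, 10.0, (500, 3)), box=10.0)
    ```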

  3. Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Givi, Peyman; Madnia, Cyrus K.; Steinberger, Craig J.

    1990-01-01

    This research is involved with the implementation of advanced computational schemes based on large eddy simulations (LES) and direct numerical simulations (DNS) to study the phenomenon of mixing and its coupling with chemical reactions in compressible turbulent flows. In the efforts related to LES, a research program to extend the present capabilities of this method was initiated for the treatment of chemically reacting flows. In the DNS efforts, the focus is on detailed investigations of the effects of compressibility, heat release, and non-equilibrium kinetics modeling in high speed reacting flows. Emphasis was on the simulations of simple flows, namely homogeneous compressible flows, and temporally developing high speed mixing layers.

  4. Evaluating lossy data compression on climate simulation data within a large ensemble

    NASA Astrophysics Data System (ADS)

    Baker, Allison H.; Hammerling, Dorit M.; Mickelson, Sheri A.; Xu, Haiying; Stolpe, Martin B.; Naveau, Phillipe; Sanderson, Ben; Ebert-Uphoff, Imme; Samarasinghe, Savini; De Simone, Francesco; Carbone, Francesco; Gencarelli, Christian N.; Dennis, John M.; Kay, Jennifer E.; Lindstrom, Peter

    2016-12-01

    High-resolution Earth system model simulations generate enormous data volumes, and retaining the data from these simulations often strains institutional storage resources. Further, these exceedingly large storage requirements negatively impact science objectives, for example, by forcing reductions in data output frequency, simulation length, or ensemble size. To lessen data volumes from the Community Earth System Model (CESM), we advocate the use of lossy data compression techniques. While lossy data compression does not exactly preserve the original data (as lossless compression does), lossy techniques have an advantage in terms of smaller storage requirements. To preserve the integrity of the scientific simulation data, the effects of lossy data compression on the original data should, at a minimum, not be statistically distinguishable from the natural variability of the climate system, and previous preliminary work with data from CESM has shown this goal to be attainable. However, to ultimately convince climate scientists that it is acceptable to use lossy data compression, we provide climate scientists with access to publicly available climate data that have undergone lossy data compression. In particular, we report on the results of a lossy data compression experiment with output from the CESM Large Ensemble (CESM-LE) Community Project, in which we challenge climate scientists to examine features of the data relevant to their interests, and attempt to identify which of the ensemble members have been compressed and reconstructed. We find that while detecting distinguishing features is certainly possible, the compression effects noticeable in these features are often unimportant or disappear in post-processing analyses. In addition, we perform several analyses that directly compare the original data to the reconstructed data to investigate the preservation, or lack thereof, of specific features critical to climate science. Overall, we conclude that applying
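
    The acceptance criterion described above, that compression effects should not exceed the natural variability of the climate system, can be sketched with synthetic data. Below, uniform quantization stands in for a real lossy compressor (e.g., fpzip or zfp), and the ensemble is random noise; both are assumptions for illustration only.

    ```python
    import numpy as np

    # Minimal sketch of the check described above: is the error introduced by
    # lossy compression distinguishable from natural (ensemble) variability?
    # Quantization stands in for a real lossy compressor.

    rng = np.random.default_rng(2)
    ensemble = rng.normal(288.0, 0.5, size=(30, 64, 128))  # 30 synthetic members

    def lossy_roundtrip(field, precision=0.01):
        """Uniform quantization as a stand-in lossy compress/reconstruct."""
        return np.round(field / precision) * precision

    member = ensemble[0]
    reconstructed = lossy_roundtrip(member)

    compression_error = np.abs(member - reconstructed).max()
    natural_spread = ensemble.std(axis=0).min()
    print("max error:", compression_error, "min ensemble spread:", natural_spread)
    print("error hidden by variability:", compression_error < natural_spread)
    ```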

  5. Development of a Solid Phase Extraction Method for Agricultural Pesticides in Large-Volume Water Samples

    EPA Science Inventory

    An analytical method using solid phase extraction (SPE) and analysis by gas chromatography/mass spectrometry (GC/MS) was developed for the trace determination of a variety of agricultural pesticides and selected transformation products in large-volume high-elevation lake water sa...

  6. A New Electropositive Filter for Concentrating Enterovirus and Norovirus from Large Volumes of Water - MCEARD

    EPA Science Inventory

    The detection of enteric viruses in environmental water usually requires the concentration of viruses from large volumes of water. The 1MDS electropositive filter is commonly used for concentrating enteric viruses from water but unfortunately these filters are not cost-effective...

  7. Sampling artifact in volume weighted velocity measurement. II. Detection in simulations and comparison with theoretical modeling

    NASA Astrophysics Data System (ADS)

    Zheng, Yi; Zhang, Pengjie; Jing, Yipeng

    2015-02-01

    Measuring the volume weighted velocity power spectrum suffers from a severe systematic error due to imperfect sampling of the velocity field from the inhomogeneous distribution of dark matter particles/halos in simulations or galaxies with velocity measurement. This "sampling artifact" depends on both the mean particle number density n̄_P and the intrinsic large scale structure (LSS) fluctuation in the particle distribution. (1) We report robust detection of this sampling artifact in N-body simulations. It causes ~12% underestimation of the velocity power spectrum at k = 0.1 h/Mpc for samples with n̄_P = 6 × 10⁻³ (Mpc/h)⁻³. This systematic underestimation increases with decreasing n̄_P and increasing k. Its dependence on the intrinsic LSS fluctuations is also robustly detected. (2) All of these findings are expected based upon our theoretical modeling in paper I [P. Zhang, Y. Zheng, and Y. Jing, Sampling artifact in volume weighted velocity measurement. I. Theoretical modeling, arXiv:1405.7125]. In particular, the leading order theoretical approximation agrees quantitatively well with the simulation result for n̄_P ≳ 6 × 10⁻⁴ (Mpc/h)⁻³. Furthermore, we provide an ansatz to take high order terms into account. It improves the model accuracy to ≲1% at k ≲ 0.1 h/Mpc over 3 orders of magnitude in n̄_P and over typical LSS clustering from z = 0 to z = 2. (3) The sampling artifact is determined by the deflection D field, which is straightforwardly available in both simulations and data of galaxy velocity. Hence the sampling artifact in the velocity power spectrum measurement can be self-calibrated within our framework. By applying such self-calibration in simulations, it is promising to determine the real large scale velocity bias of 10¹³ M⊙ halos with ~1% accuracy, and that of lower mass halos with better accuracy. (4) In contrast to suppressing the velocity power spectrum at large scale, the sampling artifact causes an overestimation of the velocity
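
    For context, a common volume-weighted estimator is nearest-particle (NP) assignment: each grid point inherits the velocity of its nearest tracer, so regions empty of tracers are filled in from distant particles, which is the origin of the sampling artifact at low n̄_P. The sketch below is a generic NP gridding with a toy catalog, not the paper's pipeline.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    # Minimal sketch of nearest-particle (NP) volume-weighted velocity
    # assignment on a uniform grid with periodic boundaries.

    def np_velocity_field(pos, vel, box, n_grid):
        tree = cKDTree(pos, boxsize=box)            # periodic neighbor search
        centers = (np.arange(n_grid) + 0.5) * box / n_grid
        gx, gy, gz = np.meshgrid(centers, centers, centers, indexing="ij")
        grid = np.column_stack([gx.ravel(), gy.ravel(), gz.ravel()])
        _, idx = tree.query(grid)                   # nearest tracer per point
        return vel[idx].reshape(n_grid, n_grid, n_grid, 3)

    rng = np.random.default_rng(3)
    pos = rng.uniform(0.0, 100.0, (2000, 3))        # toy catalog [Mpc/h]
    vel = rng.normal(0.0, 300.0, (2000, 3))         # toy velocities [km/s]
    v_grid = np_velocity_field(pos, vel, box=100.0, n_grid=32)
    ```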

  8. Turbine Engine Control Synthesis. Volume 2. Simulation and Controller Software

    DTIC Science & Technology

    1975-03-01

    kinds of engines the cost to design should be less than for presently used methods. Volume I summarizes optimal control design methodology. A command controller is synthesized and wind tunnel tested ... There is strong stability. Volume II contains three Appendices. Appendix A contains the details of engine math models, the software for the wind

  9. Large eddy simulation of incompressible turbulent channel flow

    NASA Technical Reports Server (NTRS)

    Moin, P.; Reynolds, W. C.; Ferziger, J. H.

    1978-01-01

    The three-dimensional, time-dependent primitive equations of motion were numerically integrated for the case of turbulent channel flow. A partially implicit numerical method was developed. An important feature of this scheme is that the equation of continuity is solved directly. The residual field motions were simulated through an eddy viscosity model, while the large-scale field was obtained directly from the solution of the governing equations. An important portion of the initial velocity field was obtained from the solution of the linearized Navier-Stokes equations. The pseudospectral method was used for numerical differentiation in the horizontal directions, and second-order finite-difference schemes were used in the direction normal to the walls. The large eddy simulation technique is capable of reproducing some of the important features of wall-bounded turbulent flows. The resolvable portions of the root-mean-square wall pressure fluctuations, pressure velocity-gradient correlations, and velocity pressure-gradient correlations are documented.

  10. Assessment of dynamic closure for premixed combustion large eddy simulation

    NASA Astrophysics Data System (ADS)

    Langella, Ivan; Swaminathan, Nedunchezhian; Gao, Yuan; Chakraborty, Nilanjan

    2015-09-01

    Turbulent piloted Bunsen flames of stoichiometric methane-air mixtures are computed using the large eddy simulation (LES) paradigm involving an algebraic closure for the filtered reaction rate. This closure involves the filtered scalar dissipation rate of a reaction progress variable. The model for this dissipation rate involves a parameter βc representing the flame front curvature effects induced by turbulence, chemical reactions, molecular dissipation, and their interactions at the sub-grid level, suggesting that this parameter may vary with filter width or be scale-dependent. Thus, it would be ideal to evaluate this parameter dynamically by LES. A procedure for this evaluation is discussed and assessed using direct numerical simulation (DNS) data and LES calculations. The probability density functions of βc obtained from the DNS and LES calculations are very similar when the turbulent Reynolds number is sufficiently large and when the filter width normalised by the laminar flame thermal thickness is larger than unity. Results obtained using a constant (static) value for this parameter are also used for comparative evaluation. Detailed discussion presented in this paper suggests that the dynamic procedure works well, and physical insights and reasoning are provided to explain the observed behaviour.

  11. Large Eddy Simulations of Colorless Distributed Combustion Systems

    NASA Astrophysics Data System (ADS)

    Abdulrahman, Husam F.; Jaberi, Farhad; Gupta, Ashwani

    2014-11-01

    Development of efficient and low-emission colorless distributed combustion (CDC) systems for gas turbine applications requires careful examination of the role of various flow and combustion parameters. Numerical simulations of CDC in a laboratory-scale combustor have been conducted to carefully examine the effects of these parameters on the CDC. The computational model is based on a hybrid modeling approach combining large eddy simulation (LES) with the filtered mass density function (FMDF) equations, solved with high order numerical methods and complex chemical kinetics. The simulated combustor operates based on the principle of high temperature air combustion (HiTAC) and has been shown to significantly reduce NOx and CO emissions while improving the reaction pattern factor and stability without using any flame stabilizer and with low pressure drop and noise. The focus of the current work is to investigate the mixing of air and hydrocarbon fuels and the non-premixed and premixed reactions within the combustor by the LES/FMDF with the reduced chemical kinetic mechanisms for the same flow conditions and configurations investigated experimentally. The main goal is to develop better CDC with higher mixing and efficiency, ultra-low emission levels and optimum residence time. The computational results establish the consistency and the reliability of LES/FMDF and its Lagrangian-Eulerian numerical methodology.

  12. Surrogate population models for large-scale neural simulations.

    PubMed

    Tripp, Bryan P

    2015-06-01

    Because different parts of the brain have rich interconnections, it is not possible to model small parts realistically in isolation. However, it is also impractical to simulate large neural systems in detail. This article outlines a new approach to multiscale modeling of neural systems that involves constructing efficient surrogate models of populations. Given a population of neuron models with correlated activity and with specific, nonrandom connections, a surrogate model is constructed in order to approximate the aggregate outputs of the population. The surrogate model requires less computation than the neural model, but it has a clear and specific relationship with the neural model. For example, approximate spike rasters for specific neurons can be derived from a simulation of the surrogate model. This article deals specifically with neural engineering framework (NEF) circuits of leaky-integrate-and-fire point neurons. Weighted sums of spikes are modeled by interpolating over latent variables in the population activity, and linear filters operate on Gaussian random variables to approximate spike-related fluctuations. It is found that the surrogate models can often closely approximate network behavior with orders-of-magnitude reduction in computational demands, although there are certain systematic differences between the spiking and surrogate models. Since individual spikes are not modeled, some simulations can be performed with much longer step sizes (e.g., 20 ms). Possible extensions to non-NEF networks and to more complex neuron models are discussed.

  13. Upgrade Of ESA Large Space Simulator For Providing Mercury Environment

    NASA Astrophysics Data System (ADS)

    Messing, Rene; Popovitch, Alexandre; Tavares, Andre; Sablerolle, Steven

    2012-07-01

    When orbiting Mercury, the BepiColombo spacecraft will have to survive direct sunlight ten times more intense than in the Earth's vicinity, and the infrared radiation from the planet's surface, which exceeds 400°C at its hottest point. In order to simulate this environment and test the spacecraft under thermal conditions as representative as possible of those it will meet in Mercury's orbit, the ESTEC Large Space Simulator (LSS) had to be modified to provide a 10 Solar Constant (SC) illumination. The following test facility adaptations are described: - Investigate powerful lamps - Configure the LSS mirror from 6 m to a 2.7 m-diameter light beam - Develop a fast flux mapping system - Procure a 10 SC absolute radiometer standard - Replace the sun simulator flux control sensors - Add a dedicated shroud to absorb the high flux - Add a levelling table to adjust heat pipes - Add infra-red cameras for contactless high temperature measurements. The facility performance during the test of one of the BepiColombo modules is reviewed.

  14. Large-timestep mover for particle simulations of arbitrarilymagnetized species

    SciTech Connect

    Cohen, R.H.; Friedman, A.; Grote, D.P.; Vay, J-L.

    2007-03-26

    For self-consistent ion-beam simulations including electron motion, it is desirable to be able to follow electron dynamics accurately without being constrained by the electron cyclotron timescale. To this end, we have developed a particle advance that interpolates between full particle dynamics and drift motion. By making a proper choice of interpolation parameter, simulation particles experience physically correct parallel dynamics, drift motion, and gyroradius when the timestep is large compared to the cyclotron period, though the effective gyro frequency is artificially low; in the opposite timestep limit, the method approaches a conventional Boris particle push. By combining this scheme with a Poisson solver that includes an interpolated form of the polarization drift in the dielectric response, the mover's utility can be extended to higher-density problems where the plasma frequency of the species being advanced exceeds its cyclotron frequency. We describe a series of tests of the mover and its application to simulation of electron clouds in heavy-ion accelerators.
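
    The conventional Boris push that the interpolated mover reduces to at small timesteps consists of a half electric kick, a magnetic rotation, and a second half kick. The sketch below shows the standard algorithm in normalized units (q/m = 1); the field values and time step are arbitrary.

    ```python
    import numpy as np

    # Minimal sketch of the conventional Boris particle push:
    # half electric kick, magnetic rotation, second half kick.

    def boris_push(v, E, B, dt, qm=1.0):
        v_minus = v + 0.5 * qm * E * dt
        t = 0.5 * qm * B * dt                     # rotation vector
        s = 2.0 * t / (1.0 + np.dot(t, t))
        v_prime = v_minus + np.cross(v_minus, t)
        v_plus = v_minus + np.cross(v_prime, s)
        return v_plus + 0.5 * qm * E * dt

    # Gyration test: uniform B along z, no E; the speed should be conserved.
    v = np.array([1.0, 0.0, 0.0])
    E, B = np.zeros(3), np.array([0.0, 0.0, 1.0])
    for _ in range(100):
        v = boris_push(v, E, B, dt=0.1)
    print("speed after 100 steps:", np.linalg.norm(v))   # stays ~1
    ```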

  15. A Large Eddy Simulation Study for upstream wind energy conditioning

    NASA Astrophysics Data System (ADS)

    Sharma, V.; Calaf, M.; Parlange, M. B.

    2013-12-01

    The wind energy industry is increasingly focusing on optimal power extraction strategies based on layout design of wind farms and yaw alignment algorithms. Recent field studies by Mikkelsen et al. (Wind Energy, 2013) have explored the possibility of using wind lidar technology installed at hub height to anticipate incoming wind direction and strength for optimizing yaw alignment. In this work we study the benefits of using remote sensing technology for predicting the incoming flow by using large eddy simulations of a wind farm. The wind turbines are modeled using the classic actuator disk concept with rotation, together with a new algorithm that permits the turbines to adapt to varying flow directions. This allows for simulations of a more realistic atmospheric boundary layer driven by a time-varying geostrophic wind. Various simulations are performed to investigate possible improvement in power generation by utilizing upstream data. Specifically, yaw-correction of the wind-turbine is based on spatio-temporally averaged wind values at selected upstream locations. Velocity and turbulence intensity are also considered at those locations. A base case scenario with the yaw alignment varying according to wind data measured at the wind turbine's hub is also used for comparison. This reproduces the present state of the art where wind vanes and cup anemometers installed behind the rotor blades are used for alignment control.

  16. Large eddy simulations of a turbulent thermal plume

    NASA Astrophysics Data System (ADS)

    Yan, Zhenghua H.

    2007-04-01

    Large eddy simulations of a three-dimensional turbulent thermal plume in an open environment have been carried out using a self-developed parallel computational fluid dynamics code SMAFS (smoke movement and flame spread) to study the thermal plume's dynamics, including its puffing, self-preserving behaviour and air entrainment. In the simulation, the sub-grid stress was modeled using both the standard Smagorinsky and the buoyancy-modified Smagorinsky models, which were compared. The sub-grid scale (SGS) scalar flux in the filtered enthalpy transport equation was modeled based on a simple gradient transport hypothesis with constant SGS Prandtl number. The effects of the Smagorinsky model constant and the SGS Prandtl number were examined. The computation results were compared with experimental measurements, thermal plume theory and empirical correlations, showing good agreement. It is found that both the buoyancy modification and the SGS turbulent Prandtl number have little influence on the simulation. However, the SGS model constant Cs has a significant effect on the prediction of plume spreading, although it has little effect on the prediction of puffing.
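
    The standard Smagorinsky model referred to above computes an eddy viscosity ν_t = (C_s Δ)² |S| from the resolved strain rate. The sketch below evaluates this on a uniform grid with random data; the value of C_s and the grid are illustrative assumptions.

    ```python
    import numpy as np

    # Minimal sketch of the standard Smagorinsky eddy viscosity
    # nu_t = (Cs * Delta)^2 * |S| on a uniform grid, with |S| the
    # magnitude of the resolved strain-rate tensor.

    def smagorinsky_nu_t(u, v, w, dx, Cs=0.17):
        """Eddy viscosity from the resolved strain-rate magnitude |S|."""
        grads = [np.gradient(f, dx) for f in (u, v, w)]  # grads[i][j] = du_i/dx_j
        S2 = 0.0
        for i in range(3):
            for j in range(3):
                S_ij = 0.5 * (grads[i][j] + grads[j][i])
                S2 += 2.0 * S_ij**2                       # 2 S_ij S_ij
        return (Cs * dx) ** 2 * np.sqrt(S2)

    N, dx = 32, 0.1
    rng = np.random.default_rng(4)
    u, v, w = (rng.normal(size=(N, N, N)) for _ in range(3))
    nu_t = smagorinsky_nu_t(u, v, w, dx)
    ```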

  17. Scale-Similar Models for Large-Eddy Simulations

    NASA Technical Reports Server (NTRS)

    Sarghini, F.

    1999-01-01

    Scale-similar models employ multiple filtering operations to identify the smallest resolved scales, which have been shown to be the most active in the interaction with the unresolved subgrid scales. They do not assume that the principal axes of the strain-rate tensor are aligned with those of the subgrid-scale stress (SGS) tensor, and allow the explicit calculation of the SGS energy. They can provide backscatter in a numerically stable and physically realistic manner, and predict SGS stresses in regions that are well correlated with the locations where large Reynolds stress occurs. In this paper, eddy viscosity and mixed models, which include an eddy-viscosity part as well as a scale-similar contribution, are applied to the simulation of two flows, a high Reynolds number plane channel flow, and a three-dimensional, nonequilibrium flow. The results show that simulations without models or with the Smagorinsky model are unable to predict nonequilibrium effects. Dynamic models provide an improvement of the results: the adjustment of the coefficient results in more accurate prediction of the perturbation from equilibrium. The Lagrangian-ensemble approach [Meneveau et al., J. Fluid Mech. 319, 353 (1996)] is found to be very beneficial. Models that included a scale-similar term and a dissipative one, as well as the Lagrangian ensemble averaging, gave results in the best agreement with the direct simulation and experimental data.

  18. Large eddy simulation of a pumped- storage reservoir

    NASA Astrophysics Data System (ADS)

    Launay, Marina; Leite Ribeiro, Marcelo; Roman, Federico; Armenio, Vincenzo

    2016-04-01

    The last decades have seen an increasing number of pumped-storage hydropower projects all over the world. Pumped-storage schemes move water between two reservoirs located at different elevations to store energy and to generate electricity following the electricity demand. Thus the reservoirs can be subject to significant water level variations occurring at the daily scale. These new cycles lead to changes in the hydraulic behaviour of the reservoirs. Sediment dynamics and sediment budgets are modified, sometimes inducing problems of erosion and deposition within the reservoirs. With the development of computer performance, the use of numerical techniques has become popular for the study of environmental processes. Among numerical techniques, Large Eddy Simulation (LES) has arisen as an alternative tool for problems characterized by complex physics and geometries. This work uses the LES-COAST Code, a LES model under development in the framework of the Seditrans Project, for the simulation of an Upper Alpine Reservoir of a pumped-storage scheme. Simulations consider the filling (pump mode) and emptying (turbine mode) of the reservoir. The hydraulic results give a better understanding of the processes occurring within the reservoir. They are considered for an assessment of the sediment transport processes and of their consequences.

  19. Unsteady RANS and Large Eddy simulations of multiphase diesel injection

    NASA Astrophysics Data System (ADS)

    Philipp, Jenna; Green, Melissa; Akih-Kumgeh, Benjamin

    2015-11-01

    Unsteady Reynolds Averaged Navier-Stokes (URANS) and Large Eddy Simulations (LES) of two-phase flow and evaporation of high pressure diesel injection into a quiescent, high temperature environment is investigated. Unsteady RANS and LES are turbulent flow simulation approaches used to determine complex flow fields. The latter allows for more accurate predictions of complex phenomena such as turbulent mixing and physico-chemical processes associated with diesel combustion. In this work we investigate a high pressure diesel injection using the Euler-Lagrange method for multiphase flows as implemented in the Star-CCM+ CFD code. A dispersed liquid phase is represented by Lagrangian particles while the multi-component gas phase is solved using an Eulerian method. Results obtained from the two approaches are compared with respect to spray penetration depth and air entrainment. They are also compared with experimental data taken from the Sandia Engine Combustion Network for "Spray A". Characteristics of primary and secondary atomization are qualitatively evaluated for all simulation modes.

  20. Large eddy simulation of mechanical mixing in anaerobic digesters.

    PubMed

    Wu, Binxin

    2012-03-01

    A comprehensive study of anaerobic digestion requires an advanced turbulence model technique to accurately predict mixing flow patterns because the digestion process that involves mass transfer between anaerobes and their substrates is primarily dependent on detailed information about the fine structure of turbulence in the digesters. This study presents a large eddy simulation (LES) of mechanical agitation of non-Newtonian fluids in anaerobic digesters, in which the sliding mesh method is used to characterize the impeller rotation. The three subgrid scale (SGS) models investigated are: (i) Smagorinsky-Lilly model, (ii) wall-adapting local eddy-viscosity model, and (iii) kinetic energy transport (KET) model. The simulation results show that the three SGS models produce very similar flow fields. A comparison of the simulated and measured axial velocities indicates that the LES profile shapes are in general agreement with the experimental data but they differ markedly in velocity magnitudes. A check of impeller power and flow numbers demonstrates that all the SGS models give excellent predictions, with the KET model performing the best. Moreover, the performance of six Reynolds-averaged Navier-Stokes turbulence models is assessed and compared with the LES results.

  1. Large-Eddy simulation of pulsatile blood flow.

    PubMed

    Paul, Manosh C; Mamun Molla, Md; Roditi, Giles

    2009-01-01

    Large-Eddy simulation (LES) is performed to study pulsatile blood flow through a 3D model of arterial stenosis. The model is chosen as a simple channel with a biological type stenosis formed on the top wall. A sinusoidal non-additive type pulsation is assumed at the inlet of the model to generate time-dependent oscillating flow in the channel, and the Reynolds number of 1200, based on the channel height and the bulk velocity, is chosen in the simulations. We investigate in detail the transition-to-turbulent phenomena of the non-additive pulsatile blood flow downstream of the stenosis. Results show that the high level of flow recirculation associated with complex patterns of transient blood flow makes a significant contribution to the generation of the turbulent fluctuations found in the post-stenosis region. The importance of using LES in modelling pulsatile blood flow is also assessed in the paper through the prediction of its sub-grid scale contributions. In addition, some important results of the flow physics are achieved from the simulations; these are presented in the paper in terms of blood flow velocity, pressure distribution, vortices, shear stress, turbulent fluctuations and energy spectra, along with their importance to the relevant medical pathophysiology.

  2. Evaluation of Bacillus oleronius as a Biological Indicator for Terminal Sterilization of Large-Volume Parenterals.

    PubMed

    Izumi, Masamitsu; Fujifuru, Masato; Okada, Aki; Takai, Katsuya; Takahashi, Kazuhiro; Udagawa, Takeshi; Miyake, Makoto; Naruyama, Shintaro; Tokuda, Hiroshi; Nishioka, Goro; Yoden, Hikaru; Aoki, Mitsuo

    2016-01-01

    In the production of large-volume parenterals in Japan, equipment and devices such as tanks, pipework, and filters used in production processes are exhaustively cleaned and sterilized, and the cleanliness of water for injection, drug materials, packaging materials, and manufacturing areas is well controlled. In this environment, the bioburden is relatively low, and less heat resistant compared with microorganisms frequently used as biological indicators such as Geobacillus stearothermophilus (ATCC 7953) and Bacillus subtilis 5230 (ATCC 35021). Consequently, the majority of large-volume parenteral solutions in Japan are manufactured under low-heat sterilization conditions of F0 < 2 min, so that loss of clarity of solutions and formation of degradation products of constituents are minimized. Bacillus oleronius (ATCC 700005) is listed as a biological indicator in "Guidance on the Manufacture of Sterile Pharmaceutical Products Produced by Terminal Sterilization" (guidance in Japan, issued in 2012). In this study, we investigated whether B. oleronius is an appropriate biological indicator of the efficacy of low-heat, moist-heat sterilization of large-volume parenterals. Specifically, we investigated the spore-forming ability of this microorganism in various cultivation media and measured the D-values and z-values as parameters of heat resistance. The D-values and z-values changed depending on the constituents of large-volume parenteral products. Also, the spores from B. oleronius showed a moist-heat resistance that was similar to or greater than that of many of the spore-forming organisms isolated from Japanese parenteral manufacturing processes. Taken together, these results indicate that B. oleronius is suitable as a biological indicator for sterility assurance of large-volume parenteral solutions subjected to low-heat, moist-heat terminal sterilization.
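
    The D-values and z-values above follow standard first-order thermal-death kinetics. The sketch below shows the underlying algebra with assumed spore parameters (D₁₂₁ = 0.4 min and z = 9 °C are placeholders, not the paper's measured values), illustrating how a process of F0 = 2 min translates into decimal reductions for a given indicator.

    ```python
    # Minimal sketch of D-value / z-value algebra (Bigelow model).
    # All numbers below are illustrative assumptions, not measured values.

    def d_at_temperature(D_ref, T_ref, T, z):
        """D-value shifts one decade for every z degrees away from T_ref."""
        return D_ref * 10.0 ** ((T_ref - T) / z)

    def log_reduction(F0, D121):
        """Decimal reductions delivered by a process of F0 minutes at 121.1 C."""
        return F0 / D121

    D121, z = 0.4, 9.0                                  # assumed spore parameters
    print(d_at_temperature(D121, 121.1, 115.0, z))      # D-value at 115 C [min]
    print(log_reduction(F0=2.0, D121=D121))             # 5 log reductions here
    ```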

  3. Large-Eddy Simulation Code Developed for Propulsion Applications

    NASA Technical Reports Server (NTRS)

    DeBonis, James R.

    2003-01-01

    A large-eddy simulation (LES) code was developed at the NASA Glenn Research Center to provide more accurate and detailed computational analyses of propulsion flow fields. The accuracy of current computational fluid dynamics (CFD) methods is limited primarily by their inability to properly account for the turbulent motion present in virtually all propulsion flows. Because the efficiency and performance of a propulsion system are highly dependent on the details of this turbulent motion, it is critical for CFD to accurately model it. The LES code promises to give new CFD simulations an advantage over older methods by directly computing the large turbulent eddies, to correctly predict their effect on a propulsion system. Turbulent motion is a random, unsteady process whose behavior is difficult to predict through computer simulations. Current methods are based on Reynolds-Averaged Navier-Stokes (RANS) analyses that rely on models to represent the effect of turbulence within a flow field. The quality of the results depends on the quality of the model and its applicability to the type of flow field being studied. LES promises to be more accurate because it drastically reduces the amount of modeling necessary. It is the logical step toward improving turbulent flow predictions. In LES, the large-scale dominant turbulent motion is computed directly, leaving only the less significant small turbulent scales to be modeled. As part of the prediction, the LES method generates detailed information on the turbulence itself, providing important information for other applications, such as aeroacoustics. The LES code developed at Glenn for propulsion flow fields is being used to both analyze propulsion system components and test improved LES algorithms (subgrid-scale models, filters, and numerical schemes). The code solves the compressible Favre-filtered Navier-Stokes equations using an explicit fourth-order accurate numerical scheme; it incorporates a compressible form of

  4. Anatomically Detailed and Large-Scale Simulations Studying Synapse Loss and Synchrony Using NeuroBox

    PubMed Central

    Breit, Markus; Stepniewski, Martin; Grein, Stephan; Gottmann, Pascal; Reinhardt, Lukas; Queisser, Gillian

    2016-01-01

    The morphology of neurons and networks plays an important role in processing electrical and biochemical signals. Based on neuronal reconstructions, which are becoming abundantly available through databases such as NeuroMorpho.org, numerical simulations of Hodgkin-Huxley-type equations, coupled to biochemical models, can be performed in order to systematically investigate the influence of cellular morphology and the connectivity pattern in networks on the underlying function. Development in the area of synthetic neural network generation and morphology reconstruction from microscopy data has brought forth the software tool NeuGen. Coupling this morphology data (either from databases, synthetic, or reconstruction) to the simulation platform UG 4 (which harbors a neuroscientific portfolio) and VRL-Studio, has brought forth the extendible toolbox NeuroBox. NeuroBox allows users to perform numerical simulations on hybrid-dimensional morphology representations. The code basis is designed in a modular way, such that e.g., new channel or synapse types can be added to the library. Workflows can be specified through scripts or through the VRL-Studio graphical workflow representation. Third-party tools, such as ImageJ, can be added to NeuroBox workflows. In this paper, NeuroBox is used to study the electrical and biochemical effects of synapse loss vs. synchrony in neurons, to investigate large morphology data sets within detailed biophysical simulations, and used to demonstrate the capability of utilizing high-performance computing infrastructure for large scale network simulations. Using new synapse distribution methods and Finite Volume based numerical solvers for compartment-type models, our results demonstrate how an increase in synaptic synchronization can compensate synapse loss at the electrical and calcium level, and how detailed neuronal morphology can be integrated in large-scale network simulations. PMID:26903818

  5. Large scale simulation of red blood cell aggregation in shear flows.

    PubMed

    Xu, Dong; Kaliviotis, Efstathios; Munjiza, Ante; Avital, Eldad; Ji, Chunning; Williams, John

    2013-07-26

    Aggregation of highly deformable red blood cells (RBCs) significantly affects the blood flow in the human circulatory system. To investigate the effect of deformation and aggregation of RBCs in blood flow, a mathematical model has been established by coupling the interaction between the fluid and the deformable solids. The model includes a three-dimensional finite volume method solver for incompressible viscous flows, the combined finite-discrete element method for computing the deformation of the RBCs, a JKR (Johnson, Kendall and Roberts) adhesion model (Johnson et al., 1971) to account for the adhesion forces between different RBCs, and an iterative direct-forcing immersed boundary method to couple the fluid-solid interactions. The flow of 49,512 RBCs at 45% concentration under the influence of aggregating forces was examined, improving the existing knowledge on simulating flow and structural characteristics of blood at a large scale: previous studies on the particular issue were restricted to simulating the flow of 13,000 aggregative ellipsoidal particles at a 10% concentration. The results are in excellent agreement with experimental studies. More specifically, both the experimental and the simulation results show uniform RBC distributions under high shear rates (60-100/s), whereas large aggregation structures were observed under a lower shear rate of 10/s. The statistical analysis of the simulation data also shows that the shear rate has significant influence on both the flow velocity profiles and the frequency distribution of the RBC orientation angles.

  6. The large volume radiometric calorimeter system: A transportable device to measure scrap category plutonium

    SciTech Connect

    Duff, M.F.; Wetzel, J.R.; Breakall, K.L.; Lemming, J.F.

    1987-01-01

    An innovative design concept has been used to design a large volume calorimeter system. The new design permits two measuring cells to fit in a compact, nonevaporative environmental bath. The system is mounted on a cart for transportability. Samples in the power range of 0.50 to 12.0 W can be measured. The calorimeters will receive samples as large as 22.0 cm in diameter by 43.2 cm high, and smaller samples can be measured without lengthening measurement time or increasing measurement error by using specially designed sleeve adapters. This paper describes the design considerations, construction, theory, applications, and performance of the large volume calorimeter system. 2 refs., 5 figs., 1 tab.

  7. Large-Scale Numerical Simulation of Fluid Structure Interactions in Low Reynolds Number Flows

    NASA Astrophysics Data System (ADS)

    Eken, Ali; Sahin, Mehmet

    2011-11-01

    A fully coupled numerical algorithm has been developed for the numerical simulation of large-scale fluid structure interaction problems. The incompressible Navier-Stokes equations are discretized using an Arbitrary Lagrangian-Eulerian (ALE) formulation based on the side-centered unstructured finite volume method. Special attention is given to satisfying the discrete continuity equation within each element at the discrete level as well as the Geometric Conservation Law (GCL). The linear elasticity equations are discretized within the structure domain using the Galerkin finite element method. The resulting algebraic linear equations are solved in a fully coupled form using a monolithic multigrid method. The implementation of the fully coupled iterative solvers is based on the PETSc library for improving the efficiency of the parallel code. The present numerical algorithm is initially validated for a beam in cross flow and then it is used to simulate the fluid structure interaction of a membrane-wing micro aerial vehicle (MAV).

  8. Large-Eddy Simulation of Maritime Deep Tropical Convection

    NASA Astrophysics Data System (ADS)

    Khairoutdinov, Marat F.; Krueger, Steve K.; Moeng, Chin-Hoh; Bogenschutz, Peter A.; Randall, David A.

    2009-04-01

    This study represents an attempt to apply Large-Eddy Simulation (LES) resolution to simulate deep tropical convection in near equilibrium for 24 hours over an area of about 205 × 205 km², which is comparable to that of a typical horizontal grid cell in a global climate model. The simulation is driven by large-scale thermodynamic tendencies derived from mean conditions during the GATE Phase III field experiment. The LES uses 2048 × 2048 × 256 grid points with horizontal grid spacing of 100 m and vertical grid spacing ranging from 50 m in the boundary layer to 100 m in the free troposphere. The simulation reaches a near equilibrium deep convection regime in 12 hours. The simulated vertical cloud distribution exhibits a tri-modal vertical distribution of deep, middle and shallow clouds similar to that often observed in the Tropics. A sensitivity experiment in which cold pools are suppressed by switching off the evaporation of precipitation results in much lower amounts of shallow and congestus clouds. Unlike the benchmark LES where the new deep clouds tend to appear along the edges of spreading cold pools, the deep clouds in the no-cold-pool experiment tend to reappear at the sites of the previous deep clouds and tend to be surrounded by extensive areas of sporadic shallow clouds. The vertical velocity statistics of updraft and downdraft cores below 6 km height are compared to aircraft observations made during GATE. The comparison shows generally good agreement, and strongly suggests that the LES simulation can be used as a benchmark to represent the dynamics of tropical deep convection on scales ranging from large turbulent eddies to mesoscale convective systems. The effect of horizontal grid resolution is examined by running the same case with progressively larger grid sizes of 200, 400, 800, and 1600 m. These runs show a reasonable agreement with the benchmark LES in statistics such as convective available potential energy, convective inhibition, cloud fraction

  9. Robust large-scale parallel nonlinear solvers for simulations.

    SciTech Connect

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step based on a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any existing linear solver, which makes it simple to write
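
    Broyden's method, as described above, replaces the Jacobian with an approximation that is updated from secant information at each step. A minimal dense-matrix sketch follows (the report's limited-memory variant avoids storing the matrix explicitly); the test system below is made up for illustration.

    ```python
    import numpy as np

    # Minimal sketch of Broyden's "good" method: the Jacobian is replaced by
    # an approximation B updated by a rank-one secant correction, which is
    # what lets codes with no (or inaccurate) Jacobians converge.

    def broyden_solve(F, x0, tol=1e-10, max_iter=50):
        x = np.asarray(x0, dtype=float)
        B = np.eye(x.size)                    # initial Jacobian approximation
        f = F(x)
        for _ in range(max_iter):
            s = np.linalg.solve(B, -f)        # quasi-Newton step
            x = x + s
            f_new = F(x)
            y = f_new - f
            B += np.outer(y - B @ s, s) / (s @ s)   # rank-one secant update
            f = f_new
            if np.linalg.norm(f) < tol:
                break
        return x

    # Made-up mildly nonlinear 2-D test system
    F = lambda x: np.array([x[0] + 0.5 * x[1] - 1.0,
                            0.2 * x[0]**2 + x[1] - 1.0])
    print(broyden_solve(F, [0.0, 0.0]))       # -> approx [0.528, 0.944]
    ```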

  10. Nesting Large-Eddy Simulations Within Mesoscale Simulations for Wind Energy Applications

    NASA Astrophysics Data System (ADS)

    Lundquist, J. K.; Mirocha, J. D.; Chow, F. K.; Kosovic, B.; Lundquist, K. A.

    2008-12-01

    With increasing demand for more accurate atmospheric simulations for wind turbine micrositing, for operational wind power forecasting, and for more reliable turbine design, simulations of atmospheric flow with resolution of tens of meters or higher are required. These time-dependent large-eddy simulations (LES) account for complex terrain and resolve individual atmospheric eddies on length scales smaller than turbine blades. These small-domain high-resolution simulations are possible with a range of commercial and open-source software, including the Weather Research and Forecasting (WRF) model. In addition to "local" sources of turbulence within an LES domain, changing weather conditions outside the domain can also affect flow, suggesting that a mesoscale model provide boundary conditions to the large-eddy simulations. Nesting a large-eddy simulation within a mesoscale model requires nuanced representations of turbulence. Our group has improved the WRF model's LES capability by implementing the Nonlinear Backscatter and Anisotropy (NBA) subfilter stress model following Kosović (1997) and an explicit filtering and reconstruction technique to compute the Resolvable Subfilter-Scale (RSFS) stresses (following Chow et al., 2005). We have also implemented an immersed boundary method (IBM) in WRF to accommodate complex terrain. These new models improve WRF's LES capabilities over complex terrain and in stable atmospheric conditions. We demonstrate approaches to nesting LES within a mesoscale simulation for farms of wind turbines in hilly regions. Results are sensitive to the nesting method, indicating that care must be taken to provide appropriate boundary conditions, and to allow adequate spin-up of turbulence in the LES domain. This work is performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  11. Shuttle mission simulator baseline definition report, volume 1

    NASA Technical Reports Server (NTRS)

    Burke, J. F.; Small, D. E.

    1973-01-01

    A baseline definition of the space shuttle mission simulator is presented. The subjects discussed are: (1) physical arrangement of the complete simulator system in the appropriate facility, with a definition of the required facility modifications, (2) functional descriptions of all hardware units, including the operational features, data demands, and facility interfaces, (3) hardware features necessary to integrate the items into a baseline simulator system to include the rationale for selecting the chosen implementation, and (4) operating, maintenance, and configuration updating characteristics of the simulator hardware.

  12. Large Eddy Simulation for Oscillating Airfoils with Large Pitching and Surging Motions

    NASA Astrophysics Data System (ADS)

    Kocher, Alexander; Cumming, Reed; Tran, Steven; Sahni, Onkar

    2016-11-01

    Many applications of interest involve unsteady aerodynamics due to time varying flow conditions (e.g. in the case of flapping wings, rotorcrafts and wind turbines). In this study, we formulate and apply large eddy simulation (LES) to investigate flow over airfoils at a moderate mean angle of attack with large pitching and surging motions. Current LES methodology entails three features: i) a combined subgrid scale model in the context of stabilized finite element methods, ii) local variational Germano identity (VGI) along with Lagrangian averaging, and iii) arbitrary Lagrangian-Eulerian (ALE) description over deforming unstructured meshes. Several cases are considered with different types of motions including surge only, pitch only and a combination of the two. The flow structures from these cases are analyzed and the numerical results are compared to experimental data when available.

  13. A survey of electric and hybrid vehicles simulation programs. Volume 2: Questionnaire responses

    NASA Technical Reports Server (NTRS)

    Bevan, J.; Heimburger, D. A.; Metcalfe, M. A.

    1978-01-01

    The data received in a survey conducted within the United States to determine the extent of development and capabilities of automotive performance simulation programs suitable for electric and hybrid vehicle studies are presented. The survey was conducted for the Department of Energy by NASA's Jet Propulsion Laboratory. Volume 1 of this report summarizes and discusses the results contained in Volume 2.

  14. High Speed Jet Noise Prediction Using Large Eddy Simulation

    NASA Technical Reports Server (NTRS)

    Lele, Sanjiva K.

    2002-01-01

    Current methods for predicting the noise of high speed jets are largely empirical. These empirical methods are based on the jet noise data gathered by varying primarily the jet flow speed and jet temperature for a fixed nozzle geometry. Efforts have been made to correlate the noise data of co-annular (multi-stream) jets and the changes associated with forward flight within these empirical correlations. But ultimately these empirical methods fail to provide suitable guidance in the selection of new, low-noise nozzle designs. This motivates the development of a new class of prediction methods based on computational simulations, in an attempt to remove the empiricism of present-day noise predictions.

  15. Computing transitional flows using wall-modeled large eddy simulation

    NASA Astrophysics Data System (ADS)

    Bodart, Julien; Larsson, Johan

    2012-11-01

    To be applicable to complex aerodynamic flows at realistic Reynolds numbers, large eddy simulation (LES) must be combined with a model for the inner part of the boundary layer. Aerodynamic flows are, in general, sensitive to the location of boundary layer transition. While traditional LES can predict the transition location and process accurately, existing wall-modeled LES approaches cannot. In the present work, the behavior of the wall-model is locally adapted using a sensor in the LES-resolved part of the boundary layer. This sensor estimates whether the boundary layer is turbulent or not, in a way that does not rely on any homogeneous direction. The proposed method is validated on controlled transition scenarios on a flat-plate boundary layer, and finally applied to the flow around a multi-element airfoil at realistic Reynolds number.

  16. Smoothed particle hydrodynamics method from a large eddy simulation perspective

    NASA Astrophysics Data System (ADS)

    Di Mascio, A.; Antuono, M.; Colagrossi, A.; Marrone, S.

    2017-03-01

    The Smoothed Particle Hydrodynamics (SPH) method, often used for the modelling of the Navier-Stokes equations by a meshless Lagrangian approach, is revisited from the point of view of Large Eddy Simulation (LES). To this aim, the LES filtering procedure is recast in a Lagrangian framework by defining a filter that moves with the positions of the fluid particles at the filtered velocity. It is shown that the SPH smoothing procedure can be reinterpreted as a sort of LES Lagrangian filtering, and that, besides the terms coming from the LES convolution, additional contributions (never accounted for in the SPH literature) appear in the equations when formulated in a filtered fashion. Appropriate closure formulas are derived for the additional terms and a preliminary numerical test is provided to show the main features of the proposed LES-SPH model.
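
    The smoothing operator being reinterpreted here is the standard SPH particle sum f(x) ≈ Σ_j (m_j/ρ_j) f_j W(x − x_j, h). The sketch below evaluates it in one dimension with a Gaussian kernel; the particle set, kernel choice, and smoothing length are illustrative assumptions.

    ```python
    import numpy as np

    # Minimal sketch of the SPH smoothing (kernel interpolation) operator,
    # here in 1-D with a Gaussian kernel for brevity.

    def W(r, h):
        """Gaussian smoothing kernel (1-D normalization)."""
        return np.exp(-(r / h) ** 2) / (h * np.sqrt(np.pi))

    def sph_interpolate(x, x_j, f_j, m_j, rho_j, h):
        """Kernel-weighted particle sum approximating the smoothed field."""
        w = W(x[:, None] - x_j[None, :], h)
        return (m_j / rho_j * f_j * w).sum(axis=1)

    x_j = np.sort(np.random.default_rng(5).uniform(0, 2 * np.pi, 400))
    f_j = np.sin(x_j) + 0.2 * np.sin(15 * x_j)      # field carried by particles
    m_j = np.full_like(x_j, 2 * np.pi / 400)         # equal-mass particles
    rho_j = np.ones_like(x_j)                        # uniform density
    x = np.linspace(0, 2 * np.pi, 200)
    f_smoothed = sph_interpolate(x, x_j, f_j, m_j, rho_j, h=0.3)
    ```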

  17. Arc plasma simulation of the KAERI large ion source.

    PubMed

    In, S R; Jeong, S H; Kim, T S

    2008-02-01

    The KAERI large ion source, developed for the KSTAR NBI system, recently produced ion beams at the 100 keV, 50 A level in the first half campaign of 2007. These results seem to be the best performance of the present ion source at a maximum available input power of 145 kW. A slight improvement in the ion source is certainly necessary to attain the final goal of an 8 MW ion beam. First, the experimental results were analyzed to identify the causes of the insufficient beam currents. Second, a zero-dimensional simulation of the ion source plasma was carried out to identify which factors control the arc plasma and to find out what improvements can be expected.

  18. Large Eddy Simulation of FDA's Idealized Medical Device.

    PubMed

    Delorme, Yann T; Anupindi, Kameswararao; Frankel, Steven H

    2013-12-01

    A hybrid large eddy simulation (LES) and immersed boundary method (IBM) computational approach is used to make quantitative predictions of flow field statistics within the Food and Drug Administration's (FDA) idealized medical device. An in-house code, hereafter WenoHemo™, is used that combines high-order finite-difference schemes on structured staggered Cartesian grids with an IBM to facilitate flow over or through complex stationary or rotating geometries, and employs a subgrid-scale (SGS) turbulence model that more naturally handles transitional flows [2]. Predictions of velocity and wall shear stress statistics are compared with previously published experimental measurements from Hariharan et al. [6] for the four Reynolds numbers considered.

  19. Resonators for solid-state lasers with large-volume fundamental mode and high alignment stability

    SciTech Connect

    Magni, V.

    1986-01-01

    Resonators containing a focusing rod are thoroughly analyzed. It is shown that, as a function of the dioptric power of the rod, two stability zones of the same width exist and that the mode volume in the rod always presents a stationary point. At this point, the output power is insensitive to the focal length fluctuations, and the mode volume inside the rod is inversely proportional to the range of the input power for which the resonator is stable. The two zones are markedly different with respect to misalignment sensitivity, which is, in general, much greater in one zone than in the other. Two design procedures are presented for monomode solid-state laser resonators with large mode volume and low sensitivity both to focal length fluctuations and to misalignment.
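
    As a rough illustration of this kind of analysis, the sketch below scans the round-trip ABCD matrix of a hypothetical flat-flat resonator with the rod treated as a thin lens of variable dioptric power; the paper's treatment is more general, but even this toy geometry exhibits two equal-width stability zones:

      import numpy as np

      def free(L):   # free-space propagation over length L (meters)
          return np.array([[1.0, L], [0.0, 1.0]])

      def lens(D):   # thin lens of dioptric power D (1/meters)
          return np.array([[1.0, 0.0], [-D, 1.0]])

      def round_trip(D, L1=0.3, L2=0.5):
          # Flat end mirrors; one pass L1 -> rod -> L2, then back again.
          M = np.eye(2)
          for elem in (free(L1), lens(D), free(L2),
                       free(L2), lens(D), free(L1)):
              M = elem @ M
          return M

      for D in np.linspace(0.0, 6.0, 13):
          A, B, C, Dm = round_trip(D).ravel()
          g = 0.5 * (A + Dm)               # stable iff |(A+D)/2| < 1
          print(f"D = {D:4.1f} 1/m  (A+D)/2 = {g:+7.3f}  stable: {abs(g) < 1}")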

  20. Large-scale ground motion simulation using GPGPU

    NASA Astrophysics Data System (ADS)

    Aoi, S.; Maeda, T.; Nishizawa, N.; Aoki, T.

    2012-12-01

    Huge computational resources are required to perform large-scale ground motion simulations using the 3-D finite difference method (FDM) for realistic and complex models with high accuracy. Furthermore, thousands of different simulations are necessary to evaluate the variability of the assessment caused by uncertainty in the assumed source models for future earthquakes. To overcome the problem of restricted computational resources, we introduced GPGPU (general-purpose computing on graphics processing units), the technique of using a GPU as an accelerator for computation traditionally conducted on the CPU. We employed the CPU version of GMS (Ground motion Simulator; Aoi et al., 2004) as the original code and implemented the GPU calculation using CUDA (Compute Unified Device Architecture). GMS is a total system for seismic wave propagation simulation based on a 3-D FDM scheme using discontinuous grids (Aoi & Fujiwara, 1999), which includes the solver as well as preprocessor tools (a parameter generation tool) and postprocessor tools (filter and visualization tools, and so on). The computational model is decomposed in the two horizontal directions and each decomposed sub-model is allocated to a different GPU. We evaluated the performance of our newly developed GPU version of GMS on TSUBAME2.0, one of Japan's fastest supercomputers, operated by the Tokyo Institute of Technology. First, we performed a strong scaling test using a model with about 22 million grid points and achieved speed-ups of 3.2 and 7.3 times using 4 and 16 GPUs, respectively. Next, we examined a weak scaling test in which the model size (number of grid points) is increased in proportion to the degree of parallelism (number of GPUs). The result showed almost perfect linearity up to a simulation with 22 billion grid points using 1024 GPUs, where the calculation speed reached 79.7 TFlops, about 34 times faster than the CPU calculation using the same number
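
    The quoted strong-scaling numbers imply the usual efficiency falloff; a back-of-the-envelope check, assuming the speed-ups are quoted relative to a single-GPU baseline:

      # Parallel efficiency = speedup / number of GPUs, using the
      # figures quoted in the abstract (baseline assumed to be 1 GPU).
      for n_gpu, speedup in [(4, 3.2), (16, 7.3)]:
          print(f"{n_gpu:2d} GPUs: speedup {speedup:.1f}x, "
                f"efficiency {speedup / n_gpu:.0%}")
      # -> 80% on 4 GPUs, ~46% on 16: per-GPU work shrinks while
      #    halo-exchange overhead between decomposed sub-models grows.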

  1. The terminal area simulation system. Volume 2: Verification cases

    NASA Technical Reports Server (NTRS)

    Proctor, F. H.

    1987-01-01

    Numerical simulations of five case studies are presented and compared with available data in order to verify the three-dimensional version of the Terminal Area Simulation System (TASS). A spectrum of convective storm types is selected for the case studies. Included are: a High-Plains supercell hailstorm; a small and relatively short-lived High-Plains cumulonimbus; the convective storm which produced the 2 August 1985 DFW microburst; a South Florida convective complex; and a tornadic Oklahoma thunderstorm. For each of the cases the model results compared reasonably well with observed data. In the simulations of the supercell storms many of their characteristic features were modeled, such as the hook echo, BWER, mesocyclone, gust fronts, giant persistent updraft, wall cloud, flanking-line towers, anvil and radar reflectivity overhang, and rightward veering of the storm propagation. In the simulation of the tornadic storm a horseshoe-shaped updraft configuration and cyclic changes in storm intensity and structure were noted. The simulation of the DFW microburst agreed remarkably well with the sparse observed data. The simulated outflow rapidly expanded in a nearly symmetrical pattern and was associated with a ring vortex. The simulated South Florida convective complex contained updrafts and downdrafts in the form of discrete bubbles. The numerical simulations, in all cases, remained stable and bounded with no anomalous trends.

  2. Program to Optimize Simulated Trajectories (POST). Volume 1: Formulation manual

    NASA Technical Reports Server (NTRS)

    Brauer, G. L.; Cornick, D. E.; Habeger, A. R.; Petersen, F. M.; Stevenson, R.

    1975-01-01

    A general purpose FORTRAN program for simulating and optimizing point mass trajectories (POST) of aerospace vehicles is described. The equations and the numerical techniques used in the program are documented. Topics discussed include: coordinate systems, planet model, trajectory simulation, auxiliary calculations, and targeting and optimization.
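
    For a sense of what such a program does at its core, here is a minimal point-mass trajectory integration; the gravity model, thrust, and vehicle numbers below are purely illustrative and none of this is POST's actual formulation:

      import math

      g, thrust, mass = 9.81, 12000.0, 1000.0   # SI units, made up
      pitch = math.radians(80.0)                # fixed pitch angle
      x = z = vx = vz = 0.0
      dt = 0.1
      for _ in range(600):                      # 60 s of flight
          ax = thrust / mass * math.cos(pitch)
          az = thrust / mass * math.sin(pitch) - g
          vx += ax * dt; vz += az * dt          # forward-Euler step
          x += vx * dt;  z += vz * dt
      print(f"after 60 s: downrange {x/1e3:.1f} km, altitude {z/1e3:.1f} km")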

  3. A large high vacuum, high pumping speed space simulation chamber for electric propulsion

    NASA Technical Reports Server (NTRS)

    Grisnik, Stanley P.; Parkes, James E.

    1994-01-01

    Testing high power electric propulsion devices poses unique requirements on space simulation facilities. Very high pumping speeds are required to maintain high vacuum levels while handling large volumes of exhaust products. These pumping speeds are significantly higher than those available in most existing vacuum facilities. There is also a requirement for relatively large vacuum chamber dimensions to minimize facility wall/thruster plume interactions and to accommodate far field plume diagnostic measurements. A 4.57 m (15 ft) diameter by 19.2 m (63 ft) long vacuum chamber at NASA Lewis Research Center is described. The chamber utilizes oil diffusion pumps in combination with cryopanels to achieve high vacuum pumping speeds at high vacuum levels. The facility is computer controlled for all phases of operation from start-up, through testing, to shutdown. The computer control system increases the utilization of the facility and reduces the manpower requirements needed for facility operations.

  4. The BAHAMAS project: calibrated hydrodynamical simulations for large-scale structure cosmology

    NASA Astrophysics Data System (ADS)

    McCarthy, Ian G.; Schaye, Joop; Bird, Simeon; Le Brun, Amandine M. C.

    2017-03-01

    The evolution of the large-scale distribution of matter is sensitive to a variety of fundamental parameters that characterize the dark matter, dark energy, and other aspects of our cosmological framework. Since the majority of the mass density is in the form of dark matter that cannot be directly observed, to do cosmology with large-scale structure, one must use observable (baryonic) quantities that trace the underlying matter distribution in a (hopefully) predictable way. However, recent numerical studies have demonstrated that the mapping between observable and total mass, as well as the total mass itself, are sensitive to unresolved feedback processes associated with galaxy formation, motivating explicit calibration of the feedback efficiencies. Here, we construct a new suite of large-volume cosmological hydrodynamical simulations (called BAHAMAS, for BAryons and HAloes of MAssive Systems), where subgrid models of stellar and active galactic nucleus feedback have been calibrated to reproduce the present-day galaxy stellar mass function and the hot gas mass fractions of groups and clusters in order to ensure the effects of feedback on the overall matter distribution are broadly correct. We show that the calibrated simulations reproduce an unprecedentedly wide range of properties of massive systems, including the various observed mappings between galaxies, hot gas, total mass, and black holes, and represent a significant advance in our ability to mitigate the primary systematic uncertainty in most present large-scale structure tests.

  5. BASIC Simulation Programs; Volumes III and IV. Mathematics, Physics.

    ERIC Educational Resources Information Center

    Digital Equipment Corp., Maynard, MA.

    The computer programs presented here were developed as a part of the Huntington Computer Project. They were tested on a Digital Equipment Corporation TSS-8 time-shared computer and run in a version of BASIC. Mathematics and physics programs are presented in this volume. The 20 mathematics programs include ones which review multiplication skills;…

  6. Large-eddy simulation of combustion dynamics in swirling flows

    NASA Astrophysics Data System (ADS)

    Stone, Christopher Pritchard

    The impact of premixer swirl number, S, and overall fuel equivalence ratio, phi, on the stability of a model swirl-stabilized, lean-premixed gas turbine combustor has been numerically investigated using a massively-parallel large-eddy simulation (LES) combustion dynamics model. Through the use of a premixed combustion model, unsteady vortex-flame and acoustic-flame interactions are captured. It is observed that for flows with swirl intensity high enough to form vortex breakdown (i.e., a phenomenon associated with a large region of reversed or recirculating flow along the axis of rotation), the measured rms pressure amplitudes (p') are attenuated significantly (over 6.6 dB reduction) compared to flows without this phenomenon. The reduced p' amplitudes are accompanied by reduced longitudinal flame-front oscillations and reduced coherence in the shed vortices. Similar p' reduction levels are achieved through changes in the operating equivalence ratio, phi. Compared to the leanest equivalence ratio simulated (phi = 0.52), p' at a stoichiometric mixture is reduced by 6.0 dB. Methodologies for active control based on modulation of the inlet swirl number S and phi are also investigated. Open-loop control through S variation is demonstrated for a lean mixture, with a significant reduction in the fluctuating mass flow rate and p' after a convective time delay. A partially-premixed combustion model, which allows for variations in the local phi, is used to model both temporal and spatial variations in phi. It is found that the response to changes in phi is much faster than that to changes in S. Also, it is shown that spatial variations in phi (or unmixedness) actually lead to p' attenuation in the current combustor configuration.
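
    The decibel figures above translate into amplitude ratios through the usual convention for pressure amplitudes, dB = 20 log10(p1'/p2'):

      import math
      for dB in (6.0, 6.6):
          ratio = 10.0 ** (dB / 20.0)
          print(f"{dB} dB reduction -> p' lower by a factor of {ratio:.2f}")
      # 6.6 dB corresponds to p' being roughly halved (factor ~2.14).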

  7. Background simulations for the Large Area Detector onboard LOFT

    NASA Astrophysics Data System (ADS)

    Campana, Riccardo; Feroci, Marco; Del Monte, Ettore; Mineo, Teresa; Lund, Niels; Fraser, George W.

    2013-12-01

    The Large Observatory For X-ray Timing (LOFT), currently in an assessment phase in the framework of the ESA M3 Cosmic Vision programme, is an innovative medium-class mission specifically designed to answer fundamental questions about the behaviour of matter in the very strong gravitational and magnetic fields around compact objects and in supranuclear density conditions. Having an effective area of ~10 m2 at 8 keV, LOFT will be able to measure with high sensitivity very fast variability in X-ray fluxes and spectra. A good knowledge of the in-orbit background environment is essential to assess the scientific performance of the mission and optimize the design of its main instrument, the Large Area Detector (LAD). In this paper the results of an extensive Geant-4 simulation of the instrument will be discussed, showing the main contributions to the background and the design solutions for its reduction and control. Our results show that the current LOFT/LAD design is expected to meet its scientific requirement of a background rate equivalent to 10 mCrab in 2-30 keV, achieving about 5 mCrab in the most important 2-10 keV energy band. Moreover, simulations show an anticipated modulation of the background rate as small as 10% over the orbital timescale. The intrinsic photonic origin of the largest background component also allows for efficient modelling, supported by in-flight active monitoring, making it possible to predict systematic residuals significantly better than the requirement of 1%, and actually meeting the 0.25% science goal.

  8. Large eddy simulation for aerodynamics: status and perspectives.

    PubMed

    Sagaut, Pierre; Deck, Sébastien

    2009-07-28

    The present paper provides an up-to-date survey of the use of large eddy simulation (LES) and its sequels for engineering applications related to aerodynamics. The most recent landmark achievements are presented. Two categories of problem may be distinguished, depending on whether or not the location of separation is triggered by the geometry. In the first case, LES can be considered a mature technique, and recent hybrid Reynolds-averaged Navier-Stokes (RANS)-LES methods do not allow for a significant increase in geometrical complexity and/or Reynolds number with respect to classical LES. When attached boundary layers have a significant impact on the global flow dynamics, the use of hybrid RANS-LES remains the principal strategy to reduce computational cost compared to LES. Another striking observation is that the level of validation is most of the time restricted to time-averaged global quantities, with a detailed analysis of the flow unsteadiness missing. Therefore, a clear need for detailed validation in the near future is identified. To this end, new issues, such as uncertainty and error quantification and modelling, will be of major importance. First results dealing with uncertainty modelling in unsteady turbulent flow simulation are presented.

  9. Large eddy simulation of a plane turbulent wall jet

    NASA Astrophysics Data System (ADS)

    Dejoan, A.; Leschziner, M. A.

    2005-02-01

    The mean-flow and turbulence properties of a plane wall jet, developing in a stagnant environment, are studied by means of large eddy simulation. The Reynolds number, based on the inlet velocity Uo and the slot height b, is Re=9600, corresponding to recent well-resolved laser Doppler velocimetry and pulsed hot wire measurements of Eriksson et al. The relatively low Reynolds number and the high numerical resolution adopted (8.4 million nodes) allow all scales larger than about 10 Kolmogorov lengths to be captured. Of particular interest are the budgets for turbulence energy and Reynolds stresses, not available from experiments, and their inclusion sheds light on the processes which play a role in the interaction between the near-wall layer and the outer shear layer. Profiles of velocity and turbulent Reynolds stresses in the self-similar region are presented in inner and outer scaling and compared to experimental data. Included are further results for skin friction, evolution of integral quantities and third-order moments. Good agreement is observed, in most respects, between the simulated flow and the corresponding experiment. The budgets demonstrate, among a number of mechanisms, the decisive role played by turbulent transport (via the third moments) in the interaction region, across which information is transmitted between the near-wall layer and the outer layer.

  10. Multi-Resolution Modeling of Large Scale Scientific Simulation Data

    SciTech Connect

    Baldwin, C; Abdulla, G; Critchlow, T

    2003-01-31

    This paper discusses using the wavelet modeling technique as a mechanism for querying large-scale spatio-temporal scientific simulation data. Wavelets have been used successfully in time series analysis and in answering surprise and trend queries. Our approach, however, is driven by the need for compression, which is necessary for viable throughput given the size of the targeted data, along with the end user requirements of the discovery process. Our users would like to run fast queries to check the validity of the simulation algorithms used. In some cases users are willing to accept approximate results if the answer comes back within a reasonable time. In other cases they might want to identify a certain phenomenon and track it over time. We face a unique problem because of the data set sizes. It may take months to generate one set of the targeted data; because of its sheer size, the data cannot be stored on disk for long and thus must be analyzed immediately before it is sent to tape. We integrated wavelets within AQSIM, a system that we are developing to support exploration and analysis of tera-scale data sets. We discuss the way we utilized wavelet decomposition in our domain to facilitate compression and to answer a specific class of queries that is harder to answer with any other modeling technique. We also discuss some of the shortcomings of our implementation and how to address them.
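
    A toy sketch of the compress-then-query idea, using a single-level Haar transform in plain NumPy rather than AQSIM's actual machinery; note that aggregate queries over pair-aligned ranges can be answered exactly from the approximation coefficients alone:

      import numpy as np

      rng = np.random.default_rng(0)
      t = np.linspace(0.0, 1.0, 1024)
      signal = np.sin(2*np.pi*5*t) + 0.1*rng.standard_normal(t.size)

      # One Haar level: pairwise averages (approximation) and differences
      approx = (signal[0::2] + signal[1::2]) / np.sqrt(2)
      detail = (signal[0::2] - signal[1::2]) / np.sqrt(2)

      detail_c = np.where(np.abs(detail) > 0.1, detail, 0.0)  # compress
      kept = approx.size + np.count_nonzero(detail_c)

      # Reconstruct and answer a range-mean query from compressed data
      rec = np.empty_like(signal)
      rec[0::2] = (approx + detail_c) / np.sqrt(2)
      rec[1::2] = (approx - detail_c) / np.sqrt(2)
      err = abs(rec[100:300].mean() - signal[100:300].mean())
      print(f"kept {kept}/{signal.size} coefficients, query error {err:.1e}")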

  11. Constitutive modeling of large inelastic deformation of amorphous polymers: Free volume and shear transformation zone dynamics

    NASA Astrophysics Data System (ADS)

    Voyiadjis, George Z.; Samadi-Dooki, Aref

    2016-06-01

    Due to the lack of the long-range order in their molecular structure, amorphous polymers possess a considerable free volume content in their inter-molecular space. During finite deformation, these free volume holes serve as the potential sites for localized permanent plastic deformation inclusions which are called shear transformation zones (STZs). While the free volume content has been experimentally shown to increase during the course of plastic straining in glassy polymers, thermal analysis of stored energy due to the deformation shows that the STZ nucleation energy decreases at large plastic strains. The evolution of the free volume, and the STZs number density and nucleation energy during the finite straining are formulated in this paper in order to investigate the uniaxial post-yield softening-hardening behavior of the glassy polymers. This study shows that the reduction of the STZ nucleation energy, which is correlated with the free volume increase, brings about the post-yield primary softening of the amorphous polymers up to the steady-state strain value; and the secondary hardening is a result of the increased number density of the STZs, which is required for large plastic strains, while their nucleation energy is stabilized beyond the steady-state strain. The evolutions of the free volume content and STZ nucleation energy are also used to demonstrate the effect of the strain rate, temperature, and thermal history of the sample on its post-yield behavior. The obtained results from the model are compared with the experimental observations on poly(methyl methacrylate) which show a satisfactory consonance.

  12. Direct numerical simulation of scalar transport using unstructured finite-volume schemes

    NASA Astrophysics Data System (ADS)

    Rossi, Riccardo

    2009-03-01

    An unstructured finite-volume method for direct and large-eddy simulations of scalar transport in complex geometries is presented and investigated. The numerical technique is based on a three-level fully implicit time advancement scheme and central spatial interpolation operators. The scalar variable at cell faces is obtained by a symmetric central interpolation scheme, which is formally first-order accurate, or by further employing a high-order correction term which leads to formal second-order accuracy irrespective of the underlying grid. In this framework, deferred-correction and slope-limiter techniques are introduced in order to avoid numerical instabilities in the resulting algebraic transport equation. The accuracy and robustness of the code are initially evaluated by means of basic numerical experiments where the flow field is assigned a priori. A direct numerical simulation of turbulent scalar transport in a channel flow is finally performed to validate the numerical technique against a numerical dataset established by a spectral method. In spite of the linear character of the scalar transport equation, the computed statistics and spectra of the scalar field are found to be significantly affected by the spectral properties of the interpolation schemes. Although the results show improved spectral resolution and greater spatial accuracy for the high-order operator in the analysis of basic scalar transport problems, the low-order central scheme is found superior for high-fidelity simulations of turbulent scalar transport.
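
    A stripped-down analogue of the symmetric central face interpolation, on a uniform periodic 1D grid with explicit SSP-RK3 time stepping in place of the paper's implicit three-level scheme (illustrative only; the actual method is unstructured and three-dimensional):

      import numpy as np

      n, u = 200, 1.0
      dx = 1.0 / n
      x = (np.arange(n) + 0.5) * dx
      phi = np.exp(-100.0 * (x - 0.3)**2)         # initial scalar blob

      def rhs(p):
          face = 0.5 * (p + np.roll(p, -1))       # central face value
          flux = u * face
          return -(flux - np.roll(flux, 1)) / dx  # finite-volume balance

      dt = 0.4 * dx / u
      for _ in range(150):                        # SSP-RK3 time stepping
          p1 = phi + dt * rhs(phi)
          p2 = 0.75 * phi + 0.25 * (p1 + dt * rhs(p1))
          phi = phi / 3.0 + 2.0 / 3.0 * (p2 + dt * rhs(p2))
      print("peak after advecting 0.3 units:", round(float(phi.max()), 3))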

  13. WEST-3 wind turbine simulator development. Volume 2: Verification

    NASA Technical Reports Server (NTRS)

    Sridhar, S.

    1985-01-01

    The details of a study to validate WEST-3, a new real-time wind turbine simulator developed by Paragon Pacific Inc., are presented in this report. For the validation, the MOD-0 wind turbine was simulated on WEST-3. The simulation results were compared with those obtained from previous MOD-0 simulations, and with test data measured during MOD-0 operations. The study was successful in achieving the major objective of proving that WEST-3 yields results which can be used to support a wind turbine development process. The blade bending moments, peak and cyclic, from the WEST-3 simulation correlated reasonably well with the available MOD-0 data. The simulation was also able to predict the resonance phenomena observed during MOD-0 operations. Also presented in the report is a description and solution of a serious numerical instability problem encountered during the study. The problem was caused by the coupling of the rotor and power train models. The results of the study indicate that some parts of the existing WEST-3 simulation model may have to be refined for future work; specifically, the aerodynamics and the procedure used to couple the rotor model with the tower and power train models.

  14. Large eddy simulations of in-cylinder turbulent flows.

    NASA Astrophysics Data System (ADS)

    Banaeizadeh, Araz; Afshari, Asghar; Schock, Harold; Jaberi, Farhad

    2007-11-01

    A high-order numerical model is developed and tested for large eddy simulation (LES) of turbulent flows in internal combustion (IC) engines. In this model, the filtered compressible Navier-Stokes equations in curvilinear coordinate systems are solved via a generalized high-order multi-block compact differencing scheme. The LES model has been applied to three flow configurations: (1) a fixed poppet valve in a sudden expansion, (2) a simple piston-cylinder assembly with a stationary open valve and a harmonically moving flat piston, and (3) a laboratory single-cylinder engine with three moving intake and exhaust valves. The first flow configuration is considered for studying the flow around the valves in IC engines. The second flow configuration is closer to that in IC engines but is based on a single stationary intake/exhaust valve and a relatively simple geometry. It is considered in this work for better understanding of the effects of the moving piston on the large-scale unsteady vortical fluid motions in the cylinder and for further validation of our LES model. The third flow configuration includes all the complexities involved in a realistic single-cylinder IC engine. The flow statistics predicted by LES show good agreement with the available experimental data.

  15. Large eddy simulation modelling of combustion for propulsion applications.

    PubMed

    Fureby, C

    2009-07-28

    Predictive modelling of turbulent combustion is important for the development of air-breathing engines, internal combustion engines, furnaces and for power generation. Significant advances in modelling non-reactive turbulent flows are now possible with the development of large eddy simulation (LES), in which the large energetic scales of the flow are resolved on the grid while the effects of the small scales are modelled. Here, we discuss the use of combustion LES in predictive modelling of propulsion applications such as gas turbine, ramjet and scramjet engines. The LES models used are described in some detail and are validated against laboratory data, of which results from two cases are presented. These validated LES models are then applied to an annular multi-burner gas turbine combustor and a simplified scramjet combustor, for which some additional experimental data are available. For these cases, good agreement with the available reference data is obtained, and the LES predictions are used to elucidate the flow physics in such devices to further enhance our knowledge of these propulsion systems. Particular attention is focused on the influence of the combustion chemistry, turbulence-chemistry interaction, self-ignition, flame holding, burner-to-burner interactions and combustion oscillations.

  16. Geophysics Under Pressure: Large-Volume Presses Versus the Diamond-Anvil Cell

    NASA Astrophysics Data System (ADS)

    Hazen, R. M.

    2002-05-01

    Prior to 1970, the legacy of Harvard physicist Percy Bridgman dominated high-pressure geophysics. Massive presses with large-volume devices, including piston-cylinder, opposed-anvil, and multi-anvil configurations, were widely used in both science and industry to achieve a range of crustal and upper mantle temperatures and pressures. George Kennedy of UCLA was a particularly influential advocate of large-volume apparatus for geophysical research prior to his death in 1980. The high-pressure scene began to change in 1959 with the invention of the diamond-anvil cell, which was designed simultaneously and independently by John Jamieson at the University of Chicago and Alvin Van Valkenburg at the National Bureau of Standards in Washington, DC. The compact, inexpensive diamond cell achieved record static pressures and had the advantage of optical access to the high-pressure environment. Nevertheless, members of the geophysical community, who favored the substantial sample volumes, geothermally relevant temperature range, and satisfying bulk of large-volume presses, initially viewed the diamond cell with indifference or even contempt. Several factors led to a gradual shift in emphasis from large-volume presses to diamond-anvil cells in geophysical research during the 1960s and 1970s. These factors include (1) their relatively low cost at a time of fiscal restraint, (2) Alvin Van Valkenburg's new position as a Program Director at the National Science Foundation in 1964 (when George Kennedy's proposal for a National High-Pressure Laboratory was rejected), (3) the development of lasers and micro-analytical spectroscopic techniques suitable for analyzing samples in a diamond cell, and (4) the attainment of record pressures (e.g., 100 GPa in 1975 by Mao and Bell at the Geophysical Laboratory). Today, a more balanced collaborative approach has been adopted by the geophysics and mineral physics community. Many high-pressure laboratories operate a new generation of less expensive

  17. Shuttle mission simulator requirements report, volume 1, revision C

    NASA Technical Reports Server (NTRS)

    Burke, J. F.

    1973-01-01

    The contractor tasks required to produce a shuttle mission simulator for training crew members and ground personnel are discussed. The tasks will consist of the design, development, production, installation, checkout, and field support of a simulator with two separate crew stations. The tasks include the following: (1) review of spacecraft changes and incorporation of appropriate changes in simulator hardware and software design, and (2) the generation of documentation of design, configuration management, and training used by maintenance and instructor personnel after acceptance for each of the crew stations.

  18. Feasibility study for a numerical aerodynamic simulation facility. Volume 1

    NASA Technical Reports Server (NTRS)

    Lincoln, N. R.; Bergman, R. O.; Bonstrom, D. B.; Brinkman, T. W.; Chiu, S. H. J.; Green, S. S.; Hansen, S. D.; Klein, D. L.; Krohn, H. E.; Prow, R. P.

    1979-01-01

    A Numerical Aerodynamic Simulation Facility (NASF) was designed for the simulation of fluid flow around three-dimensional bodies, both in wind tunnel environments and in free space. The application of numerical simulation to this field of endeavor promised to yield economies in aerodynamic and aircraft body designs. A model for a NASF/FMP (Flow Model Processor) ensemble using a possible approach to meeting NASF goals is presented. The computer hardware and software are presented, along with the entire design and performance analysis and evaluation.

  19. Large Eddy Simulation Study for Fluid Disintegration and Mixing

    NASA Technical Reports Server (NTRS)

    Bellan, Josette; Taskinoglu, Ezgi

    2011-01-01

    A new modeling approach is based on the concept of large eddy simulation (LES), within which the large scales are computed and the small scales are modeled. The new approach is expected to retain the fidelity of the physics while also being computationally efficient. Typically, only models for the small-scale fluxes of momentum, species, and enthalpy are used to reintroduce into the simulation the physics lost because the computation only resolves the large scales. These models are called subgrid-scale (SGS) models because they operate at a scale smaller than the LES grid. In a previous study of thermodynamically supercritical fluid disintegration and mixing, additional small-scale terms, one in the momentum and one in the energy conservation equations, were identified as requiring modeling. These additional terms were due to the tight coupling between dynamics and real-gas thermodynamics. It was inferred that without the additional term in the momentum equation, the high density-gradient magnitude regions, experimentally identified as a characteristic feature of these flows, would not be accurately predicted; these regions were experimentally shown to redistribute turbulence in the flow. It was likewise inferred that without the additional term in the energy equation, the heat flux magnitude could not be accurately predicted; the heat flux to the wall of combustion devices is a crucial quantity that determines the necessary wall material properties. The present work involves situations where only the term in the momentum equation is important. Without this additional term in the momentum equation, neither the SGS-flux constant-coefficient Smagorinsky model nor the SGS-flux constant-coefficient Gradient model could reproduce in LES the pressure field or the high density-gradient magnitude regions; the SGS-flux constant-coefficient Scale-Similarity model was the most successful in this endeavor although not
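
    For reference, the constant-coefficient Smagorinsky model named above closes the SGS momentum flux with an eddy viscosity nu_t = (Cs*Delta)^2 |S|; a minimal evaluation on a toy 2D velocity field (the study's flows are 3D, supercritical, and far richer):

      import numpy as np

      Cs, n = 0.17, 64                         # typical Smagorinsky constant
      dx = 2.0 * np.pi / n
      X, Y = np.meshgrid(np.arange(n) * dx, np.arange(n) * dx, indexing="ij")
      u = np.sin(X) * np.cos(Y)                # Taylor-Green-like field
      v = -np.cos(X) * np.sin(Y)

      dudx, dudy = np.gradient(u, dx)
      dvdx, dvdy = np.gradient(v, dx)
      S11, S22 = dudx, dvdy
      S12 = 0.5 * (dudy + dvdx)
      Smag = np.sqrt(2.0 * (S11**2 + S22**2 + 2.0 * S12**2))  # |S|
      nu_t = (Cs * dx)**2 * Smag               # eddy viscosity field
      print("max eddy viscosity:", float(nu_t.max()))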

  20. Assembly, operation and disassembly manual for the Battelle Large Volume Water Sampler (BLVWS)

    SciTech Connect

    Thomas, V.W.; Campbell, R.M.

    1984-12-01

    Assembly, operation and disassembly of the Battelle Large Volume Water Sampler (BLVWS) are described in detail. Step by step instructions of assembly, general operation and disassembly are provided to allow an operator completely unfamiliar with the sampler to successfully apply the BLVWS to his research sampling needs. The sampler permits concentration of both particulate and dissolved radionuclides from large volumes of ocean and fresh water. The water sample passes through a filtration section for particle removal then through sorption or ion exchange beds where species of interest are removed. The sampler components which contact the water being sampled are constructed of polyvinylchloride (PVC). The sampler has been successfully applied to many sampling needs over the past fifteen years. 9 references, 8 figures.

  1. HYBRID BRIDGMAN ANVIL DESIGN: AN OPTICAL WINDOW FOR IN-SITU SPECTROSCOPY IN LARGE VOLUME PRESSES

    SciTech Connect

    Lipp, M J; Evans, W J; Yoo, C S

    2005-07-29

    The absence of in-situ optical probes for large volume presses often limits their application to high-pressure materials research. In this paper, we present a unique anvil/optical window design for use in large volume presses, consisting of an inverted diamond anvil seated in a Bridgman-type anvil. A small cylindrical aperture through the Bridgman anvil, ending at the back of the diamond anvil, allows optical access to the sample chamber and permits direct optical spectroscopy measurements, such as ruby fluorescence (for in-situ pressure) or Raman spectroscopy. The performance of this anvil design has been demonstrated by loading KBr to a pressure of 14.5 GPa.
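
    The ruby fluorescence measurement mentioned above converts the shift of the R1 line into pressure; a sketch using the widely used quasi-hydrostatic calibration of Mao et al. (the abstract does not say which calibration was applied):

      # P = (A/B) * [ (lambda/lambda0)^B - 1 ], with A = 1904 GPa,
      # B = 7.665 (quasi-hydrostatic) and lambda0 = 694.24 nm.
      A, B, lam0 = 1904.0, 7.665, 694.24

      def ruby_pressure(lam_nm):
          return (A / B) * ((lam_nm / lam0) ** B - 1.0)

      print(f"R1 at 698.5 nm -> P = {ruby_pressure(698.5):.1f} GPa")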

  2. The large volume calorimeter for measuring the "pressure cooker" shipping container

    SciTech Connect

    Kasperski, P.W.; Duff, M.F.; Wetzel, J.R.; Baker, L.B.; MacMurdo, K.W.

    1991-01-01

    A precise, low wattage, large volume calorimeter system has been developed at Mound to measure two configurations of the 12081 containment vessel. This system was developed and constructed to perform verification measurements at the Savannah River Site. The calorimeter system has performance design specifications of ±0.3% error above the 2-watt level, and ±(0.03% + 0.006 watts) at power levels below 2 watts (one sigma). Data collected during performance testing show measurement errors well within this range, even down to 0.1-watt power levels. The development of this calorimeter shows that ultra-precise measurements can be achieved on extremely large volume sample configurations. 1 ref., 5 figs.

  3. Large-aperture chirped volume Bragg grating based fiber CPA system.

    PubMed

    Liao, Kai-Hsiu; Cheng, Ming-Yuan; Flecher, Emilie; Smirnov, Vadim I; Glebov, Leonid B; Galvanauskas, Almantas

    2007-04-16

    A fiber chirped pulse amplification (CPA) system at 1558 nm was demonstrated using a large-aperture volume Bragg grating stretcher and compressor made of photo-thermo-refractive (PTR) glass. Such PTR glass based gratings represent a new type of pulse stretching and compressing device which is compact, monolithic and optically efficient. Furthermore, since PTR glass technology enables volume gratings with transverse apertures which are large, homogeneous and scalable, it also enables high pulse energies and powers far exceeding those achievable with other existing compact pulse-compression technologies. Additionally, the reciprocity of chirped gratings with respect to stretching and compression makes it possible to address a long-standing problem in CPA system design: stretcher-compressor dispersion mismatch.

  4. Large-aperture chirped volume Bragg grating based fiber CPA system

    NASA Astrophysics Data System (ADS)

    Liao, Kai-Hsiu; Cheng, Ming-Yuan; Flecher, Emilie; Smirnov, Vadim I.; Glebov, Leonid B.; Galvanauskas, Almantas

    2007-04-01

    A fiber chirped pulse amplification (CPA) system at 1558 nm was demonstrated using a large-aperture volume Bragg grating stretcher and compressor made of photo-thermo-refractive (PTR) glass. Such PTR glass based gratings represent a new type of pulse stretching and compressing device which is compact, monolithic and optically efficient. Furthermore, since PTR glass technology enables volume gratings with transverse apertures which are large, homogeneous and scalable, it also enables high pulse energies and powers far exceeding those achievable with other existing compact pulse-compression technologies. Additionally, the reciprocity of chirped gratings with respect to stretching and compression makes it possible to address a long-standing problem in CPA system design: stretcher-compressor dispersion mismatch.

  5. Computer simulation of reflective volume grating holographic data storage.

    PubMed

    Gombkötő, Balázs; Koppa, Pál; Sütő, Attila; Lőrincz, Emőke

    2007-07-01

    The shift selectivity of a reflective-type spherical reference wave volume hologram is investigated using a nonparaxial numerical modeling based on a multiple-thin-layer implementation of a volume integral equation. The method can be easily parallelized on multiple computers. According to the results, the falloff of the diffraction efficiency due to the readout shift shows neither Bragg zeros nor oscillation with our parameter set. This agrees with our earlier study of smaller and transmissive holograms. Interhologram cross talk of shift-multiplexed holograms is also modeled using the same method, together with sparse modulation block coding and correlation decoding of data. Signal-to-noise ratio and raw bit error rate values are calculated.

  6. Technical note: rapid, large-volume resuscitation at resuscitative thoracotomy by intra-cardiac catheterization

    PubMed Central

    Cawich, Shamir O; Naraynsingh, Vijay

    2016-01-01

    An emergency thoracotomy may be life-saving by achieving four goals: (i) releasing cardiac tamponade, (ii) controlling haemorrhage, (iii) allowing access for internal cardiac massage and (iv) clamping the descending aorta to isolate circulation to the upper torso in damage control surgery. We theorize that a new goal should be achieving rapid, large-volume fluid resuscitation and we describe a technique to achieve this. PMID:27887010

  7. Rapid Adaptive Optical Recovery of Optimal Resolution over Large Volumes

    PubMed Central

    Wang, Kai; Milkie, Dan; Saxena, Ankur; Engerer, Peter; Misgeld, Thomas; Bronner, Marianne E.; Mumm, Jeff; Betzig, Eric

    2014-01-01

    Using a de-scanned, laser-induced guide star and direct wavefront sensing, we demonstrate adaptive correction of complex optical aberrations at high numerical aperture and a 14 ms update rate. This permits us to compensate for the rapid spatial variation in aberration often encountered in biological specimens, and to recover diffraction-limited imaging over large (>240 μm)³ volumes. We applied this to image fine neuronal processes and subcellular dynamics within the zebrafish brain. PMID:24727653

  8. Scanning laser optical computed tomography system for large volume 3D dosimetry

    NASA Astrophysics Data System (ADS)

    Dekker, Kurtis H.; Battista, Jerry J.; Jordan, Kevin J.

    2017-04-01

    Stray light causes artifacts in optical computed tomography (CT) that negatively affect the accuracy of radiation dosimetry in gels or solids. Scatter effects are exacerbated by a large dosimeter volume, which is desirable for direct verification of modern radiotherapy treatment plans such as multiple-isocenter radiosurgery. The goal in this study was to design and characterize an optical CT system that achieves high accuracy primary transmission measurements through effective stray light rejection, while maintaining sufficient scan speed for practical application. We present an optical imaging platform that uses a galvanometer mirror for horizontal scanning, and a translation stage for vertical movement of a laser beam and small area detector for minimal stray light production and acceptance. This is coupled with a custom lens-shaped optical CT aquarium for parallel ray sampling of projections. The scanner images 15 cm diameter, 12 cm height cylindrical volumes at 0.33 mm resolution in approximately 30 min. Attenuation coefficients reconstructed from CT scans agreed with independent cuvette measurements within 2% for both absorbing and scattering solutions as well as small 1.25 cm diameter absorbing phantoms placed within a large, scattering medium that mimics gel. Excellent linearity between the optical CT scanner and the independent measurement was observed for solutions with between 90% and 2% transmission. These results indicate that the scanner should achieve highly accurate dosimetry of large volume dosimeters in a reasonable timeframe for clinical application to radiotherapy dose verification procedures.
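
    The quoted 90%-2% transmission range maps onto attenuation coefficients through Beer-Lambert arithmetic; a sketch assuming a uniform solution and the 15 cm central chord as the path length:

      import math

      L = 15.0                                 # cm, central path length
      for T in (0.90, 0.02):
          mu = -math.log(T) / L                # Beer-Lambert: T = exp(-mu*L)
          print(f"T = {T:4.0%} -> mu = {mu:.4f} 1/cm")
      # 90% -> ~0.007 /cm and 2% -> ~0.26 /cm bracket the linear range.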

  9. 3D cell-printing of large-volume tissues: Application to ear regeneration.

    PubMed

    Lee, Jung-Seob; Kim, Byung Soo; Seo, Dong Hwan; Park, Jeong Hun; Cho, Dong-Woo

    2017-01-17

    The three-dimensional (3D) printing of large-volume cells, printed in a clinically relevant size, is one of the most important challenges in the field of tissue engineering. However, few studies have reported the fabrication of large-volume cell-printed constructs (LCCs). To create LCCs, appropriate fabrication conditions should be established: factors involved include fabrication time, residence time, and temperature control of the cell-laden hydrogel in the syringe to ensure high cell viability and functionality. The prolonged time required for 3D printing of LCCs can reduce cell viability and result in insufficient functionality of the construct, because the cells are exposed to a harsh environment during the printing process. In this regard, we present an advanced 3D cell-printing system composed of a clean air workstation, humidifier, and Peltier system, which provides a suitable printing environment for production of LCCs with high cell viability. We confirmed that the advanced 3D cell-printing system was capable of providing enhanced printability of hydrogels and fabricating an ear-shaped LCC with high cell viability. In vivo results for the ear-shaped LCC also showed that printed chondrocytes proliferated sufficiently and differentiated into cartilage tissue. Thus, we conclude that the advanced 3D cell-printing system is a versatile tool to create cell-printed constructs for the generation of large-volume tissues.

  10. Scanning laser optical computed tomography system for large volume 3D dosimetry.

    PubMed

    Dekker, Kurtis H; Battista, Jerry J; Jordan, Kevin J

    2017-04-07

    Stray light causes artifacts in optical computed tomography (CT) that negatively affect the accuracy of radiation dosimetry in gels or solids. Scatter effects are exacerbated by a large dosimeter volume, which is desirable for direct verification of modern radiotherapy treatment plans such as multiple-isocenter radiosurgery. The goal in this study was to design and characterize an optical CT system that achieves high accuracy primary transmission measurements through effective stray light rejection, while maintaining sufficient scan speed for practical application. We present an optical imaging platform that uses a galvanometer mirror for horizontal scanning, and a translation stage for vertical movement of a laser beam and small area detector for minimal stray light production and acceptance. This is coupled with a custom lens-shaped optical CT aquarium for parallel ray sampling of projections. The scanner images 15 cm diameter, 12 cm height cylindrical volumes at 0.33 mm resolution in approximately 30 min. Attenuation coefficients reconstructed from CT scans agreed with independent cuvette measurements within 2% for both absorbing and scattering solutions as well as small 1.25 cm diameter absorbing phantoms placed within a large, scattering medium that mimics gel. Excellent linearity between the optical CT scanner and the independent measurement was observed for solutions with between 90% and 2% transmission. These results indicate that the scanner should achieve highly accurate dosimetry of large volume dosimeters in a reasonable timeframe for clinical application to radiotherapy dose verification procedures.

  11. Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Givi, Peyman; Madnia, C. K.; Steinberger, C. J.; Tsai, A.

    1991-01-01

    This research is involved with the implementation of advanced computational schemes based on large eddy simulations (LES) and direct numerical simulations (DNS) to study the phenomenon of mixing and its coupling with chemical reactions in compressible turbulent flows. In the efforts related to LES, a research program was initiated to extend the present capabilities of this method for the treatment of chemically reacting flows, whereas in the DNS efforts, the focus was on detailed investigations of the effects of compressibility, heat release, and nonequilibrium kinetics modeling in high speed reacting flows. The efforts to date were primarily focused on simulations of simple flows, namely, homogeneous compressible flows and temporally developing high speed mixing layers. A summary of the accomplishments is provided.

  12. Conference on physics from large gamma-ray detector arrays. Volume 2: Proceedings

    NASA Astrophysics Data System (ADS)

    The conference on 'Physics from Large gamma-ray Detector Arrays' is a continuation of the series of conferences that have been organized every two years by the North American Heavy-ion Laboratories. The aim of the conference this year was to encourage discussion of the physics that can be studied with such large arrays. This volume is the collected proceedings from this conference. It discusses properties of nuclear states which can be created in heavy-ion reactions, and which can be observed via such detector systems.

  13. Prospects of the search for neutrino bursts from supernovae with Baksan large volume scintillation detector

    NASA Astrophysics Data System (ADS)

    Petkov, V. B.

    2016-11-01

    Observing a high-statistics neutrino signal from supernova explosions in the Galaxy is a major goal of low-energy neutrino astronomy. The prospects for detecting all flavors of neutrinos and antineutrinos from a core-collapse supernova (ccSN) in operating and forthcoming large liquid scintillation detectors (LLSDs) are widely discussed at present. One of the proposed LLSDs is the Baksan Large Volume Scintillation Detector (BLVSD). This detector will be installed at the Baksan Neutrino Observatory (BNO) of the Institute for Nuclear Research, Russian Academy of Sciences, at a depth of 4800 m.w.e. Low-energy neutrino astronomy is one of the main lines of research of the BLVSD.

  14. Film cooling from inclined cylindrical holes using large eddy simulations

    NASA Astrophysics Data System (ADS)

    Peet, Yulia V.

    2006-12-01

    The goal of the present study is to investigate numerically the physics of the flow which occurs during film cooling from inclined cylindrical holes. Film cooling is a technique used in the gas turbine industry to reduce heat fluxes to the turbine blade surface. A large eddy simulation (LES) is performed modeling a realistic film cooling configuration, which consists of a large stagnation-type reservoir feeding an array of discrete cooling holes (film holes) flowing into a flat plate turbulent boundary layer. A special computational methodology is developed for this problem, involving coupled simulations using multiple computational codes. A fully compressible LES code is used in the area above the flat plate, while a low Mach number LES code is employed in the plenum and film holes. The motivation for using different codes comes from the essential difference in the nature of the flow in these different regions. The flowfield is analyzed inside the plenum, the film hole, and the crossflow region. Flow inside the plenum is stagnant, except for the region close to the exit, where it accelerates rapidly to turn into the hole. The sharp radius of turning at the trailing edge of the plenum-pipe connection causes the flow to separate from the downstream wall of the film hole. After coolant injection occurs, a complex flowfield is formed, consisting of coherent vortical structures responsible for bringing hot crossflow fluid into contact with the walls of either the film hole or the blade, thus reducing cooling protection. Mean velocity and turbulence statistics are compared to experimental measurements, yielding good agreement for the mean flowfield and satisfactory agreement for the turbulence quantities. The LES results are used to assess the applicability of basic assumptions of conventional eddy viscosity turbulence models used with the Reynolds-averaged (RANS) approach, namely the isotropy of the eddy viscosity and thermal diffusivity. It is shown here that these assumptions do not hold

  15. Large Eddy Simulation of Vertical Axis Wind Turbines

    NASA Astrophysics Data System (ADS)

    Hezaveh, Seyed Hossein

    Due to several design advantages and operational characteristics, particularly in offshore farms, vertical axis wind turbines (VAWTs) are being reconsidered as a complementary technology to horizontal axis wind turbines (HAWTs). However, considerable gaps remain in our understanding of VAWT performance, since they have been significantly less studied than HAWTs. This thesis examines the performance of isolated VAWTs based on different design parameters and evaluates their characteristics in large wind farms. An actuator line model (ALM) is implemented in an atmospheric boundary layer large eddy simulation (LES) code, with offline coupling to a high-resolution blade-scale unsteady Reynolds-averaged Navier-Stokes (URANS) model. The LES captures the turbine-to-farm scale dynamics, while the URANS captures the blade-to-turbine scale flow. The simulation results are found to be in good agreement with existing experimental datasets. Subsequently, a parametric study of the flow over an isolated VAWT is carried out by varying solidities, height-to-diameter aspect ratios, and tip speed ratios. The analyses of the wake area and power deficits yield an improved understanding of the evolution of VAWT wakes, which in turn enables a more informed selection of turbine designs for wind farms. One of the most important advantages of VAWTs compared to HAWTs is their potential for synergistic interactions that increase their performance when placed in close proximity. Field experiments have confirmed that, unlike HAWTs, VAWTs can enhance and increase total power production when placed near each other. Based on these experiments and using ALM-LES, we also present and test new approaches for VAWT farm configuration. We first design clusters of three turbines, then configure farms consisting of clusters of VAWTs rather than individual turbines. The results confirm that by using a cluster design, the average power density of wind farms can be increased by as much as 60% relative to regular

  16. Large-volume paracentesis with indwelling peritoneal catheter and albumin infusion: a community hospital study

    PubMed Central

    Martin, Daniel K.; Walayat, Saqib; Jinma, Ren; Ahmed, Zohair; Ragunathan, Karthik; Dhillon, Sonu

    2016-01-01

    Background The management of ascites can be problematic. This is especially true in patients with diuretic refractory ascites who develop a tense abdomen. This often results in hypotension and decreased venous return with resulting renal failure. In this paper, we further examine the risks and benefits of utilizing an indwelling peritoneal catheter to remove large-volume ascites over a 72-h period while maintaining intravascular volume and preventing renal failure. Methods We retrospectively reviewed charts and identified 36 consecutive patients undergoing continuous large-volume paracentesis with an indwelling peritoneal catheter. At the time of drain placement, no patients had signs or laboratory parameters suggestive of spontaneous bacterial peritonitis. The patients underwent ascitic fluid removal through an indwelling peritoneal catheter and were supported with scheduled albumin throughout the duration. The catheter was used to remove up to 3 L every 8 h for a maximum of 72 h. Regular laboratory and ascitic fluid testing was performed. All patients had a clinical follow-up within 3 months after the drain placement. Results An average of 16.5 L was removed over the 72-h time frame of indwelling peritoneal catheter maintenance. The albumin infusion utilized correlated to 12 mg/L removed. The average creatinine trend improved in a statistically significant manner from 1.37 on the day of admission to 1.21 on the day of drain removal. No patients developed renal failure during the hospital course. There were no documented episodes of neutrocytic ascites or bacterial peritonitis throughout the study review. Conclusion Large-volume peritoneal drainage with an indwelling peritoneal catheter is safe and effective for patients with tense ascites. Concomitant albumin infusion allows for maintenance of renal function, and no increase in infectious complications was noted. PMID:27802853

  17. Evaluation of the pressure-volume-temperature (PVT) data of water from experiments and molecular simulations since 1990

    NASA Astrophysics Data System (ADS)

    Guo, Tao; Hu, Jiawen; Mao, Shide; Zhang, Zhigang

    2015-08-01

    Since 1990, many groups of pressure-volume-temperature (PVT) data from experiments and molecular dynamics (MD) or Monte Carlo (MC) simulations have been reported for supercritical and subcritical water. In this work, fifteen groups of PVT data (253.15-4356 K and 0-90.5 GPa) are evaluated in detail with the aid of the highly accurate IAPWS-95 formulation. The evaluation gives the following results: (1) Six datasets are found to be of good accuracy. They include the simulated results based on the SPC/E potential above 100 MPa and those derived from sound velocity measurements, but the simulated results below 100 MPa have large uncertainties. (2) The data from measurements with a piston-cylinder apparatus and simulations with an exp-6 potential contain large uncertainties and systematic deviations. (3) The other seven datasets show obvious systematic deviations. They include those from experiments with synthetic fluid inclusion techniques (three groups), measured velocities of sound (one group), an automated high-pressure dilatometer (one group), and simulations with the TIP4P potential (two groups), where the simulated data based on the TIP4P potential below 200 MPa have large uncertainties. (4) The simulated data, except those below 1 GPa, agree with each other within 2-3%, and mostly within 2%. The data from fluid inclusions show similar systematic deviations, which are less than 2-5%. The data obtained with the automated high-pressure dilatometer and those derived from sound velocity measurements agree with each other within 0.3-0.6% in most cases, except for those above 10 GPa. In principle, the systematic deviations mentioned above, except for those of the simulated data below 1 GPa, can be largely eliminated or significantly reduced by appropriate corrections, and then the accuracy of the relevant data can be improved significantly. These are very important for the improvement of experiments or simulations and the refinement and correct use of the PVT data in developing
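
    Reference values from the IAPWS-95 formulation are easy to generate for such comparisons, for example with the CoolProp package (an assumption here; the authors do not state which implementation they used), whose 'Water' equation of state implements IAPWS-95:

      from CoolProp.CoolProp import PropsSI

      T, P = 673.15, 50e6                      # 400 C, 50 MPa (supercritical)
      rho = PropsSI("D", "T", T, "P", P, "Water")   # density, kg/m^3
      print(f"IAPWS-95 density at {T} K, {P/1e6:.0f} MPa: {rho:.1f} kg/m^3")
      # A simulated or measured density rho_sim is then evaluated as a
      # percent deviation: 100 * (rho_sim - rho) / rho.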

  18. STAGE 64: SIMULATOR PROGRAMMING SPECIFICATIONS MANUAL. VOLUME III. DAMAGE.

    DTIC Science & Technology

    The Damage package of the STAGE Simulator is a group of six complexes which under normal running conditions assess damage to the following five types... preliminary control routine. Under nonoptimal running conditions, the damage assessment is made by the five complexes at the end of each time period during which a ground zero has occurred.

  19. Shuttle mission simulator baseline definition report, volume 2

    NASA Technical Reports Server (NTRS)

    Dahlberg, A. W.; Small, D. E.

    1973-01-01

    The baseline definition report for the space shuttle mission simulator is presented. The subjects discussed are: (1) the general configurations, (2) motion base crew station, (3) instructor operator station complex, (4) display devices, (5) electromagnetic compatibility, (6) external interface equipment, (7) data conversion equipment, (8) fixed base crew station equipment, and (9) computer complex. Block diagrams of the supporting subsystems are provided.

  20. Program to Optimize Simulated Trajectories (POST). Volume 2: Utilization manual

    NASA Technical Reports Server (NTRS)

    Brauer, G. L.; Cornick, D. E.; Habeger, A. R.; Petersen, F. M.; Stevenson, R.

    1975-01-01

    Information pertinent to users of the program to optimize simulated trajectories (POST) is presented. The input required and the output available are described for each of the trajectory and targeting/optimization options. A sample input listing and the resulting output are given.

  1. RSRM top hat cover simulator lightning test, volume 1

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The test sequence was to measure electric and magnetic fields induced inside a redesigned solid rocket motor case when a simulated lightning discharge strikes an exposed top hat cover simulator. The test sequence was conducted between 21 June and 17 July 1990. Thirty-six high rate-of-rise Marx generator discharges and eight high current bank discharges were injected onto three different test article configurations. Attach points included three locations on the top hat cover simulator and two locations on the mounting bolts. Top hat cover simulator damage, mounting bolt damage, and grain cover damage were observed. Overall electric field levels were well below 30 kilovolts/meter. Electric field levels ranged from 184.7 to 345.9 volts/meter and magnetic field levels were calculated from 6.921 to 39.73 amperes/meter. It is recommended that the redesigned solid rocket motor top hat cover be used in Configuration 1 or Configuration 2 as an interim lightning protection device until a lightweight cover can be designed.

  2. Analytical simulation of SPS system performance, volume 3, phase 3

    NASA Technical Reports Server (NTRS)

    Kantak, A. V.; Lindsey, W. C.

    1980-01-01

    The simulation model for the Solar Power Satellite space antenna and the associated system imperfections are described. Overall power transfer efficiency, the key performance issue, is discussed as a function of the system imperfections. Other system performance measures discussed include average power pattern, mean beam gain reduction, and pointing error.

  3. Program to Optimize Simulated Trajectories (POST). Volume 3: Programmer's manual

    NASA Technical Reports Server (NTRS)

    Brauer, G. L.; Cornick, D. E.; Habeger, A. R.; Petersen, F. M.; Stevenson, R.

    1975-01-01

    Information pertinent to the programmer and relating to the program to optimize simulated trajectories (POST) is presented. Topics discussed include: program structure and logic, subroutine listings and flow charts, and internal FORTRAN symbols. The POST core requirements are summarized along with program macrologic.

  4. Shuttle mission simulator requirements report, volume 1, revision A

    NASA Technical Reports Server (NTRS)

    Burke, J. F.

    1973-01-01

    The tasks required to design, develop, produce, and field support a shuttle mission simulator for training crew members and ground support personnel are defined. The requirements for program management, control, systems engineering, and design and development are discussed, along with the design and construction standards, software design, control and display, communication and tracking, and systems integration.

  5. Unphysical scalar excursions in large-eddy simulations

    NASA Astrophysics Data System (ADS)

    Matheou, Georgios; Dimotakis, Paul

    2016-11-01

    The range of physically realizable values of passive scalar fields in any flow is bounded by their boundary values. The current investigation focuses on the local conservation of passive scalar concentration fields in turbulent flows and the ability of the large-eddy simulation (LES) method to observe the boundedness of passive scalar concentrations. In practice, as a result of numerical artifacts, this fundamental constraint is often violated, with scalars exhibiting unphysical excursions. The present study characterizes passive-scalar excursions in LES of a turbulent shear flow and examines methods for error diagnosis. Typically, scalar-excursion errors are diagnosed as violations of global boundedness, i.e., detecting scalar-concentration values outside boundary/initial condition bounds. To quantify errors in mixed-fluid regions, a local scalar excursion error metric is defined with respect to the local non-diffusive limit. Analysis of such errors shows that unphysical scalar excursions in LES result from dispersive errors of the convection-term discretization when the subgrid-scale (SGS) model provides insufficient dissipation to produce a sufficiently smooth scalar field. Local scalar excursion errors are found not to be correlated with the local scalar-gradient magnitude. This work is supported by AFOSR, DOE, and Caltech.
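
    The two diagnostics described above, global boundedness violation and a local excursion measure, can be illustrated schematically. The NumPy sketch below flags values outside fixed initial/boundary bounds and, as an illustrative stand-in for the paper's local non-diffusive limit, values lying outside the extrema of their immediate neighbors (newly created local extrema); it is not the paper's exact metric.

    # Schematic diagnostics for unphysical scalar excursions in an LES field.
    # The local metric (excursion beyond neighborhood extrema) is a common
    # boundedness proxy, not the paper's exact local non-diffusive limit.
    import numpy as np

    def global_excursions(c, c_min=0.0, c_max=1.0):
        # fraction of cells violating boundary/initial-condition bounds
        return np.mean((c < c_min) | (c > c_max))

    def local_excursions(c):
        # flag cells lying outside the extrema of their 6-point neighborhood,
        # i.e. newly created local extrema (periodic boundaries assumed)
        lo = np.minimum.reduce([np.roll(c, s, a) for a in range(3) for s in (-1, 1)])
        hi = np.maximum.reduce([np.roll(c, s, a) for a in range(3) for s in (-1, 1)])
        return np.mean((c < lo) | (c > hi))

    c = np.clip(np.random.rand(32, 32, 32), 0, 1)  # stand-in scalar field
    print(global_excursions(c), local_excursions(c))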

  6. Saturn: A large area x-ray simulation accelerator

    SciTech Connect

    Bloomquist, D.D.; Stinnett, R.W.; McDaniel, D.H.; Lee, J.R.; Sharpe, A.W.; Halbleib, J.A.; Schlitt, L.G.; Spence, P.W.; Corcoran, P.

    1987-01-01

    Saturn is the result of a major metamorphosis of the Particle Beam Fusion Accelerator-I (PBFA-I) from an ICF research facility to the large-area x-ray source of the Simulation Technology Laboratory (STL) project. Renamed Saturn, for its unique multiple-ring diode design, the facility is designed to take advantage of the numerous advances in pulsed power technology made by the ICF program in recent years and much of the existing PBFA-I support system. Saturn will include significant upgrades in the energy storage and pulse-forming sections. The 36 magnetically insulated transmission lines (MITLs) that provided power flow to the ion diode of PBFA-I were replaced by a system of vertical triplate water transmission lines. These lines are connected to three horizontal triplate disks in a water convolute section. Power will flow through an insulator stack into radial MITLs that drive the three-ring diode. Saturn is designed to operate with a maximum of 750 kJ coupled to the three-ring e-beam diode with a peak power of 25 TW to provide an x-ray exposure capability of 5 x 10^12 rads/s (Si) and 5 cal/g (Au) over 500 cm^2.

  7. Dynamically stable implosions in a large simulation dataset

    NASA Astrophysics Data System (ADS)

    Peterson, J. Luc; Field, John; Humbird, Kelli; Brandon, Scott; Langer, Steve; Nora, Ryan; Spears, Brian

    2016-10-01

    Asymmetric implosion drive can severely impact the performance of inertial confinement fusion capsules. In particular the time-varying radiation environment produced in near-vacuum hohlraum experiments at the National Ignition Facility is thought to limit the conversion efficiency of shell kinetic energy into hotspot internal energy. To investigate the role of dynamic asymmetries in implosion behavior we have created a large database of 2D capsule implosions of varying drive amplitude, drive asymmetry and capsule gas fill that spans 13 dimensions and consists of over 60,000 individual simulations. A novel in-transit analysis scheme allowed for the real-time processing of petabytes of raw data into hundreds of terabytes of physical metrics and synthetic images, and supervised learning algorithms identified regions of parameter space that robustly produce high yield. We will discuss the first results from this dataset and explore the dynamics of implosions that produce significant yield under asymmetric drives. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344, Lawrence Livermore National Security, LLC. LLNL-ABS-697262.

  8. On the Computation of Sound by Large-Eddy Simulations

    NASA Technical Reports Server (NTRS)

    Piomelli, Ugo; Streett, Craig L.; Sarkar, Sutanu

    1997-01-01

    The effect of the small scales on the source term in Lighthill's acoustic analogy is investigated, with the objective of determining the accuracy of large-eddy simulations when applied to studies of flow-generated sound. The distribution of the turbulent quadrupole is predicted accurately, if models that take into account the trace of the SGS stresses are used. Its spatial distribution is also correct, indicating that the low-wave-number (or frequency) part of the sound spectrum can be predicted well by LES. Filtering, however, removes the small-scale fluctuations that contribute significantly to the higher derivatives in space and time of Lighthill's stress tensor T(sub ij). The rms fluctuations of the filtered derivatives are substantially lower than those of the unfiltered quantities. The small scales, however, are not strongly correlated, and are not expected to contribute significantly to the far-field sound; separate modeling of the subgrid-scale density fluctuations might, however, be required in some configurations.

  9. Large Eddy Simulation of Turbulent Flow in a Ribbed Pipe

    NASA Astrophysics Data System (ADS)

    Kang, Changwoo; Yang, Kyung-Soo

    2011-11-01

    Turbulent flow in a pipe with periodically wall-mounted ribs has been investigated by large eddy simulation with a dynamic subgrid-scale model. The value of Re considered is 98,000, based on hydraulic diameter and mean bulk velocity. An immersed boundary method was employed to implement the ribs in the computational domain. The spacing of the ribs is the key parameter to produce the d-type, intermediate and k-type roughness flows. The mean velocity profiles and turbulent intensities obtained from the present LES are in good agreement with the experimental measurements currently available. Turbulence statistics, including budgets of the Reynolds stresses, were computed, and analyzed to elucidate turbulence structures, especially around the ribs. In particular, effects of the ribs are identified by comparing the turbulence structures with those of smooth pipe flow. The present investigation is relevant to the erosion/corrosion that often occurs around a protruding roughness in a pipe system. This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2010-0008457).

  10. A family of dynamic models for large-eddy simulation

    NASA Technical Reports Server (NTRS)

    Carati, D.; Jansen, K.; Lund, T.

    1995-01-01

    Since its first application, the dynamic procedure has been recognized as an effective means to compute rather than prescribe the unknown coefficients that appear in a subgrid-scale model for Large-Eddy Simulation (LES). The dynamic procedure is usually used to determine the nondimensional coefficient in the Smagorinsky (1963) model. In reality the procedure is quite general and it is not limited to the Smagorinsky model by any theoretical or practical constraints. The purpose of this note is to consider a generalized family of dynamic eddy viscosity models that do not necessarily rely on the local equilibrium assumption built into the Smagorinsky model. By invoking an inertial range assumption, it will be shown that the coefficients in the new models need not be nondimensional. This additional degree of freedom allows the use of models that are scaled on traditionally unknown quantities such as the dissipation rate. In certain cases, the dynamic models with dimensional coefficients are simpler to implement, and allow for a 30% reduction in the number of required filtering operations.
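
    As an illustration of the dynamic procedure this note builds on, the toy one-dimensional analog below computes a Smagorinsky-like coefficient from the resolved field via the Germano identity and a least-squares (Lilly-style) average, rather than prescribing it. The field, filter widths, and filter-width ratio are contrived for illustration; a real LES applies this to the full 3D strain-rate tensor.

    # Toy 1D analog of the dynamic procedure: the model coefficient is
    # computed from the resolved field, not prescribed. Not a full LES code.
    import numpy as np

    n, L = 256, 2 * np.pi
    dx = L / n
    x = np.arange(n) * dx
    u = np.sin(x) + 0.3 * np.sin(7 * x + 1.2)   # stand-in resolved velocity

    def test_filter(f, w=5):
        # top-hat test filter of width w cells, periodic via tiling
        k = np.ones(w) / w
        return np.convolve(np.tile(f, 3), k, mode="same")[n:2 * n]

    def strain(f):
        # S = du/dx via centered differences, periodic
        return (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)

    alpha = 5.0                      # assumed test-to-grid filter-width ratio
    S, Sf = strain(u), strain(test_filter(u))
    # Germano identity: L = filt(u u) - filt(u) filt(u) = C * M
    Lg = test_filter(u * u) - test_filter(u) ** 2
    M = -2 * ((alpha * dx) ** 2 * np.abs(Sf) * Sf
              - dx ** 2 * test_filter(np.abs(S) * S))
    C = np.sum(Lg * M) / np.sum(M * M)   # least-squares (Lilly) average
    print("dynamic coefficient C =", C)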

  11. Simulation of fatigue crack growth under large scale yielding conditions

    NASA Astrophysics Data System (ADS)

    Schweizer, Christoph; Seifert, Thomas; Riedel, Hermann

    2010-07-01

    A simple mechanism-based model for fatigue crack growth assumes a linear correlation between the cyclic crack-tip opening displacement (ΔCTOD) and the crack growth increment (da/dN). The objective of this work is to compare analytical estimates of ΔCTOD with results of numerical calculations under large scale yielding conditions and to verify the physical basis of the model by comparing the predicted and the measured evolution of the crack length in a 10%-chromium steel. The material is described by a rate-independent cyclic plasticity model with power-law hardening and Masing behavior. During the tension-going part of the cycle, nodes at the crack-tip are released such that the crack growth increment corresponds approximately to the crack-tip opening. The finite element analysis, performed in ABAQUS, is continued until a stabilized value of ΔCTOD is reached. The analytical model contains an interpolation formula for the J-integral, which is generalized to account for cyclic loading and crack closure. The simulated and estimated ΔCTOD are reasonably consistent. The predicted crack length evolution is found to be in good agreement with the behavior of microcracks observed in a 10%-chromium steel.

  12. Nuclear EMP simulation for large-scale urban environments. FDTD for electrically large problems.

    SciTech Connect

    Smith, William S.; Bull, Jeffrey S.; Wilcox, Trevor; Bos, Randall J.; Shao, Xuan-Min; Goorley, John T.; Costigan, Keeley R.

    2012-08-13

    In case of a terrorist nuclear attack in a metropolitan area, EMP measurement could provide: (1) a prompt confirmation of the nature of the explosion (chemical or nuclear) for emergency response; and (2) characterization parameters of the device (reaction history, yield) for technical forensics. However, the urban environment could affect the fidelity of the prompt EMP measurement (as well as all other types of prompt measurement): (1) the nuclear EMP wavefront would no longer be coherent, due to incoherent production, attenuation, and propagation of gammas and electrons; and (2) EMP propagation from the source region outward would undergo complicated transmission, reflection, and diffraction processes. EMP simulation for an electrically large urban environment: (1) a coupled MCNP/FDTD (finite-difference time-domain Maxwell solver) approach; and (2) FDTD tends to be limited to problems that are not 'too' large compared to the wavelengths of interest because of numerical dispersion and anisotropy. We use a higher-order, low-dispersion, isotropic FDTD algorithm for EMP propagation.
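
    For readers unfamiliar with the FDTD half of the coupled MCNP/FDTD approach, a minimal one-dimensional Yee update is sketched below. It uses the standard second-order scheme, not the higher-order low-dispersion algorithm the authors employ, and the grid, source, and boundary choices are arbitrary.

    # Minimal 1D FDTD (Yee) sketch: leapfrogged E and H updates with a soft
    # Gaussian source; ends of the domain act as fixed (PEC-like) boundaries.
    import numpy as np

    c0 = 299792458.0
    mu0, eps0 = 4e-7 * np.pi, 8.854e-12
    n, steps = 400, 800
    dx = 0.01                       # 1 cm cells
    dt = 0.5 * dx / c0              # Courant factor 0.5 (stable in 1D)
    Ez = np.zeros(n)
    Hy = np.zeros(n - 1)            # staggered half a cell from Ez

    for t in range(steps):
        Hy += dt / (mu0 * dx) * (Ez[1:] - Ez[:-1])         # update H
        Ez[1:-1] += dt / (eps0 * dx) * (Hy[1:] - Hy[:-1])  # update E
        Ez[n // 4] += np.exp(-((t - 60) / 20.0) ** 2)      # soft source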

  13. Large-Scale Atomistic Simulations of Material Failure

    DOE Data Explorer

    Abraham, Farid [IBM Almaden Research]; Duchaineau, Mark [LLNL]; Wirth, Brian [LLNL]; Seager, Mark [LLNL]; De La Rubia, Diaz [LLNL]

    These simulations from 2000 examine the supersonic propagation of cracks and the formation of complex junction structures in metals. Eight simulations concerning brittle fracture, ductile failure, and shockless compression are available.

  14. Shuttle vehicle and mission simulation requirements report, volume 1

    NASA Technical Reports Server (NTRS)

    Burke, J. F.

    1972-01-01

    The requirements for the space shuttle vehicle and mission simulation are developed to analyze the systems, mission, operations, and interfaces. The requirements are developed according to the following subject areas: (1) mission envelope, (2) orbit flight dynamics, (3) shuttle vehicle systems, (4) external interfaces, (5) crew procedures, (6) crew station, (7) visual cues, and (8) aural cues. Line drawings and diagrams of the space shuttle are included to explain the various systems and components.

  15. WEST-3 wind turbine simulator development. Volume 1: Summary

    NASA Technical Reports Server (NTRS)

    Sridhar, S.

    1985-01-01

    This report is a summary description of WEST-3, a new real-time wind turbine simulator developed by Paragon Pacific Inc. WEST-3 is an all digital, fully programmable, high performance parallel processing computer. Contained in the report are descriptions of the WEST-3 hardware and software. WEST-3 consists of a network of Computational Units (CUs) working in parallel. Each CU is a custom designed high speed digital processor operating independently of other CUs. The CU, which is the main building block of the system, is described in some detail. A major contributor to the high performance of the system is the use of a unique method for transferring data among the CUs. The software aspects of WEST-3 covered in the report include the preparation of the simulation model (reformulation, scaling and normalization), and the use of the system software (Translator, Linker, Assembler and Loader). Also given is a description of the wind turbine simulation model used in WEST-3, and some sample results from a study conducted to validate the system. Finally, efforts currently underway to enhance the user friendliness of the system are outlined; these include a 32-bit floating point capability and major improvements in system software.

  16. Mechanically Cooled Large-Volume Germanium Detector Systems for Nuclear Explosion Monitoring DOENA27323-2

    SciTech Connect

    Hull, E.L.

    2006-10-30

    Compact, maintenance-free mechanical cooling systems are being developed to operate large volume high-resolution gamma-ray detectors for field applications. To accomplish this, we are utilizing a newly available generation of Stirling-cycle mechanical coolers to operate the very largest volume germanium detectors with no maintenance. The user will be able to leave these systems unplugged on the shelf until needed. The maintenance-free operating lifetime of these detector systems will exceed 5 years. Three important factors affect the operation of mechanically cooled germanium detectors: temperature, vacuum, and vibration. These factors will be studied in the laboratory at the most fundamental levels to ensure a solid understanding of the physical limitations each factor places on a practical mechanically cooled germanium detector system. Using this knowledge, mechanically cooled germanium detector prototype systems will be designed and fabricated.

  17. The complex aerodynamic footprint of desert locusts revealed by large-volume tomographic particle image velocimetry.

    PubMed

    Henningsson, Per; Michaelis, Dirk; Nakata, Toshiyuki; Schanz, Daniel; Geisler, Reinhard; Schröder, Andreas; Bomphrey, Richard J

    2015-07-06

    Particle image velocimetry has been the preferred experimental technique with which to study the aerodynamics of animal flight for over a decade. In that time, hardware has become more accessible and the software has progressed from the acquisition of planes through the flow field to the reconstruction of small volumetric measurements. Until now, it has not been possible to capture large volumes that incorporate the full wavelength of the aerodynamic track left behind during a complete wingbeat cycle. Here, we use a unique apparatus to acquire the first instantaneous wake volume of a flying animal's entire wingbeat. We confirm the presence of wake deformation behind desert locusts and quantify the effect of that deformation on estimates of aerodynamic force and the efficiency of lift generation. We present previously undescribed vortex wake phenomena, including entrainment around the wing-tip vortices of a set of secondary vortices borne of Kelvin-Helmholtz instability in the shear layer behind the flapping wings.

  18. Non-contact spectroscopic determination of large blood volume fractions in turbid media

    PubMed Central

    Bremmer, Rolf H.; Kanick, Stephen C.; Laan, Nick; Amelink, Arjen; van Leeuwen, Ton G.; Aalders, Maurice C. G.

    2011-01-01

    We report on a non-contact method to quantitatively determine blood volume fractions in turbid media by reflectance spectroscopy in the VIS/NIR spectral wavelength range. This method will be used for spectral analysis of tissue with large absorption coefficients and assist in age determination of bruises and bloodstains. First, a phantom set was constructed to determine the effective photon path length as a function of μa and μs′ on phantoms with an albedo range: 0.02-0.99. Based on these measurements, an empirical model of the path length was established for phantoms with an albedo > 0.1. Next, this model was validated on whole blood mimicking phantoms, to determine the blood volume fractions ρ = 0.12-0.84 within the phantoms (r = 0.993; error < 10%). Finally, the model was proved applicable on cotton fabric phantoms. PMID:21339884

  19. The use of digital volume tomography in imaging an unusually large composite odontoma in the mandible.

    PubMed

    Bhatavadekar, Neel B; Bouquot, Jerry E

    2009-01-01

    The odontoma is the most common of all odontogenic tumors. Digital volume tomography (DVT) provides the major advantages of decreased radiation and cost-effectiveness, as compared to conventional computed tomography. There is no known published report utilizing DVT analysis for assessing and localizing an odontoma. The purpose of this case report was to document the use of digital volume tomography to assess an unusually large composite odontoma in the mandible. Tomographic sections revealed expansion of the buccal cortex and occasional thinning of both the buccal and lingual cortical plates, although there was no pronounced clinically detectable cortical expansion. The sections further demonstrated enamel and dentin in an irregular mass bearing no morphologic similarity to rudimentary teeth. This case highlights the importance of early diagnosis and intervention for treating an odontoma while demonstrating the value of tomographic imaging as an aid to diagnosis.

  20. The complex aerodynamic footprint of desert locusts revealed by large-volume tomographic particle image velocimetry

    PubMed Central

    Henningsson, Per; Michaelis, Dirk; Nakata, Toshiyuki; Schanz, Daniel; Geisler, Reinhard; Schröder, Andreas; Bomphrey, Richard J.

    2015-01-01

    Particle image velocimetry has been the preferred experimental technique with which to study the aerodynamics of animal flight for over a decade. In that time, hardware has become more accessible and the software has progressed from the acquisition of planes through the flow field to the reconstruction of small volumetric measurements. Until now, it has not been possible to capture large volumes that incorporate the full wavelength of the aerodynamic track left behind during a complete wingbeat cycle. Here, we use a unique apparatus to acquire the first instantaneous wake volume of a flying animal's entire wingbeat. We confirm the presence of wake deformation behind desert locusts and quantify the effect of that deformation on estimates of aerodynamic force and the efficiency of lift generation. We present previously undescribed vortex wake phenomena, including entrainment around the wing-tip vortices of a set of secondary vortices borne of Kelvin–Helmholtz instability in the shear layer behind the flapping wings. PMID:26040598

  1. Very Large Area/Volume Microwave ECR Plasma and Ion Source

    NASA Technical Reports Server (NTRS)

    Foster, John E. (Inventor); Patterson, Michael J. (Inventor)

    2009-01-01

    The present invention is an apparatus and method for producing very large area and large volume plasmas. The invention utilizes electron cyclotron resonances in conjunction with permanent magnets to produce dense, uniform plasmas for long life ion thruster applications or for plasma processing applications such as etching, deposition, ion milling and ion implantation. The large area source is at least five times larger than the 12-inch wafers being processed to date. Its rectangular shape makes it easier to accommodate to materials processing than sources that are circular in shape. The source itself represents the largest ECR ion source built to date. It is electrodeless and does not utilize electromagnets to generate the ECR magnetic circuit, nor does it make use of windows.

  2. A volume law for specification of linear channel storage for estimation of large floods

    NASA Astrophysics Data System (ADS)

    Zhang, Shangyou; Cordery, Ian; Sharma, Ashish

    2000-02-01

    A method of estimating large floods using a linear storage-routing approach is presented. The differences between the proposed approach and those traditionally used are (1) that the flood producing properties of basins are represented by a linear system, (2) the storage parameters of the distributed model are determined using a volume law which, unlike other storage-routing models, accounts for the distribution of storage in natural basins, and (3) the basin outflow hydrograph is determined analytically and expressed in a succinct mathematical form. The single model parameter is estimated from observed data without direct fitting, unlike most traditionally used methods. The model was tested by showing it could reproduce observed large floods on a number of basins. This paper compares the proposed approach with a traditionally used storage routing approach using observed flood data from the Hacking River basin in New South Wales, Australia. Results confirm the usefulness of the proposed approach for estimation of large floods.
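
    The linear-reservoir building block that underlies storage routing can be sketched in a few lines; the snippet below routes a synthetic inflow hydrograph through a single linear storage S = kQ. The paper's contribution, distributing such storages according to a volume law, is not reproduced here, and the parameter k and the hydrograph are arbitrary.

    # Schematic linear storage routing: one linear reservoir S = k*Q driven
    # by an inflow hydrograph, integrated with explicit Euler steps.
    import numpy as np

    def linear_reservoir(inflow, k, dt):
        # dS/dt = I - Q with S = k*Q  =>  dQ/dt = (I - Q) / k
        q = np.zeros_like(inflow)
        for i in range(1, len(inflow)):
            q[i] = q[i - 1] + dt / k * (inflow[i - 1] - q[i - 1])
        return q

    t = np.arange(0, 48, 0.5)                          # hours
    inflow = 50 * np.exp(-0.5 * ((t - 6) / 2.0) ** 2)  # synthetic storm inflow, m^3/s
    outflow = linear_reservoir(inflow, k=5.0, dt=0.5)  # k in hours
    print(f"peak attenuated from {inflow.max():.1f} to {outflow.max():.1f} m^3/s")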

  3. New Specimen Access Device for the Large Space Simulator

    NASA Astrophysics Data System (ADS)

    Lazzarini, P.; Ratti, F.

    2004-08-01

    The Large Space Simulator (LSS) is used to simulate in-orbit environmental conditions for spacecraft (S/C) testing. The LSS is intended to be a flexible facility: it can accommodate test articles that differ significantly in shape and weight and carry various instruments. To improve the accessibility to the S/C inside the LSS chamber, a new Specimen Access Device (SAD) has been procured. The SAD provides immediate and easy access to the S/C, thus reducing the amount of time necessary for the installation of set-ups in the LSS. The SAD has been designed as a bridge crane carrying a basket to move the operator into the LSS. The crane moves on parallel rails on the top floor of the LSS building. The SAD is composed of three subsystems: the main bridge, the trolley that moves along the main bridge, and the telescopic mast. A trade-off analysis has been carried out for the telescopic mast design, evaluating the choice between friction pads and rollers to couple the different sections of the mast. The resulting design uses a four-section square mast with roller-driven deployment. This design was chosen for the higher stiffness of the mast, due to the limited number of sections, and because it radically reduces the risk of contamination relative to a solution based on sliding bushings. Analyses have been performed to assess the mechanical behaviour in both static and dynamic conditions. In particular, the telescopic mast has been studied in detail to optimise its stiffness and to check the safety margins in the various operational conditions. To increase the safety of the operations, an anticollision system has been implemented by positioning two kinds of sensors, ultrasonic and contact, on the basket. All translations are regulated by inverters with acceleration and deceleration ramps controlled by a Programmable Logic Controller (PLC). An absolute encoder is installed on each motor to provide the actual position of the

  4. Large-eddy simulation of unidirectional turbulent flow over dunes

    NASA Astrophysics Data System (ADS)

    Omidyeganeh, Mohammad

    We performed large eddy simulation of the flow over a series of two- and three-dimensional dune geometries at laboratory scale using the Lagrangian dynamic eddy-viscosity subgrid-scale model. First, we studied the flow over a standard 2D transverse dune geometry; then bedform three-dimensionality was imposed. Finally, we investigated the turbulent flow over barchan dunes. The results are validated by comparison with simulations and experiments for the 2D dune case, while the results of the 3D dunes are validated qualitatively against experiments. The flow over transverse dunes separates at the dune crest, generating a shear layer that plays a crucial role in the transport of momentum and energy, as well as the generation of coherent structures. Spanwise vortices are generated in the separated shear layer; as they are advected, they undergo lateral instabilities and develop into horseshoe-like structures and finally reach the surface. The ejection that occurs between the legs of the vortex creates the upwelling and downdrafting events on the free surface known as "boils". The three-dimensional separation of flow at the crestline alters the distribution of wall pressure, which may cause secondary flow across the stream. The mean flow is characterized by a pair of counter-rotating streamwise vortices, with core radii of the order of the flow depth. Staggering the crestlines alters the secondary motion; two pairs of streamwise vortices appear (a strong one, centred about the lobe, and a weaker one, coming from the previous dune, centred around the saddle). The flow over barchan dunes presents significant differences to that over transverse dunes. The flow near the bed, upstream of the dune, diverges from the centerline plane; the flow close to the centerline plane separates at the crest and reattaches on the bed. Away from the centerline plane and along the horns, flow separation occurs intermittently. The flow in the separation bubble is routed towards the horns and leaves

  5. Improved engine wall models for Large Eddy Simulation (LES)

    NASA Astrophysics Data System (ADS)

    Plengsaard, Chalearmpol

    Improved wall models for Large Eddy Simulation (LES) are presented in this research. The classical Werner-Wengle (WW) wall shear stress model is used along with a near-wall sub-grid scale viscosity. A sub-grid scale turbulent kinetic energy is employed in a model for the eddy viscosity. To gain better heat flux results, a modified classical variable-density wall heat transfer model is also used. Because no experimental wall shear stress results are available in engines, the fully developed turbulent flow in a square duct is chosen to validate the new wall models. The model constants in the new wall models are set to 0.01 and 0.8, respectively, and are kept constant throughout the investigation. The resulting time- and spatially-averaged velocity and temperature wall functions from the new wall models match well with the law-of-the-wall experimental data at Re = 50,000. In order to study the effect of hot air impinging on walls, jet impingement on a flat plate is also tested with the new wall models. The jet Reynolds number is equal to 21,000 and the jet-to-plate spacing is fixed at H/D = 2.0. As predicted by the new wall models, the time-averaged skin friction coefficient agrees well with experimental data, while the computed Nusselt number agrees fairly well when r/D > 2.0. Additionally, the model is validated using experimental data from a Caterpillar engine operated with conventional diesel combustion. Sixteen different operating engine conditions are simulated. The majority of the predicted heat flux results from each thermocouple location follow similar trends when compared with experimental data. The magnitude of peak heat fluxes as predicted by the new wall models is in the range of typical measured values in diesel combustion, while most heat flux results from previous LES wall models are over-predicted. The new wall models generate more accurate predictions and agree better with experimental data.
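
    As a reference point for the wall modelling discussed above, the sketch below implements the point-wise form of the classical Werner-Wengle relation (linear sublayer up to y+ ≈ 11.8, power law u+ = 8.3 (y+)^(1/7) beyond), recovering the wall shear stress from the velocity at the first off-wall node. The thesis's modified constants and heat-flux treatment are not included, and the sample inputs are arbitrary.

    # Point-wise Werner-Wengle wall shear stress sketch: try the viscous
    # sublayer first, fall back to the inverted 1/7 power law when the
    # implied y+ exceeds the matching point.
    import math

    A, B, YPLUS_M = 8.3, 1.0 / 7.0, 11.81

    def ww_wall_shear(u_p, y_p, nu, rho):
        # viscous sublayer estimate: tau_w = rho * nu * u_p / y_p
        tau_lin = rho * nu * abs(u_p) / y_p
        u_tau = math.sqrt(tau_lin / rho)
        if u_tau * y_p / nu <= YPLUS_M:
            return math.copysign(tau_lin, u_p)
        # invert the power law u/u_tau = A * (y_p * u_tau / nu)**B
        u_tau = (abs(u_p) * nu**B / (A * y_p**B)) ** (1.0 / (1.0 + B))
        return math.copysign(rho * u_tau**2, u_p)

    # arbitrary sample: 5 m/s at 1 mm from the wall, air-like properties
    print(ww_wall_shear(u_p=5.0, y_p=1e-3, nu=1.5e-5, rho=1.2))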

  6. Two-field Kaehler moduli inflation in large volume moduli stabilization

    SciTech Connect

    Yang, Huan-Xiong; Ma, Hong-Liang E-mail: hlma@mail.ustc.edu.cn

    2008-08-15

    In this paper we present a two-field inflation model, which is distinctive in having a non-canonical kinetic Lagrangian and comes from the large volume approach to moduli stabilization in flux compactification of type IIB superstring on a Calabi-Yau orientifold with h^(1,2) > h^(1,1) ≥ 4. The Kaehler moduli are classified as the volume modulus, heavy moduli and two light moduli. The axion-dilaton, complex structure moduli and all heavy Kaehler moduli including the volume modulus are frozen by a non-perturbatively corrected flux superpotential and the α′-corrected Kaehler potential in the large volume limit. The minimum of the scalar potential at which the heavy moduli are stabilized provides the dominant potential energy for the surviving light Kaehler moduli. We consider a simplified case where the axionic components in the light Kaehler moduli are further stabilized at the potential minimum and only the geometrical components are taken as scalar fields to drive an assisted-like inflation. For a certain range of moduli stabilization parameters and inflation initial conditions, we obtain a nearly flat power spectrum of the curvature perturbation, with n_s ≈ 0.96 at Hubble exit, and an inflationary energy scale of 3 × 10^14 GeV. In our model, there is significant correlation between the curvature and isocurvature perturbations on super-Hubble scales, so at the end of inflation a great deal of the curvature power spectrum originates from this correlation.

  7. Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Givi, P.; Madnia, C. K.; Steinberger, C. J.; Frankel, S. H.; Vidoni, T. J.

    1991-01-01

    The main objective is to extend the boundaries within which large eddy simulations (LES) and direct numerical simulations (DNS) can be applied in computational analyses of high speed reacting flows. In the efforts related to LES, we were concerned with developing reliable subgrid closures for modeling of the fluctuation correlations of scalar quantities in reacting turbulent flows. In the work on DNS, we focused our attention to further investigation of the effects of exothermicity in compressible turbulent flows. In our previous work, in the first year of this research, we have considered only 'simple' flows. Currently, we are in the process of extending our analyses for the purpose of modeling more practical flows of current interest at LaRC. A summary of our accomplishments during the third six months of the research is presented.

  8. Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Givi, P.; Madnia, C. K.; Steinberger, C. J.; Frankel, S. H.

    1992-01-01

    The basic objective of this research is to extend the capabilities of Large Eddy Simulations (LES) and Direct Numerical Simulations (DNS) for the computational analyses of high speed reacting flows. In the efforts related to LES, we were primarily involved with assessing the performance of the various modern methods based on the Probability Density Function (PDF) methods for providing closures for treating the subgrid fluctuation correlations of scalar quantities in reacting turbulent flows. In the work on DNS, we concentrated on understanding some of the relevant physics of compressible reacting flows by means of statistical analysis of the data generated by DNS of such flows. In the research conducted in the second year of this program, our efforts focused on the modeling of homogeneous compressible turbulent flows by PDF methods, and on DNS of non-equilibrium reacting high speed mixing layers. Some preliminary work is also in progress on PDF modeling of shear flows, and also on LES of such flows.

  9. Refinement of a mesoscale model for large eddy simulation

    NASA Astrophysics Data System (ADS)

    Gasset, Nicolas

    With the advent of wind energy technology, several methods have become mature and are seen today as standard for predicting and forecasting the wind. However, their results are still site dependent, and the increasing sizes of both modern wind turbines and wind farms push the limits of existing methods. Some of the processes involved extend to the junction between microscales and mesoscales. The main objectives of this thesis are thus to identify, implement and evaluate an approach allowing for microscale and mesoscale ABL flow modelling considering the various challenges of modern wind energy applications. A literature review of ABL flow modelling from microscales to mesoscales first provides an overview of the specificities and abilities of existing methods. Combined mesoscale/large eddy simulation (LES) modelling appears to be the most promising approach, and the Compressible Community Mesoscale Model (MC2) is elected as the basis of the method, in which the components required for LES are added and implemented. A detailed description of the mathematical model and the numerical aspects of the various components of the LES-capable MC2 is then presented so that a complete view of the proposed approach, along with the specificities of its implementation, is provided. This further allows introducing the enhancements and new components of the method (separation of volumetric and deviatoric Reynolds tensor terms, vertical staggering, subgrid scale models, 3D turbulent diffusion, 3D turbulent kinetic energy equation), as well as the adaptation of its operating mode to allow for LES (initialization, large scale geostrophic forcing, surface and lateral boundaries). Finally, fundamental aspects and new components of the proposed approach are evaluated based on theoretical 1D Ekman boundary layer and 3D unsteady shear- and buoyancy-driven homogeneous surface full ABL cases. The model behaviour at high resolution as well as the components required for LES in MC2 are all finely

  10. Shuttle mission simulator. Volume 2: Requirement report, volume 2, revision C

    NASA Technical Reports Server (NTRS)

    Burke, J. F.

    1973-01-01

    The requirements for space shuttle simulation which are discussed include: general requirements, program management, system engineering, design and development, crew stations, on-board computers, and systems integration. For Vol. 1, revision A see N73-22203, for Vol 2, revision A see N73-22204.

  11. Multi-Resolution Modeling of Large Scale Scientific Simulation Data

    SciTech Connect

    Baldwin, C; Abdulla, G; Critchlow, T

    2002-02-25

    Data produced by large scale scientific simulations, experiments, and observations can easily reach terabytes in size. The ability to examine data-sets of this magnitude, even in moderate detail, is problematic at best. Generally this scientific data consists of multivariate field quantities with complex inter-variable correlations and spatial-temporal structure. To provide scientists and engineers with the ability to explore and analyze such data sets we are using a twofold approach. First, we model the data with the objective of creating a compressed yet manageable representation. Second, with that compressed representation, we provide the user with the ability to query the resulting approximation to obtain approximate yet sufficient answers; a process called ad hoc querying. This paper is concerned with a wavelet modeling technique that seeks to capture the important physical characteristics of the target scientific data. Our approach is driven by the compression, which is necessary for viable throughput, along with the end user requirements from the discovery process. Our work contrasts with existing research, which applies wavelets to range querying, change detection, and clustering problems, by working directly with a decomposition of the data. The difference in these procedures is due primarily to the nature of the data and the requirements of the scientists and engineers. Our approach directly uses the wavelet coefficients of the data to compress as well as query. We will provide some background on the problem, describe how the wavelet decomposition is used to facilitate data compression and how queries are posed on the resulting compressed model. Results of this process will be shown for several problems of interest and we will end with some observations and conclusions about this research.
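
    The compress-then-query idea can be illustrated with the PyWavelets package: threshold the decomposition so that only the largest coefficients survive, then answer a range-average query from the reconstruction. This is a toy stand-in for the paper's technique; the wavelet choice, decomposition level, and retention fraction are arbitrary.

    # Toy wavelet compression + approximate ad hoc query using PyWavelets.
    import numpy as np
    import pywt

    data = np.sin(np.linspace(0, 20, 4096)) + 0.05 * np.random.randn(4096)

    coeffs = pywt.wavedec(data, "db4", level=6)
    flat, slices = pywt.coeffs_to_array(coeffs)

    keep = 0.02                                   # keep top 2% of coefficients
    thresh = np.quantile(np.abs(flat), 1 - keep)
    flat_c = np.where(np.abs(flat) >= thresh, flat, 0.0)

    approx = pywt.waverec(pywt.array_to_coeffs(flat_c, slices, "wavedec"), "db4")

    lo, hi = 1000, 2000                           # range-average query
    print("exact :", data[lo:hi].mean())
    print("approx:", approx[lo:hi].mean())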

  12. A scalable messaging system for accelerating discovery from large scale scientific simulations

    SciTech Connect

    Jin, Tong; Zhang, Fan; Parashar, Manish; Klasky, Scott A; Podhorszki, Norbert; Abbasi, Hasan

    2012-01-01

    Emerging scientific and engineering simulations running at scale on leadership-class High End Computing (HEC) environments are producing large volumes of data, which have to be transported and analyzed before any insights can result from these simulations. The complexity and cost (in terms of time and energy) associated with managing and analyzing this data have become significant challenges, and are limiting the impact of these simulations. Recently, data-staging approaches along with in-situ and in-transit analytics have been proposed to address these challenges by offloading I/O and/or moving data processing closer to the data. However, scientists continue to be overwhelmed by the large data volumes and data rates. In this paper we address this latter challenge. Specifically, we propose a highly scalable and low-overhead associative messaging framework that runs on the data staging resources within the HEC platform, and builds on the staging-based online in-situ/in-transit analytics to provide publish/subscribe/notification-type messaging patterns to the scientist. Rather than having to ingest and inspect the data volumes, this messaging system allows scientists to (1) dynamically subscribe to data events of interest, e.g., a simple data value or a complex function or simple reduction (max()/min()/avg()) of the data values in a certain region of the application domain is greater/less than a threshold value, or certain spatial/temporal data features or data patterns are detected; (2) define customized in-situ/in-transit actions that are triggered based on the events, such as data visualization or transformation; and (3) get notified when these events occur. The key contribution of this paper is a design and implementation that can support such a messaging abstraction at scale on high-end computing (HEC) systems with minimal overheads. We have implemented and deployed the messaging system on the Jaguar Cray XK6 machines at Oak Ridge National Laboratory and the
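
    The messaging abstraction itself reduces to a publish/subscribe loop over reductions of staged data. The sketch below is a single-process caricature of that pattern, not the paper's scalable in-transit implementation; the EventBus class and its methods are invented for illustration.

    # Caricature of predicate-based publish/subscribe over per-step
    # reductions (min/max/avg) of staged simulation data.
    class EventBus:
        def __init__(self):
            self.subs = []

        def subscribe(self, predicate, action):
            # predicate: region stats -> bool; action: callback on match
            self.subs.append((predicate, action))

        def publish(self, step, stats):
            for predicate, action in self.subs:
                if predicate(stats):
                    action(step, stats)

    bus = EventBus()
    bus.subscribe(lambda s: s["max"] > 0.9,
                  lambda step, s: print(f"step {step}: max {s['max']:.3f} exceeds threshold"))

    # a staging node would compute these reductions per region and publish
    for step, peak in enumerate([0.2, 0.5, 0.95]):
        bus.publish(step, {"min": 0.0, "max": peak, "avg": peak / 2})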

  13. A Novel Technique for Endovascular Removal of Large Volume Right Atrial Tumor Thrombus.

    PubMed

    Nickel, Barbara; McClure, Timothy; Moriarty, John

    2015-08-01

    Venous thromboembolic disease is a significant cause of morbidity and mortality, particularly in the setting of large volume pulmonary embolism. Thrombolytic therapy has been shown to be a successful treatment modality; however, its use is somewhat limited due to the risk of hemorrhage and the potential for distal embolization in the setting of large mobile thrombi. In patients in whom thrombolysis is either contraindicated or unsuccessful, and conventional therapies prove inadequate, surgical thrombectomy may be considered. We present a case of percutaneous endovascular extraction of a large mobile mass extending from the inferior vena cava into the right atrium using the AngioVac device, a venovenous bypass system designed for high-volume aspiration of undesired endovascular material. Standard endovascular methods for removal of cancer-associated thrombus, such as catheter-directed lysis, maceration, and exclusion, may prove inadequate in the setting of underlying tumor thrombus. Where conventional endovascular methods either fail or are unsuitable, endovascular thrombectomy with the AngioVac device may be a useful and safe minimally invasive alternative to open resection.

  14. A Novel Technique for Endovascular Removal of Large Volume Right Atrial Tumor Thrombus

    SciTech Connect

    Nickel, Barbara; McClure, Timothy Moriarty, John

    2015-08-15

    Venous thromboembolic disease is a significant cause of morbidity and mortality, particularly in the setting of large volume pulmonary embolism. Thrombolytic therapy has been shown to be a successful treatment modality; however, its use is somewhat limited due to the risk of hemorrhage and the potential for distal embolization in the setting of large mobile thrombi. In patients in whom thrombolysis is either contraindicated or unsuccessful, and conventional therapies prove inadequate, surgical thrombectomy may be considered. We present a case of percutaneous endovascular extraction of a large mobile mass extending from the inferior vena cava into the right atrium using the AngioVac device, a venovenous bypass system designed for high-volume aspiration of undesired endovascular material. Standard endovascular methods for removal of cancer-associated thrombus, such as catheter-directed lysis, maceration, and exclusion, may prove inadequate in the setting of underlying tumor thrombus. Where conventional endovascular methods either fail or are unsuitable, endovascular thrombectomy with the AngioVac device may be a useful and safe minimally invasive alternative to open resection.

  15. Numerical aerodynamic simulation facility preliminary study, volume 1

    NASA Technical Reports Server (NTRS)

    1977-01-01

    A technology forecast was established for the 1980-1985 time frame and the appropriateness of various logic and memory technologies for the design of the numerical aerodynamic simulation facility was assessed. Flow models and their characteristics were analyzed and matched against candidate processor architecture. Metrics were established for the total facility, and housing and support requirements of the facility were identified. An overview of the system is presented, with emphasis on the hardware of the Navier-Stokes solver, which is the key element of the system. Software elements of the system are also discussed.

  16. Large-eddy simulation of particle-laden atmospheric boundary layer

    NASA Astrophysics Data System (ADS)

    Ilie, Marcel; Smith, Stefan Llewellyn

    2008-11-01

    Pollen dispersion in the atmospheric boundary layer (ABL) is numerically investigated using a hybrid large-eddy simulation (LES) Lagrangian approach. Interest in the prediction of pollen dispersion stems from two concerns: the allergens in the pollen grains, and the increasing genetic manipulation of plants, which leads to the problem of cross-pollination. An efficient Eulerian-Lagrangian particle dispersion algorithm for the prediction of pollen dispersion in the atmospheric boundary layer is outlined. The volume fraction of the dispersed phase is assumed to be small enough that particle-particle collisions are negligible and the properties of the carrier flow are not modified; only the effect of turbulence on particle motion has to be taken into account (one-way coupling). Hence the continuous phase can be treated separately from the particulate phase. The continuous phase is determined by LES in the Eulerian frame of reference whereas the dispersed phase is simulated in a Lagrangian frame of reference. Numerical investigations are conducted for the convective, neutral and stable boundary layer as well as different topographies. The results of the present study indicate that particles with small diameters follow the flow streamlines, behaving as tracers, while particles with large diameters tend to follow trajectories which are independent of the flow streamlines. Particles of ellipsoidal shape travel faster than those of spherical shape.
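
    The one-way-coupled Lagrangian step described above can be sketched with Stokes drag and gravitational settling toward a locally interpolated carrier velocity. Here the LES field is mocked by a fixed function, the particle response time is an arbitrary stand-in, and subgrid-scale effects on the particles are omitted.

    # Minimal one-way-coupled Lagrangian particle step with Stokes drag.
    import numpy as np

    g = 9.81

    def fluid_velocity(x, t):
        # stand-in for the LES-resolved velocity interpolated to x
        return np.array([1.0 + 0.1 * np.sin(x[2]), 0.0, 0.0])

    def advance(xp, vp, t, dt, tau_p):
        # dv/dt = (u_f - v)/tau_p - g e_z   (Stokes drag, one-way coupling)
        uf = fluid_velocity(xp, t)
        vp = vp + dt * ((uf - vp) / tau_p - np.array([0.0, 0.0, g]))
        return xp + dt * vp, vp

    xp, vp = np.zeros(3), np.zeros(3)
    tau_p = 2e-3    # particle response time [s]; small tau_p -> tracer-like
    for k in range(1000):
        xp, vp = advance(xp, vp, k * 1e-4, 1e-4, tau_p)
    print(xp)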

  17. Characteristics of the mixing volume model with the interactions among spatially distributed particles for Lagrangian simulations of turbulent mixing

    NASA Astrophysics Data System (ADS)

    Watanabe, Tomoaki; Nagata, Koji

    2016-11-01

    The mixing volume model (MVM), which is a mixing model for molecular diffusion in Lagrangian simulations of turbulent mixing problems, is proposed based on the interactions among spatially distributed particles in a finite volume. The mixing timescale in the MVM is derived by comparison between the model and the subgrid scale scalar variance equation. An a priori test of the MVM is conducted based on direct numerical simulations of planar jets. The MVM is shown to predict well the mean effects of the molecular diffusion under various conditions. However, a predicted value of the molecular diffusion term is positively correlated to the exact value in the DNS only when the number of the mixing particles is larger than two. Furthermore, the MVM is tested in the hybrid implicit large-eddy-simulation/Lagrangian-particle-simulation (ILES/LPS). The ILES/LPS with the present mixing model predicts well the decay of the scalar variance in planar jets. This work was supported by JSPS KAKENHI Nos. 25289030 and 16K18013. The numerical simulations presented in this manuscript were carried out on the high performance computing system (NEC SX-ACE) in the Japan Agency for Marine-Earth Science and Technology.
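
    A simplified caricature of mixing among spatially distributed particles is sketched below: particles are binned into cubic mixing volumes and relaxed toward the in-volume mean (IEM-like). The MVM's specific mixing timescale, derived from the subgrid scalar variance equation, is replaced by a fixed tau_m here, and all parameters are illustrative.

    # IEM-like relaxation of particle scalars toward the mean of their
    # mixing volume; the scalar variance decays as mixing proceeds.
    import numpy as np

    rng = np.random.default_rng(0)
    phi = rng.random(1000)                 # particle scalar values
    pos = rng.random((1000, 3))            # particle positions in unit box

    def mix_in_volumes(phi, pos, n_cells=5, tau_m=0.1, dt=0.01):
        # bin particles into cubic mixing volumes, relax each group to its mean
        idx = np.minimum((pos * n_cells).astype(int), n_cells - 1)
        cell = idx[:, 0] * n_cells**2 + idx[:, 1] * n_cells + idx[:, 2]
        for c in np.unique(cell):
            members = cell == c
            if members.sum() > 1:          # mixing needs at least two particles
                mean = phi[members].mean()
                phi[members] += (dt / tau_m) * (mean - phi[members])
        return phi

    print(phi.var(), mix_in_volumes(phi, pos).var())  # variance decays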

  18. Earth resources mission performance studies. Volume 2: Simulation results

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Simulations were made at three month intervals to investigate the EOS mission performance over the four seasons of the year. The basic objectives of the study were: (1) to evaluate the ability of an EOS type system to meet a representative set of specific collection requirements, and (2) to understand the capabilities and limitations of the EOS that influence the system's ability to satisfy certain collection objectives. Although the results were obtained from a consideration of a two sensor EOS system, the analysis can be applied to any remote sensing system having similar optical and operational characteristics. While the category related results are applicable only to the specified requirement configuration, the results relating to general capability and limitations of the sensors can be applied in extrapolating to other U.S. based EOS collection requirements. The TRW general purpose mission simulator and analytic techniques discussed in this report can be applied to a wide range of collection and planning problems of earth orbiting imaging systems.

  19. Large-eddy simulation of bubble-driven plume in stably stratified flow.

    NASA Astrophysics Data System (ADS)

    Yang, Di; Chen, Bicheng; Socolofsky, Scott; Chamecki, Marcelo; Meneveau, Charles

    2015-11-01

    The interaction between a bubble-driven plume and a stratified water column plays a vital role in many environmental and engineering applications. As the bubbles are released from a localized source, they induce a positive buoyancy flux that generates an upward plume. As the plume rises, it entrains ambient water, and when the plume rises to a higher elevation where the stratification-induced negative buoyancy is sufficient, a considerable fraction of the entrained fluid detrains, or peels, to form a downward outer plume and a lateral intrusion layer. In the case of multiphase plumes, the intrusion layer may also trap weakly buoyant particles (e.g., oil droplets in the case of a subsea accidental blowout). In this study, the complex plume dynamics is studied using large-eddy simulation (LES), with the flow field simulated by a hybrid pseudospectral/finite-difference scheme, and the bubble and dye concentration fields simulated by a finite-volume scheme. The spatial and temporal characteristics of the buoyant plume are studied, with a focus on the effects of different bubble buoyancy levels. The LES data provide useful mean plume statistics for evaluating the accuracy of 1-D engineering models for entrainment and peeling fluxes. Based on the insights learned from the LES, a new continuous peeling model is developed and tested. Study supported by the Gulf of Mexico Research Initiative (GoMRI).

  20. Cryogenic loading of large volume presses for high-pressure experimentation and synthesis of novel materials

    SciTech Connect

    Lipp, M J; Evans, W J; Yoo, C S

    2005-01-21

    We present an efficient, easily implemented method for loading cryogenic fluids in a large volume press. We specifically apply this method to the high-pressure synthesis of an extended solid derived from CO using a Paris-Edinburgh cell. This method employs cryogenic cooling of Bridgman-type WC anvils well insulated from other press components, and condensation of the load gas within a brass annulus surrounding the gasket between the Bridgman anvils. We demonstrate the viability of the described approach by synthesizing macroscopic amounts (several milligrams) of polymeric CO-derived material, which were recovered to ambient conditions after compression of pure CO to 5 GPa or above.

  1. Incarceration of umbilical hernia: a rare complication of large volume paracentesis.

    PubMed

    Khodarahmi, Iman; Shahid, Muhammad Usman; Contractor, Sohail

    2015-09-01

    We present two cases of umbilical hernia incarceration following large volume paracentesis (LVP) in patients with cirrhotic ascites. Both patients became symptomatic within 48 hours after the LVP. Although rare, given the significantly higher mortality rate of cirrhotic patients undergoing emergent herniorrhaphy, this complication of LVP is potentially serious. Therefore, it is recommended that patients be examined closely for the presence of umbilical hernias before removal of ascitic fluid, and an attempt should be made at external reduction of easily reducible hernias, if a hernia is present.

  2. Capillary gas chromatographic analysis of nerve agents using large volume injections.

    PubMed

    Degenhardt-Langelaan, C E; Kientz, C E

    1996-02-02

    The use of large volume injections has been studied for the verification of intact organophosphorus chemical warfare agents in water samples. As the use of ethyl acetate caused severe detection problems, new potential solvents were evaluated. With the developed procedure, the nerve agents sarin, tabun, soman, DFP and VX can be determined in freshly prepared water samples at ppt levels. Except for tabun, all agents added to the water samples were still present after 8 days at 20-60% levels, provided the pH of the water sample is adjusted to ca. 5 shortly after sampling and adjusted to pH 7 for analysis.

  3. Incarceration of umbilical hernia: a rare complication of large volume paracentesis

    PubMed Central

    Khodarahmi, Iman; Shahid, Muhammad Usman; Contractor, Sohail

    2015-01-01

    We present two cases of umbilical hernia incarceration following large volume paracentesis (LVP) in patients with cirrhotic ascites. Both patients became symptomatic within 48 hours after the LVP. Although rare, given the significantly higher mortality rate of cirrhotic patients undergoing emergent herniorrhaphy, this complication of LVP is potentially serious. Therefore, it is recommended that patients be examined closely for the presence of umbilical hernias before removal of ascitic fluid, and an attempt should be made at external reduction of easily reducible hernias, if a hernia is present. PMID:26629305

  4. Large Volume, Optical and Opto-Mechanical Metrology Techniques for ISIM on JWST

    NASA Technical Reports Server (NTRS)

    Hadjimichael, Theo

    2015-01-01

    The final, flight build of the Integrated Science Instrument Module (ISIM) element of the James Webb Space Telescope is the culmination of years of work across many disciplines and partners. This paper covers the large volume, ambient, optical and opto-mechanical metrology techniques used to verify the mechanical integration of the flight instruments in ISIM, including optical pupil alignment. We present an overview of ISIM's integration and test program, which is in progress, with an emphasis on alignment and optical performance verification. This work is performed at NASA Goddard Space Flight Center, in close collaboration with the European Space Agency, the Canadian Space Agency, and the Mid-Infrared Instrument European Consortium.

  5. GMP Cryopreservation of Large Volumes of Cells for Regenerative Medicine: Active Control of the Freezing Process

    PubMed Central

    Massie, Isobel; Selden, Clare; Hodgson, Humphrey; Gibbons, Stephanie; Morris, G. John

    2014-01-01

    Cryopreservation protocols are increasingly required in regenerative medicine applications but must deliver functional products at clinical scale and comply with Good Manufacturing Practice (GMP). While GMP cryopreservation is achievable on a small scale using a Stirling cryocooler-based controlled rate freezer (CRF) (EF600), successful large-scale GMP cryopreservation is more challenging due to heat transfer issues and control of ice nucleation, both complex events that impact success. We have developed a large-scale cryocooler-based CRF (VIA Freeze) that can process larger volumes and have evaluated it using alginate-encapsulated liver cell (HepG2) spheroids (ELS). It is anticipated that ELS will comprise the cellular component of a bioartificial liver and will be required in volumes of ∼2 L for clinical use. Sample temperatures and Stirling cryocooler power consumption were recorded throughout the cooling runs for both small (500 μL) and large (200 mL) volume samples. ELS recoveries were assessed using viability (FDA/PI staining with image analysis), cell number (nuclei count), and function (protein secretion), along with cryoscanning electron microscopy and freeze substitution techniques to identify possible injury mechanisms. Slow cooling profiles were successfully applied to samples in both the EF600 and the VIA Freeze, and a number of cooling and warming profiles were evaluated. An optimized cooling protocol with a nonlinear cooling profile from ice nucleation to −60°C was implemented in both the EF600 and VIA Freeze. In the VIA Freeze the nucleation of ice is detected by the control software, allowing both noninvasive detection of the nucleation event for quality control purposes and the potential to modify the cooling profile following ice nucleation in an active manner. When processing 200 mL of ELS in the VIA Freeze—viabilities at 93.4%±7.4%, viable cell numbers at 14.3±1.7 million nuclei/mL alginate, and protein secretion at 10.5±1.7

  6. Alginate Hydrogel Microencapsulation Inhibits Devitrification and Enables Large-Volume Low-CPA Cell Vitrification.

    PubMed

    Huang, Haishui; Choi, Jung Kyu; Rao, Wei; Zhao, Shuting; Agarwal, Pranay; Zhao, Gang; He, Xiaoming

    2015-11-25

    Cryopreservation of stem cells is important to meet their ever-increasing demand by burgeoning cell-based medicine. The conventional slow freezing for stem cell cryopreservation suffers from inevitable cell injury associated with ice formation, and the vitrification (i.e., no visible ice formation) approach is emerging as a new strategy for cell cryopreservation. A major challenge to cell vitrification is intracellular ice formation (IIF, a lethal event to cells) induced by devitrification (i.e., formation of visible ice in a previously vitrified solution) during warming of the vitrified cells from cryogenic temperature back to super-zero temperatures. Consequently, high and toxic concentrations of penetrating cryoprotectants (i.e., high CPAs, up to ~8 M) and/or limited sample volumes (up to ~2.5 μl) have been used to minimize IIF during vitrification. We reveal that alginate hydrogel microencapsulation can effectively inhibit devitrification during warming. Our data show that if ice formation is minimized during cooling, IIF is negligible in alginate hydrogel-microencapsulated cells during the entire cooling and warming procedure of vitrification. This enables vitrification of pluripotent and multipotent stem cells with an up to ~4 times lower concentration of penetrating CPAs (down to 2 M, low CPA) in an up to ~100 times larger sample volume (up to ~250 μl, large volume).

  7. A scale down process for the development of large volume cryopreservation☆

    PubMed Central

    Kilbride, Peter; Morris, G. John; Milne, Stuart; Fuller, Barry; Skepper, Jeremy; Selden, Clare

    2014-01-01

    The process of ice formation and propagation during cryopreservation impacts on the post-thaw outcome for a sample. Two processes, either network solidification or progressive solidification, can dominate the water–ice phase transition with network solidification typically present in small sample cryo-straws or cryo-vials. Progressive solidification is more often observed in larger volumes or environmental freezing. These different ice phase progressions could have a significant impact on cryopreservation in scale-up and larger volume cryo-banking protocols necessitating their study when considering cell therapy applications. This study determines the impact of these different processes on alginate encapsulated liver spheroids (ELS) as a model system during cryopreservation, and develops a method to replicate these differences in an economical manner. It was found in the current studies that progressive solidification resulted in fewer, but proportionally more viable cells 24 h post-thaw compared with network solidification. The differences between the groups diminished at later time points post-thaw as cells recovered the ability to undertake cell division, with no statistically significant differences seen by either 48 h or 72 h in recovery cultures. Thus progressive solidification itself should not prove a significant hurdle in the search for successful cryopreservation in large volumes. However, some small but significant differences were noted in total viable cell recoveries and functional assessments between samples cooled with either progressive or network solidification, and these require further investigation. PMID:25219980

  8. Alginate Hydrogel Microencapsulation Inhibits Devitrification and Enables Large-Volume Low-CPA Cell Vitrification

    PubMed Central

    Huang, Haishui; Choi, Jung Kyu; Rao, Wei; Zhao, Shuting; Agarwal, Pranay; Zhao, Gang

    2015-01-01

    Cryopreservation of stem cells is important to meet their ever-increasing demand by burgeoning cell-based medicine. The conventional slow freezing for stem cell cryopreservation suffers from inevitable cell injury associated with ice formation, and the vitrification (i.e., no visible ice formation) approach is emerging as a new strategy for cell cryopreservation. A major challenge to cell vitrification is intracellular ice formation (IIF, a lethal event to cells) induced by devitrification (i.e., formation of visible ice in a previously vitrified solution) during warming of the vitrified cells from cryogenic temperature back to super-zero temperatures. Consequently, high and toxic concentrations of penetrating cryoprotectants (i.e., high CPAs, up to ~8 M) and/or limited sample volumes (up to ~2.5 μl) have been used to minimize IIF during vitrification. We reveal that alginate hydrogel microencapsulation can effectively inhibit devitrification during warming. Our data show that if ice formation is minimized during cooling, IIF is negligible in alginate hydrogel-microencapsulated cells during the entire cooling and warming procedure of vitrification. This enables vitrification of pluripotent and multipotent stem cells with an up to ~4 times lower concentration of penetrating CPAs (down to 2 M, low CPA) in an up to ~100 times larger sample volume (up to ~250 μl, large volume). PMID:26640426

  9. Multi-Rate Digital Control Systems with Simulation Applications. Volume II. Computer Algorithms

    DTIC Science & Technology

    1980-09-01

    Multi-Rate Digital Control Systems with Simulation Applications, Volume II: Computer Algorithms (report AFWAL-TR-80-3101, Volume II; OCR-garbled cover-page text omitted). ...additional options. The analytical basis for the computer algorithms is discussed in Ref. 12. However, to provide a complete description of the program, some

  10. Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Givi, P.; Frankel, S. H.; Adumitroaie, V.; Sabini, G.; Madnia, C. K.

    1993-01-01

    The primary objective of this research is to extend current capabilities of Large Eddy Simulations (LES) and Direct Numerical Simulations (DNS) for the computational analyses of high speed reacting flows. Our efforts in the first two years of this research have been concentrated on a priori investigations of single-point Probability Density Function (PDF) methods for providing subgrid closures in reacting turbulent flows. In the efforts initiated in the third year, our primary focus has been on performing actual LES by means of PDF methods. The approach is based on assumed PDF methods and we have performed extensive analysis of turbulent reacting flows by means of LES. This includes simulations of both three-dimensional (3D) isotropic compressible flows and two-dimensional reacting planar mixing layers. In addition to these LES analyses, some work is in progress to assess the extent of validity of our assumed PDF methods. This assessment is done by making detailed comparisons with recent laboratory data in predicting the rate of reactant conversion in parallel reacting shear flows. This report provides a summary of our achievements for the first six months of the third year of this program.

  11. Hybrid Parallelism for Volume Rendering on Large, Multi-core Systems

    SciTech Connect

    Howison, Mark; Bethel, E. Wes; Childs, Hank

    2010-06-14

    This work studies the performance and scalability characteristics of "hybrid" parallel programming and execution as applied to raycasting volume rendering -- a staple visualization algorithm -- on a large, multi-core platform. Historically, the Message Passing Interface (MPI) has become the de-facto standard for parallel programming and execution on modern parallel systems. As the computing industry trends towards multi-core processors, with four- and six-core chips common today and 128-core chips coming soon, we wish to better understand how algorithmic and parallel programming choices impact performance and scalability on large, distributed-memory multi-core systems. Our findings indicate that the hybrid-parallel implementation, at levels of concurrency ranging from 1,728 to 216,000, performs better, uses a smaller absolute memory footprint, and consumes less communication bandwidth than the traditional, MPI-only implementation.

  12. MPI-hybrid Parallelism for Volume Rendering on Large, Multi-core Systems

    SciTech Connect

    Howison, Mark; Bethel, E. Wes; Childs, Hank

    2010-03-20

    This work studies the performance and scalability characteristics of "hybrid" parallel programming and execution as applied to raycasting volume rendering -- a staple visualization algorithm -- on a large, multi-core platform. Historically, the Message Passing Interface (MPI) has become the de-facto standard for parallel programming and execution on modern parallel systems. As the computing industry trends towards multi-core processors, with four- and six-core chips common today and 128-core chips coming soon, we wish to better understand how algorithmic and parallel programming choices impact performance and scalability on large, distributed-memory multi-core systems. Our findings indicate that the hybrid-parallel implementation, at levels of concurrency ranging from 1,728 to 216,000, performs better, uses a smaller absolute memory footprint, and consumes less communication bandwidth than the traditional, MPI-only implementation.

  13. Effectiveness of nuclear interceptors against large single volume chemical/biological warheads

    SciTech Connect

    Mendelsohn, E.

    1993-01-01

    In a continuing series of calculations which explore potential nuclear defenses against chemical and/or bacteriological warheads, the author has now completed a study postulating a large canister geometry. Instead of looking at a collection of smaller submunitions as done previously, he now considers one single large volume of Sarin (a nerve agent). This is a more stressing case for nuclear defense, in that neutrons must traverse a long path in the hydrogenous solution if they are to deposit their energy in the region of Sarin farthest from the source. The author presents results from Monte Carlo calculations which indicate that differences in energy deposition between Sarin regions close to the source and those farthest from the source have increased very significantly.

  14. Hybrid Parallelism for Volume Rendering on Large, Multi- and Many-core Systems

    SciTech Connect

    Howison, Mark; Bethel, E. Wes; Childs, Hank

    2011-01-01

    With the computing industry trending towards multi- and many-core processors, we study how a standard visualization algorithm, ray-casting volume rendering, can benefit from a hybrid parallelism approach. Hybrid parallelism provides the best of both worlds: using distributed-memory parallelism across a large number of nodes increases available FLOPs and memory, while exploiting shared-memory parallelism among the cores within each node ensures that each node performs its portion of the larger calculation as efficiently as possible. We demonstrate results from weak and strong scaling studies, at levels of concurrency ranging up to 216,000, and with datasets as large as 12.2 trillion cells. The greatest benefit from hybrid parallelism lies in the communication portion of the algorithm, the dominant cost at higher levels of concurrency. We show that reducing the number of participants with a hybrid approach significantly improves performance.
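
    The essence of the hybrid approach can be sketched compactly. The toy code below is illustrative only and not the authors' implementation: it assumes mpi4py and numpy, an orthographic view along z, and one z-slab of the volume per MPI rank, so that front-to-back compositing order coincides with rank order; Python threads stand in for the shared-memory workers a production code would implement with OpenMP or pthreads.

        # Toy hybrid-parallel raycaster: MPI ranks own z-slabs of the volume,
        # threads within each rank share the image-row work. Illustrative only.
        from concurrent.futures import ThreadPoolExecutor

        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        NX = NY = 64          # image resolution
        NZ_LOCAL = 32         # slab thickness owned by this rank
        rng = np.random.default_rng(seed=rank)
        slab = rng.random((NX, NY, NZ_LOCAL)).astype(np.float32)  # stand-in data

        def render_rows(rows):
            """Ray-march straight down z through the local slab."""
            rgba = np.zeros((len(rows), NY, 2), np.float32)  # [luminance, alpha]
            for i, r in enumerate(rows):
                for c in range(NY):
                    color, alpha = 0.0, 0.0
                    for k in range(NZ_LOCAL):       # front-to-back within slab
                        s = slab[r, c, k]
                        a = min(1.0, 0.05 * s)      # toy transfer function
                        color += (1.0 - alpha) * a * s
                        alpha += (1.0 - alpha) * a
                    rgba[i, c] = color, alpha
            return rows, rgba

        # Shared-memory parallelism: threads split the rows of this rank's tile.
        local = np.zeros((NX, NY, 2), np.float32)
        with ThreadPoolExecutor(max_workers=4) as pool:
            chunks = np.array_split(np.arange(NX), 4)
            for rows, rgba in pool.map(render_rows, chunks):
                local[rows] = rgba

        # Distributed-memory parallelism: gather partials, composite in z order.
        partials = comm.gather(local, root=0)
        if rank == 0:
            color = np.zeros((NX, NY), np.float32)
            alpha = np.zeros((NX, NY), np.float32)
            for part in partials:               # rank order == front-to-back
                color += (1.0 - alpha) * part[..., 0]
                alpha += (1.0 - alpha) * part[..., 1]
            print("composited image, mean luminance:", float(color.mean()))

    Because the "over" operator is associative, the slab partials can be composited in rank order here; a production renderer would use an ordered compositing algorithm for general camera angles.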

  15. Hybrid Parallelism for Volume Rendering on Large, Multi-core Systems

    NASA Astrophysics Data System (ADS)

    Howison, M.; Bethel, E. W.; Childs, H.

    2011-10-01

    This work studies the performance and scalability characteristics of "hybrid" parallel programming and execution as applied to raycasting volume rendering - a staple visualization algorithm - on a large, multi-core platform. Historically, the Message Passing Interface (MPI) has become the de-facto standard for parallel programming and execution on modern parallel systems. As the computing industry trends towards multi-core processors, with four- and six-core chips common today, as well as processors capable of running hundreds of concurrent threads (GPUs), we wish to better understand how algorithmic and parallel programming choices impact performance and scalability on large, distributed-memory multi-core systems. Our findings indicate that the hybrid-parallel implementation, at levels of concurrency ranging from 1,728 to 216,000, performs better, uses a smaller absolute memory footprint, and consumes less communication bandwidth than the traditional, MPI-only implementation.

  16. Large volume leukapheresis maximizes the progenitor cell yield for allogeneic peripheral blood progenitor donation.

    PubMed

    Kobbe, G; Soehngen, D; Heyll, A; Fischer, J; Thiele, K P; Aul, C; Wernet, P

    1997-04-01

    We have investigated the efficiency and safety of large volume leukapheresis (LVL) for the collection of granulocyte colony-stimulating factor (G-CSF)-mobilized peripheral blood progenitor cells (PBPCs) from healthy donors. In six apheresis sessions in four healthy individuals on a COBE-BCT Spectra cell separator (median processed volume 3.5 X total blood volume, TBV, range 3.3-4.4 X TBV), harvested cells were collected sequentially into three single bags. The collection bags were changed after processing 33%, 66%, and 100% of the prospective apheresis volume, allowing analysis of PBPCs collected at different periods during one harvest. Mononuclear cells (MNCs), CD34+ cells, CD34+ subsets, and lymphocyte subsets were determined in each bag. Substantially more PBPCs were harvested than were in the circulation before G-CSF administration preceding LVL (median 171%, range 69-267%), reflecting progenitor release during the procedure. In donors 1 and 3, the CD34+ cell yields decreased in the third bag to 53% and 42% of that collected in the first bag, whereas the progenitor cell yields in donors 2 and 4 were stable or rose during the procedure, achieving in the third bag 157% and 105% of the number of CD34+ cells collected in the first bag. Minor changes were found in the subsets of CD34+ cells, lymphocytes, and monocytes collected at different periods during a single harvest. LVL was well tolerated. Reversible thrombocytopenia developed in all cases. No late effects attributable to LVL or G-CSF were found in the 4 donors and 16 other healthy individuals who have undergone LVL in our institution. We conclude that LVL is safe and maximizes PBPC yields for allogeneic transplantation.

  17. High-Accuracy Near-Surface Large-Eddy Simulation with Planar Topography

    DTIC Science & Technology

    2015-08-03

    Large-eddy simulation (LES) has been plagued by an inability to predict the law-of-the-wall (LOTW) in mean velocity in the ... (abstract truncated; DTIC report-form residue and duplicated title text omitted).

  18. Large liquid rocket engine transient performance simulation system

    NASA Technical Reports Server (NTRS)

    Mason, J. R.; Southwick, R. D.

    1991-01-01

    A simulation system, ROCETS, was designed and developed to allow cost-effective computer predictions of liquid rocket engine transient performance. The system allows a user to generate a simulation of any rocket engine configuration using component modules stored in a library through high-level input commands. The system library currently contains 24 component modules, 57 sub-modules and maps, and 33 system routines and utilities. FORTRAN models from other sources can be operated in the system upon inclusion of interface information on comment cards. Operation of the simulation is simplified for the user by run, execution, and output processors. The simulation system makes available steady-state trim balance, transient operation, and linear partial generation. The system utilizes a modern equation solver for efficient operation of the simulations. Transient integration methods include integral and differential forms for the trapezoidal, first order Gear, and second order Gear corrector equations. A detailed technology test bed engine (TTBE) model was generated to be used as the acceptance test of the simulation system. The general level of model detail was that reflected in the Space Shuttle Main Engine DTM. The model successfully obtained steady-state balance in main stage operation and simulated throttle transients, including engine starts and shutdown. A NASA FORTRAN control model was obtained, ROCETS interface installed in comment cards, and operated with the TTBE model in closed-loop transient mode.
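
    The three corrector equations named above are standard; the sketch below (generic Python, not ROCETS code) shows them applied to a scalar test transient, with a fixed-point corrector iteration of the kind an equation solver would perform.

        # Corrector formulas named in the ROCETS abstract -- trapezoidal,
        # first-order Gear (backward Euler) and second-order Gear (BDF2) --
        # applied to dy/dt = f(t, y). A generic sketch, not ROCETS code.
        def step(f, t, h, y, y_prev=None, method="trap", iters=20):
            """Advance one step: explicit Euler predictor + corrector sweeps."""
            y_new = y + h * f(t, y)                 # predictor
            for _ in range(iters):                  # fixed-point corrector
                fe = f(t + h, y_new)
                if method == "trap":                # trapezoidal rule
                    y_new = y + 0.5 * h * (f(t, y) + fe)
                elif method == "gear1":             # 1st-order Gear (backward Euler)
                    y_new = y + h * fe
                elif method == "gear2":             # 2nd-order Gear (BDF2)
                    y_new = (4.0 * y - y_prev) / 3.0 + (2.0 / 3.0) * h * fe
            return y_new

        # Decaying transient dy/dt = -5y, y(0) = 1, integrated to t = 2.0.
        f = lambda t, y: -5.0 * y
        h = 0.05
        y_trap = y_g1 = 1.0
        y_prev, y_g2 = 1.0, step(f, 0.0, h, 1.0, method="gear1")  # bootstrap BDF2
        for n in range(40):
            y_trap = step(f, n * h, h, y_trap, method="trap")
            y_g1 = step(f, n * h, h, y_g1, method="gear1")
        for n in range(1, 40):
            y_prev, y_g2 = y_g2, step(f, n * h, h, y_g2, y_prev=y_prev, method="gear2")
        print(y_trap, y_g1, y_g2)   # all approach exp(-10) ~= 4.5e-5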

  19. Mechanically Cooled Large-Volume Germanium Detector Systems for Nuclear Explosion Monitoring

    SciTech Connect

    Hull, Ethan L.; Pehl, Richard H.; Lathrop, James R.; Martin, Gregory N.; Mashburn, R. B.; Miley, Harry S.; Aalseth, Craig E.; Hossbach, Todd W.; Bowyer, Ted W.

    2006-09-21

    Compact, maintenance-free mechanical cooling systems are being developed to operate large volume (~570 cm3, ~3 kg, 140% or larger) germanium detectors for field applications. We are using a new generation of Stirling-cycle mechanical coolers for operating the very largest volume germanium detectors with absolutely no maintenance or liquid nitrogen requirements. The user will be able to leave these systems unplugged on the shelf until needed. The flip of a switch will bring a system to life in ~1 hour for measurements. The maintenance-free operating lifetime of these detector systems will exceed five years. These features are necessary for remote long-duration liquid-nitrogen-free deployment of large-volume germanium gamma-ray detector systems for Nuclear Explosion Monitoring (NEM). The Radionuclide Aerosol Sampler/Analyzer (RASA) will greatly benefit from the availability of such detectors by eliminating the need for liquid nitrogen at RASA sites while still allowing the very largest available germanium detectors to be utilized. These mechanically cooled germanium detector systems being developed here will provide the largest, most sensitive detectors possible for use with the RASA. To provide such systems, the appropriate technical fundamentals are being researched. Mechanical cooling of germanium detectors has historically been a difficult endeavor. The success or failure of mechanically cooled germanium detectors stems from three main technical issues: temperature, vacuum, and vibration. These factors affect one another. There is a particularly crucial relationship between vacuum and temperature. These factors will be experimentally studied both separately and together to ensure a solid understanding of the physical limitations each factor places on a practical mechanically cooled germanium detector system for field use. Using this knowledge, a series of mechanically cooled germanium detector prototype systems are being designed and fabricated. Our collaborators

  20. Geochemical correlation of three large-volume ignimbrites from the Yellowstone hotspot track, Idaho, USA

    NASA Astrophysics Data System (ADS)

    Ellis, Ben S.; Branney, M. J.; Barry, T. L.; Barfod, D.; Bindeman, I.; Wolff, J. A.; Bonnichsen, B.

    2012-01-01

    Three voluminous rhyolitic ignimbrites have been identified along the southern margin of the central Snake River Plain. As a result of wide-scale correlations, new volume estimates can be made for these deposits: ~350 km3 for the Steer Basin Tuff and Cougar Point Tuff XI, and ~1,000 km3 for Cougar Point Tuff XIII. These volumes exclude any associated regional ashfalls and correlation across to the north side of the plain, which has yet to be attempted. Each correlation was achieved using a combination of methods including field logging, whole rock and mineral chemistry, magnetic polarity, oxygen isotope signature and high-precision 40Ar/39Ar geochronology. The Steer Basin Tuff, Cougar Point Tuff XI and Cougar Point Tuff XIII have deposit characteristics typical of 'Snake River (SR)-type' volcanism: they are very dense, intensely welded and rheomorphic, unusually well sorted with scarce pumice and lithic lapilli. These features differ significantly from those of deposits from the better-known younger eruptions of Yellowstone. The ignimbrites also exhibit marked depletion in δ18O, which is known to characterise the SR-type rhyolites of the central Snake River Plain, and cumulatively represent ~1,700 km3 of low δ18O rhyolitic magma (feldspar values 2.3-2.9‰) erupted within 800,000 years. Our work reduces the total number of ignimbrites recognised in the central Snake River Plain by 6, improves the link with the ashfall record of Yellowstone hotspot volcanism and suggests that more large-volume ignimbrites await discovery through detailed correlation work amidst the vast ignimbrite record of volcanism in this bimodal large igneous province.

  1. Clinical, biochemical, and hormonal changes after a single, large-volume paracentesis in cirrhosis with ascites.

    PubMed

    Gentile, S; Angelico, M; Bologna, E; Capocaccia, L

    1989-03-01

    The use of paracentesis has recently been reproposed as a safe and effective alternative to diuretics for management of ascites. We have investigated the clinical and biochemical effects of large-volume paracentesis in 19 cirrhotics with tense ascites, and the relative changes in the hormones involved in sodium and water renal handling. Plasma renin activity (PRA), aldosterone (PA), and arginine vasopressin (AVP) levels and conventional liver and renal function tests were measured before and 1, 2, and 7 days after the paracentesis. No complications were observed, but patients regained 37% of the weight lost after 1 wk. Percent weight regained was significantly and directly correlated with PA concentration measured before the paracentesis. No changes were recorded after paracentesis in biochemical and clinical data, except for a significant drop in diastolic blood pressure. No changes in AVP levels were observed. A significant increase in PA occurred after paracentesis, with a maximum peak after 48 h. The increase in PA was not accompanied by changes in PRA, but was associated with a reduction of urinary sodium excretion. A relevant fraction of body aldosterone was confined to the ascitic fluid. We conclude that the clinical results of a large-volume paracentesis can be predicted in part on the basis of PA measurement, and that removal of ascites is followed by an increase of PA of uncertain origin and effectiveness.

  2. Specific detection of DNA using quantum dots and magnetic beads for large volume samples

    SciTech Connect

    Kim, Yeon S.; Kim, Byoung CHAN; Lee, Jin Hyung; Kim, Jungbae; Gu, Man Bock

    2006-10-01

    Here we present a sensitive DNA detection protocol using quantum dots (QDs) and magnetic beads (MBs) for large volume samples. In this study, QDs conjugated with streptavidin were used to produce fluorescent signals while magnetic beads (MBs) were used to isolate and concentrate the signals. The presence of target DNAs leads to sandwich hybridization between the functionalized QDs, the target DNAs, and the MBs. The QDs-MBs complex, bound via the target DNA, can thus be isolated and then concentrated. The binding of the QDs to the surface of the MBs was confirmed by confocal microscopy and Cd elemental analysis. It was found that the fluorescent intensity was proportional to the concentration of the target DNA, while the presence of noncomplementary DNA produced no significant fluorescent signal. In addition, low copies of target DNA (down to 0.5 pM) in large volume samples of up to 40 ml were successfully detected by using a magnet-assisted concentration protocol, which enhances the sensitivity more than 100-fold.

  3. Colloids Versus Albumin in Large Volume Paracentesis to Prevent Circulatory Dysfunction: Evidence-based Case Report.

    PubMed

    Widjaja, Felix F; Khairan, Paramita; Kamelia, Telly; Hasan, Irsan

    2016-04-01

    Large volume paracentesis may cause paracentesis induced circulatory dysfunction (PICD). Albumin is recommended to prevent this abnormality; however, albumin is expensive, and a cheaper alternative that prevents PICD would be valuable. This report aimed to compare albumin to colloids in preventing PICD. The search strategy used PubMed, Scopus, Proquest, and Academic Health Complete from EBSCO with the keywords "ascites", "albumin", "colloid", "dextran", "hydroxyethyl starch", "gelatin", and "paracentesis induced circulatory dysfunction". Articles were limited to randomized clinical trials and meta-analyses addressing the clinical question "In cirrhotic patients undergoing large volume paracentesis, are colloids as effective as albumin in preventing PICD?". We found one meta-analysis and four randomized clinical trials (RCTs). The meta-analysis showed that albumin was superior, with an odds ratio of 0.34 (0.23-0.51). Three RCTs showed the same result and one RCT showed albumin was not superior to colloids. We conclude that colloids cannot substitute for albumin to prevent PICD, but colloids still have a role in patients undergoing paracentesis of less than five liters.

  4. Broadband frequency ECR ion source concepts with large resonant plasma volumes

    SciTech Connect

    Alton, G.D.

    1995-12-31

    New techniques are proposed for enhancing the performances of ECR ion sources. The techniques are based on the use of high-power, variable-frequency, multiple-discrete-frequency, or broadband microwave radiation, derived from standard TWT technology, to effect large resonant "volume" ECR sources. The creation of a large ECR plasma "volume" permits coupling of more power into the plasma, resulting in the heating of a much larger electron population to higher energies, the effect of which is to produce higher charge state distributions and much higher intensities within a particular charge state than possible in present forms of the ECR ion source. If successful, these developments could significantly impact future accelerator designs and accelerator-based, heavy-ion-research programs by providing multiply-charged ion beams with the energies and intensities required for nuclear physics research from existing ECR ion sources. The methods described in this article can be used to retrofit any ECR ion source predicated on B-minimum plasma confinement techniques.

  5. Simulation of preburner sprays, volumes 1 and 2

    NASA Technical Reports Server (NTRS)

    Hardalupas, Y.; Whitelaw, J. H.

    1993-01-01

    The present study considered characteristics of sprays under a variety of conditions. Control of these sprays is important as the spray details can control both rocket combustion stability and efficiency. Under the present study Imperial College considered the following: (1) Measurement of the size and rate of spread of the sprays produced by single coaxial airblast nozzles with axial gaseous stream. The local size, velocity, and flux characteristics for a wide range of gas and liquid flowrates were measured, and the results were correlated with the conditions of the spray at the nozzle exit. (2) Examination of the effect of the geometry of single coaxial airblast atomizers on spray characteristics. The gas and liquid tube diameters were varied over a range of values, the liquid tube recess was varied, and the shape of the exit of the gaseous jet was varied from straight to converging. (3) Quantification of the effect of swirl in the gaseous stream on the spray characteristics produced by single coaxial airblast nozzles. (4) Quantification of the effect of reatomization by impingement of the spray on a flat disc positioned around 200 mm from the nozzle exit. This models spray impingement on the turbopump dome during the startup process of the preburner of the SSME. (5) Study of the interaction between multiple sprays without and with swirl in their gaseous stream. The spray characteristics of single nozzles were compared with that of three identical nozzles with their axes at a small distance from each other. This study simulates the sprays in the preburner of the SSME, where there are around 260 elements on the faceplate of the combustion chamber. (6) Design of an experimental facility to study the characteristics of sprays at high pressure conditions and at supercritical pressure and temperature for the gas but supercritical pressure and subcritical temperature for the liquid.

  6. Efficient Coalescent Simulation and Genealogical Analysis for Large Sample Sizes

    PubMed Central

    Kelleher, Jerome; Etheridge, Alison M; McVean, Gilean

    2016-01-01

    A central challenge in the analysis of genetic variation is to provide realistic genome simulation across millions of samples. Present day coalescent simulations do not scale well, or use approximations that fail to capture important long-range linkage properties. Analysing the results of simulations also presents a substantial challenge, as current methods to store genealogies consume a great deal of space, are slow to parse and do not take advantage of shared structure in correlated trees. We solve these problems by introducing sparse trees and coalescence records as the key units of genealogical analysis. Using these tools, exact simulation of the coalescent with recombination for chromosome-sized regions over hundreds of thousands of samples is possible, and substantially faster than present-day approximate methods. We can also analyse the results orders of magnitude more quickly than with existing methods. PMID:27145223
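
    These ideas are implemented in the authors' msprime package. The sketch below assumes msprime's classic simulate() interface; the parameter values are illustrative only.

        # Minimal sketch assuming the msprime package (which implements the
        # sparse-tree / coalescence-record approach described above) and its
        # classic simulate() interface; argument values are illustrative.
        import msprime

        ts = msprime.simulate(
            sample_size=10_000,          # far larger sample sizes are feasible
            Ne=10_000,                   # effective population size
            length=1e6,                  # 1 Mb region
            recombination_rate=1e-8,
            mutation_rate=1e-8,
            random_seed=42,
        )

        # The tree sequence stores shared structure once, so iterating over
        # the correlated genealogies along the chromosome is fast and cheap.
        print("trees:", ts.num_trees, "mutations:", ts.num_mutations)
        total = sum(tree.total_branch_length for tree in ts.trees())
        print("mean total branch length:", total / ts.num_trees)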

  7. The Law of Large Numbers and Poker Machine Simulations.

    ERIC Educational Resources Information Center

    Fletcher, Rod

    2000-01-01

    Creates graphs to see how the relative frequency of an event tends to approach the probability of that event as the number of trials increases. Uses a simulation of a poker machine to provide context for this subject. (ASK)
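
    The underlying demonstration is easy to reproduce. A minimal sketch follows, with a made-up win probability, since the article's machine details are not given here.

        # Law-of-large-numbers demonstration in the spirit of the article:
        # the relative frequency of a simulated poker-machine win approaches
        # the true win probability as the number of plays grows.
        import random

        P_WIN = 0.12                      # assumed per-play win probability
        random.seed(1)

        wins = 0
        for n in range(1, 100_001):
            wins += random.random() < P_WIN
            if n in (10, 100, 1_000, 10_000, 100_000):
                print(f"{n:>6} plays: relative frequency = {wins / n:.4f}")
        # The printed frequencies settle toward 0.12 as n increases.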

  8. Secure Large-Scale Airport Simulations Using Distributed Computational Resources

    NASA Technical Reports Server (NTRS)

    McDermott, William J.; Maluf, David A.; Gawdiak, Yuri; Tran, Peter; Clancy, Dan (Technical Monitor)

    2001-01-01

    To fully conduct research that will support the far-term concepts, technologies and methods required to improve the safety of Air Transportation, a simulation environment of the requisite degree of fidelity must first be in place. The Virtual National Airspace Simulation (VNAS) will provide the underlying infrastructure necessary for such a simulation system. Aerospace-specific knowledge management services such as intelligent data-integration middleware will support the management of information associated with this complex and critically important operational environment. This simulation environment, in conjunction with a distributed network of supercomputers, and high-speed network connections to aircraft, and to Federal Aviation Administration (FAA), airline and other data-sources will provide the capability to continuously monitor and measure operational performance against expected performance. The VNAS will also provide the tools to use this performance baseline to obtain a perspective of what is happening today and of the potential impact of proposed changes before they are introduced into the system.

  9. AUTOMATED PARAMETRIC EXECUTION AND DOCUMENTATION FOR LARGE-SCALE SIMULATIONS

    SciTech Connect

    R. L. KELSEY; ET AL

    2001-03-01

    A language has been created to facilitate the automatic execution of simulations for purposes of enabling parametric study and test and evaluation. Its function is similar in nature to a job-control language, but more capability is provided in that the language extends the notion of literate programming to job control. Interwoven markup tags self document and define the job control process. The language works in tandem with another language used to describe physical systems. Both languages are implemented in the Extensible Markup Language (XML). A user describes a physical system for simulation and then creates a set of instructions for automatic execution of the simulation. Support routines merge the instructions with the physical-system description, execute the simulation the specified number of times, gather the output data, and document the process and output for the user. The language enables the guided exploration of a parameter space and can be used for simulations that must determine optimal solutions to particular problems. It is generalized enough that it can be used with any simulation input files that are described using XML. XML is shown to be useful as a description language, an interchange language, and a self-documented language.
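
    The merge-and-execute idea can be illustrated with a small sketch. All element names and the run_simulation stand-in below are hypothetical; the paper's actual markup vocabulary is not reproduced here.

        # Sketch of the merge-and-execute idea: a parametric <sweep> instruction
        # document is merged into an XML physical-system description and the
        # simulation is run once per parameter value. Element names and the
        # run_simulation stand-in are hypothetical.
        import copy
        import xml.etree.ElementTree as ET

        system_xml = ET.fromstring(
            "<system><material density='1.0'/><mesh cells='100'/></system>")
        sweep_xml = ET.fromstring(
            "<sweep target='material' attribute='density'>"
            "<value>0.5</value><value>1.0</value><value>2.0</value></sweep>")

        def run_simulation(system):
            """Stand-in for invoking the real simulator on one merged input."""
            return float(system.find("material").get("density")) ** 2

        results = []
        for v in sweep_xml.findall("value"):
            merged = copy.deepcopy(system_xml)            # fresh input per run
            merged.find(sweep_xml.get("target")).set(
                sweep_xml.get("attribute"), v.text)
            results.append((v.text, run_simulation(merged)))

        # Document the process and outputs for the user, as the language intends.
        for value, out in results:
            print(f"density={value}: output={out}")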

  10. Enrichment of diluted cell populations from large sample volumes using 3D carbon-electrode dielectrophoresis.

    PubMed

    Islam, Monsur; Natu, Rucha; Larraga-Martinez, Maria Fernanda; Martinez-Duarte, Rodrigo

    2016-05-01

    Here, we report on an enrichment protocol using carbon electrode dielectrophoresis to isolate and purify a targeted cell population from sample volumes up to 4 ml. We aim at trapping, washing, and recovering an enriched cell fraction that will facilitate downstream analysis. We used an increasingly diluted sample of yeast, 10^6-10^2 cells/ml, to demonstrate the isolation and enrichment of few cells at increasing flow rates. A maximum average enrichment of 154.2 ± 23.7 times was achieved when the sample flow rate was 10 μl/min and yeast cells were suspended in low electrically conductive media that maximizes dielectrophoresis trapping. A COMSOL Multiphysics model allowed for the comparison between experimental and simulation results. Discussion is conducted on the discrepancies between such results and how the model can be further improved.
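
    The observation that a low-conductivity medium maximizes trapping follows from the standard time-averaged dielectrophoretic force on a spherical particle (a textbook result, not stated in the abstract):

        \langle \mathbf{F}_{\mathrm{DEP}} \rangle
            = 2\pi \varepsilon_m r^3 \,\mathrm{Re}\!\left[f_{\mathrm{CM}}(\omega)\right]
              \nabla \lvert \mathbf{E}_{\mathrm{rms}} \rvert^2,
        \qquad
        f_{\mathrm{CM}} = \frac{\varepsilon_p^* - \varepsilon_m^*}
                               {\varepsilon_p^* + 2\varepsilon_m^*},
        \qquad
        \varepsilon^* = \varepsilon - j\,\sigma/\omega ,

    where the subscripts m and p denote medium and particle, r is the particle radius, and σ is conductivity. At low frequencies f_CM ≈ (σ_p − σ_m)/(σ_p + 2σ_m), so lowering the medium conductivity σ_m drives Re[f_CM] toward its positive-DEP limit and strengthens trapping at the field maxima near the carbon electrodes.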

  11. High-rate Plastic Deformation of Nanocrystalline Tantalum to Large Strains: Molecular Dynamics Simulation

    SciTech Connect

    Rudd, R E

    2009-02-05

    Recent advances in the ability to generate extremes of pressure and temperature in dynamic experiments and to probe the response of materials has motivated the need for special materials optimized for those conditions as well as a need for a much deeper understanding of the behavior of materials subjected to high pressure and/or temperature. Of particular importance is the understanding of rate effects at the extremely high rates encountered in those experiments, especially with the next generation of laser drives such as at the National Ignition Facility. Here we use large-scale molecular dynamics (MD) simulations of the high-rate deformation of nanocrystalline tantalum to investigate the processes associated with plastic deformation for strains up to 100%. We use initial atomic configurations that were produced through simulations of solidification in the work of Streitz et al [Phys. Rev. Lett. 96, (2006) 225701]. These 3D polycrystalline systems have typical grain sizes of 10-20 nm. We also study a rapidly quenched liquid (amorphous solid) tantalum. We apply a constant volume (isochoric), constant temperature (isothermal) shear deformation over a range of strain rates, and compute the resulting stress-strain curves to large strains for both uniaxial and biaxial compression. We study the rate dependence and identify plastic deformation mechanisms. The identification of the mechanisms is facilitated through a novel technique that computes the local grain orientation, returning it as a quaternion for each atom. This analysis technique is robust and fast, and has been used to compute the orientations on the fly during our parallel MD simulations on supercomputers. We find both dislocation and twinning processes are important, and they interact in the weak strain hardening in these extremely fine-grained microstructures.
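
    One plausible realization of the per-atom orientation analysis (illustrative; the report's exact algorithm is not given in the abstract) is to fit, for each atom, the rotation that best maps the ideal BCC first-neighbor shell onto the observed neighbor bond directions and return it as a quaternion. The sketch assumes bonds have already been matched to reference directions.

        # Per-atom grain orientation as a quaternion (one plausible scheme, not
        # necessarily the report's): Kabsch-fit the rotation mapping the ideal
        # BCC first-neighbor shell onto observed neighbor bonds.
        import numpy as np
        from scipy.spatial.transform import Rotation

        # Ideal BCC first-neighbor directions: the eight <111> half-diagonals.
        REF = np.array([[sx, sy, sz] for sx in (-1, 1)
                        for sy in (-1, 1) for sz in (-1, 1)], float)
        REF /= np.linalg.norm(REF, axis=1, keepdims=True)

        def grain_quaternion(bonds):
            """bonds: (8, 3) neighbor vectors, ordered to correspond with REF."""
            unit = bonds / np.linalg.norm(bonds, axis=1, keepdims=True)
            rot, _rmsd = Rotation.align_vectors(unit, REF)  # Kabsch best fit
            return rot.as_quat()                            # (x, y, z, w)

        # Synthetic check: rotate the perfect shell and recover the rotation.
        true = Rotation.from_euler("zyx", [30, 10, -20], degrees=True)
        bonds = true.apply(REF) + 0.01 * np.random.default_rng(0).normal(size=(8, 3))
        print(grain_quaternion(bonds))   # close to true.as_quat() up to sign
        print(true.as_quat())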

  12. Large eddy simulations as a parameterization tool for canopy-structure × VOC-flux interactions

    NASA Astrophysics Data System (ADS)

    Kenny, William; Bohrer, Gil; Chatziefstratiou, Efthalia

    2015-04-01

    We have been working to develop a new post-processing model - High resolution VOC Atmospheric Chemistry in Canopies (Hi-VACC) - which resolves the dispersion and chemistry of reacting chemical species given their emission rates from the vegetation and soil, driven by high resolution meteorological forcing and wind fields from various high resolution atmospheric regional and large-eddy simulations. Hi-VACC reads in fields of pressure, temperature, humidity, air density, short-wave radiation, wind (3-D u, v and w components) and sub-grid-scale turbulence that were simulated by a high resolution atmospheric model. This meteorological forcing data is provided as snapshots of 3-D fields. We have tested it using a number of RAMS-based Forest Large Eddy Simulation (RAFLES) runs. This can then be used for parameterization of the effects of canopy structure on VOC fluxes. RAFLES represents both drag and volume restriction by the canopy over an explicit 3-D domain. We have used these features to show the effects of canopy structure on fluxes of momentum, heat, and water in heterogeneous environments at the tree-crown scale by modifying the canopy structure, representing it as either homogeneous or realistically heterogeneous. We combine this with Hi-VACC's capabilities to model dispersion and chemistry of reactive VOCs to parameterize the fluxes of these reactive species with respect to canopy structure. The high resolution capabilities of Hi-VACC coupled with RAFLES allow for sensitivity analysis to determine important structural considerations in sub-grid-scale parameterization of these phenomena in larger models.

  13. Pathways of deep cyclones associated with large volume changes (LVCs) and major Baltic inflows (MBIs)

    NASA Astrophysics Data System (ADS)

    Lehmann, Andreas; Höflich, Katharina; Post, Piia; Myrberg, Kai

    2017-03-01

    Large volume changes (LVCs) and major Baltic inflows (MBIs) are essential processes for the water exchange and renewal of the stagnant water in the Baltic Sea deep basins. These strong inflows are known to be forced by persistent westerly wind conditions. In this study, MBIs are considered as a subset of LVCs, transporting with the large water volume a big amount of highly saline and oxygenated water into the Baltic Sea. Since the early 1980s the frequency of MBIs has dropped drastically from 5 to 7 events to only one inflow per decade, and long lasting periods without MBIs became the usual state. Only in January 1993, 2003 and December 2014 did MBIs occur that were able to interrupt the stagnation periods in the deep basins of the Baltic Sea. However, in spite of the decreasing frequency of MBIs, there is no obvious decrease of LVCs. The Landsort sea level is known to reflect the mean sea level of the Baltic Sea very well, and hence LVCs have been calculated for the period 1887-2015 by filtering daily time series of Landsort sea surface elevation anomalies. The cases with local minimum and maximum difference resulting in at least 60 km3 of water volume change, excluding the volume change due to runoff, have been chosen for a closer study (1948-2013) of characteristic pathways of deep cyclones. The average duration of LVCs is about 40 days. During this time, 5-6 deep cyclones move along characteristic storm tracks. Furthermore, MBIs are characterized by even higher cyclonic activity compared to average LVCs. We obtained four main routes of deep cyclones associated with LVCs, which also appear in the climatology. One approaches from the west at about 56-60°N, passing the northern North Sea, northern Denmark, Sweden and the Island of Gotland. A second broad corridor of frequent cyclone pathways enters the study area north of Scotland between 60 and 66°N turning north-eastwards along the northern coast of Scandinavia. This branch bifurcates into smaller routes. One
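
    The volume bookkeeping behind the 60 km3 criterion is simple: a basin-mean sea-level change of Δη implies a volume change of A·Δη. The toy sketch below assumes a round Baltic surface area of ~3.9 × 10^5 km2 and synthetic data, and omits the study's runoff correction and filtering details.

        # Toy version of the LVC detection described above: convert Landsort
        # sea-level anomalies to Baltic water volume and flag events with at
        # least 60 km^3 of volume change. Surface area is an assumed round
        # value; runoff correction and filtering details are omitted.
        import numpy as np

        BALTIC_AREA_KM2 = 3.9e5           # assumed round value
        THRESHOLD_KM3 = 60.0

        def volume_km3(level_anomaly_cm):
            """Volume anomaly implied by a basin-mean sea-level anomaly in cm."""
            return BALTIC_AREA_KM2 * (level_anomaly_cm / 1e5)   # cm -> km

        # Synthetic daily Landsort series (cm): noise plus one inflow-like ramp.
        rng = np.random.default_rng(3)
        eta = rng.normal(0.0, 3.0, 365)
        eta[100:140] += np.linspace(0.0, 25.0, 40)              # ~40-day event

        vol = volume_km3(eta)
        events = []
        i = 0
        while i < len(vol):
            j = min(i + 60, len(vol))                           # search window
            lo, hi = vol[i:j].argmin() + i, vol[i:j].argmax() + i
            if lo < hi and vol[hi] - vol[lo] >= THRESHOLD_KM3:
                events.append((lo, hi, vol[hi] - vol[lo]))
                i = hi + 1
            else:
                i += 1
        print(events)   # (start_day, end_day, volume gain in km^3) candidates

    For scale, a 15.4 cm basin-mean rise over 3.9 × 10^5 km2 already corresponds to the 60 km3 threshold.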

  14. Fast surface and volume rendering based on shear-warp factorization for a surgical simulator.

    PubMed

    Kim, Keun Ho; Kwon, Min Jeong; Kwon, Sung Min; Ra, Jong Beom; Park, HyunWook

    2002-01-01

    Fast simultaneous visualization of 3D medical images and medical instruments is necessary for a surgical simulator. Because unconstrained motion of a medical instrument is more frequent than that of the patient, the visualization of medical instruments is performed in real time using surface rendering. However, volume rendering is usually used for realistic visualization of the 3D medical image. We have developed an algorithm to combine a volume-rendered image and a surface-rendered image using a Z-buffer for depth cueing, which is applied to a surgical simulator. Surface rendering is used for visualization of a medical instrument, whereas 3D medical images such as CT and MRI are usually visualized by volume rendering, because segmentation of the medical image is difficult. In this study, when the volume-rendered image is combined with the surface-rendered image, the amount of computation is reduced by early ray termination and instrument-region masking in the sheared image space. Using these methods, a fast combination of volume-rendered and surface-rendered images is performed with high image quality. The method is appropriate for real-time visualization of 3D medical images and medical instrument motion in the images, and can be applied to image-guided therapy and surgical simulators.
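
    The combination logic reduces to terminating each volume ray at the instrument's Z-buffer depth and compositing the surface color behind the accumulated volume color. A minimal sketch with synthetic data, omitting the paper's shear-warp factorization and instrument-region masking:

        # Sketch of depth-cued merging of a volume-rendered image with a
        # surface-rendered instrument image via its Z-buffer. Illustrative
        # only; the shear-warp and masking details of the paper are omitted.
        import numpy as np

        H = W = 64
        DEPTH = 128
        rng = np.random.default_rng(0)
        volume = rng.random((H, W, DEPTH)).astype(np.float32)  # stand-in CT

        # Surface renderer output for the instrument: color and Z-buffer.
        surf_color = np.zeros((H, W), np.float32)
        surf_z = np.full((H, W), DEPTH, np.float32)            # "infinitely" far
        surf_color[20:40, 20:40] = 1.0                         # bright instrument
        surf_z[20:40, 20:40] = 50                              # instrument depth

        out = np.zeros((H, W), np.float32)
        for r in range(H):
            for c in range(W):
                color, alpha = 0.0, 0.0
                for k in range(int(surf_z[r, c])):   # stop at instrument depth
                    a = 0.03 * volume[r, c, k]       # toy transfer function
                    color += (1.0 - alpha) * a * volume[r, c, k]
                    alpha += (1.0 - alpha) * a
                    if alpha > 0.98:                 # early ray termination
                        break
                # Composite the surface sample behind the accumulated volume.
                out[r, c] = color + (1.0 - alpha) * surf_color[r, c]
        print("instrument region mean:", float(out[20:40, 20:40].mean()))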

  15. Calcium isolation from large-volume human urine samples for 41Ca analysis by accelerator mass spectrometry.

    PubMed

    Miller, James J; Hui, Susanta K; Jackson, George S; Clark, Sara P; Einstein, Jane; Weaver, Connie M; Bhattacharyya, Maryka H

    2013-08-01

    Calcium oxalate precipitation is the first step in preparation of biological samples for 41Ca analysis by accelerator mass spectrometry. A simplified protocol for large-volume human urine samples was characterized, with statistically significant increases in ion current and decreases in interference. This large-volume assay minimizes cost and effort and maximizes time after 41Ca administration during which human samples, collected over a lifetime, provide 41Ca:Ca ratios that are significantly above background.

  16. Predicting nurse staffing needs for a labor and birth unit in a large-volume perinatal service.

    PubMed

    Simpson, Kathleen Rice

    2015-01-01

    This project was designed to test a nurse staffing model for its ability to accurately determine staffing needs for a large-volume labor and birth unit based on a staffing gap analysis using the nurse staffing guidelines from the Association of Women's Health, Obstetric and Neonatal Nurses (AWHONN). The staffing model and the AWHONN staffing guidelines were found to be reliable methods to predict staffing needs for a large-volume labor and birth unit.

  17. Data-driven RANS for simulations of large wind farms

    NASA Astrophysics Data System (ADS)

    Iungo, G. V.; Viola, F.; Ciri, U.; Rotea, M. A.; Leonardi, S.

    2015-06-01

    In the wind energy industry there is a growing need for real-time predictions of wind turbine wake flows in order to optimize power plant control and inhibit detrimental wake interactions. To this aim, a data-driven RANS approach is proposed in order to achieve very low computational costs and adequate accuracy through the data assimilation procedure. The RANS simulations are implemented with a classical Boussinesq hypothesis and a mixing length turbulence closure model, which is calibrated through the available data. High-fidelity LES simulations of a utility-scale wind turbine operating with different tip speed ratios are used as database. It is shown that the mixing length model for the RANS simulations can be calibrated accurately through the Reynolds stress of the axial and radial velocity components, and the gradient of the axial velocity in the radial direction. It is found that the mixing length is roughly invariant in the very near wake, then it increases linearly with the downstream distance in the diffusive region. The variation rate of the mixing length in the downstream direction is proposed as a criterion to detect the transition between near wake and transition region of a wind turbine wake. Finally, RANS simulations were performed with the calibrated mixing length model, and a good agreement with the LES simulations is observed.
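
    With the Boussinesq closure, −⟨u′_x u′_r⟩ = ν_t ∂U/∂r and ν_t = l_m² |∂U/∂r|, so the mixing length follows pointwise from exactly the LES statistics named above. A minimal sketch with synthetic stand-in profiles:

        # Mixing-length calibration implied by the Boussinesq closure:
        #   -<u'_x u'_r> = nu_t dU/dr,  nu_t = l_m^2 |dU/dr|
        #   =>  l_m = sqrt(-<u'_x u'_r> / (dU/dr |dU/dr|)).
        # Synthetic stand-in data; not the LES database used in the paper.
        import numpy as np

        r = np.linspace(0.05, 1.5, 30)                 # radial stations
        dU_dr = 0.8 * np.exp(-((r - 0.5) / 0.3) ** 2)  # fake shear profile
        l_true = 0.05 + 0.02 * r                       # fake "true" mixing length
        uv = -(l_true ** 2) * dU_dr * np.abs(dU_dr)    # implied Reynolds stress

        with np.errstate(divide="ignore", invalid="ignore"):
            l_m = np.sqrt(np.maximum(-uv / (dU_dr * np.abs(dU_dr)), 0.0))

        print(np.allclose(l_m, l_true))   # True: calibration recovers l_true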

  18. Emerging selection bias in large-scale climate change simulations

    NASA Astrophysics Data System (ADS)

    Swanson, Kyle L.

    2013-06-01

    Climate change simulations are the output of enormously complicated models containing resolved and parameterized physical processes ranging in scale from microns to the size of the Earth itself. Given this complexity, the application of subjective criteria in model development is inevitable. Here we show one danger of the use of such criteria in the construction of these simulations, namely the apparent emergence of a selection bias between generations of these simulations. Earlier generation ensembles of model simulations are shown to possess sufficient diversity to capture recent observed shifts in both the mean surface air temperature as well as the frequency of extreme monthly mean temperature events due to climate warming. However, current generation ensembles of model simulations are statistically inconsistent with these observed shifts, despite a marked reduction in the spread among ensemble members that by itself suggests convergence towards some common solution. This convergence indicates the possibility of a selection bias based upon warming rate. It is hypothesized that this bias is driven by the desire to more accurately capture the observed recent acceleration of warming in the Arctic and corresponding decline in Arctic sea ice. However, this convergence is difficult to justify given the significant and widening discrepancy between the modeled and observed warming rates outside of the Arctic.

  19. A survey of modelling methods for high-fidelity wind farm simulations using large eddy simulation.

    PubMed

    Breton, S-P; Sumner, J; Sørensen, J N; Hansen, K S; Sarmast, S; Ivanell, S

    2017-04-13

    Large eddy simulations (LES) of wind farms have the capability to provide valuable and detailed information about the dynamics of wind turbine wakes. For this reason, their use within the wind energy research community is on the rise, spurring the development of new models and methods. This review surveys the most common schemes available to model the rotor, atmospheric conditions and terrain effects within current state-of-the-art LES codes, of which an overview is provided. A summary of the experimental research data available for validation of LES codes within the context of single and multiple wake situations is also supplied. Some typical results for wind turbine and wind farm flows are presented to illustrate best practices for carrying out high-fidelity LES of wind farms under various atmospheric and terrain conditions. This article is part of the themed issue 'Wind energy in complex terrains'.

  20. Large-scale multi-agent transportation simulations

    NASA Astrophysics Data System (ADS)

    Cetin, Nurhan; Nagel, Kai; Raney, Bryan; Voellmy, Andreas

    2002-08-01

    It is now possible to microsimulate the traffic of whole metropolitan areas with 10 million travelers or more, "micro" meaning that each traveler is resolved individually as a particle. In contrast to physics or chemistry, these particles have internal intelligence; for example, they know where they are going. This means that a transportation simulation project will have, besides the traffic microsimulation, modules which model this intelligent behavior. The most important modules are for route generation and for demand generation. Demand is generated by each individual in the simulation making a plan of activities such as sleeping, eating, working, shopping, etc. If activities are planned at different locations, they obviously generate demand for transportation. This however is not enough since those plans are influenced by congestion which initially is not known. This is solved via a relaxation method, which means iterating back and forth between the activities/routes generation and the traffic simulation.
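
    The relaxation loop can be caricatured with two routes and a linear congestion function; the cost model and the 10% replanning fraction below are illustrative stand-ins, not the project's actual modules.

        # Toy version of the relaxation described above: iterate between plan
        # (route) choice and "traffic simulation" until congested travel times
        # and choices are mutually consistent.
        N = 10_000                                   # travelers
        free, slope = [10.0, 15.0], [0.002, 0.001]   # free-flow time, congestion
        n_on_0 = N                                   # initially all pick route 0

        for it in range(12):
            loads = [n_on_0, N - n_on_0]
            # "Traffic simulation": congested times given current plans.
            times = [free[i] + slope[i] * loads[i] for i in range(2)]
            # "Replanning": 10% of travelers reconsider, taking the faster route.
            movers = int(0.1 * N)
            if times[0] > times[1]:
                n_on_0 = max(0, n_on_0 - movers)
            elif times[1] > times[0]:
                n_on_0 = min(N, n_on_0 + movers)
            print(f"iter {it:2d}: route times = {times[0]:6.2f}, {times[1]:6.2f}")
        # Times equalize (here at 20, 20 with 5,000 travelers per route) as the
        # iterations relax toward a user equilibrium.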

  1. Manufacturing Process Simulation of Large-Scale Cryotanks

    NASA Technical Reports Server (NTRS)

    Babai, Majid; Phillips, Steven; Griffin, Brian

    2003-01-01

    NASA's Space Launch Initiative (SLI) is an effort to research and develop the technologies needed to build a second-generation reusable launch vehicle. It is required that this new launch vehicle be 100 times safer and 10 times cheaper to operate than current launch vehicles. Part of the SLI includes the development of reusable composite and metallic cryotanks. The size of these reusable tanks is far greater than anything ever developed and exceeds the design limits of current manufacturing tools. Several design and manufacturing approaches have been formulated, but many factors must be weighed during the selection process. Among these factors are tooling reachability, cycle times, feasibility, and facility impacts. The manufacturing process simulation capabilities available at NASA's Marshall Space Flight Center have played a key role in down-selecting between the various manufacturing approaches. By creating 3-D manufacturing process simulations, the varying approaches can be analyzed in a virtual world before any hardware or infrastructure is built. This analysis can detect and eliminate costly flaws in the various manufacturing approaches. The simulations check for collisions between devices, verify that design limits on joints are not exceeded, and provide cycle times which aid in the development of an optimized process flow. In addition, new ideas and concerns are often raised after seeing the visual representation of a manufacturing process flow. The output of the manufacturing process simulations allows for cost and safety comparisons to be performed between the various manufacturing approaches. This output helps determine which manufacturing process options reach the safety and cost goals of the SLI. As part of the SLI, The Boeing Company was awarded a basic period contract to research and propose options for both a metallic and a composite cryotank. Boeing then entered into a task agreement with the Marshall Space Flight Center to provide manufacturing

  2. The big fat LARS - a LArge Reservoir Simulator for hydrate formation and gas production

    NASA Astrophysics Data System (ADS)

    Beeskow-Strauch, Bettina; Spangenberg, Erik; Schicks, Judith M.; Giese, Ronny; Luzi-Helbing, Manja; Priegnitz, Mike; Klump, Jens; Thaler, Jan; Abendroth, Sven

    2013-04-01

    Simulating natural scenarios on lab scale is a common technique to gain insight into geological processes with moderate effort and expenses. Due to the remote occurrence of gas hydrates, their behavior in sedimentary deposits is largely investigated with experimental set-ups in the laboratory. In the framework of the submarine gas hydrate research project (SUGAR) a large reservoir simulator (LARS) with an internal volume of 425 liters has been designed, built and tested. To our knowledge, this set-up is presently unique worldwide. Because of its large volume it is suitable for pilot-plant-scale tests of hydrate behavior in sediments. That includes not only the option of systematic tests on gas hydrate formation in various sedimentary settings but also the possibility to mimic scenarios for hydrate decomposition and subsequent natural gas extraction. Based on these experimental results, various numerical simulations can be realized. Here, we present the design and the experimental set-up of LARS. The prerequisites for the simulation of a natural gas hydrate reservoir are porous sediments, methane, water, low temperature and high pressure. The reservoir is supplied by methane-saturated and pre-cooled water. For its preparation an external gas-water mixing stage is available. The methane-loaded water is continuously flushed into LARS as finely dispersed fluid via bottom- and top-located spargers. LARS is equipped with a mantle cooling system and can be kept at a chosen set temperature. The temperature distribution is monitored at 14 representative locations throughout the reservoir by Pt100 sensors. Pressure needs are met using syringe pump stands. A tomographic system, consisting of a 375-electrode configuration, is attached to the mantle for monitoring the hydrate distribution throughout the entire reservoir volume. Two sets of tubular polydimethylsiloxane membranes are applied to determine the gas-water ratio within the reservoir using the effect of permeability

  3. Large-eddy simulation of nitrogen injection at trans- and supercritical conditions

    NASA Astrophysics Data System (ADS)

    Müller, Hagen; Niedermeier, Christoph A.; Matheis, Jan; Pfitzner, Michael; Hickel, Stefan

    2016-01-01

    Large-eddy simulations (LESs) of cryogenic nitrogen injection into a warm environment at supercritical pressure are performed and real-gas thermodynamics models and subgrid-scale (SGS) turbulence models are evaluated. The comparison of different SGS models — the Smagorinsky model, the Vreman model, and the adaptive local deconvolution method — shows that the representation of turbulence on the resolved scales has a notable effect on the location of jet break-up, whereas the particular modeling of unresolved scales is less important for the overall mean flow field evolution. More important are the models for the fluid's thermodynamic state. The injected fluid is either in a supercritical or in a transcritical state and undergoes a pseudo-boiling process during mixing. Such flows typically exhibit strong density gradients that delay the instability growth and can lead to a redistribution of turbulence kinetic energy from the radial to the axial flow direction. We evaluate novel volume-translation methods on the basis of the cubic Peng-Robinson equation of state in the framework of LES. At small extra computational cost, their application considerably improves the simulation results compared to the standard formulation. Furthermore, we found that the choice of inflow temperature is crucial for the reproduction of the experimental results and that heat addition within the injector can affect the mean flow field in comparison to results with an adiabatic injector.
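
    For reference, a compact sketch of a constant volume translation applied to the cubic Peng-Robinson equation of state for nitrogen (the paper's volume-translation methods are more elaborate; the critical constants are standard, while the shift c here is an assumed round number):

        # Peng-Robinson density for nitrogen with a constant volume translation,
        # the kind of correction the paper's methods build on. Critical
        # constants are standard; the shift c is an illustrative assumption.
        import numpy as np

        R = 8.314462618          # J/(mol K)
        TC, PC, OMEGA = 126.19, 3.3958e6, 0.0372   # nitrogen
        M = 0.0280134            # kg/mol

        def pr_density(T, p, c=0.0):
            a = 0.45724 * R**2 * TC**2 / PC
            b = 0.07780 * R * TC / PC
            kappa = 0.37464 + 1.54226 * OMEGA - 0.26992 * OMEGA**2
            alpha = (1.0 + kappa * (1.0 - np.sqrt(T / TC))) ** 2
            # Cubic in Z: Z^3 - (1-B)Z^2 + (A - 3B^2 - 2B)Z - (AB - B^2 - B^3) = 0
            A = a * alpha * p / (R * T) ** 2
            B = b * p / (R * T)
            roots = np.roots(
                [1.0, -(1.0 - B), A - 3*B**2 - 2*B, -(A*B - B**2 - B**3)])
            Z = roots[np.isreal(roots)].real
            v = Z * R * T / p - c        # volume translation shifts v by -c
            return M / v[v > 0]          # density per physical real root, kg/m^3

        # 5 MPa (supercritical pressure), cryogenic injection temperature:
        print(pr_density(120.0, 5.0e6))            # untranslated
        print(pr_density(120.0, 5.0e6, c=3e-6))    # with an assumed shift c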

  4. Large-eddy simulation of nitrogen injection at trans- and supercritical conditions

    SciTech Connect

    Müller, Hagen; Pfitzner, Michael; Niedermeier, Christoph A.; Matheis, Jan; Hickel, Stefan

    2016-01-15

    Large-eddy simulations (LESs) of cryogenic nitrogen injection into a warm environment at supercritical pressure are performed and real-gas thermodynamics models and subgrid-scale (SGS) turbulence models are evaluated. The comparison of different SGS models — the Smagorinsky model, the Vreman model, and the adaptive local deconvolution method — shows that the representation of turbulence on the resolved scales has a notable effect on the location of jet break-up, whereas the particular modeling of unresolved scales is less important for the overall mean flow field evolution. More important are the models for the fluid’s thermodynamic state. The injected fluid is either in a supercritical or in a transcritical state and undergoes a pseudo-boiling process during mixing. Such flows typically exhibit strong density gradients that delay the instability growth and can lead to a redistribution of turbulence kinetic energy from the radial to the axial flow direction. We evaluate novel volume-translation methods on the basis of the cubic Peng-Robinson equation of state in the framework of LES. At small extra computational cost, their application considerably improves the simulation results compared to the standard formulation. Furthermore, we found that the choice of inflow temperature is crucial for the reproduction of the experimental results and that heat addition within the injector can affect the mean flow field in comparison to results with an adiabatic injector.

  5. Low-Dissipation Advection Schemes Designed for Large Eddy Simulations of Hypersonic Propulsion Systems

    NASA Technical Reports Server (NTRS)

    White, Jeffrey A.; Baurle, Robert A.; Fisher, Travis C.; Quinlan, Jesse R.; Black, William S.

    2012-01-01

    The 2nd-order upwind inviscid flux scheme implemented in the multi-block, structured grid, cell centered, finite volume, high-speed reacting flow code VULCAN has been modified to reduce numerical dissipation. This modification was motivated by the desire to improve the code's ability to perform large eddy simulations. The reduction in dissipation was accomplished through a hybridization of non-dissipative and dissipative discontinuity-capturing advection schemes that reduces numerical dissipation while maintaining the ability to capture shocks. A methodology for constructing hybrid-advection schemes that blends nondissipative fluxes consisting of linear combinations of divergence and product rule forms discretized using 4th-order symmetric operators, with dissipative, 3rd or 4th-order reconstruction based upwind flux schemes was developed and implemented. A series of benchmark problems with increasing spatial and fluid dynamical complexity were utilized to examine the ability of the candidate schemes to resolve and propagate structures typical of turbulent flow, their discontinuity capturing capability and their robustness. A realistic geometry typical of a high-speed propulsion system flowpath was computed using the most promising of the examined schemes and was compared with available experimental data to demonstrate simulation fidelity.
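
    The hybridization concept reduces, in one dimension, to F = (1 − s)·F_central + s·F_upwind with a shock sensor s ∈ [0, 1]. The sketch below uses a 4th-order symmetric interface flux and a Jameson-type sensor as generic stand-ins for VULCAN's actual formulation.

        # Generic 1-D illustration of the hybridization idea: blend a
        # non-dissipative 4th-order central flux with a dissipative upwind
        # flux via a shock sensor. Not VULCAN's exact scheme.
        import numpy as np

        def hybrid_flux(u, a=1.0):
            """Interface fluxes F_{i+1/2} for du/dt + a du/dx = 0, periodic."""
            up2, up1 = np.roll(u, -2), np.roll(u, -1)
            um1 = np.roll(u, 1)
            # 4th-order symmetric (central) interface flux: low dissipation.
            f_central = a * (-up2 + 7.0 * up1 + 7.0 * u - um1) / 12.0
            # 1st-order upwind interface flux: robust but dissipative (a > 0).
            f_upwind = a * u
            # Jameson-type sensor: ~1 near jumps, ~0 in smooth regions.
            s = np.abs(up1 - 2.0 * u + um1) / (np.abs(up1) + 2.0 * np.abs(u)
                                               + np.abs(um1) + 1e-12)
            s = np.minimum(1.0, 10.0 * s)
            return (1.0 - s) * f_central + s * f_upwind

        # One forward-Euler step on a profile with a jump and a smooth wave.
        x = np.linspace(0.0, 1.0, 200, endpoint=False)
        u = np.where(x < 0.5, 1.0, 0.0) + 0.1 * np.sin(2 * np.pi * x)
        F = hybrid_flux(u)
        dt, dx = 0.002, x[1] - x[0]
        u_new = u - dt / dx * (F - np.roll(F, 1))   # conservative update
        print(float(u_new.min()), float(u_new.max()))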

  6. Safety of large-volume leukapheresis for collection of peripheral blood progenitor cells.

    PubMed

    Reik, R A; Noto, T A; Fernandez, H F

    1997-01-01

    Large volume leukapheresis (LVL) reduces the number of procedures required to obtain adequate peripheral blood progenitor cells (PBPCs) for autologous hematopoietic reconstitution. LVL involves the processing of >15 L or 5 patient blood volumes using high flow rates. We report our experience with LVL, evaluating its efficiency and adverse effects in 71 adult patients with hematologic or solid-organ malignancies. All were mobilized with chemotherapy and granulocyte colony-stimulating factor (G-CSF). All collections used a double-lumen apheresis catheter. Mean values per LVL were as follows: blood processed, 24.6 L; patient blood volumes processed, 5.9; ACD-A used, 1,048 ml; heparin used, 6,148 units; collection time, 290 min; blood flow rate, 89 ml/min. Eighty percent of the collections were completed in one or two procedures to obtain ≥6.0 × 10⁸ MNCs/kg body weight. The most frequent side effect (39%) was paresthesia due to citrate-related hypocalcemia. This was managed with oral calcium supplements and/or slower flow rates. Post-LVL electrolyte changes were generally asymptomatic. Prophylactic oral potassium supplements were administered in 57% of cases. Other reactions included hypotension (4%), prolonged paresthesia (1.4%), and headache (1.4%). Catheter problems in 9 (13%) of the procedures were attributed to clot formation (37%) or positional effects (63%). No bleeding occurred. Post-LVL decreases in hematocrit and platelet count averaged 3.5% and 46%, respectively. Six (4%) of the procedures required red blood cell transfusions. Platelet transfusions were given in 19 (13%) of the procedures. We conclude that adverse reactions with LVL are similar to those reported for conventional PBPC collections, making it safe and efficacious as an outpatient procedure.
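
    As a quick plausibility check (illustrative arithmetic only, not part of the study), the reported means are internally consistent: dividing the blood processed by the number of patient blood volumes implies a typical patient blood volume of about 4.2 L, and dividing it by the flow rate gives a run time close to the reported mean collection time.

```python
# Consistency check of the reported per-procedure means (illustration only).
blood_processed_l = 24.6     # mean blood processed per LVL, liters
blood_volumes = 5.9          # mean patient blood volumes processed
flow_ml_min = 89.0           # mean blood flow rate, ml/min

patient_blood_volume_l = blood_processed_l / blood_volumes   # ~4.2 L
run_time_min = blood_processed_l * 1000.0 / flow_ml_min      # ~276 min

print(f"implied patient blood volume: {patient_blood_volume_l:.1f} L")
print(f"implied processing time: {run_time_min:.0f} min (reported: 290 min)")
```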

  7. Collection of more hematopoietic progenitor cells with large volume leukapheresis in patients with multiple myeloma.

    PubMed

    Desikan, K R; Jagannath, S; Siegel, D; Nelson, J; Bracy, D; Barlogie, B; Tricot, G

    1998-02-01

    Reinfusion of mobilized peripheral blood stem cells (PBSC) after high-dose chemotherapy accelerates hematopoietic recovery. Because of the relatively low content of hematopoietic progenitors in the peripheral blood even after mobilization, multiple leukapheresis procedures are necessary to reach the required target number of CD34 cells to ensure prompt engraftment post-transplantation. Our previous studies have shown that the highest proportions of hematopoietic progenitor cells (CD34) are collected during the first three days of apheresis, whereas peak levels of myeloma cells are observed during subsequent days. Therefore, large volume leukapheresis (LVL), defined as processing of greater than 3 blood volumes or a total of at least 15 liters, was explored in 23 myeloma patients undergoing 91 procedures; 14 patients were mobilized with high-dose cyclophosphamide (6 g/m²) and hematopoietic growth factors and 9 with G-CSF only. CD34 yields were measured separately for the first and last two hours of collection. We observed no decrease in CD34 cells/kg during the last two hours of collection, and when the LVL collections were compared to historical matched controls mobilized with the same regimen, the median quantity of CD34 cells/kg/liter collected remained equivalent during all days of apheresis. When compared to G-CSF only, mobilization with high-dose cyclophosphamide appeared to result in superior hematopoietic stem cell collections. Interestingly, the G-CSF group experienced a progressive decrease in platelets during consecutive days of LVL, while the opposite was seen in the cyclophosphamide group. LVL procedures were not associated with a higher complication rate than standard-volume apheresis. We conclude that LVL procedures allow collection of more CD34 cells per session while not jeopardizing progenitor cell collections during subsequent sessions. Since more CD34 cells are collected, fewer days are required to attain the optimal target of progenitor cells

  8. Towards large eddy and direct simulation of complex turbulent flows

    NASA Technical Reports Server (NTRS)

    Moin, Parviz

    1991-01-01

    Recent advances in the methodology for direct numerical simulation of turbulent flows and some of the current applications are reviewed. It is argued that high-order finite difference schemes yield solutions with comparable accuracy to the spectral methods with the same number of degrees of freedom. The effects of random inflow conditions on the downstream evolution of turbulence are discussed.

  9. Optimal whole-body PET scanner configurations for different volumes of LSO scintillator: a simulation study

    NASA Astrophysics Data System (ADS)

    Poon, Jonathan K.; Dahlbom, Magnus L.; Moses, William W.; Balakrishnan, Karthik; Wang, Wenli; Cherry, Simon R.; Badawi, Ramsey D.

    2012-07-01

    The axial field of view (AFOV) of the current generation of clinical whole-body PET scanners ranges from 15 to 22 cm, which limits sensitivity and renders applications such as whole-body dynamic imaging, or imaging of very low activities in whole-body cellular tracking studies, almost impossible. Generally, extending the AFOV significantly increases the sensitivity and count-rate performance. However, extending the AFOV while maintaining detector thickness has significant cost implications. In addition, random coincidences, detector dead time, and object attenuation may reduce scanner performance as the AFOV increases. In this paper, we use Monte Carlo simulations to find the optimal scanner geometry (i.e. AFOV, detector thickness and acceptance angle) based on count-rate performance for scintillator volumes ranging from 10 to 93 liters, with detector thickness varying from 5 to 20 mm. We compare the results to the performance of a scanner based on the current Siemens Biograph mCT geometry and electronics. Our simulation models were developed based on individual components of the Siemens Biograph mCT and were validated against experimental data using the NEMA NU-2 2007 count-rate protocol. In the study, noise-equivalent count rate (NECR) was computed as a function of maximum ring difference (i.e. acceptance angle) and activity concentration using a 27 cm diameter, 200 cm long, uniformly filled cylindrical phantom for each scanner configuration. To reduce the effect of random coincidences, we implemented a variable coincidence time window based on the length of the lines of response, which increased NECR performance by up to 10% compared to a static coincidence time window for scanners with large maximum ring difference values. For a given scintillator volume, the optimal configuration results in modest count-rate performance gains of up to 16% compared to the shortest AFOV scanner with the thickest detectors. However, the longest AFOV of approximately 2 m with 20 mm

  10. Optimal whole-body PET scanner configurations for different volumes of LSO scintillator: a simulation study.

    PubMed

    Poon, Jonathan K; Dahlbom, Magnus L; Moses, William W; Balakrishnan, Karthik; Wang, Wenli; Cherry, Simon R; Badawi, Ramsey D

    2012-07-07

    The axial field of view (AFOV) of the current generation of clinical whole-body PET scanners ranges from 15 to 22 cm, which limits sensitivity and renders applications such as whole-body dynamic imaging, or imaging of very low activities in whole-body cellular tracking studies, almost impossible. Generally, extending the AFOV significantly increases the sensitivity and count-rate performance. However, extending the AFOV while maintaining detector thickness has significant cost implications. In addition, random coincidences, detector dead time, and object attenuation may reduce scanner performance as the AFOV increases. In this paper, we use Monte Carlo simulations to find the optimal scanner geometry (i.e. AFOV, detector thickness and acceptance angle) based on count-rate performance for scintillator volumes ranging from 10 to 93 liters, with detector thickness varying from 5 to 20 mm. We compare the results to the performance of a scanner based on the current Siemens Biograph mCT geometry and electronics. Our simulation models were developed based on individual components of the Siemens Biograph mCT and were validated against experimental data using the NEMA NU-2 2007 count-rate protocol. In the study, noise-equivalent count rate (NECR) was computed as a function of maximum ring difference (i.e. acceptance angle) and activity concentration using a 27 cm diameter, 200 cm long, uniformly filled cylindrical phantom for each scanner configuration. To reduce the effect of random coincidences, we implemented a variable coincidence time window based on the length of the lines of response, which increased NECR performance by up to 10% compared to a static coincidence time window for scanners with large maximum ring difference values. For a given scintillator volume, the optimal configuration results in modest count-rate performance gains of up to 16% compared to the shortest AFOV scanner with the thickest detectors. However, the longest AFOV of approximately 2 m with
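
    The figure of merit used throughout this study is the noise-equivalent count rate. A minimal sketch of the standard definition follows; the randoms factor k and the toy count rates are assumptions for illustration, not values from the paper.

```python
# Sketch of the standard noise-equivalent count rate definition. The paper
# follows the NEMA NU-2 2007 protocol; the randoms factor k and the toy
# count rates below are assumptions, not measured values.
def necr(trues, scatters, randoms, k=1.0):
    """NECR = T^2 / (T + S + k R); k = 1 (noiseless randoms estimate)
    or 2 (delayed-window subtraction), depending on the correction used."""
    denom = trues + scatters + k * randoms
    return trues**2 / denom if denom > 0.0 else 0.0

print(f"{necr(trues=350.0, scatters=180.0, randoms=250.0):.1f} kcps")
```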

  11. Design, simulation, and optimization of an RGB polarization independent transmission volume hologram

    NASA Astrophysics Data System (ADS)

    Mahamat, Adoum Hassan

    Volume phase holographic (VPH) gratings have been designed for use in many areas of science and technology, such as optical communication, medical imaging, spectroscopy, and astronomy. The goal of this dissertation is to design a volume phase holographic grating that provides diffraction efficiencies of at least 70% across the entire visible spectrum and higher than 90% for red, green, and blue light when the incident light is unpolarized. First, the complete design, simulation, and optimization of the volume hologram are presented. The optimization is done using a Monte Carlo analysis to solve for the index modulation needed to provide higher diffraction efficiencies. The solutions are determined by solving the diffraction efficiency equations given by Kogelnik's two-wave coupled-wave theory. The hologram is further optimized using rigorous coupled-wave analysis to correct for effects of absorption omitted by Kogelnik's method. Second, the fabrication, or recording, process of the volume hologram is described in detail. The active region of the volume hologram is created by interference of two coherent beams within the thin film. Third, the experimental setup and the measurement of properties including the diffraction efficiencies of the volume hologram and the thickness of the active region are described. Fourth, the polarimetric response of the volume hologram is investigated. The polarization study provides insight into the effect of the refractive-index modulation on the polarization state and diffraction efficiency of the incident light.
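
    The Kogelnik diffraction-efficiency relations that drive the optimization can be sketched for the simplest case, a lossless unslanted transmission grating at Bragg incidence; the film thickness, Bragg angle, and index-modulation values below are illustrative assumptions. The cos(2θ) factor on the p-wave coupling is the source of the polarization dependence the design must balance.

```python
# On-Bragg diffraction efficiency from Kogelnik's coupled-wave theory for a
# lossless, unslanted transmission grating; all numbers are assumptions.
import numpy as np

def eta_kogelnik(dn, d, wavelength, theta, polarization="s"):
    """theta is the Bragg angle inside the medium (radians)."""
    nu = np.pi * dn * d / (wavelength * np.cos(theta))
    if polarization == "p":
        nu *= np.cos(2.0 * theta)          # weaker p-wave coupling
    return np.sin(nu)**2

# example: 10-um film, 20-degree internal Bragg angle, green light
for dn in (0.02, 0.03, 0.04):
    s = eta_kogelnik(dn, 10e-6, 532e-9, np.radians(20.0), "s")
    p = eta_kogelnik(dn, 10e-6, 532e-9, np.radians(20.0), "p")
    print(f"dn = {dn:.2f}: eta_s = {s:.2f}, eta_p = {p:.2f}")
```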

  12. Large-Volume Resonant Microwave Discharge for Plasma Cleaning of a CEBAF 5-Cell SRF Cavity

    SciTech Connect

    J. Mammosser, S. Ahmed, K. Macha, J. Upadhyay, M. Nikolić, S. Popović, L. Vušković

    2012-07-01

    We report preliminary results on plasma generation in a 5-cell CEBAF superconducting radio-frequency (SRF) cavity for the application of cavity interior surface cleaning. CEBAF currently has ~300 of these five-cell cavities installed in the Jefferson Lab accelerator, which are mostly limited by cavity surface contamination. The development of an in-situ cavity surface cleaning method utilizing a resonant microwave discharge could lead to significant CEBAF accelerator performance improvement. This microwave discharge is currently being used for the development of a set of plasma cleaning procedures targeted at the removal of various organic, metal, and metal oxide impurities. These contaminants are responsible for the increase of surface resistance and the reduction of RF performance in installed cavities. The CEBAF five-cell cavity volume is ~0.5 m³, which places the discharge in the category of large-volume plasmas. The CEBAF cavity has cylindrical symmetry, but its elliptical shape and transverse power coupling make it an unusual plasma application, which requires special consideration of microwave breakdown. Our preliminary study includes microwave breakdown and optical spectroscopy, which were used to define the operating pressure range and the rate of removal of organic impurities.

  13. A large volume cell for in situ neutron diffraction studies of hydrothermal crystallizations.

    PubMed

    Xia, Fang; Qian, Gujie; Brugger, Joël; Studer, Andrew; Olsen, Scott; Pring, Allan

    2010-10-01

    A hydrothermal cell with 320 ml internal volume has been designed and constructed for in situ neutron diffraction studies of hydrothermal crystallizations. The cell design adopts a dumbbell configuration assembled with standard commercial stainless steel components and a zero-scattering Ti-Zr alloy sample compartment. The fluid movement and heat transfer are simply driven by natural convection due to the natural temperature gradient along the fluid path, so that the temperature at the sample compartment can be stably sustained by heating the fluid in the bottom fluid reservoir. The cell can operate at temperatures up to 300 °C and pressures up to 90 bars and is suitable for studying reactions requiring a large volume of hydrothermal fluid to damp out the negative effect from the change of fluid composition during the course of the reactions. The capability of the cell was demonstrated by a hydrothermal phase transformation investigation from leucite (KAlSi₂O₆) to analcime (NaAlSi₂O₆·H₂O) at 210 °C on the high intensity powder diffractometer Wombat in ANSTO. The kinetics of the transformation has been resolved by collecting diffraction patterns every 10 min followed by Rietveld quantitative phase analysis. The classical Avrami/Arrhenius analysis gives an activation energy of 82.3±1.1 kJ mol⁻¹. Estimations of the reaction rate under natural environments by extrapolations agree well with petrological observations.

  14. A large volume cell for in situ neutron diffraction studies of hydrothermal crystallizations

    NASA Astrophysics Data System (ADS)

    Xia, Fang; Qian, Gujie; Brugger, Joël; Studer, Andrew; Olsen, Scott; Pring, Allan

    2010-10-01

    A hydrothermal cell with 320 ml internal volume has been designed and constructed for in situ neutron diffraction studies of hydrothermal crystallizations. The cell design adopts a dumbbell configuration assembled with standard commercial stainless steel components and a zero-scattering Ti-Zr alloy sample compartment. The fluid movement and heat transfer are simply driven by natural convection due to the natural temperature gradient along the fluid path, so that the temperature at the sample compartment can be stably sustained by heating the fluid in the bottom fluid reservoir. The cell can operate at temperatures up to 300 °C and pressures up to 90 bars and is suitable for studying reactions requiring a large volume of hydrothermal fluid to damp out the negative effect from the change of fluid composition during the course of the reactions. The capability of the cell was demonstrated by a hydrothermal phase transformation investigation from leucite (KAlSi₂O₆) to analcime (NaAlSi₂O₆·H₂O) at 210 °C on the high intensity powder diffractometer Wombat in ANSTO. The kinetics of the transformation has been resolved by collecting diffraction patterns every 10 min followed by Rietveld quantitative phase analysis. The classical Avrami/Arrhenius analysis gives an activation energy of 82.3±1.1 kJ mol⁻¹. Estimations of the reaction rate under natural environments by extrapolations agree well with petrological observations.
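
    The classical Avrami/Arrhenius analysis mentioned above reduces to two linear fits, sketched below with invented numbers (the paper's measured activation energy is 82.3 ± 1.1 kJ mol⁻¹): the Avrami exponent and rate constant come from a log-log fit of the transformed fraction, and the activation energy from the slope of ln k against 1/T.

```python
# Sketch of the Avrami/Arrhenius workflow with synthetic data (illustration
# only; not the measured leucite -> analcime values).
import numpy as np

R = 8.314  # J/(mol K)

def avrami_fit(t, alpha):
    """Fit alpha(t) = 1 - exp(-(k t)^n) via the linearized Avrami plot."""
    y = np.log(-np.log(1.0 - alpha))
    n, intercept = np.polyfit(np.log(t), y, 1)
    return n, np.exp(intercept / n)

def activation_energy(T, k):
    """Arrhenius: ln k = ln A - Ea/(R T); Ea from the slope versus 1/T."""
    slope, _ = np.polyfit(1.0 / np.asarray(T), np.log(k), 1)
    return -slope * R

t = np.array([10.0, 20.0, 40.0, 80.0, 160.0])        # minutes
alpha = 1.0 - np.exp(-(0.01 * t)**1.5)               # synthetic, n = 1.5
n, k = avrami_fit(t, alpha)
print(f"n = {n:.2f}, k = {k:.4f} 1/min")             # recovers 1.5 and 0.01

T_K = [458.0, 473.0, 488.0]                          # hypothetical runs, K
k_T = [0.008, 0.018, 0.038]                          # hypothetical 1/min
print(f"Ea = {activation_energy(T_K, k_T)/1e3:.1f} kJ/mol")
```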

  15. A Scanning Transmission Electron Microscopy (STEM) Approach to Analyzing Large Volumes of Tissue to Detect Nanoparticles

    PubMed Central

    Kempen, Paul J.; Thakor, Avnesh S.; Zavaleta, Cristina; Gambhir, Sanjiv S.; Sinclair, Robert

    2013-01-01

    The use of nanoparticles for the diagnosis and treatment of cancer requires the complete characterization of their toxicity, including accurately locating them within biological tissues. Owing to their size, traditional light microscopy techniques are unable to resolve them. Transmission electron microscopy provides the necessary spatial resolution to image individual nanoparticles in tissue but is severely limited by the very small analysis volume, usually on the order of tens of cubic microns. In this work we developed a scanning transmission electron microscopy (STEM) approach to analyze large volumes of tissue for the presence of polyethylene glycol coated Raman-active-silica-gold-nanoparticles (PEG-R-Si-Au-NPs). This approach utilizes the simultaneous bright and dark field imaging capabilities of STEM along with careful control of the image contrast settings to readily identify PEG-R-Si-Au-NPs in mouse liver tissue without the need for additional time-consuming analytical characterization. We utilized this technique to analyze 243,000 µm³ of mouse liver tissue for the presence of PEG-R-Si-Au-NPs. Nanoparticles injected into the mice intravenously via the tail-vein accumulated in the liver while those injected intrarectally did not, indicating that they remain in the colon and do not pass through the colon wall into the systemic circulation. PMID:23803218

  16. A large volume uniform plasma generator for the experiments of electromagnetic wave propagation in plasma

    SciTech Connect

    Yang Min; Li Xiaoping; Xie Kai; Liu Donglin; Liu Yanming

    2013-01-15

    A large volume uniform plasma generator is proposed for experiments on electromagnetic (EM) wave propagation in plasma, to reproduce a long-duration 'blackout' phenomenon in an ordinary laboratory environment. The plasma generator achieves a controllable, approximately uniform plasma in a volume of 260 mm × 260 mm × 180 mm without magnetic confinement. The plasma is produced by a glow discharge, and a special discharge structure is built to provide a steady, approximately uniform plasma environment in the electromagnetic wave propagation path without any other barriers. In addition, the electron density and luminosity distributions of the plasma under different discharge conditions were diagnosed and experimentally investigated. Both the electron density and the plasma uniformity are directly proportional to the input power and roughly inversely proportional to the gas pressure in the chamber. Furthermore, experiments on electromagnetic wave propagation in plasma were conducted in this plasma generator. Blackout phenomena at the GPS signal frequency were observed under this system, and the measured attenuation curve is in reasonable agreement with the theoretical one, which suggests the effectiveness of the proposed method.
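
    The blackout observation is consistent with simple cutoff physics: an EM wave cannot propagate where its frequency falls below the plasma frequency. A minimal check, assuming an electron density of ~10¹¹ cm⁻³ of the order reported for such glow-discharge generators:

```python
# Cutoff check: the wave is reflected/absorbed where f < f_p. The density
# is an assumed order of magnitude, not a measurement from this paper.
import numpy as np

E0, ME, EPS0 = 1.602176634e-19, 9.1093837015e-31, 8.8541878128e-12

def plasma_frequency(ne_m3):
    """f_p = (1/2pi) sqrt(n e^2 / (eps0 m_e)), in Hz."""
    return np.sqrt(ne_m3 * E0**2 / (EPS0 * ME)) / (2.0 * np.pi)

ne = 1e11 * 1e6                       # assume 1e11 cm^-3, converted to m^-3
f_gps = 1.57542e9                     # GPS L1 carrier, Hz
print(f"f_p = {plasma_frequency(ne)/1e9:.2f} GHz vs GPS L1 = {f_gps/1e9:.2f} GHz")
# f_p ~ 2.8 GHz > 1.58 GHz, so the L1 signal is cut off: a "blackout"
```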

  17. A scanning transmission electron microscopy approach to analyzing large volumes of tissue to detect nanoparticles.

    PubMed

    Kempen, Paul J; Thakor, Avnesh S; Zavaleta, Cristina; Gambhir, Sanjiv S; Sinclair, Robert

    2013-10-01

    The use of nanoparticles for the diagnosis and treatment of cancer requires the complete characterization of their toxicity, including accurately locating them within biological tissues. Owing to their size, traditional light microscopy techniques are unable to resolve them. Transmission electron microscopy provides the necessary spatial resolution to image individual nanoparticles in tissue, but is severely limited by the very small analysis volume, usually on the order of tens of cubic microns. In this work, we developed a scanning transmission electron microscopy (STEM) approach to analyze large volumes of tissue for the presence of polyethylene glycol-coated Raman-active-silica-gold-nanoparticles (PEG-R-Si-Au-NPs). This approach utilizes the simultaneous bright and dark field imaging capabilities of STEM along with careful control of the image contrast settings to readily identify PEG-R-Si-Au-NPs in mouse liver tissue without the need for additional time-consuming analytical characterization. We utilized this technique to analyze 243,000 µm³ of mouse liver tissue for the presence of PEG-R-Si-Au-NPs. Nanoparticles injected into the mice intravenously via the tail vein accumulated in the liver, whereas those injected intrarectally did not, indicating that they remain in the colon and do not pass through the colon wall into the systemic circulation.

  18. Detection of fast flying nanoparticles by light scattering over a large volume

    NASA Astrophysics Data System (ADS)

    Pettazzi, F.; Bäumer, S.; van der Donck, J.; Deutz, A.

    2015-06-01

    Light scattering is a well-known detection method applied in many scientific and technological domains, including atmospheric physics, environmental control, and biology. It allows contactless and remote detection of sub-micron particles. However, methods for detecting a single fast-moving particle smaller than 100 nm are lacking. In the present work we report a preliminary design study of an inline large-area detector for nanoparticles larger than 50 nm which move with velocities up to 100 m/s. The detector design is based on light scattering using commercially available components. The presented design takes into account all challenges connected to the inline implementation of the scattering technique in the system: the need for the detector to have a large field of view covering a volume with a footprint of 100 mm × 100 mm, the necessity of sensing nanoparticles transported at high velocity, and the requirement of a large capture rate with a false-detection rate as low as one false positive per week. The impact of these stringent requirements on the expected sensitivity and performance of the device is analyzed by means of a dedicated performance model.

  19. Comparing selected morphological models of hydrated Nafion using large scale molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Knox, Craig K.

    Experimental elucidation of the nanoscale structure of hydrated Nafion, the most popular polymer electrolyte or proton exchange membrane (PEM) to date, and its influence on macroscopic proton conductance is particularly challenging. While it is generally agreed that hydrated Nafion is organized into distinct hydrophilic domains or clusters within a hydrophobic matrix, the geometry and length scale of these domains continue to be debated. For example, at least half a dozen different domain shapes, ranging from spheres to cylinders, have been proposed based on experimental SAXS and SANS studies. Since the characteristic length scale of these domains is believed to be ~2 to 5 nm, very large molecular dynamics (MD) simulations are needed to accurately probe the structure and morphology of these domains, especially their connectivity and percolation phenomena at varying water content. Using classical, all-atom MD with explicit hydronium ions, simulations have been performed to study the first hydrated Nafion systems large enough (~2 million atoms in a ~30 nm cell) to directly observe several hydrophilic domains at the molecular level. These systems comprised six of the most significant and relevant morphological models of Nafion to date: (1) the cluster-channel model of Gierke, (2) the parallel cylinder model of Schmidt-Rohr, (3) the local-order model of Dreyfus, (4) the lamellar model of Litt, (5) the rod network model of Kreuer, and (6) a 'random' model, commonly used in previous simulations, that does not directly assume any particular geometry, distribution, or morphology. These simulations revealed fast intercluster bridge formation and network percolation in all of the models. Sulfonates were found inside these bridges and played a significant role in percolation. Sulfonates also strongly aggregated around and inside clusters. Cluster surfaces were analyzed to study the hydrophilic-hydrophobic interface. Interfacial area and cluster volume

  20. Isolation of organic acids from large volumes of water by adsorption chromatography

    USGS Publications Warehouse

    Aiken, George R.

    1984-01-01

    The concentration of dissolved organic carbon in most natural waters ranges from 1 to 20 milligrams of carbon per liter, of which approximately 75 percent is organic acids. These acids can be chromatographically fractionated into hydrophobic organic acids, such as humic substances, and hydrophilic organic acids. To effectively study any of these organic acids, they must be isolated from other organic and inorganic species and concentrated. Usually, large volumes of water must be processed to obtain sufficient quantities of material, and adsorption chromatography on synthetic, macroporous resins has proven to be a particularly effective method for this purpose. The use of the nonionic Amberlite XAD-8 and Amberlite XAD-4 resins and the anion exchange resin Duolite A-7 for isolating and concentrating organic acids from water is presented.

  1. A large volume 2000 MPa air source for the radiatively driven hypersonic wind tunnel

    SciTech Connect

    Constantino, M

    1999-07-14

    An ultra-high-pressure air source for a hypersonic wind tunnel for fluid dynamics and combustion physics and chemistry research and development must provide a 10 kg/s pure-air flow for more than 1 s at a specific enthalpy of more than 3000 kJ/kg. The nominal operating pressure and temperature conditions for the air source are 2000 MPa and 900 K. A radial array of variable radial-support intensifiers connected to an axial manifold provides an arbitrarily large total high-pressure volume. This configuration also provides solutions to cross-bore stress concentrations and the decrease in material strength with temperature. Keywords: hypersonic, high pressure, air, wind tunnel, ground testing.

  2. Monte Carlo calculations of the HPGe detector efficiency for radioactivity measurement of large volume environmental samples.

    PubMed

    Azbouche, Ahmed; Belgaid, Mohamed; Mazrou, Hakim

    2015-08-01

    A fully detailed Monte Carlo geometrical model of a High Purity Germanium detector with a ¹⁵²Eu source, packed in a Marinelli beaker, was developed for routine analysis of large volume environmental samples. The model parameters, in particular the dead-layer thickness, were then adjusted by means of a specific irradiation configuration together with a fine-tuning procedure. Thereafter, the calculated efficiencies were compared to the measured ones for standard samples containing a ¹⁵²Eu source filled in both grass and resin matrices packed in Marinelli beakers. This comparison showed good agreement between experiment and Monte Carlo calculation results, thereby confirming the consistency of the geometrical computational model proposed in this work. Finally, the computational model was applied successfully to determine the ¹³⁷Cs distribution in a soil matrix. This application yielded instructive results, highlighting in particular the erosion and accumulation zones of the studied site.
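
    The dead-layer fine-tuning step can be sketched as a one-parameter search that minimizes the mismatch between simulated and measured full-energy-peak efficiencies. Everything below is a stand-in: `mc_efficiency` replaces a real Monte Carlo run, and while the line energies are genuine ¹⁵²Eu lines, all coefficients are invented for illustration.

```python
# One-parameter fine-tuning sketch: scan candidate dead-layer thicknesses
# and keep the one minimizing the simulated/measured efficiency mismatch.
import numpy as np

energies = np.array([121.8, 344.3, 778.9, 1408.0])   # keV, 152Eu lines

def mc_efficiency(dead_layer_mm, E_keV):
    # toy stand-in for the Monte Carlo: a nominal efficiency curve
    # attenuated by the inactive Ge layer (fake coefficients)
    mu = 5.0 / np.sqrt(E_keV)                # fake attenuation coeff., 1/mm
    return 0.8 * E_keV**-0.5 * np.exp(-mu * dead_layer_mm)

eff_measured = mc_efficiency(0.7, energies)  # pretend 0.7 mm is the truth

grid = np.linspace(0.1, 2.0, 96)             # candidate thicknesses, mm
cost = [np.sum((mc_efficiency(d, energies) / eff_measured - 1.0)**2)
        for d in grid]
print(f"best-fit dead layer: {grid[int(np.argmin(cost))]:.2f} mm")   # ~0.70
```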

  3. Improved large-volume sampler for the collection of bacterial cells from aerosol.

    PubMed

    White, L A; Hadley, D J; Davids, D E; Naylor, R

    1975-03-01

    A modified large-volume sampler was demonstrated to be an efficient device for the collection of mono-disperse aerosols of rhodamine B and poly-disperse aerosols of bacterial cells. Absolute efficiency for collection of rhodamine B varied from 100% with 5-µm particles to about 70% with 0.5-µm particles. The sampler concentrated the particles from 950 liters of air into a flow of between 1 and 2 ml of collecting fluid per min. Spores of Bacillus subtilis var. niger were collected at an efficiency of about 82% compared to the collection in the standard AGI-30 sampler. In the most desirable collecting fluids tested, aerosolized cells of Serratia marcescens, Escherichia coli, and Aerobacter aerogenes were collected at comparative efficiencies of approximately 90, 80, and 90%, respectively. The modified sampler has practical application in the study of aerosol transmission of respiratory pathogens.

  4. AC Magnetic Properties of Large Volume of Water — Susceptibility Measurement in Unshielded Environment

    NASA Astrophysics Data System (ADS)

    Tsukada, Keiji; Kiwa, Toshihiko; Masuda, Yuuki

    2006-10-01

    To investigate the effect of low-frequency magnetic-field exposure on the human body, the low-frequency AC magnetic properties of a large volume of water were measured under magnetic-field exposure from 50 Hz to 1.2 kHz. The results indicate that the AC magnetic response of water in the low-frequency range is diamagnetic. The phase between the main magnetic field and the generated magnetic field remained constant at about 180°. Results were not affected by conductivity or pH. Moreover, the magnetic-field strength from water showed a susceptibility frequency dependence proportional to the frequency above approximately 400 Hz. Because of this increase in susceptibility, the magnetic field from water could be measured using a conventional magnetoresistive (MR) sensor in an unshielded environment.

  5. Isolation of organic acids from large volumes of water by adsorption on macroporous resins

    USGS Publications Warehouse

    Aiken, George R.; Suffet, I.H.; Malaiyandi, Murugan

    1987-01-01

    Adsorption on synthetic macroporous resins, such as the Amberlite XAD series and Duolite A-7, is routinely used to isolate and concentrate organic acids from large volumes of water. Samples as large as 24,500 L have been processed on site by using these resins. Two established extraction schemes using XAD-8 and Duolite A-7 resins are described. The choice of the appropriate resin and extraction scheme depends on the organic solutes of interest. The factors that affect resin performance, selectivity, and capacity for a particular solute are solution pH, resin surface area and pore size, and resin composition. The logistical problems of sample handling, filtration, and preservation are also discussed.

  6. Measurement of the velocity of neutrinos from the CNGS beam with the large volume detector.

    PubMed

    Agafonova, N Yu; Aglietta, M; Antonioli, P; Ashikhmin, V V; Bari, G; Bertoni, R; Bressan, E; Bruno, G; Dadykin, V L; Fulgione, W; Galeotti, P; Garbini, M; Ghia, P L; Giusti, P; Kemp, E; Mal'gin, A S; Miguez, B; Molinario, A; Persiani, R; Pless, I A; Ryasny, V G; Ryazhskaya, O G; Saavedra, O; Sartorelli, G; Shakyrianova, I R; Selvi, M; Trinchero, G C; Vigorito, C; Yakushev, V F; Zichichi, A; Razeto, A

    2012-08-17

    We report the measurement of the time of flight of ~17 GeV νμ on the CNGS baseline (732 km) with the Large Volume Detector (LVD) at the Gran Sasso Laboratory. The CERN-SPS accelerator was operated from May 10th to May 24th, 2012, with a tightly bunched beam structure to allow the velocity of neutrinos to be accurately measured on an event-by-event basis. LVD detected 48 neutrino events, associated with the beam, with a high absolute time accuracy. These events allow us to establish the following limit on the difference between the neutrino speed and the light velocity: -3.8 × 10⁻⁶ < (vν - c)/c < 3.1 × 10⁻⁶ (at 99% C.L.). This value is an order of magnitude lower than previous direct measurements.
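
    The quoted bounds translate directly into an arrival-time window on the 732 km baseline; a short kinematic check using only the published numbers:

```python
# Translate the fractional-speed limits into arrival-time offsets on the
# CERN -> Gran Sasso baseline (simple kinematics, published numbers only).
C = 299_792_458.0            # m/s
d = 732.0e3                  # baseline, m

t_light = d / C                                   # ~2.442 ms
for frac in (-3.8e-6, 3.1e-6):
    # (v - c)/c = t_light/t_nu - 1  =>  t_nu = t_light / (1 + frac)
    dt_ns = (t_light / (1.0 + frac) - t_light) * 1e9
    print(f"(v-c)/c = {frac:+.1e}  ->  arrival offset {dt_ns:+.1f} ns")
```

    Both bounds correspond to offsets of roughly 8-9 ns around the ~2.44 ms light travel time, which indicates the absolute timing accuracy such a measurement requires.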

  7. Aerodynamics of the Large-Volume, Flow-Through Detector System. Final report

    SciTech Connect

    Reed, H.; Saric, W.; Laananen, D.; Martinez, C.; Carrillo, R.; Myers, J.; Clevenger, D.

    1996-03-01

    The Large-Volume Flow-Through Detector System (LVFTDS) was designed to monitor alpha radiation from Pu, U, and Am in mixed-waste incinerator offgases; however, it can be adapted to other important monitoring uses that span a number of potential markets, including site remediation, indoor air quality, radon testing, and mine shaft monitoring. The goal of this effort was to provide mechanical design information for installation of the LVFTDS in an incinerator, with emphasis on the ability to withstand the high temperatures and high flow rates expected. The work was successfully carried out in three stages: calculation of the pressure drop through the system, materials testing to determine surrogate materials for wind-tunnel testing, and wind-tunnel testing of an actual configuration.

  8. Studies on plasma production in a large volume system using multiple compact ECR plasma sources

    NASA Astrophysics Data System (ADS)

    Tarey, R. D.; Ganguli, A.; Sahu, D.; Narayanan, R.; Arora, N.

    2017-01-01

    This paper presents a scheme for large volume plasma production using multiple highly portable compact ECR plasma sources (CEPS) (Ganguli et al 2016 Plasma Sources Sci. Technol. 25 025026). The large volume plasma system (LVPS) described in the paper is a scalable cylindrical vessel of diameter ≈1 m, consisting of source and spacer sections with multiple CEPS mounted symmetrically on the periphery of the source sections. Scaling is achieved by altering the number of source sections, the number of sources in a source section, or the number of spacer sections that set the spacing between the source sections. A series of plasma characterization experiments using argon gas was conducted on the LVPS under different configurations of CEPS, source, and spacer sections, for operating pressures in the range 0.5-20 mTorr and microwave power levels in the range 400-500 W per source. Using Langmuir probes (LP), it was possible to show that the plasma density (~1-2 × 10¹¹ cm⁻³) remains fairly uniform inside the system and decreases marginally close to the chamber wall, and that this uniformity increases with an increase in the number of sources. It was seen that a warm electron population (60-80 eV) is always present at about 0.1% of the bulk plasma density. The mechanism of plasma production is discussed in light of the results obtained for a single CEPS (Ganguli et al 2016 Plasma Sources Sci. Technol. 25 025026).

  9. Rapid concentration of Bacillus and Clostridium spores from large volumes of milk, using continuous flow centrifugation.

    PubMed

    Agoston, Réka; Soni, Kamlesh A; McElhany, Katherine; Cepeda, Martha L; Zuckerman, Udi; Tzipori, Saul; Mohácsi-Farkas, Csilla; Pillai, Suresh D

    2009-03-01

    Deliberate or accidental contamination of foods such as milk, soft drinks, and drinking water with infectious agents or toxins is a major concern to health authorities. There is a critical need to develop technologies that can rapidly and efficiently separate and concentrate biothreat agents from food matrices. A key limitation of current centrifugation and filtration technologies is that they are batch processes with extensive hands-on involvement and processing times. The objective of our studies was to evaluate the continuous flow centrifugation (CFC) technique for the rapid separation and concentration of bacterial spores from large volumes of milk. We determined the effectiveness of the CFC technology for concentrating approximately 10³ bacterial spores in 3.7 liters (1 gal) of whole milk and skim milk, using Bacillus subtilis, Bacillus atrophaeus, and Clostridium sporogenes spores as surrogates for biothreat agents. The spores in the concentrated samples were enumerated by using standard plating techniques. Three independent experiments were performed at 10,000 rpm and 0.7 liters/min flow rate. The mean B. subtilis spore recoveries were 71.3 and 56.5% in skim and whole milk, respectively, and those for B. atrophaeus were 55 and 59.3% in skim and whole milk, respectively. In contrast, mean C. sporogenes spore recoveries were 88.2 and 78.6% in skim and whole milk, respectively. The successful use of CFC to concentrate these bacterial spores from 3.7 liters of milk in 10 min shows promise for rapidly concentrating other spores from large volumes of milk.

  10. Anatomic Landmarks Versus Fiducials for Volume-Staged Gamma Knife Radiosurgery for Large Arteriovenous Malformations

    SciTech Connect

    Petti, Paula L. . E-mail: ppetti@radonc.ucsf.edu; Coleman, Joy; McDermott, Michael; Smith, Vernon; Larson, David A.

    2007-04-01

    Purpose: The purpose of this investigation was to compare the accuracy of using internal anatomic landmarks instead of surgically implanted fiducials in the image registration process for volume-staged gamma knife (GK) radiosurgery for large arteriovenous malformations. Methods and Materials: We studied 9 patients who had undergone 10 staged GK sessions for large arteriovenous malformations. Each patient had fiducials surgically implanted in the outer table of the skull at the first GK treatment. These markers were imaged on orthogonal radiographs, which were scanned into the GK planning system. For the same patients, 8-10 pairs of internal landmarks were retrospectively identified on the three-dimensional time-of-flight magnetic resonance imaging studies that had been obtained for treatment. The coordinate transformation between the stereotactic frame space for subsequent treatment sessions was then determined by point matching, using four surgically embedded fiducials and then using four pairs of internal anatomic landmarks. In both cases, the transformation was ascertained by minimizing the chi-square difference between the actual and the transformed coordinates. Both transformations were then evaluated using the remaining four to six pairs of internal landmarks as the test points. Results: Averaged over all treatment sessions, the root mean square discrepancy between the coordinates of the transformed and actual test points was 1.2 ± 0.2 mm using internal landmarks and 1.7 ± 0.4 mm using the surgically implanted fiducials. Conclusion: The results of this study have shown that using internal landmarks to determine the coordinate transformation between subsequent magnetic resonance imaging scans for volume-staged GK arteriovenous malformation treatment sessions is as accurate as using surgically implanted fiducials and avoids an invasive procedure.
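
    The registration step, fitting a rigid transformation to four landmark pairs by minimizing the squared coordinate differences and scoring it on the held-out pairs, can be sketched with a standard least-squares (Kabsch/Procrustes) solution; all coordinates below are synthetic stand-ins.

```python
# Least-squares rigid registration fitted on four landmark pairs and
# evaluated on held-out pairs, mirroring the fit-on-4 / test-on-the-rest
# design of the study. Synthetic data, illustration only.
import numpy as np

def rigid_fit(P, Q):
    """Rotation R and translation t minimizing sum ||R p + t - q||^2."""
    Pc, Qc = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - Pc).T @ (Q - Qc))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    Rm = Vt.T @ D @ U.T                   # proper rotation (no reflection)
    return Rm, Qc - Rm @ Pc

def rms_error(Rm, t, P_test, Q_test):
    resid = (P_test @ Rm.T + t) - Q_test
    return np.sqrt((resid**2).sum(axis=1).mean())

rng = np.random.default_rng(0)
P = rng.uniform(-80.0, 80.0, (8, 3))      # session-1 landmarks, mm
true_R, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(true_R) < 0.0:
    true_R[:, 0] *= -1.0                  # keep it a proper rotation
Q = P @ true_R.T + np.array([3.0, -2.0, 1.5]) + rng.normal(0.0, 0.5, P.shape)

Rm, t = rigid_fit(P[:4], Q[:4])           # fit on four pairs
print(f"held-out RMS: {rms_error(Rm, t, P[4:], Q[4:]):.2f} mm")
```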

  11. Large Volume Coagulation Utilizing Multiple Cavitation Clouds Generated by Array Transducer Driven by 32 Channel Drive Circuits

    NASA Astrophysics Data System (ADS)

    Nakamura, Kotaro; Asai, Ayumu; Sasaki, Hiroshi; Yoshizawa, Shin; Umemura, Shin-ichiro

    2013-07-01

    High-intensity focused ultrasound (HIFU) treatment is a noninvasive treatment in which focused ultrasound is generated outside the body and coagulates diseased tissue. The advantage of this method is minimal physical and mental stress to the patient; the disadvantage is the long treatment time caused by the small volume coagulated by a single exposure. To improve the efficiency and shorten the treatment time, we focus on utilizing cavitation bubbles: the generated microbubbles can convert acoustic energy into heat with high efficiency. In this study, using the class-D amplifiers we have developed to drive the array transducer, we demonstrate a new method to coagulate a large volume in a single HIFU exposure by generating cavitation bubbles distributed over a large volume and vibrating all of them. The volume coagulated by the proposed method was 1.71 times as large as that of the conventional method.

  12. Large-volume hot spots in gold spiky nanoparticle dimers for high-performance surface-enhanced spectroscopy.

    PubMed

    Li, Anran; Li, Shuzhou

    2014-11-07

    Hot spots with a large electric field enhancement usually come in small volumes, limiting their applications in surface-enhanced spectroscopy. Using a finite-difference time-domain method, we demonstrate that spiky nanoparticle dimers (SNPD) can provide hot spots with both large electric field enhancement and large volumes because of the pronounced lightning-rod effect of spiky nanoparticles. We find that the strongest electric fields lie in the gap region when the SNPD is in a tip-to-tip (T-T) configuration. The electric field enhancement (|E|²/|E₀|²) in a T-T SNPD with a 2 nm gap can be as large as 1.21 × 10⁶. The hot-spot volume in the T-T SNPD is almost 7 times and 5 times larger than those in the spike dimer and sphere dimer with the same gap size of 2 nm, respectively. The hot-spot volume in SNPD can be further improved by manipulating the arrangement of the spiky nanoparticles, where a crossed T-T SNPD provides the largest hot-spot volume, 1.5 times that of the T-T SNPD. Our results provide a strategy to obtain hot spots with both intense electric fields and large volume by adding a bulky core at one end of the spindly building block in dimers.

  13. A finite volume solver for three dimensional debris flow simulations based on a single calibration parameter

    NASA Astrophysics Data System (ADS)

    von Boetticher, Albrecht; Turowski, Jens M.; McArdell, Brian; Rickenmann, Dieter

    2016-04-01

    Debris flows are frequent natural hazards that cause massive damage. A wide range of debris flow models try to cover the complex flow behavior that arises from the inhomogeneous material mixture of water with clay, silt, sand, and gravel. The energy dissipation between moving grains depends on grain collisions and tangential friction, and the viscosity of the interstitial fine material suspension depends on the shear gradient. Thus a rheology description needs to be sensitive to the local pressure and shear rate, making the three-dimensional flow structure a key issue for flows in complex terrain. Furthermore, the momentum exchange between the granular and fluid phases should account for the presence of larger particles. We model the fine material suspension with a Herschel-Bulkley rheology law, and represent the gravel with the Coulomb-viscoplastic rheology of Domnik & Pudasaini (Domnik et al. 2013). Both composites are described by two phases that can mix; a third phase accounting for the air is kept separate to account for the free surface. The fluid dynamics are solved in three dimensions using the finite volume open-source code OpenFOAM. Computational costs are kept reasonable by using the Volume of Fluid method to solve only one phase-averaged system of Navier-Stokes equations. The Herschel-Bulkley parameters are modeled as a function of water content, volumetric solid concentration of the mixture, clay content and its mineral composition (Coussot et al. 1989, Yu et al. 2013). The gravel phase properties needed for the Coulomb-viscoplastic rheology are defined by the angle of repose of the gravel. In addition to this basic setup, larger grains and the corresponding grain collisions can be introduced by a coupled Lagrangian particle simulation. Based on the local Savage number a diffusive term in the gravel phase can activate phase separation. The resulting model can reproduce the sensitivity of the debris flow to water content and channel bed roughness, as
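
    The shear-rate sensitivity that motivates the three-dimensional treatment enters through the Herschel-Bulkley law, tau = tau_y + K*gdot^n. Below is a minimal sketch of the resulting apparent viscosity with a Papanastasiou-style regularization near zero shear; the parameter values are placeholders, not the paper's calibrated functions of water content, solid concentration, and clay mineralogy.

```python
# Regularized Herschel-Bulkley apparent viscosity (placeholder parameters).
import numpy as np

def hb_apparent_viscosity(gdot, tau_y=50.0, K=8.0, n=0.4, m=1000.0):
    """mu_app(gdot) in Pa s for tau = tau_y + K gdot^n;
    m controls the yield-stress regularization near gdot = 0."""
    gdot = np.maximum(gdot, 1e-12)
    return tau_y * (1.0 - np.exp(-m * gdot)) / gdot + K * gdot**(n - 1.0)

for gdot in (0.01, 0.1, 1.0, 10.0, 100.0):
    print(f"gdot = {gdot:7.2f} 1/s -> mu_app = "
          f"{hb_apparent_viscosity(gdot):10.2f} Pa s")
```

    With n < 1 the mixture is shear thinning, so the local shear rate (and hence the 3D flow structure) directly sets the effective viscosity, which is the coupling the solver must resolve.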

  14. Computer simulation of the precordial QRS complex: effects of simulated changes in ventricular wall thickness and volume.

    PubMed

    Salu, Y; Marcus, M L

    1976-12-01

    The cardiac electric field generated by depolarization of the human ventricle is simulated with a computer model which utilizes 1,500 dipoles. The configuration of the ventricles utilized in the model assumed that the cross-sectional shape of the left ventricle was circular and the right ventricular free wall was a portion of an ellipse. The torso was assumed to be homogeneous and infinite. The activation sequence was based on the measurements of Durrer. The depolarization wave was simulated by dipole layers. The output of the model is presented as a standard multilead precordial ECG. The ECG complexes generated by the model closely resemble the precordial QRS complexes of normal man. Simulated increases in wall thickness (1 to 2.2 × control) were associated with changes in the calculated precordial QRS complexes which were characteristic of left ventricular hypertrophy. Voltage (R in V5 or V6 and S in V1) and QRS duration increased linearly as a function of calculated left ventricular mass. Increases in ventricular activation time were related nonlinearly to changes in left ventricular mass and did not occur in the absence of a simulated increase in wall thickness. The effects of simulated changes in left ventricular volume (0.6 to 3.0 × control) on the QRS complex were mainly dependent on the resultant increase in left ventricular mass. This model may be useful in simulating the precordial QRS complexes that result from isolated or combined changes in ventricular volume or wall thickness or other disorders of the heart. Furthermore, it may be useful whenever a simulation of a QRS generator is needed.
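
    The core computation of such a model, the potential at a torso electrode produced by many current dipoles in an infinite homogeneous conductor, can be sketched as a direct sum; the dipole positions, moments, conductivity, and electrode location below are arbitrary stand-ins, not the Durrer-based activation sequence.

```python
# phi(r) = sum_i p_i . (r - r_i) / (4 pi sigma |r - r_i|^3) for current
# dipoles in an infinite homogeneous conductor. Synthetic numbers only.
import numpy as np

def dipole_potential(r_obs, r_dip, p_dip, sigma=0.2):
    """Potential (V) at r_obs; sigma is the torso conductivity in S/m."""
    d = r_obs - r_dip                            # (N, 3) separations, m
    dist3 = np.linalg.norm(d, axis=1)**3
    contrib = np.einsum("ij,ij->i", p_dip, d) / dist3
    return contrib.sum() / (4.0 * np.pi * sigma)

rng = np.random.default_rng(1)
r_dip = rng.normal(0.0, 0.03, (1500, 3))         # toy ventricular sites, m
p_dip = rng.normal(0.0, 1e-6, (1500, 3))         # dipole moments, A m
v5 = np.array([0.10, 0.05, 0.0])                 # a precordial electrode, m
print(f"phi = {dipole_potential(v5, r_dip, p_dip) * 1e3:.3f} mV")
```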

  15. Large eddy simulations of turbulent flows on graphics processing units: Application to film-cooling flows

    NASA Astrophysics Data System (ADS)

    Shinn, Aaron F.

    Computational Fluid Dynamics (CFD) simulations can be very computationally expensive, especially for Large Eddy Simulations (LES) and Direct Numerical Simulations (DNS) of turbulent flows. In LES the large, energy-containing eddies are resolved by the computational mesh, but the smaller (sub-grid) scales are modeled. In DNS, all scales of turbulence are resolved, including the smallest dissipative (Kolmogorov) scales. Clusters of CPUs have been the standard approach for such simulations, but an emerging approach is the use of Graphics Processing Units (GPUs), which deliver impressive computing performance compared to CPUs. Recently there has been great interest in the scientific computing community in using GPUs for general-purpose computation (such as the numerical solution of PDEs) rather than graphics rendering. To explore the use of GPUs for CFD simulations, an incompressible Navier-Stokes solver was developed for a GPU. This solver is capable of simulating unsteady laminar flows or performing an LES or DNS of turbulent flows. The Navier-Stokes equations are solved via a fractional-step method and are spatially discretized using the finite volume method on a Cartesian mesh. An immersed boundary method based on a ghost cell treatment was developed to handle flow past complex geometries. The implementation of these numerical methods had to suit the architecture of the GPU, which is designed for massive multithreading. The details of this implementation are described, along with strategies for performance optimization. Validation of the GPU-based solver was performed for fundamental benchmark problems, and a performance assessment indicated that the solver was over an order of magnitude faster than a CPU. The GPU-based Navier-Stokes solver was used to study film-cooling flows via Large Eddy Simulation. In modern gas turbine engines, the film-cooling method is used to protect turbine blades from hot combustion gases. Therefore, understanding the physics of
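
    The fractional-step method at the heart of such solvers can be sketched in schematic NumPy form for the projection stage: form the divergence of the predicted velocity, solve a pressure Poisson equation, and correct the velocity to be divergence-free. This is a collocated, periodic toy where a few hundred Jacobi sweeps stand in for the solvers used in practice, not the GPU implementation.

```python
# Schematic fractional-step (projection) stage on a collocated periodic grid.
import numpy as np

def project(u, v, dx, dt, rho=1.0, iters=300):
    # divergence of the intermediate (predicted) velocity field
    div = ((np.roll(u, -1, 1) - np.roll(u, 1, 1)) +
           (np.roll(v, -1, 0) - np.roll(v, 1, 0))) / (2.0 * dx)
    p = np.zeros_like(u)
    for _ in range(iters):                 # Jacobi on lap(p) = (rho/dt) div
        p = 0.25 * (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
                    np.roll(p, 1, 1) + np.roll(p, -1, 1)
                    - dx * dx * (rho / dt) * div)
    # correct the velocity with the pressure gradient
    u = u - dt/rho * (np.roll(p, -1, 1) - np.roll(p, 1, 1)) / (2.0 * dx)
    v = v - dt/rho * (np.roll(p, -1, 0) - np.roll(p, 1, 0)) / (2.0 * dx)
    return u, v, p

n, dx, dt = 64, 1.0/64, 1e-3
rng = np.random.default_rng(3)
u, v, p = project(rng.normal(size=(n, n)), rng.normal(size=(n, n)), dx, dt)
```

    Each stencil update maps naturally to one GPU thread per cell, which is what makes this class of solvers a good fit for the massive multithreading the abstract describes.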

  16. Boundary conditions for simulating large SAW devices using ANSYS.

    PubMed

    Peng, Dasong; Yu, Fengqi; Hu, Jian; Li, Peng

    2010-08-01

    In this report, we propose improved substrate left and right boundary conditions for simulating SAW devices using ANSYS. Compared with the previous methods, the proposed method can greatly reduce computation time. Furthermore, the longer the distance from the first reflector to the last one, the more computation time can be reduced. To verify the proposed method, a design example is presented with device center frequency 971.14 MHz.

  17. Towards accurate quantum simulations of large systems with small computers

    NASA Astrophysics Data System (ADS)

    Yang, Yonggang

    2017-01-01

    Numerical simulations are important for many systems. In particular, various standard computer programs have been developed for solving the quantum Schrödinger equations. However, the accuracy of these calculations is limited by computer capabilities. In this work, an iterative method is introduced to enhance the accuracy of these numerical calculations, which is otherwise prohibitive by conventional methods. The method is easily implementable and general for many systems.

  18. Towards accurate quantum simulations of large systems with small computers.

    PubMed

    Yang, Yonggang

    2017-01-24

    Numerical simulations are important for many systems. In particular, various standard computer programs have been developed for solving the quantum Schrödinger equations. However, the accuracy of these calculations is limited by computer capabilities. In this work, an iterative method is introduced to enhance the accuracy of these numerical calculations, which is otherwise prohibitive by conventional methods. The method is easily implementable and general for many systems.

  19. Manufacturing Process Simulation of Large-Scale Cryotanks

    NASA Technical Reports Server (NTRS)

    Babai, Majid; Phillips, Steven; Griffin, Brian; Munafo, Paul M. (Technical Monitor)

    2002-01-01

    NASA's Space Launch Initiative (SLI) is an effort to research and develop the technologies needed to build a second-generation reusable launch vehicle. It is required that this new launch vehicle be 100 times safer and 10 times cheaper to operate than current launch vehicles. Part of the SLI includes the development of reusable composite and metallic cryotanks. The size of these reusable tanks is far greater than anything ever developed and exceeds the design limits of current manufacturing tools. Several design and manufacturing approaches have been formulated, but many factors must be weighed during the selection process. Among these factors are tooling reachability, cycle times, feasibility, and facility impacts. The manufacturing process simulation capabilities available at NASA's Marshall Space Flight Center have played a key role in down-selecting among the various manufacturing approaches. By creating 3-D manufacturing process simulations, the varying approaches can be analyzed in a virtual world before any hardware or infrastructure is built. This analysis can detect and eliminate costly flaws in the various manufacturing approaches. The simulations check for collisions between devices, verify that design limits on joints are not exceeded, and provide cycle times which aid in the development of an optimized process flow. In addition, new ideas and concerns are often raised after seeing the visual representation of a manufacturing process flow. The output of the manufacturing process simulations allows for cost and safety comparisons to be performed between the various manufacturing approaches. This output helps determine which manufacturing process options reach the safety and cost goals of the SLI.

  20. Towards accurate quantum simulations of large systems with small computers

    PubMed Central

    Yang, Yonggang

    2017-01-01

    Numerical simulations are important for many systems. In particular, various standard computer programs have been developed for solving the quantum Schrödinger equations. However, the accuracy of these calculations is limited by computer capabilities. In this work, an iterative method is introduced to enhance the accuracy of these numerical calculations, which is otherwise prohibitive by conventional methods. The method is easily implementable and general for many systems. PMID:28117366

  1. Determination of 235U enrichment with a large volume CZT detector

    NASA Astrophysics Data System (ADS)

    Mortreau, Patricia; Berndt, Reinhard

    2006-01-01

    Room-temperature CdZnTe and CdTe detectors have been routinely used in the field of Nuclear Safeguards for many years [Ivanov et al., Development of large volume hemispheric CdZnTe detectors for use in safeguards applications, ESARDA European Safeguards Research and Development Association, Le Corum, Montpellier, France, 1997, p. 447; Czock and Arlt, Nucl. Instr. and Meth. A 458 (2001) 175; Arlt et al., Nucl. Instr. and Meth. A 428 (1999) 127; Lebrun et al., Nucl. Instr. and Meth. A 448 (2000) 598; Aparo et al., Development and implementation of compact gamma spectrometers for spent fuel measurements, in: Proceedings, 21st Annual ESARDA, 1999; Arlt and Rudsquist, Nucl. Instr. and Meth. A 380 (1996) 455; Khusainov et al., High resolution pin type CdTe detectors for the verification of nuclear material, in: Proceedings, 17th Annual ESARDA European Safeguards Research and Development Association, 1995; Mortreau and Berndt, Nucl. Instr. and Meth. A 458 (2001) 183; Ruhter et al., UCRL-JC-130548, 1998; Abbas et al., Nucl. Instr. and Meth. A 405 (1998) 153; Ruhter and Gunnink, Nucl. Instr. and Meth. A 353 (1994) 716]. Due to their performance and small size, they are ideal detectors for hand-held applications such as verification of spent and fresh fuel, U/Pu attribute tests, as well as the determination of 235U enrichment. The hemispherical CdZnTe type produced by RITEC (Riga, Latvia) [Ivanov et al., 1997] is the most widely used detector in the field of inspection. With volumes ranging from 2 to 1500 mm³, their spectral performance is such that electronic pulse-shape correction is not required. This paper reports on work carried out with a large-volume (15 × 15 × 7.5 mm³), high-efficiency hemispherical CdZnTe detector for the determination of 235U enrichment. The measurements were made with certified uranium samples whose enrichments, ranging from 0.31% to 92.42%, cover the whole range of in-field measurement conditions. The interposed
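
    Such measurements typically rest on the enrichment-meter principle: for samples that are effectively infinitely thick at 185.7 keV, the net peak count rate is proportional to the 235U enrichment, so certified standards determine a single calibration constant. A minimal sketch with invented count rates (the certified enrichments are taken from the abstract; everything else is assumed):

```python
# Enrichment-meter sketch: rate = k * enrichment for quasi-infinitely thick
# samples at 185.7 keV. Count rates are invented for illustration.
import numpy as np

enrich_std = np.array([0.31, 1.9, 4.5, 20.0, 92.42])   # certified, percent
rate_std = np.array([0.62, 3.7, 9.1, 40.3, 185.1])     # net 185.7 keV cps

# least-squares slope through the origin
k = (rate_std @ enrich_std) / (enrich_std @ enrich_std)

rate_unknown = 12.4                                    # cps, hypothetical
print(f"estimated enrichment: {rate_unknown / k:.2f} %")
```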

  2. Simulation of Hard Shadows on Large Spherical Terrains

    NASA Astrophysics Data System (ADS)

    Aslandere, Turgay; Flatken, Markus; Gerndt, Andreas

    2016-12-01

    Real-time rendering of high-precision shadows using digital terrain models as input data is a challenging task, especially when interactivity is targeted and level-of-detail data structures are utilized to tackle huge amounts of data. In this paper, we present a real-time rendering approach for the computation of hard shadows using large-scale digital terrain data obtained by satellite imagery. Our approach is based on an extended horizon mapping algorithm that avoids costly pre-computations and ensures high accuracy. This algorithm is further developed to handle large data. The proposed algorithms take the surface curvature of large spherical bodies into account during the computation. The performance issues are discussed and the results are presented. The generated images can be exploited in 3D research and aerospace-related areas.
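
    A horizon-mapping shadow test of the kind extended here can be sketched as a walk along the sun's azimuth over the height field, tracking the maximum elevation angle to any terrain sample; the s²/(2R) term approximates the curvature drop of a spherical body. The grid spacing, body radius (lunar, here), and toy terrain are assumptions for illustration.

```python
# Horizon-mapping hard-shadow test with a first-order curvature correction.
import numpy as np

def in_shadow(height, x, y, sun_azimuth, sun_elevation,
              cell=100.0, R_body=1.7374e6, n_steps=500):
    """cell: grid spacing in meters; angles in radians."""
    h0 = height[y, x]
    dx, dy = np.cos(sun_azimuth), np.sin(sun_azimuth)
    horizon = -np.inf
    for k in range(1, n_steps):
        s = k * cell                               # ground distance, m
        xi, yi = int(round(x + k*dx)), int(round(y + k*dy))
        if not (0 <= xi < height.shape[1] and 0 <= yi < height.shape[0]):
            break
        dh = height[yi, xi] - h0 - s*s / (2.0 * R_body)  # curvature drop
        horizon = max(horizon, np.arctan2(dh, s))
    return sun_elevation < horizon                 # below horizon -> shadow

rng = np.random.default_rng(2)
dem = rng.normal(0.0, 40.0, (256, 256)).cumsum(axis=1)   # toy terrain, m
print(in_shadow(dem, 128, 128, sun_azimuth=0.0,
                sun_elevation=np.radians(5.0)))
```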

  3. Large Eddy Simulation of Air Escape through a Hospital Isolation Room Single Hinged Doorway—Validation by Using Tracer Gases and Simulated Smoke Videos

    PubMed Central

    Saarinen, Pekka E.; Kalliomäki, Petri; Tang, Julian W.; Koskela, Hannu

    2015-01-01

    The use of hospital isolation rooms has increased considerably in recent years due to the worldwide outbreaks of various emerging infectious diseases. However, the passage of staff through isolation room doors is suspected to be a cause of containment failure, especially in case of hinged doors. It is therefore important to minimize inadvertent contaminant airflow leakage across the doorway during such movements. To this end, it is essential to investigate the behavior of such airflows, especially the overall volume of air that can potentially leak across the doorway during door-opening and human passage. Experimental measurements using full-scale mock-ups are expensive and labour intensive. A useful alternative approach is the application of Computational Fluid Dynamics (CFD) modelling using a time-resolved Large Eddy Simulation (LES) method. In this study simulated air flow patterns are qualitatively compared with experimental ones, and the simulated total volume of air that escapes is compared with the experimentally measured volume. It is shown that the LES method is able to reproduce, at room scale, the complex transient airflows generated during door-opening/closing motions and the passage of a human figure through the doorway between two rooms. This was a basic test case that was performed in an isothermal environment without ventilation. However, the advantage of the CFD approach is that the addition of ventilation airflows and a temperature difference between the rooms is, in principle, a relatively simple task. A standard method to observe flow structures is dosing smoke into the flow. In this paper we introduce graphical methods to simulate smoke experiments by LES, making it very easy to compare the CFD simulation to the experiments. The results demonstrate that the transient CFD simulation is a promising tool to compare different isolation room scenarios without the need to construct full-scale experimental models. The CFD model is able to reproduce

  4. Large Eddy Simulation of Air Escape through a Hospital Isolation Room Single Hinged Doorway--Validation by Using Tracer Gases and Simulated Smoke Videos.

    PubMed

    Saarinen, Pekka E; Kalliomäki, Petri; Tang, Julian W; Koskela, Hannu

    2015-01-01

    The use of hospital isolation rooms has increased considerably in recent years due to the worldwide outbreaks of various emerging infectious diseases. However, the passage of staff through isolation room doors is suspected to be a cause of containment failure, especially in the case of hinged doors. It is therefore important to minimize inadvertent contaminant airflow leakage across the doorway during such movements. To this end, it is essential to investigate the behavior of such airflows, especially the overall volume of air that can potentially leak across the doorway during door-opening and human passage. Experimental measurements using full-scale mock-ups are expensive and labour-intensive. A useful alternative approach is the application of Computational Fluid Dynamics (CFD) modelling using a time-resolved Large Eddy Simulation (LES) method. In this study simulated air flow patterns are qualitatively compared with experimental ones, and the simulated total volume of air that escapes is compared with the experimentally measured volume. It is shown that the LES method is able to reproduce, at room scale, the complex transient airflows generated during door-opening/closing motions and the passage of a human figure through the doorway between two rooms. This was a basic test case that was performed in an isothermal environment without ventilation. However, the advantage of the CFD approach is that the addition of ventilation airflows and a temperature difference between the rooms is, in principle, a relatively simple task. A standard method to observe flow structures is dosing smoke into the flow. In this paper we introduce graphical methods to simulate smoke experiments by LES, making it very easy to compare the CFD simulation to the experiments. The results demonstrate that the transient CFD simulation is a promising tool to compare different isolation room scenarios without the need to construct full-scale experimental models. The CFD model is able to reproduce

  5. Computationally Efficient Modeling and Simulation of Large Scale Systems

    NASA Technical Reports Server (NTRS)

    Jain, Jitesh (Inventor); Cauley, Stephen F (Inventor); Li, Hong (Inventor); Koh, Cheng-Kok (Inventor); Balakrishnan, Vankataramanan (Inventor)

    2014-01-01

    A system for simulating operation of a VLSI interconnect structure having capacitive and inductive coupling between nodes thereof, including a processor, and a memory, the processor configured to perform obtaining a matrix X and a matrix Y containing different combinations of passive circuit element values for the interconnect structure, the element values for each matrix including inductance L and inverse capacitance P, obtaining an adjacency matrix A associated with the interconnect structure, storing the matrices X, Y, and A in the memory, and performing numerical integration to solve first and second equations.

  6. Parallelization Strategies for Large Particle Simulations in Astrophysics

    NASA Astrophysics Data System (ADS)

    Pattabiraman, Bharath

    The modeling of collisional N-body stellar systems is a topic of great current interest in several branches of astrophysics and cosmology. These systems are dominated by the physics of relaxation, the collective effect of many weak, random gravitational encounters between stars. They connect directly to our understanding of star clusters, and to the formation of exotic objects such as X-ray binaries, pulsars, and massive black holes. As a prototypical multi-physics, multi-scale problem, the numerical simulation of such systems is computationally intensive, and can only be achieved through high-performance computing. The goal of this thesis is to present parallelization and optimization strategies that can be used to develop efficient computational tools for simulating collisional N-body systems. This leads to two major advances: 1) From an astrophysics perspective, these tools enable the study of new physical regimes out of reach by previous simulations. They also lead to much more complete parameter space exploration, allowing direct comparison of numerical results to observational data. 2) On the high-performance computing front, efficient parallelization of a multi-component application requires the meticulous redesign of the various components, as well as innovative parallelization techniques. Many of the challenges faced in this process lie at the very heart of high-performance computing research, including achieving optimal load balancing, maximizing utilization of computational resources, and making effective use of different parallel platforms. For modeling collisional N-body systems, a Monte Carlo approach provides an ideal balance between speed and accuracy, as opposed to the more accurate but less scalable direct N-body method. We describe the development of a new version of the Cluster Monte Carlo (CMC) code capable of simulating systems with a realistic number of stars, while accounting for all important physical processes. This efficient and scalable

  7. Earth matter effects on supernova neutrinos in large-volume detectors

    NASA Astrophysics Data System (ADS)

    Borriello, Enrico

    2013-04-01

    Neutrino oscillations in the Earth matter may introduce peculiar modulations in the supernova (SN) neutrino spectra. The detection of this effect has been proposed as a diagnostic tool for the neutrino mass hierarchy. We perform an updated study on the observability of this effect at large next-generation underground detectors (i.e., 0.4 Mton water Cherenkov, 50 kton scintillation and 100 kton liquid argon detectors) based on neutrino fluxes from state-of-the-art SN simulations and accounting for statistical fluctuations via Monte Carlo simulations. Since the average energies predicted by recent simulations are lower than previously expected and a tendency towards the equalization of the neutrino fluxes appears during the SN cooling phase, the detection of the Earth matter effect will be more challenging than expected from previous studies. We find that none of the proposed detectors will be able to detect the Earth modulation for the neutrino signal of a typical galactic SN at 10 kpc. It should be observable in a 100 kton liquid argon detector for a SN at a few kpc, and all three detectors would clearly see the Earth signature for very close-by stars only (d ≈ 200 pc).

  8. Major risk from rapid, large-volume landslides in Europe (EU Project RUNOUT)

    NASA Astrophysics Data System (ADS)

    Kilburn, Christopher R. J.; Pasuto, Alessandro

    2003-08-01

    Project RUNOUT has investigated methods for reducing the risk from large-volume landslides in Europe, especially those involving rapid rates of emplacement. Using field data from five test sites (Bad Goisern and Köfels in Austria, Tessina and Vajont in Italy, and the Barranco de Tirajana in Gran Canaria, Spain), the studies have developed (1) techniques for applying geomorphological investigations and optical remote sensing to map landslides and their evolution; (2) analytical, numerical, and cellular automata models for the emplacement of sturzstroms and debris flows; (3) a brittle-failure model for forecasting catastrophic slope failure; (4) new strategies for integrating large-area Global Positioning System (GPS) arrays with local geodetic monitoring networks; (5) methods for raising public awareness of landslide hazards; and (6) Geographic Information System (GIS)-based databases for the test areas. The results highlight the importance of multidisciplinary studies of landslide hazards, combining subjects as diverse as geology and geomorphology, remote sensing, geodesy, fluid dynamics, and social profiling. They have also identified key goals for an improved understanding of the physical processes that govern landslide collapse and runout, as well as for designing strategies for raising public awareness of landslide hazards and for implementing appropriate land management policies for reducing landslide risk.

  9. Large-volume ultralow background germanium-germanium coincidence/anticoincidence gamma-ray spectrometer

    SciTech Connect

    Brodzinski, R.L.; Brown, D.P.; Evans, J.C. Jr.; Hensley, W.K.; Reeves, J.H.; Wogman, N.A.; Avignone, F.T. III; Miley, H.S.; Moore, R.S.

    1984-03-01

    A large volume (approx. 1440 cm³), multicrystal, high resolution intrinsic germanium gamma-ray spectrometer has been designed based on three generations of experiments. The background from construction materials used in standard commercial configurations has been reduced by at least two orders of magnitude. Data taken with a 132 cm³ prototype detector, installed in the Homestake Gold Mine, are presented. The first application of the full scale detector will be an ultrasensitive search for neutrinoless and two-neutrino double beta decay of ⁷⁶Ge. The size and geometrical configuration of the crystals is chosen to optimize detection of double beta decay to the first excited state of ⁷⁶Se with subsequent emission of a 559 keV gamma ray. The detector will be sufficiently sensitive for measuring the neutrinoless double beta decay to the ground state to establish a minimum half life of 1.4×10²⁴ y. Application of the large spectrometer system to the analysis of low level environmental and biological samples is discussed.

  10. A configurable simulation environment for the efficient simulation of large-scale spiking neural networks on graphics processors.

    PubMed

    Nageswaran, Jayram Moorkanikara; Dutt, Nikil; Krichmar, Jeffrey L; Nicolau, Alex; Veidenbaum, Alexander V

    2009-01-01

    Neural network simulators that take into account the spiking behavior of neurons are useful for studying brain mechanisms and for various neural engineering applications. Spiking Neural Network (SNN) simulators have been traditionally simulated on large-scale clusters, super-computers, or on dedicated hardware architectures. Alternatively, Compute Unified Device Architecture (CUDA) Graphics Processing Units (GPUs) can provide a low-cost, programmable, and high-performance computing platform for simulation of SNNs. In this paper we demonstrate an efficient, biologically realistic, large-scale SNN simulator that runs on a single GPU. The SNN model includes Izhikevich spiking neurons, detailed models of synaptic plasticity and variable axonal delay. We allow user-defined configuration of the GPU-SNN model by means of a high-level programming interface written in C++ but similar to the PyNN programming interface specification. PyNN is a common programming interface developed by the neuronal simulation community to allow a single script to run on various simulators. The GPU implementation (on NVIDIA GTX-280 with 1 GB of memory) is up to 26 times faster than a CPU version for the simulation of 100K neurons with 50 Million synaptic connections, firing at an average rate of 7 Hz. For simulation of 10 Million synaptic connections and 100K neurons, the GPU SNN model is only 1.5 times slower than real-time. Further, we present a collection of new techniques related to parallelism extraction, mapping of irregular communication, and network representation for effective simulation of SNNs on GPUs. The fidelity of the simulation results was validated on CPU simulations using firing rate, synaptic weight distribution, and inter-spike interval analysis. Our simulator is publicly available to the modeling community so that researchers will have easy access to large-scale SNN simulations.
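    For orientation, the Izhikevich spiking neurons named in this abstract follow two coupled update equations from Izhikevich (2003). The following is a minimal vectorized NumPy step illustrating the published model; it is a sketch only, not the authors' CUDA implementation with synaptic plasticity and axonal delays.

```python
import numpy as np

def izhikevich_step(v, u, I, a, b, c, d, dt=1.0):
    """One Euler step of the Izhikevich (2003) neuron model.
    v: membrane potential (mV), u: recovery variable, I: input current.
    Neurons that crossed the 30 mV threshold are reset first, matching
    the ordering of the published reference loop."""
    fired = v >= 30.0
    v = np.where(fired, c, v)
    u = np.where(fired, u + d, u)
    dv = 0.04 * v**2 + 5.0 * v + 140.0 - u + I
    du = a * (b * v - u)
    return v + dt * dv, u + dt * du, fired
```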

  11. Tracking reactive pollutants in large groundwater systems by particle-based simulations

    NASA Astrophysics Data System (ADS)

    Kalbacher, T.; Sun, Y.; He, W.; Jang, E.; Delfs, J.; Shao, H.; Park, C.; Kolditz, O.

    2013-12-01

    Worldwide, great amounts of human and financial resources are being invested to protect and secure clean water resources. Especially in arid and semi-arid regions, civilization depends on the availability of freshwater from the underlying aquifer systems, where water quality and quantity are often dramatically deteriorating. The main reasons for the deterioration of water quality are extensive fertilizer use in agriculture and waste water from cities and various industries. It may be assumed that climate and demographic changes will add further stress to this situation in the future. One way to assess water quality is to model the coupled groundwater and chemical system, e.g. to assess the impact of possible contaminant precipitation, absorption and migration in subsurface media. Currently, simulating such scenarios at large scales is a challenging task due to the extreme computational load, numerical stability issues, scale-dependencies and spatially and temporally infrequently distributed or missing data, which can lead, e.g., to inappropriate model simplifications and additional uncertainties in the results. The simulation of advective-dispersive mass transport is usually solved by standard finite difference, finite element or finite volume methods. Particle tracking is an alternative method, commonly used e.g. to delineate contaminant travel times, with the advantage of being numerically more stable and computationally less expensive. Since particle tracking is used to evaluate groundwater residence times, it seems natural and straightforward to include reactive processes to track geochemical changes as well. The main focus of the study is the evaluation of reactive transport processes at large scales. Therefore, a number of new methods have been developed and implemented into the OpenGeoSys project, which is a scientific, FEM-based, open source code for numerical simulation of thermo-hydro-mechanical-chemical processes in porous and fractured media (www
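    As a rough illustration of why particle tracking is attractive here: a random-walk particle-tracking step advances each particle by the local advective displacement plus a dispersive random jump, and reactive processes can then be evaluated on the particles. The sketch below assumes a generic isotropic dispersion coefficient and a user-supplied velocity field; it is not the OpenGeoSys implementation.

```python
import numpy as np

def random_walk_step(x, velocity, D, dt, rng):
    """Advance particle positions one time step: advection by the local
    velocity plus an isotropic dispersive random displacement.
    x: (n, 3) positions; velocity(x) -> (n, 3); D: dispersion coefficient."""
    drift = velocity(x) * dt
    jump = rng.normal(0.0, np.sqrt(2.0 * D * dt), size=x.shape)
    return x + drift + jump

# Example: uniform flow in +x with D = 1e-6 m^2/s
rng = np.random.default_rng(0)
x = np.zeros((1000, 3))
x = random_walk_step(x, lambda p: np.tile([1e-5, 0, 0], (len(p), 1)),
                     D=1e-6, dt=3600.0, rng=rng)
```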

  12. The Jefferson Project: Large-eddy simulations of a watershed

    NASA Astrophysics Data System (ADS)

    Watson, C.; Cipriani, J.; Praino, A. P.; Treinish, L. A.; Tewari, M.; Kolar, H.

    2015-12-01

    The Jefferson Project is a new endeavor at Lake George, NY by IBM Research, Rensselaer Polytechnic Institute (RPI) and The Fund for Lake George. Lake George is an oligotrophic lake - one of low nutrients - and a 30-year study recently published by RPI's Darrin Fresh Water Institute highlighted that the renowned water quality is declining due to the injection of salt (from runoff), algae, and invasive species. In response, the Jefferson Project is developing a system to provide extensive data on relevant physical, chemical and biological parameters that drive ecosystem function. The system will be capable of real-time observations and interactive modeling of the atmosphere, watershed hydrology, lake circulation and food web dynamics. In this presentation, we describe the development of the operational forecast system used to simulate the atmosphere in the model stack, Deep Thunder™ (a configuration of the ARW-WRF model). The model performs 48-hr forecasts twice daily in a nested configuration, and in this study we present results from ongoing tests where the innermost domains are dx = 333 m and 111 m. We discuss the model's ability to simulate boundary layer processes, lake surface conditions (an input into the lake model), and precipitation (an input into the hydrology model) during different weather regimes, and the challenges of data assimilation and validation at this scale. We also explore the potential for additional nests over select regions of the watershed to better capture turbulent boundary layer motions.

  13. Large Scale Three-dimensional Magnetohydrodynamics Simulations of Protostellar Jets

    NASA Astrophysics Data System (ADS)

    Cai, Kai; Staff, J. E.; Niebergal, B. P.; Pudritz, R. E.; Ouyed, R.

    2007-05-01

    High resolution spectra of protostellar jets obtained by the Hubble Space Telescope (HST) during the past few years, especially those near the jet base, have made it possible for a direct comparison with jet simulation results. Using Zeus-MP code, we extend our three-dimensional time-dependent calculations of such jets launched from the surface of Keplerian accretion disks to physical scales that are probed by the HST observations. We produce velocity channel maps and other diagnostics of our jet simulations that can be directly compared with the observations. In particular, the observations of jet rotation and velocity structure on these larger scales (50 AU) can be used to constrain the physics of the disk wind at its source, including information about the magnetic field configuration on the disk as well as the mass loading of the jet by the underlying accretion disk. Our approach will ultimately allow the observations to put strong constraints on the nature of the central engine. This work is supported by a grant from NSERC. K.C. acknowledges support from a CITA National Fellowship.

  14. Flow simulation of a Pelton bucket using finite volume particle method

    NASA Astrophysics Data System (ADS)

    Vessaz, C.; Jahanbakhsh, E.; Avellan, F.

    2014-03-01

    The objective of the present paper is to perform an accurate numerical simulation of the high-speed water jet impinging on a Pelton bucket. To reach this goal, the Finite Volume Particle Method (FVPM) is used to discretize the governing equations. FVPM is an arbitrary Lagrangian-Eulerian method, which combines attractive features of Smoothed Particle Hydrodynamics and the conventional mesh-based Finite Volume Method. This method is able to satisfy free surface and no-slip wall boundary conditions precisely. The fluid flow is assumed weakly compressible and the wall boundary is represented by one layer of particles located on the bucket surface. In the present study, the simulations of the flow in a stationary bucket are investigated for three different impinging angles: 72°, 90° and 108°. The particle resolution is first validated by a convergence study. Then, the FVPM results are validated with available experimental data and conventional grid-based Volume Of Fluid simulations. It is shown that the wall pressure field is in good agreement with the experimental and numerical data. Finally, the torque evolution and water sheet location are presented for a simulation of five rotating Pelton buckets.
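    The "weakly compressible" assumption mentioned above is commonly realized in particle methods by closing the system with a stiff barotropic equation of state, such as the Tait equation, so that pressure follows directly from density. A minimal sketch of that standard closure follows; the parameter values are illustrative assumptions, not those used in the paper.

```python
def tait_pressure(rho, rho0=1000.0, c0=50.0, gamma=7.0):
    """Tait equation of state often used in weakly compressible particle
    methods: p = B * ((rho/rho0)**gamma - 1), with stiffness B chosen from
    an artificial sound speed c0 large enough to keep density variations
    near 1% of the reference density rho0."""
    B = rho0 * c0**2 / gamma
    return B * ((rho / rho0) ** gamma - 1.0)
```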

  15. Large Eddy Simulations (LES) and Direct Numerical Simulations (DNS) for the computational analyses of high speed reacting flows

    NASA Technical Reports Server (NTRS)

    Givi, Peyman; Madnia, Cyrus K.; Steinberger, C. J.; Frankel, S. H.

    1992-01-01

    The principal objective is to extend the boundaries within which large eddy simulations (LES) and direct numerical simulations (DNS) can be applied in computational analyses of high speed reacting flows. A summary of work accomplished during the last six months is presented.

  16. The oligocene Lund Tuff, Great Basin, USA: A very large volume monotonous intermediate

    USGS Publications Warehouse

    Maughan, L.L.; Christiansen, E.H.; Best, M.G.; Gromme, C.S.; Deino, A.L.; Tingey, D.G.

    2002-01-01

    Unusual monotonous intermediate ignimbrites consist of phenocryst-rich dacite that occurs as very large volume (> 1000 km³) deposits that lack systematic compositional zonation, comagmatic rhyolite precursors, and underlying plinian beds. They are distinct from countless, usually smaller volume, zoned rhyolite-dacite-andesite deposits that are conventionally believed to have erupted from magma chambers in which thermal and compositional gradients were established because of sidewall crystallization and associated convective fractionation. Despite their great volume, or because of it, monotonous intermediates have received little attention. Documentation of the stratigraphy, composition, and geologic setting of the Lund Tuff - one of four monotonous intermediate tuffs in the middle-Tertiary Great Basin ignimbrite province - provides insight into its unusual origin and, by implication, the origin of other similar monotonous intermediates. The Lund Tuff is a single cooling unit with normal magnetic polarity whose volume likely exceeded 3000 km³. It was emplaced 29.02 ± 0.04 Ma in and around the coeval White Rock caldera, which has an unextended north-south diameter of about 50 km. The tuff is monotonous in that its phenocryst assemblage is virtually uniform throughout the deposit: plagioclase > quartz ≈ hornblende > biotite > Fe-Ti oxides ≈ sanidine > titanite, zircon, and apatite. However, ratios of phenocrysts vary by as much as an order of magnitude in a manner consistent with progressive crystallization in the pre-eruption chamber. A significant range in whole-rock chemical composition (e.g., 63-71 wt% SiO2) is poorly correlated with phenocryst abundance. These compositional attributes cannot have been caused wholly by winnowing of glass from phenocrysts during eruption, as has been suggested for the monotonous intermediate Fish Canyon Tuff. Pumice fragments are also crystal-rich, and chemically and mineralogically indistinguishable from bulk tuff. We

  17. Large-eddy simulation of supercritical fluid flow and combustion

    NASA Astrophysics Data System (ADS)

    Huo, Hongfa

    The present study focuses on the modeling and simulation of injection, mixing, and combustion of real fluids at supercritical conditions. The objectives of the study are: (1) to establish a unified theoretical framework that can be used to study the turbulent combustion of real fluids; (2) to implement the theoretical framework and conduct numerical studies with the aim of improving the understanding of the flow and combustion dynamics at conditions representative of contemporary liquid-propellant rocket engine operation; (3) to identify the key design parameters and the flow variables which dictate the dynamics characteristics of swirl- and shear- coaxial injectors. The theoretical and numerical framework is validated by simulating the Sandia Flame D. The calculated axial and radial profiles of velocity, temperature, and mass fractions of major species are in reasonably good agreement with the experimental measurements. The conditionally averaged mass fraction profiles agree very well with the experimental results at different axial locations. The validated model is first employed to examine the flow dynamics of liquid oxygen in a pressure swirl injector at supercritical conditions. Emphasis is placed on analyzing the effects of external excitations on the dynamic response of the injector. The high-frequency fluctuations do not significantly affect the flow field as they are dissipated shortly after being introduced into the flow. However, the lower-frequency fluctuations are amplified by the flow. As a result, the film thickness and the spreading angle at the nozzle exit fluctuate strongly for low-frequency external excitations. The combustion of gaseous oxygen/gaseous hydrogen in a high-pressure combustion chamber for a shear coaxial injector is simulated to assess the accuracy and the credibility of the computer program when applied to a sub-scale model of a combustor. The predicted heat flux profile is compared with the experimental and numerical studies. The

  18. Hematopoietic progenitor cell large volume leukapheresis (LVL) on the Fenwal Amicus blood separator.

    PubMed

    Burgstaler, Edwin A; Pineda, Alvaro A; Winters, Jeffrey L

    2004-01-01

    A technique for large volume leukapheresis (LVL) for hematopoietic progenitor cell (HPC) collection using the Fenwal Amicus is presented. It was compared to standard collections (STD) with regard to CD34+ cell yields and cross-cellular content. Optimal cycle volumes and machine settings were evaluated for LVL procedures. A total of 68 patients underwent 80 HPC collection procedures. Because of differences in CD34+ cell yields associated with peripheral white blood cell counts (WBC), the comparison was divided into groups of 20 with WBC ≤35×10⁹/L (≤35 K) and those >35×10⁹/L (>35 K). Baseline CD34+ cell counts (peripheral count when patient started HPC collection) were used (median 18-23 cells/μl). Significantly more whole blood (corrected for anticoagulant) was processed with LVL (LVL 20 l vs. STD 13.5 l). For ≤35 K, median CD34+ ×10⁶, WBC ×10⁹, RBC ml, Plt ×10¹¹ yields/collection were 183, 21.2, 14, 0.8, respectively, for STD vs. 307, 22.1, 11, 1.0, respectively, for LVL. For >35 K, median CD34+ ×10⁶, WBC ×10⁹, RBC ml, Plt ×10¹¹ yields/collection were 189, 32.7, 15, 1.4, respectively, for STD vs. 69, 40.8, 21, 1.3, respectively, for LVL. We have described a method of LVL using the Amicus that, in patients with pre-procedure WBC ≤35×10⁹/L, collects more CD34+ cells than a standard procedure with acceptable cross-cellular content. This method is not recommended when pre-procedure WBC counts are >35×10⁹/L.

  19. A perspective on large eddy simulation of problems in the nuclear industry

    SciTech Connect

    Hassan, Y.A.; Pruitt, J.M.; Steininger, D.A.

    1995-12-01

    Because of the complex nature of coolant flow in nuclear reactors, current subchannel methods for light water reactor analysis are insufficient. The large eddy simulation method has been proposed as a computational tool for subchannel analysis. In large eddy simulation, large flow structures are computed while small scales are modeled, thereby decreasing computational time as compared with direct numerical simulation methods. Large eddy simulation has been used in complex geometry calculations providing good results in tube bundle cross-flow situations in steam generators. It is proposed that the large eddy simulation method be extended from single- to two-phase flow calculations to help in the prediction of the thermal diffusion of energy between adjacent subchannels.

  20. Computationally efficient modeling and simulation of large scale systems

    NASA Technical Reports Server (NTRS)

    Jain, Jitesh (Inventor); Cauley, Stephen F. (Inventor); Li, Hong (Inventor); Koh, Cheng-Kok (Inventor); Balakrishnan, Venkataramanan (Inventor)

    2012-01-01

    A method of simulating operation of a VLSI interconnect structure having capacitive and inductive coupling between nodes thereof. A matrix X and a matrix Y containing different combinations of passive circuit element values for the interconnect structure are obtained, where the element values for each matrix include inductance L and inverse capacitance P. An adjacency matrix A associated with the interconnect structure is obtained. Numerical integration is used to solve first and second equations, each including as a factor the product of the inverse matrix X⁻¹ and at least one other matrix, with the first equation including X⁻¹Y, X⁻¹A, and X⁻¹P, and the second equation including X⁻¹A and X⁻¹P.
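    The claim text gives only the structure of the two update equations, so the sketch below is deliberately generic: it shows the kind of numerical integration step involved, applied to an illustrative linear descriptor system built from matrices X and Y. The trapezoidal scheme, the state equation, and the function name are assumptions for illustration, not the patented equations.

```python
import numpy as np

def trapezoidal_solve(X, Y, u, x0, dt, steps):
    """Illustrative only: integrate the descriptor system
    X dx/dt = -Y x + u(t) with the trapezoidal rule, i.e.
    (X/dt + Y/2) x_{k+1} = (X/dt - Y/2) x_k + (u_k + u_{k+1}) / 2."""
    lhs = X / dt + 0.5 * Y
    rhs_mat = X / dt - 0.5 * Y
    x, out = x0.copy(), [x0.copy()]
    for k in range(steps):
        rhs = rhs_mat @ x + 0.5 * (u(k * dt) + u((k + 1) * dt))
        x = np.linalg.solve(lhs, rhs)
        out.append(x.copy())
    return np.array(out)
```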

  1. Methodology for analysis and simulation of large multidisciplinary problems

    NASA Technical Reports Server (NTRS)

    Russell, William C.; Ikeda, Paul J.; Vos, Robert G.

    1989-01-01

    The Integrated Structural Modeling (ISM) program is being developed for the Air Force Weapons Laboratory and will be available for Air Force work. Its goal is to provide a design, analysis, and simulation tool intended primarily for directed energy weapons (DEW), kinetic energy weapons (KEW), and surveillance applications. The code is designed to run on DEC (VMS and UNIX), IRIS, Alliant, and Cray hosts. Several technical disciplines are included in ISM, namely structures, controls, optics, thermal, and dynamics. Four topics from the broad ISM goal are discussed. The first is project configuration management and includes two major areas: the software and database arrangement and the system model control. The second is interdisciplinary data transfer and refers to exchange of data between various disciplines such as structures and thermal. Third is a discussion of the integration of component models into one system model, i.e., multiple discipline model synthesis. Last is a presentation of work on a distributed processing computing environment.

  2. Computationally efficient modeling and simulation of large scale systems

    NASA Technical Reports Server (NTRS)

    Jain, Jitesh (Inventor); Cauley, Stephen F. (Inventor); Li, Hong (Inventor); Koh, Cheng-Kok (Inventor); Balakrishnan, Venkataramanan (Inventor)

    2010-01-01

    A method of simulating operation of a VLSI interconnect structure having capacitive and inductive coupling between nodes thereof. A matrix X and a matrix Y containing different combinations of passive circuit element values for the interconnect structure are obtained, where the element values for each matrix include inductance L and inverse capacitance P. An adjacency matrix A associated with the interconnect structure is obtained. Numerical integration is used to solve first and second equations, each including as a factor the product of the inverse matrix X⁻¹ and at least one other matrix, with the first equation including X⁻¹Y, X⁻¹A, and X⁻¹P, and the second equation including X⁻¹A and X⁻¹P.

  3. Neutral Buoyancy Simulator (NBS) NB-1 Large Mass Transfer simulation

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Once the United States' space program had progressed from Earth's orbit into outer space, the prospect of building and maintaining a permanent presence in space was realized. To accomplish this feat, NASA launched a temporary workstation, Skylab, to discover the effects of low gravity and weightlessness on the human body, and also to develop tools and equipment that would be needed in the future to build and maintain a more permanent space station. The structures, techniques, and work schedules had to be carefully designed to fit this unique construction site. The components had to be lightweight for transport into orbit, yet durable. The station also had to be made with removable parts for easy servicing and repairs by astronauts. All of the tools necessary for service and repairs had to be designed for easy manipulation by a suited astronaut. And construction methods had to be efficient due to the limited time the astronauts could remain outside their controlled environment. In light of all the specific needs for this project, an environment on Earth had to be developed that could simulate a low gravity atmosphere. A Neutral Buoyancy Simulator (NBS) was constructed by NASA Marshall Space Flight Center (MSFC) in 1968. Since then, NASA scientists have used this facility to understand how humans work best in low gravity and also provide information about the different kinds of structures that can be built. Pictured is a Massachusetts Institute of Technology (MIT) student working in a spacesuit on the Experimental Assembly of Structures in Extravehicular Activity (EASE) project, which was developed as a joint effort between MSFC and MIT. The EASE experiment required that crew members assemble small components to form larger components, working from the payload bay of the space shuttle.

  4. Storage and eruption of large volumes of rhyolite lava: Example from Solfatara Plateau, Yellowstone Caldera

    NASA Astrophysics Data System (ADS)

    Befus, K.; Gardner, J. E.; Zinke, R.

    2010-12-01

    The cataclysmic volcanic history of Yellowstone caldera has been extensively documented in both popular media and scholarly journals. High-silica magmas should erupt explosively because of their high viscosity and volatile content; however, numerous passively-erupted, large-volume rhyolite lava flows have also erupted from Yellowstone caldera. We use petrologic observations of one such flow, the Solfatara Plateau obsidian lava, to provide insights into the eruptive dynamics and pre-eruptive magmatic conditions of large-volume rhyolite lava. Solfatara Plateau, a 7 km³ high-silica rhyolite lava that extends 4-15 km from vent, erupted 103±8 ka within the Yellowstone caldera [1]. Quartz and sanidine are the dominant phenocrysts, with crystal contents of 5-10% throughout. FTIR analyses of glass inclusions in quartz and sanidine phenocrysts indicate that pre-eruptive dissolved volatile contents were up to 3.0 wt. % H2O and 250 ppm CO2. Myrmekite blebs partially envelop quartz and sanidine phenocrysts in all samples from along the margins of the flow (up to 3 km from flow front). Sanidines in samples from near vent are unzoned at Or49±2. Those at the flow front have similar cores, but rims are more sodic (Or44±6). Alkali feldspars in myrmekite range from Or27 to Or50. Petrologic observations, such as heavily embayed quartz phenocrysts and dissolution of myrmekite indicate disequilibrium within the system, likely as a result of significant heating that caused portions of the magma body to go from near-solidus to near-liquidus conditions prior to erupting. When it did erupt, volatile loss during eruptive ascent led to undercooling and significant microlite crystallization of Fe-Ti oxide and clinopyroxene microlites. Fe-Ti microlites occur as roughly equidimensional crystals, 1-10 µm across, as well as high-aspect-ratio needles, 3-60 µm long. Clinopyroxene microlites occur primarily as individual prismatic crystals, but also occur as linked, curved chains or as overgrowths

  5. Physiological and Psychological Changes Following Liposuction of Large Volumes of Fat in Overweight and Obese Women

    PubMed Central

    Geliebter, Allan; Krawitz, Emily; Ungredda, Tatiana; Peresechenski, Ella; Giese, Sharon Y.

    2016-01-01

    Background Liposuction can remove a substantial amount of body fat. We investigated the effects of liposuction of large volumes of fat on anthropometrics, body composition (BIA), metabolic hormones, and psychological measures in overweight/obese women. To our knowledge, this is the first study to examine both physiological and psychological changes following liposuction of large volumes of fat in humans. Method Nine premenopausal healthy overweight/obese women (age = 35.9 ± 7.1 SD, weight = 84.4 kg ± 13.6, BMI = 29.9 kg/m2 ± 2.9) underwent liposuction, removing 3.92 kg ± 1.04 SD of fat. Following an overnight fast, height, weight, waist, and hip circumferences were measured at baseline (one week pre-surgery) and post-surgery (wk 1,4,12). Blood samples were drawn for fasting concentrations of glucose, insulin, leptin, and ghrelin. The Body Shape Questionnaire (BSQ), Body Dysmorphic Disorder (BDD) Examination Self-Report (BDDE-SR), and Zung Self-Rating Depression Scale (ZDS) were administered. Results Body weight, BMI, waist circumference, and body fat consistently decreased over time (p < .05). Glucose did not change significantly, but insulin decreased from wk 1 to wk 12 (p < .05). Leptin decreased from baseline to wk 1 (p = .01); ghrelin increased but not significantly. Changes in body fat and waist circumference (baseline to wk 1) correlated positively with changes in insulin during that period, and correlated inversely with changes in ghrelin (p < .05). BSQ scores decreased significantly over time (p = .004), but scores for BDDE-SR (p = .10) and ZDS (p = .24) did not change significantly. Conclusion Liposuction led to significant decreases in body weight and fat, waist circumference, and leptin levels. Changes in body fat and waist circumference correlated with concurrent changes in the adipose-related hormones, insulin and ghrelin (baseline to wk 1), and body shape perception improved. Thus, besides the obvious cosmetic effects, liposuction led to several

  6. Flight Technical Error Analysis of the SATS Higher Volume Operations Simulation and Flight Experiments

    NASA Technical Reports Server (NTRS)

    Williams, Daniel M.; Consiglio, Maria C.; Murdoch, Jennifer L.; Adams, Catherine H.

    2005-01-01

    This paper provides an analysis of Flight Technical Error (FTE) from recent SATS experiments, called the Higher Volume Operations (HVO) Simulation and Flight experiments, which NASA conducted to determine pilot acceptability of the HVO concept for normal operating conditions. Reported are FTE results from simulation and flight experiment data indicating the SATS HVO concept is viable and acceptable to low-time instrument-rated pilots when compared with today's system (baseline). Described is the comparative FTE analysis of lateral, vertical, and airspeed deviations from the baseline and SATS HVO experimental flight procedures. Based on FTE analysis, all evaluation subjects, low-time instrument-rated pilots, flew the HVO procedures safely and proficiently in comparison to today's system. In all cases, the results of the flight experiment validated the results of the simulation experiment and confirm the utility of the simulation platform for comparative Human-in-the-Loop (HITL) studies of SATS HVO and baseline operations.

  7. Large Eddy Simulation of wind turbine wakes: detailed comparisons of two codes focusing on effects of numerics and subgrid modeling

    NASA Astrophysics Data System (ADS)

    Martínez-Tossas, Luis A.; Churchfield, Matthew J.; Meneveau, Charles

    2015-06-01

    In this work we report on results from a detailed comparative numerical study from two Large Eddy Simulation (LES) codes using the Actuator Line Model (ALM). The study focuses on prediction of wind turbine wakes and their breakdown when subject to uniform inflow. Previous studies have shown relative insensitivity to subgrid modeling in the context of a finite-volume code. The present study uses the low dissipation pseudo-spectral LES code from Johns Hopkins University (LESGO) and the second-order, finite-volume OpenFOAM code (SOWFA) from the National Renewable Energy Laboratory. When subject to uniform inflow, the loads on the blades are found to be unaffected by subgrid models or numerics, as expected. The turbulence in the wake and the location of transition to a turbulent state are affected by the subgrid-scale model and the numerics.

  8. Large Eddy Simulation of Wind Turbine Wakes. Detailed Comparisons of Two Codes Focusing on Effects of Numerics and Subgrid Modeling

    SciTech Connect

    Martinez-Tossas, Luis A.; Churchfield, Matthew J.; Meneveau, Charles

    2015-06-18

    In this work we report on results from a detailed comparative numerical study from two Large Eddy Simulation (LES) codes using the Actuator Line Model (ALM). The study focuses on prediction of wind turbine wakes and their breakdown when subject to uniform inflow. Previous studies have shown relative insensitivity to subgrid modeling in the context of a finite-volume code. The present study uses the low dissipation pseudo-spectral LES code from Johns Hopkins University (LESGO) and the second-order, finite-volume OpenFOAM code (SOWFA) from the National Renewable Energy Laboratory. When subject to uniform inflow, the loads on the blades are found to be unaffected by subgrid models or numerics, as expected. The turbulence in the wake and the location of transition to a turbulent state are affected by the subgrid-scale model and the numerics.

  9. Large Eddy Simulation of Wind Turbine Wakes. Detailed Comparisons of Two Codes Focusing on Effects of Numerics and Subgrid Modeling

    DOE PAGES

    Martinez-Tossas, Luis A.; Churchfield, Matthew J.; Meneveau, Charles

    2015-06-18

    In this work we report on results from a detailed comparative numerical study from two Large Eddy Simulation (LES) codes using the Actuator Line Model (ALM). The study focuses on prediction of wind turbine wakes and their breakdown when subject to uniform inflow. Previous studies have shown relative insensitivity to subgrid modeling in the context of a finite-volume code. The present study uses the low dissipation pseudo-spectral LES code from Johns Hopkins University (LESGO) and the second-order, finite-volume OpenFOAM code (SOWFA) from the National Renewable Energy Laboratory. When subject to uniform inflow, the loads on the blades are found to be unaffected by subgrid models or numerics, as expected. The turbulence in the wake and the location of transition to a turbulent state are affected by the subgrid-scale model and the numerics.

  10. A dynamic mixed subgrid-scale model for large eddy simulation on unstructured grids: application to turbulent pipe flows

    NASA Astrophysics Data System (ADS)

    Lampitella, P.; Colombo, E.; Inzoli, F.

    2014-04-01

    The paper presents a consistent large eddy simulation (LES) framework which is particularly suited for implicitly filtered LES with unstructured finite volume (FV) codes. From the analysis of the subgrid-scale (SGS) stress tensor arising in this new LES formulation, a novel form of scale-similar SGS model is proposed and combined with a classical eddy viscosity term. The constants in the resulting mixed model are then computed through a new, cheaper, dynamic procedure based on a consistent redefinition of the Germano identity within the new LES framework. The dynamic mixed model is implemented in a commercial, unstructured, finite volume solver and numerical tests are performed on the turbulent pipe flow at Reτ = 320-1142, showing the flexibility and improvements of the approach over classical modeling strategies. Some limitations of the proposed implementation are also highlighted.
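    For context, dynamic procedures of the kind redefined in this paper build on two standard ingredients: the Smagorinsky eddy-viscosity closure and the Germano identity relating grid-level and test-level SGS stresses. The standard forms are shown below as background only; the paper's mixed model and its redefined identity differ from these.

```latex
% Smagorinsky closure and Germano identity (standard background forms)
\begin{align}
  \tau_{ij} - \tfrac{1}{3}\tau_{kk}\,\delta_{ij}
    &\approx -2\,(C_s\Delta)^2\,|\bar{S}|\,\bar{S}_{ij},
    \qquad |\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}, \\
  \mathcal{L}_{ij} &= T_{ij} - \widehat{\tau_{ij}}
    = \widehat{\bar{u}_i\,\bar{u}_j} - \hat{\bar{u}}_i\,\hat{\bar{u}}_j,
\end{align}
```

    Here overbars denote grid filtering, hats denote test filtering, and τ and T are the SGS stresses at the two filter levels; the resolved tensor L is computable from the LES field, which is what lets a dynamic procedure fit the model constants.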

  11. Implementing high-fidelity simulations with large groups of nursing students.

    PubMed

    Hooper, Barbara; Shaw, Luanne; Zamzam, Rebekah

    2015-01-01

    Nurse educators are increasing the use of simulation as a teaching strategy. Simulations are typically conducted with a small group of students. This article describes the process for implementing 6 high-fidelity simulations with a large group of undergraduate nursing students. The goal was to evaluate whether student knowledge, as measured by postsimulation quiz scores, increased when only a few individuals actively participated in the simulation while the other students observed.

  12. Feasibility study for a numerical aerodynamic simulation facility. Volume 2: Hardware specifications/descriptions

    NASA Technical Reports Server (NTRS)

    Green, F. M.; Resnick, D. R.

    1979-01-01

    An FMP (Flow Model Processor) was designed for use in the Numerical Aerodynamic Simulation Facility (NASF). The NASF was developed to simulate fluid flow over three-dimensional bodies in wind tunnel environments and in free space. The facility is applicable to studying aerodynamic and aircraft body designs. The following general topics are discussed in this volume: (1) FMP functional computer specifications; (2) FMP instruction specification; (3) standard product system components; (4) loosely coupled network (LCN) specifications/description; and (5) three appendices: performance of trunk allocation contention elimination (trace) method, LCN channel protocol and proposed LCN unified second level protocol.

  13. Large eddy simulation and direct numerical simulation of high speed turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Adumitroaie, V.; Frankel, S. H.; Madnia, C. K.; Givi, P.

    1993-01-01

    The objective of this research is to make use of Large Eddy Simulation (LES) and Direct Numerical Simulation (DNS) for the computational analyses of high speed reacting flows. Our efforts in the first phase of this research conducted within the past three years have been directed at several issues pertaining to the intricate physics of turbulent reacting flows. In our previous 5 semi-annual reports submitted to NASA LaRC, as well as several technical papers in archival journals, the results of our investigations have been fully described. In this progress report, which is different in format as compared to our previous documents, we focus only on the issue of LES. The reason for doing so is that LES is the primary issue of interest to our Technical Monitor and that our other findings were needed to support the activities conducted under this prime issue. The outcomes of our related investigations, nevertheless, are included in the appendices accompanying this report. The relevance of the materials in these appendices is, therefore, discussed only briefly within the body of the report. Here, results are presented of a priori and a posteriori analyses for validity assessments of assumed Probability Density Function (PDF) methods as potential subgrid scale (SGS) closures for LES of turbulent reacting flows. Simple non-premixed reacting systems involving an isothermal reaction of the type A + B yields Products under both chemical equilibrium and non-equilibrium conditions are considered. A priori analyses are conducted of a homogeneous box flow, and a spatially developing planar mixing layer to investigate the performance of the Pearson family of PDFs as SGS models. A posteriori analyses are conducted of the mixing layer using a hybrid one-equation Smagorinsky/PDF SGS closure. The Smagorinsky closure augmented by the solution of the subgrid turbulent kinetic energy (TKE) equation is employed to account for hydrodynamic fluctuations, and the PDF is employed for modeling the

  14. Large eddy simulation and direct numerical simulation of high speed turbulent reacting flows

    NASA Astrophysics Data System (ADS)

    Adumitroaie, V.; Frankel, S. H.; Madnia, C. K.; Givi, P.

    The objective of this research is to make use of Large Eddy Simulation (LES) and Direct Numerical Simulation (DNS) for the computational analyses of high speed reacting flows. Our efforts in the first phase of this research conducted within the past three years have been directed at several issues pertaining to the intricate physics of turbulent reacting flows. In our previous 5 semi-annual reports submitted to NASA LaRC, as well as several technical papers in archival journals, the results of our investigations have been fully described. In this progress report, which is different in format as compared to our previous documents, we focus only on the issue of LES. The reason for doing so is that LES is the primary issue of interest to our Technical Monitor and that our other findings were needed to support the activities conducted under this prime issue. The outcomes of our related investigations, nevertheless, are included in the appendices accompanying this report. The relevance of the materials in these appendices is, therefore, discussed only briefly within the body of the report. Here, results are presented of a priori and a posteriori analyses for validity assessments of assumed Probability Density Function (PDF) methods as potential subgrid scale (SGS) closures for LES of turbulent reacting flows. Simple non-premixed reacting systems involving an isothermal reaction of the type A + B yields Products under both chemical equilibrium and non-equilibrium conditions are considered. A priori analyses are conducted of a homogeneous box flow, and a spatially developing planar mixing layer to investigate the performance of the Pearson family of PDFs as SGS models. A posteriori analyses are conducted of the mixing layer using a hybrid one-equation Smagorinsky/PDF SGS closure. The Smagorinsky closure augmented by the solution of the subgrid turbulent kinetic energy (TKE) equation is employed to account for hydrodynamic fluctuations, and the PDF is employed for modeling the

  15. Modelling artificial sea salt emission in large eddy simulations

    PubMed Central

    Maalick, Z.; Korhonen, H.; Kokkola, H.; Kühn, T.; Romakkaniemi, S.

    2014-01-01

    We study the dispersion of sea salt particles from artificially injected sea spray at a cloud-resolving scale. Understanding of how different aerosol processes affect particle dispersion is crucial when designing emission sources for marine cloud brightening. Compared with previous studies, we include for the first time an explicit treatment of aerosol water, which takes into account condensation, evaporation and their effect on ambient temperature. This enables us to capture the negative buoyancy caused by water evaporation from aerosols. Additionally, we use a higher model resolution to capture aerosol loss through coagulation near the source point. We find that, with a seawater flux of 15 kg s−1, the cooling due to evaporation can be as much as 1.4 K, causing a delay in particle dispersion of 10–20 min. This delay enhances particle scavenging by a factor of 1.14 compared with simulations without aerosol water. We further show that both cooling and particle dispersion depend on the model resolution, with a maximum particle scavenging efficiency of 20% within 5 h after emission at maximum resolution of 50 m. Based on these results, we suggest further regional high-resolution studies which model several injection periods over several weeks. PMID:25404679

  16. Modelling artificial sea salt emission in large eddy simulations.

    PubMed

    Maalick, Z; Korhonen, H; Kokkola, H; Kühn, T; Romakkaniemi, S

    2014-12-28

    We study the dispersion of sea salt particles from artificially injected sea spray at a cloud-resolving scale. Understanding of how different aerosol processes affect particle dispersion is crucial when designing emission sources for marine cloud brightening. Compared with previous studies, we include for the first time an explicit treatment of aerosol water, which takes into account condensation, evaporation and their effect on ambient temperature. This enables us to capture the negative buoyancy caused by water evaporation from aerosols. Additionally, we use a higher model resolution to capture aerosol loss through coagulation near the source point. We find that, with a seawater flux of 15 kg s⁻¹, the cooling due to evaporation can be as much as 1.4 K, causing a delay in particle dispersion of 10-20 min. This delay enhances particle scavenging by a factor of 1.14 compared with simulations without aerosol water. We further show that both cooling and particle dispersion depend on the model resolution, with a maximum particle scavenging efficiency of 20% within 5 h after emission at maximum resolution of 50 m. Based on these results, we suggest further regional high-resolution studies which model several injection periods over several weeks.

  17. Large-eddy simulations of a propelled submarine model

    NASA Astrophysics Data System (ADS)

    Posa, Antonio; Balaras, Elias

    2015-11-01

    The influence of the propeller on the wake as well as the evolution of the turbulent boundary layers over an appended notional submarine geometry (DARPA SUBOFF) is reported. The present approach utilizes a wall-resolved LES, coupled with an immersed boundary formulation, to simulate the flow at model-scale Reynolds numbers (Re = 1.2×10⁶, based on the free-stream velocity and the length of the body). Cylindrical coordinates are adopted, and the computational grid is composed of 3.5 billion nodes. Our approach has been validated on the appended submarine body in towed conditions (without propeller), by comparisons to wind tunnel experiments in the literature. The comparison with the towed configuration shows profound modifications in the boundary layer over the stern surface, due to flow acceleration, with higher values of turbulent kinetic energy in the inner layer and lower values in the outer layer. This behavior was found to be tied to a different topology of the coherent structures between the propelled and towed cases. The wake is also highly affected, and the momentum deficit displays a non-monotonic evolution downstream. An axial peak of turbulent kinetic energy replaces the bimodal distribution of the stresses in the wake observed in the towed configuration. Supported by ONR Grant N000141110455, monitored by Dr. Ki-Han Kim.

  18. Neutral Buoyancy Simulator - NB32 - Large Space Structure

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The Hubble Space Telescope (HST) is a cooperative program of the European Space Agency (ESA) and the National Aeronautical and Space Administration (NASA) to operate a long-lived space-based observatory; it was the flagship mission of NASA's Great Observatories program. The HST program began as an astronomical dream in the 1940s. During the 1970s and 1980s, HST was finally designed and built; and it finally became operational in the 1990s. HST was deployed into a low-Earth orbit on April 25, 1990 from the cargo bay of the Space Shuttle Discovery (STS-31). The design of the HST took into consideration its length of service and the necessity of repairs and equipment replacement by making the body modular. In doing so, subsequent shuttle missions could recover the HST, replace faulty or obsolete parts and be re-released. MSFC's Neutral Buoyancy Simulator served as the training facility for shuttle astronauts for Hubble related missions. Shown is astronaut Shannon Lucid having her life support system checked prior to entering the NBS to begin training on the space telescope axial scientific instrument changeout.

  19. Large liquid rocket engine transient performance simulation system

    NASA Technical Reports Server (NTRS)

    Mason, J. R.; Southwick, R. D.

    1989-01-01

    Phase 1 of the Rocket Engine Transient Simulation (ROCETS) program consists of seven technical tasks: architecture; system requirements; component and submodel requirements; submodel implementation; component implementation; submodel testing and verification; and subsystem testing and verification. These tasks were completed. Phase 2 of ROCETS consists of two technical tasks: Technology Test Bed Engine (TTBE) model data generation; and system testing verification. During this period specific coding of the system processors was begun and the engineering representations of Phase 1 were expanded to produce a simple model of the TTBE. As the code was completed, some minor modifications to the system architecture centering on the global variable common, GLOBVAR, were necessary to increase processor efficiency. The engineering modules completed during Phase 2 are listed: INJTOO - main injector; MCHBOO - main chamber; NOZLOO - nozzle thrust calculations; PBRNOO - preburner; PIPE02 - compressible flow without inertia; PUMPOO - polytropic pump; ROTROO - rotor torque balance/speed derivative; and TURBOO - turbine. Detailed documentation of these modules is in the Appendix. In addition to the engineering modules, several submodules were also completed. These submodules include combustion properties, component performance characteristics (maps), and specific utilities. Specific coding was begun on the system configuration processor. All functions necessary for multiple module operation were completed but the SOLVER implementation is still under development. This system, the Verification Checkout Facility (VCF) allows interactive comparison of module results to store data as well as provides an intermediate checkout of the processor code. After validation using the VCF, the engineering modules and submodules were used to build a simple TTBE.

  20. Implementation of low communication frequency 3D FFT algorithm for ultra-large-scale micromagnetics simulation

    NASA Astrophysics Data System (ADS)

    Tsukahara, Hiroshi; Iwano, Kaoru; Mitsumata, Chiharu; Ishikawa, Tadashi; Ono, Kanta

    2016-10-01

    We implement low communication frequency three-dimensional fast Fourier transform algorithms in a micromagnetics simulator for the calculation of the magnetostatic field, which occupies a significant portion of large-scale micromagnetics simulations. This fast Fourier transform algorithm reduces the frequency of all-to-all communications from six to two times. Simulation times with our simulator show high scalability in parallelization, even if we perform the micromagnetics simulation using 32 768 physical computing cores. This low communication frequency fast Fourier transform algorithm enables world-largest-class micromagnetics simulations to be carried out with over one billion calculation cells.
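    One way to see where the all-to-all communications come from: a 3D FFT can be computed as three passes of 1-D FFTs with data repackings in between, and in a pencil-decomposed parallel code each repacking is an all-to-all, so reducing the number of repackings per transform reduces communication. The serial NumPy sketch below (an illustration of this general pattern, not the paper's algorithm) makes the passes explicit; its result equals np.fft.fftn(a) up to an axis permutation, e.g. np.allclose(out.transpose(2, 0, 1), np.fft.fftn(a)) holds.

```python
import numpy as np

def fft3d_by_passes(a):
    """Transpose-based 3D FFT: three passes of 1-D FFTs, with a data
    repacking between passes so the next transform axis is contiguous.
    In a pencil-decomposed parallel code each repacking is an MPI
    all-to-all. Returns the transform with axes permuted; apply
    .transpose(2, 0, 1) to match np.fft.fftn on the input layout."""
    a = np.fft.fft(a, axis=2)         # pass 1: transform the local axis
    a = a.transpose(0, 2, 1).copy()   # repacking 1 (parallel: all-to-all)
    a = np.fft.fft(a, axis=2)         # pass 2
    a = a.transpose(2, 1, 0).copy()   # repacking 2 (parallel: all-to-all)
    a = np.fft.fft(a, axis=2)         # pass 3
    return a
```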

  1. A simple method for the production of large volume 3D macroporous hydrogels for advanced biotechnological, medical and environmental applications

    PubMed Central

    Savina, Irina N.; Ingavle, Ganesh C.; Cundy, Andrew B.; Mikhalovsky, Sergey V.

    2016-01-01

    The development of bulk, three-dimensional (3D), macroporous polymers with high permeability, large surface area and large volume is highly desirable for a range of applications in the biomedical, biotechnological and environmental areas. The experimental techniques currently used are limited to the production of small size and volume cryogel material. In this work we propose a novel, versatile, simple and reproducible method for the synthesis of large volume porous polymer hydrogels by cryogelation. By controlling the freezing process of the reagent/polymer solution, large-scale 3D macroporous gels with wide interconnected pores (up to 200 μm in diameter) and large accessible surface area have been synthesized. For the first time, macroporous gels (of up to 400 ml bulk volume) with controlled porous structure were manufactured, with potential for scale up to much larger gel dimensions. This method can be used for production of novel 3D multi-component macroporous composite materials with a uniform distribution of embedded particles. The proposed method provides better control of freezing conditions and thus overcomes existing drawbacks limiting production of large gel-based devices and matrices. The proposed method could serve as a new design concept for functional 3D macroporous gels and composites preparation for biomedical, biotechnological and environmental applications. PMID:26883390

  2. A simple method for the production of large volume 3D macroporous hydrogels for advanced biotechnological, medical and environmental applications

    NASA Astrophysics Data System (ADS)

    Savina, Irina N.; Ingavle, Ganesh C.; Cundy, Andrew B.; Mikhalovsky, Sergey V.

    2016-02-01

    The development of bulk, three-dimensional (3D), macroporous polymers with high permeability, large surface area and large volume is highly desirable for a range of applications in the biomedical, biotechnological and environmental areas. The experimental techniques currently used are limited to the production of small size and volume cryogel material. In this work we propose a novel, versatile, simple and reproducible method for the synthesis of large volume porous polymer hydrogels by cryogelation. By controlling the freezing process of the reagent/polymer solution, large-scale 3D macroporous gels with wide interconnected pores (up to 200 μm in diameter) and large accessible surface area have been synthesized. For the first time, macroporous gels (of up to 400 ml bulk volume) with controlled porous structure were manufactured, with potential for scale up to much larger gel dimensions. This method can be used for production of novel 3D multi-component macroporous composite materials with a uniform distribution of embedded particles. The proposed method provides better control of freezing conditions and thus overcomes existing drawbacks limiting production of large gel-based devices and matrices. The proposed method could serve as a new design concept for functional 3D macroporous gels and composites preparation for biomedical, biotechnological and environmental applications.

  3. Can virtual simulation of breast tangential portals accurately predict lung and heart volumes?

    PubMed

    Cooke, Stacey; Rattray, Greg

    2003-03-01

    A treatment portal or simulator image has traditionally been used to demonstrate the lung and heart coverage of the breast tangential portal. In many cases, these images were acquired as a planning session on the linear accelerator. The patients were also CT scanned to assess the lung/heart volume and to determine the surgical site depth for the electron-boost energy. A study using 50 consecutive patients was performed comparing the digitally reconstructed radiograph (DRR) from the virtual simulation with treatment portal images. Modification of the patient's arm position was required when performing the planning CT scans due to the aperture size of the CT scanner. Virtual simulation was used to assess the potential variation of lung and heart measurements. The average difference in lung volume between the DRR and portal image was less than 2 mm, with a range of 0-5 mm. Arm position did not have a significant impact on field deviation; however, great care was taken to minimize any changes in arm position. The modification of the arm position for CT scanning did not lead to significant variations between the DRRs and portal images. The Advantage Sim software has proven capable of producing good quality DRR images, providing a realistic representation of the lung and heart volume included in the treatment portal.

  4. Photoperiod is associated with hippocampal volume in a large community sample.

    PubMed

    Miller, Megan A; Leckie, Regina L; Donofry, Shannon D; Gianaros, Peter J; Erickson, Kirk I; Manuck, Stephen B; Roecklein, Kathryn A

    2015-04-01

    Although animal research has demonstrated seasonal changes in hippocampal volume, reflecting seasonal neuroplasticity, seasonal differences in human hippocampal volume have yet to be documented. Hippocampal volume has also been linked to depressed mood, a seasonally varying phenotype. Therefore, we hypothesized that seasonal differences in day-length (i.e., photoperiod) would predict differences in hippocampal volume, and that this association would be linked to low mood. Healthy participants aged 30-54 (M = 43; SD = 7.32) from the University of Pittsburgh Adult Health and Behavior II project (n = 404; 53% female) were scanned in a 3T MRI scanner. Hippocampal volumes were determined using an automated segmentation algorithm in FreeSurfer. A mediation model tested whether hippocampal volume mediated the relationship between photoperiod and mood. Secondary analyses included seasonally fluctuating variables (i.e., sleep and physical activity) which have been shown to influence hippocampal volume. Shorter photoperiods were significantly associated with higher BDI scores (R² = 0.01, β = -0.12, P = 0.02) and smaller hippocampal volumes (R² = 0.40, β = 0.08, P = 0.04). However, due to the lack of an association between hippocampal volume and Beck Depression Inventory scores in the current sample, the mediation hypothesis was not supported. This study is the first to demonstrate an association between season and hippocampal volume. These data offer preliminary evidence that human hippocampal plasticity could be associated with photoperiod and indicate a need for longitudinal studies.
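
    For readers unfamiliar with the mediation model used above, the sketch below shows the bare logic on synthetic data (all variable values are hypothetical, not the study's): regress the mediator on the predictor (path a), regress the outcome on the mediator controlling for the predictor (path b), and examine the indirect effect a*b. In the study, it was the absent volume-mood association (path b) that broke the mediation chain.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 404
      photoperiod = rng.uniform(9, 15, n)                      # hours of daylight (synthetic)
      hippo = 4000 + 20 * photoperiod + rng.normal(0, 100, n)  # mediator: volume, mm^3 (synthetic)
      bdi = 10 - 0.3 * photoperiod + rng.normal(0, 3, n)       # outcome: depression score (synthetic)

      def ols_slope(x, y):
          """Slope from a simple least-squares fit of y on x (with intercept)."""
          X = np.column_stack([np.ones_like(x), x])
          beta, *_ = np.linalg.lstsq(X, y, rcond=None)
          return beta[1]

      a = ols_slope(photoperiod, hippo)            # path a: photoperiod -> mediator
      # Path b: mediator -> outcome, controlling for photoperiod.
      X = np.column_stack([np.ones(n), photoperiod, hippo])
      beta, *_ = np.linalg.lstsq(X, bdi, rcond=None)
      b = beta[2]
      print("indirect effect a*b =", a * b)        # near zero here, mirroring the null mediation result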

  5. Annealing as grown large volume CZT single crystals increased spectral resolution

    SciTech Connect

    Dr. Longxia Li

    2008-03-19

    The spectroscopic performance of current large-volume cadmium zinc telluride (Cd0.9Zn0.1Te, CZT) detectors is impaired by the cumulative effect of tellurium precipitates (secondary phases) present in CZT single crystals grown by low-pressure Bridgman techniques (1). This statistical effect may limit the energy resolution of large-volume CZT detectors (typically 2-5% at 662 keV for 12-mm thick devices). The stochastic nature of the interaction prevents the use of any electronic or digital charge correction technique without a significant reduction in detector efficiency. This volume constraint hampers the utility of CZT, since the detectors are inefficient at detecting photons >1 MeV and/or in low-fluence situations. During the project, seven runs of CZT ingots were grown, in which the indium dopant concentration was varied between 0.5 ppm and 6 ppm. Infrared (IR) transmission mapping was employed to study the Te precipitates: precipitates in as-grown and annealed CZT wafers were systematically studied using an in-house IR mapping system (resolution of 1.5 µm). By applying our standard annealing procedure for CZT (Zn = 4%), or a two-step anneal, to radiation-grade CZT (Zn = 10%), we achieved precipitate-free (size < 1 µm) n+-type CZT with resistivity > 10^9-10^10 Ω cm. We believe that the Te precipitates are p-type defects, and that reducing their number causes the CZT to become n+-type; we therefore varied or reduced the indium dopant concentration during growth and changed the Te-precipitate size and density by using different Cd temperatures and different annealing procedures. We compared Te-precipitate size and density against indium dopant concentration and found that CZT with smaller Te precipitates is suitable for radiation uses, but precipitate-free material is impossible to use in radiation detectors, because the CZT would become

  6. Radiometric Dating of Large Volume Flank Collapses in The Lesser Antilles Arc.

    NASA Astrophysics Data System (ADS)

    Quidelleur, X.; Samper, A.; Boudon, G.; Le Friant, A.; Komorowski, J.

    2004-12-01

    It is now accepted that flank collapses, probably triggered by magmatic inflation and/or gravitational instability, are a recurrent process in the evolution of the Lesser Antilles Arc volcanoes. Large-magnitude debris avalanche deposits have been identified offshore, in the Grenada basin (Deplus et al., 2001; Le Friant et al., 2001). The widest extents have been observed off the coast of Dominica and St Lucia, with associated volumes up to 20 km³. Another large-scale event, whose marine evidence is probably covered by sediments and later flank collapses, has been inferred on land from morphological evidence and the characteristic deposits of the Carbets structure in Martinique. We present radiometric dating of these three major events using the K-Ar Cassignol-Gillot technique performed on selected groundmass. Both the volcanic formations preceding the flank collapses (remnants of the horseshoe-shaped structures or basal lava flows) and those following the landslides (lava domes) have been dated. In the Qualibou depression of St. Lucia, the former structure has been dated at 1096 ± 16 ka and the collapse constrained by dome emplacement prior to 97 ± 2 ka (Petit Piton). In Dominica, several structures have been associated with repetitive flank collapse events inferred from marine data (Le Friant et al., 2002). The Plat-Pays event probably occurred after 96 ± 2 ka. Inside the inherited depression, Scotts Head, which is interpreted as a proximal, kilometre-scale megablock from the Soufriere avalanche, has been dated at 14 ± 1 ka, providing an older bound for this event. In Martinique, three different domes within the Carbets structure have been dated at 335 ± 5 ka. Assuming rapid magma emplacement following pressure release due to unloading, this constrains the age of this high-magnitude event. Finally, these results obtained from three of the most voluminous flank collapses provide constraints to estimate the recurrence of these events, which represent one of the major hazards associated with volcanoes of the Lesser Antilles Arc.

  7. Radiometric dating of three large volume flank collapses in the Lesser Antilles Arc

    NASA Astrophysics Data System (ADS)

    Samper, A.; Quidelleur, X.; Boudon, G.; Le Friant, A.; Komorowski, J. C.

    2008-10-01

    It is now recognised that flank collapses are a recurrent process in the evolution of the Lesser Antilles Arc volcanoes. Large magnitude debris-avalanche deposits have been identified off the coast of Dominica, Martinique and St. Lucia, with associated volumes up to 20 km³ [Deplus, C., Le Friant, A., Boudon, G., Komorowski, J.-C., Villemant, B., Harford, C., Ségoufin, J., Cheminée, J.-L., 2001. Submarine evidence for large-scale debris avalanches in the Lesser Antilles Arc. Earth Planet. Sci. Lett., 192: 145-157.]. We present new radiometric dating of three major events using the K-Ar Cassignol-Gillot technique. In the Qualibou depression of St. Lucia, a collapse has been constrained by dome emplacement prior to 95 ± 2 ka. In Dominica, where repetitive flank collapse events have occurred [Le Friant, A., Boudon, G., Komorowski, J.-C., Deplus, C., 2002. L'île de la Dominique, à l'origine des avalanches de débris les plus volumineuses de l'arc des Petites Antilles. C.R. Geoscience, 334: 235-243], the Plat Pays event probably occurred after 96 ± 2 ka. Inside the depression caused by this event, Scotts Head, which is interpreted as a proximal megablock from the subsequent Soufriere avalanche event, has been dated at 14 ± 1 ka, providing an older bound for this event. On Martinique, three different domes within the Carbets structure dated at 337 ± 5 ka constrain the age of this high magnitude event. Finally, these results obtained from three of the most voluminous flank collapses provide constraints to estimate the recurrence of these events, which represent one of the major hazards associated with volcanoes of the Lesser Antilles Arc.

  8. Large-eddy simulation of turbulent cavitating flow in a micro channel

    SciTech Connect

    Egerer, Christian P.; Hickel, Stefan; Schmidt, Steffen J.; Adams, Nikolaus A.

    2014-08-15

    Large-eddy simulations (LES) of cavitating flow of a Diesel-fuel-like fluid in a generic throttle geometry are presented. Two-phase regions are modeled by a parameter-free thermodynamic equilibrium mixture model, and compressibility of the liquid and the liquid-vapor mixture is taken into account. The Adaptive Local Deconvolution Method (ALDM), adapted for cavitating flows, is employed for discretizing the convective terms of the Navier-Stokes equations for the homogeneous mixture. ALDM is a finite-volume-based implicit LES approach that merges physically motivated turbulence modeling and numerical discretization. Validation of the numerical method is performed for a cavitating turbulent mixing layer. Comparisons with experimental data of the throttle flow at two different operating conditions are presented. The LES with the employed cavitation modeling predicts relevant flow and cavitation features accurately within the uncertainty range of the experiment. The turbulence structure of the flow is further analyzed with an emphasis on the interaction between cavitation and coherent motion, and on the statistically averaged-flow evolution.
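
    A property underlying such simulations, which the compressible mixture treatment above must capture, is how strongly the speed of sound drops once even a small vapour fraction appears. The short calculation below is illustrative only (assumed Diesel-like liquid and vapour properties, not the paper's data), using a volume-fraction-averaged mixture density and Wood's formula for the mixture speed of sound.

      import numpy as np

      rho_l, c_l = 830.0, 1250.0    # liquid density [kg/m^3] and sound speed [m/s] (assumed)
      rho_v, c_v = 10.0, 150.0      # vapour values (assumed)

      alpha = np.linspace(0.0, 1.0, 11)              # vapour volume fraction
      rho_m = alpha * rho_v + (1 - alpha) * rho_l    # mixture density

      # Wood's formula: 1/(rho_m c_m^2) = alpha/(rho_v c_v^2) + (1-alpha)/(rho_l c_l^2)
      inv_c2 = rho_m * (alpha / (rho_v * c_v**2) + (1 - alpha) / (rho_l * c_l**2))
      c_m = 1.0 / np.sqrt(inv_c2)

      for a, c in zip(alpha, c_m):
          print(f"alpha = {a:4.2f}  c_mixture = {c:8.1f} m/s")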

  9. High-order Hybridized Discontinuous Galerkin methods for Large-Eddy Simulation

    NASA Astrophysics Data System (ADS)

    Fernandez, Pablo; Nguyen, Ngoc-Cuong; Peraire, Jaime

    2016-11-01

    With the increase in computing power, Large-Eddy Simulation emerges as a promising technique to improve both knowledge of complex flow physics and reliability of flow predictions. Most LES works, however, are limited to simple geometries and low Reynolds numbers due to high computational cost. While most existing LES codes are based on 2nd-order finite volume schemes, the efficient and accurate prediction of complex turbulent flows may require a paradigm shift in computational approach. This drives a growing interest in the development of Discontinuous Galerkin (DG) methods for LES. DG methods allow for high-order, conservative implementations on complex geometries, and offer opportunities for improved sub-grid scale modeling. Also, high-order DG methods are better-suited to exploit modern HPC systems. In the spirit of making them more competitive, researchers have recently developed the hybridized DG methods that result in reduced computational cost and memory footprint. In this talk we present an overview of high-order hybridized DG methods for LES. Numerical accuracy, computational efficiency, and SGS modeling issues are discussed. Numerical results up to Re=460k show rapid grid convergence and excellent agreement with experimental data at moderate computational cost.

  10. Curling probe measurement of a large-volume pulsed plasma with surface magnetic confinement

    NASA Astrophysics Data System (ADS)

    Pandey, A.; Tashiro, H.; Sakakibara, W.; Nakamura, K.; Sugai, H.

    2016-12-01

    A curling probe (CP) based on microwave resonance is applied to the measurement of electron density in a pulsed DC glow discharge under surface magnetic confinement (SMC) provided by a number of permanent magnets on the chamber wall. Owing to the SMC effects, a 1 m scale large-volume plasma is generated by a relatively low voltage (~1 kV) at low pressure (~1 Pa) in various gases (Ar, CH4, and C2H2). The temporal variation of the electron density is measured for pulse frequencies f = 0.5-25 kHz and various discharge-on times (T_ON) with high time resolution (~0.2 µs), using the on-point mode. In general, the electron density starts to increase at time t = 0 after turn-on of the discharge voltage, reaches its peak at t = T_ON, and then decreases after turn-off. The peak electron density is observed to increase with the pulse frequency f for constant T_ON, owing to the residual plasma. This dependence is successfully formulated using a semi-empirical model. The spatio-temporal evolution of the cathode sheath in the pulsed discharge is revealed by a 1 m long movable CP. The measured thickness of the high-voltage cathode fall in a steady state coincides with the value of the so-called Child-Langmuir sheath.
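
    As a rough illustration of how a resonance-probe frequency shift maps to electron density, the sketch below assumes the simplest possible model, a resonator uniformly filled with plasma of permittivity 1 - f_pe²/f², so that f_res² = f0² + f_pe²; the actual curling-probe calibration is more involved, and all frequencies here are hypothetical.

      import math

      f0 = 5.00e9      # vacuum resonance frequency [Hz] (hypothetical)
      f1 = 5.06e9      # resonance frequency with plasma on [Hz] (hypothetical)

      # For a fully plasma-filled resonator: f1^2 = f0^2 + f_pe^2.
      f_pe = math.sqrt(f1**2 - f0**2)

      # Plasma frequency relates to density via f_pe[Hz] ~ 8980 * sqrt(n_e[cm^-3]).
      n_e = (f_pe / 8980.0) ** 2
      print(f"n_e ~ {n_e:.2e} cm^-3")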

  11. High-Dimensional Function Approximation With Neural Networks for Large Volumes of Data.

    PubMed

    Andras, Peter

    2017-01-25

    Approximation of high-dimensional functions is a challenge for neural networks due to the curse of dimensionality. Often the data for which the approximated function is defined reside on a low-dimensional manifold, and in principle the approximation of the function over this manifold should improve the approximation performance. It has been shown that projecting the data manifold into a lower-dimensional space, followed by neural network approximation of the function over this space, provides a more precise approximation of the function than neural network approximation in the original data space. However, if the data volume is very large, the projection into the low-dimensional space has to be based on a limited sample of the data. Here, we investigate the nature of the approximation error of neural networks trained over the projection space. We show that such neural networks should have better approximation performance than neural networks trained on high-dimensional data, even if the projection is based on a relatively sparse sample of the data manifold. We also find that it is preferable to use a uniformly distributed sparse sample of the data for the purpose of generating the low-dimensional projection. We illustrate these results with the practical neural network approximation of a set of functions defined on high-dimensional data, including real-world data.
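
    The sketch below illustrates the projection-then-approximate idea on synthetic data (the manifold, network size, and sample sizes are all assumed for illustration): data near a 3-D manifold embedded in 100 dimensions are projected with PCA fitted on a limited sample, and a small network is trained over the projection space.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.neural_network import MLPRegressor
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(2)
      t = rng.uniform(-1, 1, (2000, 3))                    # latent 3-D manifold coordinates
      A = rng.standard_normal((3, 100))
      X = t @ A + 0.01 * rng.standard_normal((2000, 100))  # data embedded in 100-D
      y = np.sin(3 * t[:, 0]) + t[:, 1] * t[:, 2]          # target function on the manifold

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      # Projection learned from a limited sample of the data, as in the paper.
      proj = PCA(n_components=3).fit(X_tr[:200])

      net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
      net.fit(proj.transform(X_tr), y_tr)
      print("R^2 on projected test data:", net.score(proj.transform(X_te), y_te))

    Swapping the projection for one fitted on the full training set, or training directly in the 100-dimensional space, gives a direct feel for the effects the paper quantifies.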

  12. Development testing of large volume water sprays for warm fog dispersal

    NASA Technical Reports Server (NTRS)

    Keller, V. W.; Anderson, B. J.; Burns, R. A.; Lala, G. G.; Meyer, M. B.; Beard, K. V.

    1986-01-01

    A new brute-force method of warm fog dispersal is described. The method uses large volume recycled water sprays to create curtains of falling drops through which the fog is processed by the ambient wind and spray induced air flow. Fog droplets are removed by coalescence/rainout. The efficiency of the technique depends upon the drop size spectra in the spray, the height to which the spray can be projected, the efficiency with which fog laden air is processed through the curtain of spray, and the rate at which new fog may be formed due to temperature differences between the air and spray water. Results of a field test program, implemented to develop the data base necessary to assess the proposed method, are presented. Analytical calculations based upon the field test results indicate that this proposed method of warm fog dispersal is feasible. Even more convincingly, the technique was successfully demonstrated in the one natural fog event which occurred during the test program. Energy requirements for this technique are an order of magnitude less than those to operate a thermokinetic system. An important side benefit is the considerable emergency fire extinguishing capability it provides along the runway.

  13. Multi-stage polymer systems for the autonomic regeneration of large damage volumes

    NASA Astrophysics Data System (ADS)

    Santa Cruz, Windy Ann

    Recovery of catastrophic damage requires a robust chemistry capable of addressing the complex challenges encountered by autonomic regeneration. Although self-healing polymers have the potential to increase material lifetimes and safety, these systems have been limited to recovery of internal microcracks and surface damage. Current technologies thereby fail to address the restoration of large, open damage volumes. A regenerative chemistry was developed by incorporating a gel scaffold within liquid healing agents. The healing system undergoes two stages, sol-gel and gel-polymer. Stage 1, rapid formation of a crosslinked gel, creates a synthetic support for the healing agents as they deposit across the damage region. Stage 2 comprises the polymerization of monomer using a room temperature redox initiation system to recover the mechanical properties of the substrate. The two stages are chemically compatible and only react when a specific reaction trigger is introduced -- an acid catalyst for gelation and initiator-promoter for polymerization. Cure kinetics, chemical and mechanical properties can be tuned by employing different monomer systems. The versatile gelation chemistry gels over 20 vinyl monomers to yield both thermoplastic and thermosetting polymers. The healing efficacy of the two-stage system was studied in thin, vascularized epoxy sheets. By splitting the chemistry into two low viscosity fluids, we demonstrated regeneration of gaps up to 9 mm in diameter. The combination of microvascular networks and a new healing chemistry demonstrates an innovative healing system that significantly exceeds the performance of traditional methods.

  14. Detecting Boosted Dark Matter from the Sun with Large Volume Neutrino Detectors

    SciTech Connect

    Berger, Joshua; Cui, Yanou; Zhao, Yue

    2015-04-02

    We study novel scenarios where thermal dark matter (DM) can be efficiently captured in the Sun and annihilate into boosted dark matter. In models with semi-annihilating DM, where DM has a non-minimal stabilization symmetry, or in models with a multi-component DM sector, annihilations of DM can give rise to stable dark sector particles with moderate Lorentz boosts. We investigate both of these possibilities, presenting concrete models as proofs of concept. Both scenarios can yield viable thermal relic DM with masses O(1)-O(100) GeV. Taking advantage of the energetic proton recoils that arise when the boosted DM scatters off matter, we propose a detection strategy which uses large volume terrestrial detectors, such as those designed to detect neutrinos or proton decays. In particular, we propose a search for proton tracks pointing towards the Sun. We focus on signals at Cherenkov-radiation-based detectors such as Super-Kamiokande (SK) and its upgrade Hyper-Kamiokande (HK). We find that with spin-dependent scattering as the dominant DM-nucleus interaction at low energies, boosted DM can leave detectable signals at SK or HK, with sensitivity comparable to DM direct detection experiments while being consistent with current constraints. Our study provides a new search path for DM sectors with non-minimal structure.

  15. Detecting boosted dark matter from the Sun with large volume neutrino detectors

    SciTech Connect

    Berger, Joshua; Cui, Yanou; Zhao, Yue

    2015-02-01

    We study novel scenarios where thermal dark matter (DM) can be efficiently captured in the Sun and annihilate into boosted dark matter. In models with semi-annihilating DM, where DM has a non-minimal stabilization symmetry, or in models with a multi-component DM sector, annihilations of DM can give rise to stable dark sector particles with moderate Lorentz boosts. We investigate both of these possibilities, presenting concrete models as proofs of concept. Both scenarios can yield viable thermal relic DM with masses O(1)-O(100) GeV. Taking advantage of the energetic proton recoils that arise when the boosted DM scatters off matter, we propose a detection strategy which uses large volume terrestrial detectors, such as those designed to detect neutrinos or proton decays. In particular, we propose a search for proton tracks pointing towards the Sun. We focus on signals at Cherenkov-radiation-based detectors such as Super-Kamiokande (SK) and its upgrade Hyper-Kamiokande (HK). We find that with spin-dependent scattering as the dominant DM-nucleus interaction at low energies, boosted DM can leave detectable signals at SK or HK, with sensitivity comparable to DM direct detection experiments while being consistent with current constraints. Our study provides a new search path for DM sectors with non-minimal structure.

  16. Configuration Analysis of the ERS Points in Large-Volume Metrology System.

    PubMed

    Jin, Zhangjun; Yu, Cijun; Li, Jiangxiong; Ke, Yinglin

    2015-09-22

    In aircraft assembly, multiple laser trackers are used simultaneously to measure large-scale aircraft components. To combine the independent measurements, the transformation matrices between the laser trackers' coordinate systems and the assembly coordinate system are calculated by measuring the enhanced reference system (ERS) points. This article aims to understand how the configuration of the ERS points affects the transformation matrix errors, and then to optimize the deployment of the ERS points to reduce those errors. To this end, an explicit model is derived to estimate the transformation matrix errors. The estimation model is verified by an experiment carried out on the factory floor. Based on the proposed model, a group of sensitivity coefficients is derived to evaluate the quality of a given configuration of the ERS points, and several typical configurations are analyzed in detail with the sensitivity coefficients. Finally, general guidance is established for the deployment of the ERS points in terms of the layout, the volume size and the number of the ERS points, as well as the position and orientation of the assembly coordinate system.
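
    The core computation that the ERS points feed, recovering a rigid transformation from points measured in two frames, can be written compactly with the Kabsch/SVD method. The sketch below uses hypothetical point coordinates and is not the paper's error model, but it is the map through which any configuration-dependent measurement noise propagates.

      import numpy as np

      def rigid_transform(P, Q):
          """Least-squares rotation R and translation t with Q ~ R @ P + t (Kabsch)."""
          cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
          H = (P - cp) @ (Q - cq).T
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
          R = Vt.T @ D @ U.T
          return R, cq - R @ cp

      rng = np.random.default_rng(3)
      P = rng.uniform(0, 10, (3, 6))                 # ERS-like points in the tracker frame (hypothetical)
      theta = 0.3
      R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                         [np.sin(theta),  np.cos(theta), 0],
                         [0, 0, 1]])
      Q = R_true @ P + np.array([[1.0], [2.0], [0.5]]) + 0.001 * rng.standard_normal((3, 6))

      R, t = rigid_transform(P, Q)
      print("rotation error:", np.linalg.norm(R - R_true))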

  17. Curling probe measurement of large-volume pulsed plasma confined by surface magnetic field

    NASA Astrophysics Data System (ADS)

    Pandey, Anil; Sakakibara, Wataru; Matsuoka, Hiroyuki; Nakamura, Keiji; Sugai, Hideo; Chubu University Team; DOWA Thermotech Collaboration

    2015-09-01

    The curling probe (CP) has recently been developed to enable local electron density measurements even in plasmas used for non-conducting film CVD. The electron density is obtained from the shift of the resonance frequency of a spiral antenna between discharge-ON and discharge-OFF, monitored by a network analyzer (NWA). In the case of a pulsed glow discharge, the discharge pulse must be synchronized with the frequency sweep of the NWA. In this paper, we report time- and space-resolved CP measurements of electron density in a large volume plasma (80 cm diameter, 110 cm length) confined by a surface magnetic field (multipole cusp field, ~0.03 T). For plasma-aided modification of metal surfaces, the plasma is produced by a 1 kV glow discharge at pulse frequencies of 0.3-25 kHz with various duty ratios in Ar, N2, or C2H2 gas at a pressure of ~1 Pa. A radially movable CP revealed a remarkable effect of the surface magnetic confinement: detachment of the plasma from the vessel wall and a fairly uniform plasma in the central region. In the afterglow phase, the electron density was observed to decrease much faster in the C2H2 discharge than in the Ar discharge.

  18. A new large-volume metal reference standard for radioactive waste management

    PubMed Central

    Tzika, F.; Hult, M.; Stroh, H.; Marissens, G.; Arnold, D.; Burda, O.; Kovář, P.; Suran, J.; Listkowska, A.; Tyminski, Z.

    2016-01-01

    A new large-volume metal reference standard has been developed. The intended use is for calibration of free-release radioactivity measurement systems and is made up of cast iron tubes placed inside a box of the size of a Euro-pallet (80 × 120 cm). The tubes contain certified activity concentrations of 60Co (0.290±0.006 Bq g−1) and 110mAg (3.05±0.09 Bq g−1) (reference date: 30 September 2013). They were produced using centrifugal casting from a smelt into which 60Co was first added and then one piece of neutron irradiated silver wire was progressively diluted. The iron castings were machined to the desirable dimensions. The final material consists of 12 iron tubes of 20 cm outer diameter, 17.6 cm inner diameter, 40 cm length/height and 245.9 kg total mass. This paper describes the reference standard and the process of determining the reference activity values. PMID:25977349

  19. A new large-volume metal reference standard for radioactive waste management.

    PubMed

    Tzika, F; Hult, M; Stroh, H; Marissens, G; Arnold, D; Burda, O; Kovář, P; Suran, J; Listkowska, A; Tyminski, Z

    2016-03-01

    A new large-volume metal reference standard has been developed. The intended use is for calibration of free-release radioactivity measurement systems and is made up of cast iron tubes placed inside a box of the size of a Euro-pallet (80 × 120 cm). The tubes contain certified activity concentrations of 60Co (0.290 ± 0.006 Bq g−1) and 110mAg (3.05 ± 0.09 Bq g−1) (reference date: 30 September 2013). They were produced using centrifugal casting from a smelt into which 60Co was first added and then one piece of neutron irradiated silver wire was progressively diluted. The iron castings were machined to the desirable dimensions. The final material consists of 12 iron tubes of 20 cm outer diameter, 17.6 cm inner diameter, 40 cm length/height and 245.9 kg total mass. This paper describes the reference standard and the process of determining the reference activity values.

  20. Twinning in vapour-grown, large volume Cd1-xZnxTe crystals

    NASA Astrophysics Data System (ADS)

    Tanner, B. K.; Mullins, J. T.; Pym, A. T. G.; Maneuski, D.

    2016-08-01

    The onset of twinning from (2̄1̄1̄) to (1̄3̄3̄) in large volume Cd1-xZnxTe crystals, grown by vapour transport on (2̄1̄1̄)-oriented (often referred to as (211)B) GaAs seeds, has been investigated using X-ray diffraction imaging (X-ray topography). Twinning is not associated with strains at the GaAs/CdTe interface, as the initial growth was always in the (2̄1̄1̄) orientation. Nor is twinning related to lattice strains associated with the injection of Zn subsequent to the initial nucleation and growth of pure CdTe, as in both cases twinning occurred after growth of several mm of Cd1-xZnxTe. While in both cases examined there was a region of disturbed growth prior to the twinning transition, in neither crystal does this strain appear to have nucleated the twinning process. In both cases, un-twinned material remained after twinning was observed, the scale of the resulting twin boundaries being sub-micron. Simultaneous twinning across the whole sample surface was observed in one sample, whereas in the other, twinning was nucleated at different points and times during growth.

  1. A uniform laminar air plasma plume with large volume excited by an alternating current voltage

    NASA Astrophysics Data System (ADS)

    Li, Xuechen; Bao, Wenting; Chu, Jingdi; Zhang, Panpan; Jia, Pengying

    2015-12-01

    Using a plasma jet composed of two needle electrodes, a laminar plasma plume with large volume is generated in air under alternating current voltage excitation. High-speed photography shows a train of filaments propagating periodically away from their birthplace along the gas flow; the laminar plume is in fact a temporal superposition of the arched filament train. The filament consists of a negative glow near the real-time cathode, a positive column near the real-time anode, and a Faraday dark space between them. The propagation velocity of the filament increases with the gas flow rate. Furthermore, the filament lifetime tends to follow a normal (Gaussian) distribution, and the most probable lifetime decreases with increasing gas flow rate or decreasing averaged peak voltage. Results also indicate that the real-time peak current decreases and the real-time peak voltage increases as the filament propagates along the gas flow. The voltage-current curve indicates that, in every discharge cycle, the filament evolves from a Townsend discharge to a glow discharge, after which the discharge quenches. Furthermore, plasma parameters such as the electron density, the vibrational temperature and the gas temperature are investigated based on the optical spectrum emitted from the laminar plume.

  2. Peak distortions arising from large-volume injections in supercritical fluid chromatography.

    PubMed

    Dai, Yun; Li, Geng; Rajendran, Arvind

    2015-05-01

    Preparative separations in supercritical fluid chromatography (SFC) involve the injection of large volumes of the solute. In SFC, the mobile phase is typically high-pressure CO2 + modifier, and the solute to be injected is usually dissolved in the modifier. Two types of injection method, modifier-stream and mixed-stream, are common in commercial preparative SFC systems. In modifier-stream injection, the injection is made into the modifier stream, which is later mixed with the CO2 stream; in mixed-stream injection, the injection is made into a mixed CO2 + modifier stream. In this work, a systematic experimental and modelling study of the two techniques is reported using single enantiomers of flurbiprofen on Chiralpak AD-H with CO2 + methanol as the mobile phase. While modifier-stream injection shows non-distorted peaks, mixed-stream injection results in severe peak distortion. By comparing the modelling and experimental results, it is shown that the modifier "plug" introduced in mixed-stream injection is the primary cause of the peak distortions. The experimental results also point to the possible existence of viscous fingering, which contributes to further peak distortion.

  3. Development of a large mosaic volume phase holographic (VPH) grating for APOGEE

    NASA Astrophysics Data System (ADS)

    Arns, James; Wilson, John C.; Skrutskie, Mike; Smee, Steve; Barkhouser, Robert; Eisenstein, Daniel; Gunn, Jim; Hearty, Fred; Harding, Al; Maseman, Paul; Holtzman, Jon; Schiavon, Ricardo; Gillespie, Bruce; Majewski, Steven

    2010-07-01

    Volume phase holographic (VPH) gratings are increasingly being used as diffractive elements in astronomical instruments due to their potential for very high peak diffraction efficiencies and the possibility of a compact instrument design when the gratings are used in transmission. Historically, VPH grating (VPHG) sizes have been limited by the size of manufacturers' holographic recording optics. We report on the design, specification and fabrication of a large, 290 mm × 475 mm, elliptically-shaped mosaic VPHG for the Apache Point Observatory Galactic Evolution Experiment (APOGEE) spectrograph. This high-resolution near-infrared multi-object spectrograph is under construction for the Sloan Digital Sky Survey III (SDSS III). The 1008.6 lines/mm VPHG was designed for optimized performance over a wavelength range from 1.5 to 1.7 μm. A step-and-repeat exposure method was chosen to fabricate a three-segment mosaic on a 305 mm × 508 mm monolithic fused-silica substrate. Specification considerations imposed on the VPHG to assure that the mosaic construction would satisfy the end-use requirements are discussed. Production issues and test results of the mosaic VPHG are also discussed.

  4. Exploring the limiting timing resolution for large volume CZT detectors with waveform analysis

    PubMed Central

    Meng, L.J.; He, Z.

    2016-01-01

    This paper presents a study exploring the limiting timing resolution that can be achieved with a large-volume 3-D position-sensitive CZT detector. The interaction timing information was obtained by fitting the measured cathode waveforms to pre-defined waveform models. We compared the results from several different waveform models. Timing resolutions of ~9.5 ns for 511 keV full-energy events and ~11.6 ns for all detected events with energy deposition above 250 keV were achieved with a detailed model of the cathode waveform as a function of interaction location and energy deposition. This detailed modeling also allowed us to derive a theoretical lower bound for the error on the estimated interaction timing. Experimental results and theoretical predictions agreed well, indicating that the best timing resolution achievable in the 1 cm3 CZT detector tested is ~10 ns. It was also shown that the correlation between sampled amplitudes in the cathode waveforms is an important limiting factor for the achievable timing resolution. PMID:28260808
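
    The sketch below is a toy version of the fitting step (synthetic samples, and a deliberately crude ramp model in place of the paper's depth- and energy-dependent waveform models): fit a parametric cathode waveform to noisy samples and read the interaction time off the fitted start-time parameter.

      import numpy as np
      from scipy.optimize import curve_fit

      def cathode_model(t, t0, slope, amp):
          """Linear ramp starting at t0 and saturating at amplitude amp (toy model)."""
          return np.clip(slope * (t - t0), 0.0, amp)

      t = np.arange(0, 400, 10.0)                        # sample times [ns]
      true = cathode_model(t, 97.0, 0.02, 3.0)
      rng = np.random.default_rng(4)
      wave = true + 0.05 * rng.standard_normal(t.size)   # add electronic noise

      (t0, slope, amp), _ = curve_fit(cathode_model, t, wave, p0=(80.0, 0.015, 2.5))
      print(f"estimated interaction time t0 = {t0:.1f} ns")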

  5. Plasma response to electron energy filter in large volume plasma device

    SciTech Connect

    Sanyasi, A. K.; Awasthi, L. M.; Mattoo, S. K.; Srivastava, P. K.; Singh, S. K.; Singh, R.; Kaw, P. K.

    2013-12-15

    An electron energy filter (EEF) is embedded in the Large Volume Plasma Device plasma for carrying out studies on the excitation of plasma turbulence by a gradient in electron temperature (ETG), described in the paper of Mattoo et al. [S. K. Mattoo et al., Phys. Rev. Lett. 108, 255007 (2012)]. In this paper, we report results on the response of the plasma to the EEF. It is shown that inhomogeneity in the magnetic field of the EEF switches on several physical phenomena, resulting in plasma regions with different characteristics, including a plasma region free from energetic electrons that is suitable for the study of ETG turbulence. Specifically, we report that localized structures of plasma density, potential, electron temperature, and plasma turbulence are excited in the EEF plasma. It is shown that the structures of electron temperature and potential are created by the energy dependence of electron transport in the filter region. On the other hand, although the structure of the plasma density has its origin in particle transport, two distinct steps of the density structure emerge from the dominance of collisionality in the source-EEF region and of Bohm diffusion in the EEF-target region. It is argued, and experimental evidence is provided, that a drift-like flute Rayleigh-Taylor instability exists in the EEF plasma.

  6. On `light' fermions and proton stability in `big divisor' D3/ D7 large volume compactifications

    NASA Astrophysics Data System (ADS)

    Misra, Aalok; Shukla, Pramod

    2011-06-01

    Building on our earlier work (Misra and Shukla, Nucl. Phys. B 827:112, 2010; Phys. Lett. B 685:347-352, 2010), we show the possibility of generating "light" fermion mass scales of MeV-GeV range (possibly related to the first two generations of quarks/leptons) as well as eV (possibly related to the first two generations of neutrinos) in type IIB string theory compactified on Swiss-Cheese orientifolds in the presence of a mobile space-time filling D3-brane restricted to (in principle) stacks of fluxed D7-branes wrapping the "big" divisor Σ_B. This part of the paper is an expanded version of the latter half of Sect. 3 of a published short invited review (Misra, Mod. Phys. Lett. A 26:1, 2011) written by one of the authors [AM]. Further, we also show that there are no SUSY GUT-type dimension-five operators corresponding to proton decay, and we estimate the proton lifetime from a SUSY GUT-type four-fermion dimension-six operator to be 10⁶¹ years. Based on GLSM calculations in (Misra and Shukla, Nucl. Phys. B 827:112, 2010) for obtaining the geometric Kähler potential for the "big divisor", and using Donaldson's algorithm, we also briefly discuss in the first of the two appendices the metric we obtain for the Swiss-Cheese Calabi-Yau used, which becomes Ricci flat in the large-volume limit.

  7. Improvements in Monte Carlo Simulation of Large Electron Fields

    SciTech Connect

    Faddegon, Bruce A.; Perl, Joseph; Asai, Makoto

    2007-11-28

    Two Monte Carlo systems, EGSnrc and Geant4, were used to calculate dose distributions in large electron fields used in radiotherapy. Source and geometry parameters were adjusted to match calculated results with measurement. Both codes were capable of accurately reproducing the measured dose distributions of the 6 electron beams available on the accelerator. Depth penetration was matched to 0.1 cm. Depth dose curves generally agreed to 2% in the build-up region, although there is an additional 2-3% experimental uncertainty in this region. Dose profiles matched to 2% at the depth of maximum dose in the central region of the beam, out to the point of the profile where the dose begins to fall rapidly. A 3%/3mm match was obtained outside the central region except for the 6 MeV beam, where dose differences reached 5%. The discrepancy observed in the bremsstrahlung tail in published results that used EGS4 is no longer evident. The different systems required different source energies, incident beam angles, thicknesses of the exit window and primary foils, and distance between the primary and secondary foil. These results underscore the requirement for an experimental benchmark of electron scatter for beam energies and foils relevant to radiotherapy.

  8. Cut-cell method based large-eddy simulation of tip-leakage flow

    NASA Astrophysics Data System (ADS)

    Pogorelov, Alexej; Meinke, Matthias; Schröder, Wolfgang

    2015-07-01

    The turbulent low Mach number flow through an axial fan at a Reynolds number of 9.36 × 10⁵ based on the outer casing diameter is investigated by large-eddy simulation. A finite-volume flow solver in an unstructured hierarchical Cartesian setup for the compressible Navier-Stokes equations is used. To account for sharp edges, a fully conservative cut-cell approach is applied. A newly developed rotational periodic boundary condition for Cartesian meshes is introduced such that the simulations are performed just for a 72° segment, i.e., the flow field over one out of five axial blades is resolved. The focus of this numerical analysis is on the development of the vortical flow structures in the tip-gap region. A detailed grid convergence study is performed on four computational grids with 50 × 10⁶, 250 × 10⁶, 1 × 10⁹, and 1.6 × 10⁹ cells. Results of the instantaneous and the mean fan flow field are thoroughly analyzed based on the solution with 1 × 10⁹ cells. High levels of turbulent kinetic energy and pressure fluctuations are generated by a tip-gap vortex upstream of the blade, the separating vortices inside the tip gap, and a counter-rotating vortex on the outer casing wall. An intermittent interaction of the turbulent wake, generated by the tip-gap vortex, with the downstream blade, leads to a cyclic transition with high pressure fluctuations on the suction side of the blade and a decay of the tip-gap vortex. The disturbance of the tip-gap vortex results in an unsteady behavior of the turbulent wake causing the intermittent interaction. For this interaction and the cyclic transition, two dominant frequencies are identified which perfectly match with the characteristic frequencies in the experimental sound power level and therefore explain their physical origin.
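
    The rotational periodic boundary condition mentioned above amounts to copying scalar quantities straight across the 72° interface while rotating positions and vector quantities by the segment angle; a minimal sketch of that mapping, with illustrative values, is given below.

      import numpy as np

      theta = np.deg2rad(72.0)                 # one of five blade passages
      R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                    [np.sin(theta),  np.cos(theta), 0.0],
                    [0.0,            0.0,           1.0]])   # rotation about the fan axis (z)

      x_donor = np.array([0.10, 0.02, 0.30])   # donor-cell position [m] (illustrative)
      u_donor = np.array([5.0, 1.0, 20.0])     # donor-cell velocity [m/s] (illustrative)

      # Scalars (density, pressure) copy straight across; vectors are rotated.
      x_ghost = R @ x_donor
      u_ghost = R @ u_donor
      print(x_ghost, u_ghost)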

  9. Modeling Persistent Contrails in a Large Eddy Simulation and a Global Climate Model

    NASA Astrophysics Data System (ADS)

    Naiman, A. D.; Lele, S. K.; Wilkerson, J. T.; Jacobson, M. Z.

    2009-12-01

    Two models of aircraft condensation trail (contrail) evolution have been developed: a high resolution, three-dimensional Large Eddy Simulation (LES) and a simple, low-cost Subgrid Contrail Model (SCM). The LES model was used to simulate contrail development from one second to twenty minutes after emission by the passing aircraft. The LES solves the incompressible Navier-Stokes equations with a Boussinesq approximation for buoyancy forces on an unstructured periodic grid. The numerical scheme uses a second-order finite volume spatial discretization and an implicit fractional-step method for time advancement. Lagrangian contrail particles grow according to a microphysical model of ice deposition and sublimation. The simulation is initialized with the wake of a commercial jet superimposed on a decaying turbulence field. The ambient atmosphere is stable and has a supersaturated relative humidity with respect to ice. Grid resolution is adjusted during the simulation, allowing higher resolution of flow structures than previous studies. We present results of a parametric study in which ambient turbulence levels, vertical wind shear, and aircraft type were varied. We find that higher levels of turbulence and shear promote mixing of aircraft exhaust with supersaturated ambient air, resulting in faster growth of ice and wider dispersion of the exhaust plume. The SCM was developed as a parameterization of contrail dynamics intended for use within a global model that examines the effect of commercial aviation on climate. The SCM provides an analytic solution to the changes in size and shape of a contrail cross-section over time due to global model grid-scale vertical wind shear and turbulence parameters. The model was derived from the physical equations of motion of a plume in a sheared, turbulent environment. Approximations based on physical reasoning and contrail observations allowed these equations to be reduced to simple ordinary differential equations in time with exact
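
    As a hedged illustration of the kind of reduced model the SCM represents (the equations below are one plausible textbook form for the second moments of a Gaussian plume in constant shear and diffusivity, not necessarily the authors' exact system), the cross-section moments obey linear ODEs with a closed-form solution:

      import numpy as np

      def plume_moments(t, s=0.002, Dh=20.0, Dv=0.15, shh0=400.0, szz0=400.0, shz0=0.0):
          """Exact solution of d(szz)/dt = 2*Dv, d(shz)/dt = s*szz,
          d(shh)/dt = 2*Dh + 2*s*shz for the plume variances [m^2] at time t [s].
          All parameter values are assumed, for illustration only."""
          szz = szz0 + 2.0 * Dv * t
          shz = shz0 + s * (szz0 * t + Dv * t**2)
          shh = (shh0 + 2.0 * Dh * t
                 + 2.0 * s * (shz0 * t + s * (szz0 * t**2 / 2.0 + Dv * t**3 / 3.0)))
          return shh, shz, szz

      for t in (0.0, 600.0, 1200.0):   # 0, 10, 20 minutes
          shh, shz, szz = plume_moments(t)
          print(f"t={t:6.0f} s  width ~ {np.sqrt(shh):7.1f} m  depth ~ {np.sqrt(szz):6.1f} m")

    The cubic-in-time term in the horizontal variance is the shear-diffusion coupling, which is why shear disperses the plume fastest in such models.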

  10. Volume-dependent collection of peripheral blood progenitor cells during large-volume leukapheresis for patients with solid tumours and haematological malignancies.

    PubMed

    Cassens, U; Ostkamp-Ostermann, P; van der Werf, N; Garritsen, H; Ostermann, H; Sibrowski, W

    1999-12-01

    We investigated the efficacy of peripheral blood progenitor cell (PBPC) collection during large-volume leukapheresis (LVL) in patients with solid tumours and haematological malignancies (n = 18). The time- and volume-dependent harvest of leucocytes (WBC), mononuclear cells (MNC), CD34+ cells and colony-forming cells (CFU-GM) during LVL was analysed in six sequentially filled collection bags processing four times the patient's blood volumes. The amounts of leucocytes (WBC) and the purity of mononuclear cells (MNC%) did not show any significant changes during LVL. The percentage of CD34+ cells remained constant for the first three bags but consecutively decreased from initially 1.71% CD34+ cells in the beginning of LVL to finally 1.34% CD34+ cells (P = 0.02). The mean numbers of colony-forming cells (CFU-GM) decreased from 74 microL-1 to 59 microL-1 during LVL (P = 0.16). Furthermore, the comparison of volume-dependent PBPC collection for patients with high, medium and low total yields of CD34+ cells showed similar kinetics on different levels for the three groups. We concluded that - relative to the initial total amount of PBPC harvested - comparable numbers of progenitor cells can be collected during all stages of LVL with a slight decreasing trend processing four times the patient's blood volumes.

  11. The development and use of large-motion simulator systems in aeronautical research and development

    NASA Technical Reports Server (NTRS)

    Dusterberry, J. C.; White, M. D.

    1979-01-01

    The paper examines the evolution of manned aircraft simulators with large-motion systems and provides a brief description of important design details along with physical descriptions of a number of systems. Attention is given to the use of large translational motions in providing the simulator pilot with a close approximation of the cues of aircraft flight; examples are cited comparing pilot reactions to simulators with and without motion. How these simulators have been used in programs that effectively influenced aircraft design and operating problems is discussed.

  12. Large Eddy simulation of turbulence: A subgrid scale model including shear, vorticity, rotation, and buoyancy

    NASA Technical Reports Server (NTRS)

    Canuto, V. M.

    1994-01-01

    The Reynolds numbers that characterize geophysical and astrophysical turbulence (Re ≈ 10⁸ for the planetary boundary layer and Re ≈ 10¹⁴ for the Sun's interior) are too large to allow a direct numerical simulation (DNS) of the fundamental Navier-Stokes and temperature equations. In fact, the required number of spatial grid points, N ~ Re^(9/4), exceeds the computational capability of today's supercomputers. Alternative treatments are the ensemble-time average approach and/or the volume average approach. Since the first method (the Reynolds stress approach) is largely analytical, the resulting turbulence equations entail manageable computational requirements and can thus be linked to a stellar evolutionary code or, in the geophysical case, to general circulation models. In the volume average approach, one carries out a large eddy simulation (LES) which resolves numerically the largest scales, while the unresolved scales must be treated theoretically with a subgrid scale (SGS) model. Contrary to the ensemble average approach, the LES+SGS approach has considerable computational requirements. Even if this prevents (for the time being) a LES+SGS model from being linked to stellar or geophysical codes, it is still of the greatest relevance as an 'experimental tool' to be used, inter alia, to improve the parameterizations needed in the ensemble average approach. Such a methodology has been successfully adopted in studies of the convective planetary boundary layer. Experience with the LES+SGS approach from different fields has shown that its reliability depends on the soundness of the SGS model, for numerical stability as well as for physical completeness. At present, the most widely used SGS model, the Smagorinsky model, accounts for the effect of the shear induced by the large resolved scales on the unresolved scales but does not account for the effects of buoyancy, anisotropy, rotation, and stable stratification. The
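
    For concreteness, the Smagorinsky closure mentioned above sets the eddy viscosity to nu_t = (Cs Δ)² |S|, with |S| = sqrt(2 S_ij S_ij) the resolved strain-rate magnitude. The sketch below evaluates it on a synthetic velocity field (grid size, the constant Cs, and the field itself are illustrative placeholders).

      import numpy as np

      Cs, dx = 0.17, 1.0 / 32.0
      x = np.arange(0.0, 1.0, dx)
      X, Y, Z = np.meshgrid(x, x, x, indexing='ij')

      # A smooth synthetic "resolved" velocity field, periodic in each direction.
      u = np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y)
      v = -np.cos(2 * np.pi * X) * np.sin(2 * np.pi * Y)
      w = 0.1 * np.sin(2 * np.pi * Z)

      grads = [np.gradient(comp, dx, edge_order=2) for comp in (u, v, w)]  # du_i/dx_j
      S2 = np.zeros_like(u)
      for i in range(3):
          for j in range(3):
              Sij = 0.5 * (grads[i][j] + grads[j][i])   # resolved strain-rate tensor
              S2 += 2.0 * Sij * Sij

      nu_t = (Cs * dx) ** 2 * np.sqrt(S2)               # Smagorinsky eddy viscosity field
      print("max nu_t:", nu_t.max())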

  13. Influence of bone volume fraction and architecture on computed large-deformation failure mechanisms in human trabecular bone.

    PubMed

    Bevill, Grant; Eswaran, Senthil K; Gupta, Atul; Papadopoulos, Panayiotis; Keaveny, Tony M

    2006-12-01

    Large-deformation bending and buckling have long been proposed as failure mechanisms by which the strength of trabecular bone can be affected disproportionately to changes in bone density, and thus may represent an important aspect of bone quality. We sought here to quantify the contribution of large-deformation failure mechanisms to strength, to determine the dependence of these effects on bone volume fraction and architecture, and to confirm that the inclusion of large-deformation effects in high-resolution finite element models improves predictions of strength versus experiment. Micro-CT-based finite element models having uniform hard tissue material properties were created from 54 cores of human trabecular bone taken from four anatomic sites (age = 70 ± 11; 24 male, 27 female donors), which were subsequently biomechanically tested to failure. Strength predictions were made from the models first including, then excluding, large-deformation failure mechanisms, for both compressive and tensile load cases. As expected, strength predictions versus experimental data for the large-deformation finite element models were significantly improved (p < 0.001) relative to the small-deformation models in both tension and compression. Below a volume fraction of about 0.20, large-deformation failure mechanisms decreased trabecular strength by 5-80% for compressive loading, while effects were negligible above this volume fraction. Step-wise nonlinear multiple regression revealed that structure model index (SMI) and volume fraction (BV/TV) were significant predictors of these reductions in strength (R² = 0.83, p < 0.03). Even so, some low-density specimens having nearly identical volume fraction and SMI exhibited up to fivefold differences in strength reduction. We conclude that within very low-density bone, the potentially important biomechanical effect of large-deformation failure mechanisms on trabecular bone strength is highly heterogeneous and is not well explained by

  14. Large-eddy simulation of circular cylinder flow at subcritical Reynolds number: Turbulent wake and sound radiation

    NASA Astrophysics Data System (ADS)

    Guo, Li; Zhang, Xing; He, Guowei

    2016-02-01

    The flows past a circular cylinder at a Reynolds number of 3900 are simulated using large-eddy simulation (LES), and the far-field sound is calculated from the LES results. A low-dissipation, energy-conserving finite volume scheme is used to discretize the incompressible Navier-Stokes equations. The dynamic global coefficient version of Vreman's subgrid scale (SGS) model is used to compute the subgrid stresses. Curle's integral of Lighthill's acoustic analogy is used to extract the sound radiated from the cylinder. The profiles of mean velocity and turbulent fluctuations obtained are consistent with previous experimental and computational results. The far-field sound radiation exhibits the directivity characteristic of a dipole, and the sound spectra display the -5/3 power law. It is shown that Vreman's SGS model, combined with the dynamic procedure, is suitable for LES of turbulence-generated noise.

  15. Cellular automata coupled with steady-state nutrient solution permit simulation of large-scale growth of tumours.

    PubMed

    Shrestha, Sachin Man Bajimaya; Joldes, Grand Roman; Wittek, Adam; Miller, Karol

    2013-04-01

    We model the complete growth of an avascular tumour by employing cellular automata for the growth of cells and a steady-state equation to solve for nutrient concentrations. Our modelling and computer simulation results show that, in the case of a brain tumour, oxygen distribution in the tumour volume may be sufficiently described by a time-independent steady-state equation without losing the characteristics of a time-dependent diffusion equation. This makes the solution of oxygen concentration in the tumour volume computationally more efficient, thus enabling simulation of tumour growth on a large scale. We solve this steady-state equation using a central difference method. We take into account the composition of cells and intercellular adhesion, in addition to the processes involved in the cell cycle (proliferation, quiescence, apoptosis, and necrosis), in the tumour model. More importantly, we consider cell mutation that gives rise to different phenotypes and therefore a tumour with a heterogeneous population of cells. A new phenotype is probabilistically chosen and has the ability to survive at lower levels of nutrient concentration and reproduce faster. We show that the heterogeneity of cells that compose a tumour leads to its irregular growth and that avascular growth is not supported for tumours of diameter above 18 mm. We compare results from our growth simulation with existing experimental data on Ehrlich ascites carcinoma and tumour spheroid cultures and show that our results are in good agreement with the experimental findings.
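
    A minimal sketch of the coupling described above (all coefficients illustrative): solve the steady-state nutrient equation with central differences, here via Jacobi iteration with an uptake term in tumour-occupied cells, then apply a trivial automaton rule to the resulting concentrations.

      import numpy as np

      n, h = 64, 1.0
      tumour = np.zeros((n, n), dtype=bool)
      tumour[28:36, 28:36] = True                 # initial tumour patch

      c = np.ones((n, n))                         # nutrient concentration
      k = np.where(tumour, 0.05, 0.0)             # uptake coefficient per cell (assumed)

      # Jacobi iterations for  0 = laplacian(c) - k*c,  with c = 1 on the boundary.
      for _ in range(3000):
          nb = 0.25 * (np.roll(c, 1, 0) + np.roll(c, -1, 0)
                       + np.roll(c, 1, 1) + np.roll(c, -1, 1))
          c = nb / (1.0 + 0.25 * k * h * h)
          c[0, :] = c[-1, :] = c[:, 0] = c[:, -1] = 1.0   # far-field nutrient supply

      # Trivial CA step: tumour cells below a nutrient threshold turn necrotic.
      necrotic = tumour & (c < 0.85)
      print("necrotic fraction:", necrotic.sum() / tumour.sum())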

  16. Patient-specific coronary artery blood flow simulation using myocardial volume partitioning

    NASA Astrophysics Data System (ADS)

    Kim, Kyung Hwan; Kang, Dongwoo; Kang, Nahyup; Kim, Ji-Yeon; Lee, Hyong-Euk; Kim, James D. K.

    2013-03-01

    Using computational simulation, we can analyze cardiovascular disease in a non-invasive and quantitative manner. More specifically, computational modeling and simulation technology has enabled us to analyze functional aspects such as blood flow, as well as anatomical aspects such as stenosis, from medical images without invasive measurements. Note that the simplest way to perform a blood flow simulation is to apply patient-specific coronary anatomy with otherwise average-valued properties; such conditions, however, cannot fully reflect the accurate physiological properties of patients. To resolve this limitation, we present a new patient-specific coronary blood flow simulation method based on myocardial volume partitioning that considers the artery/myocardium structural correspondence. We focus on the fact that blood supply is closely related to the mass of each myocardial segment corresponding to an artery. We therefore applied this concept to set up simulation conditions in a way that considers as many patient-specific features as possible from the medical image: first, we segmented the coronary arteries and myocardium separately from cardiac CT; then the myocardium was partitioned into multiple regions based on the coronary vasculature. The myocardial mass and required blood mass for each artery were estimated by converting the myocardial volume fraction. Finally, the required blood mass was used as the boundary condition for each artery outlet, with a given average aortic blood flow rate and pressure. To show the effectiveness of the proposed method, the fractional flow reserve (FFR) obtained by simulation from the CT image was compared with invasive FFR measurements from real patient data, and an accuracy of 77% was obtained.
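
    The outlet boundary-condition step reduces to a proportional split: each artery receives a share of the total inflow proportional to the myocardial mass of its partitioned territory. A toy version with hypothetical territory volumes:

      # Territory volumes, density, and total inflow below are hypothetical.
      territories_ml = {"LAD": 62.0, "LCx": 41.0, "RCA": 48.0}  # partitioned volumes [ml]
      rho_myo = 1.05                                            # myocardial density [g/ml]
      total_flow = 4.0                                          # total coronary inflow [ml/s]

      masses = {k: v * rho_myo for k, v in territories_ml.items()}
      m_sum = sum(masses.values())
      outlet_flow = {k: total_flow * m / m_sum for k, m in masses.items()}

      for artery, q in outlet_flow.items():
          print(f"{artery}: {q:.2f} ml/s")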

  17. Pyrometry in the Multianvil Press: New approach for temperature measurement in large volume press experiments

    NASA Astrophysics Data System (ADS)

    Sanehira, T.; Wang, Y.; Prakapenka, V.; Rivers, M. L.

    2008-12-01

    Temperature measurement in large volume press experiments has been based on thermocouple emf, which has well known problems: the unknown pressure dependence of the emf [e.g., 1], chemical reaction between the thermocouple and other materials, deformation-related texture development in the thermocouple wires [2], and so on. Techniques other than thermocouples are therefore required to measure accurate temperatures in large volume press experiments under high pressure. Here we report a new development using pyrometry in the multianvil press, where temperatures are derived by spectral radiometry. Several high pressure runs were conducted using the 1000 ton press with a DIA module installed at the 13-ID-D GSECARS beamline at the Advanced Photon Source (APS) [3]. The cubic pressure medium, of 14 mm edge length, was made of soft-fired pyrophyllite with a graphite furnace. A moissanite (SiC) single crystal was built into the pressure medium as a window for the thermal emission signal. An MgO disk of 1.0 mm thickness was inserted in the gap between the top of the SiC crystal and the thermocouple hot junction. The bottom of the window crystal was in direct contact with the tip of the anvil, which had a 1.5 mm diameter hole drilled all the way through the anvil axis. An optical fiber was inserted in this hole, with its open end in contact with the SiC crystal. Thermal spectral radiance from the inner cell assembly was collected via the fiber and recorded by an Ocean Optics HP2000 spectrometer. The system response of the spectrometer was calibrated with a tungsten ribbon lamp (OL550S, Optronic Laboratories, Inc.) serving as a standard of spectral radiance. The cell assembly was compressed to a target load of 15 tons and the temperature was then increased up to 1573 K. Radiation spectra were mainly obtained above 873 K, with typical integration times of 1 ms or 10 ms. Data were collected during both increase and decrease of temperature. In
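
    The temperature extraction in spectral radiometry amounts to fitting Planck's law, with an unknown wavelength-independent scale absorbing emissivity and system response, to the calibrated radiance spectrum. The sketch below does this on a synthetic spectrum (the greybody assumption and all numerical values are illustrative, not the experiment's data).

      import numpy as np
      from scipy.optimize import curve_fit

      h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

      def planck(lam, T, scale):
          """Greybody spectral radiance at wavelength lam [m] and temperature T [K]."""
          return scale * (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * kB * T))

      lam = np.linspace(500e-9, 900e-9, 200)
      rng = np.random.default_rng(5)
      measured = planck(lam, 1573.0, 0.3) * (1 + 0.01 * rng.standard_normal(lam.size))

      (T_fit, s_fit), _ = curve_fit(planck, lam, measured, p0=(1200.0, 1.0))
      print(f"fitted temperature: {T_fit:.0f} K (scale {s_fit:.2f})")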

  18. Hepatic Arterial Embolization and Chemoembolization in the Management of Patients with Large-Volume Liver Metastases

    SciTech Connect

    Kamat, Paresh P.; Gupta, Sanjay; Ensor, Joe E.; Murthy, Ravi; Ahrar, Kamran; Madoff, David C.; Wallace, Michael J.; Hicks, Marshall E.

    2008-03-15

    The purpose of this study was to assess the role of hepatic arterial embolization (HAE) and chemoembolization (HACE) in patients with large-volume liver metastases. Patients with metastatic neuroendocrine tumors, melanomas, or gastrointestinal stromal tumors (GISTs) with >75% liver involvement who underwent HAE or HACE were included in the study. Radiologic response, progression-free survival (PFS), overall survival (OS), and postprocedure complications were assessed. Sixty patients underwent 123 treatment sessions. Of the 48 patients for whom follow-up imaging was available, partial response was seen in 12 (25%), minimal response in 6 (12%), stable disease in 22 (46%), and progressive disease in 8 (17%). Median OS and PFS were 9.3 and 4.9 months, respectively. Treatment resulted in radiologic response or disease stabilization in 82% and symptomatic response in 65% of patients with neuroendocrine tumors. Patients with neuroendocrine tumors had higher response rates (44% vs. 27% and 0%; p = 0.31) and longer PFS (9.2 vs. 2.0 and 2.3 months; p < 0.0001) and OS (17.9 vs. 2.4 and 2.3 months; p < 0.0001) than patients with melanomas and GISTs. Major complications occurred in 21 patients after 23 (19%) of the 123 sessions. Nine of the 12 patients whose major complications resulted in death had additional risk factors: carcinoid heart disease, sepsis, rapidly worsening performance status, or anasarca. In conclusion, in patients with neuroendocrine tumors with >75% liver involvement, HAE/HACE resulted in symptom palliation and in radiologic response or disease stabilization in the majority of patients. Patients with hepatic metastases from melanomas or GISTs, however, showed no appreciable benefit from this procedure. Patients with massive liver tumor burden who have these additional risk factors should not be subjected to HAE/HACE, given the high risk of procedure-related mortality.

  19. Accelerated large volume irradiation with dynamic Jaw/Dynamic Couch Helical Tomotherapy

    PubMed Central

    2012-01-01

    Background Helical Tomotherapy (HT) has unique capabilities for the radiotherapy of large and complicated target volumes. Next-generation Dynamic Jaw/Dynamic Couch (DJDC) HT delivery promises faster treatments and reduced exposure of organs at risk owing to a reduced dose penumbra. Methods Three challenging clinical situations were chosen for comparison between regular HT delivery with a field width of 2.5 cm (Reg 2.5) or 5.0 cm (Reg 5.0) and DJDC delivery with a maximum field width of 5.0 cm (DJDC 5.0): hemithoracic irradiation, whole abdominal irradiation (WAI), and total marrow irradiation (TMI). For each setting, five CT data sets were chosen, and target coverage, conformity, integral dose, dose exposure of organs at risk (OAR), and treatment time were calculated. Results Both Reg 5.0 and DJDC 5.0 achieved a substantial reduction in treatment time while maintaining similar dose coverage. Treatment time was reduced from 10:57 min to 3:42 min / 5:10 min (Reg 5.0 / DJDC 5.0) for hemithoracic irradiation, from 18:03 min to 8:02 min / 8:03 min for WAI, and to 18:25 min / 18:03 min for TMI. In hemithoracic irradiation, OAR exposure was identical across all modalities. For WAI, Reg 2.5 resulted in lower exposure of the liver and bone. DJDC plans showed a small but significant increase of ∼ 1 Gy to the kidneys, the parotid glands, and the thyroid gland. While Reg 5.0 and DJDC were identical in terms of OAR exposure, integral dose was substantially lower with DJDC because of its smaller dose penumbra. Conclusions Although not yet clinically available, the next-generation DJDC HT technique substantially reduces treatment time while maintaining comparable plan quality. PMID:23146914

  20. Large-volume leukapheresis for peripheral blood stem cell collection in patients with hematologic malignancies.

    PubMed

    Malachowski, M E; Comenzo, R L; Hillyer, C D; Tiegerman, K O; Berkman, E M

    1992-10-01

    Large-volume leukapheresis (LVL, 15-35 L) was performed in two groups of patients (n = 10) with hematologic malignancies to obtain peripheral blood stem cells for bone marrow rescue following high-dose chemotherapy. The target cell count was 7 x 10(8) mononuclear cells (MNCs = lymphocytes and monocytes) per kg of body weight. Group A patients (n = 4) were studied on Day 1 of LVL, and components were collected from them as four sequential samples. Total MNCs collected averaged 1.29 x 10(10), total colony-forming units-granulocyte-macrophage (CFU-GM) averaged 12.1 x 10(6), and a 1.8-fold mobilization of CFU-GM was observed (p < 0.05, Sample 1 vs. Sample 4). Group B patients (n = 6) were studied over three consecutive planned days of 5-hour LVL. An average of three LVL procedures per patient was performed (range, 1.25-4), and an average of 27 L (range, 24-33) of blood was processed per LVL. The blood:ACD-A ratio was 24:1, with 3000 units of heparin per 500 mL of ACD-A; heparin was also added to the collection bags. The component had an average hematocrit (Hct) of 0.02 and an MNC content of 93 percent. The patients' average Hct decreased significantly from pre-LVL to post-LVL (before Day 1, 0.36 +/- 0.08; after Day 3, 0.28 +/- 0.06; p < 0.05). Platelet counts also decreased, with post-Day 3 counts averaging 19 percent of the average pre-Day 1 counts (p < 0.05).(ABSTRACT TRUNCATED AT 250 WORDS)
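
    The collection target above reduces to simple arithmetic; a minimal sketch in Python (the 7 x 10(8)/kg target is from the abstract, while the body weight and per-session yield in the example are illustrative):

      # Sketch: check a leukapheresis yield against the stated target dose of
      # 7 x 10^8 mononuclear cells (MNCs) per kg of body weight. The example
      # numbers are illustrative, not patient data from the study.

      TARGET_MNC_PER_KG = 7e8  # from the abstract

      def target_met(total_mnc_collected, body_weight_kg):
          """Return the collected dose (MNC/kg) and whether it meets target."""
          dose = total_mnc_collected / body_weight_kg
          return dose, dose >= TARGET_MNC_PER_KG

      # Example: the Group A average one-day yield of 1.29 x 10^10 MNCs,
      # applied to a hypothetical 70 kg patient:
      dose, ok = target_met(1.29e10, 70.0)
      print(f"{dose:.2e} MNC/kg, target met: {ok}")  # ~1.84e+08 MNC/kg, False

    For a 70 kg patient the target corresponds to about 4.9 x 10(10) MNCs, several times the average single-day yield reported above, which is consistent with the multiple LVL procedures planned per patient.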