Science.gov

Sample records for large volume simulations

  1. A Parallel, Finite-Volume Algorithm for Large-Eddy Simulation of Turbulent Flows

    NASA Technical Reports Server (NTRS)

    Bui, Trong T.

    1999-01-01

    A parallel, finite-volume algorithm has been developed for large-eddy simulation (LES) of compressible turbulent flows. This algorithm includes piecewise linear least-square reconstruction, trilinear finite-element interpolation, Roe flux-difference splitting, and second-order MacCormack time marching. Parallel implementation is done using the message-passing programming model. In this paper, the numerical algorithm is described. To validate the numerical method for turbulence simulation, LES of fully developed turbulent flow in a square duct is performed for a Reynolds number of 320 based on the average friction velocity and the hydraulic diameter of the duct. Direct numerical simulation (DNS) results are available for this test case, and the accuracy of this algorithm for turbulence simulations can be ascertained by comparing the LES solutions with the DNS results. The effects of grid resolution, upwind numerical dissipation, and subgrid-scale dissipation on the accuracy of the LES are examined. Comparison with DNS results shows that the standard Roe flux-difference splitting dissipation adversely affects the accuracy of the turbulence simulation. For accurate turbulence simulations, only 3-5 percent of the standard Roe flux-difference splitting dissipation is needed.
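The abstract's key quantitative finding, that only 3-5 percent of the standard Roe dissipation should be retained, is easiest to see on scalar advection, where a Roe-type flux reduces to a central average plus an upwind dissipation term. The sketch below is illustrative only, not the paper's compressible Navier-Stokes implementation; the parameter `eps` plays the role of the 3-5 percent scaling factor.

```python
import numpy as np

def roe_flux_scalar(uL, uR, a=1.0, eps=0.04):
    """Roe-type interface flux for scalar advection f(u) = a*u.

    Central average minus scaled upwind dissipation |a|*(uR - uL)/2.
    eps = 1 recovers the standard (fully upwind) Roe flux; eps in the
    0.03-0.05 range corresponds to the low-dissipation setting the
    abstract reports as sufficient for accurate LES.
    """
    central = 0.5 * a * (uL + uR)
    dissipation = 0.5 * abs(a) * (uR - uL)
    return central - eps * dissipation
```

With `eps=1` and positive wave speed the flux collapses to the pure upwind value `a*uL`; with `eps=0` it is the undissipated central flux.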

  2. Determination of the large scale volume weighted halo velocity bias in simulations

    NASA Astrophysics Data System (ADS)

    Zheng, Yi; Zhang, Pengjie; Jing, Yipeng

    2015-06-01

    A profound assumption in peculiar velocity cosmology is bv = 1 at sufficiently large scales, where bv is the volume-weighted halo (galaxy) velocity bias with respect to the matter velocity field. However, this fundamental assumption has not been robustly verified in numerical simulations. Furthermore, it is challenged by structure formation theory (Bardeen, Bond, Kaiser and Szalay, Astrophys. J. 304, 15 (1986); Desjacques and Sheth, Phys. Rev. D 81, 023526 (2010)), which predicts the existence of velocity bias (at least for proto-halos) due to the fact that halos reside in special regions (local density peaks). The major obstacle to measuring the volume-weighted velocity from N-body simulations is an unphysical sampling artifact. It is entangled in the measured velocity statistics and becomes significant for sparse populations. With a recently improved understanding of the sampling artifact (Zhang, Zheng and Jing, 2015, PRD; Zheng, Zhang and Jing, 2015, PRD), for the first time we are able to appropriately correct for this sampling artifact and then robustly measure the volume-weighted halo velocity bias. (1) We verify bv = 1 within 2% model uncertainty at k ≲ 0.1 h/Mpc and z = 0-2 for halos of mass ~10^12-10^13 h^-1 M⊙ and therefore consolidate a foundation for peculiar velocity cosmology. (2) We also find statistically significant signs of bv ≠ 1 at k ≳ 0.1 h/Mpc. Unfortunately, whether this is real or caused by a residual sampling artifact requires further investigation. Nevertheless, cosmology based on velocity data at k ≳ 0.1 h/Mpc should be careful with this potential velocity bias.
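The quantity being measured can be written down directly: on a grid, the volume-weighted velocity bias is the ratio of the halo-matter cross-power spectrum to the matter auto-power spectrum of the velocity (divergence) fields. The sketch below implements only this definition with NumPy FFTs on already-gridded fields; the paper's actual contribution, correcting the sampling artifact that contaminates these spectra for sparse halo samples, is not reproduced here, and the binning choices are illustrative.

```python
import numpy as np

def velocity_bias(theta_h, theta_m, boxsize, nbins=8):
    """b_v(k) estimated as P_hm(k) / P_mm(k) from two gridded velocity
    divergence fields (halo and matter).  Definition only; no
    sampling-artifact correction is applied.
    """
    n = theta_m.shape[0]
    fh = np.fft.rfftn(theta_h)
    fm = np.fft.rfftn(theta_m)
    kfreq = 2 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
    kz = 2 * np.pi * np.fft.rfftfreq(n, d=boxsize / n)
    kx, ky, kzz = np.meshgrid(kfreq, kfreq, kz, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kzz**2)
    bins = np.linspace(0.0, kmag.max(), nbins)
    idx = np.digitize(kmag.ravel(), bins)
    p_hm = np.real(fh * np.conj(fm)).ravel()   # cross-power per mode
    p_mm = np.real(fm * np.conj(fm)).ravel()   # auto-power per mode
    num = np.bincount(idx, weights=p_hm, minlength=len(bins) + 1)
    den = np.bincount(idx, weights=p_mm, minlength=len(bins) + 1)
    with np.errstate(invalid="ignore", divide="ignore"):
        return num / den  # NaN in empty bins
```

Feeding the same field in for both tracers returns bias 1 in every populated bin, a quick sanity check on the estimator.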

  3. A parallel finite volume algorithm for large-eddy simulation of turbulent flows

    NASA Astrophysics Data System (ADS)

    Bui, Trong Tri

    1998-11-01

    A parallel unstructured finite volume algorithm is developed for large-eddy simulation of compressible turbulent flows. Major components of the algorithm include piecewise linear least-square reconstruction of the unknown variables, trilinear finite element interpolation for the spatial coordinates, Roe flux difference splitting, and second-order MacCormack explicit time marching. The computer code is designed from the start to take full advantage of the additional computational capability provided by current parallel computer systems. Parallel implementation is done using the message passing programming model and message passing libraries such as the Parallel Virtual Machine (PVM) and Message Passing Interface (MPI). The development of the numerical algorithm is presented in detail. The parallel strategy and issues regarding the implementation of a flow simulation code on the current generation of parallel machines are discussed. The results from parallel performance studies show that the algorithm is well suited for parallel computer systems that use the message passing programming model. Nearly perfect parallel speedup is obtained on MPP systems such as the Cray T3D and IBM SP2. Performance comparisons with older supercomputer systems such as the Cray Y-MP show that the simulations done on the parallel systems are approximately 10 to 30 times faster. The results of the accuracy and performance studies for the current algorithm are reported. To validate the flow simulation code, a number of Euler and Navier-Stokes simulations are done for internal duct flows. Inviscid Euler simulation of a very small-amplitude acoustic wave interacting with a shock wave in a quasi-1D convergent-divergent nozzle shows that the algorithm is capable of simultaneously tracking the very small disturbances of the acoustic wave and capturing the shock wave. Navier-Stokes simulations are made for fully developed laminar flow in a square duct, developing laminar flow in a

  4. Resolving the Effects of Aperture and Volume Restriction of the Flow by Semi-Porous Barriers Using Large-Eddy Simulations

    NASA Astrophysics Data System (ADS)

    Chatziefstratiou, Efthalia K.; Velissariou, Vasilia; Bohrer, Gil

    2014-09-01

    The Regional Atmospheric Modelling System (RAMS)-based Forest Large-Eddy Simulation (RAFLES) model is used to simulate the effects of large rectangular prism-shaped semi-porous barriers of varying densities under neutrally buoyant conditions. RAFLES resolves flows inside and above forested canopies and other semi-porous barriers, and it accounts for barrier-induced drag on the flow and surface flux exchange between the barrier and the air. Unlike most other models, RAFLES also accounts for the barrier-induced volume and aperture restriction via a modified version of the cut-cell coordinate system. We explicitly tested the effects of the numerical representation of volume restriction, independent of the effects of the drag, by comparing drag-only simulations (where we prescribed neither volume nor aperture restrictions to the flow), restriction-only simulations (where we prescribed no drag), and control simulations where both drag and volume plus aperture restrictions were included. Previous modelling and empirical work have revealed the development of important areas of increased uplift upwind of forward-facing steps, and recirculation zones downwind of backward-facing steps. Our simulations show that representation of the effects of the volume and aperture restriction due to the presence of semi-porous barriers leads to differences in the strengths and locations of increased-updraft and recirculation zones, and the length and strength of impact and adjustment zones, when compared to simulation solutions with a drag-only representation. These are mostly driven by differences in the momentum budget of the streamwise wind velocity by resolved turbulence and pressure gradient fields around the front and back edges of the barrier. We propose that volume plus aperture restriction is an important component of the flow system in semi-porous environments such as forests and cities and should be considered in large-eddy simulations (LES).
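The "drag-only" baseline that the study compares against is straightforward to state: a quadratic momentum sink proportional to the local speed and the vegetation density. A minimal sketch of one such update step follows; the coefficient names are illustrative placeholders, not RAFLES variables, and the cut-cell volume and aperture restriction that is the paper's focus is not modeled here.

```python
import numpy as np

def apply_canopy_drag(u, v, w, cd, leaf_area_density, dt):
    """One step of the quadratic canopy-drag sink du/dt = -c_d * a * |U| * u
    applied to each velocity component, using a semi-implicit exponential
    decay (stable for large dt) with the speed frozen over the step.
    """
    speed = np.sqrt(u**2 + v**2 + w**2)
    decay = np.exp(-cd * leaf_area_density * speed * dt)
    return u * decay, v * decay, w * decay
```

Setting the drag coefficient to zero leaves the velocity untouched, which is the "restriction-only" limit of the abstract's comparison.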

  5. Large-scale three-dimensional phase-field simulations for phase coarsening at ultrahigh volume fraction on high-performance architectures

    NASA Astrophysics Data System (ADS)

    Yan, Hui; Wang, K. G.; Jones, Jim E.

    2016-06-01

    A parallel algorithm for large-scale three-dimensional phase-field simulations of phase coarsening is developed and implemented on high-performance architectures. From the large-scale simulations, a new kinetics of phase coarsening in the region of ultrahigh volume fraction is found. The parallel implementation is capable of harnessing the greater computing power available from high-performance architectures. The parallelized code enables an increase in three-dimensional simulation size up to a 512³ grid cube. Through the parallelized code, practical runtimes can be achieved for three-dimensional large-scale simulations, and the statistical significance of the results from these high-resolution parallel simulations is greatly improved over that obtainable from serial simulations. A detailed performance analysis of speed-up and scalability is presented, showing good scalability which improves with increasing problem size. In addition, a model for prediction of runtime is developed, which shows good agreement with actual runtimes from numerical tests.
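Phase coarsening of this kind is commonly evolved with Cahn-Hilliard dynamics. As a point of reference for what such a code time-steps, here is a minimal serial 2D update assuming a standard double-well free energy and periodic boundaries; the paper's contribution is the parallel 3D implementation and the ultrahigh-volume-fraction kinetics, not this scheme, so treat the constants as illustrative.

```python
import numpy as np

def cahn_hilliard_step(c, dt=0.01, dx=1.0, M=1.0, kappa=1.0):
    """One explicit Cahn-Hilliard update: dc/dt = M * lap(mu) with
    chemical potential mu = c^3 - c - kappa * lap(c), using periodic
    5-point Laplacians.  Conserves total composition by construction.
    """
    def lap(f):
        return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f) / dx**2
    mu = c**3 - c - kappa * lap(c)
    return c + dt * M * lap(mu)
```

Because the update is the Laplacian of a potential on a periodic grid, the total composition (and hence the phase volume fraction) is conserved step to step.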

  6. Characteristics of 1d spectra in finite-volume large-eddy simulations with `one-dimensional turbulence' subgrid closure

    NASA Astrophysics Data System (ADS)

    McDermott, Randy

    2005-11-01

    In this talk we illuminate the reasons behind curious characteristics of the one-dimensional (1d) spectra for coupled 'one-dimensional turbulence' (ODT) and large-eddy simulations (LES) and propose a means of correcting the "spectral dip" in the ODT transverse 1d spectrum. When the ODT model of Kerstein et al. [JFM 2000] is used as a subgrid closure for LES, the characteristics of the three-dimensional (3d) LES spectrum significantly impact the shape of the ODT 1d spectra in the wavenumber range close to the LES grid Nyquist limit. For isotropic fields the 1d spectra (e.g., E22(k1)) will contain contributions from the 3d spectrum, E(k), from wavenumbers k = k1 to k = infinity. If the LES field is filtered using a spectral cutoff, Gaussian, or box filter, then the attenuation of the 3d spectrum at high wavenumbers produces a "spectral dip" in the ODT 1d spectrum near the LES Nyquist limit. This problem can be alleviated by using a different LES filter kernel. Fortuitously, the resulting shape (i.e., "implied filter") of the 3d spectra produced by the Harlow and Welch numerical method [Phys. Fluids 1965] (i.e., second-order staggered energy-conserving scheme without explicit filtering) eliminates the dip problem.
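The mechanism described, contributions to the 1D spectrum from all 3D wavenumbers above k1, follows from the standard isotropic relation E22(k1) = 0.5 ∫ from k1 to ∞ of (E(k)/k)(1 + k1²/k²) dk (this is the textbook form, e.g. in Pope's Turbulent Flows, not something taken from the talk itself). The sketch below evaluates it by trapezoidal quadrature, which makes the "spectral dip" visible: attenuating E(k) above a cutoff depresses E22 at wavenumbers just below that cutoff.

```python
import numpy as np

def transverse_1d_spectrum(k, E, k1):
    """Transverse 1D spectrum from sampled isotropic 3D spectrum E(k):
        E22(k1) = 0.5 * integral_{k1}^{inf} (E/k) * (1 + k1^2/k^2) dk,
    evaluated with the trapezoidal rule over the supplied samples.
    """
    mask = k >= k1
    kk, EE = k[mask], E[mask]
    integrand = (EE / kk) * (1.0 + k1**2 / kk**2)
    return 0.5 * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(kk))
```

Comparing a -5/3 model spectrum with and without a sharp cutoff shows the attenuated spectrum yields a smaller E22 near the cutoff, the dip the talk sets out to correct.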

  7. Large-eddy simulations of 3D Taylor-Green vortex: comparison of Smoothed Particle Hydrodynamics, Lattice Boltzmann and Finite Volume methods

    NASA Astrophysics Data System (ADS)

    Kajzer, A.; Pozorski, J.; Szewc, K.

    2014-08-01

    In the paper we present large-eddy simulation (LES) results for the 3D Taylor-Green vortex obtained by three different computational approaches: Smoothed Particle Hydrodynamics (SPH), the Lattice Boltzmann Method (LBM) and the Finite Volume Method (FVM). The Smagorinsky model was chosen as the subgrid-scale closure in LES for all considered methods, and a selection of spatial resolutions has been investigated. The SPH and LBM computations have been carried out with in-house codes executed on GPUs and compared, for validation purposes, with the FVM results obtained using the open-source CFD software OpenFOAM. A comparative study in terms of one-point statistics and turbulent energy spectra shows a good agreement of LES results for all methods. An analysis of GPU code efficiency and implementation difficulties has been made. It is shown that both SPH and LBM may offer a significant advantage over mesh-based CFD methods.
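Since all three solvers share the Smagorinsky closure, it is worth stating what that closure computes: an eddy viscosity from the resolved strain-rate magnitude, nu_t = (Cs·Δ)²·|S| with |S| = sqrt(2 S_ij S_ij). A 2D sketch follows; the Smagorinsky constant used here is a typical literature value, not one taken from the paper.

```python
import numpy as np

def smagorinsky_nu_t(dudx, dudy, dvdx, dvdy, delta, cs=0.17):
    """Smagorinsky subgrid viscosity nu_t = (Cs * delta)^2 * |S| for a
    2D velocity-gradient field, with |S| = sqrt(2 * S_ij * S_ij) built
    from the symmetric strain-rate components.
    """
    s11 = dudx
    s22 = dvdy
    s12 = 0.5 * (dudy + dvdx)
    smag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
    return (cs * delta) ** 2 * smag
```

For a pure shear du/dy = 1 the strain magnitude is exactly 1, so nu_t reduces to (Cs·Δ)², a handy check on the factor-of-two bookkeeping.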

  8. Large Eddy Simulation of Bubbly Flow and Slag Layer Behavior in Ladle with Discrete Phase Model (DPM)-Volume of Fluid (VOF) Coupled Model

    NASA Astrophysics Data System (ADS)

    Li, Linmin; Liu, Zhongqiu; Cao, Maoxue; Li, Baokuan

    2015-07-01

    In the ladle metallurgy process, bubble movement and slag layer behavior are very important to the refining process and steel quality. For the bubble-liquid flow, bubble movement plays a significant role in the phase structure and causes an unsteady, complex turbulent flow pattern. Capturing this is one of the most crucial shortcomings of current two-fluid models. In the current work, a one-third-scale water model is established to investigate bubble movement and slag open-eye formation. A new mathematical model using large eddy simulation (LES) is developed for the bubble-liquid-slag-air four-phase flow in the ladle. The Eulerian volume of fluid (VOF) model is used for tracking the liquid-slag-air free surfaces and the Lagrangian discrete phase model (DPM) is used for describing the bubble movement. The turbulent liquid flow is induced by bubble-liquid interactions and is solved by LES. The process of bubbles leaving the liquid and entering the air is modeled using a user-defined function. The results show that the present LES-DPM-VOF coupled model predicts the unsteady bubble movement, slag eye formation, interface fluctuation, and slag entrainment well.
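The DPM side of such a coupling tracks each bubble as a Lagrangian point particle accelerated by buoyancy and by drag toward the local liquid velocity. The toy step below illustrates only that idea; the response time, densities, and drag form are illustrative assumptions, not the drag law or constants used in the paper, and the two-way coupling back to the liquid is omitted.

```python
import numpy as np

def bubble_step(x, v, u_liquid, dt, rho_l=1000.0, rho_b=1.2, tau=5e-3):
    """One explicit Lagrangian (DPM-style) bubble update: relaxation
    toward the local liquid velocity with response time tau, plus a
    buoyancy acceleration (1 - rho_l/rho_b) * g, which is strongly
    upward since the liquid is far denser than the gas.
    """
    g = np.array([0.0, 0.0, -9.81])
    buoyancy = (1.0 - rho_l / rho_b) * g
    a = (u_liquid - v) / tau + buoyancy
    v_new = v + dt * a
    return x + dt * v_new, v_new
```

Released from rest in still liquid, the bubble immediately acquires upward velocity, the qualitative behavior driving the plume and slag-eye formation in the abstract.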

  9. Volume Rendering of AMR Simulations

    NASA Astrophysics Data System (ADS)

    Labadens, M.; Pomarède, D.; Chapon, D.; Teyssier, R.; Bournaud, F.; Renaud, F.; Grandjouan, N.

    2013-04-01

    High-resolution simulations often rely on the Adaptive Mesh Refinement (AMR) technique to optimize memory consumption versus attainable precision. While this technique allows for dramatic improvements in terms of computing performance, the analysis and visualization of its data outputs remain challenging. The lack of effective volume renderers for the octree-based AMR used by the RAMSES simulation program has led to the development of the solutions presented in this paper. Two custom algorithms are discussed, based on the splatting and ray-casting techniques. Their usage is illustrated in the context of the visualization of a high-resolution, 6000-processor simulation of a Milky Way-like galaxy. Performance obtained in terms of memory management and parallel speedup is presented.

  10. LARGE BUILDING HVAC SIMULATION

    EPA Science Inventory

    The report discusses the monitoring and collection of data relating to indoor pressures and radon concentrations under several test conditions in a large school building in Bartow, Florida. The Florida Solar Energy Center (FSEC) used an integrated computational software, FSEC 3.0...

  11. Large volume axionic Swiss cheese inflation

    NASA Astrophysics Data System (ADS)

    Misra, Aalok; Shukla, Pramod

    2008-09-01

    Continuing with the ideas of (Section 4 of) [A. Misra, P. Shukla, Moduli stabilization, large-volume dS minimum without anti-D3-branes, (non-)supersymmetric black hole attractors and two-parameter Swiss cheese Calabi-Yau's, arXiv: 0707.0105 [hep-th], Nucl. Phys. B, in press], after inclusion of perturbative and non-perturbative α′ corrections to the Kähler potential and the (D1- and D3-) instanton-generated superpotential, we show the possibility of slow-roll axionic inflation in the large volume limit of Swiss cheese Calabi-Yau orientifold compactifications of type IIB string theory. We also include one- and two-loop corrections to the Kähler potential but find them to be subdominant to the (perturbative and non-perturbative) α′ corrections. The NS-NS axions provide a flat direction for slow-roll inflation to proceed from a saddle point to the nearest dS minimum.

  12. Large-scale simulations of reionization

    SciTech Connect

    Kohler, Katharina; Gnedin, Nickolay Y.; Hamilton, Andrew J.S.; /JILA, Boulder

    2005-11-01

    We use cosmological simulations to explore the large-scale effects of reionization. Since reionization is a process that involves a large dynamic range--from galaxies to rare bright quasars--we need to be able to cover a significant volume of the universe in our simulation without losing the important small-scale effects from galaxies. Here we have taken an approach that uses clumping factors derived from small-scale simulations to approximate the radiative transfer on the sub-cell scales. Using this technique, we can cover a simulation size up to 1280 h^-1 Mpc with 10 h^-1 Mpc cells. This allows us to construct synthetic spectra of quasars similar to observed spectra of SDSS quasars at high redshifts and compare them to the observational data. These spectra can then be analyzed for H II region sizes, the presence of the Gunn-Peterson trough, and the Lyman-α forest.

  13. Large volume flow-through scintillating detector

    DOEpatents

    Gritzo, Russ E.; Fowler, Malcolm M.

    1995-01-01

    A large-volume flow-through radiation detector for use in large air-flow situations, such as incinerator stacks or building air systems, comprises a plurality of flat plates made of a scintillating material arranged parallel to the air flow. Each scintillating plate has an attached light guide that transfers light generated inside the plate to an associated photomultiplier tube. The outputs of the photomultiplier tubes are connected to electronics that can record any radiation and provide an alarm if appropriate for the application.

  14. Large mode-volume, large beta, photonic crystal laser resonator

    SciTech Connect

    Dezfouli, Mohsen Kamandar; Dignam, Marc M.

    2014-12-15

    We propose an optical resonator formed from the coupling of 13 L2 defects in a triangular-lattice photonic crystal slab. Using a tight-binding formalism, we optimized the coupled-defect cavity design to obtain a resonator with predicted single-mode operation, a mode volume five times that of an L2-cavity mode, and a beta factor of 0.39. The results are confirmed using finite-difference time-domain simulations. This resonator is very promising for use as a single-mode photonic crystal vertical-cavity surface-emitting laser with high saturation output power compared to a laser consisting of one of the single-defect cavities.

  15. Comments on large-N volume independence

    SciTech Connect

    Poppitz, Erich; Unsal, Mithat; /SLAC /Stanford U., Phys. Dept.

    2010-06-02

    We study aspects of large-N volume independence on R^3 × L^Γ, where L^Γ is a Γ-site lattice for Yang-Mills theory with adjoint Wilson fermions. We find the critical number of lattice sites above which the center-symmetry analysis on L^Γ agrees with the one on the continuum S^1. For Wilson parameter set to one and Γ ≥ 2, the two analyses agree. One-loop radiative corrections to Wilson-line masses are finite, reminiscent of the UV-insensitivity of the Higgs mass in deconstruction/Little-Higgs theories. Even for theories with Γ = 1, volume independence in QCD(adj) may be guaranteed to work by tuning one low-energy effective field theory parameter. Within the parameter space of the theory, at most three operators of the 3d effective field theory exhibit one-loop UV-sensitivity. This opens the analytical prospect of studying 4d non-perturbative physics by using lower-dimensional field theories (d = 3, in our example).

  16. SUSY's Ladder: reframing sequestering at Large Volume

    NASA Astrophysics Data System (ADS)

    Reece, Matthew; Xue, Wei

    2016-04-01

    Theories with approximate no-scale structure, such as the Large Volume Scenario, have a distinctive hierarchy of multiple mass scales in between TeV gaugino masses and the Planck scale, which we call SUSY's Ladder. This is a particular realization of Split Supersymmetry in which the same small parameter suppresses gaugino masses relative to scalar soft masses, scalar soft masses relative to the gravitino mass, and the UV cutoff or string scale relative to the Planck scale. This scenario has many phenomenologically interesting properties, and can avoid dangers including the gravitino problem, flavor problems, and the moduli-induced LSP problem that plague other supersymmetric theories. We study SUSY's Ladder using a superspace formalism that makes the mysterious cancellations in previous computations manifest. This opens the possibility of a consistent effective field theory understanding of the phenomenology of these scenarios, based on power-counting in the small ratio of string to Planck scales. We also show that four-dimensional theories with approximate no-scale structure enforced by a single volume modulus arise only from two special higher-dimensional theories: five-dimensional supergravity and ten-dimensional type IIB supergravity. This gives a phenomenological argument in favor of ten-dimensional ultraviolet physics which is different from standard arguments based on the consistency of superstring theory.

  17. A new large-volume multianvil system

    NASA Astrophysics Data System (ADS)

    Frost, D. J.; Poe, B. T.; Trønnes, R. G.; Liebske, C.; Duba, A.; Rubie, D. C.

    2004-06-01

    A scaled-up version of the 6-8 Kawai-type multianvil apparatus has been developed at the Bayerisches Geoinstitut for operation over ranges of pressure and temperature attainable in conventional systems but with much larger sample volumes. This split-cylinder multianvil system is used with a hydraulic press that can generate loads of up to 5000 t (50 MN). The six tool-steel outer anvils define a cubic cavity of 100 mm edge length in which eight 54 mm tungsten carbide cubic inner anvils are compressed. Experiments are performed using Cr2O3-doped MgO octahedra and pyrophyllite gaskets. Pressure calibrations at room temperature and high temperature have been performed with 14/8, 18/8, 18/11, 25/17 and 25/15 OEL/TEL (octahedral edge length/anvil truncation edge length, in millimetres) configurations. All configurations tested reach a limiting plateau where the sample pressure no longer increases with applied load. Calibrations with different configurations show that greater sample-pressure efficiency can be achieved by increasing the OEL/TEL ratio. With the 18/8 configuration the GaP transition is reached at a load of 2500 t, whereas using the 14/8 assembly this pressure cannot be reached even at substantially higher loads. With an applied load of 2000 t the 18/8 configuration can produce MgSiO3 perovskite at 1900 °C with a sample volume of ~20 mm³, compared with <3 mm³ in conventional multianvil systems at the same conditions. The large octahedron size and the use of a stepped LaCrO3 heater also result in significantly lower thermal gradients across the sample.

  18. Temporal Large-Eddy Simulation

    NASA Technical Reports Server (NTRS)

    Pruett, C. D.; Thomas, B. C.

    2004-01-01

    In 1999, Stolz and Adams unveiled a subgrid-scale model for LES based upon approximately inverting (defiltering) the spatial grid-filter operator, termed the approximate deconvolution model (ADM). Subsequently, the utility and accuracy of the ADM were demonstrated in a posteriori analyses of flows as diverse as incompressible plane-channel flow and supersonic compression-ramp flow. In a prelude to the current paper, a parameterized temporal ADM (TADM) was developed and demonstrated in both a priori and a posteriori analyses for forced, viscous Burgers flow. The development of a time-filtered variant of the ADM was motivated primarily by the desire for a unifying theoretical and computational context to encompass direct numerical simulation (DNS), large-eddy simulation (LES), and Reynolds-averaged Navier-Stokes simulation (RANS). The resultant methodology was termed temporal LES (TLES). To permit exploration of the parameter space, however, previous analyses of the TADM were restricted to Burgers flow, and it has remained to demonstrate the TADM and TLES methodology for three-dimensional flow. For several reasons, plane-channel flow presents an ideal test case for the TADM. Among these reasons, channel flow is anisotropic, yet it lends itself to highly efficient and accurate spectral numerical methods. Moreover, channel flow has been investigated extensively by DNS, and the highly accurate database of Moser et al. exists. In the present paper, we develop a fully anisotropic TADM model and demonstrate its utility in simulating incompressible plane-channel flow at nominal values of Re_tau = 180 and Re_tau = 590 by the TLES method. The TADM model is shown to perform nearly as well as the ADM at equivalent resolution, thereby establishing TLES as a viable alternative to LES. Moreover, as the current model is suboptimal in some respects, there is considerable room to improve TLES.
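The core operation behind the ADM (and its temporal variant) is approximate inversion of the filter by a truncated series, u* ≈ sum over k = 0..N of (I - G)^k applied to the filtered field. The sketch below shows this generic van Cittert form with an arbitrary filter callable; it is not the exact Stolz-Adams or TADM implementation, and the truncation order and test filter are illustrative choices.

```python
import numpy as np

def adm_deconvolve(u_bar, filt, N=5):
    """Approximate deconvolution: u* = sum_{k=0}^{N} (I - G)^k u_bar,
    where G is the filter implemented by the callable `filt`.  Converges
    toward the unfiltered field on modes the filter only attenuates
    mildly; it cannot recover modes the filter destroys outright.
    """
    v = np.copy(u_bar)       # current term (I - G)^k u_bar
    u_star = np.copy(u_bar)  # running sum, starts at k = 0 term
    for _ in range(N):
        v = v - filt(v)      # apply (I - G) once more
        u_star = u_star + v  # accumulate the next term
    return u_star
```

On a smooth field passed through a three-point top-hat filter, a few terms of the series recover the original far more closely than the filtered field does.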

  19. Performance testing of a large volume calorimeter

    SciTech Connect

    Bracken, D. S.

    2004-01-01

    Calorimetry is used as a nondestructive assay technique for determining the power output of heat-producing nuclear materials. Calorimetric assay of plutonium-bearing and tritium items routinely obtains the highest precision and accuracy of all nondestructive assay (NDA) techniques, and the power calibration can be traceable to the National Institute of Standards and Technology through certified electrical standards. Because the heat-measurement result is completely independent of material and matrix type, it can be reliably used on any material form or item matrix. The calorimetry measurement is combined with isotopic composition information to determine the correct plutonium content of an item. When an item is unsuitable for neutron or gamma-ray NDA, calorimetric assay is used. Currently, the largest calorimeter capable of measuring plutonium-bearing or tritium items is 36 cm in diameter and 61 cm long. Fabrication of a high-sensitivity large volume calorimeter (LVC) capable of measuring tritium and plutonium-bearing items in 208-l (55-gal) shipping or storage containers has provided a reliable NDA method to measure many difficult-to-measure forms of plutonium and tritium more accurately. This large calorimeter can also be used to make secondary working standards from process material for the calibration of faster NDA techniques. The footprint of the calorimeter is 104 cm wide by 157 cm deep and 196 cm high in the closed position. Space for a standard electronics rack is also necessary for operation of the calorimeter. The maximum item size that can be measured in the LVC is 62 cm in diameter and 100 cm long. The extensive use of heat-flow calorimeters for safeguards-related measurements at DOE facilities makes it important to extend the capability of calorimetric assay of plutonium and tritium items to larger container sizes. Measurement times, precision, measurement threshold, and position sensitivity of the instrument will be discussed.

  20. Radiation from Large Gas Volumes and Heat Exchange in Steam Boiler Furnaces

    SciTech Connect

    Makarov, A. N.

    2015-09-15

    Radiation from large cylindrical gas volumes is studied as a means of simulating the flare in steam boiler furnaces. Calculations of heat exchange in a furnace by the zonal method and by simulation of the flare with cylindrical gas volumes are described. The latter method is more accurate and yields more reliable information on heat transfer processes taking place in furnaces.

  1. Partial volume simulation in software breast phantoms

    NASA Astrophysics Data System (ADS)

    Chen, Feiyu; Pokrajac, David; Shi, Xiquan; Liu, Fengshan; Maidment, Andrew D. A.; Bakic, Predrag R.

    2012-03-01

    A modification to our previous simulation of breast anatomy is proposed, in order to improve the quality of simulated projections generated using software breast phantoms. Anthropomorphic software breast phantoms have been used for quantitative validation of breast imaging systems. Previously, we developed a novel algorithm for breast anatomy simulation, which did not account for the partial volume (PV) of various tissues in a voxel; instead, each phantom voxel was assumed to contain a single tissue type. As a result, phantom projection images displayed notable artifacts near the borders between regions of different materials, particularly at the skin-air boundary. These artifacts diminished the realism of phantom images. One solution is to simulate smaller voxels. Reducing voxel size, however, extends the phantom generation time and increases memory requirements. We achieved an improvement in image quality without reducing voxel size by simulating PV in voxels containing more than one tissue type. The linear x-ray attenuation coefficient of each voxel is calculated by combining the attenuation coefficients of the various tissues in proportion to the voxel subvolumes they occupy. A local planar approximation of the boundary surface is employed, and the skin volume in each voxel is computed by decomposition into simple geometric shapes. An efficient encoding scheme is proposed for the type and proportion of simulated tissues in each voxel. We illustrate the proposed methodology on phantom slices and simulated mammographic projections. Our results show that the PV simulation has improved image quality by reducing quantization artifacts.
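The PV correction described amounts to a subvolume-weighted average of linear attenuation coefficients in each mixed voxel, mu_voxel = sum over i of f_i * mu_i with the fractions summing to one. A minimal sketch follows; the coefficient values in the usage note are illustrative, not calibrated breast-tissue data, and the geometric decomposition that produces the fractions is not reproduced.

```python
import numpy as np

def voxel_attenuation(fractions, mu):
    """Linear attenuation coefficient of a mixed voxel as the
    subvolume-weighted combination mu_voxel = sum_i f_i * mu_i.
    `fractions` are the tissue subvolume fractions and must sum to 1.
    """
    fractions = np.asarray(fractions, dtype=float)
    mu = np.asarray(mu, dtype=float)
    if not np.allclose(fractions.sum(axis=-1), 1.0):
        raise ValueError("subvolume fractions must sum to 1")
    return fractions @ mu
```

A voxel split evenly between a material with mu = 0.8 and air (mu ≈ 0) gets the intermediate value 0.4, which is exactly the smoothing that removes the hard skin-air quantization step.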

  2. Lagrangian volume deformations around simulated galaxies

    NASA Astrophysics Data System (ADS)

    Robles, S.; Domínguez-Tenreiro, R.; Oñorbe, J.; Martínez-Serrano, F. J.

    2015-07-01

    We present a detailed analysis of the local evolution of 206 Lagrangian Volumes (LVs) selected at high redshift around galaxy seeds, identified in a large-volume Λ cold dark matter (ΛCDM) hydrodynamical simulation. The LVs have a mass range of 1-1500 × 10^10 M⊙. We follow the dynamical evolution of the density field inside these initially spherical LVs from z = 10 up to z_low = 0.05, witnessing highly non-linear, anisotropic mass rearrangements within them, leading to the emergence of the local cosmic web (CW). These mass arrangements have been analysed in terms of the reduced inertia tensor I^r_ij, focusing on the evolution of the principal axes of inertia and their corresponding eigendirections, and paying particular attention to the times when the evolution of these two structural elements declines. In addition, mass and component effects along this process have also been investigated. We have found that deformations are led by dark matter dynamics and they transform most of the initially spherical LVs into prolate shapes, i.e. filamentary structures. An analysis of the individual freezing-out time distributions for shapes and eigendirections shows that first most of the LVs fix their three axes of symmetry (like a skeleton) early on, while accretion flows towards them still continue. Very remarkably, we have found that more massive LVs fix their skeleton earlier on than less massive ones. We briefly discuss the astrophysical implications our findings could have, including the galaxy mass-morphology relation and the effects on the galaxy-galaxy merger parameter space, among others.
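The shape diagnostic used here, the reduced inertia tensor I^r_ij = sum over particles of x_i x_j / r², yields axis ratios from its eigenvalues and principal directions from its eigenvectors. A sketch under the assumption of equal-mass particles with positions taken relative to the volume's centre (normalization conventions vary between papers):

```python
import numpy as np

def reduced_inertia_tensor(pos):
    """Reduced inertia tensor I^r_ij = (1/N) sum_n x_i x_j / r_n^2 for
    an (N, 3) array of centred particle positions.  The 1/r^2 weight
    makes the tensor insensitive to the radial mass profile.
    """
    r2 = np.sum(pos**2, axis=1)
    r2 = np.where(r2 == 0, 1.0, r2)  # guard particles exactly at the centre
    return (pos.T / r2) @ pos / len(pos)

def axis_ratios(tensor):
    """Sorted sqrt-eigenvalue ratios (b/a, c/a): both 1 for a sphere,
    both small for a prolate (filamentary) configuration."""
    vals = np.sort(np.linalg.eigvalsh(tensor))[::-1]
    a, b, c = np.sqrt(vals)
    return b / a, c / a
```

Six points on the coordinate axes give the spherical limit (b/a = c/a = 1), while points strung along one axis collapse the minor ratios toward zero, the prolate signature the study reports for most LVs.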

  3. Finite volume hydromechanical simulation in porous media

    NASA Astrophysics Data System (ADS)

    Nordbotten, Jan Martin

    2014-05-01

    Cell-centered finite volume methods are prevailing in numerical simulation of flow in porous media. However, due to the lack of cell-centered finite volume methods for mechanics, coupled flow and deformation is usually treated either by coupled finite-volume-finite-element discretizations, or within a finite element setting. The former approach is unfavorable as it introduces two separate grid structures, while the latter loses the advantages of finite volume methods for the flow equation. Recently, we proposed a cell-centered finite volume method for elasticity. Herein, we explore the applicability of this novel method to provide a compatible finite volume discretization for coupled hydromechanical flow in porous media. We detail in particular the issue of coupling terms, and show how this is naturally handled. Furthermore, we observe how the cell-centered finite volume framework naturally allows for modeling fractured and fracturing porous media through internal boundary conditions. We support the discussion with a set of numerical examples: the convergence properties of the coupled scheme are first investigated; second, we illustrate the practical applicability of the method for both fractured and heterogeneous media.
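The flow half of such a scheme is the classical cell-centered two-point flux approximation (TPFA): interface transmissibilities from harmonic means of neighbouring cell permeabilities, assembled into a linear system for cell pressures. A minimal 1D sketch with Dirichlet ends follows; the paper's novelty, a cell-centered finite volume discretization of elasticity and its coupling terms, is not reproduced here.

```python
import numpy as np

def tpfa_pressure_1d(k, dx, p_left, p_right):
    """Steady 1D Darcy flow by cell-centered TPFA.  `k` holds the cell
    permeabilities; interface transmissibilities use the harmonic mean,
    boundary transmissibilities span the half-cell to each end.
    Returns the cell-centre pressures.
    """
    n = len(k)
    t_int = 2.0 * k[:-1] * k[1:] / (k[:-1] + k[1:]) / dx  # harmonic means
    t_bnd = 2.0 * np.array([k[0], k[-1]]) / dx            # half-cell factors
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n - 1):  # assemble flux balance across each interface
        A[i, i] += t_int[i]
        A[i + 1, i + 1] += t_int[i]
        A[i, i + 1] -= t_int[i]
        A[i + 1, i] -= t_int[i]
    A[0, 0] += t_bnd[0]
    b[0] += t_bnd[0] * p_left
    A[-1, -1] += t_bnd[1]
    b[-1] += t_bnd[1] * p_right
    return np.linalg.solve(A, b)
```

For homogeneous permeability the scheme reproduces the exact linear pressure profile at the cell centres, a standard consistency check for TPFA.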

  4. Finite volume hydromechanical simulation in porous media

    PubMed Central

    Nordbotten, Jan Martin

    2014-01-01

    Cell-centered finite volume methods are prevailing in numerical simulation of flow in porous media. However, due to the lack of cell-centered finite volume methods for mechanics, coupled flow and deformation is usually treated either by coupled finite-volume-finite element discretizations, or within a finite element setting. The former approach is unfavorable as it introduces two separate grid structures, while the latter approach loses the advantages of finite volume methods for the flow equation. Recently, we proposed a cell-centered finite volume method for elasticity. Herein, we explore the applicability of this novel method to provide a compatible finite volume discretization for coupled hydromechanic flows in porous media. We detail in particular the issue of coupling terms, and show how this is naturally handled. Furthermore, we observe how the cell-centered finite volume framework naturally allows for modeling fractured and fracturing porous media through internal boundary conditions. We support the discussion with a set of numerical examples: the convergence properties of the coupled scheme are first investigated; second, we illustrate the practical applicability of the method both for fractured and heterogeneous media. PMID:25574061

  5. Analysis of volume holographic storage allowing large-angle illumination

    NASA Astrophysics Data System (ADS)

    Shamir, Joseph

    2005-05-01

    Advanced technological developments have stimulated renewed interest in volume holography for applications such as information storage and wavelength multiplexing for communications and laser beam shaping. In these and many other applications, the information-carrying wave fronts usually possess narrow spatial-frequency bands, although they may propagate at large angles with respect to each other or a preferred optical axis. Conventional analytic methods are not capable of properly analyzing the optical architectures involved. For mitigation of the analytic difficulties, a novel approximation is introduced to treat narrow spatial-frequency band wave fronts propagating at large angles. This approximation is incorporated into the analysis of volume holography based on a plane-wave decomposition and Fourier analysis. As a result of the analysis, the recently introduced generalized Bragg selectivity is rederived for this more general case and is shown to provide enhanced performance for the above indicated applications. The power of the new theoretical description is demonstrated with the help of specific examples and computer simulations. The simulations reveal some interesting effects, such as coherent motion blur, that were predicted in an earlier publication.

  6. Large space systems technology, 1980, volume 1

    NASA Technical Reports Server (NTRS)

    Kopriver, F., III (Compiler)

    1981-01-01

    The technological and developmental efforts in support of the large space systems technology are described. Three major areas of interest are emphasized: (1) technology pertinent to large antenna systems; (2) technology related to large platform systems; and (3) activities that support both antenna and platform systems.

  7. Large-volume sampling and preconcentration for trace explosives detection.

    SciTech Connect

    Linker, Kevin Lane

    2004-05-01

    A trace explosives detection system typically contains three subsystems: sample collection, preconcentration, and detection. Sample collection of trace explosives (vapor and particulate) through large volumes of airflow helps reduce sampling time while increasing the amount of dilute sample collected. Preconcentration of the collected sample before introduction into the detector improves the sensitivity of the detector because of the increase in sample concentration. By combining large-volume sample collection and preconcentration, an improvement in the detection of explosives is possible. Large-volume sampling and preconcentration is presented using a systems level approach. In addition, the engineering of large-volume sampling and preconcentration for the trace detection of explosives is explained.

  8. SPS large array simulation. [spacetennas

    NASA Technical Reports Server (NTRS)

    Rathjen, S.; Sperber, B. R.; Nalos, E. J.

    1980-01-01

    Three types of computer simulations were developed to study the SPS microwave power transmission system (MPTS). The radially symmetric array simulation is low cost and is utilized to investigate general overall characteristics of the spacetenna at the array level only. "Tiltmain", a subarray level simulation program, is used to study the effects of system errors which modify the far-field pattern. The most recently designed program, "Modmain," takes the detail of simulation down to the RF module level and so to date is the closest numerical model of the reference design.

  9. The persistence of the large volumes in black holes

    NASA Astrophysics Data System (ADS)

    Ong, Yen Chin

    2015-08-01

    Classically, black holes admit maximal interior volumes that grow asymptotically linearly in time. We show that such volumes remain large when Hawking evaporation is taken into account. Even if a charged black hole approaches the extremal limit during this evolution, its volume continues to grow; although an exactly extremal black hole does not have a "large interior". We clarify this point and discuss the implications of our results to the information loss and firewall paradoxes.

  10. Technologies for imaging neural activity in large volumes.

    PubMed

    Ji, Na; Freeman, Jeremy; Smith, Spencer L

    2016-08-26

    Neural circuitry has evolved to form distributed networks that act dynamically across large volumes. Conventional microscopy collects data from individual planes and cannot sample circuitry across large volumes at the temporal resolution relevant to neural circuit function and behaviors. Here we review emerging technologies for rapid volume imaging of neural circuitry. We focus on two critical challenges: the inertia of optical systems, which limits image speed, and aberrations, which restrict the image volume. Optical sampling time must be long enough to ensure high-fidelity measurements, but optimized sampling strategies and point-spread function engineering can facilitate rapid volume imaging of neural activity within this constraint. We also discuss new computational strategies for processing and analyzing volume imaging data of increasing size and complexity. Together, optical and computational advances are providing a broader view of neural circuit dynamics and helping elucidate how brain regions work in concert to support behavior. PMID:27571194

  11. Large volume continuous counterflow dialyzer has high efficiency

    NASA Technical Reports Server (NTRS)

    Mandeles, S.; Woods, E. C.

    1967-01-01

    Dialyzer separates macromolecules from small molecules in large volumes of solution. It takes advantage of the high area/volume ratio in commercially available 1/4-inch dialysis tubing and maintains a high concentration gradient at the dialyzing surface by counterflow.
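The area/volume advantage of small-bore tubing follows from simple geometry: for a cylinder of any length, A/V = πdL / (πd²L/4) = 4/d, so halving the diameter doubles the ratio. A quick check (the unit conversion is mine; the record only specifies 1/4-inch tubing):

```python
# Surface-area-to-volume ratio of cylindrical dialysis tubing: A/V = 4/d.
def area_to_volume_ratio(diameter_cm: float) -> float:
    """A/V for a cylinder of any length: (pi*d*L) / (pi*d^2*L/4) = 4/d."""
    return 4.0 / diameter_cm

quarter_inch_cm = 0.25 * 2.54                 # 1/4 inch = 0.635 cm
print(area_to_volume_ratio(quarter_inch_cm))  # ~6.3 cm^2 per cm^3
```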

  12. Large-Volume Gravid Traps Enhance Collection of Culex Vectors.

    PubMed

    Popko, David A; Walton, William E

    2016-06-01

    Gravid mosquito collections were compared among several large-volume (infusion volume ≥35 liters) gravid trap designs and the small-volume (infusion volume = 6 liters) Centers for Disease Control and Prevention (CDC) gravid trap used routinely by vector control districts for vector and pathogen surveillance. The numbers of gravid Culex quinquefasciatus, Cx. tarsalis, and Cx. stigmatosoma collected by large gravid traps were greater than by the CDC gravid trap during nearly all overnight trials. Large-volume gravid traps collected on average 6.6-fold more adult female Culex mosquitoes compared to small-volume CDC gravid traps across 3 seasons during the 3 years of the studies. The differences in gravid mosquito collections between large- versus small-volume gravid traps were greatest during spring, when 8- to 56-fold more Culex individuals were collected using large-volume gravid traps. The proportion of gravid females in collections did not differ appreciably among the more effective trap designs tested. Important determinants of gravid trap performance were infusion container size and type as well as infusion volume, which determined the distance between the suction trap and the infusion surface. Of lesser importance for gravid trap performance were the number of suction traps, method of suction trap mounting, and infusion concentration. Fermentation of infusions between 1 and 4 wk weakly affected total mosquito collections, with Cx. stigmatosoma collections moderately enhanced by comparatively young and organically enriched infusions. A suction trap mounted above 100 liters of organic infusion housed in a 121-liter black plastic container collected the most gravid mosquitoes over the greatest range of experimental conditions, and a 35-liter infusion with side-mounted suction traps was a promising lesser-volume alternative design. PMID:27280347

  13. Large Eddy Simulation of a Turbulent Jet

    NASA Technical Reports Server (NTRS)

    Webb, A. T.; Mansour, Nagi N.

    2001-01-01

    Here we present the results of a Large Eddy Simulation of a non-buoyant jet issuing from a circular orifice in a wall, and developing in neutral surroundings. The effects of the subgrid scales on the large eddies have been modeled with the dynamic large eddy simulation model applied to the fully 3D domain in spherical coordinates. The simulation captures the unsteady motions of the large-scales within the jet as well as the laminar motions in the entrainment region surrounding the jet. The computed time-averaged statistics (mean velocity, concentration, and turbulence parameters) compare well with laboratory data without invoking an empirical entrainment coefficient as employed by line integral models. The use of the large eddy simulation technique allows examination of unsteady and inhomogeneous features such as the evolution of eddies and the details of the entrainment process.

  14. Large volume behaviour of Yang-Mills propagators

    SciTech Connect

    Fischer, Christian S. Maas, Axel; Pawlowski, Jan M.; Smekal, Lorenz von

    2007-12-15

    We investigate finite volume effects in the propagators of Landau gauge Yang-Mills theory using Dyson-Schwinger equations on a 4-dimensional torus. In particular, we demonstrate explicitly how the solutions for the gluon and the ghost propagator tend towards their respective infinite volume forms in the corresponding limit. This solves an important open problem of previous studies where the infinite volume limit led to an apparent mismatch, especially of the infrared behaviour, between torus extrapolations and the existing infinite volume solutions obtained in 4-dimensional Euclidean space-time. However, the correct infinite volume limit is approached rather slowly. The typical scales necessary to see the onset of the leading infrared behaviour already imply volumes of at least 10-15 fm in length. To reliably extract the infrared exponents of the infinite volume solutions requires much larger ones. While the volumes in the Monte-Carlo simulations available at present are far too small to facilitate that, we obtain a good qualitative agreement of our torus solutions with recent lattice data in comparable volumes.

  15. Large-Eddy Simulation and Multigrid Methods

    SciTech Connect

    Falgout,R D; Naegle,S; Wittum,G

    2001-06-18

    A method to simulate turbulent flows with Large-Eddy Simulation on unstructured grids is presented. Two kinds of dynamic models are used to model the unresolved scales of motion and are compared with each other on different grids. This comparison illustrates the behavior of the models; in addition, adaptive grid refinement is investigated and the parallelization aspect is addressed.

  16. Large-Volume High-Pressure Mineral Physics in Japan

    NASA Astrophysics Data System (ADS)

    Liebermann, Robert C.; Prewitt, Charles T.; Weidner, Donald J.

    American high-pressure research with large sample volumes developed rapidly in the 1950s during the race to produce synthetic diamonds. At that time the piston cylinder, girdle (or belt), and tetrahedral anvil devices were invented. However, this development essentially stopped in the late 1950s, and while the diamond anvil cell has been used extensively in the United States with spectacular success for high-pressure experiments in small sample volumes, most of the significant technological advances in large-volume devices have taken place in Japan. Over the past 25 years, these technical advances have enabled a fourfold increase in pressure, with many important investigations of the chemical and physical properties of materials synthesized at high temperatures and pressures that cannot be duplicated with any apparatus currently available in the United States.

  17. Multiphase control volume finite element simulations of fractured reservoirs

    NASA Astrophysics Data System (ADS)

    Fu, Yao

    With the rapid evolution of hardware and software techniques in the energy sector, reservoir simulation has become a powerful tool for field development planning and reservoir management. Many of the widely used commercial simulators were originally designed for structured grids and implemented with finite difference method (FDM). In recent years, technical advances in griding, fluid modeling, linear solver, reservoir and geological modeling, etc. have created new opportunities. At the same time, new reservoir simulation technology is required for solving large-scale heterogeneous problems. A three-dimensional, three-phase black-oil reservoir simulator has been developed using the control volume finite element (CVFE) formulation. Flux-based upstream weighting is employed to ensure flux continuity. The CVFE method is embedded in a fully-implicit formulation. State-of-the-art parallel, linear solvers are used. The implementation takes advantage of the object-oriented programming capabilities of C++ to provide maximum reuse and extensibility for future students. The results from the simulator have excellent agreement with those from commercial simulators. The convergence properties of the new simulator are verified using the method of manufactured solutions. The pressure and saturation solutions are verified to be first-order convergent as expected. The efficiency of the simulators and their capability to handle real large-scale field models are improved by implementing the models in parallel. Another aspect of the work addressed multiphase flow in fractured reservoirs. The discrete-fracture model is implemented in the simulator. Fractures and faults are represented by lines and planes in two- and three-dimensional spaces, respectively. The difficult task of generating an unstructured mesh for complex domains with fractures and faults is accomplished in this study.
Applications of this model for two-phase and three-phase simulations in a variety of fractured
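The first-order convergence claim above is typically checked with the standard observed-order formula p = log(e_coarse/e_fine) / log(h_coarse/h_fine), comparing errors from the manufactured solution on two grids. The sketch below uses hypothetical error values, not results from this simulator.

```python
import math

# Observed order of accuracy from errors on two grid resolutions, as used
# with the method of manufactured solutions. Error values are hypothetical.
def observed_order(e_coarse: float, e_fine: float,
                   h_coarse: float, h_fine: float) -> float:
    return math.log(e_coarse / e_fine) / math.log(h_coarse / h_fine)

# For a first-order scheme, halving h should roughly halve the error.
print(observed_order(e_coarse=2.0e-3, e_fine=1.0e-3,
                     h_coarse=0.1, h_fine=0.05))  # -> 1.0
```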

  18. Indian LSSC (Large Space Simulation Chamber) facility

    NASA Technical Reports Server (NTRS)

    Brar, A. S.; Prasadarao, V. S.; Gambhir, R. D.; Chandramouli, M.

    1988-01-01

    The Indian Space Agency has undertaken a major project to acquire in-house capability for thermal and vacuum testing of large satellites. This Large Space Simulation Chamber (LSSC) facility will be located in Bangalore and is to be operational in 1989. The facility is capable of providing 4 meter diameter solar simulation with provision to expand to 4.5 meter diameter at a later date. With such provisions as controlled variations of shroud temperatures and availability of infrared equipment as alternative sources of thermal radiation, this facility will be amongst the finest anywhere. The major design concept and major aspects of the LSSC facility are presented here.

  19. Molecular dynamics simulations of large macromolecular complexes

    PubMed Central

    Perilla, Juan R.; Goh, Boon Chong; Cassidy, C. Keith; Liu, Bo; Bernardi, Rafael C.; Rudack, Till; Yu, Hang; Wu, Zhe; Schulten, Klaus

    2015-01-01

    Connecting dynamics to structural data from diverse experimental sources, molecular dynamics simulations permit the exploration of biological phenomena in unparalleled detail. Advances in simulations are moving the atomic resolution descriptions of biological systems into the million-to-billion atom regime, in which numerous cell functions reside. In this opinion, we review the progress, driven by large-scale molecular dynamics simulations, in the study of viruses, ribosomes, bioenergetic systems, and other diverse applications. These examples highlight the utility of molecular dynamics simulations in the critical task of relating atomic detail to the function of supramolecular complexes, a task that cannot be achieved by smaller-scale simulations or existing experimental approaches alone. PMID:25845770

  20. Large discharge-volume, silent discharge spark plug

    DOEpatents

    Kang, Michael

    1995-01-01

    A large discharge-volume spark plug for providing self-limiting microdischarges. The apparatus includes a generally spark plug-shaped arrangement of a pair of electrodes, where either of the two coaxial electrodes is substantially shielded by a dielectric barrier from a direct discharge from the other electrode, the unshielded electrode and the dielectric barrier forming an annular volume in which self-terminating microdischarges occur when alternating high voltage is applied to the center electrode. The large area over which the discharges occur, and the large number of possible discharges within the period of an engine cycle, make the present silent discharge plasma spark plug suitable for use as an ignition source for engines. Where a single discharge is effective in causing ignition of the combustible gases, a conventional single-polarity, single-pulse spark plug voltage supply may be used.

  1. Real-time simulation of large-scale floods

    NASA Astrophysics Data System (ADS)

    Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.

    2016-08-01

    Given the complexity of real-time water conditions, real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional, shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance numerical stability, and an adaptive method is proposed to improve running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.

  2. Experimental Simulations of Large-Scale Collisions

    NASA Technical Reports Server (NTRS)

    Housen, Kevin R.

    2002-01-01

    This report summarizes research on the effects of target porosity on the mechanics of impact cratering. Impact experiments conducted on a centrifuge provide direct simulations of large-scale cratering on porous asteroids. The experiments show that large craters in porous materials form mostly by compaction, with essentially no deposition of material into the ejecta blanket that is a signature of cratering in less-porous materials. The ratio of ejecta mass to crater mass is shown to decrease with increasing crater size or target porosity. These results are consistent with the observation that large closely-packed craters on asteroid Mathilde appear to have formed without degradation to earlier craters.

  3. Spatial considerations during cryopreservation of a large volume sample.

    PubMed

    Kilbride, Peter; Lamb, Stephen; Milne, Stuart; Gibbons, Stephanie; Erro, Eloy; Bundy, James; Selden, Clare; Fuller, Barry; Morris, John

    2016-08-01

    There have been relatively few studies on the implications of the physical conditions experienced by cells during large volume (litres) cryopreservation - most studies have focused on the problem of cryopreservation of smaller volumes, typically up to 2 ml. This study explores the effects of ice growth by progressive solidification, generally seen during larger scale cryopreservation, on encapsulated liver hepatocyte spheroids, and it develops a method to reliably sample different regions across the frozen cores of samples experiencing progressive solidification. These issues are examined in the context of a Bioartificial Liver Device which requires cryopreservation of a 2 L volume in a strict cylindrical geometry for optimal clinical delivery. Progressive solidification cannot be avoided in this arrangement. In such a system optimal cryoprotectant concentrations and cooling rates are known. However, applying these parameters to a large volume is challenging due to the thermal mass and subsequent thermal lag. The specific impact of this to the cryopreservation outcome is required. Under conditions of progressive solidification, the spatial location of Encapsulated Liver Spheroids had a strong impact on post-thaw recovery. Cells in areas first and last to solidify demonstrated significantly impaired post-thaw function, whereas areas solidifying through the majority of the process exhibited higher post-thaw outcome. It was also found that samples where the ice thawed more rapidly had greater post-thaw viability 24 h post-thaw (75.7 ± 3.9% and 62.0 ± 7.2% respectively). These findings have implications for the cryopreservation of large volumes with a rigid shape and for the cryopreservation of a Bioartificial Liver Device. PMID:27256662

  4. Large volume multiple-path nuclear pumped laser

    SciTech Connect

    Hohl, F.; Deyoung, R.J.

    1981-11-01

    Large volumes of gas are excited by using internal high reflectance mirrors that are arranged so that the optical path crosses back and forth through the excited gaseous medium. By adjusting the external dielectric mirrors of the laser, the number of paths through the laser cavity can be varied. Output powers were obtained that are substantially higher than the output powers of previous nuclear laser systems. Official Gazette of the U.S. Patent and Trademark Office

  5. AdS/CFT and Large-N Volume Independence

    SciTech Connect

    Poppitz, Erich; Unsal, Mithat; /SLAC /Stanford U., Phys. Dept.

    2010-08-26

    We study the Eguchi-Kawai reduction in the strong-coupling domain of gauge theories via the gravity dual of N=4 super-Yang-Mills on R{sup 3} x S{sup 1}. We show that D-branes geometrize volume independence in the center-symmetric vacuum and give supergravity predictions for the range of validity of reduced large-N models at strong coupling.

  6. Large eddy simulation in the ocean

    NASA Astrophysics Data System (ADS)

    Scotti, Alberto

    2010-12-01

    Large eddy simulation (LES) is a relative newcomer to oceanography. In this review, both applications of traditional LES to oceanic flows and new oceanic LES still in an early stage of development are discussed. The survey covers LES applied to boundary layer flows, traditionally an area where LES has provided considerable insight into the physics of the flow, as well as more innovative applications, where new SGS closure schemes need to be developed. The merging of LES with large-scale models is also briefly reviewed.

  7. Large eddy simulations of compressible turbulent flows

    NASA Technical Reports Server (NTRS)

    Porter-Locklear, Freda

    1995-01-01

    An evaluation of existing models for Large Eddy Simulations (LES) of incompressible turbulent flows has been completed. LES is a computation in which the contribution of the large, energy-carrying structures to momentum and energy transfer is computed exactly, and only the effect of the smallest scales of turbulence is modeled. That is, the large eddies are computed and the smaller eddies are modeled. The dynamics of the largest eddies are believed to account for most of the sound generation and transport properties in a turbulent flow. LES analysis is based on the observation that pressure, velocity, temperature, and other variables are the sum of their large-scale and small-scale parts. For instance, u(i) (velocity) can be written as the sum of bar-u(i) and u(i)-prime, where bar-u(i) is the large-scale and u(i)-prime is the subgrid-scale (SGS) part. The governing equations for large eddies in compressible flows are obtained after filtering the continuity, momentum, and energy equations, and recasting in terms of Favre averages. The filtering operation maintains only large scales. The effects of the small scales are present in the governing equations through the SGS stress tensor tau(ij) and SGS heat flux q(i). The mathematical formulation of the Favre-averaged equations of motion for LES is complete.
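The large-scale/small-scale split described above (u = bar-u + u-prime) can be illustrated with a simple top-hat filter on a synthetic 1D signal. This is an incompressible-style sketch with made-up data, not the Favre-weighted filtering used for compressible flow.

```python
import numpy as np

# Illustrative LES-style decomposition u = u_bar + u_prime using a top-hat
# (box) filter on a synthetic 1D "velocity" signal. The signal and filter
# width are made up; real LES filters act on 3D fields.
rng = np.random.default_rng(0)
n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u = np.sin(x) + 0.2 * rng.standard_normal(n)  # large scale + fine-scale noise

width = 9                                      # filter width in grid points
kernel = np.ones(width) / width
u_bar = np.convolve(np.pad(u, width // 2, mode="wrap"), kernel, mode="valid")
u_prime = u - u_bar                            # subgrid-scale part

# The decomposition is exact by construction, and filtering removes variance.
assert np.allclose(u_bar + u_prime, u)
print(u_prime.std() < u.std())                 # -> True
```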

  8. Parallel Rendering of Large Time-Varying Volume Data

    NASA Technical Reports Server (NTRS)

    Garbutt, Alexander E.

    2005-01-01

    Interactive visualization of large time-varying 3D volume datasets has been and still is a great challenge to the modern computational world. It stretches the limits of the memory capacity, the disk space, the network bandwidth and the CPU speed of a conventional computer. In this SURF project, we propose to develop a parallel volume rendering program on SGI's Prism, a cluster computer equipped with state-of-the-art graphic hardware. The proposed program combines both parallel computing and hardware rendering in order to achieve an interactive rendering rate. We use 3D texture mapping and a hardware shader to implement 3D volume rendering on each workstation. We use SGI's VisServer to enable remote rendering using Prism's graphic hardware. And last, we will integrate this new program with ParVox, a parallel distributed visualization system developed at JPL. At the end of the project, we will demonstrate remote interactive visualization using this new hardware volume renderer on JPL's Prism System using a time-varying dataset from selected JPL applications.

  9. Simulating Pressure Effects of High-Flow Volumes

    NASA Technical Reports Server (NTRS)

    Kaufman, M.

    1985-01-01

    Dynamic test stresses are realized without high-volume pumps. Assembled in sections in the gas-flow passage, a contoured mandrel restricts the flow rate to a value convenient for testing and spatially varies the pressure on the passage walls to simulate the operating-pressure profile. Realistic test pressures are thereby achieved without extremely high flow volumes.

  10. Concentration of coliphages from large volumes of water and wastewater.

    PubMed Central

    Goyal, S M; Zerda, K S; Gerba, C P

    1980-01-01

    Membrane filter adsorption-elution technology has been extensively used for the concentration and detection of animal viruses from large volumes of water. This study describes the development of positively charged microporous filters (Zeta Plus) for the concentration of coliphages from large volumes of water and wastewater. Four different coliphages were studied: MS-2, phi X174, T2, and T4. Positively charged microporous filters were found to efficiently adsorb these coliphages from tap water, sewage, and lake water at neutral pH. Adsorbed viruses were eluted with a 1:1 mixture of 8% beef extract and 1 M sodium chloride at pH 9. Using this method, coliphages could be concentrated from 17-liter volumes of tap water with recoveries ranging from 34 to 100%. Coliphages occurring naturally in raw and secondarily treated sewage were recovered with average efficiencies of 56.5 and 55.0%, respectively. This method should be useful for the isolation of rare phages, for studies of the ecology of phages in natural waters, and for the evaluation of water quality. PMID:7356323

  11. Population generation for large-scale simulation

    NASA Astrophysics Data System (ADS)

    Hannon, Andrew C.; King, Gary; Morrison, Clayton; Galstyan, Aram; Cohen, Paul

    2005-05-01

    Computer simulation is used to research phenomena ranging from the structure of the space-time continuum to population genetics and future combat [1-3]. Multi-agent simulations in particular are now commonplace in many fields [4, 5]. By modeling populations whose complex behavior emerges from individual interactions, these simulations help to answer questions about effects where closed form solutions are difficult to solve or impossible to derive [6]. To be useful, simulations must accurately model the relevant aspects of the underlying domain. In multi-agent simulation, this means that the modeling must include both the agents and their relationships. Typically, each agent can be modeled as a set of attributes drawn from various distributions (e.g., height, morale, intelligence and so forth). Though these can interact - for example, agent height is related to agent weight - they are usually independent. Modeling relations between agents, on the other hand, adds a new layer of complexity, and tools from graph theory and social network analysis are finding increasing application [7, 8]. Recognizing the role and proper use of these techniques, however, remains the subject of ongoing research. We recently encountered these complexities while building large scale social simulations [9-11]. One of these, the Hats Simulator, is designed to be a lightweight proxy for intelligence analysis problems. Hats models a "society in a box" consisting of many simple agents, called hats. Hats gets its name from the classic spaghetti western, in which the heroes and villains are known by the color of the hats they wear. The Hats society also has its heroes and villains, but the challenge is to identify which color hat they should be wearing based on how they behave. There are three types of hats: benign hats, known terrorists, and covert terrorists. Covert terrorists look just like benign hats but act like terrorists. Population structure can make covert hat identification significantly more

  12. Statistical Ensemble of Large Eddy Simulations

    NASA Technical Reports Server (NTRS)

    Carati, Daniele; Rogers, Michael M.; Wray, Alan A.; Mansour, Nagi N. (Technical Monitor)

    2001-01-01

    A statistical ensemble of large eddy simulations (LES) is run simultaneously for the same flow. The information provided by the different large scale velocity fields is used to propose an ensemble averaged version of the dynamic model. This produces local model parameters that only depend on the statistical properties of the flow. An important property of the ensemble averaged dynamic procedure is that it does not require any spatial averaging and can thus be used in fully inhomogeneous flows. Also, the ensemble of LES's provides statistics of the large scale velocity that can be used for building new models for the subgrid-scale stress tensor. The ensemble averaged dynamic procedure has been implemented with various models for three flows: decaying isotropic turbulence, forced isotropic turbulence, and the time developing plane wake. It is found that the results are almost independent of the number of LES's in the statistical ensemble provided that the ensemble contains at least 16 realizations.
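The ensemble-averaged dynamic procedure replaces spatial averaging with averaging over realizations: at every point, the model coefficient is C(x) = <L_ij M_ij>_ens / <M_ij M_ij>_ens, so it stays local and works in fully inhomogeneous flows. A minimal sketch using random stand-ins for the Germano-identity contractions (not actual LES fields):

```python
import numpy as np

# Sketch of the ensemble-averaged dynamic procedure: average the
# Germano-identity contractions over ensemble members at each grid point,
# with no spatial averaging. L and M below are random stand-ins with a
# built-in "true" coefficient of 0.1, not real resolved-stress tensors.
rng = np.random.default_rng(1)
n_members, nx = 16, 64                 # 16 LES realizations, 64 grid points

M = rng.standard_normal((n_members, nx))
L = 0.1 * M + 0.02 * rng.standard_normal((n_members, nx))

num = (L * M).mean(axis=0)             # ensemble average of L_ij M_ij
den = (M * M).mean(axis=0)             # ensemble average of M_ij M_ij
C = num / den                          # one local coefficient per grid point

print(C.shape)                         # (64,): local, no spatial averaging
print(abs(C.mean() - 0.1) < 0.05)      # recovers the built-in coefficient
```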

  13. Simulations of Large Scale Structures in Cosmology

    NASA Astrophysics Data System (ADS)

    Liao, Shihong

    Large-scale structures are powerful probes for cosmology. Due to the long range and non-linear nature of gravity, the formation of cosmological structures is a very complicated problem. The only known viable solution is cosmological N-body simulations. In this thesis, we use cosmological N-body simulations to study structure formation, particularly dark matter haloes' angular momenta and dark matter velocity field. The origin and evolution of angular momenta is an important ingredient for the formation and evolution of haloes and galaxies. We study the time evolution of the empirical angular momentum - mass relation for haloes to offer a more complete picture about its origin, dependences on cosmological models and nonlinear evolutions. We also show that haloes follow a simple universal specific angular momentum profile, which is useful in modelling haloes' angular momenta. The dark matter velocity field will become a powerful cosmological probe in the coming decades. However, theoretical predictions of the velocity field rely on N-body simulations and thus may be affected by numerical artefacts (e.g. finite box size, softening length and initial conditions). We study how such numerical effects affect the predicted pairwise velocities, and we propose a theoretical framework to understand and correct them. Our results will be useful for accurately comparing N-body simulations to observational data of pairwise velocities.

  14. Large eddy simulations in 2030 and beyond

    PubMed Central

    Piomelli, U

    2014-01-01

    Since its introduction in the early 1970s, large-eddy simulation (LES) has advanced considerably, and its application is transitioning from the academic environment to industry. Several landmark developments can be identified over the past 40 years, such as the wall-resolved simulations of wall-bounded flows, the development of advanced models for the unresolved scales that adapt to the local flow conditions, and the hybridization of LES with the solution of the Reynolds-averaged Navier–Stokes equations. Thanks to these advancements, LES is now in widespread use in the academic community and is an option available in most commercial flow solvers. This paper will try to predict what algorithmic and modelling advancements are needed to make it even more robust and inexpensive, and which areas show the most promise. PMID:25024415

  15. Large Scale Quantum Simulations of Nuclear Pasta

    NASA Astrophysics Data System (ADS)

    Fattoyev, Farrukh J.; Horowitz, Charles J.; Schuetrumpf, Bastian

    2016-03-01

    Complex and exotic nuclear geometries, collectively referred to as "nuclear pasta", are expected to exist naturally in the crust of neutron stars and in supernova matter. Using a set of self-consistent microscopic nuclear energy density functionals, we present the first results of large scale quantum simulations of pasta phases at baryon densities 0.03 < ρ < 0.10 fm⁻³ and proton fractions 0.05. These simulations, in particular, allow us to also study the role and impact of the nuclear symmetry energy on these pasta configurations. This work is supported in part by DOE Grants DE-FG02-87ER40365 (Indiana University) and DE-SC0008808 (NUCLEI SciDAC Collaboration).

  16. The Large Area Pulsed Solar Simulator (LAPSS)

    NASA Technical Reports Server (NTRS)

    Mueller, R. L.

    1993-01-01

    A Large Area Pulsed Solar Simulator (LAPSS) has been installed at JPL. It is primarily intended to be used to illuminate and measure the electrical performance of photovoltaic devices. The simulator, originally manufactured by Spectrolab, Sylmar, California, occupies an area measuring about 3 meters wide by 12 meters long. The data acquisition and data processing subsystems have been modernized. Tests on the LAPSS performance resulted in better than +/- 2 percent uniformity of irradiance at the test plane and better than +/- 0.3 percent measurement repeatability after warm-up. Glass absorption filters are used to reduce the level of ultraviolet light emitted from the xenon flash lamps. This provides a close match to standard airmass zero and airmass 1.5 spectral irradiance distributions. The 2 millisecond light pulse prevents heating of the device under test, resulting in more reliable temperature measurements. Overall, excellent electrical performance measurements have been made of many different types and sizes of photovoltaic devices.

  17. Large-Volume Microfluidic Cell Sorting for Biomedical Applications.

    PubMed

    Warkiani, Majid Ebrahimi; Wu, Lidan; Tay, Andy Kah Ping; Han, Jongyoon

    2015-01-01

    Microfluidic cell-separation technologies have been studied for almost two decades, but the limited throughput has restricted their impact and range of application. Recent advances in microfluidics enable high-throughput cell sorting and separation, and this has led to various novel diagnostic and therapeutic applications that previously had been impossible to implement using microfluidics technologies. In this review, we focus on recent progress made in engineering large-volume microfluidic cell-sorting methods and the new applications enabled by them. PMID:26194427

  18. Flight Simulation Model Exchange. Volume 2; Appendices

    NASA Technical Reports Server (NTRS)

    Murri, Daniel G.; Jackson, E. Bruce

    2011-01-01

    The NASA Engineering and Safety Center Review Board sponsored an assessment of the draft Standard, Flight Dynamics Model Exchange Standard, BSR/ANSI-S-119-201x (S-119) that was conducted by simulation and guidance, navigation, and control engineers from several NASA Centers. The assessment team reviewed the conventions and formats spelled out in the draft Standard and the actual implementation of two example aerodynamic models (a subsonic F-16 and the HL-20 lifting body) encoded in the Extensible Markup Language grammar. During the implementation, the team kept records of lessons learned and provided feedback to the American Institute of Aeronautics and Astronautics Modeling and Simulation Technical Committee representative. This document contains the appendices to the main report.

  19. Flight Simulation Model Exchange. Volume 1

    NASA Technical Reports Server (NTRS)

    Murri, Daniel G.; Jackson, E. Bruce

    2011-01-01

    The NASA Engineering and Safety Center Review Board sponsored an assessment of the draft Standard, Flight Dynamics Model Exchange Standard, BSR/ANSI-S-119-201x (S-119) that was conducted by simulation and guidance, navigation, and control engineers from several NASA Centers. The assessment team reviewed the conventions and formats spelled out in the draft Standard and the actual implementation of two example aerodynamic models (a subsonic F-16 and the HL-20 lifting body) encoded in the Extensible Markup Language grammar. During the implementation, the team kept records of lessons learned and provided feedback to the American Institute of Aeronautics and Astronautics Modeling and Simulation Technical Committee representative. This document contains the results of the assessment.

  20. The Simulation of a Jumbo Jet Transport Aircraft. Volume 2: Modeling Data

    NASA Technical Reports Server (NTRS)

    Hanke, C. R.; Nordwall, D. R.

    1970-01-01

    The manned simulation of a large transport aircraft is described. Aircraft and systems data necessary to implement the mathematical model described in Volume I are presented, along with a discussion of how these data are used in the model. The results of the real-time computations in the NASA Ames Research Center Flight Simulator for Advanced Aircraft are shown and compared to flight test data and to the results obtained in a training simulator known to be satisfactory.

  1. Measurement of large liquid volumes by turbine meters

    SciTech Connect

    Jakubenas, P.P.

    1996-09-01

    Traditionally the petroleum industry has used turbine meters for custody transfer measurement of large volumes of low-viscosity products, but more recently the trend is to apply turbine meters to higher-viscosity fluids, particularly crude oils. This trend is to a great extent prompted by analysis of initial capital outlay only, rather than total cost of ownership, as the initial cost of the turbine meter itself is considerably less than that of a positive displacement meter of equal flow capacity. However, another reason the trend is continuing is related to technological advances. This paper will address meter selection basics, turbine meter theory, and the recent technological advances that may permit the use of turbine meters in applications for which they heretofore could not be considered. Also, the difficult-to-identify operational costs that may occur when using large turbine meters on high-viscosity products will be discussed.
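
The basic turbine-meter arithmetic behind custody transfer is a pulse count divided by a calibration K-factor, optionally corrected by a meter factor from proving; a minimal sketch (all numeric values below are made up for illustration, not from the paper):

```python
def metered_volume(pulse_count, k_factor, meter_factor=1.0):
    """Gross volume indicated by a turbine meter pulse train.

    k_factor     : pulses per unit volume, from the meter's calibration
    meter_factor : correction established by proving against a known volume
    """
    return meter_factor * pulse_count / k_factor

# 1,000,000 pulses at a nominal 1000 pulses/gallon, proved meter factor 1.0002
volume = metered_volume(1_000_000, 1000.0, meter_factor=1.0002)
```

Viscosity effects of the kind the paper discusses show up as a shift in the effective K-factor with Reynolds number, which is why proving on the actual product matters for crude-oil service.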

  2. Measurement of large liquid volumes by turbine meters

    SciTech Connect

    Jakubenas, P.P.

    1995-12-01

    Traditionally the petroleum industry has used turbine meters for custody transfer measurement of large volumes of low-viscosity products, but more recently the trend is to apply turbine meters to higher-viscosity fluids, particularly crude oils. This trend is to a great extent prompted by analysis of initial capital outlay only, rather than total cost of ownership, as the initial cost of the turbine meter itself is considerably less than that of a positive displacement meter of equal flow capacity. However, another reason the trend is continuing is related to technological advances. This paper will address meter selection basics, turbine meter theory, and the recent technological advances that may permit the use of turbine meters in applications for which they heretofore could not be considered. Also, the difficult-to-identify operational costs that may occur when using large turbine meters on high-viscosity products will be discussed.

  3. The Large Area Pulsed Solar Simulator (LAPSS)

    NASA Technical Reports Server (NTRS)

    Mueller, R. L.

    1994-01-01

    The Large Area Pulsed Solar Simulator (LAPSS) has been installed at JPL. It is primarily intended to be used to illuminate and measure the electrical performance of photovoltaic devices. The simulator, originally manufactured by Spectrolab, Sylmar, CA, occupies an area measuring about 3 m wide x 12 m long. The data acquisition and data processing subsystems have been modernized. Tests on the LAPSS performance resulted in better than plus or minus 2 percent uniformity of irradiance at the test plane and better than plus or minus 0.3 percent measurement repeatability after warm-up. Glass absorption filters reduce the ultraviolet light emitted from the xenon flash lamps. This results in a close match to three different standard airmass zero and airmass 1.5 spectral irradiances. The 2-ms light pulse prevents heating of the device under test, resulting in more reliable temperature measurements. Overall, excellent electrical performance measurements have been made of many different types and sizes of photovoltaic devices. Since the original printing of this publication, in 1993, the LAPSS has been operational and new capabilities have been added. This revision includes a new section relating to the installation of a method to measure the I-V curve of a solar cell or array exhibiting a large effective capacitance. Another new section has been added relating to new capabilities for plotting single and multiple I-V curves, and for archiving the I-V data and test parameters. Finally, a section has been added regarding the data acquisition electronics calibration.

  4. Phase Equilibria Impetus For Large-Volume Explosive Volcanic Eruptions

    NASA Astrophysics Data System (ADS)

    Fowler, S. J.; Spera, F. J.; Bohrson, W. A.; Ghiorso, M. S.

    2006-12-01

    We have investigated the phase equilibria and associated variations in melt and magma thermodynamic and transport properties of seven large-volume silicic explosive volcanic systems through application of the MELTS (Ghiorso & Sack, 1995) algorithm. Each calculation is based on fractional crystallization along an oxygen buffer at low pressure (0.1 - 0.3 GPa), starting from a mafic parental liquid. Site-specific geological constraints provide starting conditions for each system. We have performed calculations for seven tuffs: the Otowi (~400 km³) and Tshirege (~200 km³) members of the Bandelier Tuff, the ~600 km³ Bishop Tuff, and the 2500, 300, and 1000 km³ Yellowstone high-silica rhyolite tuffs. These represent the six largest eruptions within North America over the past ~2 million years. The seventh tuff, the 39.3 ka Campanian Ignimbrite, a 200 km³ trachytic to phonolitic ignimbrite located near Naples, Italy, is the largest explosive eruption in the Mediterranean area in the last 200 kyr. In all cases, MELTS faithfully tracks the liquid line of descent as well as the identity and composition of phenocrysts. The largest discrepancy between predicted and observed melt compositions is for CaO in all calculations. A key characteristic of each system is a pseudoinvariant temperature, Tinv, where abrupt shifts in crystallinity (1-fm, where fm is the fraction of melt), volume fraction of supercritical fluid (θ), magma compressibility, melt and magma density, and viscosity occur over a small temperature interval of order 1-10 K. In particular, the volume fraction of vapor increases from θ ~0.1 just below Tinv to θ >0.7 just above Tinv in each case. The rheological transition between melt-dominated (high-viscosity) and bubble-dominated (low-viscosity) magma occurs at θ ~0.6. We emphasize that this effect is observed under isobaric conditions and is distinct from the oft-studied phenomenon of volatile exsolution accompanying magma decompression and subsequent

  5. Large Eddy Simulation of Cirrus Clouds

    NASA Technical Reports Server (NTRS)

    Wu, Ting; Cotton, William R.

    1999-01-01

    The Regional Atmospheric Modeling System (RAMS) with mesoscale interactive nested grids and a Large-Eddy Simulation (LES) version of RAMS, coupled to two-moment microphysics and a new two-stream radiative code, were used to investigate the dynamic, microphysical, and radiative aspects of the November 26, 1991 cirrus event. Wu (1998) describes the results of that research in full detail and is enclosed as Appendix 1. The mesoscale nested-grid simulation successfully reproduced the large-scale circulation as compared to the Mesoscale Analysis and Prediction System's (MAPS) analyses and other observations. Three cloud bands which match nicely the three cloud lines identified in an observational study (Mace et al., 1995) are predicted on Grid #2 of the nested grids, even though the mesoscale simulation predicts a larger west-east cloud width than what was observed. Large-eddy simulations (LES) were performed to study the dynamical, microphysical, and radiative processes in the 26 November 1991 FIRE II cirrus event. The LES model is based on RAMS version 3b developed at Colorado State University. It includes a new radiation scheme developed by Harrington (1997) and a new subgrid-scale model developed by Kosovic (1996). The LES model simulated a single cloud layer for Case 1 and a two-layer cloud structure for Case 2. The simulations demonstrated that latent heat release can play a significant role in the formation and development of cirrus clouds. For the thin cirrus in Case 1, the latent heat release was insufficient for the cirrus clouds to become positively buoyant. However, in some special cases such as Case 2, positively buoyant cells can be embedded within the cirrus layers. These cells were so active that the rising updraft induced its own pressure perturbations that affected the cloud evolution.
    Vertical profiles of the total radiative and latent heating rates indicated that for well-developed, deep, and active cirrus clouds, radiative cooling and latent

  6. Large-eddy simulations with wall models

    NASA Technical Reports Server (NTRS)

    Cabot, W.

    1995-01-01

    The near-wall viscous and buffer regions of wall-bounded flows generally require a large expenditure of computational resources to be resolved adequately, even in large-eddy simulation (LES). Often as many as 50% of the grid points in a computational domain are devoted to these regions. The dense grids that this implies also generally require small time steps for numerical stability and/or accuracy. It is commonly assumed that the inner wall layers are near equilibrium, so that the standard logarithmic law can be applied as the boundary condition for the wall stress well away from the wall, for example in the logarithmic region, obviating the need to expend large numbers of grid points and large amounts of computational time in this region. This approach is commonly employed in LES of planetary boundary layers, and it has also been used for some simple engineering flows. In order to calculate accurately a wall-bounded flow with coarse wall resolution, one requires the wall stress as a boundary condition. The goal of this work is to determine the extent to which equilibrium and boundary layer assumptions are valid in the near-wall regions, to develop models for the inner layer based on such assumptions, and to test these modeling ideas in some relatively simple flows with different pressure gradients, such as channel flow and flow over a backward-facing step. Ultimately, models that perform adequately in these situations will be applied to more complex flow configurations, such as an airfoil.
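
An equilibrium wall model of the kind described above amounts to inverting the log law for the friction velocity: given the LES velocity U sampled at a height y off the wall, solve U/u_tau = (1/κ) ln(y u_tau/ν) + B and return τ_w = ρ u_tau². A minimal sketch with the standard constants κ = 0.41 and B = 5.2 (initial guess and iteration count are my choices):

```python
import math

def wall_stress_loglaw(U, y, nu, rho=1.0, kappa=0.41, B=5.2, iters=50):
    """Equilibrium wall-model stress from the logarithmic law.

    Solves U/u_tau = (1/kappa) ln(y u_tau / nu) + B for the friction
    velocity u_tau by fixed-point iteration, then returns tau_w = rho u_tau^2.
    U is the LES velocity at the matching height y, assumed to lie in the
    logarithmic region.
    """
    u_tau = max(1e-6, 0.05 * U)                  # crude initial guess
    for _ in range(iters):
        u_tau = U / (math.log(y * u_tau / nu) / kappa + B)
    return rho * u_tau ** 2

# e.g. channel-like conditions: U = 20 at y = 0.1 with nu = 1e-5
tau_w = wall_stress_loglaw(20.0, 0.1, 1e-5)
```

The iteration is a contraction for log-layer conditions, so a few dozen iterations converge to machine precision; the resulting τ_w is imposed as the wall boundary condition on the coarse LES grid.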

  7. Cardiovascular simulator improvement: pressure versus volume loop assessment.

    PubMed

    Fonseca, Jeison; Andrade, Aron; Nicolosi, Denys E C; Biscegli, José F; Leme, Juliana; Legendre, Daniel; Bock, Eduardo; Lucchi, Julio Cesar

    2011-05-01

    This article presents improvements to a physical cardiovascular simulator (PCS) system. The intraventricular pressure versus intraventricular volume (PxV) loop was obtained to evaluate the performance of a pulsatile chamber mimicking the human left ventricle. The PxV loop shows heart contractility and is normally used to evaluate heart performance. In many heart diseases, the stroke volume decreases because of low heart contractility. This pathological situation must be simulated by the PCS in order to evaluate the assistance provided by a ventricular assist device (VAD). The PCS system is automatically controlled by a computer and is an auxiliary tool for developing VAD control strategies. The PCS is based on a Windkessel model in which lumped parameters are used for cardiovascular system analysis; peripheral resistance, arterial compliance, and fluid inertance are simulated. The simulator has an actuator with a roller screw and brushless direct current motor, and the stroke volume is regulated by the actuator displacement. Internal pressure and volume measurements are monitored to obtain the PxV loop. The left-chamber internal pressure is obtained directly by a pressure transducer; the internal volume, however, is obtained indirectly using a linear variable differential transformer that senses the diaphragm displacement, with correlations made between internal volume and diaphragm position. LabVIEW integrates these signals and displays the pressure versus internal volume loop. The results obtained from the PCS system show PxV loops at different ventricular elastances, making possible the simulation of pathological situations. A preliminary test with a pulsatile VAD attached to the PCS system was also performed.
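
The lumped-parameter idea can be illustrated with the simplest two-element Windkessel afterload, C dP/dt = Q(t) - P/R, integrated explicitly (parameter values and units below are illustrative textbook numbers, not the simulator's actual settings):

```python
def simulate_windkessel(flow, R=1.0, C=1.5, dt=1e-3, t_end=10.0, p0=0.0):
    """Two-element Windkessel afterload: C dP/dt = Q(t) - P/R.

    flow : callable t -> inflow Q(t) [mL/s]
    R    : peripheral resistance [mmHg*s/mL]
    C    : arterial compliance   [mL/mmHg]
    Explicit Euler integration; returns a list of (t, P) samples.
    """
    P, t, out = p0, 0.0, []
    for _ in range(int(t_end / dt)):
        P += dt * (flow(t) - P / R) / C
        t += dt
        out.append((t, P))
    return out

# with a constant inflow, pressure relaxes to Q*R with time constant R*C
traj = simulate_windkessel(lambda t: 90.0)
```

Adding a pulsatile `flow` (systolic ejection, zero in diastole) reproduces the familiar diastolic pressure decay, which is the behavior the physical simulator's hydraulic elements emulate.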

  8. Autonomic Closure for Large Eddy Simulation

    NASA Astrophysics Data System (ADS)

    King, Ryan; Hamlington, Peter; Dahm, Werner J. A.

    2015-11-01

    A new autonomic subgrid-scale closure has been developed for large eddy simulation (LES). The approach poses a supervised learning problem that captures nonlinear, nonlocal, and nonequilibrium turbulence effects without specifying a predefined turbulence model. By solving a regularized optimization problem on test filter scale quantities, the autonomic approach identifies a nonparametric function that represents the best local relation between subgrid stresses and resolved state variables. The optimized function is then applied at the grid scale to determine unknown LES subgrid stresses by invoking scale similarity in the inertial range. A priori tests of the autonomic approach on homogeneous isotropic turbulence show that the new approach is amenable to powerful optimization and machine learning methods and is successful for a wide range of filter scales in the inertial range. In these a priori tests, the autonomic closure substantially improves upon the dynamic Smagorinsky model in capturing the instantaneous, statistical, and energy transfer properties of the subgrid stress field.
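
The regularized optimization at the heart of the approach can be sketched as a ridge regression from resolved-state features to test-filter stresses, with the fitted relation then applied at the grid scale by scale similarity (shapes, names, and the linear feature basis here are illustrative; the actual closure uses a far richer nonparametric basis):

```python
import numpy as np

def fit_local_closure(V, T, lam=1e-3):
    """Regularized least-squares fit of subgrid stress on resolved variables.

    V : (n_samples, n_features) resolved-state features at the test-filter scale
    T : (n_samples, n_stress)   measured test-filter stresses
    Returns coefficients h minimizing ||V h - T||^2 + lam ||h||^2; by scale
    similarity the same h is then applied to grid-filter features to estimate
    the unknown grid-scale subgrid stresses.
    """
    A = V.T @ V + lam * np.eye(V.shape[1])
    return np.linalg.solve(A, V.T @ T)

# synthetic check: the fit recovers a known linear stress-state relation
rng = np.random.default_rng(1)
V = rng.normal(size=(200, 5))
h_true = rng.normal(size=(5, 6))
T = V @ h_true
h = fit_local_closure(V, T, lam=1e-8)
```

The regularization term is what keeps the local inverse problem well-posed when the feature matrix is nearly rank-deficient, which is the usual situation for highly correlated resolved-scale quantities.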

  9. Large eddy simulation of cavitating flows

    NASA Astrophysics Data System (ADS)

    Gnanaskandan, Aswin; Mahesh, Krishnan

    2014-11-01

    Large eddy simulation on unstructured grids is used to study hydrodynamic cavitation. The multiphase medium is represented using a homogeneous equilibrium model that assumes thermal equilibrium between the liquid and the vapor phase. Surface tension effects are ignored and the governing equations are the compressible Navier Stokes equations for the liquid/vapor mixture along with a transport equation for the vapor mass fraction. A characteristic-based filtering scheme is developed to handle shocks and material discontinuities in non-ideal gases and mixtures. A TVD filter is applied as a corrector step in a predictor-corrector approach with the predictor scheme being non-dissipative and symmetric. The method is validated for canonical one dimensional flows and leading edge cavitation over a hydrofoil, and applied to study sheet to cloud cavitation over a wedge. This work is supported by the Office of Naval Research.

  10. Large eddy simulation applications in gas turbines.

    PubMed

    Menzies, Kevin

    2009-07-28

    The gas turbine presents significant challenges to any computational fluid dynamics technique. The combination of a wide range of flow phenomena with complex geometry is difficult to model in the context of Reynolds-averaged Navier-Stokes (RANS) solvers. We review the potential for large eddy simulation (LES) in modelling the flow in the different components of the gas turbine during a practical engineering design cycle. We show that while LES has demonstrated considerable promise for reliable prediction of many flows in the engine that are difficult for RANS, it is not a panacea, and considerable application challenges remain. However, for many flows, especially those dominated by shear-layer mixing such as in combustion chambers and exhausts, LES has demonstrated a clear superiority over RANS for moderately complex geometries, although at significantly higher cost, which will remain an issue in making the calculations relevant within the design cycle. PMID:19531505

  11. SUSY’s Ladder: Reframing sequestering at Large Volume

    DOE PAGES

    Reece, Matthew; Xue, Wei

    2016-04-07

    Theories with approximate no-scale structure, such as the Large Volume Scenario, have a distinctive hierarchy of multiple mass scales in between TeV gaugino masses and the Planck scale, which we call SUSY's Ladder. This is a particular realization of Split Supersymmetry in which the same small parameter suppresses gaugino masses relative to scalar soft masses, scalar soft masses relative to the gravitino mass, and the UV cutoff or string scale relative to the Planck scale. This scenario has many phenomenologically interesting properties, and can avoid dangers including the gravitino problem, flavor problems, and the moduli-induced LSP problem that plague other supersymmetric theories. We study SUSY's Ladder using a superspace formalism that makes the mysterious cancellations in previous computations manifest. This opens the possibility of a consistent effective field theory understanding of the phenomenology of these scenarios, based on power-counting in the small ratio of string to Planck scales. We also show that four-dimensional theories with approximate no-scale structure enforced by a single volume modulus arise only from two special higher-dimensional theories: five-dimensional supergravity and ten-dimensional type IIB supergravity. As a result, this gives a phenomenological argument in favor of ten-dimensional ultraviolet physics which is different from standard arguments based on the consistency of superstring theory.

  12. Large eddy simulation of trailing edge noise

    NASA Astrophysics Data System (ADS)

    Keller, Jacob; Nitzkorski, Zane; Mahesh, Krishnan

    2015-11-01

    Noise generation is an important engineering constraint on many marine vehicles. A significant portion of the noise comes from propellers and rotors, specifically due to flow interactions at the trailing edge. Large eddy simulation is used to investigate the noise produced by a turbulent 45-degree beveled trailing edge and a NACA 0012 airfoil. A porous-surface Ffowcs Williams and Hawkings acoustic analogy is combined with a dynamic endcapping method to compute the sound. This methodology allows the impact of incident flow noise versus the total noise to be assessed. LES results for the 45-degree beveled trailing edge are compared to experiment at M = 0.1 and Re_c = 1.9 x 10^6. The effect of boundary layer thickness on sound production is investigated by computing with both the experimental boundary layer thickness and a thinner boundary layer. Direct numerical simulation results for the NACA 0012 are compared to available data at M = 0.4 and Re_c = 5.0 x 10^4 for both the hydrodynamic field and the acoustic field. Sound intensities and directivities are investigated and compared. Finally, some of the physical mechanisms of far-field noise generation, common to the two configurations, are discussed. Supported by the Office of Naval Research.

  13. Large eddy simulation of turbulent cavitating flows

    NASA Astrophysics Data System (ADS)

    Gnanaskandan, A.; Mahesh, K.

    2015-12-01

    Large Eddy Simulation is employed to study two turbulent cavitating flows: over a cylinder and over a wedge. A homogeneous mixture model is used to treat the mixture of water and water vapor as a compressible fluid. The governing equations are solved using a novel predictor-corrector method. The subgrid terms are modeled using the dynamic Smagorinsky model. Cavitating flow over a cylinder at Reynolds number (Re) = 3900 and cavitation number (σ) = 1.0 is simulated, and the wake characteristics are compared to the single-phase results at the same Reynolds number. It is observed that cavitation suppresses turbulence in the near wake and delays three-dimensional breakdown of the vortices. Next, cavitating flow over a wedge at Re = 200,000 and σ = 2.0 is presented. The mean void fraction profiles obtained are compared to experiment and good agreement is obtained. Cavity auto-oscillation is observed, where the sheet cavity periodically breaks up into a cloud cavity. The results suggest LES as an attractive approach for predicting turbulent cavitating flows.
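
A defining property of homogeneous-mixture modelling, and the reason such flows are treated as compressible, is the dramatic drop in mixture sound speed at intermediate void fraction; this is a standard consequence (the Wood formula) rather than the paper's exact formulation, and the property values below are nominal room-temperature numbers for illustration only:

```python
import math

def mixture_properties(alpha, rho_l=1000.0, rho_v=0.023, c_l=1450.0, c_v=430.0):
    """Homogeneous-mixture density and Wood speed of sound for water/vapor.

    alpha : vapor void fraction in [0, 1].  The mixture density is the
    volume-weighted average, and the compressibilities add in parallel:
    1/(rho c^2) = alpha/(rho_v c_v^2) + (1-alpha)/(rho_l c_l^2).
    """
    rho = alpha * rho_v + (1.0 - alpha) * rho_l
    inv_rc2 = alpha / (rho_v * c_v ** 2) + (1.0 - alpha) / (rho_l * c_l ** 2)
    c = math.sqrt(1.0 / (rho * inv_rc2))
    return rho, c

rho_half, c_half = mixture_properties(0.5)   # sound speed collapses to a few m/s
```

Because the mixture sound speed falls far below that of either pure phase, even low-speed cavitating flow is locally supersonic, which is why shock-capturing numerics are needed in these simulations.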

  14. High density three-dimensional localization microscopy across large volumes

    PubMed Central

    Legant, Wesley R.; Shao, Lin; Grimm, Jonathan B.; Brown, Timothy A.; Milkie, Daniel E.; Avants, Brian B.; Lavis, Luke D.; Betzig, Eric

    2016-01-01

    Extending three-dimensional (3D) single-molecule localization microscopy away from the coverslip and into thicker specimens will greatly broaden its biological utility. However, localizing molecules in 3D with high precision in such samples, while simultaneously achieving the extreme labeling densities required for high resolution of densely crowded structures, is challenging due to the limitations both of conventional imaging modalities and of conventional labeling techniques. Here, we combine lattice light sheet microscopy with newly developed, freely diffusing, cell-permeable chemical probes with targeted affinity towards either DNA, intracellular membranes, or the plasma membrane. We use this combination to perform high localization precision, ultra-high labeling density, multicolor localization microscopy in samples up to 20 microns thick, including dividing cells and the neuromast organ of a zebrafish embryo. We also demonstrate super-resolution correlative imaging with protein-specific photoactivatable fluorophores, providing a mutually compatible, single-platform alternative to correlative light-electron microscopy over large volumes. PMID:26950745

  15. Multisystem organ failure after large volume injection of castor oil.

    PubMed

    Smith, Silas W; Graber, Nathan M; Johnson, Rudolph C; Barr, John R; Hoffman, Robert S; Nelson, Lewis S

    2009-01-01

    We report a case of multisystem organ failure after large volume subcutaneous injection of castor oil for cosmetic enhancement. An unlicensed practitioner injected 500 mL of castor oil bilaterally to the hips and buttocks of a 28-year-old male to female transsexual. Immediate local pain and erythema were followed by abdominal and chest pain, emesis, headache, hematuria, jaundice, and tinnitus. She presented to an emergency department 12 hours postinjection. Persistently hemolyzed blood samples complicated preliminary laboratory analysis. She rapidly deteriorated despite treatment and developed fever, tachycardia, hemolysis, thrombocytopenia, hepatitis, respiratory distress, and anuric renal failure. An infectious diseases evaluation was negative. After intensive supportive care, including mechanical ventilation and hemodialysis, she was discharged 11 days later, requiring dialysis for an additional 1.5 months. Castor oil absorption was inferred from recovery of the Ricinus communis biomarker, ricinine, in the patient's urine (41 ng/mL). Clinicians should anticipate multiple complications after unapproved methods of cosmetic enhancement.

  16. Large space telescope, phase A. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The Phase A study of the Large Space Telescope (LST) is reported. The study defines an LST concept based on the broad mission guidelines provided by the Office of Space Science (OSS), the scientific requirements developed by OSS with the scientific community, and an understanding of long range NASA planning current at the time the study was performed. The LST is an unmanned astronomical observatory facility, consisting of an optical telescope assembly (OTA), scientific instrument package (SIP), and a support systems module (SSM). The report consists of five volumes. The report describes the constraints and trade off analyses that were performed to arrive at a reference design for each system and for the overall LST configuration. A low cost design approach was followed in the Phase A study. This resulted in the use of standard spacecraft hardware, the provision for maintenance at the black box level, growth potential in systems designs, and the sharing of shuttle maintenance flights with other payloads.

  17. Large volume water sprays for dispersing warm fogs

    NASA Astrophysics Data System (ADS)

    Keller, V. W.; Anderson, B. J.; Burns, R. A.; Lala, G. G.; Meyer, M. B.

    A new method for dispersing warm fogs, which impede visibility and alter schedules, is described. The method uses large-volume recycled water sprays to create curtains of falling drops through which the fog is processed by the ambient wind and spray-induced air flow; the fog droplets are removed by coalescence/rainout. The efficiency of this fog-droplet removal process depends on the size spectra of the spray drops, and the optimum spray drop diameter is calculated to be between 0.3 and 1.0 mm. Water spray tests were conducted in order to determine the drop size spectra and temperature response of sprays produced by commercially available fire-fighting nozzles, and nozzle array tests were used to study air flow patterns and the thermal properties of the overall system. The initial test data show that the fog-dispersal procedure is effective.

  18. Striped Bass, Morone saxatilis, egg incubation in large volume jars

    USGS Publications Warehouse

    Harper, C.J.; Wrege, B.M.; Isely, J.J.

    2010-01-01

    The standard McDonald jar was compared with a large-volume jar for striped bass, Morone saxatilis, egg incubation. The McDonald jar measured 16 cm in diameter by 45 cm in height and had a volume of 6 L. The experimental jar measured 0.4 m in diameter by 1.3 m in height and had a volume of 200 L. The hypothesis is that there is no difference in percent survival of fry hatched in experimental jars compared with McDonald jars. Striped bass brood fish were collected from the Coosa River and spawned using the dry-spawn method of fertilization. Four McDonald jars were stocked with approximately 150 g of eggs each. Post-hatch survival was estimated at 48, 96, and 144 h. Stocking rates resulted in an average egg loading rate (±1 SE) in McDonald jars of 21.9 ± 0.03 eggs/mL and in experimental jars of 10.9 ± 0.57 eggs/mL. The major finding of this study was that average fry survival was 37.3 ± 4.49% for McDonald jars and 34.2 ± 3.80% for experimental jars. Although survival in experimental jars was slightly less than in McDonald jars, the effect of container volume on survival to 48 h (F = 6.57; df = 1, 5; P > 0.05), 96 h (F = 0.02; df = 1, 4; P > 0.89), and 144 h (F = 3.50; df = 1, 4; P > 0.13) was not statistically significant. Mean survival between replicates ranged from 14.7 to 60.1% in McDonald jars and from 10.1 to 54.4% in experimental jars. No effect of initial stocking rate on survival (t = 0.06; df = 10; P > 0.95) was detected. Experimental jars allowed for incubation of a greater number of eggs in less than half the floor space of McDonald jars. As hatchery production is often limited by space or water supply, experimental jars offer an alternative to extending spawning activities, thereby reducing labor and operations cost. As survival was similar to that in McDonald jars, the experimental jar is suitable for striped bass egg incubation. © Copyright by the World Aquaculture Society 2010.

  19. Simulation of large acceptance LINAC for muons

    SciTech Connect

    Miyadera, H; Kurennoy, S; Jason, A J

    2010-01-01

    There has been a recent need for muon accelerators not only for future Neutrino Factories and Muon Colliders but also for other applications in industry and medical use. We carried out simulations of a large-acceptance muon linac based on a new concept, 'mixed buncher/acceleration'. The linac can accept pions/muons from a production target with large acceptance and accelerate muons without any beam cooling, which makes the initial section of the muon-linac system very compact. The linac has a high impact on the Neutrino Factory and Muon Collider (NF/MC) scenario, since the 300-m injector section can be replaced by a muon linac of only 10-m length. The current design of the linac consists of the following components: an independent 805-MHz cavity structure with a 6- or 8-cm-radius aperture window; injection of a broad range of pion/muon energies, 10-100 MeV, and acceleration to 150-200 MeV. Further acceleration of the muon beam is relatively easy since the beam is already bunched.

  20. Sensitivity technologies for large scale simulation.

    SciTech Connect

    Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias; Wilcox, Lucas C.; Hill, Judith C.; Ghattas, Omar; Berggren, Martin Olof; Akcelik, Volkan; Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard

    2005-01-01

    Sensitivity analysis is critically important to numerous analysis algorithms, including large scale optimization, uncertainty quantification, reduced order modeling, and error estimation. Our research focused on developing tools, algorithms, and standard interfaces to facilitate the implementation of sensitivity-type analysis into existing code; equally important, the work focused on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time domain decomposition algorithms, and two-level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint-based a posteriori error estimation. A mathematical overview is provided of direct sensitivities and adjoint methods for both steady state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source inversion problem for steady state internal flows subject to convection diffusion. Real time performance is achieved using a novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct initial conditions of a contamination event in an external flow. We demonstrate an adjoint-based transient solution. In addition, we investigated time domain decomposition algorithms in an attempt to improve the efficiency of transient simulations. Because derivative calculations are at the root of sensitivity calculations, we have developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations.
The hybrid automatic differentiation method was applied to a first
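
The direct (forward) sensitivity idea behind the abstract can be illustrated with forward-mode automatic differentiation via dual numbers, which propagates d(output)/d(parameter) alongside the value. This is a toy sketch, not the report's hybrid AD tooling; the function `f` is an invented example response.

```python
class Dual:
    """Minimal dual number: carries a value and its derivative."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # Sum rule: (u + v)' = u' + v'
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # Product rule: (u v)' = u' v + u v'
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def f(x):
    # Example model response: f(x) = 3 x^2 + 2 x
    return 3 * x * x + 2 * x

x = Dual(2.0, 1.0)   # seed the derivative dx/dx = 1
y = f(x)             # y.val = f(2) = 16.0, y.der = f'(2) = 14.0
```

Adjoint (reverse) methods compute the same derivatives but scale better when there are many parameters and few outputs, which is why the report treats both.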

  1. Large eddy simulations of laminar separation bubble

    NASA Astrophysics Data System (ADS)

    Cadieux, Francois

    The flow over blades and airfoils at moderate angles of attack and Reynolds numbers ranging from ten thousand to a few hundred thousand undergoes separation due to the adverse pressure gradient generated by surface curvature. In many cases, the separated shear layer then transitions to turbulence and reattaches, closing off a recirculation region -- the laminar separation bubble. To avoid body-fitted mesh generation problems and numerical issues, an equivalent problem for flow over a flat plate is formulated by imposing boundary conditions that lead to a pressure distribution and Reynolds number that are similar to those on airfoils. Spalart & Strelets (2000) tested a number of Reynolds-averaged Navier-Stokes (RANS) turbulence models for a laminar separation bubble flow over a flat plate. Although results with the Spalart-Allmaras turbulence model were encouraging, none of the turbulence models tested reliably recovered time-averaged direct numerical simulation (DNS) results. The purpose of this work is to assess whether large eddy simulation (LES) can more accurately and reliably recover DNS results using drastically reduced resolution -- on the order of 1% of DNS resolution, which is commonly achievable for LES of turbulent channel flows. LES of a laminar separation bubble flow over a flat plate are performed using a compressible sixth-order finite-difference code and two incompressible pseudo-spectral Navier-Stokes solvers at resolutions corresponding to approximately 3% and 1% of the chosen DNS benchmark by Spalart & Strelets (2000). The finite-difference solver is found to be dissipative due to the use of a stability-enhancing filter. Its numerical dissipation is quantified and found to be comparable to the average eddy viscosity of the dynamic Smagorinsky model, making it difficult to separate the effects of filtering versus those of explicit subgrid-scale modeling. The negligible numerical dissipation of the pseudo-spectral solvers allows an unambiguous

  2. Volume visualization of multiple alignment of large genomicDNA

    SciTech Connect

    Shah, Nameeta; Dillard, Scott E.; Weber, Gunther H.; Hamann, Bernd

    2005-07-25

    Genomes of hundreds of species have been sequenced to date, and many more are being sequenced. As more and more sequence data sets become available, and as the challenge of comparing these massive 'billion-basepair' DNA sequences becomes substantial, so does the need for more powerful tools supporting the exploration of these data sets. Similarity score data used to compare aligned DNA sequences is inherently one-dimensional. One-dimensional (1D) representations of these data sets do not effectively utilize screen real estate. As a result, tools using 1D representations are incapable of providing an informative overview of extremely large data sets. We present a technique to arrange 1D data in 3D space to allow us to apply state-of-the-art interactive volume visualization techniques for data exploration. We demonstrate our technique using multi-million-basepair-long aligned DNA sequence data and compare it with traditional 1D line plots. The results show that our technique is superior in providing an overview of entire data sets. Our technique, coupled with 1D line plots, results in effective multi-resolution visualization of very large aligned sequence data sets.
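
The core idea, packing a 1D track of similarity scores into a 3D grid so a volume renderer can display it, can be sketched with a plain row-major reshape. This is a hedged illustration; the paper's actual space-filling layout may differ.

```python
def reshape_1d_to_3d(scores, nx, ny, nz, fill=0.0):
    """Pack a 1D sequence into an nx x ny x nz nested-list volume,
    padding unused trailing cells with `fill`."""
    padded = list(scores) + [fill] * (nx * ny * nz - len(scores))
    return [[[padded[(i * ny + j) * nz + k] for k in range(nz)]
             for j in range(ny)]
            for i in range(nx)]

# Four scores packed into a 2x2x2 volume; unused cells are padded with 0.0.
vol = reshape_1d_to_3d([0.1, 0.9, 0.5, 0.7], 2, 2, 2)
```

With real data, `vol` would be handed to a volume-rendering pipeline; the reshape is what lets a billion-element 1D track use all three screen dimensions instead of one.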

  3. Large Eddy Simulation of Powered Fontan Hemodynamics

    PubMed Central

    Delorme, Y.; Anupindi, K.; Kerlo, A.E.; Shetty, D.; Rodefeld, M.; Chen, J.; Frankel, S.

    2012-01-01

    Children born with univentricular heart disease typically must undergo three open heart surgeries within the first 2–3 years of life to eventually establish the Fontan circulation. In that case the single working ventricle pumps oxygenated blood to the body and blood returns to the lungs flowing passively through the Total Cavopulmonary Connection (TCPC) rather than being actively pumped by a subpulmonary ventricle. The TCPC is a direct surgical connection between the superior and inferior vena cava and the left and right pulmonary arteries. We have postulated that a mechanical pump inserted into this circulation providing a 3–5 mmHg pressure augmentation will reestablish bi-ventricular physiology, serving as a bridge-to-recovery, bridge-to-transplant, or destination therapy as a “biventricular Fontan” circulation. The Viscous Impeller Pump (VIP) has been proposed by our group as such an assist device. It is situated in the center of the 4-way TCPC intersection and spins, pulling blood from the venae cavae and pushing it into the pulmonary arteries. We hypothesized that Large Eddy Simulation (LES) using high-order numerical methods is needed to capture unsteady powered and unpowered Fontan hemodynamics. Inclusion of a mechanical pump into the CFD further complicates matters due to the need to account for rotating machinery. In this study, we focus on predictions from an in-house high-order LES code (WenoHemo™) for unpowered and VIP-powered idealized TCPC hemodynamics with quantitative comparisons to Stereoscopic Particle Imaging Velocimetry (SPIV) measurements. Results are presented for both instantaneous flow structures and statistical data. Simulations show good qualitative and quantitative agreement with measured data. PMID:23177085

  4. Large-eddy simulations with a dynamic explicit vegetation model

    NASA Astrophysics Data System (ADS)

    Bohrer, G.; Maurer, K.; Chatziefstratiou, E.; Medvigy, D.

    2014-12-01

    We coupled the Regional Atmospheric Modeling System (RAMS)-based Forest Large-Eddy Simulation (RAFLES) and a modified version of the Ecosystem Demography model version 2 (ED2) to form a dynamic, high resolution, physiologically driven large eddy simulation. RAFLES represents both drag and volume restriction by the canopy over an explicit 3-D domain. We conducted a sensitivity analysis of uplift and circulation patterns at the front and back of a rectangular barrier to the representation of the canopy volume. We then used this model to perform a virtual experiment using combinations of realistic heterogeneous canopies and virtual homogenous canopies combined with heterogeneous and homogenous patterns of soil moisture to test the effects of the spatial scaling of soil moisture on the fluxes of momentum, heat, and water in heterogeneous environments at the tree-crown scale. Further simulations were performed to test the combined effects of canopy structure, soil moisture heterogeneity, and soil water availability. We found flux dynamics of momentum, heat, and water to be significantly influenced by canopy structure, soil moisture heterogeneity, and soil water availability. During non-plant-limiting soil-water conditions, we found canopy structure to be the primary driver of tree-crown scale fluxes of momentum, heat, and water, specifically through modification of the ejection sweep dynamics. However, as soil water conditions became limiting for latent heat flux from plants, tree-crown scale fluxes of momentum and heat became influenced by the spatial pattern of soil moisture, whereas soil moisture became a significant driver of tree-crown scale fluxes of water along with canopy structure.

  5. Large scale molecular simulations of nanotoxicity.

    PubMed

    Jimenez-Cruz, Camilo A; Kang, Seung-gu; Zhou, Ruhong

    2014-01-01

    The widespread use of nanomaterials in biomedical applications has been accompanied by an increasing interest in understanding their interactions with tissues, cells, and biomolecules, and in particular, how they might affect the integrity of cell membranes and proteins. In this mini-review, we present a summary of some of the recent studies on this important subject, especially from the point of view of large scale molecular simulations. The carbon-based nanomaterials and noble metal nanoparticles are the main focus, with additional discussions on quantum dots and other nanoparticles as well. The driving forces for adsorption of fullerenes, carbon nanotubes, and graphene nanosheets onto proteins or cell membranes are found to be mainly hydrophobic interactions and the so-called π-π stacking (between aromatic rings), while for the noble metal nanoparticles the long-range electrostatic interactions play a bigger role. More interestingly, there is also growing evidence that nanotoxicity can have implications for the de novo design of nanomedicine. For example, the endohedral metallofullerenol Gd@C₈₂(OH)₂₂ is shown to inhibit tumor growth and metastasis by inhibiting the enzyme MMP-9, and graphene has been shown to disrupt bacterial cell membranes by insertion/cutting as well as destructive extraction of lipid molecules. These recent findings have provided a better understanding of nanotoxicity at the molecular level and also suggested therapeutic potential by using the cytotoxicity of nanoparticles against cancer or bacteria cells.

  6. Monte Carlo Simulations for Dosimetry in Prostate Radiotherapy with Different Intravesical Volumes and Planning Target Volume Margins

    PubMed Central

    Lv, Wei; Yu, Dong; He, Hengda; Liu, Qian

    2016-01-01

    In prostate radiotherapy, the influence of bladder volume variation on the dose absorbed by the target volume and organs at risk is significant and difficult to predict. In addition, the resolution of a typical medical image is insufficient for visualizing the bladder wall, which makes it more difficult to precisely evaluate the dose to the bladder wall. This simulation study aimed to quantitatively investigate the relationship between the dose received by organs at risk and the intravesical volume in prostate radiotherapy. The high-resolution Visible Chinese Human phantom and the finite element method were used to construct 10 pelvic models with specific intravesical volumes ranging from 100 ml to 700 ml to represent bladders of patients with different bladder filling capacities during radiotherapy. This series of models was utilized in six-field coplanar 3D conformal radiotherapy simulations with different planning target volume (PTV) margins. Each organ’s absorbed dose was calculated using the Monte Carlo method. The obtained bladder wall displacements during bladder filling were consistent with reported clinical measurements. The radiotherapy simulation revealed a linear relationship between the dose to non-targeted organs and the intravesical volume and indicated that a 10-mm PTV margin for a large bladder and a 5-mm PTV margin for a small bladder reduce the effective dose to the bladder wall to similar degrees. However, larger bladders were associated with evident protection of the intestines. Detailed dosimetry results can be used by radiation oncologists to create more accurate, individual water preload protocols according to the patient’s anatomy and bladder capacity. PMID:27441944

  7. Monte Carlo Simulations for Dosimetry in Prostate Radiotherapy with Different Intravesical Volumes and Planning Target Volume Margins.

    PubMed

    Lv, Wei; Yu, Dong; He, Hengda; Liu, Qian

    2016-01-01

    In prostate radiotherapy, the influence of bladder volume variation on the dose absorbed by the target volume and organs at risk is significant and difficult to predict. In addition, the resolution of a typical medical image is insufficient for visualizing the bladder wall, which makes it more difficult to precisely evaluate the dose to the bladder wall. This simulation study aimed to quantitatively investigate the relationship between the dose received by organs at risk and the intravesical volume in prostate radiotherapy. The high-resolution Visible Chinese Human phantom and the finite element method were used to construct 10 pelvic models with specific intravesical volumes ranging from 100 ml to 700 ml to represent bladders of patients with different bladder filling capacities during radiotherapy. This series of models was utilized in six-field coplanar 3D conformal radiotherapy simulations with different planning target volume (PTV) margins. Each organ's absorbed dose was calculated using the Monte Carlo method. The obtained bladder wall displacements during bladder filling were consistent with reported clinical measurements. The radiotherapy simulation revealed a linear relationship between the dose to non-targeted organs and the intravesical volume and indicated that a 10-mm PTV margin for a large bladder and a 5-mm PTV margin for a small bladder reduce the effective dose to the bladder wall to similar degrees. However, larger bladders were associated with evident protection of the intestines. Detailed dosimetry results can be used by radiation oncologists to create more accurate, individual water preload protocols according to the patient's anatomy and bladder capacity.
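
The linear dose-versus-intravesical-volume relationship reported above can be summarized with an ordinary least-squares line. In this hedged sketch only the 100-700 ml volume range comes from the abstract; the dose values are invented for illustration.

```python
def linfit(xs, ys):
    """Return (slope, intercept) of the least-squares line through the data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

volumes_ml = [100, 200, 300, 400, 500, 600, 700]   # bladder filling levels
dose_au = [10.0, 9.1, 8.2, 7.3, 6.4, 5.5, 4.6]     # illustrative organ dose
slope, intercept = linfit(volumes_ml, dose_au)
```

A fit like this is what would let an oncologist interpolate an individual water-preload target from a patient's planned intravesical volume.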

  8. Molecular Simulation of Cavity Size Distributions and Diffusivity in Ultrahigh Free Volume Glassy Polymers

    NASA Astrophysics Data System (ADS)

    Wang, Xiao-Yan; Sanchez, Isaac C.; Freeman, Benny D.

    2003-03-01

    Poly(1-trimethylsilyl-1-propyne) (PTMSP) and a random copolymer of tetrafluoroethylene and 2,2-bis(trifluoromethyl)-4,5-difluoro-1,3-dioxole (TFE/BDD), two very permeable polymers, have very similar and large fractional free volumes but very different permeabilities. Cavity size (free volume) distributions obtained by Monte Carlo methods show that PTMSP has larger cavities than TFE/BDD. This explains the observation that PTMSP is more permeable than TFE/BDD by an order of magnitude. Our simulation results are also qualitatively consistent with free volume distributions determined by Positron Annihilation Lifetime (PAL) spectroscopy. The diffusion coefficient of CO2 in these two high free volume polymers was also calculated through molecular dynamics. The diffusion coefficient of CO2 in PTMSP is much higher than in TFE/BDD. Our simulated diffusion data are in good agreement with the experimental data.
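
The Monte Carlo cavity-probing idea can be sketched by inserting random test points into a box of fixed spheres and counting the fraction that land outside every sphere. The geometry below is a toy stand-in, not the paper's atomistic polymer models.

```python
import random

def free_volume_fraction(spheres, box, n_trials=20000, seed=1):
    """spheres: list of (x, y, z, r) tuples; box: cubic edge length.
    Returns the estimated unoccupied (free) volume fraction."""
    rng = random.Random(seed)
    misses = 0
    for _ in range(n_trials):
        px, py, pz = (rng.uniform(0.0, box) for _ in range(3))
        if all((px - x) ** 2 + (py - y) ** 2 + (pz - z) ** 2 > r * r
               for x, y, z, r in spheres):
            misses += 1
    return misses / n_trials

# One unit-radius sphere in a 4x4x4 box occupies ~6.5% of the volume,
# so the free fraction should come out near 0.93.
frac = free_volume_fraction([(2.0, 2.0, 2.0, 1.0)], 4.0)
```

Binning the probe points by the radius of the largest empty sphere around them, rather than just counting hits, is what yields a cavity size distribution of the kind the abstract compares across polymers.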

  9. Parallel runway requirement analysis study. Volume 2: Simulation manual

    NASA Technical Reports Server (NTRS)

    Ebrahimi, Yaghoob S.; Chun, Ken S.

    1993-01-01

    This document is a user manual for operating the PLAND_BLUNDER (PLB) simulation program. This simulation is based on two aircraft approaching parallel runways independently and using parallel Instrument Landing System (ILS) equipment during Instrument Meteorological Conditions (IMC). If an aircraft should deviate from its assigned localizer course toward the opposite runway, this constitutes a blunder which could endanger the aircraft on the adjacent path. The worst case scenario would be if the blundering aircraft were unable to recover and continue toward the adjacent runway. PLAND_BLUNDER is a Monte Carlo-type simulation which employs the events and aircraft positioning during such a blunder situation. The model simulates two aircraft performing parallel ILS approaches using Instrument Flight Rules (IFR) or visual procedures. PLB uses a simple movement model and control law in three dimensions (X, Y, Z). The parameters of the simulation inputs and outputs are defined in this document along with a sample of the statistical analysis. This document is the second volume of a two volume set. Volume 1 is a description of the application of the PLB to the analysis of close parallel runway operations.
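
A Monte-Carlo blunder trial of the kind described can be sketched as follows: sample a random heading deviation and recovery delay, and flag a conflict when the lateral excursion reaches the adjacent approach path. All numbers here (speed, angles, separation) are illustrative assumptions, not PLAND_BLUNDER's actual control law or parameters.

```python
import math
import random

def conflict_probability(separation_m=500.0, speed_mps=70.0,
                         n_trials=10000, seed=7):
    """Estimate the fraction of sampled blunders whose lateral excursion
    reaches the adjacent runway's approach path before recovery."""
    rng = random.Random(seed)
    conflicts = 0
    for _ in range(n_trials):
        angle_deg = rng.uniform(5.0, 30.0)   # blunder heading change
        delay_s = rng.uniform(5.0, 20.0)     # time before recovery begins
        lateral_m = speed_mps * delay_s * math.sin(math.radians(angle_deg))
        if lateral_m >= separation_m:
            conflicts += 1
    return conflicts / n_trials

p = conflict_probability()
```

The real simulation tracks full 3D (X, Y, Z) trajectories with a movement model and control law; this sketch only shows the sampling-and-counting structure of the Monte Carlo estimate.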

  10. Computer simulation of preflight blood volume reduction as a countermeasure to fluid shifts in space flight

    NASA Technical Reports Server (NTRS)

    Simanonok, K. E.; Srinivasan, R.; Charles, J. B.

    1992-01-01

    Fluid shifts in weightlessness may cause a central volume expansion, activating reflexes to reduce the blood volume. Computer simulation was used to test the hypothesis that preadaptation of the blood volume prior to exposure to weightlessness could counteract the central volume expansion due to fluid shifts and thereby attenuate the circulatory and renal responses that result in large losses of fluid from body water compartments. The Guyton Model of Fluid, Electrolyte, and Circulatory Regulation was modified to simulate the six-degree head-down tilt that is frequently used as an experimental analog of weightlessness in bedrest studies. Simulation results show that preadaptation of the blood volume by a procedure resembling a blood donation immediately before head-down bedrest is beneficial in damping the physiologic responses to fluid shifts and reducing body fluid losses. After ten hours of head-down tilt, blood volume after preadaptation is higher than control for 20 to 30 days of bedrest. Preadaptation also produces potentially beneficial higher extracellular volume and total body water for 20 to 30 days of bedrest.

  11. Preliminary Results from the Large Volume Torsion (LVT) Deformation Apparatus

    NASA Astrophysics Data System (ADS)

    Cross, A. J.; Couvy, H.; Skemer, P. A.

    2015-12-01

    We present preliminary results from the Large Volume Torsion (LVT) apparatus, currently under development in the rock deformation lab at Washington University in St. Louis. The LVT is designed to deform disk-shaped samples (~4 mm in diameter) in torsion at lower-crustal to upper-mantle pressure and temperature conditions. Conceptually, the LVT complements and is similar in design to the Rotational Drickamer Apparatus (RDA) (Yamazaki & Karato, 2001), which deforms smaller samples at higher pressures. As part of our recent development efforts, benchmarking experiments were performed on Carrara marble. Samples were deformed in torsion at a strain rate of ~5 × 10⁻⁵ s⁻¹ to moderate shear strains (γ ≤ 10) under lower crustal conditions (800°C, 2 GPa confining pressure). Microstructural observations from optical microscopy and electron backscatter diffraction (EBSD) show evidence for relict grain elongation and alignment; an increase in calcite twin density; and grain size reduction concurrent with recrystallized grain nucleation. Microstructural observations are comparable to data obtained from previous studies at lower pressure (e.g. Barnhoorn et al., 2004), confirming that the LVT provides reliable microstructural results.
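
The torsion-test kinematics behind the quoted shear strains can be sketched simply: for a disk twisted about its axis, the shear strain at radius r for twist angle θ and sample thickness h is γ = rθ/h. The sample dimensions below are illustrative assumptions, roughly matching the ~4 mm disks described.

```python
def shear_strain(radius_m, twist_rad, thickness_m):
    """Shear strain gamma = r * theta / h at radius r of a twisted disk."""
    return radius_m * twist_rad / thickness_m

# A 2 mm outer-radius, 1 mm thick disk twisted by 5 rad reaches gamma = 10,
# the upper end of the shear strains quoted in the abstract.
gamma = shear_strain(2.0e-3, 5.0, 1.0e-3)
```

Because γ grows linearly with radius, strain markers at different radii of the same disk record a range of strains in a single torsion experiment.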

  12. Monte Carlo simulation of large electron fields.

    PubMed

    Faddegon, Bruce A; Perl, Joseph; Asai, Makoto

    2008-03-01

    Two Monte Carlo systems, EGSnrc and Geant4, the latter with two different 'physics lists,' were used to calculate dose distributions in large electron fields used in radiotherapy. Source and geometry parameters were adjusted to match calculated results to measurement. Both codes were capable of accurately reproducing the measured dose distributions of the six electron beams available on the accelerator. Depth penetration matched the average measured with a diode and parallel-plate chamber to 0.04 cm or better. Calculated depth dose curves agreed to 2% with diode measurements in the build-up region, although for the lower beam energies there was a discrepancy of up to 5% in this region when calculated results are compared to parallel-plate measurements. Dose profiles at the depth of maximum dose matched to 2-3% in the central 25 cm of the field, corresponding to the field size of the largest applicator. A 4% match was obtained outside the central region. The discrepancy observed in the bremsstrahlung tail in published results that used EGS4 is no longer evident. Simulations with the different codes and physics lists used different source energies, incident beam angles, thicknesses of the primary foils, and distance between the primary and secondary foil. The true source and geometry parameters were not known with sufficient accuracy to determine which parameter set, including the energy of the source, was closest to the truth. These results underscore the requirement for experimental benchmarks of depth penetration and electron scatter for beam energies and foils relevant to radiotherapy.

  13. Monte Carlo simulation of large electron fields

    PubMed Central

    Faddegon, Bruce A; Perl, Joseph; Asai, Makoto

    2010-01-01

    Two Monte Carlo systems, EGSnrc and Geant4, the latter with two different “physics lists,” were used to calculate dose distributions in large electron fields used in radiotherapy. Source and geometry parameters were adjusted to match calculated results to measurement. Both codes were capable of accurately reproducing the measured dose distributions of the 6 electron beams available on the accelerator. Depth penetration matched the average measured with a diode and parallel-plate chamber to 0.04 cm or better. Calculated depth dose curves agreed to 2% with diode measurements in the buildup region, although for the lower beam energies there was a discrepancy of up to 5% in this region when calculated results are compared to parallel-plate measurements. Dose profiles at the depth of maximum dose matched to 2-3% in the central 25 cm of the field, corresponding to the field size of the largest applicator. A 4% match was obtained outside the central region. The discrepancy observed in the bremsstrahlung tail in published results that used EGS4 is no longer evident. Simulations with the different codes and physics lists used different source energies, incident beam angles, thicknesses of the primary foils, and distance between the primary and secondary foil. The true source and geometry parameters were not known with sufficient accuracy to determine which parameter set, including the energy of the source, was closest to the truth. These results underscore the requirement for experimental benchmarks of depth penetration and electron scatter for beam energies and foils relevant to radiotherapy. PMID:18296775

  14. Monte Carlo simulation of large electron fields

    NASA Astrophysics Data System (ADS)

    Faddegon, Bruce A.; Perl, Joseph; Asai, Makoto

    2008-03-01

    Two Monte Carlo systems, EGSnrc and Geant4, the latter with two different 'physics lists,' were used to calculate dose distributions in large electron fields used in radiotherapy. Source and geometry parameters were adjusted to match calculated results to measurement. Both codes were capable of accurately reproducing the measured dose distributions of the six electron beams available on the accelerator. Depth penetration matched the average measured with a diode and parallel-plate chamber to 0.04 cm or better. Calculated depth dose curves agreed to 2% with diode measurements in the build-up region, although for the lower beam energies there was a discrepancy of up to 5% in this region when calculated results are compared to parallel-plate measurements. Dose profiles at the depth of maximum dose matched to 2-3% in the central 25 cm of the field, corresponding to the field size of the largest applicator. A 4% match was obtained outside the central region. The discrepancy observed in the bremsstrahlung tail in published results that used EGS4 is no longer evident. Simulations with the different codes and physics lists used different source energies, incident beam angles, thicknesses of the primary foils, and distance between the primary and secondary foil. The true source and geometry parameters were not known with sufficient accuracy to determine which parameter set, including the energy of the source, was closest to the truth. These results underscore the requirement for experimental benchmarks of depth penetration and electron scatter for beam energies and foils relevant to radiotherapy.

  15. Large-volume flux closure during plasmoid-mediated reconnection in coaxial helicity injection

    DOE PAGES

    Ebrahimi, F.; Raman, R.

    2016-03-23

    A large-volume flux closure during transient coaxial helicity injection (CHI) in NSTX-U is demonstrated through resistive magnetohydrodynamics (MHD) simulations. Several major improvements, including the improved positioning of the divertor poloidal field coils, are projected to improve the CHI start-up phase in NSTX-U. Simulations in the NSTX-U configuration with coil currents held constant in time show that with strong flux shaping the injected open field lines (injector flux) rapidly reconnect and form a large volume of closed flux surfaces. This is achieved by driving parallel current in the injector flux coil and oppositely directed currents in the flux shaping coils to form a narrow injector flux footprint and push the injector flux into the vessel. As the helicity and plasma are injected into the device, the oppositely directed field lines in the injector region are forced to reconnect through a local Sweet-Parker type reconnection, or to spontaneously reconnect when the elongated current sheet becomes MHD unstable and forms plasmoids. In these simulations, for the first time, it is found that the closed flux is over 70% of the initial injector flux used to initiate the discharge. Furthermore, these results could work well for the application of transient CHI in devices that employ superconducting coils to generate and sustain the plasma equilibrium.
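
The reconnection scalings invoked above can be sketched numerically: the Sweet-Parker rate falls off as S^(-1/2) with the Lundquist number S, and an elongated sheet is commonly expected to go plasmoid-unstable above a critical S (a value near 10⁴ is often quoted). The threshold here is an illustrative assumption, not a result of the paper.

```python
import math

def sweet_parker_rate(S):
    """Dimensionless Sweet-Parker reconnection rate ~ S^(-1/2)."""
    return 1.0 / math.sqrt(S)

def plasmoid_unstable(S, S_crit=1.0e4):
    """Crude criterion: the current sheet breaks into plasmoids when S > S_crit."""
    return S > S_crit

rate = sweet_parker_rate(1.0e6)   # slow laminar rate at high S
```

This is why plasmoid formation matters in the abstract: at the high Lundquist numbers of hot plasmas, pure Sweet-Parker reconnection would be too slow, while plasmoid-mediated reconnection proceeds much faster.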

  16. Large eddy simulation of soot evolution in an aircraft combustor

    NASA Astrophysics Data System (ADS)

    Mueller, Michael E.; Pitsch, Heinz

    2013-11-01

    An integrated kinetics-based Large Eddy Simulation (LES) approach for soot evolution in turbulent reacting flows is applied to the simulation of a Pratt & Whitney aircraft gas turbine combustor, and the results are analyzed to provide insights into the complex interactions of the hydrodynamics, mixing, chemistry, and soot. The integrated approach includes detailed models for soot, combustion, and the unresolved interactions between soot, chemistry, and turbulence. The soot model is based on the Hybrid Method of Moments and detailed descriptions of soot aggregates and the various physical and chemical processes governing their evolution. The detailed kinetics of jet fuel oxidation and soot precursor formation is described with the Radiation Flamelet/Progress Variable model, which has been modified to account for the removal of soot precursors from the gas-phase. The unclosed filtered quantities in the soot and combustion models, such as source terms, are closed with a novel presumed subfilter PDF approach that accounts for the high subfilter spatial intermittency of soot. For the combustor simulation, the integrated approach is combined with a Lagrangian parcel method for the liquid spray and state-of-the-art unstructured LES technology for complex geometries. Two overall fuel-to-air ratios are simulated to evaluate the ability of the model to make not only absolute predictions but also quantitative predictions of trends. The Pratt & Whitney combustor is a Rich-Quench-Lean combustor in which combustion first occurs in a fuel-rich primary zone characterized by a large recirculation zone. Dilution air is then added downstream of the recirculation zone, and combustion continues in a fuel-lean secondary zone. The simulations show that large quantities of soot are formed in the fuel-rich recirculation zone, and, furthermore, the overall fuel-to-air ratio dictates both the dominant soot growth process and the location of maximum soot volume fraction. At the higher fuel

  17. Simulation of large systems with neural networks

    SciTech Connect

    Paez, T.L.

    1994-09-01

    Artificial neural networks (ANNs) have been shown capable of simulating the behavior of complex, nonlinear, systems, including structural systems. Under certain circumstances, it is desirable to simulate structures that are analyzed with the finite element method. For example, when we perform a probabilistic analysis with the Monte Carlo method, we usually perform numerous (hundreds or thousands of) repetitions of a response simulation with different input and system parameters to estimate the chance of specific response behaviors. In such applications, efficiency in computation of response is critical, and response simulation with ANNs can be valuable. However, finite element analyses of complex systems involve the use of models with tens or hundreds of thousands of degrees of freedom, and ANNs are practically limited to simulations that involve far fewer variables. This paper develops a technique for reducing the amount of information required to characterize the response of a general structure. We show how the reduced information can be used to train a recurrent ANN. Then the trained ANN can be used to simulate the reduced behavior of the original system, and the reduction transformation can be inverted to provide a simulation of the original system. A numerical example is presented.
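
The reduce-train-invert idea described above can be sketched as a linear projection: project high-dimensional response snapshots onto a few fixed basis vectors, work in the reduced space, then invert the (orthonormal) projection to approximate the full response. The basis here is hand-picked for illustration, not learned from finite element data as in the paper, and the ANN step is omitted.

```python
def project(u, basis):
    """Reduced coordinates: inner products with orthonormal basis vectors."""
    return [sum(ui * bi for ui, bi in zip(u, b)) for b in basis]

def reconstruct(coeffs, basis):
    """Invert the reduction: linear combination of the basis vectors."""
    n = len(basis[0])
    return [sum(c * b[i] for c, b in zip(coeffs, basis)) for i in range(n)]

basis = [[1.0, 0.0, 0.0, 0.0],
         [0.0, 1.0, 0.0, 0.0]]          # 4 DOF reduced to 2 DOF
u = [0.3, -0.2, 0.0, 0.0]               # response lying in the basis span
u_hat = reconstruct(project(u, basis), basis)
```

In the paper's workflow the recurrent ANN is trained on the reduced coordinates, so its input size is the number of basis vectors rather than the tens of thousands of finite element degrees of freedom.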

  18. Large-scale mass distribution in the Illustris simulation

    NASA Astrophysics Data System (ADS)

    Haider, M.; Steinhauser, D.; Vogelsberger, M.; Genel, S.; Springel, V.; Torrey, P.; Hernquist, L.

    2016-04-01

Observations at low redshifts thus far fail to account for all of the baryons expected in the Universe according to cosmological constraints. A large fraction of the baryons presumably resides in a thin and warm-hot medium between the galaxies, where they are difficult to observe due to their low densities and high temperatures. Cosmological simulations of structure formation can be used to verify this picture and provide quantitative predictions for the distribution of mass in different large-scale structure components. Here we study the distribution of baryons and dark matter at different epochs using data from the Illustris simulation. We identify regions of different dark matter density with the primary constituents of large-scale structure, allowing us to measure mass and volume of haloes, filaments and voids. At redshift zero, we find that 49 per cent of the dark matter and 23 per cent of the baryons are within haloes more massive than the resolution limit of 2 × 10⁸ M⊙. The filaments of the cosmic web host a further 45 per cent of the dark matter and 46 per cent of the baryons. The remaining 31 per cent of the baryons reside in voids. The majority of these baryons have been transported there through active galactic nuclei feedback. We note that the feedback model of Illustris is too strong for heavy haloes, therefore it is likely that we are overestimating this amount. Categorizing the baryons according to their density and temperature, we find that 17.8 per cent of them are in a condensed state, 21.6 per cent are present as cold, diffuse gas, and 53.9 per cent are found in the state of a warm-hot intergalactic medium.

  19. Floating substructure flexibility of large-volume 10MW offshore wind turbine platforms in dynamic calculations

    NASA Astrophysics Data System (ADS)

    Borg, Michael; Melchior Hansen, Anders; Bredmose, Henrik

    2016-09-01

Designing floating substructures for the next generation of 10MW and larger wind turbines has introduced new challenges in capturing relevant physical effects in dynamic simulation tools. In achieving technically and economically optimal floating substructures, structural flexibility may increase to the extent that it becomes relevant to include it in addition to the standard rigid-body substructure modes, which are typically described through linear radiation-diffraction theory. This paper describes a method for including substructural flexibility in aero-hydro-servo-elastic dynamic simulations for large-volume substructures, including wave-structure interactions, to form the basis for deriving sectional loads and stresses within the substructure. The method is applied to a case study to illustrate the implementation and relevance. It is found that the flexible mode is significantly excited in an extreme event, indicating an increase in predicted substructure internal loads.

  20. Finite volume simulations of dynamos in ellipsoidal planets

    NASA Astrophysics Data System (ADS)

    Ernst-Hullermann, J.; Harder, H.; Hansen, U.

    2013-12-01

So far, numerical simulations have mostly considered buoyancy as the driving mechanism of the dynamo process. However, precession can also drive a dynamo, as first suggested by Bullard in 1949. We investigate the properties of precession-driven dynamos in ellipsoidal planets by the use of a finite volume code. In planets, it is much more effective to drive a precessional flow by the pressure differences induced by the topography of the precessing body than by viscous coupling to the walls. Numerical simulations are the only method offering the possibility to investigate the influence of the topography, since laboratory experiments are normally constrained by the predetermined geometry of the vessel. We discuss how the ellipticity of the planets can be included in our simulations by the use of a non-orthogonal grid. Here, we present some first results and conclude that laminar precession-driven flows can drive kinematic dynamos.

  1. Study of Hydrokinetic Turbine Arrays with Large Eddy Simulation

    NASA Astrophysics Data System (ADS)

    Sale, Danny; Aliseda, Alberto

    2014-11-01

Marine renewable energy is advancing towards commercialization, including electrical power generation from ocean, river, and tidal currents. The focus of this work is to develop numerical simulations capable of predicting the power generation potential of hydrokinetic turbine arrays; this includes analysis of unsteady and averaged flow fields, turbulence statistics, and unsteady loadings on turbine rotors and support structures due to interaction with rotor wakes and ambient turbulence. The governing equations of large-eddy simulation (LES) are solved using a finite-volume method, and the presence of turbine blades is approximated by the actuator-line method, in which hydrodynamic forces are projected onto the flow field as a body force. The actuator-line approach captures helical wake formation, including vortex shedding from individual blades, and the effects of drag and vorticity generation from the rough seabed surface are accounted for by wall models. This LES framework was used to replicate a previous flume experiment consisting of three hydrokinetic turbines tested under various operating conditions and array layouts. Predictions of the power generation, velocity deficit, and turbulence statistics in the wakes are compared between the LES and experimental datasets.
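The actuator-line body-force projection mentioned in the abstract is commonly done with a Gaussian regularization kernel; the abstract does not give the kernel, so the sketch below assumes the standard form η(d) = exp(-(d/ε)²)/(ε³π^(3/2)) and checks that the projected field conserves the actuator force.

```python
import numpy as np

# Gaussian regularization kernel commonly used in actuator-line methods:
# eta(d) = 1/(eps^3 * pi^(3/2)) * exp(-(d/eps)^2), which integrates to 1.
def gaussian_kernel(d, eps):
    return np.exp(-(d / eps) ** 2) / (eps ** 3 * np.pi ** 1.5)

# Uniform grid over [-1, 1]^3 with spacing 0.05 (illustrative resolution).
x = np.linspace(-1.0, 1.0, 41)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
dV = (x[1] - x[0]) ** 3

# A single actuator point on the blade carrying a force of 10 (arbitrary units).
point = np.array([0.0, 0.0, 0.0])
force = 10.0
eps = 0.1  # smearing width, typically tied to grid spacing or blade chord

d = np.sqrt((X - point[0])**2 + (Y - point[1])**2 + (Z - point[2])**2)
body_force = force * gaussian_kernel(d, eps)   # body-force density field

# Integrating the smeared field recovers the original actuator force.
total = body_force.sum() * dV
print(total)
```

In a full actuator-line solver this projection is repeated for every point along each rotating blade line, with the force magnitude obtained from tabulated lift and drag coefficients; the single-point version above only illustrates the conservative smearing step.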

  2. Simulating failures on large-scale systems.

    SciTech Connect

    Desai, N.; Lusk, E.; Buettner, D.; Cherry, A.; Voran, T.; Univ. of Colorado

    2008-09-01

    Developing fault management mechanisms is a difficult task because of the unpredictable nature of failures. In this paper, we present a fault simulation framework for Blue Gene/P systems implemented as a part of the Cobalt resource manager. The primary goal of this framework is to support system software development. We also present a hardware diagnostic system that we have implemented using this framework.

  3. The 1980 Large space systems technology. Volume 2: Base technology

    NASA Technical Reports Server (NTRS)

    Kopriver, F., III (Compiler)

    1981-01-01

    Technology pertinent to large antenna systems, technology related to large space platform systems, and base technology applicable to both antenna and platform systems are discussed. Design studies, structural testing results, and theoretical applications are presented with accompanying validation data. A total systems approach including controls, platforms, and antennas is presented as a cohesive, programmatic plan for large space systems.

  4. Large-Eddy Simulation of Wind-Plant Aerodynamics: Preprint

    SciTech Connect

    Churchfield, M. J.; Lee, S.; Moriarty, P. J.; Martinez, L. A.; Leonardi, S.; Vijayakumar, G.; Brasseur, J. G.

    2012-01-01

    In this work, we present results of a large-eddy simulation of the 48 multi-megawatt turbines composing the Lillgrund wind plant. Turbulent inflow wind is created by performing an atmospheric boundary layer precursor simulation and turbines are modeled using a rotating, variable-speed actuator line representation. The motivation for this work is that few others have done wind plant large-eddy simulations with a substantial number of turbines, and the methods for carrying out the simulations are varied. We wish to draw upon the strengths of the existing simulations and our growing atmospheric large-eddy simulation capability to create a sound methodology for performing this type of simulation. We have used the OpenFOAM CFD toolbox to create our solver.

  5. Performance of large electron energy filter in large volume plasma device

    NASA Astrophysics Data System (ADS)

    Singh, S. K.; Srivastava, P. K.; Awasthi, L. M.; Mattoo, S. K.; Sanyasi, A. K.; Singh, R.; Kaw, P. K.

    2014-03-01

This paper describes an in-house designed large Electron Energy Filter (EEF) utilized in the Large Volume Plasma Device (LVPD) [S. K. Mattoo, V. P. Anita, L. M. Awasthi, and G. Ravi, Rev. Sci. Instrum. 72, 3864 (2001)] to secure objectives of (a) removing the presence of remnant primary ionizing energetic electrons and the non-thermal electrons, (b) introducing a radial gradient in plasma electron temperature without greatly affecting the radial profile of plasma density, and (c) providing a control on the scale length of gradient in electron temperature. A set of 19 independent coils of the EEF makes a variable-aspect-ratio rectangular solenoid producing a magnetic field (Bx) of 100 G along its axis and transverse to the ambient axial field (Bz ˜ 6.2 G) of LVPD, when all its coils are used. Outside the EEF, the magnetic field reduces rapidly to 1 G at a distance of 20 cm from the center of the solenoid on either side of the target and source plasma. The EEF divides the LVPD plasma into three distinct regions of source, EEF and target plasma. We report that the target plasma (ne ˜ 2 × 10¹¹ cm⁻³ and Te ˜ 2 eV) has no detectable energetic electrons and that radial gradients in its electron temperature can be established with scale lengths between 50 and 600 cm by controlling the EEF magnetic field. Our observations reveal that the role of the EEF magnetic field is manifested by the energy dependence of transverse electron transport and by enhanced transport caused by the plasma turbulence in the EEF plasma.

  6. Climate Simulations with an Isentropic Finite Volume Dynamical Core

    SciTech Connect

    Chen, Chih-Chieh; Rasch, Philip J.

    2012-04-15

This paper discusses the impact of changing the vertical coordinate from a hybrid pressure to a hybrid-isentropic coordinate within the finite volume dynamical core of the Community Atmosphere Model (CAM). Results from a 20-year climate simulation using the new model coordinate configuration are compared to control simulations produced by the Eulerian spectral and FV dynamical cores of CAM, which both use a pressure-based (σ-p) coordinate. The same physical parameterization package is employed in all three dynamical cores. The isentropic modeling framework significantly alters the simulated climatology and has several desirable features. The revised model produces a better representation of heat transport processes in the atmosphere, leading to much improved atmospheric temperatures. We show that the isentropic model is very effective in reducing the long-standing cold temperature bias in the upper troposphere and lower stratosphere, a deficiency shared among most climate models. The warmer upper troposphere and stratosphere seen in the isentropic model reduces the global coverage of high clouds, which is in better agreement with observations. The isentropic model also shows improvements in the simulated wintertime mean sea-level pressure field in the northern hemisphere.

  7. High-order filtering for control volume flow simulation

    NASA Astrophysics Data System (ADS)

    de Stefano, G.; Denaro, F. M.; Riccardi, G.

    2001-12-01

A general methodology is presented in order to obtain a hierarchy of high-order filter functions, starting from the standard top-hat filter naturally linked to control volume flow simulations. The goal is to have a new filtered variable better represented in its highly resolved wavenumber components by using a suitable deconvolution. The proposed formulation is applied to the integral momentum equation, that is, the evolution equation for the top-hat filtered variable, by performing a spatial reconstruction based on the approximate inversion of the averaging operator. A theoretical analysis for the Burgers' model equation is presented, demonstrating that the local de-averaging is an effective tool to obtain higher-order accuracy. It is also shown that the subgrid-scale term, to be modeled in the deconvolved balance equation, has a smaller absolute importance in the resolved wavenumber range for increasing deconvolution order. A numerical analysis of the procedure is presented, based on high-order upwind and central flux reconstructions, leading to congruent control volume schemes. Finally, the features of the present high-order conservative formulation are tested in the numerical simulation of a sample turbulent flow: the flow behind a backward-facing step.
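The approximate inversion of the top-hat averaging operator can be illustrated with a van Cittert (truncated-series) deconvolution on a 1D periodic signal; this is a minimal sketch of the general idea, not the authors' exact reconstruction scheme.

```python
import numpy as np

N = 128
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
u = np.sin(2 * x) + 0.5 * np.sin(5 * x)      # "exact" underlying field

def top_hat(v):
    """Three-cell box (top-hat) filter on a periodic grid."""
    return (np.roll(v, 1) + v + np.roll(v, -1)) / 3.0

u_bar = top_hat(u)                            # control-volume averaged field

def deconvolve(v_bar, order):
    """Van Cittert approximate inversion: v* = sum_k (I - G)^k v_bar,
    implemented as the fixed-point iteration v <- v + (v_bar - G v)."""
    v = v_bar.copy()
    for _ in range(order):
        v = v + (v_bar - top_hat(v))
    return v

u_star = deconvolve(u_bar, order=5)

err_filtered = np.linalg.norm(u_bar - u)
err_deconvolved = np.linalg.norm(u_star - u)
print(err_filtered, err_deconvolved)
```

The deconvolved field recovers the attenuated high-wavenumber content of the filtered variable, so its error is orders of magnitude below that of the raw top-hat average; on under-resolved turbulent fields the series must be truncated early, which is why a subgrid model remains in the deconvolved balance equation.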

  8. Numerical simulation of the decay of swirling flow in a constant volume engine simulator

    SciTech Connect

    Cloutman, L.D.

    1986-05-01

The KIVA and COYOTE computer programs were used to simulate the decay of turbulent swirling flow in a constant-volume combustion bomb. The results are in satisfactory agreement with the measurements of both swirl velocity and temperature. Predictions of secondary flows and suggestions for future research also are presented. 14 refs., 15 figs.

  9. Development of large volume double ring penning plasma discharge source for efficient light emissions.

    PubMed

    Prakash, Ram; Vyas, Gheesa Lal; Jain, Jalaj; Prajapati, Jitendra; Pal, Udit Narayan; Chowdhuri, Malay Bikas; Manchanda, Ranjana

    2012-12-01

In this paper, the development of a large volume double ring Penning plasma discharge source for efficient light emissions is reported. The developed Penning discharge source consists of two cylindrical end cathodes of stainless steel having radius 6 cm and a gap 5.5 cm between them, which are fitted in the top and bottom flanges of the vacuum chamber. Two stainless steel anode rings with thickness 0.4 cm and inner diameters 6.45 cm having separation 2 cm are kept at the discharge centre. Neodymium (Nd2Fe14B) permanent magnets are physically inserted behind the cathodes for producing a nearly uniform magnetic field of ~0.1 T at the center. Experiments and simulations have been performed for single and double anode ring configurations using helium gas discharge, which indicate that the double ring configuration gives better light emissions in the large volume Penning plasma discharge arrangement. The optical emission spectroscopy measurements are used to complement the observations. The spectral line-ratio technique is utilized to determine the electron plasma density. The estimated electron plasma density in the double ring plasma configuration is ~2 × 10¹¹ cm⁻³, which is around one order of magnitude larger than that of the single ring arrangement.

  10. An Ultrascalable Solution to Large-scale Neural Tissue Simulation

    PubMed Central

    Kozloski, James; Wagner, John

    2011-01-01

    Neural tissue simulation extends requirements and constraints of previous neuronal and neural circuit simulation methods, creating a tissue coordinate system. We have developed a novel tissue volume decomposition, and a hybrid branched cable equation solver. The decomposition divides the simulation into regular tissue blocks and distributes them on a parallel multithreaded machine. The solver computes neurons that have been divided arbitrarily across blocks. We demonstrate thread, strong, and weak scaling of our approach on a machine with more than 4000 nodes and up to four threads per node. Scaling synapses to physiological numbers had little effect on performance, since our decomposition approach generates synapses that are almost always computed locally. The largest simulation included in our scaling results comprised 1 million neurons, 1 billion compartments, and 10 billion conductance-based synapses and gap junctions. We discuss the implications of our ultrascalable Neural Tissue Simulator, and with our results estimate requirements for a simulation at the scale of a human brain. PMID:21954383
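Why the regular-block volume decomposition keeps synapse computation local can be seen in a toy sketch (illustrative numbers, not the simulator's): when the maximum synapse length is shorter than a tissue block, every sampled synapse connects neurons in the same or an adjacent block, so no long-range communication is needed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Neurons in a unit tissue cube, decomposed into 4x4x4 regular blocks that
# would be distributed across nodes of a parallel machine.
n_neurons, blocks_per_side, max_syn_len = 20000, 4, 0.05
pos = rng.random((n_neurons, 3))
block = np.floor(pos * blocks_per_side).astype(int)   # block index per neuron

# Sample candidate synapse pairs; keep only short-range connections.
pre = rng.integers(0, n_neurons, 200_000)
post = rng.integers(0, n_neurons, 200_000)
dist = np.linalg.norm(pos[pre] - pos[post], axis=1)
keep = dist < max_syn_len
pre, post = pre[keep], post[keep]

# A synapse is "local" if its endpoints lie in the same or an adjacent block,
# i.e. every block-index component differs by at most 1.
local = np.all(np.abs(block[pre] - block[post]) <= 1, axis=1)
print(keep.sum(), local.mean())
```

Because the synapse length (0.05) is far below the block edge (0.25), the local fraction is exactly 1 here; in the real simulator some synapses still cross block faces, but they land in neighbouring blocks and can be exchanged with nearest-neighbour communication only.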

  11. Large Eddy Simulation of Crashback in Marine Propulsors

    NASA Astrophysics Data System (ADS)

    Jang, Hyunchul

Crashback is an operating condition to quickly stop a propelled vehicle, where the propeller is rotated in the reverse direction to yield negative thrust. The crashback condition is dominated by the interaction of the free-stream flow with the strong reverse flow. This interaction forms a highly unsteady vortex ring, which is a very prominent feature of crashback. Crashback causes highly unsteady loads and flow separation on the blade surface. The unsteady loads can cause propulsor blade damage and also affect vehicle maneuverability. Crashback is therefore well known as one of the most challenging propeller states to analyze. This dissertation uses Large-Eddy Simulation (LES) to predict the highly unsteady flow field in crashback. A non-dissipative and robust finite volume method developed by Mahesh et al. (2004) for unstructured grids is applied to flow around marine propulsors. The LES equations are written in a rotating frame of reference. The objectives of this dissertation are: (1) to understand the flow physics of crashback in marine propulsors with and without a duct, (2) to develop a finite volume method for the highly skewed meshes that usually occur in complex propulsor geometries, and (3) to develop a sliding interface method for simulations of rotor-stator propulsors on parallel platforms. LES is performed for an open propulsor in crashback and validated against experiments performed by Jessup et al. (2004). The LES results show good agreement with experiments. Effective pressures for thrust and side-force are introduced to more clearly understand the physical sources of thrust and side-force. Both thrust and side-force are seen to be mainly generated from the leading edge of the suction side of the propeller. This implies that thrust and side-force have the same source: the highly unsteady leading edge separation. Conditional averaging is performed to obtain quantitative information about the complex flow physics of high- or low-amplitude events. The

  12. Large-Scale Hybrid Dynamic Simulation Employing Field Measurements

    SciTech Connect

    Huang, Zhenyu; Guttromson, Ross T.; Hauer, John F.

    2004-06-30

Simulation and measurements are two primary ways for power engineers to gain understanding of system behaviors and thus accomplish tasks in system planning and operation. Many well-developed simulation tools are available in today's market. On the other hand, large amounts of measured data can be obtained from traditional SCADA systems and currently fast-growing phasor networks. However, simulation and measurement are still two separate worlds. There is a need to combine the advantages of simulation and measurements. In view of this, this paper proposes the concept of hybrid dynamic simulation, which opens up traditional simulation by providing entry points for measurements. A method is presented to implement hybrid simulation with PSLF/PSDS. Test studies show the validity of the proposed hybrid simulation method. Applications of such hybrid simulation include system event playback, model validation, and software validation.

  13. Large-Eddy Simulation of Wind-Plant Aerodynamics

    SciTech Connect

    Churchfield, M. J.; Lee, S.; Moriarty, P. J.; Martinez, L. A.; Leonardi, S.; Vijayakumar, G.; Brasseur, J. G.

    2012-01-01

In this work, we present results of a large-eddy simulation of the 48 multi-megawatt turbines composing the Lillgrund wind plant. Turbulent inflow wind is created by performing an atmospheric boundary layer precursor simulation, and turbines are modeled using a rotating, variable-speed actuator line representation. The motivation for this work is that few others have done large-eddy simulations of wind plants with a substantial number of turbines, and the methods for carrying out the simulations are varied. We wish to draw upon the strengths of the existing simulations and our growing atmospheric large-eddy simulation capability to create a sound methodology for performing this type of simulation. We used the OpenFOAM CFD toolbox to create our solver. The simulated time-averaged power production of the turbines in the plant agrees well with field observations, except for the sixth turbine and beyond in each wind-aligned column of turbines. The power produced by each of those turbines is overpredicted by 25-40%. A direct comparison between simulated and field data is difficult because we simulate one wind direction with a speed and turbulence intensity characteristic of Lillgrund, but the field observations were taken over a year of varying conditions. The simulation shows the significant 60-70% decrease in the performance of the turbines behind the front row in this plant, which has a spacing of 4.3 rotor diameters in this direction. The overall plant efficiency is well predicted. This work shows the importance of using local grid refinement to simultaneously capture the meter-scale details of the turbine wake and the kilometer-scale turbulent atmospheric structures. Although this work illustrates the power of large-eddy simulation in producing a time-accurate solution, it required about one million processor-hours, showing the significant cost of large-eddy simulation.

  14. Bilateral anterolateral thigh flaps for large-volume breast reconstruction.

    PubMed

    Rosenberg, Jason J; Chandawarkar, Rajiv; Ross, Merrick I; Chevray, Pierre M

    2004-01-01

    Autologous tissue reconstruction of a large breast in patients who are not candidates for a TRAM flap is a difficult problem. We present a case report of the use of bilateral free anterolateral thigh (ALT) flaps for immediate reconstruction of a unilateral large breast in a patient who had a previous abdominoplasty. Use of ALT flaps allows two or three surgical teams to work simultaneously, does not require intraoperative patient repositioning, has minimal donor-site morbidity, and can provide ample malleable soft tissue for breast reconstruction. These are advantages compared to the use of gluteal donor sites. The disadvantages include more conspicuous donor-site scarring on the anterior thighs.

  15. A Distributed Data Implementation of the Perspective Shear-Warp Volume Rendering Algorithm for Visualisation of Large Astronomical Cubes

    NASA Astrophysics Data System (ADS)

    Beeson, Brett; Barnes, David G.; Bourke, Paul D.

We describe the first distributed data implementation of the perspective shear-warp volume rendering algorithm and explore its applications to large astronomical data cubes and simulation realisations. Our system distributes sub-volumes of 3-dimensional images to leaf nodes of a Beowulf-class cluster, where the rendering takes place. Junction nodes composite the sub-volume renderings together and pass the combined images upwards for further compositing or display. We demonstrate that our system outperforms other software solutions and can render a 'worst-case' 512 × 512 × 512 data volume in less than four seconds using 16 rendering and 15 compositing nodes. Our system also performs very well compared with much more expensive hardware systems. With appropriate commodity hardware, such as Swinburne's Virtual Reality Theatre or a 3Dlabs Wildcat graphics card, stereoscopic display is possible.
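The junction-node compositing described above relies on the associativity of the front-to-back 'over' operator, which lets a compositing tree combine partial images in any grouping. A minimal sketch with premultiplied-alpha RGBA images (toy data, not the paper's renderer):

```python
import numpy as np

def over(front, back):
    """Front-to-back 'over' compositing of premultiplied-alpha RGBA images."""
    alpha_f = front[..., 3:4]
    return front + (1.0 - alpha_f) * back

rng = np.random.default_rng(0)
# Three sub-volume renderings (RGBA, premultiplied alpha), ordered front to back.
a, b, c = (rng.random((64, 64, 4)) * 0.5 for _ in range(3))

# Associativity: a junction node may receive (a over b) from one child and c
# from another, or a from one child and (b over c) from another -- the final
# image is identical either way.
left = over(over(a, b), c)
right = over(a, over(b, c))
print(np.allclose(left, right))
```

This is exactly what permits the binary tree layout in the cluster: leaves render, interior nodes composite pairwise in depth order, and the bracketing imposed by the tree shape does not change the displayed image.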

  16. Large space telescope, phase A. Volume 3: Optical telescope assembly

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The development and characteristics of the optical telescope assembly for the Large Space Telescope are discussed. The systems considerations are based on mission-related parameters and optical equipment requirements. Information is included on: (1) structural design and analysis, (2) thermal design, (3) stabilization and control, (4) alignment, focus, and figure control, (5) electronic subsystem, and (6) scientific instrument design.

  17. Large space telescope, phase A. Volume 5: Support systems module

    NASA Technical Reports Server (NTRS)

    1972-01-01

The development and characteristics of the support systems module for the Large Space Telescope are discussed. The following systems are described: (1) thermal control, (2) electrical, (3) communication and data handling, (4) attitude control, and (5) structural features. Analyses of maintainability and reliability considerations are included.

  18. RADON DIAGNOSTIC MEASUREMENT GUIDANCE FOR LARGE BUILDINGS - VOLUME 2. APPENDICES

    EPA Science Inventory

    The report discusses the development of radon diagnostic procedures and mitigation strategies applicable to a variety of large non-residential buildings commonly found in Florida. The investigations document and evaluate the nature of radon occurrence and entry mechanisms for rad...

  19. Large space telescope, phase A. Volume 4: Scientific instrument package

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The design and characteristics of the scientific instrument package for the Large Space Telescope are discussed. The subjects include: (1) general scientific objectives, (2) package system analysis, (3) scientific instrumentation, (4) imaging photoelectric sensors, (5) environmental considerations, and (6) reliability and maintainability.

  20. A finite volume model simulation for the Broughton Archipelago, Canada

    NASA Astrophysics Data System (ADS)

    Foreman, M. G. G.; Czajko, P.; Stucchi, D. J.; Guo, M.

    A finite volume circulation model is applied to the Broughton Archipelago region of British Columbia, Canada and used to simulate the three-dimensional velocity, temperature, and salinity fields that are required by a companion model for sea lice behaviour, development, and transport. The absence of a high resolution atmospheric model necessitated the installation of nine weather stations throughout the region and the development of a simple data assimilation technique that accounts for topographic steering in interpolating/extrapolating the measured winds to the entire model domain. The circulation model is run for the period of March 13-April 3, 2008 and correlation coefficients between observed and model currents, comparisons between model and observed tidal harmonics, and root mean square differences between observed and model temperatures and salinities all showed generally good agreement. The importance of wind forcing in the near-surface circulation, differences between this simulation and one computed with another model, the effects of bathymetric smoothing on channel velocities, further improvements necessary for this model to accurately simulate conditions in May and June, and the implication of near-surface current patterns at a critical location in the 'migration corridor' of wild juvenile salmon, are also discussed.

  1. Simulating Weak Lensing by Large-Scale Structure

    NASA Astrophysics Data System (ADS)

    Vale, Chris; White, Martin

    2003-08-01

    We model weak gravitational lensing of light by large-scale structure using ray tracing through N-body simulations. The method is described with particular attention paid to numerical convergence. We investigate some of the key approximations in the multiplane ray-tracing algorithm. Our simulated shear and convergence maps are used to explore how well standard assumptions about weak lensing hold, especially near large peaks in the lensing signal.
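The multiplane ray tracing described in the abstract is often summarized, in the Born approximation, as a weighted sum of overdensity planes between observer and source. The sketch below uses toy overdensity planes and a crude stand-in for the distance-redshift relation (both assumptions, labeled in the comments), so only the shape of the lensing kernel is meaningful.

```python
import numpy as np

# Born-approximation convergence:
#   kappa = (3/2) Omega_m (H0/c)^2 * sum_i delta_i chi_i (chi_s - chi_i)
#                                       / (chi_s a_i) * dchi
# with chi the comoving distance and chi_s the source-plane distance.
omega_m = 0.3
h0_over_c = 1.0 / 2997.9            # [h/Mpc]
chi_s = 3000.0                      # source comoving distance [Mpc/h]
n_planes = 30
chi = (np.arange(n_planes) + 0.5) * chi_s / n_planes
dchi = chi_s / n_planes
a = 1.0 / (1.0 + chi / 3000.0)      # crude chi(z) stand-in, illustration only

weight = 1.5 * omega_m * h0_over_c**2 * chi * (chi_s - chi) / chi_s / a * dchi

rng = np.random.default_rng(0)
delta = rng.standard_normal((n_planes, 32, 32))  # toy overdensity planes
kappa = np.tensordot(weight, delta, axes=1)      # 32x32 convergence map

# The geometric kernel chi (chi_s - chi) peaks halfway to the source plane,
# which is why lenses at intermediate distances dominate the signal.
print(np.argmax(chi * (chi_s - chi)))
```

In a full ray-tracing code the rays are deflected plane by plane rather than summed along unperturbed paths; the Born sum above is the leading-order limit the paper's convergence maps can be compared against.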

  2. Special Properties of Coherence Scanning Interferometers for large Measurement Volumes

    NASA Astrophysics Data System (ADS)

    Bauer, W.

    2011-08-01

In contrast to many other optical methods, the uncertainty of a Coherence Scanning Interferometer (CSI) in the vertical direction is independent of the field of view. CSIs are therefore ideal instruments for measuring 3D profiles of larger areas (e.g. 36×28 mm²) with high precision. This is advantageous for determining form parameters such as flatness, parallelism and step heights within a short time. In addition, using a telecentric beam path allows measurements of deep-lying surfaces (<70 mm) and the determination of form parameters with large step heights. The lateral and spatial resolution, however, are reduced. In this presentation, different metrological characteristics, together with their potential errors, are analyzed for large-scale measuring CSIs. These instruments are therefore ideal tools for good/bad selection in quality control. The consequences for practical use in industry and for standardization are discussed using examples of workpieces from automotive suppliers and the steel industry.

  3. Simulation and Analysis of Large-Scale Compton Imaging Detectors

    SciTech Connect

    Manini, H A; Lange, D J; Wright, D M

    2006-12-27

    We perform simulations of two types of large-scale Compton imaging detectors. The first type uses silicon and germanium detector crystals, and the second type uses silicon and CdZnTe (CZT) detector crystals. The simulations use realistic detector geometry and parameters. We analyze the performance of each type of detector, and we present results using receiver operating characteristics (ROC) curves.
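The ROC analysis mentioned above can be reproduced on synthetic detector scores; the score distributions below are hypothetical, chosen only to show how sweeping the decision threshold maps to detection versus false-alarm probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical per-observation detection scores from an imaging detector:
# background-only observations vs. observations containing a source.
background = rng.normal(0.0, 1.0, 2000)
signal = rng.normal(1.5, 1.0, 2000)

# ROC curve: sweep the decision threshold from high to low and record the
# false-alarm probability (fpr) and detection probability (tpr) at each step.
thresholds = np.sort(np.concatenate([background, signal]))[::-1]
fpr = np.array([(background >= t).mean() for t in thresholds])
tpr = np.array([(signal >= t).mean() for t in thresholds])

# Area under the ROC curve via the rank statistic: the probability that a
# random signal score exceeds a random background score.
auc = (signal[:, None] > background[None, :]).mean()
print(auc)
```

A detector whose signal and background score distributions overlap heavily yields an AUC near 0.5 (no discrimination), while a well-separated pair like the one above scores well over 0.8; comparing AUCs is a compact way to rank the Si/Ge and Si/CZT configurations.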

  4. Large-volume protein crystal growth for neutron macromolecular crystallography

    DOE PAGES

    Ng, Joseph D.; Baird, James K.; Coates, Leighton; Garcia-Ruiz, Juan M.; Hodge, Teresa A.; Huang, Sijay

    2015-03-30

Neutron macromolecular crystallography (NMC) is the prevailing method for the accurate determination of the positions of H atoms in macromolecules. As neutron sources are becoming more available to general users, finding means to optimize the growth of protein crystals to sizes suitable for NMC is extremely important. Historically, much has been learned about growing crystals for X-ray diffraction. However, owing to new-generation synchrotron X-ray facilities and sensitive detectors, protein crystal sizes as small as in the nano-range have become adequate for structure determination, lessening the necessity to grow large crystals. Here, some of the approaches, techniques and considerations for the growth of crystals to significant dimensions that are now relevant to NMC are revisited. We report that these include experimental strategies utilizing solubility diagrams, ripening effects, classical crystallization techniques, microgravity and theoretical considerations.

  5. Large-volume protein crystal growth for neutron macromolecular crystallography

    SciTech Connect

    Ng, Joseph D.; Baird, James K.; Coates, Leighton; Garcia-Ruiz, Juan M.; Hodge, Teresa A.; Huang, Sijay

    2015-03-30

    Neutron macromolecular crystallography (NMC) is the prevailing method for the accurate determination of the positions of H atoms in macromolecules. As neutron sources are becoming more available to general users, finding means to optimize the growth of protein crystals to sizes suitable for NMC is extremely important. Historically, much has been learned about growing crystals for X-ray diffraction. However, owing to new-generation synchrotron X-ray facilities and sensitive detectors, protein crystal sizes as small as in the nano-range have become adequate for structure determination, lessening the necessity to grow large crystals. Here, some of the approaches, techniques and considerations for the growth of crystals to significant dimensions that are now relevant to NMC are revisited. We report that these include experimental strategies utilizing solubility diagrams, ripening effects, classical crystallization techniques, microgravity and theoretical considerations.

  6. Large-volume protein crystal growth for neutron macromolecular crystallography.

    PubMed

    Ng, Joseph D; Baird, James K; Coates, Leighton; Garcia-Ruiz, Juan M; Hodge, Teresa A; Huang, Sijay

    2015-04-01

    Neutron macromolecular crystallography (NMC) is the prevailing method for the accurate determination of the positions of H atoms in macromolecules. As neutron sources are becoming more available to general users, finding means to optimize the growth of protein crystals to sizes suitable for NMC is extremely important. Historically, much has been learned about growing crystals for X-ray diffraction. However, owing to new-generation synchrotron X-ray facilities and sensitive detectors, protein crystal sizes as small as in the nano-range have become adequate for structure determination, lessening the necessity to grow large crystals. Here, some of the approaches, techniques and considerations for the growth of crystals to significant dimensions that are now relevant to NMC are revisited. These include experimental strategies utilizing solubility diagrams, ripening effects, classical crystallization techniques, microgravity and theoretical considerations.

  7. Evaluation of isolator system and large-volume centrifugation method for culturing body fluids.

    PubMed Central

    Elston, H R; Wang, M; Philip, A

    1990-01-01

    The Isolator system was compared with the large-volume centrifugation method for processing and recovering organisms from body fluids other than blood, cerebrospinal fluid, and urine. A total of 155 body fluid samples were processed for the recovery of clinically significant organisms. Of the 55 positive cultures, Isolator detected 94% and the large-volume centrifugation method detected 64%. The time necessary to indicate positivity was not significantly different in the two methods; however, in five cases, the Isolator system yielded clinically significant organisms 24 h sooner than the conventional method. The Isolator system was found to be a more sensitive alternative than the conventional large-volume centrifugation method. PMID:2405006

  8. Evaluation of Large Volume SrI2(Eu) Scintillator Detectors

    SciTech Connect

    Sturm, B W; Cherepy, N J; Drury, O B; Thelin, P A; Fisher, S E; Magyar, A F; Payne, S A; Burger, A; Boatner, L A; Ramey, J O; Shah, K S; Hawrami, R

    2010-11-18

    There is an ever-increasing demand for gamma-ray detectors which can achieve good energy resolution, high detection efficiency, and room-temperature operation. We are working to address each of these requirements through the development of large volume SrI₂(Eu) scintillator detectors. In this work, we have evaluated a variety of SrI₂ crystals with volumes >10 cm³. The goal of this research was to examine the causes of energy resolution degradation for larger detectors and to determine what can be done to mitigate these effects. Testing both packaged and unpackaged detectors, we have consistently achieved better resolution with the packaged detectors. Using a collimated gamma-ray source, it was determined that better energy resolution for the packaged detectors is correlated with better light collection uniformity. A number of packaged detectors were fabricated and tested, and the best spectroscopic performance was achieved for a 3% Eu-doped crystal with an energy resolution of 2.93% FWHM at 662 keV. Simulations of SrI₂(Eu) crystals were also performed to better understand the light transport physics in scintillators and are reported. This study has important implications for the development of SrI₂(Eu) detectors for national security purposes.
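    As a rough illustration of the resolution figure quoted above (2.93% FWHM at 662 keV), the FWHM of a photopeak can be extracted by locating the half-maximum crossings of the peak. This is a minimal numpy-only sketch on a synthetic peak, not the authors' analysis code:

    ```python
    import numpy as np

    def fwhm_resolution(energies, counts):
        """FWHM and percent resolution of a single photopeak, found by
        walking out from the peak to the half-maximum crossings and
        linearly interpolating between samples."""
        peak = counts.argmax()
        half = counts[peak] / 2.0
        left = peak
        while counts[left] > half:
            left -= 1
        right = peak
        while counts[right] > half:
            right += 1

        def cross(i, j):  # linear interpolation of the half-max crossing
            frac = (half - counts[i]) / (counts[j] - counts[i])
            return energies[i] + frac * (energies[j] - energies[i])

        fwhm = cross(right - 1, right) - cross(left + 1, left)
        centroid = energies[peak]
        return fwhm, 100.0 * fwhm / centroid

    # Synthetic 662 keV photopeak with ~2.93% resolution
    e = np.linspace(600.0, 720.0, 2401)
    sigma = 0.0293 * 662.0 / 2.3548          # FWHM = 2.3548 * sigma
    counts = 1000.0 * np.exp(-0.5 * ((e - 662.0) / sigma) ** 2)
    fwhm, res = fwhm_resolution(e, counts)
    ```

    Real spectra carry a continuum under the peak and Poisson noise, so a Gaussian-plus-background fit is usually preferred; the crossing method above is just the definition of FWHM made executable.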

  9. Exact-Differential Large-Scale Traffic Simulation

    SciTech Connect

    Hanai, Masatoshi; Suzumura, Toyotaro; Theodoropoulos, Georgios; Perumalla, Kalyan S

    2015-01-01

    Analyzing large-scale traffic by simulation requires many repeated executions with varying scenarios or parameters. Such repetition introduces substantial redundancy, because the change from one scenario to the next is usually minor, for example, blocking a single road or changing the speed limit on a few roads. In this paper, we propose a new redundancy-reduction technique, called exact-differential simulation, which simulates only the changed portions of later scenarios while producing exactly the same results as a full simulation. The paper makes two main contributions: (i) the key idea and algorithm of exact-differential simulation, and (ii) a method to build large-scale traffic simulation on top of it. In experiments on a Tokyo traffic simulation, exact-differential simulation improves elapsed time by a factor of 7.26 on average, and by 2.26 even in the worst case, compared with full simulation.
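    The core idea of exact-differential simulation can be illustrated in a highly simplified, sequential form: record the state trajectory of a deterministic baseline run and, for a modified scenario, re-execute only from the first step whose input differs. The actual system additionally handles parallel and event-driven execution, which this sketch (with a hypothetical step function) does not model:

    ```python
    def run_full(step, state, inputs):
        """Baseline run: record the full state trajectory for later reuse."""
        trace = [state]
        for x in inputs:
            state = step(state, x)
            trace.append(state)
        return trace

    def run_differential(step, trace, old_inputs, new_inputs):
        """Re-run only from the first step whose input changed, reusing
        the recorded baseline states before that point. For a deterministic
        step function the result is exactly that of a full re-run."""
        k = 0
        while k < len(old_inputs) and old_inputs[k] == new_inputs[k]:
            k += 1
        state = trace[k]              # last baseline state still valid
        new_trace = trace[:k + 1]
        for x in new_inputs[k:]:
            state = step(state, x)
            new_trace.append(state)
        return new_trace

    step = lambda s, x: s + x         # toy deterministic model
    base = run_full(step, 0, [1, 2, 3, 4])
    diff = run_differential(step, base, [1, 2, 3, 4], [1, 2, 5, 4])
    ```

    Here only two of the four steps are recomputed for the changed scenario, yet `diff` matches the trajectory of a full re-run exactly.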

  10. Random forest classification of large volume structures for visuo-haptic rendering in CT images

    NASA Astrophysics Data System (ADS)

    Mastmeyer, Andre; Fortmeier, Dirk; Handels, Heinz

    2016-03-01

    For patient-specific voxel-based visuo-haptic rendering of CT scans of the liver area, the fully automatic segmentation of large volume structures such as skin, soft tissue, lungs and intestine (risk structures) is important. Using a machine-learning-based approach, several existing segmentations from 10 segmented gold-standard patients are learned by random decision forests, individually and collectively. The core of this paper is feature selection and the application of the learned classifiers to a new patient data set. In a leave-some-out cross-validation, the obtained full volume segmentations are compared to the gold-standard segmentations of the untrained patients. The proposed classifiers use a multi-dimensional feature space to estimate the hidden truth, instead of relying on clinical standard threshold- and connectivity-based methods. The results of our efficient whole-body section classification are multi-label maps of the considered tissues. For visuo-haptic simulation, other small volume structures would have to be segmented additionally; we also take a look at these structures (liver vessels). In an experimental leave-some-out study of 10 patients, the proposed method performs much more efficiently than state-of-the-art methods. In two variants of the leave-some-out experiments we obtain best mean DICE ratios of 0.79, 0.97, 0.63 and 0.83 for skin, soft tissue, hard bone and risk structures. Liver structures are segmented with DICE 0.93 for the liver, 0.43 for blood vessels and 0.39 for bile vessels.
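    The segmentation quality above is reported as DICE ratios. The DICE coefficient between a predicted mask and a gold-standard mask is twice the overlap divided by the total foreground size; a minimal sketch of its computation on binary arrays (not the paper's evaluation code):

    ```python
    import numpy as np

    def dice(seg, gold):
        """DICE overlap between a predicted and a gold-standard binary mask:
        2|A ∩ B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)."""
        seg, gold = seg.astype(bool), gold.astype(bool)
        inter = np.logical_and(seg, gold).sum()
        denom = seg.sum() + gold.sum()
        return 2.0 * inter / denom if denom else 1.0

    # Toy 2-D masks; in the paper these would be 3-D voxel label maps
    a = np.array([[1, 1, 0], [0, 1, 0]])
    b = np.array([[1, 0, 0], [0, 1, 1]])
    score = dice(a, b)   # 2 * 2 overlapping voxels / (3 + 3) foreground voxels
    ```

    Multi-label maps are typically scored one label at a time, treating each tissue class as its own binary mask.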

  11. [INFLUENCE OF LARGE-VOLUME LIPOSUCTION ON SYSTEMIC AND PULMONARY CIRCULATION AND OXYGENATING LUNG FUNCTION].

    PubMed

    Nikolaeva, I P; Kapranova, A S; Popova, V B; Lodyagin, A N; Frolova, T A

    2015-01-01

    The authors measured hemodynamic changes in 72 patients and estimated blood oxygenation and the fluid-sector volumes of the body at different degrees of obesity, before and after large-volume liposuction. The operation was shown to improve respiratory lung function owing to changes in the pulmonary circulation.

  12. Modelling and simulation of large solid state laser systems

    SciTech Connect

    Simmons, W.W.; Warren, W.E.

    1986-01-01

    The role of numerical methods to simulate the several physical processes (e.g., diffraction, self-focusing, gain saturation) that are involved in coherent beam propagation through large laser systems is discussed. A comprehensive simulation code for modeling the pertinent physical phenomena observed in laser operations (growth of small-scale modulation, spatial filter, imaging, gain saturation and beam-induced damage) is described in some detail. Comparisons between code results and solid state laser output performance data are presented. Design and performance estimation of the large Nova laser system at LLNL are given. Finally, a global design rule for large, solid state laser systems is discussed.

  13. Sand tank experiment of a large volume biodiesel spill

    NASA Astrophysics Data System (ADS)

    Scully, K.; Mayer, K. U.

    2015-12-01

    Although petroleum hydrocarbon releases in the subsurface have been well studied, the impacts of subsurface releases of highly degradable alternative fuels, including biodiesel, are not as well understood. One concern is the generation of CH4, which may lead to explosive conditions in underground structures. In addition, the biodegradation of biodiesel consumes O2 that would otherwise be available for the degradation of petroleum hydrocarbons that may be present at a site. Until now, biodiesel biodegradation in the vadose zone has not been examined in detail, despite being critical to understanding the full impact of a release. This research involves a detailed study of a laboratory release of 80 L of biodiesel applied at the surface of a large sandtank to examine the progress of biodegradation reactions. The experiment will monitor the onset and temporal evolution of CH4 generation to provide guidance for site monitoring needs following a biodiesel release to the subsurface. Three CO2 and CH4 flux chambers have been deployed for long-term monitoring of gas emissions. CO2 fluxes have increased in all chambers over the 126 days since the start of the experiment. The highest CO2 effluxes are found directly above the spill and have increased from < 0.5 μmol m-2 s-1 to ~3.8 μmol m-2 s-1, indicating an increase in microbial activity. There were no measurable CH4 fluxes 126 days into the experiment. Sensors were emplaced to continuously measure O2, CO2, moisture content, matric potential, EC, and temperature. In response to the release, CO2 levels have increased across all sensors, from an average value of 0.1% to 0.6% 126 days after the start of the experiment, indicating the rapid onset of biodegradation. The highest CO2 values observed from samples taken at the gas ports were 2.5%. Average O2 concentrations have decreased from 21% to 17% 126 days after the start of the experiment. O2 levels in the bottom central region of the sandtank declined to approximately 12%.
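    Chamber effluxes such as the ~3.8 μmol m-2 s-1 CO2 value above are typically derived from the rate of concentration rise in a closed chamber, converted to a molar flux with the ideal gas law. A sketch with hypothetical chamber geometry and ambient conditions, not the instrument's actual processing:

    ```python
    import numpy as np

    def chamber_flux(t_s, c_ppm, volume_m3, area_m2,
                     temp_k=293.15, pressure_pa=101325.0):
        """Surface gas efflux (umol m-2 s-1) from a closed-chamber series:
        fit a linear slope dC/dt in ppm/s, convert ppm to mole fraction,
        and scale by the molar density of air and the chamber geometry."""
        R = 8.314  # J mol-1 K-1
        slope_ppm_s = np.polyfit(t_s, c_ppm, 1)[0]
        mol_air_per_m3 = pressure_pa / (R * temp_k)  # ideal gas law
        flux_mol = slope_ppm_s * 1e-6 * mol_air_per_m3 * volume_m3 / area_m2
        return flux_mol * 1e6  # mol -> umol

    # Hypothetical 5-minute chamber deployment: CO2 rising 0.01 ppm/s
    t = np.arange(0, 300, 30.0)
    c = 400.0 + 0.01 * t
    flux = chamber_flux(t, c, volume_m3=0.05, area_m2=0.1)
    ```

    In practice the early, most linear part of the concentration curve is used, since the rising chamber concentration suppresses the diffusive gradient over time.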

  14. Mathematical simulation of power conditioning systems. Volume 1: Simulation of elementary units. Report on simulation methodology

    NASA Technical Reports Server (NTRS)

    Prajous, R.; Mazankine, J.; Ippolito, J. C.

    1978-01-01

    Methods and algorithms used for the simulation of elementary power conditioning units (buck, boost, and buck-boost converters, as well as the shunt PWM unit) are described. Definitions are given of similar converters and reduced parameters. The various parts of the simulation to be carried out are dealt with: local stability, corrective networks, measurement of input-output impedance, and global stability. A simulation example is given.
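    A buck converter of the kind simulated in this report can be sketched with a state-space averaged model (valid in continuous conduction) integrated by forward Euler. The component values below are illustrative, not taken from the report:

    ```python
    def simulate_buck(vin, duty, L, C, Rload, dt=1e-6, steps=20000):
        """Averaged buck converter model, continuous conduction mode.
        States: inductor current i and capacitor (output) voltage v.
            L di/dt = duty*vin - v
            C dv/dt = i - v/Rload
        Integrated with forward Euler; dt must resolve the LC dynamics."""
        i, v = 0.0, 0.0
        for _ in range(steps):
            di = (duty * vin - v) / L
            dv = (i - v / Rload) / C
            i += di * dt
            v += dv * dt
        return v

    # 12 V input, 50% duty cycle: steady-state output should settle near 6 V
    vout = simulate_buck(vin=12.0, duty=0.5, L=100e-6, C=100e-6, Rload=10.0)
    ```

    The averaged model hides switching ripple; studies of corrective networks and input-output impedance, as in the report, would add the controller and perturbation sources around this plant.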

  15. REIONIZATION ON LARGE SCALES. I. A PARAMETRIC MODEL CONSTRUCTED FROM RADIATION-HYDRODYNAMIC SIMULATIONS

    SciTech Connect

    Battaglia, N.; Trac, H.; Cen, R.; Loeb, A.

    2013-10-20

    We present a new method for modeling inhomogeneous cosmic reionization on large scales. Utilizing high-resolution radiation-hydrodynamic simulations with 2048³ dark matter particles, 2048³ gas cells, and 17 billion adaptive rays in an L = 100 Mpc h⁻¹ box, we show that the density and reionization redshift fields are highly correlated on large scales (≳ 1 Mpc h⁻¹). This correlation can be statistically represented by a scale-dependent linear bias. We construct a parametric function for the bias, which is then used to filter any large-scale density field to derive the corresponding spatially varying reionization redshift field. The parametric model has three free parameters that can be reduced to one free parameter when we fit the two bias parameters to simulation results. We can differentiate degenerate combinations of the bias parameters by combining results for the global ionization histories and the correlation length between ionized regions. Unlike previous semi-analytic models, the evolution of the reionization redshift field in our model is directly compared cell by cell against simulations and performs well in all tests. Our model maps the high-resolution, intermediate-volume radiation-hydrodynamic simulations onto lower-resolution, larger-volume N-body simulations (≳ 2 Gpc h⁻¹) in order to make mock observations and theoretical predictions.
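    The filtering step described above, applying a scale-dependent linear bias b(k) to a density field to obtain a reionization-redshift fluctuation field, can be sketched in Fourier space. The bias form and parameter values below are illustrative placeholders, not the paper's fitted model:

    ```python
    import numpy as np

    def biased_field(delta, box_size, b0=0.593, k0=0.185, alpha=0.564):
        """Filter a 3-D density field with a scale-dependent linear bias
        b(k) = b0 / (1 + k/k0)**alpha to produce a biased fluctuation
        field (here standing in for the reionization-redshift field)."""
        n = delta.shape[0]
        dk = np.fft.fftn(delta)
        k1 = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
        kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
        kmag = np.sqrt(kx**2 + ky**2 + kz**2)
        bias = b0 / (1.0 + kmag / k0) ** alpha
        return np.real(np.fft.ifftn(bias * dk))

    # Toy Gaussian density field on a 32^3 grid in a 100 Mpc/h box
    rng = np.random.default_rng(0)
    delta = rng.standard_normal((32, 32, 32))
    dz = biased_field(delta, box_size=100.0)
    ```

    Because b(k) < 1 at all k for these placeholder values, the filtered field has strictly smaller variance than the input, i.e. the bias acts as a smoothing kernel in configuration space.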

  16. New material model for simulating large impacts on rocky bodies

    NASA Astrophysics Data System (ADS)

    Tonge, A.; Barnouin, O.; Ramesh, K.

    2014-07-01

    Large impact craters on an asteroid can provide insights into its internal structure. These craters can expose material from the interior of the body at the impact site [e.g., 1]; additionally, the impact sends stress waves throughout the body, which interrogate the asteroid's interior. Through a complex interplay of processes, such impacts can result in a variety of motions, the consequence of which may appear as lineaments that are exposed over all or portions of the asteroid's surface [e.g., 2,3]. While analytic, scaling, and heuristic arguments can provide some insight into general phenomena on asteroids, interpreting the results of a specific impact event, or series of events, on a specific asteroid geometry generally necessitates the use of computational approaches that can solve for the stress and displacement history resulting from an impact event. These computational approaches require a constitutive model for the material, which relates the deformation history of a small material volume to the average force on the boundary of that material volume. In this work, we present a new material model that is suitable for simulating the failure of rocky materials during impact events. This material model is similar to the model discussed in [4]. The new material model incorporates dynamic sub-scale crack interactions through a micro-mechanics-based damage model, thermodynamic effects through the use of a Mie-Gruneisen equation of state, and granular flow of the fully damaged material. The granular flow model includes dilatation resulting from the mutual interaction of small fragments of material (grains) as they are forced to slide and roll over each other and includes a P-α type porosity model to account for compaction of the granular material in a subsequent impact event. The micro-mechanics-based damage model provides a direct connection between the flaw (crack) distribution in the material and the rate-dependent strength. By connecting the rate
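    The Mie-Gruneisen equation of state mentioned above relates pressure to compression and internal energy around a shock Hugoniot reference curve. A generic sketch using the common linear Us-up Hugoniot form, with aluminium-like placeholder constants rather than values from the cited damage model (several variants of the energy correction term exist; this uses one common one):

    ```python
    def mie_gruneisen_pressure(rho, e, rho0=2700.0, c0=5350.0, s=1.34,
                               gamma0=1.97):
        """Mie-Gruneisen pressure (Pa) for density rho (kg/m^3) and specific
        internal energy e (J/kg), referenced to the shock Hugoniot with a
        linear shock-velocity relation Us = c0 + s*up."""
        mu = rho / rho0 - 1.0                 # compression measure
        if mu >= 0.0:
            # Hugoniot pressure from Rankine-Hugoniot + linear Us-up
            p_h = rho0 * c0**2 * mu * (1.0 + mu) / (1.0 - (s - 1.0) * mu) ** 2
            eta = mu / (1.0 + mu)             # 1 - V/V0
            return p_h * (1.0 - 0.5 * gamma0 * eta) + gamma0 * rho0 * e
        # tension branch: simple linear extrapolation about the reference
        return rho0 * c0**2 * mu + gamma0 * rho0 * e

    # 5% compression, cold material: pressure of order a few GPa
    p = mie_gruneisen_pressure(rho=2835.0, e=0.0)
    ```

    In an impact code this EOS supplies the pressure term, while the damage and granular-flow models described in the abstract govern the deviatoric (strength) response.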

  17. Kinetic MHD simulation of large Δ′ tearing mode

    NASA Astrophysics Data System (ADS)

    Cheng, Jianhua; Chen, Yang; Parker, Scott; Uzdensky, Dmitri

    2012-03-01

    We have developed a second-order accurate semi-implicit δf method for kinetic MHD simulation with Lorentz force ions and fluid electrons. The model has been used to study the resistive tearing mode instability, which involves multiple spatial scales. In small Δ′ cases, the linear growth rate and eigenmode structure are consistent with resistive MHD analysis. The Rutherford stage and saturation are demonstrated, but the simulation exhibits different saturated island widths compared with previous MHD simulations. In large Δ′ cases, nonlinear simulations show multiple islands forming, followed by the islands coalescing at later times. The competition between these two processes strongly influences the reconnection rates and eventually leads to a steady-state reconnection. We will present various parameter studies and show that our hybrid results agree with fluid analysis in certain limits (e.g., relatively large resistivities).

  18. Plasma volume losses during simulated weightlessness in women

    SciTech Connect

    Drew, H.; Fortney, S.; La France, N.; Wagner, H.N. Jr.

    1985-05-01

    Six healthy women not using oral contraceptives underwent two 11-day intervals of complete bedrest (BR), with the BR periods separated by 4 weeks of ambulatory control. Change in plasma volume (PV) was monitored during BR to test the hypothesis that these women would show a smaller decrease in PV than values reported in similarly stressed men, owing to the water-retaining effects of the female hormones. Bedrest periods were timed to coincide with opposing stages of the menstrual cycle in each woman. The menstrual cycle was divided into 4 separate stages: early follicular, ovulatory, early luteal, and late luteal. PV showed a consistent percentage decrease in each subject who began BR in stage 1, 3 or 4 of the menstrual cycle. However, the women who began in stage 2 showed a transient attenuation in PV loss. Overall, PV changes seen in women during BR were similar to those reported for men. The water-retaining effects of menstrual hormones were evident only during the high-estrogen ovulatory stage. The authors conclude that the protective effects of menstrual hormones on PV losses under simulated weightless conditions are only small and transient.

  19. Simulation of SMC compression molding: Filling, curing, and volume changes

    SciTech Connect

    Hill, R.R. Jr.

    1992-01-01

    Sheet molding compound (SMC) is a composite material made from polyester resin, styrene, fiberglass reinforcement, and other additives. It is widely recognized that SMC is a good candidate for replacing sheet metals of automotive body exteriors because SMC is relatively inexpensive, has a high strength-to-density ratio, and has good corrosion resistance. The focus of this research was to develop computer models to simulate the important features of SMC compression molding (i.e., material flow, heat transfer, curing, material expansion, and shrinkage), and to characterize these features experimentally. A control volume/finite element approach was used to obtain the pressure and velocity fields and to compute the flow progression during compression mold filling. The energy equation and a kinetic model were solved simultaneously for the temperature and conversion profiles. A series of molding experiments was conducted to record the flow-front location and material temperature. Predictions obtained from the model were compared to experimental results which incorporated a non-isothermal temperature profile, and reasonable agreement was obtained.

  20. Controlled multibody dynamics simulation for large space structures

    NASA Technical Reports Server (NTRS)

    Housner, J. M.; Wu, S. C.; Chang, C. W.

    1989-01-01

    Multibody dynamics discipline, and dynamic simulation in control structure interaction (CSI) design are discussed. The use, capabilities, and architecture of the Large Angle Transient Dynamics (LATDYN) code as a simulation tool are explained. A generic joint body with various types of hinge connections; finite element and element coordinate systems; results of a flexible beam spin-up on a plane; mini-mast deployment; space crane and robotic slewing manipulations; a potential CSI test article; and multibody benchmark experiments are also described.

  1. Parallel simulation of multiphase flows using octree adaptivity and the volume-of-fluid method

    NASA Astrophysics Data System (ADS)

    Agbaglah, Gilou; Delaux, Sébastien; Fuster, Daniel; Hoepffner, Jérôme; Josserand, Christophe; Popinet, Stéphane; Ray, Pascal; Scardovelli, Ruben; Zaleski, Stéphane

    2011-02-01

    We describe computations performed using the Gerris code, an open-source software package implementing finite volume solvers on an octree adaptive grid together with a piecewise linear volume-of-fluid interface tracking method. The parallelisation of Gerris is achieved by domain decomposition. We show examples of the capabilities of Gerris on several types of problems. The impact of a droplet on a layer of the same liquid results in the formation of a thin air layer trapped between the droplet and the liquid layer, which the adaptive refinement makes it possible to capture; it is followed by the jetting of a thin corolla emerging from below the impacting droplet. The jet atomisation problem is another extremely challenging computational problem, in which a large number of small scales are generated. Finally, we show an example of a turbulent jet computation at an equivalent resolution of 6×1024 cells. The jet simulation is based on the configuration of the Deepwater Horizon oil leak.

  2. High Fidelity Simulations of Large-Scale Wireless Networks

    SciTech Connect

    Onunkwo, Uzoma; Benz, Zachary

    2015-11-01

    The worldwide proliferation of wireless connected devices continues to accelerate. There are 10s of billions of wireless links across the planet, with an additional explosion of new wireless usage anticipated as the Internet of Things develops. Wireless technologies not only provide convenience for mobile applications, but are also extremely cost-effective to deploy. Thus, this trend towards wireless connectivity will only continue, and Sandia must develop the necessary simulation technology to proactively analyze the associated emerging vulnerabilities. Wireless networks are marked by mobility and proximity-based connectivity. The de facto standard for exploratory studies of wireless networks is discrete event simulation (DES). However, the simulation of large-scale wireless networks is extremely difficult due to prohibitively large turnaround times. A path forward is to expedite simulations with parallel discrete event simulation (PDES) techniques. The mobility and distance-based connectivity associated with wireless simulations, however, typically doom PDES approaches, which then fail to scale (e.g., the OPNET and ns-3 simulators). We propose a PDES-based tool aimed at reducing the communication overhead between processors. The proposed solution will use light-weight processes to dynamically distribute computation workload while mitigating the communication overhead associated with synchronization. This work is vital to the analytics and validation capabilities of simulation and emulation at Sandia. We have years of experience in Sandia's simulation and emulation projects (e.g., MINIMEGA and FIREWHEEL). Sandia's current highly regarded capabilities in large-scale emulations have focused on wired networks, where two assumptions prevent scalable wireless studies: (a) the connections between objects are mostly static and (b) the nodes have fixed locations.

  3. Applications of large eddy simulation methods to gyrokinetic turbulence

    SciTech Connect

    Bañón Navarro, A.; Happel, T.; Teaca, B.; Jenko, F.; Hammett, G. W.; Collaboration: ASDEX Upgrade Team

    2014-03-15

    The large eddy simulation (LES) approach—solving numerically the large scales of a turbulent system and accounting for the small-scale influence through a model—is applied to nonlinear gyrokinetic systems that are driven by a number of different microinstabilities. Comparisons between modeled, lower resolution, and higher resolution simulations are performed for an experimental measurable quantity, the electron density fluctuation spectrum. Moreover, the validation and applicability of LES is demonstrated through a series of diagnostics based on the free energetics of the system.

  4. Large Eddy Simulations and Turbulence Modeling for Film Cooling

    NASA Technical Reports Server (NTRS)

    Acharya, Sumanta

    1999-01-01

    The objective of the research is to perform Direct Numerical Simulations (DNS) and Large Eddy Simulations (LES) of the film cooling process, and to evaluate and improve advanced forms of the two-equation turbulence models for turbine blade surface flow analysis. The DNS/LES were used to resolve the large eddies within the flow field near the coolant jet location. The work involved code development and application of the developed codes to film cooling problems. Five different codes were developed and utilized to perform this research. This report presents a summary of the development of the codes and their applications to the analysis of turbulence properties at locations near coolant injection holes.

  5. Simulating the large-scale structure of HI intensity maps

    NASA Astrophysics Data System (ADS)

    Seehars, Sebastian; Paranjape, Aseem; Witzemann, Amadeus; Refregier, Alexandre; Amara, Adam; Akeret, Joel

    2016-03-01

    Intensity mapping of neutral hydrogen (HI) is a promising observational probe of cosmology and large-scale structure. We present wide field simulations of HI intensity maps based on N-body simulations of a 2.6 Gpc/h box with 2048³ particles (particle mass 1.6 × 10¹¹ M⊙/h). Using a conditional mass function to populate the simulated dark matter density field with halos below the mass resolution of the simulation (10⁸ M⊙/h < M_halo < 10¹³ M⊙/h), we assign HI to those halos according to a phenomenological halo-to-HI mass relation. The simulations span a redshift range of 0.35 ≲ z ≲ 0.9 in redshift bins of width Δz ≈ 0.05 and cover a quarter of the sky at an angular resolution of about 7'. We use the simulated intensity maps to study the impact of non-linear effects and redshift space distortions on the angular clustering of HI. Focusing on the autocorrelations of the maps, we apply and compare several estimators for the angular power spectrum and its covariance. We verify that these estimators agree with analytic predictions on large scales and study the validity of approximations based on Gaussian random fields, particularly in the context of the covariance. We discuss how our results and the simulated maps can be useful for planning and interpreting future HI intensity mapping surveys.
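    A minimal flat-sky stand-in for the angular power spectrum estimators compared above (curved-sky estimators and their covariances are considerably more involved): FFT a square patch and average the mode power in annular multipole bins. For a white-noise map the recovered C_l should be flat at (pixel solid angle) × (pixel variance):

    ```python
    import numpy as np

    def flat_sky_cl(map2d, pix_rad, nbins=16):
        """Binned flat-sky angular power spectrum of a square sky patch
        with pixel size pix_rad (radians)."""
        n = map2d.shape[0]
        fmap = np.fft.fftn(map2d) * pix_rad**2           # continuum FFT norm
        power = np.abs(fmap) ** 2 / (n * pix_rad) ** 2   # divide by patch area
        lx = 2.0 * np.pi * np.fft.fftfreq(n, d=pix_rad)  # flat-sky multipoles
        l = np.sqrt(lx[:, None] ** 2 + lx[None, :] ** 2)
        bins = np.linspace(0.0, l.max(), nbins + 1)
        idx = np.digitize(l.ravel(), bins) - 1
        good = (idx >= 0) & (idx < nbins)
        sums = np.bincount(idx[good], weights=power.ravel()[good],
                           minlength=nbins)
        cnts = np.bincount(idx[good], minlength=nbins)
        cl = np.where(cnts > 0, sums / np.maximum(cnts, 1), 0.0)
        return 0.5 * (bins[1:] + bins[:-1]), cl

    # White-noise map at 7 arcmin pixels: C_l should be flat at pix**2
    rng = np.random.default_rng(1)
    pix = np.radians(7.0 / 60.0)
    ell, cl = flat_sky_cl(rng.standard_normal((64, 64)), pix)
    ```

    Real survey pipelines must additionally handle the sky mask, beam, and noise bias, which is exactly where the Gaussian-field approximations to the covariance discussed in the abstract come in.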

  6. Large Eddy Simulation of Multiple Turbulent Round Jets

    NASA Astrophysics Data System (ADS)

    Balajee, G. K.; Panchapakesan, Nagangudy

    2015-11-01

    A turbulent round jet flow was simulated by large eddy simulation with the OpenFOAM software package for a jet Reynolds number of 11000. The intensity of the fluctuating motion in the incoming nozzle flow was adjusted so that the initial shear layer development compares well with available experimental data. The far-field development of averaged higher-order moments up to fourth order was compared with experiments. The agreement is good, indicating that the large eddy motions were being computed satisfactorily by the simulation. The turbulent kinetic energy budget as well as the quality of the LES were also evaluated. These conditions were then used to perform a simulation of multiple turbulent round jets with the same initial momentum flux. The far field of the flow was compared with the single jet simulation and experiments to test the approach to self-similarity. The evolution of the higher-order moments in the development region where the multiple jets interact was studied. We will also present FTLE fields computed from the simulation to educe structures and compare them with those educed by other scalar measures. Support of AR&DB CIFAAR, and of the VIRGO cluster at IIT Madras, is gratefully acknowledged.

  7. Toward Improved Support for Loosely Coupled Large Scale Simulation Workflows

    SciTech Connect

    Boehm, Swen; Elwasif, Wael R; Naughton, III, Thomas J; Vallee, Geoffroy R

    2014-01-01

    High-performance computing (HPC) workloads are increasingly leveraging loosely coupled large scale simulations. Unfortunately, most large-scale HPC platforms, including Cray/ALPS environments, are designed for the execution of long-running jobs based on coarse-grained launch capabilities (e.g., one MPI rank per core on all allocated compute nodes). This assumption limits capability-class workload campaigns that require large numbers of discrete or loosely coupled simulations, and where time-to-solution is an untenable pacing issue. This paper describes the challenges related to the support of fine-grained launch capabilities that are necessary for the execution of loosely coupled large scale simulations on Cray/ALPS platforms. More precisely, we present the details of an enhanced runtime system to support this use case, and report on initial results from early testing on systems at Oak Ridge National Laboratory.
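    The fine-grained pattern argued for above, many short independent simulations packed into one allocation of long-lived workers rather than one coarse-grained launch per task, can be sketched with a stdlib worker pool (the actual runtime targets Cray/ALPS launch machinery, which this does not model):

    ```python
    import math
    from concurrent.futures import ThreadPoolExecutor

    def one_simulation(params):
        """Stand-in for a single short, loosely coupled simulation task."""
        seed, steps = params
        x = float(seed)
        for _ in range(steps):
            x = math.sin(x) + 1.0  # toy deterministic iteration
        return round(x, 6)

    def run_campaign(param_list, workers=4):
        """Run a whole campaign of small tasks on a fixed pool of workers,
        amortizing launch cost across many tasks instead of paying a full
        job launch per simulation."""
        with ThreadPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(one_simulation, param_list))

    # A campaign of 8 parameter points, each a short independent run
    results = run_campaign([(seed, 100) for seed in range(8)])
    ```

    On a real HPC system the same pattern appears as a pilot-job or many-task runtime: the scheduler grants one allocation, and an application-level dispatcher feeds it fine-grained work.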

  8. Necessary conditions on Calabi-Yau manifolds for large volume vacua

    NASA Astrophysics Data System (ADS)

    Gray, James; He, Yang-Hui; Jejjala, Vishnu; Jurke, Benjamin; Nelson, Brent; Simón, Joan

    2012-11-01

    We describe an efficient, construction-independent, algorithmic test to determine whether Calabi-Yau threefolds admit a structure compatible with the large volume moduli stabilization scenario of type IIB superstring theory. Using the algorithm, we scan complete intersection and toric hypersurface Calabi-Yau threefolds with 2 ≤ h¹,¹ ≤ 4 and deduce that 418 among 4434 manifolds have a large volume limit with a single large four-cycle. We describe major extensions to this survey, which are currently underway.

  9. Science and engineering of large scale socio-technical simulations.

    SciTech Connect

    Barrett, C. L.; Eubank, S. G.; Marathe, M. V.; Mortveit, H. S.; Reidys, C. M.

    2001-01-01

    Computer simulation is a computational approach whereby global system properties are produced as dynamics by direct computation of interactions among representations of local system elements. A mathematical theory of simulation consists of an account of the formal properties of sequential evaluation and composition of interdependent local mappings. When certain local mappings and their interdependencies can be related to particular real world objects and interdependencies, it is common to compute the interactions to derive a symbolic model of the global system made up of the corresponding interdependent objects. The formal mathematical and computational account of the simulation provides a particular kind of theoretical explanation of the global system properties and, therefore, insight into how to engineer a complex system to exhibit those properties. This paper considers the mathematical foundations and engineering principles necessary for building large scale simulations of socio-technical systems. Examples of such systems are urban regional transportation systems, the national electrical power markets and grids, the world-wide Internet, vaccine design and deployment, theater war, etc. These systems are composed of large numbers of interacting human, physical and technological components. Some components adapt and learn, exhibit perception, interpretation, reasoning, deception, cooperation and noncooperation, and have economic motives as well as the usual physical properties of interaction. The systems themselves are large, and the behavior of socio-technical systems is tremendously complex. The state of affairs for these kinds of systems is characterized by very little satisfactory formal theory, a good deal of very specialized knowledge of subsystems, and a dependence on experience-based practitioners' art. However, these systems are vital and require policy, control, design, implementation and investment. Thus there is motivation to improve the ability to

  10. 21 CFR 201.323 - Aluminum in large and small volume parenterals used in total parenteral nutrition.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 4 2010-04-01 2010-04-01 false Aluminum in large and small volume parenterals... for Specific Drug Products § 201.323 Aluminum in large and small volume parenterals used in total parenteral nutrition. (a) The aluminum content of large volume parenteral (LVP) drug products used in...

  11. 21 CFR 201.323 - Aluminum in large and small volume parenterals used in total parenteral nutrition.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 4 2014-04-01 2014-04-01 false Aluminum in large and small volume parenterals... for Specific Drug Products § 201.323 Aluminum in large and small volume parenterals used in total parenteral nutrition. (a) The aluminum content of large volume parenteral (LVP) drug products used in...

  12. 21 CFR 201.323 - Aluminum in large and small volume parenterals used in total parenteral nutrition.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 4 2013-04-01 2013-04-01 false Aluminum in large and small volume parenterals... for Specific Drug Products § 201.323 Aluminum in large and small volume parenterals used in total parenteral nutrition. (a) The aluminum content of large volume parenteral (LVP) drug products used in...

  13. 21 CFR 201.323 - Aluminum in large and small volume parenterals used in total parenteral nutrition.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 4 2012-04-01 2012-04-01 false Aluminum in large and small volume parenterals... for Specific Drug Products § 201.323 Aluminum in large and small volume parenterals used in total parenteral nutrition. (a) The aluminum content of large volume parenteral (LVP) drug products used in...

  14. 21 CFR 201.323 - Aluminum in large and small volume parenterals used in total parenteral nutrition.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 4 2011-04-01 2011-04-01 false Aluminum in large and small volume parenterals... for Specific Drug Products § 201.323 Aluminum in large and small volume parenterals used in total parenteral nutrition. (a) The aluminum content of large volume parenteral (LVP) drug products used in...

  15. Toward the large-eddy simulation of compressible turbulent flows

    NASA Technical Reports Server (NTRS)

    Erlebacher, G.; Hussaini, M. Y.; Speziale, C. G.; Zang, T. A.

    1992-01-01

    New subgrid-scale models for the large-eddy simulation of compressible turbulent flows are developed and tested based on the Favre-filtered equations of motion for an ideal gas. A compressible generalization of the linear combination of the Smagorinsky model and scale-similarity model, in terms of Favre-filtered fields, is obtained for the subgrid-scale stress tensor. An analogous thermal linear combination model is also developed for the subgrid-scale heat flux vector. The two dimensionless constants associated with these subgrid-scale models are obtained by correlating with the results of direct numerical simulations of compressible isotropic turbulence performed on a 96^3 grid using Fourier collocation methods. Extensive comparisons between the direct and modeled subgrid-scale fields are provided in order to validate the models. A large-eddy simulation of the decay of compressible isotropic turbulence (conducted on a coarse 32^3 grid) is shown to yield results that are in excellent agreement with the fine-grid direct simulation. Future applications of these compressible subgrid-scale models to the large-eddy simulation of more complex supersonic flows are discussed briefly.
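
    For readers unfamiliar with the Smagorinsky half of such a mixed (linear combination) model, a minimal incompressible 2D sketch of the eddy-viscosity computation is given below; the constant Cs = 0.17 and the periodic central-difference grid are illustrative assumptions, not values from this work.

```python
import numpy as np

# Smagorinsky eddy viscosity on a 2D periodic grid:
#   nu_t = (Cs * Delta)^2 * |S|,  |S| = sqrt(2 S_ij S_ij),
# with the strain-rate tensor S_ij built from central differences.

def smagorinsky_viscosity(u, v, dx, cs=0.17):
    """Return the eddy viscosity field for velocity components u, v."""
    dudx = (np.roll(u, -1, 1) - np.roll(u, 1, 1)) / (2 * dx)
    dudy = (np.roll(u, -1, 0) - np.roll(u, 1, 0)) / (2 * dx)
    dvdx = (np.roll(v, -1, 1) - np.roll(v, 1, 1)) / (2 * dx)
    dvdy = (np.roll(v, -1, 0) - np.roll(v, 1, 0)) / (2 * dx)
    s11, s22 = dudx, dvdy
    s12 = 0.5 * (dudy + dvdx)
    s_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
    return (cs * dx) ** 2 * s_mag
```

    A uniform flow yields zero eddy viscosity, while any resolved strain produces a positive value; the scale-similarity term of the mixed model would be added on top of this purely dissipative part.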

  16. Toward the large-eddy simulation of compressible turbulent flows

    NASA Technical Reports Server (NTRS)

    Erlebacher, G.; Hussaini, M. Y.; Speziale, C. G.; Zang, T. A.

    1990-01-01

    New subgrid-scale models for the large-eddy simulation of compressible turbulent flows are developed and tested based on the Favre-filtered equations of motion for an ideal gas. A compressible generalization of the linear combination of the Smagorinsky model and scale-similarity model, in terms of Favre-filtered fields, is obtained for the subgrid-scale stress tensor. An analogous thermal linear combination model is also developed for the subgrid-scale heat flux vector. The two dimensionless constants associated with these subgrid-scale models are obtained by correlating with the results of direct numerical simulations of compressible isotropic turbulence performed on a 96^3 grid using Fourier collocation methods. Extensive comparisons between the direct and modeled subgrid-scale fields are provided in order to validate the models. A large-eddy simulation of the decay of compressible isotropic turbulence (conducted on a coarse 32^3 grid) is shown to yield results that are in excellent agreement with the fine grid direct simulation. Future applications of these compressible subgrid-scale models to the large-eddy simulation of more complex supersonic flows are discussed briefly.

  17. Time simulation of flutter with large stiffness changes

    NASA Technical Reports Server (NTRS)

    Karpel, Mordechay; Wieseman, Carol D.

    1992-01-01

    Time simulation of flutter, involving large local structural changes, is formulated with a state-space model that is based on a relatively small number of generalized coordinates. Free-free vibration modes are first calculated for a nominal finite-element model with relatively large fictitious masses located at the area of structural changes. A low-frequency subset of these modes is then transformed into a set of structural modal coordinates with which the entire simulation is performed. These generalized coordinates and the associated oscillatory aerodynamic force coefficient matrices are used to construct an efficient time-domain, state-space model for a basic aeroelastic case. The time simulation can then be performed by simply changing the mass, stiffness, and damping coupling terms when structural changes occur. It is shown that the size of the aeroelastic model required for time simulation with large structural changes at a few a priori known locations is similar to that required for direct analysis of a single structural case. The method is applied to the simulation of an aeroelastic wind-tunnel model. The diverging oscillations are followed by the activation of a tip-ballast decoupling mechanism that stabilizes the system but may cause significant transient overshoots.
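
    The strategy of keeping one state-space model and swapping only the changed coupling terms can be sketched generically (with a hypothetical two-state system, not the wind-tunnel model itself): march x' = Ax in time and replace A when the structural change is triggered.

```python
import numpy as np

# Generic sketch of time simulation with a structural change: forward-Euler
# time marching of x' = A x, where A is swapped for a modified matrix when a
# trigger condition (e.g. a decoupling mechanism) fires. The matrices stand
# in for the mass/stiffness/damping coupling terms of the real model.

def simulate(A_nominal, A_changed, x0, dt, steps, trigger):
    x, A, history = np.asarray(x0, dtype=float), A_nominal, []
    for _ in range(steps):
        x = x + dt * (A @ x)               # one time-marching step
        if A is A_nominal and trigger(x):  # structural change occurs once
            A = A_changed
        history.append(x.copy())
    return np.array(history)
```

    With a lightly destabilized oscillator as A_nominal and a damped variant as A_changed, the response diverges until the trigger fires and then decays, mirroring the diverging oscillations and subsequent stabilization described above.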

  18. Nuclear Engine System Simulation (NESS). Volume 1: Program user's guide

    NASA Technical Reports Server (NTRS)

    Pelaccio, Dennis G.; Scheil, Christine M.; Petrosky, Lyman J.

    1993-01-01

    A Nuclear Thermal Propulsion (NTP) engine system design analysis tool is required to support current and future Space Exploration Initiative (SEI) propulsion and vehicle design studies. Currently available NTP engine design models are those developed during the NERVA program in the 1960's and early 1970's and are highly unique to that design or are modifications of current liquid propulsion system design models. To date, NTP engine-based liquid design models lack integrated design of key NTP engine design features in the areas of reactor, shielding, multi-propellant capability, and multi-redundant pump feed fuel systems. Additionally, since the SEI effort is in the initial development stage, a robust, verified NTP analysis design tool could be of great use to the community. This effort developed an NTP engine system design analysis program (tool), known as the Nuclear Engine System Simulation (NESS) program, to support ongoing and future engine system and stage design study efforts. In this effort, Science Applications International Corporation's (SAIC) NTP version of the Expanded Liquid Engine Simulation (ELES) program was modified extensively to include Westinghouse Electric Corporation's near-term solid-core reactor design model. The ELES program has extensive capability to conduct preliminary system design analysis of liquid rocket systems and vehicles. The program is modular in nature and is versatile in terms of modeling state-of-the-art component and system options as discussed. The Westinghouse reactor design model, which was integrated in the NESS program, is based on the near-term solid-core ENABLER NTP reactor design concept. This program is now capable of accurately modeling (characterizing) a complete near-term solid-core NTP engine system in great detail, for a number of design options, in an efficient manner. The following discussion summarizes the overall analysis methodology, key assumptions, and capabilities associated with the NESS presents an

  19. Double-Auction Market Simulation Software for Very Large Classes

    ERIC Educational Resources Information Center

    Ironside, Brian; Joerding, Wayne; Kuzyk, Pat

    2004-01-01

    The authors provide a version of a double-auction market simulation designed for classes too large for most computer labs to accommodate in one sitting. Instead, students play the game from remote computers, wherever they may be and at any time during a given time period specified by the instructor. When the window of time expires, students can…

  20. Large scale simulations of the great 1906 San Francisco earthquake

    NASA Astrophysics Data System (ADS)

    Nilsson, S.; Petersson, A.; Rodgers, A.; Sjogreen, B.; McCandless, K.

    2006-12-01

    As part of a multi-institutional simulation effort, we present large scale computations of the ground motion during the great 1906 San Francisco earthquake using a new finite difference code called WPP. The material data base for northern California provided by USGS together with the rupture model by Song et al. is demonstrated to lead to a reasonable match with historical data. In our simulations, the computational domain covered 550 km by 250 km of northern California down to 40 km depth, so a 125 m grid size corresponds to about 2.2 Billion grid points. To accommodate these large grids, the simulations were run on 512-1024 processors on one of the supercomputers at Lawrence Livermore National Lab. A wavelet compression algorithm enabled storage of time-dependent volumetric data. Nevertheless, the first 45 seconds of the earthquake still generated 1.2 TByte of disk space and the 3-D post processing was done in parallel.
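
    A quick consistency check on the quoted grid figures (the ~250 vertical levels below are inferred from the quoted total, not stated in the abstract; 40 km of depth at a uniform 125 m spacing would instead give 320 levels and about 2.8 billion points, so the vertical grid was presumably treated differently):

```python
# Back-of-the-envelope grid-size check for the quoted 550 km x 250 km domain.
dx = 125.0                     # grid spacing in metres
nx = int(550_000 / dx)         # points along the 550 km side -> 4400
ny = int(250_000 / dx)         # points along the 250 km side -> 2000
nz = round(2.2e9 / (nx * ny))  # levels implied by "about 2.2 billion" -> 250
total = nx * ny * nz           # recovers the quoted 2.2e9 grid points
```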

  1. Toward the large-eddy simulations of compressible turbulent flows

    NASA Technical Reports Server (NTRS)

    Erlebacher, G.; Hussaini, M. Y.; Speziale, C. G.; Zang, T. A.

    1987-01-01

    New subgrid-scale models for the large-eddy simulation of compressible turbulent flows are developed based on the Favre-filtered equations of motion for an ideal gas. A compressible generalization of the linear combination of the Smagorinsky model and scale-similarity model (in terms of Favre-filtered fields) is obtained for the subgrid-scale stress tensor. An analogous thermal linear combination model is also developed for the subgrid-scale heat flux vector. The three dimensionless constants associated with these subgrid-scale models are obtained by correlating with the results of direct numerical simulations of compressible isotropic turbulence performed on a 96^3 grid using Fourier collocation methods. Extensive comparisons between the direct and modeled subgrid-scale fields are provided in order to validate the models. Future applications of these compressible subgrid-scale models to the large-eddy simulation of supersonic aerodynamic flows are discussed briefly.

  2. Two-dimensional simulations of extreme floods on a large watershed

    NASA Astrophysics Data System (ADS)

    England, John F.; Velleux, Mark L.; Julien, Pierre Y.

    2007-12-01

    We investigate the applicability of the Two-dimensional, Runoff, Erosion and Export (TREX) model to simulate extreme floods on large watersheds in semi-arid regions in the western United States. Spatially-distributed extreme storm and channel components are implemented so that the TREX model can be applied to this problem. TREX is demonstrated via calibration, validation and simulation of extreme storms and floods on the 12,000 km^2 Arkansas River watershed above Pueblo, Colorado. The model accurately simulates peak, volume and time to peak for the record June 1921 extreme flood calibration and a May 1894 flood validation. A Probable Maximum Precipitation design storm is used to apply the calibrated model. The distributed model TREX captures the effects of spatial and temporal variability of extreme storms for dam safety purposes on large watersheds, and is an alternative to unit-hydrograph rainfall-runoff models.

  3. Evaluation of Cloud, Grid and HPC resources for big volume and variety of RCM simulations

    NASA Astrophysics Data System (ADS)

    Blanco, Carlos; Cofino, Antonio S.; Fernández, Valvanuz; Fernández, Jesús

    2016-04-01

    Cloud, Grid and High Performance Computing have changed the accessibility and availability of computing resources for Earth Science research communities, especially for the Regional Climate Model (RCM) community. These paradigms are modifying the way RCM applications are executed. By using these technologies, the number, variety and complexity of experiments and resources used by RCM simulations are increasing substantially. But, although computational capacity is increasing, the traditional applications and tools used by the community are not adequate to manage this large volume and variety of experiments and computing resources. In this contribution, we evaluate the challenges of executing RCMs on Grid, Cloud and HPC resources and how to tackle them. For this purpose, the WRF model will be used as a well-known representative application for RCM simulations. Grid and Cloud infrastructures provided by EGI's VOs (esr, earth.vo.ibergrid and fedcloud.egi.eu) will be evaluated, as well as HPC resources from the PRACE infrastructure and institutional clusters. As a solution to these challenges we will use the WRF4G framework, which manages large volumes and varieties of computing resources for climate simulation experiments. This work is partially funded by "Programa de Personal Investigador en Formación Predoctoral" from Universidad de Cantabria, co-funded by the Regional Government of Cantabria.

  4. Statistical Modeling of Large-Scale Scientific Simulation Data

    SciTech Connect

    Eliassi-Rad, T; Baldwin, C; Abdulla, G; Critchlow, T

    2003-11-15

    With the advent of massively parallel computer systems, scientists are now able to simulate complex phenomena (e.g., explosions of stars). Such scientific simulations typically generate large-scale data sets over the spatio-temporal space. Unfortunately, the sheer sizes of the generated data sets make efficient exploration of them impossible. Constructing queriable statistical models is an essential step in helping scientists glean new insight from their computer simulations. We define queriable statistical models to be descriptive statistics that (1) summarize and describe the data within a user-defined modeling error, and (2) are able to answer complex range-based queries over the spatio-temporal dimensions. In this chapter, we describe systems that build queriable statistical models for large-scale scientific simulation data sets. In particular, we present our Ad-hoc Queries for Simulation (AQSim) infrastructure, which reduces the data storage requirements and query access times by (1) creating and storing queriable statistical models of the data at multiple resolutions, and (2) evaluating queries on these models of the data instead of the entire data set. Within AQSim, we focus on three simple but effective statistical modeling techniques. AQSim's first modeling technique (called univariate mean modeler) computes the ''true'' (unbiased) mean of systematic partitions of the data. AQSim's second statistical modeling technique (called univariate goodness-of-fit modeler) uses the Anderson-Darling goodness-of-fit method on systematic partitions of the data. Finally, AQSim's third statistical modeling technique (called multivariate clusterer) utilizes the cosine similarity measure to cluster the data into similar groups. Our experimental evaluations on several scientific simulation data sets illustrate the value of using these statistical models on large-scale simulation data sets.
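
    The univariate mean modeler idea lends itself to a compact sketch (fixed-size blocks are an illustrative choice of "systematic partition"): store one mean per partition and answer range queries from the model rather than from the raw data.

```python
# Sketch of a queriable statistical model: per-partition means that answer
# approximate range-mean queries without touching the raw data set.

def build_model(values, block):
    """Partition `values` into fixed-size blocks, storing (start, end, mean)."""
    model = []
    for start in range(0, len(values), block):
        end = min(start + block, len(values))
        model.append((start, end, sum(values[start:end]) / (end - start)))
    return model

def query_mean(model, lo, hi):
    """Approximate mean over the index range [lo, hi), weighted by overlap."""
    total = weight = 0.0
    for start, end, mean in model:
        overlap = max(0, min(end, hi) - max(start, lo))
        total += overlap * mean
        weight += overlap
    return total / weight if weight else None
```

    Queries aligned to partition boundaries are exact; unaligned queries incur a modeling error that shrinks with the block size, which is the trade-off the multi-resolution models above are designed around.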

  5. An immunomagnetic separator for concentration of pathogenic micro-organisms from large volume samples

    NASA Astrophysics Data System (ADS)

    Rotariu, Ovidiu; Ogden, Iain D.; MacRae, Marion; Bădescu, Vasile; Strachan, Norval J. C.

    2005-05-01

    The standard method of immunomagnetic separation of pathogenic bacteria from food and environmental matrices processes 1 ml volumes. Pathogens present at low levels (<1 pathogenic bacteria per ml) will not be consistently detected by this method. Here a flow-through immunomagnetic separator (FTIMS) has been designed and tested to process large volume samples (>50 ml). Preliminary results show that between 70 and 113 times more Escherichia coli O157 are recovered compared with the standard 1 ml method.

  6. A large-volume microwave plasma source based on parallel rectangular waveguides at low pressures

    NASA Astrophysics Data System (ADS)

    Zhang, Qing; Zhang, Guixin; Wang, Shumin; Wang, Liming

    2011-02-01

    A large-volume microwave plasma with good stability, uniformity and high density is directly generated and sustained. A microwave cavity is assembled by upper and lower metal plates and two adjacently parallel rectangular waveguides with axial slots regularly positioned on their inner wide side. Microwave energy is coupled into the plasma chamber shaped by quartz glass to enclose the space of working gas at low pressures. The geometrical properties of the source and the existing modes of the electric field are determined and optimized by a numerical simulation without a plasma. The calculated field patterns are in agreement with the observed experimental results. Argon, helium, nitrogen and air are used to produce a plasma for pressures ranging from 1000 to 2000 Pa and microwave powers above 800 W. The electron density is measured with a Mach-Zehnder interferometer to be on the order of 10^14 cm^-3 and the electron temperature is obtained using atomic emission spectrometry to be in the range 2222-2264 K at a pressure of 2000 Pa at different microwave powers. It can be seen from the interferograms at different microwave powers that the distribution of the plasma electron density is stable and uniform.

  7. Micro Blowing Simulations Using a Coupled Finite-Volume Lattice-Boltzmann LES Approach

    NASA Technical Reports Server (NTRS)

    Menon, S.; Feiz, H.

    1990-01-01

    Three dimensional large-eddy simulations (LES) of single and multiple jet-in-cross-flow (JICF) are conducted using the 19-bit Lattice Boltzmann Equation (LBE) method coupled with a conventional finite-volume (FV) scheme. In this coupled LBE-FV approach, the LBE-LES is employed to simulate the flow inside the jet nozzles while the FV-LES is used to simulate the crossflow. The key application of this technique is the study of the micro-blowing technique (MBT) for drag control, similar to the recent experiments at NASA/GRC. It is necessary to resolve the flow inside the micro-blowing and suction holes with high resolution without being constrained by the FV time-step restriction. The coupled LBE-FV-LES approach achieves this objective in a computationally efficient manner. A single jet in crossflow case is used for validation purposes and the results are compared with experimental data and a full LBE-LES simulation. Good agreement with data is obtained. Subsequently, MBT over a flat plate with a porosity of 25% is simulated using 9 jets in a compressible cross flow at a Mach number of 0.4. It is shown that MBT suppresses the near-wall vortices and reduces the skin friction by up to 50 percent. This is in good agreement with experimental data.

  8. Challenges and strategies in the preparation of large-volume polymer-based monolithic chromatography adsorbents.

    PubMed

    Ongkudon, Clarence M; Kansil, Tamar; Wong, Charlotte

    2014-03-01

    To date, the number of published reports on the large-volume preparation of polymer-based monolithic chromatography adsorbents is still lacking and is of great importance. Many critical factors need to be considered when manufacturing a large-volume polymer-based monolith for chromatographic applications. Structural integrity, validity, and repeatability are thought to be the key factors determining the usability of a large-volume monolith in a separation process. In this review, we focus on problems and solutions pertaining to heat dissipation, pore size distribution, "wall channel" effect, and mechanical strength in monolith preparation. A template-based method comprising sacrificial and nonsacrificial techniques is possibly the method of choice due to its precise control over the porous structure. However, additional expensive steps are usually required for the template removal. Other strategies in monolith preparation are also discussed.

  9. Maestro: An Orchestration Framework for Large-Scale WSN Simulations

    PubMed Central

    Riliskis, Laurynas; Osipov, Evgeny

    2014-01-01

    Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation. PMID:24647123
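
    The benchmarking step can be reduced to a small cost-model sketch (instance names and figures are invented for illustration): measure simulation throughput per VM type, divide hourly price by throughput, and pick the minimum cost per completed simulation.

```python
# Toy cost model for choosing a cloud VM instance type: the "optimal balance
# of performance and cost" is taken here as the lowest price per simulation.

def best_instance(benchmarks):
    """benchmarks maps name -> (simulations_per_hour, price_per_hour)."""
    return min(benchmarks,
               key=lambda name: benchmarks[name][1] / benchmarks[name][0])

# Hypothetical benchmark results:
vms = {
    "small":  (2.0, 0.05),   # 0.025 per simulation
    "medium": (5.0, 0.10),   # 0.020 per simulation
    "large":  (8.0, 0.20),   # 0.025 per simulation
}
```

    Here best_instance(vms) selects "medium": the fastest machine is not the optimal choice once price is folded in, which is precisely why benchmarking both axes matters.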

  10. Maestro: an orchestration framework for large-scale WSN simulations.

    PubMed

    Riliskis, Laurynas; Osipov, Evgeny

    2014-01-01

    Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation. PMID:24647123

  11. Finecasting for renewable energy with large-eddy simulation

    NASA Astrophysics Data System (ADS)

    Jonker, Harmen; Verzijlbergh, Remco

    2016-04-01

    We present results of a single, continuous Large-Eddy Simulation of actual weather conditions during the timespan of a full year, made possible through recent computational developments (Schalkwijk et al, MWR, 2015). The simulation is coupled to a regional weather model in order to provide an LES dataset that is representative of the daily weather of the year 2012 around Cabauw, the Netherlands. This location is chosen such that LES results can be compared with both the regional weather model and observations from the Cabauw observational supersite. The run was made possible by porting our Large-Eddy Simulation program to run completely on the GPU (Schalkwijk et al, BAMS, 2012). GPU adaptation allows us to reach much improved time-to-solution ratios (i.e. simulation speedup versus real time). As a result, one can perform runs with a much longer timespan than previously feasible. The dataset resulting from the LES run provides many avenues for further study. First, it can provide a more statistical approach to boundary-layer turbulence than the more common case-studies by simulating a diverse but representative set of situations, as well as the transition between situations. This has advantages in designing and evaluating parameterizations. In addition, we discuss the opportunities of high-resolution forecasts for the renewable energy sector, e.g. wind and solar energy production.

  12. Domain nesting for multi-scale large eddy simulation

    NASA Astrophysics Data System (ADS)

    Fuka, Vladimir; Xie, Zheng-Tong

    2016-04-01

    The need to simulate city-scale areas (O(10 km)) with high resolution within street canyons in certain areas of interest necessitates different grid resolutions in different parts of the simulated area. General purpose computational fluid dynamics codes typically employ unstructured refined grids, while mesoscale meteorological models more often employ nesting of computational domains. ELMM is a large eddy simulation model for the atmospheric boundary layer. It employs orthogonal uniform grids, and for this reason domain nesting was chosen as the approach for simulations at multiple scales. Domains are implemented as sets of MPI processes which communicate with each other as in a normal non-nested run, but also with processes from another (outer/inner) domain. It should be stressed that the solution of time-steps in the outer and the inner domain must be synchronized, so that the processes do not have to wait for the completion of their boundary conditions. This can be achieved by assigning an appropriate number of CPUs to each domain, so as to gain high efficiency. When nesting is applied for large eddy simulation, the inner domain receives inflow boundary conditions which lack the turbulent motions not represented by the outer grid. ELMM remedies this by optionally adding turbulent fluctuations to the inflow using the efficient method of Xie and Castro (2008). The spatial scale of these fluctuations is in the subgrid-scale of the outer grid and their intensity is estimated from the subgrid turbulent kinetic energy in the outer grid.

  13. Publicly Releasing a Large Simulation Dataset with NDS Labs

    NASA Astrophysics Data System (ADS)

    Goldbaum, Nathan

    2016-03-01

    Optimally, all publicly funded research should be accompanied by the tools, code, and data necessary to fully reproduce the analysis performed in journal articles describing the research. This ideal can be difficult to attain, particularly when dealing with large (>10 TB) simulation datasets. In this lightning talk, we describe the process of publicly releasing a large simulation dataset to accompany the submission of a journal article. The simulation was performed using Enzo, an open source, community-developed N-body/hydrodynamics code and was analyzed using a wide range of community- developed tools in the scientific Python ecosystem. Although the simulation was performed and analyzed using an ecosystem of sustainably developed tools, we enable sustainable science using our data by making it publicly available. Combining the data release with the NDS Labs infrastructure allows a substantial amount of added value, including web-based access to analysis and visualization using the yt analysis package through an IPython notebook interface. In addition, we are able to accompany the paper submission to the arXiv preprint server with links to the raw simulation data as well as interactive real-time data visualizations that readers can explore on their own or share with colleagues during journal club discussions. It is our hope that the value added by these services will substantially increase the impact and readership of the paper.

  14. Multivariate Clustering of Large-Scale Scientific Simulation Data

    SciTech Connect

    Eliassi-Rad, T; Critchlow, T

    2003-06-13

    Simulations of complex scientific phenomena involve the execution of massively parallel computer programs. These simulation programs generate large-scale data sets over the spatio-temporal space. Modeling such massive data sets is an essential step in helping scientists discover new information from their computer simulations. In this paper, we present a simple but effective multivariate clustering algorithm for large-scale scientific simulation data sets. Our algorithm utilizes the cosine similarity measure to cluster the field variables in a data set. Field variables include all variables except the spatial (x, y, z) and temporal (time) variables. The exclusion of the spatial dimensions is important since ''similar'' characteristics could be located (spatially) far from each other. To scale our multivariate clustering algorithm for large-scale data sets, we take advantage of the geometrical properties of the cosine similarity measure. This allows us to reduce the modeling time from O(n^2) to O(n x g(f(u))), where n is the number of data points, f(u) is a function of the user-defined clustering threshold, and g(f(u)) is the number of data points satisfying f(u). We show that on average g(f(u)) is much less than n. Finally, even though spatial variables do not play a role in building clusters, it is desirable to associate each cluster with its correct spatial region. To achieve this, we present a linking algorithm for connecting each cluster to the appropriate nodes of the data set's topology tree (where the spatial information of the data set is stored). Our experimental evaluations on two large-scale simulation data sets illustrate the value of our multivariate clustering and linking algorithms.
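
    The geometric shortcut relies on the fact that for unit vectors cos(a, b) >= u is equivalent to ||a - b||^2 <= 2(1 - u). A greedy single-pass sketch (an illustrative stand-in, not the authors' exact algorithm) compares each normalized point only against the current cluster representatives rather than against all n points:

```python
import numpy as np

# Threshold-based cosine clustering sketch: each point joins the first cluster
# whose representative it is similar enough to, otherwise it founds a new one.
# Only representatives are scanned, typically far fewer than n comparisons.

def cosine_cluster(points, u):
    unit = points / np.linalg.norm(points, axis=1, keepdims=True)
    reps, labels = [], []
    for p in unit:
        for k, r in enumerate(reps):
            if float(p @ r) >= u:      # cosine similarity on unit vectors
                labels.append(k)
                break
        else:                          # no representative is close enough
            reps.append(p)
            labels.append(len(reps) - 1)
    return labels
```

    The number of representatives plays the role of g(f(u)) above: when most points fall into few clusters, the per-point work stays far below a full pairwise scan.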

  15. Multivariate Clustering of Large-Scale Simulation Data

    SciTech Connect

    Eliassi-Rad, T; Critchlow, T

    2003-03-04

    Simulations of complex scientific phenomena involve the execution of massively parallel computer programs. These simulation programs generate large-scale data sets over the spatiotemporal space. Modeling such massive data sets is an essential step in helping scientists discover new information from their computer simulations. In this paper, we present a simple but effective multivariate clustering algorithm for large-scale scientific simulation data sets. Our algorithm utilizes the cosine similarity measure to cluster the field variables in a data set. Field variables include all variables except the spatial (x, y, z) and temporal (time) variables. The exclusion of the spatial space is important since 'similar' characteristics could be located (spatially) far from each other. To scale our multivariate clustering algorithm for large-scale data sets, we take advantage of the geometrical properties of the cosine similarity measure. This allows us to reduce the modeling time from O(n^2) to O(n x g(f(u))), where n is the number of data points, f(u) is a function of the user-defined clustering threshold, and g(f(u)) is the number of data points satisfying the threshold f(u). We show that on average g(f(u)) is much less than n. Finally, even though spatial variables do not play a role in building a cluster, it is desirable to associate each cluster with its correct spatial space. To achieve this, we present a linking algorithm for connecting each cluster to the appropriate nodes of the data set's topology tree (where the spatial information of the data set is stored). Our experimental evaluations on two large-scale simulation data sets illustrate the value of our multivariate clustering and linking algorithms.

  16. Toward large eddy simulation of turbulent flow over an airfoil

    NASA Technical Reports Server (NTRS)

    Choi, Haecheon

    1993-01-01

    The flow field over an airfoil contains several distinct flow characteristics, e.g. laminar, transitional, turbulent boundary layer flow, flow separation, unstable free shear layers, and a wake. This diversity of flow regimes taxes the presently available Reynolds averaged turbulence models. Such models are generally tuned to predict a particular flow regime, and adjustments are necessary for the prediction of a different flow regime. Similar difficulties are likely to emerge when the large eddy simulation technique is applied with the widely used Smagorinsky model. This model has not been successful in correctly representing different turbulent flow fields with a single universal constant and has an incorrect near-wall behavior. Germano et al. (1991) and Ghosal, Lund & Moin have developed a new subgrid-scale model, the dynamic model, which is very promising in alleviating many of the persistent inadequacies of the Smagorinsky model: the model coefficient is computed dynamically as the calculation progresses rather than input a priori. The model has been remarkably successful in prediction of several turbulent and transitional flows. We plan to simulate turbulent flow over a '2D' airfoil using the large eddy simulation technique. Our primary objective is to assess the performance of the newly developed dynamic subgrid-scale model for computation of complex flows about aircraft components and to compare the results with those obtained using the Reynolds average approach and experiments. The present computation represents the first application of large eddy simulation to a flow of aeronautical interest and a key demonstration of the capabilities of the large eddy simulation technique.

  17. Large-eddy simulation of trans- and supercritical injection

    NASA Astrophysics Data System (ADS)

    Müller, H.; Niedermeier, C. A.; Jarczyk, M.; Pfitzner, M.; Hickel, S.; Adams, N. A.

    2016-07-01

    In a joint effort to develop a robust numerical tool for the simulation of injection, mixing, and combustion in liquid rocket engines at high pressure, a real-gas thermodynamics model has been implemented into two computational fluid dynamics (CFD) codes, the density-based INCA and a pressure-based version of OpenFOAM. As a part of the validation process, both codes have been used to perform large-eddy simulations (LES) of trans- and supercritical nitrogen injection. Despite the different code architecture and the different subgrid scale turbulence modeling strategy, both codes yield similar results. The agreement with the available experimental data is good.

  18. Refurbishment of the Jet Propulsion Laboratory's Large Space Simulator

    NASA Technical Reports Server (NTRS)

    Harrell, J.; Johnson, K.

    1993-01-01

    The JPL large space simulator has recently undergone a major refurbishment to restore and enhance its capabilities to provide high fidelity space simulation. The nearly completed refurbishment has included upgrading the vacuum pumping system by replacing old oil diffusion pumps with new cryogenic and turbomolecular pumps; modernizing the entire control system to utilize computerized, distributed control technology; replacing the Xenon arc lamp power supplies with new upgraded units; refinishing the primary collimating mirror; and replacing the existing integrating lens unit and the fused quartz penetration window.

  19. Production of large resonant plasma volumes in microwave electron cyclotron resonance ion sources

    DOEpatents

    Alton, G.D.

    1998-11-24

    Microwave injection methods are disclosed for enhancing the performance of existing electron cyclotron resonance (ECR) ion sources. The methods are based on the use of high-power diverse frequency microwaves, including variable-frequency, multiple-discrete-frequency, and broadband microwaves. The methods effect large resonant "volume" ECR regions in the ion sources. The creation of these large ECR plasma volumes permits coupling of more microwave power into the plasma, resulting in the heating of a much larger electron population to higher energies, the effect of which is to produce higher charge state distributions and much higher intensities within a particular charge state than possible in present ECR ion sources. 5 figs.

  20. Production of large resonant plasma volumes in microwave electron cyclotron resonance ion sources

    DOEpatents

    Alton, Gerald D.

    1998-01-01

    Microwave injection methods for enhancing the performance of existing electron cyclotron resonance (ECR) ion sources. The methods are based on the use of high-power diverse frequency microwaves, including variable-frequency, multiple-discrete-frequency, and broadband microwaves. The methods effect large resonant "volume" ECR regions in the ion sources. The creation of these large ECR plasma volumes permits coupling of more microwave power into the plasma, resulting in the heating of a much larger electron population to higher energies, the effect of which is to produce higher charge state distributions and much higher intensities within a particular charge state than possible in present ECR ion sources.

  1. Large-volume en-bloc staining for electron microscopy-based connectomics

    PubMed Central

    Hua, Yunfeng; Laserstein, Philip; Helmstaedter, Moritz

    2015-01-01

    Large-scale connectomics requires dense staining of neuronal tissue blocks for electron microscopy (EM). Here we report a large-volume dense en-bloc EM staining protocol that overcomes the staining gradients, which so far substantially limited the reconstructable volumes in three-dimensional (3D) EM. Our protocol provides densely reconstructable tissue blocks from mouse neocortex sized at least 1 mm in diameter. By relaxing the constraints on precise topographic sample targeting, it makes the correlated functional and structural analysis of neuronal circuits realistic. PMID:26235643

  2. Simulation of large-scale rule-based models

    SciTech Connect

    Hlavacek, William S; Monnie, Michael I; Colvin, Joshua; Faseder, James

    2008-01-01

    Interactions of molecules, such as signaling proteins, with multiple binding sites and/or multiple sites of post-translational covalent modification can be modeled using reaction rules. Rules comprehensively, but implicitly, define the individual chemical species and reactions that molecular interactions can potentially generate. Although rules can be automatically processed to define a biochemical reaction network, the network implied by a set of rules is often too large to generate completely or to simulate using conventional procedures. To address this problem, we present DYNSTOC, a general-purpose tool for simulating rule-based models. DYNSTOC implements a null-event algorithm for simulating chemical reactions in a homogeneous reaction compartment. The simulation method does not require that a reaction network be specified explicitly in advance, but rather takes advantage of the availability of the reaction rules in a rule-based specification of a network to determine if a randomly selected set of molecular components participates in a reaction during a time step. DYNSTOC reads reaction rules written in the BioNetGen language (BNGL), which is useful for modeling protein-protein interactions involved in signal transduction. The method of DYNSTOC is closely related to that of STOCHSIM. DYNSTOC differs from STOCHSIM by allowing for model specification in terms of BNGL, which extends the range of protein complexes that can be considered in a model. DYNSTOC enables the simulation of rule-based models that cannot be simulated by conventional methods. We demonstrate the ability of DYNSTOC to simulate models accounting for multisite phosphorylation and multivalent binding processes that are characterized by large numbers of reactions. DYNSTOC is free for non-commercial use. The C source code, supporting documentation and example input files are available at .
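
    The null-event idea can be illustrated on a toy two-reaction system. The sketch below is a generic null-event (rejection) stochastic simulation, not DYNSTOC itself: a candidate reaction channel is drawn each step and fired with probability proportional to its propensity relative to an assumed upper bound rmax, and otherwise the step is a "null event" in which only time advances. The reaction system and rate bound are our own illustrative assumptions.

```python
import random

def null_event_ssa(n_a, n_b, k1, k2, rmax, t_end, seed=0):
    """Null-event (rejection) stochastic simulation of A <-> B.
    Each step draws one of the two channels uniformly and fires it with
    probability propensity / rmax; otherwise the step is a null event.
    rmax must bound every propensity (here k1*n_a and k2*n_b)."""
    rng = random.Random(seed)
    t = 0.0
    while t < t_end:
        # time always advances at the fixed bounding rate (2 channels)
        t += rng.expovariate(2.0 * rmax)
        if rng.random() < 0.5:                      # candidate: A -> B
            if rng.random() < k1 * n_a / rmax:
                n_a, n_b = n_a - 1, n_b + 1
        else:                                       # candidate: B -> A
            if rng.random() < k2 * n_b / rmax:
                n_a, n_b = n_a + 1, n_b - 1
    return n_a, n_b

a, b = null_event_ssa(100, 0, k1=1.0, k2=1.0, rmax=100.0, t_end=5.0)
```

    The appeal of this scheme for rule-based models is that propensities need only be evaluated for the randomly selected candidates, so the full reaction network never has to be enumerated.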

  3. Lifetime of metastable states in a Ginzburg-Landau system: Numerical simulations at large driving forces.

    PubMed

    Umantsev, A

    2016-04-01

    We developed a "brute-force" simulation method and conducted numerical "experiments" on homogeneous nucleation in an isotropic system at large driving forces (not small supersaturations) using the stochastic Ginzburg-Landau approach. Interactions in the system are described by the asymmetric (no external field), athermal (temperature-independent driving force), tangential (simple phase diagram) Hamiltonian, which has two independent "drivers" of the phase transition: supersaturation and thermal noise. We obtained the probability distribution function of the lifetime of the metastable state and analyzed its mean value as a function of the supersaturation, noise strength, and volume. We also proved the nucleation theorem in the mean-field approximation. The results allowed us to find the thermodynamic properties of the barrier state and conclude that at large driving forces the fluctuating volumes are not independent. PMID:27176373

  4. Large-eddy simulation of sand dune morphodynamics

    NASA Astrophysics Data System (ADS)

    Khosronejad, Ali; Sotiropoulos, Fotis; St. Anthony Falls Laboratory, University of Minnesota Team

    2015-11-01

    Sand dunes are natural features that form under complex interaction between turbulent flow and bed morphodynamics. We employ a fully-coupled 3D numerical model (Khosronejad and Sotiropoulos, 2014, Journal of Fluid Mechanics, 753:150-216) to perform high-resolution large-eddy simulations of turbulence and bed morphodynamics in a laboratory-scale mobile-bed channel to investigate initiation, evolution and quasi-equilibrium of sand dunes (Venditti and Church, 2005, J. Geophysical Research, 110:F01009). We employ a curvilinear immersed boundary method along with convection-diffusion and bed-morphodynamics modules to simulate suspended-sediment and bed-load transport, respectively. The coupled simulations were carried out on a grid with more than 100 million grid nodes and simulated about 3 hours of physical time of dune evolution. The simulations provide the first complete description of sand dune formation and long-term evolution. The geometric characteristics of the simulated dunes are shown to be in excellent agreement with observed data obtained across a broad range of scales. This work was supported by NSF Grants EAR-0120914 (as part of the National Center for Earth-Surface Dynamics). Computational resources were provided by the University of Minnesota Supercomputing Institute.

  5. Mechanistic simulation of normal-tissue damage in radiotherapy—implications for dose-volume analyses

    NASA Astrophysics Data System (ADS)

    Rutkowska, Eva; Baker, Colin; Nahum, Alan

    2010-04-01

    A radiobiologically based 3D model of normal tissue has been developed in which complications are generated when 'irradiated'. The aim is to provide insight into the connection between dose-distribution characteristics, different organ architectures and complication rates beyond that obtainable with simple DVH-based analytical NTCP models. In this model the organ consists of a large number of functional subunits (FSUs), populated by stem cells which are killed according to the LQ model. A complication is triggered if the density of FSUs in any 'critical functioning volume' (CFV) falls below some threshold. The (fractional) CFV determines the organ architecture and can be varied continuously from small (series-like behaviour) to large (parallel-like). A key feature of the model is its ability to account for the spatial dependence of dose distributions. Simulations were carried out to investigate correlations between dose-volume parameters and the incidence of 'complications' using different pseudo-clinical dose distributions. Correlations between dose-volume parameters and outcome depended on characteristics of the dose distributions and on organ architecture. As anticipated, the mean dose and V20 correlated most strongly with outcome for a parallel organ, and the maximum dose for a serial organ. Interestingly better correlation was obtained between the 3D computer model and the LKB model with dose distributions typical for serial organs than with those typical for parallel organs. This work links the results of dose-volume analyses to dataset characteristics typical for serial and parallel organs and it may help investigators interpret the results from clinical studies.
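
    The FSU/CFV mechanism described above lends itself to a compact Monte Carlo sketch. The version below is our own 1D illustration, not the authors' 3D model: the LQ parameters (alpha, beta), stem-cell counts, threshold, and dose pattern are assumed values chosen only to show how a small CFV behaves serially and a large CFV behaves in parallel.

```python
import math
import random

def fsu_survives(n_stem, dose, alpha, beta, rng):
    """An FSU survives if at least one of its stem cells survives.
    Per-cell survival follows the linear-quadratic (LQ) model."""
    s_cell = math.exp(-(alpha * dose + beta * dose ** 2))
    return any(rng.random() < s_cell for _ in range(n_stem))

def complication(fsu_doses, cfv_fraction, n_stem=100,
                 alpha=0.35, beta=0.035, density_threshold=0.5, seed=1):
    """Complication is triggered if the density of surviving FSUs in any
    contiguous critical functioning volume (CFV) falls below a threshold.
    A small cfv_fraction mimics a serial organ; a large one, parallel."""
    rng = random.Random(seed)
    alive = [fsu_survives(n_stem, d, alpha, beta, rng) for d in fsu_doses]
    w = max(1, int(cfv_fraction * len(alive)))
    return any(sum(alive[i:i + w]) / w < density_threshold
               for i in range(len(alive) - w + 1))

# a 1D 'organ' of 50 FSUs with a high-dose hot spot in the middle
doses = [2.0] * 20 + [60.0] * 10 + [2.0] * 20
serial_like = complication(doses, cfv_fraction=0.3)    # small CFV
parallel_like = complication(doses, cfv_fraction=0.8)  # large CFV
```

    With this dose pattern the small-CFV (serial-like) organ registers a complication from the hot spot while the large-CFV (parallel-like) organ tolerates it, mirroring the architecture dependence discussed in the abstract.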

  6. Large Eddy Simulation of Cryogenic Injection Processes at Supercritical Pressure

    NASA Technical Reports Server (NTRS)

    Oefelein, Joseph C.; Garcia, Roberto (Technical Monitor)

    2002-01-01

    This paper highlights results from the first of a series of hierarchical simulations aimed at assessing the modeling requirements for application of the large eddy simulation technique to cryogenic injection and combustion processes in liquid rocket engines. The focus is on liquid-oxygen-hydrogen coaxial injectors at a condition where the liquid-oxygen is injected at a subcritical temperature into a supercritical environment. For this situation a diffusion dominated mode of combustion occurs in the presence of exceedingly large thermophysical property gradients. Though continuous, these gradients approach the behavior of a contact discontinuity. Significant real gas effects and transport anomalies coexist locally in colder regions of the flow, with ideal gas and transport characteristics occurring within the flame zone. The current focal point is on the interfacial region between the liquid-oxygen core and the coaxial hydrogen jet where the flame anchors itself.

  7. Large-eddy simulation using the finite element method

    SciTech Connect

    McCallen, R.C.; Gresho, P.M.; Leone, J.M. Jr.; Kollmann, W.

    1993-10-01

    In a large-eddy simulation (LES) of turbulent flows, the large-scale motion is calculated explicitly, while the effects of the small-scale motion are modeled (i.e., approximated with semi-empirical relations). Typically, finite difference or spectral numerical schemes are used to generate an LES; the use of finite element methods (FEM) has been far less prominent. In this study, we demonstrate that FEM in combination with LES provides a viable tool for the study of turbulent, separating channel flows, specifically the flow over a two-dimensional backward-facing step. The combination of these methodologies brings together the advantages of each: LES provides a high degree of accuracy with a minimum of empiricism for turbulence modeling and FEM provides a robust way to simulate flow in very complex domains of practical interest. Such a combination should prove very valuable to the engineering community.

  8. Large-scale numerical simulation of rotationally constrained convection

    NASA Astrophysics Data System (ADS)

    Sprague, Michael; Julien, Keith; Knobloch, Edgar; Werne, Joseph; Weiss, Jeffrey

    2007-11-01

    Using direct numerical simulation (DNS), we investigate solutions of an asymptotically reduced system of nonlinear PDEs for rotationally constrained convection. The reduced equations filter fast inertial waves and relax the need to resolve Ekman boundary layers, which allow exploration of a parameter range inaccessible with DNS of the full Boussinesq equations. The equations are applicable to ocean deep convection, which is characterized by small Rossby number and large Rayleigh number. Previous numerical studies of the reduced equations examined upright convection where the gravity vector was anti-parallel to the rotation vector. In addition to the columnar and geostrophic-turbulence regimes, simulations revealed a third regime where Taylor columns were shielded by sleeves of opposite-signed vorticity. We here extend our numerical simulations to examine both upright and tilted convection at high Rayleigh numbers.

  9. Time-Domain Filtering for Spatial Large-Eddy Simulation

    NASA Technical Reports Server (NTRS)

    Pruett, C. David

    1997-01-01

    An approach to large-eddy simulation (LES) is developed whose subgrid-scale model incorporates filtering in the time domain, in contrast to conventional approaches, which exploit spatial filtering. The method is demonstrated in the simulation of a heated, compressible, axisymmetric jet, and results are compared with those obtained from fully resolved direct numerical simulation. The present approach was, in fact, motivated by the jet-flow problem and the desire to manipulate the flow by localized (point) sources for the purposes of noise suppression. Time-domain filtering appears to be more consistent with the modeling of point sources; moreover, time-domain filtering may resolve some fundamental inconsistencies associated with conventional space-filtered LES approaches.
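
    One common causal time-domain filter in the temporal-LES literature is the exponential filter, governed by d(ubar)/dt = (u - ubar)/Delta. The sketch below is our own illustration (not the paper's code) of that filter applied to a slow ramp carrying a fast oscillation; the time step, filter width, and test signal are assumptions.

```python
import math

def exponential_time_filter(signal, dt, delta):
    """Causal exponential time filter: d(ubar)/dt = (u - ubar) / delta,
    discretized with forward Euler.  Larger delta means heavier smoothing;
    delta -> 0 recovers the unfiltered signal."""
    ubar = signal[0]
    out = [ubar]
    for u in signal[1:]:
        ubar += (dt / delta) * (u - ubar)
        out.append(ubar)
    return out

# a slow ramp plus a fast oscillation: the filter keeps the ramp, damps
# the oscillation, and the residual u - ubar is the 'subfilter' part
# left to the subgrid-scale model
dt, delta = 0.01, 0.5
u = [t + 0.2 * math.sin(50.0 * t) for t in (i * dt for i in range(1000))]
ubar = exponential_time_filter(u, dt, delta)
```

    Because the filter uses only past samples, it is directly compatible with localized point sources acting in time, which is the consistency argument made in the abstract.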

  10. Large Eddy Simulations of Severe Convection Induced Turbulence

    NASA Technical Reports Server (NTRS)

    Ahmad, Nash'at; Proctor, Fred

    2011-01-01

    Convective storms can pose a serious risk to aviation operations since they are often accompanied by turbulence, heavy rain, hail, icing, lightning, strong winds, and poor visibility. They can cause major delays in air traffic due to the re-routing of flights, and by disrupting operations at the airports in the vicinity of the storm system. In this study, the Terminal Area Simulation System is used to simulate five different convective events ranging from a mesoscale convective complex to isolated storms. The occurrence of convection-induced turbulence is analyzed from these simulations. The validation of model results with the radar data and other observations is reported, and an aircraft-centric turbulence hazard metric calculated for each case is discussed. The turbulence analysis showed that large pockets of significant turbulence hazard can be found in regions of low radar reflectivity. Moderate and severe turbulence was often found in building cumulus turrets and overshooting tops.

  11. Lightweight computational steering of very large scale molecular dynamics simulations

    SciTech Connect

    Beazley, D.M.; Lomdahl, P.S.

    1996-09-01

    We present a computational steering approach for controlling, analyzing, and visualizing very large scale molecular dynamics simulations involving tens to hundreds of millions of atoms. Our approach relies on extensible scripting languages and an easy to use tool for building extensions and modules. The system is extremely easy to modify, works with existing C code, is memory efficient, and can be used from inexpensive workstations and networks. We demonstrate how we have used this system to manipulate data from production MD simulations involving as many as 104 million atoms running on the CM-5 and Cray T3D. We also show how this approach can be used to build systems that integrate common scripting languages (including Tcl/Tk, Perl, and Python), simulation code, user extensions, and commercial data analysis packages.

  12. Rapid estimate of solid volume in large tuff cores using a gas pycnometer

    SciTech Connect

    Thies, C.; Geddis, A.M.; Guzman, A.G.

    1996-09-01

    A thermally insulated, rigid-volume gas pycnometer system has been developed. The pycnometer chambers have been machined from solid PVC cylinders. Two chambers confine dry high-purity helium at different pressures. A thick-walled design ensures minimal heat exchange with the surrounding environment and a constant volume system, while expansion takes place between the chambers. The internal energy of the gas is assumed constant over the expansion. The ideal gas law is used to estimate the volume of solid material sealed in one of the chambers. Temperature is monitored continuously and incorporated into the calculation of solid volume. Temperature variation between measurements is less than 0.1 °C. The data are used to compute grain density for oven-dried Apache Leap tuff core samples. The measured volume of solid and the sample bulk volume are used to estimate porosity and bulk density. Intrinsic permeability was estimated from the porosity and measured pore surface area and is compared to in-situ measurements by the air permeability method. The gas pycnometer accommodates large core samples (0.25 m length x 0.11 m diameter) and can measure solid volume greater than 2.20 cm^3 with less than 1% error.
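
    The two-chamber expansion lends itself to a short worked example. Assuming isothermal expansion of an ideal gas between a reference chamber and the sample chamber, the solid volume follows from pressure bookkeeping alone; the chamber volumes and pressures below are illustrative, not the instrument's.

```python
def solid_volume(v_ref, v_cell, p1, p2, pf):
    """Solid volume from an isothermal two-chamber expansion.

    The reference chamber (v_ref, pressure p1) is opened into the sample
    chamber (v_cell, pressure p2, holding the solid); pf is the common
    equilibrium pressure.  Ideal-gas bookkeeping,
        p1*v_ref + p2*(v_cell - vs) = pf*(v_ref + v_cell - vs),
    is solved here for the solid volume vs."""
    return v_cell - v_ref * (p1 - pf) / (pf - p2)

# illustrative numbers: two 100 cm^3 chambers, helium at 200 kPa
# expanded against 100 kPa, settling at 160 kPa
vs = solid_volume(v_ref=100.0, v_cell=100.0, p1=200.0, p2=100.0, pf=160.0)
```

    Dividing the measured solid volume by the sample bulk volume then gives porosity, as described in the abstract; a temperature correction would scale each p*V term by its measured temperature.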

  13. Large-Eddy Simulations of Dust Devils and Convective Vortices

    NASA Astrophysics Data System (ADS)

    Spiga, Aymeric; Barth, Erika; Gu, Zhaolin; Hoffmann, Fabian; Ito, Junshi; Jemmett-Smith, Bradley; Klose, Martina; Nishizawa, Seiya; Raasch, Siegfried; Rafkin, Scot; Takemi, Tetsuya; Tyler, Daniel; Wei, Wei

    2016-09-01

    In this review, we address the use of numerical computations called Large-Eddy Simulations (LES) to study dust devils, and the more general class of atmospheric phenomena they belong to (convective vortices). We describe the main elements of the LES methodology. We review the properties, statistics, and variability of dust devils and convective vortices resolved by LES in both terrestrial and Martian environments. The current challenges faced by modelers using LES for dust devils are also discussed in detail.

  14. Simulation requirements for the Large Deployable Reflector (LDR)

    NASA Technical Reports Server (NTRS)

    Soosaar, K.

    1984-01-01

    Simulation tools for the large deployable reflector (LDR) are discussed. These tools are often transfer-function equations. However, transfer functions are inadequate to represent time-varying systems and multiple control systems with overlapping bandwidths and multi-input, multi-output behavior. Frequency-domain approaches are useful design tools, but a full-up simulation is needed. Because the high-frequency, multi-degree-of-freedom components encountered would demand a dedicated computer, non-real-time simulation is preferred. Large numerical analysis software programs are useful only to receive inputs and provide output to the next block, and should be kept out of the direct loop of the simulation. The following blocks make up the simulation. The thermal-model block is a classical, non-steady-state heat transfer program. The quasistatic block deals with problems associated with rigid-body control of reflector segments. The steady-state block assembles data into equations of motion and dynamics. A differential raytrace is obtained to establish the change in wave aberrations. The observation scene is described. The focal-plane module converts the photon intensity impinging on it into electron streams or into permanent film records.

  15. High Speed Networking and Large-scale Simulation in Geodynamics

    NASA Technical Reports Server (NTRS)

    Kuang, Weijia; Gary, Patrick; Seablom, Michael; Truszkowski, Walt; Odubiyi, Jide; Jiang, Weiyuan; Liu, Dong

    2004-01-01

    Large-scale numerical simulation has been one of the most important approaches for understanding global geodynamical processes. In this approach, peta-scale floating point operations (pflops) are often required to carry out a single physically-meaningful numerical experiment. For example, to model convective flow in the Earth's core and generation of the geomagnetic field (geodynamo), simulation for one magnetic free-decay time (approximately 15000 years) with a modest resolution of 150 in three spatial dimensions would require approximately 0.2 pflops. If such a numerical model is used to predict geomagnetic secular variation over decades and longer, with e.g. an ensemble Kalman filter assimilation approach, approximately 30 (and perhaps more) independent simulations of similar scales would be needed for one data assimilation analysis. Obviously, such a simulation would require an enormous computing resource that exceeds the capacity of a single facility currently available at our disposal. One solution is to utilize a very fast network (e.g. 10Gb optical networks) and available middleware (e.g. Globus Toolkit) to allocate available but often heterogeneous resources for such large-scale computing efforts. At NASA GSFC, we are experimenting with such an approach by networking several clusters for geomagnetic data assimilation research. We shall present our initial testing results in the meeting.
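
    The ensemble cost quoted above is simple arithmetic; the one-line check below treats "pflops" as the abstract does, i.e. as a count of total floating-point operations, and assumes the stated 0.2 pflop per simulation.

```python
# numbers taken from the abstract; 'pflop' here is a count of operations
flops_per_run = 0.2e15     # one simulated magnetic free-decay time
ensemble_members = 30      # independent runs per assimilation analysis
total_flops = flops_per_run * ensemble_members  # cost of one analysis
```

    At 6e15 operations per analysis step, the motivation for pooling heterogeneous clusters over a fast network is clear.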

  16. Mechanically Cooled Large-Volume Germanium Detector Systems for Nuclear Explosion Monitoring DOENA27323-1

    SciTech Connect

    Hull, E.L.

    2006-07-28

    Compact maintenance free mechanical cooling systems are being developed to operate large volume germanium detectors for field applications. To accomplish this we are utilizing a newly available generation of Stirling-cycle mechanical coolers to operate the very largest volume germanium detectors with no maintenance. The user will be able to leave these systems unplugged on the shelf until needed. The flip of a switch will bring a system to life in ~ 1 hour for measurements. The maintenance-free operating lifetime of these detector systems will exceed 5 years. These features are necessary for remote long-duration liquid-nitrogen free deployment of large-volume germanium gamma-ray detector systems for Nuclear Explosion Monitoring. The Radionuclide Aerosol Sampler/Analyzer (RASA) will greatly benefit from the availability of such detectors by eliminating the need for liquid nitrogen at RASA sites while still allowing the very largest available germanium detectors to be reliably utilized.

  17. Sampling artifact in volume weighted velocity measurement. II. Detection in simulations and comparison with theoretical modeling

    NASA Astrophysics Data System (ADS)

    Zheng, Yi; Zhang, Pengjie; Jing, Yipeng

    2015-02-01

    Measuring the volume weighted velocity power spectrum suffers from a severe systematic error due to imperfect sampling of the velocity field from the inhomogeneous distribution of dark matter particles/halos in simulations or galaxies with velocity measurement. This "sampling artifact" depends on both the mean particle number density n̄_P and the intrinsic large scale structure (LSS) fluctuation in the particle distribution. (1) We report robust detection of this sampling artifact in N-body simulations. It causes ~12% underestimation of the velocity power spectrum at k = 0.1 h/Mpc for samples with n̄_P = 6×10^-3 (Mpc/h)^-3. This systematic underestimation increases with decreasing n̄_P and increasing k. Its dependence on the intrinsic LSS fluctuations is also robustly detected. (2) All of these findings are expected based upon our theoretical modeling in paper I [P. Zhang, Y. Zheng, and Y. Jing, Sampling artifact in volume weighted velocity measurement. I. Theoretical modeling, arXiv:1405.7125]. In particular, the leading order theoretical approximation agrees quantitatively well with the simulation result for n̄_P ≳ 6×10^-4 (Mpc/h)^-3. Furthermore, we provide an ansatz to take high order terms into account. It improves the model accuracy to ≲1% at k ≲ 0.1 h/Mpc over 3 orders of magnitude in n̄_P and over typical LSS clustering from z = 0 to z = 2. (3) The sampling artifact is determined by the deflection D field, which is straightforwardly available in both simulations and data of galaxy velocity. Hence the sampling artifact in the velocity power spectrum measurement can be self-calibrated within our framework. By applying such self-calibration in simulations, it is promising to determine the real large scale velocity bias of 10^13 M⊙ halos with ~1% accuracy, and that of lower mass halos with better accuracy. (4) In contrast to suppressing the velocity power spectrum at large scale, the sampling artifact causes an overestimation of the velocity

  18. Earthquake Clustering and Triggering of Large Events in Simulated Catalogs

    NASA Astrophysics Data System (ADS)

    Gilchrist, J. J.; Dieterich, J. H.; Richards-Dinger, K. B.; Xu, H.

    2013-12-01

    We investigate large event clusters (e.g. earthquake doublets and triplets) wherein secondary events in a cluster are triggered by stress transfer from previous events. We employ the 3D boundary element code RSQSim with a California fault model to generate synthetic catalogs spanning from tens of thousands up to a million years. The simulations incorporate rate-state fault constitutive properties, and the catalogs include foreshocks, aftershocks and occasional clusters of large events. Here we define a large event cluster as two or more M≥7 events within a few years. Most clustered events are closely grouped in space as well as time. Large event clusters show highly productive aftershock sequences where the aftershock locations of the first event in a cluster appear to correlate with the location of the next large event in the cluster. We find that the aftershock productivity of the first events in large event clusters is roughly double that of the unrelated, non-clustered events and that aftershock rate is a proxy for the stress state of the faults. The aftershocks of the first event in a large-event cluster migrate toward the point of nucleation of the next event in a large-event cluster. Furthermore, following a normal aftershock sequence, the average event rate increases prior to the second event in a large-event cluster. These increased event rates prior to the second event in a cluster follow an inverse Omori's law, which is characteristic of foreshocks. Clustering probabilities based on aftershock rates are higher than expected from Omori aftershock and Gutenberg-Richter magnitude frequency laws, which suggests that the high aftershock rates indicate near-critical stresses for failure in a large earthquake.

  19. Development of a Solid Phase Extraction Method for Agricultural Pesticides in Large-Volume Water Samples

    EPA Science Inventory

    An analytical method using solid phase extraction (SPE) and analysis by gas chromatography/mass spectrometry (GC/MS) was developed for the trace determination of a variety of agricultural pesticides and selected transformation products in large-volume high-elevation lake water sa...

  20. Alternatives for Reorganizing Large Urban Unified School Districts. Volume 2: Appendixes.

    ERIC Educational Resources Information Center

    Little (Arthur D.), Inc., Cambridge, MA.

    This second volume of the report to California State Legislature's Joint Committee on Reorganization of Large Urban Unified School Districts includes the results of the several discreet research tasks carried out in the course of the study. It comprises the data base from which most of the conclusions and recommendations are derived. (For complete…

  1. A New Electropositive Filter for Concentrating Enterovirus and Norovirus from Large Volumes of Water - MCEARD

    EPA Science Inventory

    The detection of enteric viruses in environmental water usually requires the concentration of viruses from large volumes of water. The 1MDS electropositive filter is commonly used for concentrating enteric viruses from water but unfortunately these filters are not cost-effective...

  2. Large eddy simulation and its implementation in the COMMIX code.

    SciTech Connect

    Sun, J.; Yu, D.-H.

    1999-02-15

    Large eddy simulation (LES) is a numerical simulation method for turbulent flows and is derived by spatial averaging of the Navier-Stokes equations. In contrast with the Reynolds-averaged Navier-Stokes equations (RANS) method, LES is capable of calculating transient turbulent flows with greater accuracy. Application of LES to differing flows has given very encouraging results, as reported in the literature. In recent years, a dynamic LES model that presented even better results was proposed and applied to several flows. This report reviews the LES method and its implementation in the COMMIX code, which was developed at Argonne National Laboratory. As an example of the application of LES, the flow around a square prism is simulated, and some numerical results are presented. These results include a three-dimensional simulation that uses a code developed by one of the authors at the University of Notre Dame, and a two-dimensional simulation that uses the COMMIX code. The numerical results are compared with experimental data from the literature and are found to be in very good agreement.

  3. Mesoscale and Large-Eddy Simulations for Wind Energy

    SciTech Connect

    Marjanovic, N

    2011-02-22

    Operational wind power forecasting, turbine micrositing, and turbine design require high-resolution simulations of atmospheric flow over complex terrain. The use of both Reynolds-averaged Navier-Stokes (RANS) and large-eddy simulation (LES) approaches is explored for wind energy applications using the Weather Research and Forecasting (WRF) model. To adequately resolve terrain and turbulence in the atmospheric boundary layer, grid nesting is used to refine the grid from mesoscale to finer scales. This paper examines the performance of the grid nesting configuration, turbulence closures, and resolution (up to as fine as 100 m horizontal spacing) for simulations of synoptically and locally driven wind ramping events at a West Coast North American wind farm. Interestingly, little improvement is found when using higher resolution simulations or better resolved turbulence closures in comparison to observation data available for this particular site. This is true for week-long simulations as well, where finer resolution runs show only small changes in the distribution of wind speeds or turbulence intensities. It appears that the relatively simple topography of this site is adequately resolved by all model grids (even as coarse as 2.7 km), so that all resolutions are able to model the physics at similar accuracy. The accuracy of the results is shown in this paper to be more dependent on the parameterization of land-surface characteristics such as soil moisture than on grid resolution.

  4. Shuttle mission simulator baseline definition report, volume 1

    NASA Technical Reports Server (NTRS)

    Burke, J. F.; Small, D. E.

    1973-01-01

    A baseline definition of the space shuttle mission simulator is presented. The subjects discussed are: (1) physical arrangement of the complete simulator system in the appropriate facility, with a definition of the required facility modifications, (2) functional descriptions of all hardware units, including the operational features, data demands, and facility interfaces, (3) hardware features necessary to integrate the items into a baseline simulator system to include the rationale for selecting the chosen implementation, and (4) operating, maintenance, and configuration updating characteristics of the simulator hardware.

  5. Large eddy simulation of a high aspect ratio combustor

    NASA Astrophysics Data System (ADS)

    Kirtas, Mehmet

    The present research investigates the details of mixture preparation and combustion in a two-stroke, small-scale research engine with a numerical methodology based on the large eddy simulation (LES) technique. A major motivation to study such small-scale engines is their potential use in applications requiring portable power sources with high power density. The investigated research engine has a rectangular planform with a thickness very close to the quenching limits of typical hydrocarbon fuels. As such, the combustor has a high aspect ratio (defined as the ratio of surface area to volume) that makes it different from conventional engines, which typically have small aspect ratios to avoid intense heat losses from the combustor in the bulk flame propagation period. In most other aspects, this engine involves all the main characteristics of traditional reciprocating engines. A previous experimental work identified some major design problems and demonstrated the feasibility of cyclic combustion in the high aspect ratio combustor. Because of the difficulty of carrying out experimental studies in such small devices, resolving all flow structures and completely characterizing the flame propagation have been enormously challenging tasks. The numerical methodology developed in this work attempts to complement these previous studies by providing a complete evolution of flow variables. Results of the present study demonstrated the strengths of the proposed methodology in revealing physical processes occurring in a typical operation of the high aspect ratio combustor. For example, in the scavenging phase, the dominant flow structure is a tumble vortex that forms due to the high-velocity (premixed) reactant jet interacting with the walls of the combustor. Since the scavenging phase is a long process (about three quarters of the whole cycle), the impact of the vortex is substantial on mixture preparation for the next combustion phase. LES gives the complete evolution of this flow

  6. Evaluation of Bacillus oleronius as a Biological Indicator for Terminal Sterilization of Large-Volume Parenterals.

    PubMed

    Izumi, Masamitsu; Fujifuru, Masato; Okada, Aki; Takai, Katsuya; Takahashi, Kazuhiro; Udagawa, Takeshi; Miyake, Makoto; Naruyama, Shintaro; Tokuda, Hiroshi; Nishioka, Goro; Yoden, Hikaru; Aoki, Mitsuo

    2016-01-01

    In the production of large-volume parenterals in Japan, equipment and devices such as tanks, pipework, and filters used in production processes are exhaustively cleaned and sterilized, and the cleanliness of water for injection, drug materials, packaging materials, and manufacturing areas is well controlled. In this environment, the bioburden is relatively low, and less heat resistant compared with microorganisms frequently used as biological indicators such as Geobacillus stearothermophilus (ATCC 7953) and Bacillus subtilis 5230 (ATCC 35021). Consequently, the majority of large-volume parenteral solutions in Japan are manufactured under low-heat sterilization conditions of F0 <2 min, so that loss of clarity of solutions and formation of degradation products of constituents are minimized. Bacillus oleronius (ATCC 700005) is listed as a biological indicator in "Guidance on the Manufacture of Sterile Pharmaceutical Products Produced by Terminal Sterilization" (guidance in Japan, issued in 2012). In this study, we investigated whether B. oleronius is an appropriate biological indicator of the efficacy of low-heat, moist-heat sterilization of large-volume parenterals. Specifically, we investigated the spore-forming ability of this microorganism in various cultivation media and measured the D-values and z-values as parameters of heat resistance. The D-values and z-values changed depending on the constituents of large-volume parenteral products. Also, the spores from B. oleronius showed a moist-heat resistance that was similar to or greater than many of the spore-forming organisms isolated from Japanese parenteral manufacturing processes. Taken together, these results indicate that B. oleronius is suitable as a biological indicator for sterility assurance of large-volume parenteral solutions subjected to low-heat, moist-heat terminal sterilization. PMID:26889054
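
The D-values, z-values, and F0 mentioned above follow standard moist-heat sterilization kinetics: D is the time for a 1-log reduction at a reference temperature, z is the temperature change that shifts D by a factor of ten, and F0 accumulates equivalent minutes at 121.1 °C. The sketch below is an illustrative calculation only; the D121, z, and temperature-profile values are hypothetical and not taken from the study.

```python
def log_reduction(exposure_min, d_value_min):
    """Log10 reduction in viable spores after exposure at the reference temperature."""
    return exposure_min / d_value_min

def d_value_at(temp_c, d_ref_min, t_ref_c, z_value_c):
    """Shift a D-value away from its reference temperature using the z-value."""
    return d_ref_min * 10 ** ((t_ref_c - temp_c) / z_value_c)

def f0(temp_profile_c, dt_min, z_value_c=10.0):
    """F0: equivalent minutes at 121.1 degC accumulated over a temperature profile."""
    return sum(10 ** ((t - 121.1) / z_value_c) * dt_min for t in temp_profile_c)

# Hypothetical spore: D121 = 0.5 min, z = 10 degC
d115 = d_value_at(115.0, d_ref_min=0.5, t_ref_c=121.1, z_value_c=10.0)
print(round(d115, 2))  # resistance (D-value) grows as temperature drops

# A low-heat cycle (one reading per minute) stays well under the F0 = 2 min mark
profile = [110.0, 112.0, 115.0, 115.0, 112.0]
print(round(f0(profile, dt_min=1.0), 3))
```

This is why indicator organisms matched to the actual bioburden matter: at F0 < 2 min, even modest differences in D and z change the delivered log reduction substantially.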

  8. Exposing earth surface process model simulations to a large audience

    NASA Astrophysics Data System (ADS)

    Overeem, I.; Kettner, A. J.; Borkowski, L.; Russell, E. L.; Peddicord, H.

    2015-12-01

    The Community Surface Dynamics Modeling System (CSDMS) represents a diverse group of >1300 scientists who develop and apply numerical models to better understand the Earth's surface. CSDMS has a mandate to make the public more aware of model capabilities and therefore started sharing state-of-the-art surface process modeling results with large audiences. One platform to reach audiences outside the science community is through museum displays on 'Science on a Sphere' (SOS). Developed by NOAA, SOS is a giant globe, linked with computers and multiple projectors, that can display data and animations on a sphere. CSDMS has developed and contributed model simulation datasets for the SOS system since 2014, including hydrological processes, coastal processes, and human interactions with the environment. Model simulations of a hydrological and sediment transport model (WBM-SED) illustrate global river discharge patterns. WAVEWATCH III simulations have been specifically processed to show the impacts of hurricanes on ocean waves, with focus on hurricane Katrina and super storm Sandy. A large world dataset of dams built over the last two centuries gives an impression of the profound influence of humans on water management. Given the exposure of SOS, CSDMS aims to contribute at least 2 model datasets a year, and will soon provide displays of global river sediment fluxes and changes of the sea ice free season along the Arctic coast. Over 100 facilities worldwide show these numerical model displays to an estimated 33 million people every year. Datasets, storyboards, and teacher follow-up materials associated with the simulations are developed to address common core science K-12 standards. CSDMS dataset documentation aims to make people aware of the fact that they look at numerical model results, that underlying models have inherent assumptions and simplifications, and that limitations are known. CSDMS contributions aim to familiarize large audiences with the use of numerical

  9. Large-scale simulations of layered double hydroxide nanocomposite materials

    NASA Astrophysics Data System (ADS)

    Thyveetil, Mary-Ann

    Layered double hydroxides (LDHs) have the ability to intercalate a multitude of anionic species. Atomistic simulation techniques such as molecular dynamics have provided considerable insight into the behaviour of these materials. We review these techniques and recent algorithmic advances which considerably improve the performance of MD applications. In particular, we discuss how the advent of high performance computing and computational grids has allowed us to explore large scale models with considerable ease. Our simulations have been heavily reliant on computational resources on the UK's NGS (National Grid Service), the US TeraGrid and the Distributed European Infrastructure for Supercomputing Applications (DEISA). In order to utilise computational grids we rely on grid middleware to launch, computationally steer and visualise our simulations. We have integrated the RealityGrid steering library into the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS), which has enabled us to perform remote computational steering and visualisation of molecular dynamics simulations on grid infrastructures. We also use the Application Hosting Environment (AHE) in order to launch simulations on remote supercomputing resources, and we show that data transfer rates between local clusters and supercomputing resources can be considerably enhanced by using optically switched networks. We perform large scale molecular dynamics simulations of Mg2Al-LDHs intercalated with either chloride ions or a mixture of DNA and chloride ions. The systems exhibit undulatory modes, which are suppressed in smaller scale simulations, caused by the collective thermal motion of atoms in the LDH layers. Thermal undulations provide elastic properties of the system including the bending modulus, Young's moduli and Poisson's ratios. To explore the interaction between LDHs and DNA, we use molecular dynamics techniques to perform simulations of double-stranded, linear and plasmid DNA up

  10. The terminal area simulation system. Volume 2: Verification cases

    NASA Technical Reports Server (NTRS)

    Proctor, F. H.

    1987-01-01

    Numerical simulations of five case studies are presented and compared with available data in order to verify the three-dimensional version of the Terminal Area Simulation System (TASS). A spectrum of convective storm types is selected for the case studies. Included are: a High-Plains supercell hailstorm, a small and relatively short-lived High-Plains cumulonimbus, a convective storm which produced the 2 August 1985 DFW microburst, a South Florida convective complex, and a tornadic Oklahoma thunderstorm. For each of the cases the model results compared reasonably well with observed data. In the simulations of the supercell storms many of their characteristic features were modeled, such as the hook echo, BWER, mesocyclone, gust fronts, giant persistent updraft, wall cloud, flanking-line towers, anvil and radar reflectivity overhang, and rightward veering in the storm propagation. In the simulation of the tornadic storm a horseshoe-shaped updraft configuration and cyclic changes in storm intensity and structure were noted. The simulation of the DFW microburst agreed remarkably well with sparse observed data. The simulated outflow rapidly expanded in a nearly symmetrical pattern and was associated with a ring vortex. A South Florida convective complex was simulated and contained updrafts and downdrafts in the form of discrete bubbles. The numerical simulations, in all cases, always remained stable and bounded with no anomalous trends.

  11. Parallel continuous simulated tempering and its applications in large-scale molecular simulations

    SciTech Connect

    Zang, Tianwu; Yu, Linglin; Zhang, Chong; Ma, Jianpeng

    2014-07-28

    In this paper, we introduce a parallel continuous simulated tempering (PCST) method for enhanced sampling in studying large complex systems. It mainly inherits the continuous simulated tempering (CST) method in our previous studies [C. Zhang and J. Ma, J. Chem. Phys. 130, 194112 (2009); C. Zhang and J. Ma, J. Chem. Phys. 132, 244101 (2010)], while adopting the spirit of parallel tempering (PT), or the replica exchange method, by employing multiple copies with different temperature distributions. Differing from conventional PT methods, despite the large stride of the total temperature range, the PCST method requires very few copies of simulations, typically 2–3 copies, yet it is still capable of maintaining a high rate of exchange between neighboring copies. Furthermore, in the PCST method, the size of the system does not dramatically affect the number of copies needed because the exchange rate is independent of total potential energy, thus providing an enormous advantage over conventional PT methods in studying very large systems. The sampling efficiency of PCST was tested on the two-dimensional Ising model, a Lennard-Jones liquid, and an all-atom folding simulation of the small globular protein trp-cage in explicit solvent. The results demonstrate that the PCST method significantly improves sampling efficiency compared with other methods and is particularly effective in simulating systems with long relaxation or correlation times. We expect the PCST method to be a good alternative to parallel tempering methods in simulating large systems such as phase transitions and the dynamics of macromolecules in explicit solvent.
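
For context on why conventional PT needs many closely spaced replicas while PCST does not, the sketch below evaluates the standard replica-exchange Metropolis swap criterion on a toy system with Gaussian-distributed energies. The energy model, gap coefficient, and temperatures are illustrative assumptions, not parameters from the paper.

```python
import math
import random

def pt_swap_acceptance(beta_i, beta_j, e_i, e_j):
    """Metropolis criterion for swapping configurations between two replicas
    in conventional parallel tempering (replica exchange)."""
    return min(1.0, math.exp((beta_i - beta_j) * (e_i - e_j)))

def mean_acceptance(n, beta_i=1.00, beta_j=0.95, trials=5000):
    """Average swap acceptance for a toy n-particle system: the mean energy gap
    between neighboring temperatures grows ~n while fluctuations grow ~sqrt(n),
    so acceptance collapses for large systems unless temperatures are packed
    ever closer (the cost an energy-independent exchange rate avoids)."""
    gap = 0.16 * n  # assumed mean energy gap between the two temperatures
    acc = 0.0
    for _ in range(trials):
        e_i = random.gauss(-3.0 * n, math.sqrt(2.0 * n))
        e_j = random.gauss(-3.0 * n + gap, math.sqrt(2.0 * n))
        acc += pt_swap_acceptance(beta_i, beta_j, e_i, e_j)
    return acc / trials

random.seed(0)
a_small = mean_acceptance(100)
a_large = mean_acceptance(10000)
print(a_small > a_large)  # larger system -> far lower exchange rate
```

The collapse of `a_large` toward zero is the conventional-PT scaling problem the abstract refers to; because the PCST exchange rate is independent of total potential energy, its copy count does not have to grow with system size.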

  12. Parallel continuous simulated tempering and its applications in large-scale molecular simulations

    PubMed Central

    Zang, Tianwu; Yu, Linglin; Zhang, Chong; Ma, Jianpeng

    2014-01-01

    In this paper, we introduce a parallel continuous simulated tempering (PCST) method for enhanced sampling in studying large complex systems. It mainly inherits the continuous simulated tempering (CST) method in our previous studies [C. Zhang and J. Ma, J. Chem. Phys. 130, 194112 (2009); C. Zhang and J. Ma, J. Chem. Phys. 132, 244101 (2010)], while adopting the spirit of parallel tempering (PT), or the replica exchange method, by employing multiple copies with different temperature distributions. Differing from conventional PT methods, despite the large stride of the total temperature range, the PCST method requires very few copies of simulations, typically 2–3 copies, yet it is still capable of maintaining a high rate of exchange between neighboring copies. Furthermore, in the PCST method, the size of the system does not dramatically affect the number of copies needed because the exchange rate is independent of total potential energy, thus providing an enormous advantage over conventional PT methods in studying very large systems. The sampling efficiency of PCST was tested on the two-dimensional Ising model, a Lennard-Jones liquid, and an all-atom folding simulation of the small globular protein trp-cage in explicit solvent. The results demonstrate that the PCST method significantly improves sampling efficiency compared with other methods and is particularly effective in simulating systems with long relaxation or correlation times. We expect the PCST method to be a good alternative to parallel tempering methods in simulating large systems such as phase transitions and the dynamics of macromolecules in explicit solvent. PMID:25084887

  13. Parallel continuous simulated tempering and its applications in large-scale molecular simulations

    NASA Astrophysics Data System (ADS)

    Zang, Tianwu; Yu, Linglin; Zhang, Chong; Ma, Jianpeng

    2014-07-01

    In this paper, we introduce a parallel continuous simulated tempering (PCST) method for enhanced sampling in studying large complex systems. It mainly inherits the continuous simulated tempering (CST) method in our previous studies [C. Zhang and J. Ma, J. Chem. Phys. 130, 194112 (2009); C. Zhang and J. Ma, J. Chem. Phys. 132, 244101 (2010)], while adopting the spirit of parallel tempering (PT), or the replica exchange method, by employing multiple copies with different temperature distributions. Differing from conventional PT methods, despite the large stride of the total temperature range, the PCST method requires very few copies of simulations, typically 2-3 copies, yet it is still capable of maintaining a high rate of exchange between neighboring copies. Furthermore, in the PCST method, the size of the system does not dramatically affect the number of copies needed because the exchange rate is independent of total potential energy, thus providing an enormous advantage over conventional PT methods in studying very large systems. The sampling efficiency of PCST was tested on the two-dimensional Ising model, a Lennard-Jones liquid, and an all-atom folding simulation of the small globular protein trp-cage in explicit solvent. The results demonstrate that the PCST method significantly improves sampling efficiency compared with other methods and is particularly effective in simulating systems with long relaxation or correlation times. We expect the PCST method to be a good alternative to parallel tempering methods in simulating large systems such as phase transitions and the dynamics of macromolecules in explicit solvent.

  14. BASIC Simulation Programs; Volumes III and IV. Mathematics, Physics.

    ERIC Educational Resources Information Center

    Digital Equipment Corp., Maynard, MA.

    The computer programs presented here were developed as a part of the Huntington Computer Project. They were tested on a Digital Equipment Corporation TSS-8 time-shared computer and run in a version of BASIC. Mathematics and physics programs are presented in this volume. The 20 mathematics programs include ones which review multiplication skills;…

  15. The large volume radiometric calorimeter system: A transportable device to measure scrap category plutonium

    SciTech Connect

    Duff, M.F.; Wetzel, J.R.; Breakall, K.L.; Lemming, J.F.

    1987-01-01

    An innovative design concept has been used to design a large volume calorimeter system. The new design permits two measuring cells to fit in a compact, nonevaporative environmental bath. The system is mounted on a cart for transportability. Samples in the power range of 0.50 to 12.0 W can be measured. The calorimeters will receive samples as large as 22.0 cm in diameter by 43.2 cm high, and smaller samples can be measured without lengthening measurement time or increasing measurement error by using specially designed sleeve adapters. This paper describes the design considerations, construction, theory, applications, and performance of the large volume calorimeter system. 2 refs., 5 figs., 1 tab.

  16. Tool Support for Parametric Analysis of Large Software Simulation Systems

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Gundy-Burlet, Karen; Pasareanu, Corina; Menzies, Tim; Barrett, Tony

    2008-01-01

    The analysis of large and complex parameterized software systems, e.g., systems simulation in aerospace, is very complicated and time-consuming due to the large parameter space, and the complex, highly coupled nonlinear nature of the different system components. Thus, such systems are generally validated only in regions local to anticipated operating points rather than through characterization of the entire feasible operational envelope of the system. We have addressed the factors deterring such an analysis with a tool to support envelope assessment: we utilize a combination of advanced Monte Carlo generation with n-factor combinatorial parameter variations to limit the number of cases, but still explore important interactions in the parameter space in a systematic fashion. Additional test-cases, automatically generated from models (e.g., UML, Simulink, Stateflow) improve the coverage. The distributed test runs of the software system produce vast amounts of data, making manual analysis impossible. Our tool automatically analyzes the generated data through a combination of unsupervised Bayesian clustering techniques (AutoBayes) and supervised learning of critical parameter ranges using the treatment learner TAR3. The tool has been developed around the Trick simulation environment, which is widely used within NASA. We will present this tool with a GN&C (Guidance, Navigation and Control) simulation of a small satellite system.
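
The n-factor combinatorial parameter variation described above can be illustrated with a minimal greedy pairwise (n = 2) generator: every pair of parameter levels is covered by at least one test case, using far fewer cases than the full factorial. The parameter names and levels below are hypothetical, and this is a sketch of the technique, not the tool's actual implementation.

```python
from itertools import combinations, product

def pairwise_suite(factors):
    """Greedy construction of a 2-factor (pairwise) covering test suite.

    factors: dict mapping parameter name -> list of levels.
    Returns a list of test cases (dicts) covering every pair of levels."""
    names = list(factors)
    uncovered = set()
    for a, b in combinations(names, 2):
        for va, vb in product(factors[a], factors[b]):
            uncovered.add((a, va, b, vb))
    suite = []
    while uncovered:
        # Pick the full-factorial case covering the most still-uncovered pairs.
        best, best_gain = None, -1
        for levels in product(*(factors[n] for n in names)):
            case = dict(zip(names, levels))
            gain = sum(1 for a, va, b, vb in uncovered
                       if case[a] == va and case[b] == vb)
            if gain > best_gain:
                best, best_gain = case, gain
        suite.append(best)
        uncovered = {(a, va, b, vb) for a, va, b, vb in uncovered
                     if not (best[a] == va and best[b] == vb)}
    return suite

# Hypothetical simulation parameters (not from the actual GN&C model)
params = {"thrust": [0.5, 1.0], "mass": [100, 200, 300], "mode": ["A", "B"]}
suite = pairwise_suite(params)
print(len(suite), "cases instead of", 2 * 3 * 2, "exhaustive")
```

For realistically sized parameter spaces the savings are dramatic, which is what makes combinatorial variation tractable alongside Monte Carlo generation; production tools use more scalable constructions than this exhaustive greedy pass.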

  17. Large-scale lattice-Boltzmann simulations over lambda networks

    NASA Astrophysics Data System (ADS)

    Saksena, R.; Coveney, P. V.; Pinning, R.; Booth, S.

    Amphiphilic molecules are of immense industrial importance, mainly due to their tendency to align at interfaces in a solution of immiscible species, e.g., oil and water, thereby reducing surface tension. Depending on the concentration of amphiphiles in the solution, they may assemble into a variety of morphologies, such as lamellae, micelles, sponge and cubic bicontinuous structures exhibiting non-trivial rheological properties. The main objective of this work is to study the rheological properties of very large, defect-containing gyroidal systems (of up to 1024³ lattice sites) using the lattice-Boltzmann method. Memory requirements for the simulation of such large lattices exceed that available to us on most supercomputers and so we use MPICH-G2/MPIg to investigate geographically distributed domain decomposition simulations across HPCx in the UK and TeraGrid in the US. Use of MPICH-G2/MPIg requires the port-forwarder to work with the grid middleware on HPCx. Data from the simulations is streamed to a high performance visualisation resource at UCL (London) for rendering and visualisation. Lighting the Blue Touchpaper for UK e-Science - Closing Conference of ESLEA Project March 26-28 2007 The George Hotel, Edinburgh, UK
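
A back-of-envelope estimate shows why lattices of this size exceed single-machine memory and force distributed domain decomposition. The sketch assumes a D3Q19 lattice model with double-buffered, double-precision distribution functions; the abstract does not state these details, so the constants are illustrative.

```python
def lbm_memory_gib(n_side, n_velocities=19, bytes_per_value=8, buffers=2):
    """Memory (GiB) for the distribution functions of an n_side^3 lattice,
    assuming a D3Q19 model, double precision, and double buffering."""
    return n_side ** 3 * n_velocities * bytes_per_value * buffers / 2 ** 30

print(lbm_memory_gib(1024))  # hundreds of GiB: must be decomposed across sites
print(lbm_memory_gib(256))   # a few GiB: fits on a single machine
```

Velocity, force, and boundary fields add further overhead on top of the distribution functions, so the true footprint is larger still.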

  18. Statistical Modeling of Large-Scale Simulation Data

    SciTech Connect

    Eliassi-Rad, T; Critchlow, T; Abdulla, G

    2002-02-22

    With the advent of fast computer systems, scientists are now able to generate terabytes of simulation data. Unfortunately, the sheer size of these data sets has made efficient exploration of them impossible. To aid scientists in gathering knowledge from their simulation data, we have developed an ad-hoc query infrastructure. Our system, called AQSim (short for Ad-hoc Queries for Simulation), reduces the data storage requirements and access times in two stages. First, it creates and stores mathematical and statistical models of the data. Second, it evaluates queries on the models of the data instead of on the entire data set. In this paper, we present two simple but highly effective statistical modeling techniques for simulation data. Our first modeling technique computes the true mean of systematic partitions of the data. It makes no assumptions about the distribution of the data and uses a variant of the root mean square error to evaluate a model. In our second statistical modeling technique, we use the Anderson-Darling goodness-of-fit method on systematic partitions of the data. This second method evaluates a model by how well it passes the normality test on the data. Both of our statistical models summarize the data so as to answer range queries in the most effective way. We calculate precision on an answer to a query by scaling the one-sided Chebyshev Inequalities with the original mesh's topology. Our experimental evaluations on two scientific simulation data sets illustrate the value of using these statistical modeling techniques on large simulation data sets.
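
The first technique, partition means queried in place of raw data, with precision from a one-sided Chebyshev bound, can be sketched as follows. This is a simplified illustration, not the actual AQSim code, and it omits the mesh-topology scaling of the bound.

```python
import statistics

def build_models(data, n_parts):
    """Systematic partitioning: keep only (mean, pstdev, count) per partition."""
    size = len(data) // n_parts
    models = []
    for i in range(n_parts):
        lo = i * size
        hi = len(data) if i == n_parts - 1 else lo + size
        chunk = data[lo:hi]
        models.append((statistics.mean(chunk), statistics.pstdev(chunk),
                       len(chunk)))
    return models

def query_mean(models):
    """Answer a global-mean query from the models instead of the raw data."""
    total = sum(c for _, _, c in models)
    return sum(m * c for m, _, c in models) / total

def chebyshev_upper_tail(mean, std, threshold):
    """One-sided Chebyshev bound: max fraction of a partition's values
    that can exceed threshold, with no distributional assumptions."""
    if threshold <= mean:
        return 1.0
    k = (threshold - mean) / std
    return 1.0 / (1.0 + k * k)

data = list(range(100))
models = build_models(data, 4)
print(query_mean(models))  # exact global mean: 49.5
print(chebyshev_upper_tail(models[0][0], models[0][1], threshold=40))
```

Storing a handful of summary statistics per partition is what lets range queries run against models rather than terabytes of raw output, with the Chebyshev bound quantifying how much an answer can be trusted.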

  19. Large meteoroid's impact damage: review of available impact hazard simulators

    NASA Astrophysics Data System (ADS)

    Moreno-Ibáñez, M.; Gritsevich, M.; Trigo-Rodríguez, J. M.

    2016-01-01

    The damage caused by meter-sized meteoroids encountering the Earth is expected to be severe. Meter-sized objects in heliocentric orbits can release energies higher than 10⁸ J either in the upper atmosphere through an energetic airblast or, if reaching the surface, their impact may create a crater, provoke an earthquake or trigger a tsunami. A limited variety of cases has been observed in the recent past (e.g. Tunguska, Carancas or Chelyabinsk). Hence, our knowledge has to be constrained with the help of theoretical studies and numerical simulations. There are several simulation programs which aim to forecast the impact consequences of such events. We have tested them using the recent case of the Chelyabinsk superbolide. Particularly, Chelyabinsk belongs to the ten- to hundred-meter-sized objects which constitute the main source of risk to Earth given the current difficulty in detecting them in advance. Furthermore, it was a well-documented case, thus allowing us to properly check the accuracy of the studied simulators. As we present, these open simulators provide a first approximation of the impact consequences. However, all of them fail to accurately determine the caused damage. We explain the observed discrepancies between the observed and simulated consequences with the following consideration. The large number of unknown properties of the potential impacting meteoroid, the atmospheric conditions, the flight dynamics and the uncertainty in the impact point itself hinder any modelling task. This difficulty can be partially overcome by reducing the number of unknowns using dimensional analysis and scaling laws. Although the description of physical processes associated with atmospheric entry could still be further improved, we conclude that such an approach would significantly improve the efficiency of the simulators.

  20. Absorption and scattering coefficient dependence of laser-Doppler flowmetry models for large tissue volumes.

    PubMed

    Binzoni, T; Leung, T S; Rüfenacht, D; Delpy, D T

    2006-01-21

    Based on quasi-elastic scattering theory (and random walk on a lattice approach), a model of laser-Doppler flowmetry (LDF) has been derived which can be applied to measurements in large tissue volumes (e.g. when the interoptode distance is >30 mm). The model holds for a semi-infinite medium and takes into account the transport-corrected scattering coefficient and the absorption coefficient of the tissue, and the scattering coefficient of the red blood cells. The model holds for anisotropic scattering and for multiple scattering of the photons by the moving scatterers of finite size. In particular, it has also been possible to take into account the simultaneous presence of both Brownian and pure translational movements. An analytical and simplified version of the model has also been derived and its validity investigated, for the case of measurements in human skeletal muscle tissue. It is shown that at large optode spacing it is possible to use the simplified model, taking into account only a 'mean' light pathlength, to predict the blood flow related parameters. It is also demonstrated that the 'classical' blood volume parameter, derived from LDF instruments, may not represent the actual blood volume variations when the investigated tissue volume is large. The simplified model does not need knowledge of the tissue optical parameters and thus should allow the development of very simple and cost-effective LDF hardware.

  1. Large breast compressions: Observations and evaluation of simulations

    SciTech Connect

    Tanner, Christine; White, Mark; Guarino, Salvatore; Hall-Craggs, Margaret A.; Douek, Michael; Hawkes, David J.

    2011-02-15

    Purpose: Several methods have been proposed to simulate large breast compressions such as those occurring during x-ray mammography. However, the evaluation of these methods against real data is rare. The aim of this study is to learn more about the deformation behavior of breasts and to assess a simulation method. Methods: Magnetic resonance (MR) images of 11 breasts before and after applying a relatively large in vivo compression in the medial direction were acquired. Nonrigid registration was employed to study the deformation behavior. Optimal material properties for finite element modeling were determined and their prediction performance was assessed. The realism of simulated compressions was evaluated by comparing the breast shapes on simulated and real mammograms. Results: Following image registration, 19 breast compressions from 8 women were studied. An anisotropic deformation behavior, with a reduced elongation in the anterior-posterior direction and an increased stretch in the inferior-superior direction was observed. Using finite element simulations, the performance of isotropic and transverse isotropic material models to predict the displacement of internal landmarks was compared. Isotropic materials reduced the mean displacement error of the landmarks from 23.3 to 4.7 mm, on average, after optimizing material properties with respect to breast surface alignment and image similarity. Statistically significantly smaller errors were achieved with transverse isotropic materials (4.1 mm, P=0.0045). Homogeneous material models performed substantially worse (transverse isotropic: 5.5 mm; isotropic: 6.7 mm). Of the parameters varied, the amount of anisotropy had the greatest influence on the results. Optimal material properties varied less when grouped by patient rather than by compression magnitude (mean: 0.72 vs 1.44). Employing these optimal materials for simulating mammograms from ten MR breast images of a different cohort resulted in more realistic breast

  2. WEST-3 wind turbine simulator development. Volume 2: Verification

    NASA Technical Reports Server (NTRS)

    Sridhar, S.

    1985-01-01

    The details of a study to validate WEST-3, a new real-time wind turbine simulator developed by Paragon Pacific Inc., are presented in this report. For the validation, the MOD-0 wind turbine was simulated on WEST-3. The simulation results were compared with those obtained from previous MOD-0 simulations, and with test data measured during MOD-0 operations. The study was successful in achieving the major objective of proving that WEST-3 yields results which can be used to support a wind turbine development process. The blade bending moments, peak and cyclic, from the WEST-3 simulation correlated reasonably well with the available MOD-0 data. The simulation was also able to predict the resonance phenomena observed during MOD-0 operations. Also presented in the report is a description and solution of a serious numerical instability problem encountered during the study. The problem was caused by the coupling of the rotor and the power train models. The results of the study indicate that some parts of the existing WEST-3 simulation model may have to be refined for future work; specifically, the aerodynamics and the procedure used to couple the rotor model with the tower and the power train models.

  3. Pulsar Simulations for the Fermi Large Area Telescope

    NASA Technical Reports Server (NTRS)

    Razzano, M.; Harding, A. K.; Baldini, L.; Bellazzini, R.; Bregeon, J.; Burnett, T.; Chiang, J.; Digel, S. W.; Dubois, R.; Kuss, M. W.; Latronico, L.; McEnery, J. E.; Omodei, N.; Pesce-Rollins, M.; Sgro, C.; Spandre, G.; Thompson, D. J.

    2009-01-01

    Pulsars are among the prime targets for the Large Area Telescope (LAT) aboard the recently launched Fermi observatory. The LAT will study the gamma-ray Universe between 20 MeV and 300 GeV with unprecedented detail. Increasing numbers of gamma-ray pulsars are being firmly identified, yet their emission mechanisms are far from being understood. To better investigate and exploit the LAT capabilities for pulsar science, a set of new detailed pulsar simulation tools has been developed within the LAT collaboration. The structure of the pulsar simulator package (PulsarSpectrum) is presented here. Starting from photon distributions in energy and phase obtained from theoretical calculations or phenomenological considerations, gamma rays are generated and their arrival times at the spacecraft are determined by taking into account effects such as barycentric corrections and timing noise. Pulsars in binary systems can also be simulated given orbital parameters. We present how simulations can be used for generating a realistic set of gamma rays as observed by the LAT, focusing on some case studies that show the performance of the LAT for pulsar observations.

  4. Cryogenic Linear Ion Trap for Large-Scale Quantum Simulations

    NASA Astrophysics Data System (ADS)

    Pagano, Guido; Hess, Paul; Kaplan, Harvey; Birckelbaw, Eric; Hernanez, Micah; Lee, Aaron; Smith, Jake; Zhang, Jiehang; Monroe, Christopher

    2016-05-01

    Ions confined in RF Paul traps are a useful tool for quantum simulation of long-range spin-spin interaction models. As the system size increases, classical simulation methods become incapable of modeling the exponentially growing Hilbert space, necessitating quantum simulation for precise predictions. Current experiments are limited to fewer than 30 qubits by collisions with background gas that regularly destroy the ion crystal. We present progress toward the construction of a cryogenic ion trap apparatus, which uses differential cryopumping to reduce vacuum pressure to a level where collisions do not occur. This should allow robust trapping of about 100 ions/qubits in a single chain with long lifetimes. Such a long chain will provide a platform to investigate simultaneous cooling of various vibrational modes and will enable quantum simulations that outperform their classical counterparts. Our apparatus will provide a powerful test-bed to investigate a large variety of Hamiltonians, including spin-1 and spin-1/2 systems with Ising or XY interactions. This work is supported by the ARO Atomic Physics Program, the AFOSR MURI on Quantum Measurement and Verification, the IC Fellowship Program and the NSF Physics Frontier Center at JQI.

  5. Shuttle mission simulator requirements report, volume 1, revision C

    NASA Technical Reports Server (NTRS)

    Burke, J. F.

    1973-01-01

    The contractor tasks required to produce a shuttle mission simulator for training crew members and ground personnel are discussed. The tasks will consist of the design, development, production, installation, checkout, and field support of a simulator with two separate crew stations. The tasks include the following: (1) review of spacecraft changes and incorporation of appropriate changes in simulator hardware and software design, and (2) the generation of documentation of design, configuration management, and training used by maintenance and instructor personnel after acceptance for each of the crew stations.

  6. Feasibility study for a numerical aerodynamic simulation facility. Volume 1

    NASA Technical Reports Server (NTRS)

    Lincoln, N. R.; Bergman, R. O.; Bonstrom, D. B.; Brinkman, T. W.; Chiu, S. H. J.; Green, S. S.; Hansen, S. D.; Klein, D. L.; Krohn, H. E.; Prow, R. P.

    1979-01-01

    A Numerical Aerodynamic Simulation Facility (NASF) was designed for the simulation of fluid flow around three-dimensional bodies, both in wind tunnel environments and in free space. The application of numerical simulation to this field of endeavor promised to yield economies in aerodynamic and aircraft body designs. A model for a NASF/FMP (Flow Model Processor) ensemble using a possible approach to meeting NASF goals is presented. The computer hardware and software are presented, along with the entire design and performance analysis and evaluation.

  7. Large-eddy simulation of turbulent circular jet flows

    SciTech Connect

    Jones, S. C.; Sotiropoulos, F.; Sale, M. J.

    2002-07-01

    This report presents a numerical method for carrying out large-eddy simulations (LES) of turbulent free shear flows and an application of the method to simulate the flow generated by a nozzle discharging into a stagnant reservoir. The objective of the study was to elucidate the complex features of the instantaneous flow field to help interpret the results of recent biological experiments in which live fish were exposed to the jet shear zone. The fish-jet experiments were conducted at the Pacific Northwest National Laboratory (PNNL) under the auspices of the U.S. Department of Energy’s Advanced Hydropower Turbine Systems program. The experiments were designed to establish critical thresholds of shear and turbulence-induced loads to guide the development of innovative, fish-friendly hydropower turbine designs.

  8. Large-Eddy Simulation of Turbulent Wall-Pressure Fluctuations

    NASA Technical Reports Server (NTRS)

    Singer, Bart A.

    1996-01-01

    Large-eddy simulations of a turbulent boundary layer with Reynolds number based on displacement thickness equal to 3500 were performed with two grid resolutions. The computations were continued for sufficient time to obtain frequency spectra with resolved frequencies that correspond to the most important structural frequencies on an aircraft fuselage. The turbulent stresses were adequately resolved with both resolutions. Detailed quantitative analysis of a variety of statistical quantities associated with the wall-pressure fluctuations revealed similar behavior for both simulations. The primary differences were associated with the lack of resolution of the high-frequency data in the coarse-grid calculation and the increased jitter (due to the lack of multiple realizations for averaging purposes) in the fine-grid calculation. A new curve fit was introduced to represent the spanwise coherence of the cross-spectral density.

  9. Coalescent simulation in continuous space: algorithms for large neighbourhood size.

    PubMed

    Kelleher, J; Etheridge, A M; Barton, N H

    2014-08-01

    Many species have an essentially continuous distribution in space, in which there are no natural divisions between randomly mating subpopulations. Yet, the standard approach to modelling these populations is to impose an arbitrary grid of demes, adjusting deme sizes and migration rates in an attempt to capture the important features of the population. Such indirect methods are required because of the failure of the classical models of isolation by distance, which have been shown to have major technical flaws. A recently introduced model of extinction and recolonisation in two dimensions solves these technical problems, and provides a rigorous technical foundation for the study of populations evolving in a spatial continuum. The coalescent process for this model is simply stated, but direct simulation is very inefficient for large neighbourhood sizes. We present efficient and exact algorithms to simulate this coalescent process for arbitrary sample sizes and numbers of loci, and analyse these algorithms in detail. PMID:24910324

  10. Large Eddy Simulation of a Cavitating Multiphase Flow for Liquid Injection

    NASA Astrophysics Data System (ADS)

    Cailloux, M.; Helie, J.; Reveillon, J.; Demoulin, F. X.

    2015-12-01

    This paper presents a numerical method for modelling a compressible multiphase flow that involves phase transition between liquid and vapour in the context of gasoline injection. A discontinuous compressible two-fluid mixture based on the Volume of Fluid (VOF) implementation is employed to represent the liquid, vapour and air phases. The mass transfer between phases is modelled by standard models such as Kunz or Schnerr-Sauer, but including the presence of air in the gas phase. Turbulence is modelled using a Large Eddy Simulation (LES) approach to capture unsteady features and coherent structures. Ultimately, the modelling approach compares favourably with experimental data concerning the effect of cavitation on the atomisation process.
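
    The Schnerr-Sauer model mentioned above drives the liquid-vapour mass transfer with a simplified Rayleigh bubble-growth rate. The sketch below shows the commonly cited form of that rate; the function name, the assumed bubble number density n0, and the clamping of the vapour fraction are illustrative choices, not details taken from the paper.

```python
import math

def schnerr_sauer_rate(alpha, p, p_v, rho_l, rho_v, n0=1e13):
    """Schnerr-Sauer phase-change rate [kg/(m^3 s)] -- a sketch.

    alpha : vapour volume fraction
    p     : local pressure [Pa];  p_v : vapour pressure [Pa]
    rho_l, rho_v : liquid and vapour densities [kg/m^3]
    n0    : assumed bubble number density [1/m^3]
    Positive return value = evaporation, negative = condensation.
    """
    alpha = min(max(alpha, 1e-12), 1.0 - 1e-12)   # keep the radius finite
    rho = alpha * rho_v + (1.0 - alpha) * rho_l   # mixture density
    # Equivalent bubble radius implied by n0 bubbles per unit liquid volume
    r_b = (3.0 * alpha / (4.0 * math.pi * n0 * (1.0 - alpha))) ** (1.0 / 3.0)
    mag = (rho_v * rho_l / rho) * alpha * (1.0 - alpha) * (3.0 / r_b) \
          * math.sqrt(2.0 * abs(p - p_v) / (3.0 * rho_l))
    return mag if p < p_v else -mag   # evaporate below p_v, condense above
```

    In a VOF solver this rate would appear as the source term of the vapour-fraction transport equation; the paper additionally carries air as a third, non-condensable component.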

  11. Molecular Dynamics Simulations from SNL's Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS)

    DOE Data Explorer

    Plimpton, Steve; Thompson, Aidan; Crozier, Paul

    LAMMPS (http://lammps.sandia.gov/index.html) stands for Large-scale Atomic/Molecular Massively Parallel Simulator and is a code that can be used to model atoms or, as the LAMMPS website says, as a parallel particle simulator at the atomic, meso, or continuum scale. This Sandia-based website provides a long list of animations from large simulations. These were created using different visualization packages to read LAMMPS output, and each one provides the name of the PI and a brief description of the work done or visualization package used. See also the static images produced from simulations at http://lammps.sandia.gov/pictures.html. The foundation paper for LAMMPS is: S. Plimpton, Fast Parallel Algorithms for Short-Range Molecular Dynamics, J Comp Phys, 117, 1-19 (1995), but the website also lists other papers describing contributions to LAMMPS over the years.

  12. Large-scale kinetic simulation of the magnetosphere

    NASA Astrophysics Data System (ADS)

    Palmroth, Minna; Hoilijoki, Sanni; Pfau-Kempf, Yann; Hietala, Heli; Nishimura, Toshi; Angelopoulos, Vassilis; Pulkkinen, Tuija; Ganse, Urs; von Alfthan, Sebastian; Vainio, Rami

    2016-04-01

    Vlasiator is a newly developed, global hybrid-Vlasov simulation, which solves the six-dimensional phase space using the Vlasov equation for protons, while electrons are a charge-neutralising fluid. The outcome of the simulation is a global reproduction of ion-scale physics: Vlasiator produces the ion distribution functions and the related kinetic physics in unprecedented detail, at the global magnetospheric scale with the resolution required by kinetic physics. Here, we review the recent progress made in Vlasiator development, highlight the newest physical findings, and look forward to future challenges by presenting our upcoming new project awarded by the European Research Council. Specifically, we investigate the dayside-nightside coupling of magnetospheric dynamics. Here, we run Vlasiator in a five-dimensional (5D) setup, where ordinary space is represented by the 2D noon-midnight meridional plane, embedding in each grid cell the 3D velocity space. The simulation is run during steady southward interplanetary magnetic field. We observe dayside reconnection and the resulting 2D representations of flux transfer events (FTEs). On the nightside, the plasma sheet first shows slight density enhancements moving slowly earthward. Second, the tailward side of the dipolar field stretches. Strong reconnection initiates first in the near-Earth region, forming a tailward-moving magnetic island that cannibalises other islands forming further down the tail, increasing the island's volume and complexity. After this, several reconnection lines form again in the near-Earth region, resulting in several magnetic islands. We investigate this substorm process holistically as a result of dayside-nightside coupling. In particular, we concentrate on the role of the FTEs in the magnetospheric dynamics.

  13. Large Eddy Simulation of Vertical Axis Wind Turbine Wakes

    NASA Astrophysics Data System (ADS)

    Shamsoddin, Sina; Porté-Agel, Fernando

    2014-05-01

    In this study, large-eddy simulation (LES) is combined with a turbine model to investigate the wake behind a vertical-axis wind turbine (VAWT) in a three-dimensional turbulent flow. Two methods are used to model the subgrid-scale (SGS) stresses: (a) the Smagorinsky model, and (b) the modulated gradient model. To parameterize the effects of the VAWT on the flow, two VAWT models are developed: (a) the actuator surface model (ASM), in which the time-averaged turbine-induced forces are distributed on a surface swept by the turbine blades, i.e. the actuator surface, and (b) the actuator line model (ALM), in which the instantaneous blade forces are only spatially distributed on lines representing the blades, i.e. the actuator lines. This is the first time that LES is applied and validated for simulation of VAWT wakes by using either the ASM or the ALM techniques. In both models, blade-element theory is used to calculate the lift and drag forces on the blades. The results are compared with flow measurements in the wake of a model straight-bladed VAWT, carried out in the Institut de Mécanique Statistique de la Turbulence (IMST) water channel. Different combinations of SGS models with VAWT models are studied and a fairly good overall agreement between simulation results and measurement data is observed. In general, the ALM is found to better capture the unsteady-periodic nature of the wake and shows a better agreement with the experimental data compared with the ASM. The modulated gradient model is also found to be a more reliable SGS stress modeling technique, compared with the Smagorinsky model, and it yields reasonable predictions of the mean flow and turbulence characteristics of a VAWT wake using its theoretically-determined model coefficient. Keywords: Vertical-axis wind turbines (VAWTs); VAWT wake; Large-eddy simulation; Actuator surface model; Actuator line model; Smagorinsky model; Modulated gradient model
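
    The Smagorinsky SGS closure referenced above models the subgrid stresses through an eddy viscosity nu_t = (C_s * Delta)^2 * |S|, where |S| is the magnitude of the resolved strain-rate tensor. A minimal sketch of that evaluation at a single grid point, with an assumed constant C_s = 0.16 (the paper's models and coefficients are more elaborate):

```python
import numpy as np

def smagorinsky_nu_t(grad_u, delta, c_s=0.16):
    """Smagorinsky eddy viscosity nu_t = (C_s * Delta)^2 * |S| at one point.

    grad_u : (3, 3) array of resolved velocity gradients du_i/dx_j
    delta  : grid filter width
    c_s    : Smagorinsky constant (assumed value, flow-dependent in practice)
    """
    s = 0.5 * (grad_u + grad_u.T)          # resolved strain-rate tensor S_ij
    s_mag = np.sqrt(2.0 * np.sum(s * s))   # |S| = sqrt(2 S_ij S_ij)
    return (c_s * delta) ** 2 * s_mag
```

    The modulated gradient model mentioned in the abstract replaces this algebraic closure with one built from the filtered velocity-gradient tensor, which is why its coefficient can be fixed theoretically.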

  14. A survey of electric and hybrid vehicles simulation programs. Volume 2: Questionnaire responses

    NASA Technical Reports Server (NTRS)

    Bevan, J.; Heimburger, D. A.; Metcalfe, M. A.

    1978-01-01

    The data received in a survey conducted within the United States to determine the extent of development and capabilities of automotive performance simulation programs suitable for electric and hybrid vehicle studies are presented. The survey was conducted for the Department of Energy by NASA's Jet Propulsion Laboratory. Volume 1 of this report summarizes and discusses the results contained in Volume 2.

  15. Hydrothermal fluid flow and deformation in large calderas: Inferences from numerical simulations

    USGS Publications Warehouse

    Hurwitz, S.; Christiansen, L.B.; Hsieh, P.A.

    2007-01-01

    Inflation and deflation of large calderas is traditionally interpreted as being induced by volume change of a discrete source embedded in an elastic or viscoelastic half-space, though it has also been suggested that hydrothermal fluids may play a role. To test the latter hypothesis, we carry out numerical simulations of hydrothermal fluid flow and poroelastic deformation in calderas by coupling two numerical codes: (1) TOUGH2 [Pruess et al., 1999], which simulates flow in porous or fractured media, and (2) BIOT2 [Hsieh, 1996], which simulates fluid flow and deformation in a linearly elastic porous medium. In the simulations, high-temperature water (350??C) is injected at variable rates into a cylinder (radius 50 km, height 3-5 km). A sensitivity analysis indicates that small differences in the values of permeability and its anisotropy, the depth and rate of hydrothermal injection, and the values of the shear modulus may lead to significant variations in the magnitude, rate, and geometry of ground surface displacement, or uplift. Some of the simulated uplift rates are similar to observed uplift rates in large calderas, suggesting that the injection of aqueous fluids into the shallow crust may explain some of the deformation observed in calderas.

  16. Use of finite volume schemes for transition simulation

    NASA Technical Reports Server (NTRS)

    Fenno, Charles C., Jr.; Hassan, H. A.; Streett, Craig L.

    1991-01-01

    The use of finite-volume methods in the study of spatially and temporally evolving transitional flows over a flat plate is investigated. Schemes are developed with both central and upwind differencing. The compressible Navier-Stokes equations are solved with a Runge-Kutta time stepping scheme. Disturbances are determined using linear theory and superimposed at the inflow boundary. Time accurate integration is then used to allow temporal and spatial disturbance evolution. Characteristic-based boundary conditions are employed. The requirements of using finite-volume algorithms are studied in detail. Special emphasis is placed on difference schemes, grid resolution, and disturbance amplitudes. Moreover, comparisons are made with linear theory for small amplitude disturbances. Both subsonic and supersonic flows are considered, and it is shown that the locations of branch 1 and branch 2 of the neutral stability curve are well predicted, given sufficient resolution.
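
    Pairing a finite-volume spatial discretisation with Runge-Kutta time stepping, as described above, reduces to advancing a semi-discrete system du/dt = R(u). A generic sketch under simplifying assumptions (classical four-stage RK4 and a first-order upwind flux for scalar advection on a periodic grid; the paper's schemes are compressible Navier-Stokes with central and upwind variants):

```python
import numpy as np

def upwind_rhs(u, a=1.0, dx=1.0):
    """First-order upwind finite-volume residual for du/dt + a du/dx = 0
    (a > 0, periodic cells): -(F_{i+1/2} - F_{i-1/2}) / dx with F = a*u
    taken from the upwind cell."""
    return -a * (u - np.roll(u, 1)) / dx

def rk4_step(u, rhs, dt):
    """Classical four-stage Runge-Kutta step for du/dt = rhs(u)."""
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    return u + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
```

    Because the residual is written in flux-difference form, the scheme is discretely conservative: the cell sum of u is preserved exactly by every Runge-Kutta stage, which is one of the finite-volume properties the study exploits.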

  17. Contrail Formation in Aircraft Wakes Using Large-Eddy Simulations

    NASA Technical Reports Server (NTRS)

    Paoli, R.; Helie, J.; Poinsot, T. J.; Ghosal, S.

    2002-01-01

    In this work we analyze the formation of condensation trails ("contrails") in the near field of an aircraft wake. The basic configuration consists of an exhaust engine jet interacting with a wing-tip trailing vortex. The procedure adopted relies on a mixed Eulerian/Lagrangian two-phase flow approach; a simple microphysics model for ice growth has been used to couple the ice and vapor phases. Large-eddy simulations have been carried out at a realistic flight Reynolds number to evaluate the effects of turbulent mixing and wake vortex dynamics on ice-growth characteristics and vapor thermodynamic properties.

  18. Resonators for solid-state lasers with large-volume fundamental mode and high alignment stability

    SciTech Connect

    Magni, V.

    1986-01-01

    Resonators containing a focusing rod are thoroughly analyzed. It is shown that, as a function of the dioptric power of the rod, two stability zones of the same width exist and that the mode volume in the rod always presents a stationary point. At this point, the output power is insensitive to the focal length fluctuations, and the mode volume inside the rod is inversely proportional to the range of the input power for which the resonator is stable. The two zones are markedly different with respect to misalignment sensitivity, which is, in general, much greater in one zone than in the other. Two design procedures are presented for monomode solid-state laser resonators with large mode volume and low sensitivity both to focal length fluctuations and to misalignment.

  19. Parallel finite element simulation of large ram-air parachutes

    NASA Astrophysics Data System (ADS)

    Kalro, V.; Aliabadi, S.; Garrard, W.; Tezduyar, T.; Mittal, S.; Stein, K.

    1997-06-01

    In the near future, large ram-air parachutes are expected to provide the capability of delivering 21-ton payloads from altitudes as high as 25,000 ft. In the development, test, and evaluation of these parachutes, the size of the parachute needed and the deployment stages involved make high-performance computing (HPC) simulations a desirable alternative to costly airdrop tests. Although computational simulations based on realistic, 3D, time-dependent models will continue to be a major computational challenge, advanced finite element simulation techniques recently developed for this purpose and the execution of these techniques on HPC platforms are significant steps toward meeting this challenge. In this paper, two approaches for analysis of the inflation and gliding of ram-air parachutes are presented. In one approach the point-mass flight mechanics equations are solved with the time-varying drag and lift areas obtained from empirical data. This approach is limited to parachutes with configurations similar to those for which data are available. The other approach is 3D finite element computation based on the Navier-Stokes equations governing the airflow around the parachute canopy and Newton's law of motion governing the 3D dynamics of the canopy, with the forces acting on the canopy calculated from the simulated flow field. At the earlier stages of canopy inflation the parachute is modelled as an expanding box, whereas at the later stages, as it expands, the box transforms to a parafoil and glides. These finite element computations are carried out on the massively parallel supercomputers CRAY T3D and Thinking Machines CM-5, typically with millions of coupled, non-linear finite element equations solved simultaneously at every time step or pseudo-time step of the simulation.
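
    The first approach, point-mass flight mechanics with empirical drag and lift areas, amounts to integrating two coupled ODEs for airspeed and flight-path angle. A hedged sketch of those equations in planar, constant-density form (variable names and the simplified kinematics are assumptions, not the paper's exact formulation):

```python
import math

def point_mass_derivs(v, gamma, m, rho, s_cd, s_cl, g=9.81):
    """Planar point-mass glide equations with prescribed drag/lift areas.

    v     : airspeed [m/s]
    gamma : flight-path angle [rad], negative in descent
    m     : system mass [kg];  rho : air density [kg/m^3]
    s_cd, s_cl : drag area Cd*S and lift area Cl*S [m^2]
                 (time-varying, taken from empirical data in the paper)
    Returns (dv/dt, dgamma/dt).
    """
    q = 0.5 * rho * v * v                                  # dynamic pressure
    dv = -(q * s_cd) / m - g * math.sin(gamma)             # along-path accel.
    dgamma = (q * s_cl) / (m * v) - (g / v) * math.cos(gamma)  # path curvature
    return dv, dgamma
```

    Setting both derivatives to zero recovers the familiar steady-glide result tan(gamma) = -s_cd/s_cl, which is how empirical lift-to-drag data fix the equilibrium glide slope.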

  20. Large eddy simulation of turbulent channel flow: ILLIAC 4 calculation

    NASA Technical Reports Server (NTRS)

    Kim, J.; Moin, P.

    1979-01-01

    The three-dimensional time dependent equations of motion were numerically integrated for fully-developed turbulent channel flow. A large scale flow field was obtained directly from the solution of these equations, and small scale field motions were simulated through an eddy viscosity model. The calculations were carried out on the ILLIAC 4 computer. The computed flow patterns show that the wall layer consists of coherent structures of low speed and high speed streaks alternating in the spanwise direction. These structures were absent in the regions away from the wall. Hot spots, small localized regions of very large turbulent shear stress, were frequently observed. The profiles of the pressure velocity-gradient correlations show a significant transfer of energy from the normal to the spanwise component of turbulent kinetic energy in the immediate neighborhood of the wall ('the splatting effect').

  1. A Large Motion Suspension System for Simulation of Orbital Deployment

    NASA Technical Reports Server (NTRS)

    Straube, T. M.; Peterson, L. D.

    1994-01-01

    This paper describes the design and implementation of a vertical degree of freedom suspension system which provides a constant-force off-load condition to counter gravity over large displacements. By accommodating motions up to one meter for structures weighing up to 100 pounds, the system is useful for experiments which simulate the on-orbit deployment of spacecraft components. A unique aspect of this system is the combination of a large-stroke passive off-load device augmented by electromotive torque-actuated force feedback. The active force feedback has the effect of reducing breakaway friction by an order of magnitude over the passive system alone. The paper describes the development of the suspension hardware and the feedback control algorithm. Experiments were performed to verify the suspension system's ability to provide a gravity off-load as well as its effect on the modal characteristics of a test article.
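
    The active force-feedback idea can be sketched as a force-tracking loop wrapped around the passive off-load: the motor trims the measured suspension force toward the constant target so the payload feels (ideally) zero net vertical force. The controller structure and gains below are illustrative assumptions, not the paper's actual algorithm.

```python
class ForceOffload:
    """Proportional-integral force feedback for a constant-force off-load.

    The passive device carries the nominal weight; the electromotive
    actuator commands the residual needed to hold the measured cable
    force at the target. Gains kp, ki and the time step dt are
    hypothetical values for illustration.
    """

    def __init__(self, f_target, kp=2.0, ki=0.5, dt=0.001):
        self.f_target = f_target   # desired constant off-load force [N]
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0        # accumulated force error [N*s]

    def update(self, f_measured):
        """Return the motor force command for one control cycle."""
        err = self.f_target - f_measured
        self.integral += err * self.dt
        return self.kp * err + self.ki * self.integral
```

    The integral term is what removes the steady-state error that passive friction would otherwise leave, which is consistent with the order-of-magnitude breakaway-friction reduction reported above.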

  2. Constitutive modeling of large inelastic deformation of amorphous polymers: Free volume and shear transformation zone dynamics

    NASA Astrophysics Data System (ADS)

    Voyiadjis, George Z.; Samadi-Dooki, Aref

    2016-06-01

    Due to the lack of long-range order in their molecular structure, amorphous polymers possess a considerable free volume content in their inter-molecular space. During finite deformation, these free volume holes serve as the potential sites for localized permanent plastic deformation inclusions which are called shear transformation zones (STZs). While the free volume content has been experimentally shown to increase during the course of plastic straining in glassy polymers, thermal analysis of stored energy due to the deformation shows that the STZ nucleation energy decreases at large plastic strains. The evolution of the free volume, and of the STZ number density and nucleation energy, during finite straining is formulated in this paper in order to investigate the uniaxial post-yield softening-hardening behavior of glassy polymers. This study shows that the reduction of the STZ nucleation energy, which is correlated with the free volume increase, brings about the post-yield primary softening of the amorphous polymers up to the steady-state strain value, and the secondary hardening is a result of the increased number density of the STZs, which is required for large plastic strains, while their nucleation energy is stabilized beyond the steady-state strain. The evolutions of the free volume content and STZ nucleation energy are also used to demonstrate the effect of the strain rate, temperature, and thermal history of the sample on its post-yield behavior. The obtained results from the model are compared with experimental observations on poly(methyl methacrylate), which show satisfactory agreement.

  3. Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Givi, Peyman; Madnia, Cyrus K.; Steinberger, Craig J.

    1990-01-01

    This research is involved with the implementation of advanced computational schemes based on large eddy simulations (LES) and direct numerical simulations (DNS) to study the phenomenon of mixing and its coupling with chemical reactions in compressible turbulent flows. In the efforts related to LES, a research program to extend the present capabilities of this method was initiated for the treatment of chemically reacting flows. In the DNS efforts, the focus is on detailed investigations of the effects of compressibility, heat release, and non-equilibrium kinetics modeling in high speed reacting flows. Emphasis was on the simulation of simple flows, namely homogeneous compressible flows and temporally developing high speed mixing layers.

  4. Efficient simulation of fuel cell stacks with the volume averaging method

    NASA Astrophysics Data System (ADS)

    Roos, M.; Batawi, E.; Harnisch, U.; Hocker, Th.

    In fuel cell systems, a multitude of coupled physical and chemical processes take place within the assembly: fluid flow, diffusion, charge and heat transport, as well as electrochemical reactions. For design and optimisation purposes, direct numerical simulation of the full three-dimensional (3D) structure (using CFD tools) is often not feasible due to the large range of length scales that are associated with the various physical and chemical phenomena. However, since many fuel cell components such as gas ducts or current collectors are made of repetitive structures, volume averaging techniques can be employed to replace details of the original structure by their averaged counterparts. In this study, we present simulation results for SOFCs that are based on a two-step procedure: first, for all repetitive structures detailed 3D finite element simulations are used to obtain effective parameters for the transport equations and interaction terms for averaged quantities. Bipolar plates, for example, are characterised by their porosity and permeability with respect to fluid flow and by anisotropic material tensors for heat and charge transport. Similarly, one obtains effective values for the Nernst potential and various kinetic parameters. The complex structural information is thereby cast into effective material properties. In a second step, we utilise these quantities to simulate fuel cells in 2D, thereby decreasing the computation time by several orders of magnitude. Depending on the design and optimisation goals, one chooses appropriate cuts perpendicular to or along the stack axis. The resulting models provide current densities, temperature and species distributions as well as operation characteristics. We tested our method with the FEM-based multiphysics software NM SESES, which offers the flexibility to specify the necessary effective models. Results of simulation runs for Sulzer HEXIS-SOFC stacks are presented.
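
    The first step above, replacing a repetitive structure by effective transport coefficients, can be illustrated for the simplest possible unit cell: a stack of parallel layers, where volume averaging gives the arithmetic (parallel) mean in-plane and the harmonic (series) mean through-plane. This is a textbook homogenisation sketch under assumed layered geometry, not the paper's 3D finite element procedure.

```python
def effective_conductivity(fractions, k_vals):
    """Effective conductivity of a layered unit cell via volume averaging.

    fractions : volume fractions of the layers (must sum to 1)
    k_vals    : conductivity of each layer
    Returns (k_parallel, k_series): the in-plane arithmetic mean and the
    through-plane harmonic mean, i.e. the two entries of the anisotropic
    effective tensor for this idealised geometry.
    """
    k_par = sum(f * k for f, k in zip(fractions, k_vals))
    k_ser = 1.0 / sum(f / k for f, k in zip(fractions, k_vals))
    return k_par, k_ser
```

    For real bipolar-plate geometries the anisotropic tensor has to be extracted from detailed 3D simulations of the unit cell, as the abstract describes, but the output plays the same role: an averaged material property for the cheap 2D stack model.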

  5. Lossless compression of very large volume data with fast dynamic access

    NASA Astrophysics Data System (ADS)

    Zhao, Rongkai; Tao, Tao; Gabriel, Michael; Belford, Geneva

    2002-09-01

    The volumetric data set is important in many scientific and biomedical fields. Since such sets may be extremely large, a compression method is critical to store and transmit them. To achieve a high compression rate, most existing volume compression methods are lossy, which is usually unacceptable in biomedical applications. We developed a new context-based non-linear prediction method to preprocess the volume data set in order to effectively lower the prediction entropy. The prediction error is further encoded using a Huffman code. Unlike conventional methods, the volume is divided into cubical blocks to take advantage of the data's spatial locality. Instead of building one Huffman tree for each block, we developed a novel binning algorithm that builds a Huffman tree for each group (bin) of blocks. Combining all the effects above, we achieved an excellent compression rate compared to other lossless volume compression methods. In addition, an auxiliary data structure, the Scalable Hyperspace File (SHSF), is used to index the huge volume so that we can obtain many other benefits including parallel construction, on-the-fly access to compressed data without global decompression, fast previewing, efficient background compression, and scalability.
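
    Building one Huffman tree per bin of blocks, rather than per block, only requires pooling symbol frequencies over the bin before the usual tree construction. A minimal heap-based sketch of that construction (the function name and representation are illustrative; the paper's binning algorithm itself, which decides how blocks are grouped, is not reproduced here):

```python
import heapq

def huffman_code(freqs):
    """Build a Huffman code {symbol: bitstring} from symbol frequencies.

    freqs would be the prediction-error histogram pooled over one bin of
    blocks, so the table cost is amortised across the whole bin.
    """
    if len(freqs) == 1:                      # degenerate one-symbol alphabet
        return {next(iter(freqs)): "0"}
    # Heap entries: (frequency, unique tiebreak id, partial code table)
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)      # two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]
```

    A good non-linear predictor concentrates the error histogram around zero, so frequent small errors receive short codewords, which is where the entropy reduction described above pays off.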

  6. Geophysics Under Pressure: Large-Volume Presses Versus the Diamond-Anvil Cell

    NASA Astrophysics Data System (ADS)

    Hazen, R. M.

    2002-05-01

    Prior to 1970, the legacy of Harvard physicist Percy Bridgman dominated high-pressure geophysics. Massive presses with large-volume devices, including piston-cylinder, opposed-anvil, and multi-anvil configurations, were widely used in both science and industry to achieve a range of crustal and upper mantle temperatures and pressures. George Kennedy of UCLA was a particularly influential advocate of large-volume apparatus for geophysical research prior to his death in 1980. The high-pressure scene began to change in 1959 with the invention of the diamond-anvil cell, which was designed simultaneously and independently by John Jamieson at the University of Chicago and Alvin Van Valkenburg at the National Bureau of Standards in Washington, DC. The compact, inexpensive diamond cell achieved record static pressures and had the advantage of optical access to the high-pressure environment. Nevertheless, members of the geophysical community, who favored the substantial sample volumes, geothermally relevant temperature range, and satisfying bulk of large-volume presses, initially viewed the diamond cell with indifference or even contempt. Several factors led to a gradual shift in emphasis from large-volume presses to diamond-anvil cells in geophysical research during the 1960s and 1970s. These factors include (1) their relatively low cost at a time of fiscal restraint, (2) Alvin Van Valkenburg's new position as a Program Director at the National Science Foundation in 1964 (when George Kennedy's proposal for a National High-Pressure Laboratory was rejected), (3) the development of lasers and micro-analytical spectroscopic techniques suitable for analyzing samples in a diamond cell, and (4) the attainment of record pressures (e.g., 100 GPa in 1975 by Mao and Bell at the Geophysical Laboratory). Today, a more balanced collaborative approach has been adopted by the geophysics and mineral physics community. Many high-pressure laboratories operate a new generation of less expensive

  7. HYBRID BRIDGMAN ANVIL DESIGN: AN OPTICAL WINDOW FOR IN-SITU SPECTROSCOPY IN LARGE VOLUME PRESSES

    SciTech Connect

    Lipp, M J; Evans, W J; Yoo, C S

    2005-07-29

    The absence of in-situ optical probes for large volume presses often limits their application to high-pressure materials research. In this paper, we present a unique anvil/optical-window design for use in large volume presses, which consists of an inverted diamond anvil seated in a Bridgman-type anvil. A small cylindrical aperture through the Bridgman anvil ending at the back of the diamond anvil allows optical access to the sample chamber and permits direct optical spectroscopy measurements, such as ruby fluorescence (in-situ pressure) or Raman spectroscopy. The performance of this anvil design has been demonstrated by loading KBr to a pressure of 14.5 GPa.
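    The in-situ pressure determination mentioned above relies on the calibrated shift of the ruby R1 fluorescence line. A minimal sketch using the widely cited Mao et al. quasi-hydrostatic ruby scale (coefficients A = 1904 GPa, B = 7.665, and the ambient R1 wavelength 694.24 nm are standard literature values, not taken from this paper) might look like:

```python
# Hedged sketch of the ruby-fluorescence pressure scale:
#   P = (A / B) * [ (1 + dlambda/lambda0)^B - 1 ]
A_GPA, B_EXP, LAMBDA0_NM = 1904.0, 7.665, 694.24

def ruby_pressure_gpa(dlambda_nm):
    """Pressure (GPa) from the measured red-shift of the ruby R1 line (nm)."""
    return (A_GPA / B_EXP) * ((1.0 + dlambda_nm / LAMBDA0_NM) ** B_EXP - 1.0)

# a ~5 nm red-shift corresponds to roughly the 14.5 GPa range
# reported for the KBr loading in this abstract
p_gpa = ruby_pressure_gpa(5.0)
print(round(p_gpa, 1))
```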

  8. Assembly, operation and disassembly manual for the Battelle Large Volume Water Sampler (BLVWS)

    SciTech Connect

    Thomas, V.W.; Campbell, R.M.

    1984-12-01

    Assembly, operation, and disassembly of the Battelle Large Volume Water Sampler (BLVWS) are described in detail. Step-by-step instructions for assembly, general operation, and disassembly are provided to allow an operator completely unfamiliar with the sampler to successfully apply the BLVWS to his research sampling needs. The sampler permits concentration of both particulate and dissolved radionuclides from large volumes of ocean and fresh water. The water sample passes through a filtration section for particle removal, then through sorption or ion exchange beds where species of interest are removed. The sampler components which contact the water being sampled are constructed of polyvinyl chloride (PVC). The sampler has been successfully applied to many sampling needs over the past fifteen years. 9 references, 8 figures.

  9. Assessment of dynamic closure for premixed combustion large eddy simulation

    NASA Astrophysics Data System (ADS)

    Langella, Ivan; Swaminathan, Nedunchezhian; Gao, Yuan; Chakraborty, Nilanjan

    2015-09-01

    Turbulent piloted Bunsen flames of stoichiometric methane-air mixtures are computed using the large eddy simulation (LES) paradigm involving an algebraic closure for the filtered reaction rate. This closure involves the filtered scalar dissipation rate of a reaction progress variable. The model for this dissipation rate involves a parameter βc representing the flame front curvature effects induced by turbulence, chemical reactions, molecular dissipation, and their interactions at the sub-grid level, suggesting that this parameter may vary with filter width, i.e., be scale-dependent. Thus, it would be ideal to evaluate this parameter dynamically in the LES. A procedure for this evaluation is discussed and assessed using direct numerical simulation (DNS) data and LES calculations. The probability density functions of βc obtained from the DNS and LES calculations are very similar when the turbulent Reynolds number is sufficiently large and when the filter width normalised by the laminar flame thermal thickness is larger than unity. Results obtained using a constant (static) value for this parameter are also used for comparative evaluation. The detailed discussion presented in this paper suggests that the dynamic procedure works well, and physical insights and reasoning are provided to explain the observed behaviour.

  10. Large-timestep mover for particle simulations of arbitrarily magnetized species

    SciTech Connect

    Cohen, R.H.; Friedman, A.; Grote, D.P.; Vay, J-L.

    2007-03-26

    For self-consistent ion-beam simulations including electron motion, it is desirable to be able to follow electron dynamics accurately without being constrained by the electron cyclotron timescale. To this end, we have developed a particle advance that interpolates between full particle dynamics and drift motion. By making a proper choice of interpolation parameter, simulation particles experience physically correct parallel dynamics, drift motion, and gyroradius when the timestep is large compared to the cyclotron period, though the effective gyro frequency is artificially low; in the opposite timestep limit, the method approaches a conventional Boris particle push. By combining this scheme with a Poisson solver that includes an interpolated form of the polarization drift in the dielectric response, the mover's utility can be extended to higher-density problems where the plasma frequency of the species being advanced exceeds its cyclotron frequency. We describe a series of tests of the mover and its application to simulation of electron clouds in heavy-ion accelerators.
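    In the small-timestep limit the scheme approaches the conventional Boris push. A minimal sketch of that baseline step (normalized units and field values are illustrative; this is not the authors' interpolated mover) is:

```python
import numpy as np

def boris_push(x, v, E, B, q_m, dt):
    """One conventional Boris step: half electric kick, exact-magnitude
    magnetic rotation, half electric kick, then position update."""
    v_minus = v + 0.5 * q_m * E * dt            # first half acceleration
    t = 0.5 * q_m * B * dt                      # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)     # rotated velocity
    v_new = v_plus + 0.5 * q_m * E * dt         # second half acceleration
    return x + v_new * dt, v_new

# gyration test: E = 0, B along z; the Boris rotation conserves |v|
x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])
for _ in range(1000):
    x, v = boris_push(x, v, np.zeros(3), np.array([0.0, 0.0, 1.0]), 1.0, 0.01)
print(abs(np.linalg.norm(v) - 1.0) < 1e-12)
```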

  11. Large eddy simulation and study of the urban boundary layer

    NASA Astrophysics Data System (ADS)

    Miao, Shiguang; Jiang, Weimei

    2004-08-01

    Based on a pseudo-spectral large eddy simulation (LES) model, an LES model with an anisotropy turbulent kinetic energy (TKE) closure model and an explicit multi-stage third-order Runge-Kutta scheme is established. The modeling and analysis show that the LES model can simulate the planetary boundary layer (PBL) with a uniform underlying surface under various stratifications very well. Then, similar to the description of a forest canopy, the drag term on momentum and the production term of TKE by subgrid city buildings are introduced into the LES equations to account for the area-averaged effect of the subgrid urban canopy elements and to simulate the meteorological fields of the urban boundary layer (UBL). Numerical experiments and comparison analysis show that: (1) the result from the LES of the UBL with a proposed formula for the drag coefficient is consistent and comparable with that from wind tunnel experiments and an urban subdomain scale model; (2) due to the effect of urban buildings, the wind velocity near the canopy is decreased, turbulence is intensified, TKE, variance, and momentum flux are increased, the momentum and heat flux at the top of the PBL are increased, and the development of the PBL is quickened; (3) the height of the roughness sublayer (RS) of the actual city buildings is the maximum building height (1.5-3 times the mean building height), and a constant flux layer (CFL) exists in the lower part of the UBL.

  12. Large Eddy Simulations of Colorless Distributed Combustion Systems

    NASA Astrophysics Data System (ADS)

    Abdulrahman, Husam F.; Jaberi, Farhad; Gupta, Ashwani

    2014-11-01

    Development of efficient and low-emission colorless distributed combustion (CDC) systems for gas turbine applications requires careful examination of the role of various flow and combustion parameters. Numerical simulations of CDC in a laboratory-scale combustor have been conducted to carefully examine the effects of these parameters on the CDC. The computational model is based on a hybrid modeling approach combining large eddy simulation (LES) with the filtered mass density function (FMDF) equations, solved with high order numerical methods and complex chemical kinetics. The simulated combustor operates based on the principle of high temperature air combustion (HiTAC) and has been shown to significantly reduce NOx and CO emissions while improving the reaction pattern factor and stability without using any flame stabilizer and with low pressure drop and noise. The focus of the current work is to investigate the mixing of air and hydrocarbon fuels and the non-premixed and premixed reactions within the combustor by the LES/FMDF with reduced chemical kinetic mechanisms for the same flow conditions and configurations investigated experimentally. The main goal is to develop better CDC with higher mixing and efficiency, ultra-low emission levels and optimum residence time. The computational results establish the consistency and the reliability of LES/FMDF and its Lagrangian-Eulerian numerical methodology.

  13. Unsteady RANS and Large Eddy simulations of multiphase diesel injection

    NASA Astrophysics Data System (ADS)

    Philipp, Jenna; Green, Melissa; Akih-Kumgeh, Benjamin

    2015-11-01

    Unsteady Reynolds-Averaged Navier-Stokes (URANS) and Large Eddy Simulations (LES) of two-phase flow and evaporation of a high-pressure diesel injection into a quiescent, high-temperature environment are investigated. Unsteady RANS and LES are turbulent flow simulation approaches used to determine complex flow fields. The latter allows for more accurate predictions of complex phenomena such as turbulent mixing and physico-chemical processes associated with diesel combustion. In this work we investigate a high pressure diesel injection using the Euler-Lagrange method for multiphase flows as implemented in the Star-CCM+ CFD code. A dispersed liquid phase is represented by Lagrangian particles while the multi-component gas phase is solved using an Eulerian method. Results obtained from the two approaches are compared with respect to spray penetration depth and air entrainment. They are also compared with experimental data taken from the Sandia Engine Combustion Network for ``Spray A''. Characteristics of primary and secondary atomization are qualitatively evaluated for both simulation approaches.

  14. Large eddy simulation of a pumped-storage reservoir

    NASA Astrophysics Data System (ADS)

    Launay, Marina; Leite Ribeiro, Marcelo; Roman, Federico; Armenio, Vincenzo

    2016-04-01

    The last decades have seen an increasing number of pumped-storage hydropower projects all over the world. Pumped-storage schemes move water between two reservoirs located at different elevations to store energy and to generate electricity following the electricity demand. Thus the reservoirs can be subject to significant water-level variations occurring at the daily scale. These new cycles lead to changes in the hydraulic behaviour of the reservoirs. Sediment dynamics and sediment budgets are modified, sometimes inducing problems of erosion and deposition within the reservoirs. With increasing computer performance, the use of numerical techniques has become popular for the study of environmental processes. Among numerical techniques, Large Eddy Simulation (LES) has arisen as an alternative tool for problems characterized by complex physics and geometries. This work uses the LES-COAST code, an LES model under development in the framework of the Seditrans Project, for the simulation of an upper Alpine reservoir of a pumped-storage scheme. Simulations consider the filling (pump mode) and emptying (turbine mode) of the reservoir. The hydraulic results give a better understanding of the processes occurring within the reservoir. They are considered for an assessment of the sediment transport processes and of their consequences.

  15. Scale-Similar Models for Large-Eddy Simulations

    NASA Technical Reports Server (NTRS)

    Sarghini, F.

    1999-01-01

    Scale-similar models employ multiple filtering operations to identify the smallest resolved scales, which have been shown to be the most active in the interaction with the unresolved subgrid scales. They do not assume that the principal axes of the strain-rate tensor are aligned with those of the subgrid-scale stress (SGS) tensor, and allow the explicit calculation of the SGS energy. They can provide backscatter in a numerically stable and physically realistic manner, and predict SGS stresses in regions that are well correlated with the locations where large Reynolds stress occurs. In this paper, eddy viscosity and mixed models, which include an eddy-viscosity part as well as a scale-similar contribution, are applied to the simulation of two flows, a high Reynolds number plane channel flow, and a three-dimensional, nonequilibrium flow. The results show that simulations without models or with the Smagorinsky model are unable to predict nonequilibrium effects. Dynamic models provide an improvement of the results: the adjustment of the coefficient results in more accurate prediction of the perturbation from equilibrium. The Lagrangian-ensemble approach [Meneveau et al., J. Fluid Mech. 319, 353 (1996)] is found to be very beneficial. Models that included a scale-similar term and a dissipative one, as well as the Lagrangian ensemble averaging, gave results in the best agreement with the direct simulation and experimental data.
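    As an illustration of the scale-similar idea, a Bardina-type stress component can be formed by test-filtering products of the resolved fields and subtracting the product of the test-filtered fields. The 1D sketch below uses a top-hat box test filter and a synthetic periodic field (both illustrative assumptions, not data from the paper):

```python
import numpy as np

def box_filter(f, width=5):
    """Periodic top-hat (box) test filter; width in grid points (odd)."""
    pad = width
    fp = np.concatenate([f[-pad:], f, f[:pad]])      # periodic padding
    kernel = np.ones(width) / width
    return np.convolve(fp, kernel, mode='same')[pad:pad + f.size]

def scale_similar_stress(u, v, width=5):
    """Bardina-type scale-similar SGS stress component:
    test-filter the product of resolved fields, then subtract the
    product of the test-filtered fields."""
    return box_filter(u * v, width) - box_filter(u, width) * box_filter(v, width)

# synthetic resolved fields on a periodic grid (illustrative only)
x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
u = np.sin(x) + 0.1 * np.sin(8.0 * x)
v = np.cos(x) + 0.1 * np.cos(8.0 * x)
tau = scale_similar_stress(u, v)
print(tau.shape)
```

    By construction the stress vanishes for fields with no content below the test-filter scale, which is why the model picks out the smallest resolved scales.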

  16. Large eddy simulation of mechanical mixing in anaerobic digesters.

    PubMed

    Wu, Binxin

    2012-03-01

    A comprehensive study of anaerobic digestion requires an advanced turbulence model technique to accurately predict mixing flow patterns because the digestion process that involves mass transfer between anaerobes and their substrates is primarily dependent on detailed information about the fine structure of turbulence in the digesters. This study presents a large eddy simulation (LES) of mechanical agitation of non-Newtonian fluids in anaerobic digesters, in which the sliding mesh method is used to characterize the impeller rotation. The three subgrid scale (SGS) models investigated are: (i) Smagorinsky-Lilly model, (ii) wall-adapting local eddy-viscosity model, and (iii) kinetic energy transport (KET) model. The simulation results show that the three SGS models produce very similar flow fields. A comparison of the simulated and measured axial velocities indicates that the LES profile shapes are in general agreement with the experimental data but they differ markedly in velocity magnitudes. A check of impeller power and flow numbers demonstrates that all the SGS models give excellent predictions, with the KET model performing the best. Moreover, the performance of six Reynolds-averaged Navier-Stokes turbulence models is assessed and compared with the LES results. PMID:22038563
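    For reference, the Smagorinsky-Lilly model named above computes an eddy viscosity from the resolved strain-rate tensor. A minimal sketch (the constant C_s = 0.17 and the shear values are illustrative choices, not from the paper) is:

```python
import numpy as np

def smagorinsky_viscosity(grad_u, delta, c_s=0.17):
    """Smagorinsky-Lilly eddy viscosity nu_t = (C_s * Delta)^2 * |S|,
    with |S| = sqrt(2 S_ij S_ij) and S the resolved strain-rate tensor."""
    S = 0.5 * (grad_u + grad_u.T)          # symmetric strain-rate tensor
    S_mag = np.sqrt(2.0 * np.sum(S * S))   # characteristic strain rate
    return (c_s * delta) ** 2 * S_mag

# pure shear du/dy = 2.0 on a grid of spacing delta = 0.01 (made-up values)
grad_u = np.array([[0.0, 2.0, 0.0],
                   [0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])
nu_t = smagorinsky_viscosity(grad_u, 0.01)
print(nu_t)
```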

  17. Rapid Adaptive Optical Recovery of Optimal Resolution over Large Volumes

    PubMed Central

    Wang, Kai; Milkie, Dan; Saxena, Ankur; Engerer, Peter; Misgeld, Thomas; Bronner, Marianne E.; Mumm, Jeff; Betzig, Eric

    2014-01-01

    Using a de-scanned, laser-induced guide star and direct wavefront sensing, we demonstrate adaptive correction of complex optical aberrations at high numerical aperture and a 14 ms update rate. This permits us to compensate for the rapid spatial variation in aberration often encountered in biological specimens, and recover diffraction-limited imaging over large (>240 μm)³ volumes. We applied this to image fine neuronal processes and subcellular dynamics within the zebrafish brain. PMID:24727653

  18. Shuttle mission simulator requirements report, volume 1, revision A

    NASA Technical Reports Server (NTRS)

    Burke, J. F.

    1973-01-01

    The tasks required to design, develop, produce, and field-support a shuttle mission simulator for training crew members and ground support personnel are defined. The requirements for program management, control, systems engineering, design and development are discussed along with the design and construction standards, software design, control and display, communication and tracking, and systems integration.

  19. Program to Optimize Simulated Trajectories (POST). Volume 3: Programmer's manual

    NASA Technical Reports Server (NTRS)

    Brauer, G. L.; Cornick, D. E.; Habeger, A. R.; Petersen, F. M.; Stevenson, R.

    1975-01-01

    Information pertinent to the programmer and relating to the program to optimize simulated trajectories (POST) is presented. Topics discussed include: program structure and logic, subroutine listings and flow charts, and internal FORTRAN symbols. The POST core requirements are summarized along with program macrologic.

  20. Shuttle mission simulator baseline definition report, volume 2

    NASA Technical Reports Server (NTRS)

    Dahlberg, A. W.; Small, D. E.

    1973-01-01

    The baseline definition report for the space shuttle mission simulator is presented. The subjects discussed are: (1) the general configurations, (2) motion base crew station, (3) instructor operator station complex, (4) display devices, (5) electromagnetic compatibility, (6) external interface equipment, (7) data conversion equipment, (8) fixed base crew station equipment, and (9) computer complex. Block diagrams of the supporting subsystems are provided.

  1. Program to Optimize Simulated Trajectories (POST). Volume 2: Utilization manual

    NASA Technical Reports Server (NTRS)

    Bauer, G. L.; Cornick, D. E.; Habeger, A. R.; Petersen, F. M.; Stevenson, R.

    1975-01-01

    Information pertinent to users of the program to optimize simulated trajectories (POST) is presented. The required input and available output are described for each of the trajectory and targeting/optimization options. A sample input listing and resulting output are given.

  2. Anatomically Detailed and Large-Scale Simulations Studying Synapse Loss and Synchrony Using NeuroBox

    PubMed Central

    Breit, Markus; Stepniewski, Martin; Grein, Stephan; Gottmann, Pascal; Reinhardt, Lukas; Queisser, Gillian

    2016-01-01

    The morphology of neurons and networks plays an important role in processing electrical and biochemical signals. Based on neuronal reconstructions, which are becoming abundantly available through databases such as NeuroMorpho.org, numerical simulations of Hodgkin-Huxley-type equations, coupled to biochemical models, can be performed in order to systematically investigate the influence of cellular morphology and the connectivity pattern in networks on the underlying function. Development in the area of synthetic neural network generation and morphology reconstruction from microscopy data has brought forth the software tool NeuGen. Coupling this morphology data (either from databases, synthetic, or reconstruction) to the simulation platform UG 4 (which harbors a neuroscientific portfolio) and VRL-Studio has brought forth the extendible toolbox NeuroBox. NeuroBox allows users to perform numerical simulations on hybrid-dimensional morphology representations. The code basis is designed in a modular way, such that e.g., new channel or synapse types can be added to the library. Workflows can be specified through scripts or through the VRL-Studio graphical workflow representation. Third-party tools, such as ImageJ, can be added to NeuroBox workflows. In this paper, NeuroBox is used to study the electrical and biochemical effects of synapse loss vs. synchrony in neurons, to investigate large morphology data sets within detailed biophysical simulations, and to demonstrate the capability of utilizing high-performance computing infrastructure for large-scale network simulations. Using new synapse distribution methods and Finite Volume based numerical solvers for compartment-type models, our results demonstrate how an increase in synaptic synchronization can compensate for synapse loss at the electrical and calcium level, and how detailed neuronal morphology can be integrated in large-scale network simulations. PMID:26903818

  3. Hybrid-toroidal anvil: a replacement for the conventional WC anvil used for the large volume cubic high pressure apparatus

    NASA Astrophysics Data System (ADS)

    Han, Qi-Gang; Yang, Wen-Ke; Jia, Xiao-Peng; Ma, Hong-An

    2014-10-01

    We propose the design and operation of a hybrid-toroidal anvil for the large volume cubic high pressure apparatus (LV-CHPA), which achieves higher sintered quality and lower tungsten carbide (WC) anvil weight and cost than the conventional anvil. We use finite element simulations to show the distributions of stress on the surface and in the bulk of the WC anvils, and conclude that, for a given load on the hybrid-toroidal anvil, the volume of the compressed press medium has increased by 4.88%, and the rate of the transmitted pressure has increased by 6.72% compared with the conventional anvil. Furthermore, the advantages of the hybrid-toroidal anvil are that the movement of anvils increases by 37.14% and the growth rate of the fatigue crack decreases by 40%. This has been confirmed by high pressure experiments. This work presents an approach to optimize the WC anvils used for the LV-CHPA and a simple method to achieve higher sample pressure and larger sample volume.

  4. Large Eddy Simulations of Turbulent Flow Over a Wavy Wall

    NASA Astrophysics Data System (ADS)

    Sundaram, Shivshankar; Avva, Ram

    1997-11-01

    Turbulent, separated flow over a wavy wall was simulated using CFD-ACE, a general purpose Navier-Stokes code. The code employs finite-volume formulation and body-fitted curvilinear (BFC) grids. The flow channel consists of a flat upper wall at a mean distance, H, from a sinusoidally varying lower wall (amplitude of 0.05H and a wavelength of 1H). The Reynolds number in terms of bulk velocity and H was 6760. Computations used both a coarse grid (40x40x20; 4 waves) and a fine grid (60x40x40; 2 waves). The spanwise extent was 2H. Periodic boundary conditions were enforced in the streamwise and spanwise directions. Both Smagorinsky (with van Driest damping) and Dynamic models were employed. The Dynamic model yielded better overall results. Present separation and reattachment lengths of 0.13 and 0.64 are in excellent agreement with prior DNS and experiment. Pressure, friction velocity over the wavy wall and mean cross-channel profiles were indistinguishable from prior data. A turbulent mixing layer and a growing boundary layer downstream of reattachment were identified using peaks in turbulence intensities. The level and location of these peaks were in good agreement with DNS.

  5. Large-eddy simulation of very-large-scale motions in atmospheric boundary-layer flows

    NASA Astrophysics Data System (ADS)

    Fang, Jiannong; Porté-Agel, Fernando

    2015-04-01

    In the last few decades, laboratory experiments and direct numerical simulations of turbulent boundary layers, performed at low to moderate Reynolds numbers, have found very-large-scale motions (VLSMs) in the logarithmic and outer regions. The size of VLSMs was found to be 10-20 times as large as the boundary-layer thickness. Recently, a few studies based on field experiments have examined the presence of VLSMs in neutral atmospheric boundary-layer flows, which are invariably at very high Reynolds numbers. Very-large-scale structures similar to those observed in laboratory-scale experiments have been found and characterized. However, it is known that field measurements are more challenging than laboratory-based measurements, and can lack resolution and statistical convergence. Such challenges have implications for the robustness of the analysis, which may be further adversely affected by the use of Taylor's hypothesis to convert time series to spatial data. We use large-eddy simulation (LES) to investigate VLSMs in atmospheric boundary-layer flows. In order to make sure that the largest flow structures are properly resolved, the horizontal domain size is chosen to be much larger than the standard domain size. It is shown that the contributions to the resolved turbulent kinetic energy and shear stress from VLSMs are significant. Therefore, the large computational domain adopted here is essential for the purpose of investigating VLSMs. The spatially coherent structures associated with VLSMs are characterized through flow visualization and statistical analysis. The instantaneous velocity fields in horizontal planes give evidence of streamwise-elongated flow structures of low-speed fluid, with negative fluctuations of the streamwise velocity component, which are flanked on either side by similarly elongated high-speed structures. The pre-multiplied power spectra and two-point correlations indicate that the scales of these streak-like structures are very large.
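    The pre-multiplied spectrum analysis referred to above can be sketched as follows, using a synthetic streamwise signal rather than simulation data; an energetic very large scale shows up as a peak of k*E(k) at low wavenumber:

```python
import numpy as np

def premultiplied_spectrum(u, dx):
    """One-dimensional pre-multiplied streamwise spectrum k*E(k);
    a peak at low k indicates VLSM-like energy content."""
    n = u.size
    uh = np.fft.rfft(u - u.mean())
    E = (np.abs(uh) ** 2) / n                  # un-normalised energy spectrum
    k = 2.0 * np.pi * np.fft.rfftfreq(n, d=dx)  # angular wavenumber
    return k, k * E

# synthetic signal: an energetic very large scale plus weak small scales
x = np.linspace(0.0, 100.0, 4096, endpoint=False)
u = 2.0 * np.sin(2 * np.pi * x / 50.0) + 0.2 * np.sin(2 * np.pi * x / 1.0)
k, kE = premultiplied_spectrum(u, x[1] - x[0])
print(k[np.argmax(kE)] < 1.0)   # the peak sits at the large (small-k) scale
```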

  6. Robust large-scale parallel nonlinear solvers for simulations.

    SciTech Connect

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step based on a local quadratic model rather than a linear model. 
The advantage of Bouaricha's method is that it can use any existing linear solver, which makes it simple to write
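    A minimal sketch of Broyden's "good" method on a toy nonlinear system (using a one-shot finite-difference Jacobian to start; this is an illustration of the idea, not Sandia's large-scale limited-memory implementation) is:

```python
import numpy as np

def fd_jacobian(F, x, eps=1e-7):
    """One-sided finite-difference Jacobian, used only to initialize Broyden."""
    Fx = F(x)
    J = np.empty((Fx.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (F(xp) - Fx) / eps
    return J

def broyden_solve(F, x0, tol=1e-10, max_iter=50):
    """Broyden's 'good' method: rank-one secant updates of an approximate
    Jacobian, so F never needs an analytic Jacobian after the start."""
    x = np.asarray(x0, dtype=float)
    B = fd_jacobian(F, x)
    Fx = F(x)
    for _ in range(max_iter):
        dx = np.linalg.solve(B, -Fx)          # quasi-Newton step
        x = x + dx
        F_new = F(x)
        if np.linalg.norm(F_new) < tol:
            break
        # secant condition: B_new @ dx = F_new - Fx
        B += np.outer(F_new - Fx - B @ dx, dx) / (dx @ dx)
        Fx = F_new
    return x

# toy system: x^2 + y^2 = 1 and x = y, solved without coding a Jacobian
F = lambda p: np.array([p[0] ** 2 + p[1] ** 2 - 1.0, p[0] - p[1]])
sol = broyden_solve(F, [1.0, 0.5])
print(np.linalg.norm(F(sol)) < 1e-8)
```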

  7. Shuttle vehicle and mission simulation requirements report, volume 1

    NASA Technical Reports Server (NTRS)

    Burke, J. F.

    1972-01-01

    The requirements for the space shuttle vehicle and mission simulation are developed to analyze the systems, mission, operations, and interfaces. The requirements are developed according to the following subject areas: (1) mission envelope, (2) orbit flight dynamics, (3) shuttle vehicle systems, (4) external interfaces, (5) crew procedures, (6) crew station, (7) visual cues, and (8) aural cues. Line drawings and diagrams of the space shuttle are included to explain the various systems and components.

  8. WEST-3 wind turbine simulator development. Volume 1: Summary

    NASA Technical Reports Server (NTRS)

    Sridhar, S.

    1985-01-01

    This report is a summary description of WEST-3, a new real-time wind turbine simulator developed by Paragon Pacific Inc. WEST-3 is an all digital, fully programmable, high performance parallel processing computer. Contained in the report are descriptions of the WEST-3 hardware and software. WEST-3 consists of a network of Computational Units (CUs) working in parallel. Each CU is a custom designed high speed digital processor operating independently of other CUs. The CU, which is the main building block of the system, is described in some detail. A major contributor to the high performance of the system is the use of a unique method for transferring data among the CUs. The software aspects of WEST-3 covered in the report include the preparation of the simulation model (reformulation, scaling and normalization), and the use of the system software (Translator, Linker, Assembler and Loader). Also given is a description of the wind turbine simulation model used in WEST-3, and some sample results from a study conducted to validate the system. Finally, efforts currently underway to enhance the user friendliness of the system are outlined; these include the 32-bit floating point capability, and major improvements in system software.

  9. Large Eddy Simulation of Flow and Sediment Transport over Dunes

    NASA Astrophysics Data System (ADS)

    Agegnehu, G.; Smith, H. D.

    2012-12-01

    Understanding the nature of flow over bedforms is of great importance in fluvial and coastal environments. For example, a bedform is one source of energy dissipation in water waves outside the surf zone in coastal environments. In rivers, the migration of dunes often affects the stability of the river bed and banks. In general, when a fluid flows over a sediment bed, the sediment transport generated by the interaction of the flow field with the bed results in the periodic deformation of the bed in the form of dunes. Dunes generally reach an equilibrium shape, and slowly propagate in the direction of the flow, as sand is lifted in the high shear regions, and redeposited in the separated flow areas. Different numerical approaches have been used in the past to study the flow and sediment transport over bedforms. In most research works, Reynolds Averaged Navier Stokes (RANS) equations are employed to study fluid motions over ripples and dunes. However, evidence suggests that these models cannot represent key turbulent quantities in unsteady boundary layers. The use of Large Eddy Simulation (LES) can resolve a much larger range of smaller scales than RANS. Moreover, unsteady simulations using LES give vital turbulent quantities which can help to study fluid motion and sediment transport over dunes. For this study, we use a three-dimensional, non-hydrostatic model, OpenFOAM. It is a freely available tool which has different solvers to simulate specific problems in engineering and fluid mechanics. Our objective is to examine the flow and sediment transport from a numerical standpoint for bed geometries that are typical of fixed dunes. As a first step, we performed Large Eddy Simulation of the flow over dune geometries based on the experimental data of Nelson et al. (1993). The instantaneous flow field is investigated with special emphasis on the occurrence of coherent structures. To assess the effect of bed geometries on near bed turbulence, we considered different

  10. Evaluation of the pressure-volume-temperature (PVT) data of water from experiments and molecular simulations since 1990

    NASA Astrophysics Data System (ADS)

    Guo, Tao; Hu, Jiawen; Mao, Shide; Zhang, Zhigang

    2015-08-01

    Since 1990, many groups of pressure-volume-temperature (PVT) data from experiments and molecular dynamics (MD) or Monte Carlo (MC) simulations have been reported for supercritical and subcritical water. In this work, fifteen groups of PVT data (253.15-4356 K and 0-90.5 GPa) are evaluated in detail with the aid of the highly accurate IAPWS-95 formulation. The evaluation gives the following results: (1) Six datasets are found to be of good accuracy. They include the simulated results based on the SPCE potential above 100 MPa and those derived from sound velocity measurements, but the simulated results below 100 MPa have large uncertainties. (2) The data from measurements with a piston-cylinder apparatus and simulations with an exp-6 potential contain large uncertainties and systematic deviations. (3) The other seven datasets show obvious systematic deviations. They include those from experiments with synthesized fluid inclusion techniques (three groups), measured velocities of sound (one group), and automated high-pressure dilatometer (one group) and simulations with the TIP4P potential (two groups), where the simulated data based on the TIP4P potential below 200 MPa have large uncertainties. (4) The simulated data, except those below 1 GPa, agree with each other within 2-3%, and mostly within 2%. The data from fluid inclusions show similar systematic deviations, which are less than 2-5%. The data obtained with the automated high-pressure dilatometer and those derived from sound velocity measurements agree with each other within 0.3-0.6% in most cases, except for those above 10 GPa. In principle, the systematic deviations mentioned above, except for those of the simulated data below 1 GPa, can be largely eliminated or significantly reduced by appropriate corrections, and then the accuracy of the relevant data can be improved significantly. These are very important for the improvement of experiments or simulations and the refinement and correct use of the PVT data in developing

  11. Hyperbolic self-gravity solver for large scale hydrodynamical simulations

    NASA Astrophysics Data System (ADS)

    Hirai, Ryosuke; Nagakura, Hiroki; Okawa, Hirotada; Fujisawa, Kotaro

    2016-04-01

    A new computationally efficient method has been introduced to treat self-gravity in Eulerian hydrodynamical simulations. It is applied simply by modifying the Poisson equation into an inhomogeneous wave equation. This roughly corresponds to the weak-field limit of the Einstein equations in general relativity, and as long as the propagation speed of gravity is taken to be larger than the hydrodynamical characteristic speed, the results agree with solutions of the Poisson equation. The solutions agree almost perfectly if the domain is taken large enough, or appropriate boundary conditions are given. Our new method can not only significantly reduce the computational time compared with existing methods, but is also fully compatible with massively parallel computation, nested grids, and adaptive mesh refinement techniques, all of which can accelerate progress in computational astrophysics and cosmology.
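    The core trick described above can be sketched in a few lines: replace the elliptic solve with a damped wave equation whose steady state satisfies the same discrete Poisson problem. The 1D sketch below is illustrative only (the grid size, damping term, and Gaussian source are invented, not taken from the paper); it relaxes the wave equation and checks the result against a direct tridiagonal solve.

```python
import numpy as np

# Sketch: relax  phi_tt + k*phi_t = c^2 * (phi_xx - S)  to steady state,
# where it satisfies the 1D Poisson equation  phi_xx = S  (Dirichlet BCs).
n, L = 101, 1.0
dx = L / (n - 1)
x = np.linspace(0.0, L, n)
S = np.exp(-((x - 0.5) / 0.05) ** 2)          # smooth "density" source

# Reference solution: direct tridiagonal solve of phi_xx = S.
A = (np.diag(-2.0 * np.ones(n - 2)) +
     np.diag(np.ones(n - 3), 1) + np.diag(np.ones(n - 3), -1)) / dx ** 2
phi_direct = np.zeros(n)
phi_direct[1:-1] = np.linalg.solve(A, S[1:-1])

# Hyperbolic relaxation: march the damped wave equation until stationary.
c, k = 1.0, 6.0                                # "gravity" speed, damping rate
dt = 0.4 * dx / c                              # CFL-limited time step
phi, v = np.zeros(n), np.zeros(n)
for _ in range(40000):
    lap = np.zeros(n)
    lap[1:-1] = (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / dx ** 2
    v += dt * (c ** 2 * (lap - S) - k * v)     # local, explicit update
    phi += dt * v
    phi[0] = phi[-1] = 0.0                     # Dirichlet boundaries

rel_err = np.max(np.abs(phi - phi_direct)) / np.max(np.abs(phi_direct))
```

    Because the relaxation uses only local nearest-neighbour updates, it parallelizes and nests naturally, which is the property the authors exploit.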

  12. Large-eddy simulation of cavitating nozzle and jet flows

    NASA Astrophysics Data System (ADS)

    Örley, F.; Trummler, T.; Hickel, S.; Mihatsch, M. S.; Schmidt, S. J.; Adams, N. A.

    2015-12-01

    We present implicit large-eddy simulations (LES) to study the primary breakup of cavitating liquid jets. The considered configuration, which consists of a rectangular nozzle geometry, adopts the setup of a reference experiment for validation. The setup is a generic reproduction of a scaled-up automotive fuel injector. Modelling of all components (i.e. gas, liquid, and vapor) is based on a barotropic two-fluid two-phase model and employs a homogeneous mixture approach. The cavitating liquid model assumes thermodynamic equilibrium. Compressibility of all phases is considered in order to capture the pressure wave dynamics of collapse events. Since the development of cavitation significantly affects jet break-up characteristics, we study three different operating points. We identify three main mechanisms which induce primary jet break-up: amplification of turbulent fluctuations, gas entrainment, and collapse events near the liquid-gas interface.

  13. Large-eddy simulation of turbulence in steam generators

    SciTech Connect

    Bagwell, T.G.; Hassan, Y.A.; Steininger, D.A.

    1989-11-01

    A major problem associated with steam generators is excessive tube vibration caused by turbulent-flow buffeting and fluid-elastic excitation. Vibration can lead to tube rupture or wear, necessitating tube plugging and reducing the availability of the steam generator. The fluid/structure interaction phenomenon that causes fluid-elastic tube excitation is unknown at present. The current investigation defines the spectral characteristics of turbulent flow entering the Westinghouse D4 steam generator tube bundles using the large-eddy simulation (LES) technique. Due to the recent availability of supercomputers, LES is being considered as a possible engineering design analysis tool. The information from this study will provide input for defining the temporally fluctuating forces on steam generator tube banks. The GUST code was used to analyze the water box of a Westinghouse model D4 steam generator.

  14. High Speed Jet Noise Prediction Using Large Eddy Simulation

    NASA Technical Reports Server (NTRS)

    Lele, Sanjiva K.

    2002-01-01

    Current methods for predicting the noise of high speed jets are largely empirical. These empirical methods are based on jet noise data gathered by varying primarily the jet flow speed and jet temperature for a fixed nozzle geometry. Efforts have been made to incorporate the noise data of co-annular (multi-stream) jets and the changes associated with forward flight into these empirical correlations. But ultimately these empirical methods fail to provide suitable guidance in the selection of new, low-noise nozzle designs. This motivates the development of a new class of prediction methods which are based on computational simulations, in an attempt to remove the empiricism of present day noise predictions.

  15. Arc plasma simulation of the KAERI large ion source

    NASA Astrophysics Data System (ADS)

    In, S. R.; Jeong, S. H.; Kim, T. S.

    2008-02-01

    The KAERI large ion source, developed for the KSTAR NBI system, recently produced ion beams at the 100 keV, 50 A level in the first half campaign of 2007. These results seem to be the best performance of the present ion source at a maximum available input power of 145 kW. A slight improvement in the ion source is certainly necessary to attain the final goal of an 8 MW ion beam. First, the experimental results were analyzed to differentiate the cause and effect of the insufficient beam currents. Second, a zero-dimensional simulation was carried out on the ion source plasma to identify which factors control the arc plasma and to find out what improvements can be expected.

  16. Numerical aerodynamic simulation facility preliminary study, volume 1

    NASA Technical Reports Server (NTRS)

    1977-01-01

    A technology forecast was established for the 1980-1985 time frame and the appropriateness of various logic and memory technologies for the design of the numerical aerodynamic simulation facility was assessed. Flow models and their characteristics were analyzed and matched against candidate processor architecture. Metrics were established for the total facility, and housing and support requirements of the facility were identified. An overview of the system is presented, with emphasis on the hardware of the Navier-Stokes solver, which is the key element of the system. Software elements of the system are also discussed.

  17. Large-N volume independence in conformal and confining gauge theories

    SciTech Connect

    Unsal, Mithat; Yaffe, Laurence G. (Washington U., Seattle)

    2010-08-26

    Consequences of large N volume independence are examined in conformal and confining gauge theories. In the large N limit, gauge theories compactified on R^(d-k) x (S^1)^k are independent of the S^1 radii, provided the theory has unbroken center symmetry. In particular, this implies that a large N gauge theory which, on R^d, flows to an IR fixed point, retains the infinite correlation length and other scale invariant properties of the decompactified theory even when compactified on R^(d-k) x (S^1)^k. In other words, finite volume effects are 1/N suppressed. In lattice formulations of vector-like theories, this implies that numerical studies to determine the boundary between confined and conformal phases may be performed on one-site lattice models. In N = 4 supersymmetric Yang-Mills theory, the center symmetry realization is a matter of choice: the theory on R^(4-k) x (S^1)^k has a moduli space which contains points with all possible realizations of center symmetry. Large N QCD with massive adjoint fermions and one or two compactified dimensions has a rich phase structure with an infinite number of phase transitions coalescing in the zero radius limit.

  18. Diurnal fluctuations in brain volume: Statistical analyses of MRI from large populations.

    PubMed

    Nakamura, Kunio; Brown, Robert A; Narayanan, Sridar; Collins, D Louis; Arnold, Douglas L

    2015-09-01

    We investigated fluctuations in brain volume throughout the day using statistical modeling of magnetic resonance imaging (MRI) from large populations. We applied fully automated image analysis software to measure the brain parenchymal fraction (BPF), defined as the ratio of the brain parenchymal volume and intracranial volume, thus accounting for variations in head size. The MRI data came from serial scans of multiple sclerosis (MS) patients in clinical trials (n=755, 3269 scans) and from subjects participating in the Alzheimer's Disease Neuroimaging Initiative (ADNI, n=834, 6114 scans). The percent change in BPF was modeled with a linear mixed effect (LME) model, and the model was applied separately to the MS and ADNI datasets. The LME model for the MS datasets included random subject effects (intercept and slope over time) and fixed effects for the time-of-day, time from the baseline scan, and trial, which accounted for trial-related effects (for example, different inclusion criteria and imaging protocols). The model for ADNI additionally included demographics (baseline age, sex, subject type [normal, mild cognitive impairment, or Alzheimer's disease], and the interaction between subject type and time from baseline). There was a statistically significant effect of time-of-day on the BPF change in the MS clinical trial datasets (-0.180% of intracranial volume per day, p=0.019) as well as the ADNI dataset (-0.438% of intracranial volume per day, p<0.0001), showing that the brain volume is greater in the morning. Linearly correcting the BPF values for the time-of-day reduced the sample size required to detect a 25% treatment effect (80% power and 0.05 significance level) on change in brain volume from 2 time-points over a period of 1 year by 2.6%. 
Our results have significant implications for future brain volumetric studies, suggesting that there is a potential acquisition time bias that should be randomized or statistically controlled to
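    The core idea above, separating a within-subject time-of-day slope from subject-level offsets and then correcting for it, can be sketched on simulated data. The sketch below uses a simple within-subject (fixed-effects) estimator as a stand-in for the paper's full LME model; all numbers except the -0.18%/day effect size quoted in the abstract are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_scans = 50, 6
true_effect = -0.18                 # % of intracranial volume per day (from the abstract)

subj = np.repeat(np.arange(n_subj), n_scans)
tod = rng.uniform(0.3, 0.8, subj.size)            # scan time as a fraction of the day
offsets = rng.normal(80.0, 1.0, n_subj)           # subject-specific BPF levels (%)
bpf = offsets[subj] + true_effect * tod + rng.normal(0.0, 0.01, subj.size)

def demean(v):
    """Subtract each subject's own mean (the within-subject transform)."""
    means = np.bincount(subj, weights=v) / np.bincount(subj)
    return v - means[subj]

# Within-subject slope of BPF on time-of-day, then a linear correction.
slope = demean(bpf) @ demean(tod) / (demean(tod) @ demean(tod))
bpf_corrected = bpf - slope * tod
```

    Demeaning removes the subject-specific intercepts exactly, so the slope recovers the time-of-day effect even though between-subject variation dwarfs it.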

  19. Large-eddy simulation of the turbulent flow in the downstream region of a backward-facing step

    NASA Astrophysics Data System (ADS)

    Silveira Neto, A.; Grand, D.; Metais, O.; Lesieur, M.

    1991-05-01

    A numerical simulation of a complex turbulent shear flow using large-eddy simulation techniques is carried out. The filtered Navier-Stokes equations are solved with a finite-volume method. The subgrid model is a local adaptation to the physical space of isotropic spectral eddy-viscosity models. The statistics of the mean field are in good agreement with the experimental data available, corresponding to a low step. Calculations in a high-step case show that the eddy structure of the flow presents striking analogies with the plane shear layers, with large billows shed behind the step, and longitudinal hairpin vortices strained between these billows.

  20. Detection and Volume Estimation of Large Landslides by Using Multi-temporal Remote Sensing Data

    NASA Astrophysics Data System (ADS)

    Hsieh, Yu-chung; Hou, Chin-Shyong; Chan, Yu-Chang; Hu, Jyr-Ching; Fei, Li-Yuan; Chen, Hung-Jen; Chiu, Cheng-Lung

    2014-05-01

    Large landslides are frequently triggered by strong earthquakes and heavy rainfall in the mountainous areas of Taiwan. The heavy rainfall brought by Typhoon Morakot triggered a large number of landslides. The most unfortunate case occurred in Xiaolin village, which was totally demolished by a catastrophic landslide in less than a minute. Continued and detailed study of the characteristics of large landslides is urgently needed to mitigate loss of lives and properties in the future. Traditionally known techniques cannot effectively extract landslide parameters, such as depth, amount and volume, which are essential in all the phases of landslide assessment. In addition, it is very important to record the changes of landslide deposits after the landslide events as accurately as possible to better understand the landslide erosion process. The acquisition of digital elevation models (DEMs) is considered necessary for achieving accurate, effective and quantitative landslide assessments. A new technique is presented in this study for quickly assessing extensive areas of large landslides. The technique uses DEMs extracted from several remote sensing approaches, including aerial photogrammetry, airborne LiDAR and UAV photogrammetry. We chose a large landslide event that occurred after Typhoon Sinlaku at Mount Meiyuan, central Taiwan, in 2008. We collected and processed six data sets, including aerial photos, airborne LiDAR data and UAV photos, at different times from 2005 to 2013. Our analyses show a landslide volume of 17.14 × 10^6 cubic meters, a deposition volume of 12.75 × 10^6 cubic meters, and about 4.38 × 10^6 cubic meters washed out of the region. The residual deposition ratio of this area is about 74% in 2008; after a few years, the residual deposition ratio drops below 50%. We also analyzed riverbed changes and sediment transfer patterns from 2005 to 2013 using multi-temporal remote sensing data with desirable accuracy. The developed
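    The volume bookkeeping in this kind of study is ordinary DEM differencing. A minimal synthetic sketch (the grid size, cell size, and scar/deposit geometry below are invented for illustration, not the Meiyuan data):

```python
import numpy as np

cell = 10.0                            # DEM cell size in metres (illustrative)
dh = np.zeros((100, 100))              # post-event minus pre-event elevation (m)
dh[10:30, 10:30] = -5.0                # a 5 m deep source scar over 400 cells
dh[60:80, 10:30] = 3.0                 # a 3 m thick deposit over 400 cells

erosion = -dh[dh < 0.0].sum() * cell ** 2      # volume evacuated from the scar
deposition = dh[dh > 0.0].sum() * cell ** 2    # volume stored on the slope
washed_out = erosion - deposition              # volume exported from the reach
residual_ratio = deposition / erosion          # residual deposition ratio
```

    Applied to co-registered multi-temporal DEMs, the same sums yield erosion, deposition, and export volumes of the kind reported in the entry.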

  1. Management of large volume CT contrast medium extravasation injury: technical refinement and literature review.

    PubMed

    Schaverien, Mark V; Evison, Demetrius; McCulley, Stephen J

    2008-01-01

    The incidence of radiographic contrast medium extravasation is on the rise due to the rapid increase in availability of contrast enhanced imaging. There is no consensus, however, regarding its management. There is a wide spectrum of clinical presentations, ranging from localised erythema and oedema to skin necrosis, which is related to the osmolarity and volume of the extravasated contrast medium. It is not possible to predict the degree of final tissue injury at initial examination. The increase in use of automated bolus injection has led to an increase in incidence of large volume extravasation injuries. Here we present a review of the literature regarding clinical presentation, risk factors, and management of contrast extravasation injuries. We also report the management of a large volume computed tomography contrast extravasation injury following mechanical bolus injection using a combination of liposuction and saline washout as described by Gault, and the use of compression by a Rhys-Davies exsanguinator as a technical refinement to achieve immediate resolution of the soft tissue oedema. PMID:17459795

  2. Direct Numerical Simulation of Stable Channel Flow at Large Stability

    NASA Astrophysics Data System (ADS)

    Nieuwstadt, F. T. M.

    2005-08-01

    We consider a model for the stable atmospheric boundary layer at large stability, i.e. near the limit where turbulence is no longer able to survive. The model is a plane horizontally homogeneous channel flow, which is driven by a constant pressure gradient and which has a no-slip wall at the bottom and a free-slip wall at the top. At the lower wall a constant negative temperature flux is imposed. First, we consider a direct numerical simulation of this channel flow. The simulation is started from the neutral channel flow as initial condition and run as a function of time for various values of the stability parameter h/L, where h is the channel height and L is related to the Obukhov length. We find that a turbulent solution is only possible for h/L < 1.25; for larger values turbulence decays. Next, we consider a theoretical model for this channel flow based on a simple gradient transfer closure. The resulting equations allow an exact solution for the case of a stationary flow. The velocity profile for this solution is almost linear as a function of height in most of the channel. In the limit of infinite Reynolds number, the temperature profile has a logarithmic singularity at the upper wall of the channel. For the cases where a turbulent flow is maintained in the numerical simulation, we find that the velocity and temperature profiles are in good agreement with the results of the theoretical model when the effects of the surface layer on the exchange coefficients are taken into account.

  3. Large eddy simulation of a lifted turbulent jet flame

    SciTech Connect

    Ferraris, S.A.; Wen, J.X.

    2007-09-15

    The flame index concept for large eddy simulation developed by Domingo et al. [P. Domingo, L. Vervisch, K. Bray, Combust. Theory Modell. 6 (2002) 529-551] is used to capture the partially premixed structure at the leading point and the dual combustion regimes further downstream in a turbulent lifted flame, which is composed of premixed and nonpremixed flame elements, each separately described under a flamelet assumption. Predictions are made for the lifted methane/air jet flame experimentally studied by Mansour [M.S. Mansour, Combust. Flame 133 (2003) 263-274]. The simulation covers a wide domain from the jet exit to the far flow field. Good agreement with the data for the lift-off height and the mean mixture fraction has been achieved. The model has also captured the double flames, showing a configuration similar to that of the experiment, which involves a rich premixed branch at the jet center and a diffusion branch in the outer region that meet at the so-called triple point at the flame base. This basic structure is contorted by eddies coming from the jet exit but remains stable at the lift-off height. No lean premixed branches are observed in the simulation or the experiment. Further analysis of the stabilization mechanism was conducted. A distinction between the leading point (the most upstream point of the flame) and the stabilization point was made. The latter was identified as the position with the maximum premixed heat release. This is in line with the stabilization mechanism proposed by Upatnieks et al. [A. Upatnieks, J. Driscoll, C. Rasmussen, S. Ceccio, Combust. Flame 138 (2004) 259-272].

  4. Large eddy simulation of a plane turbulent wall jet

    NASA Astrophysics Data System (ADS)

    Dejoan, A.; Leschziner, M. A.

    2005-02-01

    The mean-flow and turbulence properties of a plane wall jet, developing in a stagnant environment, are studied by means of large eddy simulation. The Reynolds number, based on the inlet velocity Uo and the slot height b, is Re=9600, corresponding to recent well-resolved laser Doppler velocimetry and pulsed hot wire measurements of Eriksson et al. The relatively low Reynolds number and the high numerical resolution adopted (8.4 million nodes) allow all scales larger than about 10 Kolmogorov lengths to be captured. Of particular interest are the budgets for turbulence energy and Reynolds stresses, not available from experiments, and their inclusion sheds light on the processes which play a role in the interaction between the near-wall layer and the outer shear layer. Profiles of velocity and turbulent Reynolds stresses in the self-similar region are presented in inner and outer scaling and compared to experimental data. Included are further results for skin friction, evolution of integral quantities and third-order moments. Good agreement is observed, in most respects, between the simulated flow and the corresponding experiment. The budgets demonstrate, among a number of mechanisms, the decisive role played by turbulent transport (via the third moments) in the interaction region, across which information is transmitted between the near-wall layer and the outer layer.

  5. Large eddy simulation of boundary layer flow under cnoidal waves

    NASA Astrophysics Data System (ADS)

    Li, Yin-Jun; Chen, Jiang-Bo; Zhou, Ji-Fu; Zhang, Qiang

    2016-02-01

    Water waves in coastal areas are generally nonlinear, exhibiting asymmetric velocity profiles with different amplitudes of crest and trough. The behaviors of the boundary layer under asymmetric waves are of great significance for sediment transport in natural circumstances. While previous studies have mainly focused on linear or symmetric waves, asymmetric wave-induced flows remain unclear, particularly in the flow regime with high Reynolds numbers. Taking cnoidal wave as a typical example of asymmetric waves, we propose to use an infinite immersed plate oscillating cnoidally in its own plane in quiescent water to simulate asymmetric wave boundary layer. A large eddy simulation approach with Smagorinsky subgrid model is adopted to investigate the flow characteristics of the boundary layer. It is verified that the model well reproduces experimental and theoretical results. Then a series of numerical experiments are carried out to study the boundary layer beneath cnoidal waves from laminar to fully developed turbulent regimes at high Reynolds numbers, larger than ever studied before. Results of velocity profile, wall shear stress, friction coefficient, phase lead between velocity and wall shear stress, and the boundary layer thickness are obtained. The dependencies of these boundary layer properties on the asymmetric degree and Reynolds number are discussed in detail.
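    For reference, the laminar limit that oscillatory boundary-layer simulations of this kind are typically validated against is the Stokes layer, whose thickness and wall-shear phase lead follow directly from the viscosity and wave period. The values below are illustrative assumptions, not numbers from the paper:

```python
import numpy as np

nu = 1.0e-6                      # kinematic viscosity of water, m^2/s
T = 8.0                          # wave period, s (illustrative)
U = 1.0                          # free-stream velocity amplitude, m/s (illustrative)

omega = 2.0 * np.pi / T
delta_s = np.sqrt(2.0 * nu / omega)   # laminar Stokes-layer thickness, m
a = U / omega                         # orbital amplitude, m
Re_a = U * a / nu                     # amplitude Reynolds number
phase_lead_deg = 45.0                 # wall shear leads the free stream (laminar limit)
```

    An amplitude Reynolds number above 10^6, as here, sits in the fully turbulent regime the study targets, where both the 45° phase lead and the laminar thickness estimate break down.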

  6. Multi-Resolution Modeling of Large Scale Scientific Simulation Data

    SciTech Connect

    Baldwin, C; Abdulla, G; Critchlow, T

    2003-01-31

    This paper discusses using the wavelet modeling technique as a mechanism for querying large-scale spatio-temporal scientific simulation data. Wavelets have been used successfully in time series analysis and in answering surprise and trend queries. Our approach, however, is driven by the need for compression, which is necessary for viable throughput given the size of the targeted data, along with the end user requirements of the discovery process. Our users would like to run fast queries to check the validity of the simulation algorithms used. In some cases users are willing to accept approximate results if the answer comes back within a reasonable time. In other cases they might want to identify a certain phenomenon and track it over time. We face a unique problem because of the data set sizes. It may take months to generate one set of the targeted data; because of its sheer size, the data cannot be stored on disk for long and thus needs to be analyzed immediately before it is sent to tape. We integrated wavelets within AQSIM, a system that we are developing to support exploration and analyses of tera-scale size data sets. We discuss the way we utilized wavelet decomposition in our domain to facilitate compression and to answer a specific class of queries that is harder to answer with any other modeling technique. We also discuss some of the shortcomings of our implementation and how to address them.
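    A minimal sketch of the compress-then-query idea: an orthonormal Haar transform, hard thresholding of small coefficients, and an approximate range-mean query answered from the compressed representation. The signal and the ~10% retention rate are illustrative; AQSIM's actual machinery is not reproduced here.

```python
import numpy as np

def haar_fwd(x):
    """Orthonormal multi-level Haar transform; len(x) must be a power of two."""
    coeffs, a = [], x.copy()
    while a.size > 1:
        coeffs.append((a[0::2] - a[1::2]) / np.sqrt(2.0))   # detail
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)              # approximation
    coeffs.append(a)
    return coeffs

def haar_inv(coeffs):
    a = coeffs[-1]
    for d in reversed(coeffs[:-1]):
        out = np.empty(2 * a.size)
        out[0::2] = (a + d) / np.sqrt(2.0)
        out[1::2] = (a - d) / np.sqrt(2.0)
        a = out
    return a

# A smooth "simulation trace" stands in for one variable of the data set.
t = np.linspace(0.0, 1.0, 1024)
signal = np.sin(2.0 * np.pi * t) + 0.5 * t

# Compress: keep only the largest ~10% of coefficients, zero the rest.
coeffs = haar_fwd(signal)
flat = np.concatenate(coeffs)
thresh = np.sort(np.abs(flat))[int(0.9 * flat.size)]
kept = [np.where(np.abs(c) >= thresh, c, 0.0) for c in coeffs]

# Approximate range-mean query answered from the compressed representation.
recon = haar_inv(kept)
approx_mean = recon[100:200].mean()
exact_mean = signal[100:200].mean()
```

    Because coarse-level coefficients dominate a smooth field, aggregate queries such as range means remain accurate even at aggressive compression ratios.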

  7. Large eddy simulation for aerodynamics: status and perspectives.

    PubMed

    Sagaut, Pierre; Deck, Sébastien

    2009-07-28

    The present paper provides an up-to-date survey of the use of large eddy simulation (LES) and its sequels for engineering applications related to aerodynamics. The most recent landmark achievements are presented. Two categories of problem may be distinguished, depending on whether or not the location of separation is triggered by the geometry. In the first case, LES can be considered a mature technique, and recent hybrid Reynolds-averaged Navier-Stokes (RANS)-LES methods do not allow for a significant increase in geometrical complexity and/or Reynolds number with respect to classical LES. When attached boundary layers have a significant impact on the global flow dynamics, the use of hybrid RANS-LES remains the principal strategy to reduce computational cost compared to LES. Another striking observation is that the level of validation is most of the time restricted to time-averaged global quantities, a detailed analysis of the flow unsteadiness being missing. Therefore, a clear need for detailed validation in the near future is identified. To this end, new issues, such as uncertainty and error quantification and modelling, will be of major importance. First results dealing with uncertainty modelling in unsteady turbulent flow simulation are presented.

  8. Surface detection, meshing and analysis during large molecular dynamics simulations

    SciTech Connect

    Dupuy, L M; Rudd, R E

    2005-08-01

    New techniques are presented for the detection and analysis of surfaces and interfaces in atomistic simulations of solids. Atomistic and other particle-based simulations have no inherent notion of a surface, only atomic positions and interactions. The algorithms we introduce here provide an unambiguous means to determine which atoms constitute the surface, and the list of surface atoms and a tessellation (meshing) of the surface are determined simultaneously. The algorithms have been implemented and demonstrated to run automatically (on the fly) in a large-scale parallel molecular dynamics (MD) code on a supercomputer. We demonstrate the validity of the method in three applications in which the surfaces and interfaces evolve: void surfaces in ductile fracture, the surface morphology due to significant plastic deformation of a nanoscale metal plate, and the interfaces (grain boundaries) and void surfaces in a nanoscale polycrystalline system undergoing ductile failure. The technique is found to be quite robust, even when the topology of the surfaces changes as in the case of void coalescence where two surfaces merge into one. It is found to add negligible computational overhead to an MD code, and is much less expensive than other techniques such as the solvent-accessible surface.
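    A toy version of the detection step (not the authors' algorithm, which also produces a surface mesh): flag atoms whose coordination number falls below the bulk value. The lattice, cutoff, and bulk coordination below are for a simple-cubic crystallite and are purely illustrative.

```python
import numpy as np

# A 6x6x6 simple-cubic "crystallite" with unit lattice spacing.
n = 6
g = np.arange(n, dtype=float)
pos = np.array(np.meshgrid(g, g, g, indexing="ij")).reshape(3, -1).T

# Coordination number: neighbours closer than a cutoff placed between the
# first (1.0) and second (sqrt(2)) neighbour shells.
d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
coordination = ((d > 1e-9) & (d < 1.2)).sum(axis=1)

# Bulk simple-cubic atoms have 6 nearest neighbours; fewer means surface.
surface_mask = coordination < 6
```

    The all-pairs distance matrix is only workable for toy sizes; a production MD code would use cell lists or neighbour lists for the same test at negligible cost, as the entry notes.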

  9. Numerical techniques for large cosmological N-body simulations

    NASA Technical Reports Server (NTRS)

    Efstathiou, G.; Davis, M.; White, S. D. M.; Frenk, C. S.

    1985-01-01

    Techniques for carrying out large N-body simulations of the gravitational evolution of clustering in the fundamental cube of an infinite periodic universe are described and compared. The accuracy of the forces derived from several commonly used particle mesh schemes is examined, showing how submesh resolution can be achieved by including short-range forces between particles by direct summation techniques. The time integration of the equations of motion is discussed, and the accuracy of the codes for various choices of 'time' variable and time step is tested by considering energy conservation as well as by direct analysis of particle trajectories. Methods for generating initial particle positions and velocities corresponding to a growing mode representation of a specified power spectrum of linear density fluctuations are described. The effects of force resolution are studied and different simulation schemes are compared. An algorithm is implemented for generating initial conditions by varying the number of particles, the initial amplitude of density fluctuations, and the initial peculiar velocity field.

  10. Nesting large-eddy simulations within mesoscale simulations for wind energy applications

    SciTech Connect

    Lundquist, J K; Mirocha, J D; Chow, F K; Kosovic, B; Lundquist, K A

    2008-09-08

    With increasing demand for more accurate atmospheric simulations for wind turbine micrositing, for operational wind power forecasting, and for more reliable turbine design, simulations of atmospheric flow with resolution of tens of meters or higher are required. These time-dependent large-eddy simulations (LES), which resolve individual atmospheric eddies on length scales smaller than turbine blades and account for complex terrain, are possible with a range of commercial and open-source software, including the Weather Research and Forecasting (WRF) model. In addition to 'local' sources of turbulence within an LES domain, changing weather conditions outside the domain can also affect the flow, suggesting that a mesoscale model should provide boundary conditions to the large-eddy simulations. Nesting a large-eddy simulation within a mesoscale model requires nuanced representations of turbulence. Our group has improved the WRF model's LES capability by implementing the Nonlinear Backscatter and Anisotropy (NBA) subfilter stress model following Kosovic (1997) and an explicit filtering and reconstruction technique to compute the Resolvable Subfilter-Scale (RSFS) stresses (following Chow et al., 2005). We have also implemented an immersed boundary method (IBM) in WRF to accommodate complex terrain. These new models improve WRF's LES capabilities over complex terrain and in stable atmospheric conditions. We demonstrate approaches to nesting LES within a mesoscale simulation for farms of wind turbines in hilly regions. Results are sensitive to the nesting method, indicating that care must be taken to provide appropriate boundary conditions and to allow adequate spin-up of turbulence in the LES domain.

  11. Finite volume simulation for convective heat transfer in wavy channels

    NASA Astrophysics Data System (ADS)

    Aslan, Erman; Taymaz, Imdat; Islamoglu, Yasar

    2016-03-01

    The convective heat transfer characteristics of a periodic wavy channel have been investigated experimentally and numerically. A finite volume method was used in the numerical study, and the experimental results were used to validate the numerical results. Studies were conducted for air flow conditions where the contact angle is 30° and a uniform heat flux of 616 W/m2 is applied as the thermal boundary condition. The Reynolds number (Re) is varied from 2000 to 11,000 and the Prandtl number (Pr) is taken as 0.7. The Nusselt number (Nu), Colburn factor (j), friction factor (f) and goodness factor (j/f) have been studied as functions of the Reynolds number. The effects of the wave geometry and minimum channel height have been discussed, and the best flow and heat transfer performance of the wavy channels was thereby determined. Additionally, the computed convective heat transfer coefficients were found to be in good agreement with the experimental results for the converging-diverging channel. Therefore, numerical results can be used for these channel geometries instead of experimental results.
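    The derived quantities are simple algebraic combinations of Nu, Re and Pr. As a hedged sketch, the smooth-channel Dittus-Boelter and Blasius correlations below stand in for the paper's wavy-channel data, purely to show how j, f and j/f are formed:

```python
import numpy as np

Re = np.array([2000.0, 5000.0, 11000.0])   # span of the study's Re range
Pr = 0.7

# Stand-in smooth-channel correlations (Dittus-Boelter, Blasius) -- NOT the
# paper's wavy-channel results -- used only to show how the factors are formed.
Nu = 0.023 * Re ** 0.8 * Pr ** 0.4
f = 0.079 * Re ** -0.25

j = Nu / (Re * Pr ** (1.0 / 3.0))          # Colburn factor
goodness = j / f                           # area "goodness" factor
```

    Comparing j/f between candidate geometries at matched Re is the usual way to judge whether extra heat transfer is worth the extra pressure drop.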

  12. A large high vacuum, high pumping speed space simulation chamber for electric propulsion

    NASA Technical Reports Server (NTRS)

    Grisnik, Stanley P.; Parkes, James E.

    1994-01-01

    Testing high power electric propulsion devices poses unique requirements on space simulation facilities. Very high pumping speeds are required to maintain high vacuum levels while handling large volumes of exhaust products. These pumping speeds are significantly higher than those available in most existing vacuum facilities. There is also a requirement for relatively large vacuum chamber dimensions to minimize facility wall/thruster plume interactions and to accommodate far field plume diagnostic measurements. A 4.57 m (15 ft) diameter by 19.2 m (63 ft) long vacuum chamber at NASA Lewis Research Center is described. The chamber utilizes oil diffusion pumps in combination with cryopanels to achieve high vacuum pumping speeds at high vacuum levels. The facility is computer controlled for all phases of operation from start-up, through testing, to shutdown. The computer control system increases the utilization of the facility and reduces the manpower requirements needed for facility operations.

  13. A pyramid-based approach to visual exploration of a large volume of vehicle trajectory data

    NASA Astrophysics Data System (ADS)

    Sun, Jing; Li, Xiang

    2012-12-01

    Advances in positioning and wireless communication technologies make it possible to collect large volumes of trajectory data of moving vehicles in a fast and convenient fashion. These data can be applied to traffic studies. Behind this application, a methodological issue that still requires particular attention is the way these data should be spatially visualized. Trajectory data physically consist of a large number of positioning points. With the dramatic increase of data volume, it becomes a challenge to display and explore these data. Existing commercial software often employs vector-based indexing structures to facilitate the display of a large volume of points, but their performance degrades quickly when the number of points is very large, for example, tens of millions. In this paper, a pyramid-based approach is proposed. The pyramid method was originally invented to facilitate the display of raster images through a tradeoff between storage space and display time. A pyramid is a set of images at different levels with different resolutions. In this paper, we convert vector-based point data into raster data and build a grid-based indexing structure in a 2D plane. Then, an image pyramid is built. Moreover, at each level of the pyramid, the image is segmented into mosaics with respect to the requirements of data storage and management. Algorithms and procedures for the grid-based indexing structure, image pyramid, image segmentation, and visualization operations are given in this paper. A case study with taxi trajectory data in Shanghai is conducted. Results demonstrate that the proposed method outperforms the existing commercial software.
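    The rasterize-then-pyramid pipeline can be sketched in a few lines (random points stand in for taxi trajectory data; the grid size and level count are illustrative, and the paper's mosaic segmentation is omitted):

```python
import numpy as np

rng = np.random.default_rng(1)
pts = rng.random((100_000, 2))             # stand-in for GPS positioning points
grid, _, _ = np.histogram2d(pts[:, 0], pts[:, 1], bins=256)   # rasterisation

def build_pyramid(img):
    """Halve resolution by 2x2 averaging until a single cell remains."""
    levels = [img]
    while levels[-1].shape[0] > 1:
        a = levels[-1]
        levels.append(0.25 * (a[0::2, 0::2] + a[1::2, 0::2] +
                              a[0::2, 1::2] + a[1::2, 1::2]))
    return levels

levels = build_pyramid(grid)               # 256, 128, ..., 2, 1 cells per side
```

    A viewer then picks the pyramid level matching the current zoom, so display cost depends on screen resolution rather than on the number of raw points.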

  14. Very Large Area/Volume Microwave ECR Plasma and Ion Source

    NASA Technical Reports Server (NTRS)

    Foster, John E. (Inventor); Patterson, Michael J. (Inventor)

    2009-01-01

    The present invention is an apparatus and method for producing very large area and large volume plasmas. The invention utilizes electron cyclotron resonance in conjunction with permanent magnets to produce dense, uniform plasmas for long-life ion thruster applications or for plasma processing applications such as etching, deposition, ion milling and ion implantation. The large area source is at least five times larger than the 12-inch wafers being processed to date. Its rectangular shape makes it easier to adapt to materials processing than sources that are circular in shape. The source itself represents the largest ECR ion source built to date. It is electrodeless and does not utilize electromagnets to generate the ECR magnetic circuit, nor does it make use of windows.

  15. The complex aerodynamic footprint of desert locusts revealed by large-volume tomographic particle image velocimetry.

    PubMed

    Henningsson, Per; Michaelis, Dirk; Nakata, Toshiyuki; Schanz, Daniel; Geisler, Reinhard; Schröder, Andreas; Bomphrey, Richard J

    2015-07-01

    Particle image velocimetry has been the preferred experimental technique with which to study the aerodynamics of animal flight for over a decade. In that time, hardware has become more accessible and the software has progressed from the acquisition of planes through the flow field to the reconstruction of small volumetric measurements. Until now, it has not been possible to capture large volumes that incorporate the full wavelength of the aerodynamic track left behind during a complete wingbeat cycle. Here, we use a unique apparatus to acquire the first instantaneous wake volume of a flying animal's entire wingbeat. We confirm the presence of wake deformation behind desert locusts and quantify the effect of that deformation on estimates of aerodynamic force and the efficiency of lift generation. We present previously undescribed vortex wake phenomena, including entrainment around the wing-tip vortices of a set of secondary vortices borne of Kelvin-Helmholtz instability in the shear layer behind the flapping wings. PMID:26040598

  17. Mechanically Cooled Large-Volume Germanium Detector Systems for Nuclear Explosion Monitoring DOENA27323-2

    SciTech Connect

    Hull, E.L.

    2006-10-30

    Compact, maintenance-free mechanical cooling systems are being developed to operate large-volume, high-resolution gamma-ray detectors for field applications. To accomplish this we are utilizing a newly available generation of Stirling-cycle mechanical coolers to operate the very largest volume germanium detectors with no maintenance. The user will be able to leave these systems unplugged on the shelf until needed. The maintenance-free operating lifetime of these detector systems will exceed 5 years. Three important factors affect the operation of mechanically cooled germanium detectors: temperature, vacuum, and vibration. These factors will be studied in the laboratory at the most fundamental levels to ensure a solid understanding of the physical limitations each factor places on a practical mechanically cooled germanium detector system. Using this knowledge, mechanically cooled germanium detector prototype systems will be designed and fabricated.

  18. The complex aerodynamic footprint of desert locusts revealed by large-volume tomographic particle image velocimetry

    PubMed Central

    Henningsson, Per; Michaelis, Dirk; Nakata, Toshiyuki; Schanz, Daniel; Geisler, Reinhard; Schröder, Andreas; Bomphrey, Richard J.

    2015-01-01

    Particle image velocimetry has been the preferred experimental technique with which to study the aerodynamics of animal flight for over a decade. In that time, hardware has become more accessible and the software has progressed from the acquisition of planes through the flow field to the reconstruction of small volumetric measurements. Until now, it has not been possible to capture large volumes that incorporate the full wavelength of the aerodynamic track left behind during a complete wingbeat cycle. Here, we use a unique apparatus to acquire the first instantaneous wake volume of a flying animal's entire wingbeat. We confirm the presence of wake deformation behind desert locusts and quantify the effect of that deformation on estimates of aerodynamic force and the efficiency of lift generation. We present previously undescribed vortex wake phenomena, including entrainment around the wing-tip vortices of a set of secondary vortices borne of Kelvin–Helmholtz instability in the shear layer behind the flapping wings. PMID:26040598

  19. Generation of large volume hydrostatic pressure to 8 GPa for ultrasonic studies

    NASA Astrophysics Data System (ADS)

    Kozuki, Yasushi; Yoneda, Akira; Fujimura, Akio; Sawamoto, Hiroshi; Kumazawa, Mineo

    1986-09-01

    The design and performance of a liquid-solid hybrid cell to generate high hydrostatic pressures in a relatively large volume (for use in measurements of the pressure dependence of the physical properties of materials) are reported. A 4:1 methanol-ethanol mixture is employed in 12-mm-side and 20-mm-side versions of an eight-cubic-anvil apparatus driven by a 10-kt press. Pressures up to 8 GPa are obtained safely in a 16 cm³ volume by applying a uniaxial force of 3 kt. The cell is used to measure the velocity of ultrasonic waves in fused quartz: the experimental setup is described, and sample results are presented graphically.
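    For context on the ultrasonic measurement, in a generic pulse-echo arrangement (the abstract does not detail the exact setup) the wave velocity follows from the sample length and the round-trip transit time between successive echoes:

```python
def pulse_echo_velocity(length_m, round_trip_s):
    """Velocity from a pulse-echo measurement: the pulse crosses the
    sample twice between successive echoes, so v = 2 * L / dt."""
    return 2.0 * length_m / round_trip_s

# Hypothetical numbers: a 10 mm fused-quartz sample and a 3.4 us
# round-trip time give a longitudinal velocity near 5.9 km/s.
print(pulse_echo_velocity(0.010, 3.4e-6))
```

    Tracking how the transit time shifts with applied load then gives the pressure dependence of the elastic properties.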

  20. Large-Eddy Simulation of Supersonic Axisymmetric Bluff Body Wakes

    NASA Astrophysics Data System (ADS)

    Tourbier, D.; Fasel, H. F.

    1997-11-01

    The time-dependent behavior of the turbulent wake of an axisymmetric bluff body is investigated using Large-Eddy Simulation (LES). The axisymmetric body is aligned with a supersonic free stream at a Mach number of M_∞ = 2.46. It has been shown previously that this flow field is subject to an absolute instability for global Reynolds numbers higher than ReD = 30,000. As a result of this instability, large structures are present in the near wake and render the flow field highly unsteady. These structures have a strong influence on the global behavior of the flow field and thus on the overall drag of the body. Commonly used turbulence models (e.g. in RANS) fail to accurately describe the flow field and are inadequate for drag prediction. Preliminary LES calculations for global Reynolds numbers up to ReD = 400,000 using a Smagorinsky-type subgrid-scale model with a fixed constant have shown qualitative agreement with experimental observations in terms of pressure distribution along the blunt base and magnitude of rms values in the wake. However, the model is too dissipative for most parts of the free shear layer emanating from the corner of the base, and the evolution of structures in the close vicinity of the corner is suppressed. Therefore, a dynamic subgrid-scale model was implemented into the code and tested to evaluate the performance of the model for this flow configuration.
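    The fixed-constant Smagorinsky closure referred to above computes a local eddy viscosity from the resolved strain rate, nu_t = (Cs * Delta)^2 * |S| with |S| = sqrt(2 S_ij S_ij). A minimal 2D sketch (the constant Cs and the filter width Delta = sqrt(dx*dy) are illustrative choices, not values from this study):

```python
import numpy as np

def smagorinsky_nu_t(u, v, dx, dy, Cs=0.17):
    """Smagorinsky eddy viscosity nu_t = (Cs*Delta)^2 * |S| on a 2D grid.
    u, v are velocity components sampled on a uniform mesh; axis 0 is y."""
    dudx = np.gradient(u, dx, axis=1)
    dudy = np.gradient(u, dy, axis=0)
    dvdx = np.gradient(v, dx, axis=1)
    dvdy = np.gradient(v, dy, axis=0)
    s11, s22 = dudx, dvdy                # normal strain components
    s12 = 0.5 * (dudy + dvdx)            # shear strain component
    s_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
    delta = np.sqrt(dx * dy)             # filter width tied to the mesh
    return (Cs * delta)**2 * s_mag

# Uniform shear u = gamma*y has |S| = gamma, so nu_t is constant.
gamma, dx, dy = 2.0, 0.1, 0.1
yy, xx = np.meshgrid(np.arange(0.0, 1.0, dy), np.arange(0.0, 1.0, dx),
                     indexing="ij")
nu_t = smagorinsky_nu_t(gamma * yy, np.zeros_like(yy), dx, dy)
```

    The dissipative character noted in the abstract stems from nu_t being strictly non-negative; the dynamic variant replaces the fixed Cs with a locally computed coefficient, which can reduce or locally switch off the model in resolved shear layers.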

  1. Large-eddy simulations of a fully appended submarine model

    NASA Astrophysics Data System (ADS)

    Posa, Antonio; Balaras, Elias

    2013-11-01

    In the present study we report large-eddy simulations (LES) of the flow around an idealized submarine geometry (DARPA SUBOFF) at a Reynolds number, based on the model length and free-stream velocity, equal to 1.2 million. A finite-difference formulation on a cylindrical coordinate grid of 2.8 billion nodes is utilized, and boundary conditions on the submarine model are imposed using an immersed-boundary technique. The boundary layers are ``tripped'' near the leading edge to mimic the conditions in experiments reported in the literature. Our computations resolve the detailed dynamics of the turbulent boundary layers on the SUBOFF body as well as their interaction with the large-scale vortices generated at the sail and fin junctions. The time-averaged velocity profiles in the intermediate wake reach self-similarity, except for the region affected by the wake of the sail. The comparison with the exponential law from the experimental study in the literature is satisfactory. It is also confirmed that the flow coming from the fins causes a deviation from the self-similar profile, which is more evident than in the experiments. Details on the turbulent boundary layer on the surface of the body will be provided, showing good qualitative agreement with the results in the literature. Supported by ONR Grant N000141110455, monitored by Dr. Ki-Han Kim.

  2. Large eddy simulation modelling of combustion for propulsion applications.

    PubMed

    Fureby, C

    2009-07-28

    Predictive modelling of turbulent combustion is important for the development of air-breathing engines, internal combustion engines, furnaces and for power generation. Significant advances in modelling non-reactive turbulent flows are now possible with the development of large eddy simulation (LES), in which the large energetic scales of the flow are resolved on the grid while the effects of the small scales are modelled. Here, we discuss the use of combustion LES in predictive modelling of propulsion applications such as gas turbine, ramjet and scramjet engines. The LES models used are described in some detail and are validated against laboratory data, of which results from two cases are presented. These validated LES models are then applied to an annular multi-burner gas turbine combustor and a simplified scramjet combustor, for which some additional experimental data are available. For these cases, good agreement with the available reference data is obtained, and the LES predictions are used to elucidate the flow physics in such devices to further enhance our knowledge of these propulsion systems. Particular attention is focused on the influence of the combustion chemistry, turbulence-chemistry interaction, self-ignition, flame holding, burner-to-burner interactions and combustion oscillations. PMID:19531515

  3. Large Eddy Simulation Study for Fluid Disintegration and Mixing

    NASA Technical Reports Server (NTRS)

    Bellan, Josette; Taskinoglu, Ezgi

    2011-01-01

    A new modeling approach is based on the concept of large eddy simulation (LES), within which the large scales are computed and the small scales are modeled. The new approach is expected to retain the fidelity of the physics while also being computationally efficient. Typically, only models for the small-scale fluxes of momentum, species, and enthalpy are used to reintroduce into the simulation the physics lost because the computation resolves only the large scales. These models are called subgrid-scale (SGS) models because they operate at a scale smaller than the LES grid. In a previous study of thermodynamically supercritical fluid disintegration and mixing, additional small-scale terms, one in the momentum and one in the energy conservation equations, were identified as requiring modeling. These additional terms were due to the tight coupling between dynamics and real-gas thermodynamics. It was inferred that if these terms were not modeled, the high density-gradient magnitude regions, experimentally identified as a characteristic feature of these flows, would not be accurately predicted without the additional term in the momentum equation; these high density-gradient magnitude regions were experimentally shown to redistribute turbulence in the flow. It was also inferred that without the additional term in the energy equation, the heat flux magnitude could not be accurately predicted; the heat flux to the wall of combustion devices is a crucial quantity that determines necessary wall material properties. The present work involves situations where only the term in the momentum equation is important. Without this additional term in the momentum equation, neither the SGS-flux constant-coefficient Smagorinsky model nor the SGS-flux constant-coefficient Gradient model could reproduce in LES the pressure field or the high density-gradient magnitude regions; the SGS-flux constant-coefficient Scale-Similarity model was the most successful in this endeavor although not

  4. A Novel Technique for Endovascular Removal of Large Volume Right Atrial Tumor Thrombus

    SciTech Connect

    Nickel, Barbara; McClure, Timothy Moriarty, John

    2015-08-15

    Venous thromboembolic disease is a significant cause of morbidity and mortality, particularly in the setting of large volume pulmonary embolism. Thrombolytic therapy has been shown to be a successful treatment modality; however, its use is somewhat limited due to the risk of hemorrhage and the potential for distal embolization in the setting of large mobile thrombi. In patients where thrombolysis is either contraindicated or unsuccessful, and conventional therapies prove inadequate, surgical thrombectomy may be considered. We present a case of percutaneous endovascular extraction of a large mobile mass extending from the inferior vena cava into the right atrium using the AngioVac device, a venovenous bypass system designed for high-volume aspiration of undesired endovascular material. Standard endovascular methods for removal of cancer-associated thrombus, such as catheter-directed lysis, maceration, and exclusion, may prove inadequate in the setting of underlying tumor thrombus. Where conventional endovascular methods either fail or are unsuitable, endovascular thrombectomy with the AngioVac device may be a useful and safe minimally invasive alternative to open resection.

  5. Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Givi, Peyman; Madnia, C. K.; Steinberger, C. J.; Tsai, A.

    1991-01-01

    This research involves the implementation of advanced computational schemes based on large eddy simulations (LES) and direct numerical simulations (DNS) to study the phenomenon of mixing and its coupling with chemical reactions in compressible turbulent flows. In the efforts related to LES, a research program was initiated to extend the present capabilities of this method for the treatment of chemically reacting flows, whereas in the DNS efforts, focus was on detailed investigations of the effects of compressibility, heat release, and nonequilibrium kinetics modeling in high speed reacting flows. The efforts to date were primarily focused on simulations of simple flows, namely, homogeneous compressible flows and temporally developing high speed mixing layers. A summary of the accomplishments is provided.

  6. Large-eddy simulation for the prediction of supersonic rectangular jet noise

    NASA Astrophysics Data System (ADS)

    Nichols, Joseph W.; Ham, Frank E.; Lele, Sanjiva K.; Bridges, James E.

    2011-11-01

    We investigate the noise from isothermal and heated under-expanded supersonic turbulent jets issuing from a rectangular nozzle of aspect ratio 4:1 using high-fidelity unstructured large-eddy simulation (LES) and acoustic projection based on the Ffowcs Williams-Hawkings (FWH) equation. The nozzle/flow interaction is directly included by simulating the flow in and around the nozzle in addition to the jet plume downstream. A grid resolution study is performed and results are shown for unstructured meshes containing up to 300 million control volumes, generated by a massively parallel code scaled to as many as 65,536 processors. Validated against laboratory measurements using a nozzle of precisely the same geometry, we find that mesh isotropy is a key factor in determining the quality of the far-field aeroacoustic predictions. The full flow fields produced by the simulation, in conjunction with particle image velocimetry (PIV) data measured from experiment, allow for a detailed examination of the interaction of large-scale coherent flow features and the resultant far-field noise, and its subsequent modification in the presence of heating. Supported by NASA grant NNX07AC94A and PSAAP, with computational resources from a DoD HPCMP CAP-2 project.

  7. Cryogenic loading of large volume presses for high-pressure experimentation and synthesis of novel materials

    SciTech Connect

    Lipp, M J; Evans, W J; Yoo, C S

    2005-01-21

    We present an efficient, easily implemented method for loading cryogenic fluids in a large volume press. We specifically apply this method to the high-pressure synthesis of an extended solid derived from CO using a Paris-Edinburgh cell. The method employs cryogenic cooling of Bridgman-type WC anvils, well insulated from the other press components, and condensation of the load gas within a brass annulus surrounding the gasket between the Bridgman anvils. We demonstrate the viability of the described approach by synthesizing macroscopic amounts (several milligrams) of polymeric CO-derived material, which were recovered to ambient conditions after compression of pure CO to 5 GPa or above.

  8. Large Volume, Optical and Opto-Mechanical Metrology Techniques for ISIM on JWST

    NASA Technical Reports Server (NTRS)

    Hadjimichael, Theo

    2015-01-01

    The final, flight build of the Integrated Science Instrument Module (ISIM) element of the James Webb Space Telescope is the culmination of years of work across many disciplines and partners. This paper covers the large volume, ambient, optical and opto-mechanical metrology techniques used to verify the mechanical integration of the flight instruments in ISIM, including optical pupil alignment. We present an overview of ISIM's integration and test program, which is in progress, with an emphasis on alignment and optical performance verification. This work is performed at NASA Goddard Space Flight Center, in close collaboration with the European Space Agency, the Canadian Space Agency, and the Mid-Infrared Instrument European Consortium.

  9. GMP cryopreservation of large volumes of cells for regenerative medicine: active control of the freezing process.

    PubMed

    Massie, Isobel; Selden, Clare; Hodgson, Humphrey; Fuller, Barry; Gibbons, Stephanie; Morris, G John

    2014-09-01

    Cryopreservation protocols are increasingly required in regenerative medicine applications but must deliver functional products at clinical scale and comply with Good Manufacturing Process (GMP). While GMP cryopreservation is achievable on a small scale using a Stirling cryocooler-based controlled rate freezer (CRF) (EF600), successful large-scale GMP cryopreservation is more challenging due to heat transfer issues and control of ice nucleation, both complex events that impact success. We have developed a large-scale cryocooler-based CRF (VIA Freeze) that can process larger volumes and have evaluated it using alginate-encapsulated liver cell (HepG2) spheroids (ELS). It is anticipated that ELS will comprise the cellular component of a bioartificial liver and will be required in volumes of ∼2 L for clinical use. Sample temperatures and Stirling cryocooler power consumption were recorded throughout cooling runs for both small (500 μL) and large (200 mL) volume samples. ELS recoveries were assessed using viability (FDA/PI staining with image analysis), cell number (nuclei count), and function (protein secretion), along with cryoscanning electron microscopy and freeze substitution techniques to identify possible injury mechanisms. Slow cooling profiles were successfully applied to samples in both the EF600 and the VIA Freeze, and a number of cooling and warming profiles were evaluated. An optimized cooling protocol with a nonlinear cooling profile from ice nucleation to -60°C was implemented in both the EF600 and VIA Freeze. In the VIA Freeze the nucleation of ice is detected by the control software, allowing both noninvasive detection of the nucleation event for quality control purposes and the potential to modify the cooling profile following ice nucleation in an active manner. When processing 200 mL of ELS in the VIA Freeze, viabilities at 93.4% ± 7.4%, viable cell numbers at 14.3 ± 1.7 million nuclei/mL alginate, and protein secretion at 10.5 ± 1.7
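    The abstract does not give the exact shape of the optimized nonlinear profile; purely as an illustration, a setpoint schedule from ice nucleation down to -60°C that cools along a smooth exponential approach (the time constant and duration are assumed values, not the published protocol) could be generated as:

```python
import numpy as np

def nonlinear_cooling_profile(t_nucleation_C=-8.0, t_end_C=-60.0,
                              duration_min=60.0, steps=120):
    """Illustrative nonlinear setpoint schedule from the (detected) ice
    nucleation temperature down to the end temperature. The exponential
    shape and the time constant are assumptions, not the published protocol."""
    t = np.linspace(0.0, duration_min, steps)
    tau = duration_min / 4.0  # assumed time constant
    temps = t_end_C + (t_nucleation_C - t_end_C) * np.exp(-t / tau)
    return t, temps

times, setpoints = nonlinear_cooling_profile()
```

    In an active-control CRF of the kind described, the detected nucleation event would reset the origin of such a schedule, so the profile is applied relative to when ice actually forms rather than to elapsed time.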

  10. GMP Cryopreservation of Large Volumes of Cells for Regenerative Medicine: Active Control of the Freezing Process

    PubMed Central

    Massie, Isobel; Selden, Clare; Hodgson, Humphrey; Gibbons, Stephanie; Morris, G. John

    2014-01-01

    Cryopreservation protocols are increasingly required in regenerative medicine applications but must deliver functional products at clinical scale and comply with Good Manufacturing Process (GMP). While GMP cryopreservation is achievable on a small scale using a Stirling cryocooler-based controlled rate freezer (CRF) (EF600), successful large-scale GMP cryopreservation is more challenging due to heat transfer issues and control of ice nucleation, both complex events that impact success. We have developed a large-scale cryocooler-based CRF (VIA Freeze) that can process larger volumes and have evaluated it using alginate-encapsulated liver cell (HepG2) spheroids (ELS). It is anticipated that ELS will comprise the cellular component of a bioartificial liver and will be required in volumes of ∼2 L for clinical use. Sample temperatures and Stirling cryocooler power consumption were recorded throughout cooling runs for both small (500 μL) and large (200 mL) volume samples. ELS recoveries were assessed using viability (FDA/PI staining with image analysis), cell number (nuclei count), and function (protein secretion), along with cryoscanning electron microscopy and freeze substitution techniques to identify possible injury mechanisms. Slow cooling profiles were successfully applied to samples in both the EF600 and the VIA Freeze, and a number of cooling and warming profiles were evaluated. An optimized cooling protocol with a nonlinear cooling profile from ice nucleation to −60°C was implemented in both the EF600 and VIA Freeze. In the VIA Freeze the nucleation of ice is detected by the control software, allowing both noninvasive detection of the nucleation event for quality control purposes and the potential to modify the cooling profile following ice nucleation in an active manner. When processing 200 mL of ELS in the VIA Freeze, viabilities at 93.4%±7.4%, viable cell numbers at 14.3±1.7 million nuclei/mL alginate, and protein secretion at 10.5±1.7

  12. Large eddy simulation subgrid model for soot prediction

    NASA Astrophysics Data System (ADS)

    El-Asrag, Hossam Abd El-Raouf Mostafa

    Soot prediction in realistic systems is one of the most challenging problems in theoretical and applied combustion. Soot formation as a chemical process is very complicated and not fully understood. The major difficulty stems from the chemical complexity of the soot formation process as well as its strong coupling with the other thermochemical and fluid processes that occur simultaneously. Soot is a major byproduct of incomplete combustion, having a strong impact on the environment as well as on combustion efficiency. Therefore, innovative methods are needed to predict soot in realistic configurations in an accurate and yet computationally efficient way. In the current study, a new soot formation subgrid model is developed and reported here. The new model is designed to be used within the context of the Large Eddy Simulation (LES) framework, combined with Linear Eddy Mixing (LEM) as a subgrid combustion model. The final model can be applied equally to premixed and non-premixed flames over any required geometry and flow conditions in the free, the transition, and the continuum regimes. The soot dynamics is predicted using a Method of Moments approach with Lagrangian Interpolative Closure (MOMIC) for the fractional moments. Since no prior knowledge of the particle distribution is required, the model is generally applicable. The current model accounts for the basic soot transport phenomena, such as transport by molecular diffusion and thermophoretic forces. The model is first validated against experimental results for non-sooting swirling non-premixed and partially premixed flames. Next, a set of canonical premixed sooting flames is simulated, where the effects of turbulence, binary diffusivity and C/O ratio on soot formation are studied. Finally, the model is validated against a non-premixed sooting jet flame. The effect of the flame structure on the different soot formation stages as well as on the particle size distribution is described. Good results are predicted with
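    The MOMIC element mentioned above needs fractional-order moments (e.g., M_1/2) that are not transported directly; a standard closure interpolates the logarithm of the known integer-order moments as a polynomial in the order. A minimal sketch of that interpolation (not the author's exact scheme):

```python
import numpy as np

def fractional_moment(moments, p):
    """Estimate a fractional-order moment M_p by fitting a polynomial to
    log(M_k) over the integer orders k = 0..len(moments)-1 and evaluating
    it at the fractional order p."""
    orders = np.arange(len(moments), dtype=float)
    coeffs = np.polyfit(orders, np.log(np.asarray(moments, dtype=float)),
                        deg=len(moments) - 1)
    return np.exp(np.polyval(coeffs, p))

# For a monodisperse population (N particles of size d), M_k = N * d**k,
# so log(M_k) is linear in k and the interpolation is exact.
mono = [100.0 * 2.0**k for k in range(4)]
m_half = fractional_moment(mono, 0.5)
```

    Fractional moments of this kind enter the soot source terms (surface growth, for instance, scales with a fractional moment of the size distribution), which is why such a closure is evaluated at every grid point and time step.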

  13. Simulation of preburner sprays, volumes 1 and 2

    NASA Technical Reports Server (NTRS)

    Hardalupas, Y.; Whitelaw, J. H.

    1993-01-01

    The present study considered the characteristics of sprays under a variety of conditions. Control of these sprays is important because the spray details can control both rocket combustion stability and efficiency. Under the present study Imperial College considered the following: (1) Measurement of the size and rate of spread of the sprays produced by single coaxial airblast nozzles with an axial gaseous stream. The local size, velocity, and flux characteristics for a wide range of gas and liquid flowrates were measured, and the results were correlated with the conditions of the spray at the nozzle exit. (2) Examination of the effect of the geometry of single coaxial airblast atomizers on spray characteristics. The gas and liquid tube diameters were varied over a range of values, the liquid tube recess was varied, and the shape of the exit of the gaseous jet was varied from straight to converging. (3) Quantification of the effect of swirl in the gaseous stream on the spray characteristics produced by single coaxial airblast nozzles. (4) Quantification of the effect of reatomization by impingement of the spray on a flat disc positioned around 200 mm from the nozzle exit. This models spray impingement on the turbopump dome during the startup process of the preburner of the SSME. (5) Study of the interaction between multiple sprays without and with swirl in their gaseous streams. The spray characteristics of single nozzles were compared with those of three identical nozzles with their axes at a small distance from each other. This study simulates the sprays in the preburner of the SSME, where there are around 260 elements on the faceplate of the combustion chamber. (6) Design of an experimental facility to study the characteristics of sprays at high pressure conditions, and at supercritical pressure and temperature for the gas but supercritical pressure and subcritical temperature for the liquid.

  14. Film cooling from inclined cylindrical holes using large eddy simulations

    NASA Astrophysics Data System (ADS)

    Peet, Yulia V.

    2006-12-01

    The goal of the present study is to investigate numerically the physics of the flow which occurs during film cooling from inclined cylindrical holes. Film cooling is a technique used in the gas turbine industry to reduce heat fluxes to the turbine blade surface. Large Eddy Simulation (LES) is performed modeling a realistic film cooling configuration, which consists of a large stagnation-type reservoir feeding an array of discrete cooling holes (film holes) flowing into a flat plate turbulent boundary layer. A special computational methodology is developed for this problem, involving coupled simulations using multiple computational codes. A fully compressible LES code is used in the area above the flat plate, while a low Mach number LES code is employed in the plenum and film holes. The motivation for using different codes comes from the essential difference in the nature of the flow in these different regions. The flowfield is analyzed inside the plenum, the film hole, and the crossflow region. Flow inside the plenum is stagnating, except for the region close to the exit, where it accelerates rapidly to turn into the hole. The sharp radius of turning at the trailing edge of the plenum pipe connection causes the flow to separate from the downstream wall of the film hole. After coolant injection occurs, a complex flowfield is formed, consisting of coherent vortical structures responsible for bringing hot crossflow fluid into contact with the walls of either the film hole or the blade, thus reducing cooling protection. Mean velocity and turbulent statistics are compared to experimental measurements, yielding good agreement for the mean flowfield and satisfactory agreement for the turbulence quantities. LES results are used to assess the applicability of basic assumptions of conventional eddy viscosity turbulence models used with the Reynolds-averaged (RANS) approach, namely the isotropy of the eddy viscosity and thermal diffusivity. It is shown here that these assumptions do not hold

  15. A scale down process for the development of large volume cryopreservation.

    PubMed

    Kilbride, Peter; Morris, G John; Milne, Stuart; Fuller, Barry; Skepper, Jeremy; Selden, Clare

    2014-12-01

    The process of ice formation and propagation during cryopreservation impacts on the post-thaw outcome for a sample. Two processes, either network solidification or progressive solidification, can dominate the water-ice phase transition, with network solidification typically present in small sample cryo-straws or cryo-vials. Progressive solidification is more often observed in larger volumes or environmental freezing. These different ice phase progressions could have a significant impact on cryopreservation in scale-up and larger volume cryo-banking protocols, necessitating their study when considering cell therapy applications. This study determines the impact of these different processes on alginate encapsulated liver spheroids (ELS) as a model system during cryopreservation, and develops a method to replicate these differences in an economical manner. It was found in the current studies that progressive solidification resulted in fewer, but proportionally more viable cells 24 h post-thaw compared with network solidification. The differences between the groups diminished at later time points post-thaw as cells recovered the ability to undertake cell division, with no statistically significant differences seen by either 48 h or 72 h in recovery cultures. Thus progressive solidification itself should not prove a significant hurdle in the search for successful cryopreservation in large volumes. However, some small but significant differences were noted in total viable cell recoveries and functional assessments between samples cooled with either progressive or network solidification, and these require further investigation.

  16. A scale down process for the development of large volume cryopreservation

    PubMed Central

    Kilbride, Peter; Morris, G. John; Milne, Stuart; Fuller, Barry; Skepper, Jeremy; Selden, Clare

    2014-01-01

    The process of ice formation and propagation during cryopreservation impacts on the post-thaw outcome for a sample. Two processes, either network solidification or progressive solidification, can dominate the water–ice phase transition with network solidification typically present in small sample cryo-straws or cryo-vials. Progressive solidification is more often observed in larger volumes or environmental freezing. These different ice phase progressions could have a significant impact on cryopreservation in scale-up and larger volume cryo-banking protocols necessitating their study when considering cell therapy applications. This study determines the impact of these different processes on alginate encapsulated liver spheroids (ELS) as a model system during cryopreservation, and develops a method to replicate these differences in an economical manner. It was found in the current studies that progressive solidification resulted in fewer, but proportionally more viable cells 24 h post-thaw compared with network solidification. The differences between the groups diminished at later time points post-thaw as cells recovered the ability to undertake cell division, with no statistically significant differences seen by either 48 h or 72 h in recovery cultures. Thus progressive solidification itself should not prove a significant hurdle in the search for successful cryopreservation in large volumes. However, some small but significant differences were noted in total viable cell recoveries and functional assessments between samples cooled with either progressive or network solidification, and these require further investigation. PMID:25219980

  17. Modeling dilute sediment suspension using large-eddy simulation with a dynamic mixed model

    NASA Astrophysics Data System (ADS)

    Chou, Yi-Ju; Fringer, Oliver B.

    2008-11-01

    Transport of suspended sediment in high Reynolds number channel flows [Re=O(600 000)] is simulated using large-eddy simulation along with a dynamic mixed model (DMM). Because the modeled sediment concentration is low and the bulk Stokes number (Stb) is small during the simulation, the sediment concentration is calculated using an Eulerian approach. In order to employ the DMM for the suspended sediment, we formulate a generalized bottom boundary condition in a finite-volume formulation that accounts for sediment flux from the bed without requiring specific details of the underlying turbulence model. This enables the use of the pickup function without requiring any assumptions about the behavior of the eddy viscosity. Using our new boundary condition, simulations indicate that the resolved component of the vertical flux is one order of magnitude greater than the resolved subfilter-scale flux, which is in turn one order of magnitude greater than the eddy-diffusive flux. Analysis of the behavior of the suspended sediment above the bed indicates the existence of three basic time scales that arise due to varying degrees of competition between the upward turbulent flux and the downward settling flux. Instantaneous sediment concentration and velocity fields indicate that streamwise vortices account for the bulk of the resolved flux of sediment from the bed.
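The bed exchange described above, a pickup (erosion) flux balanced against settling deposition, with turbulence handling the upward transport, can be illustrated with a minimal 1-D Eulerian column model. This is only a sketch under assumed, illustrative parameters, not the paper's finite-volume DMM formulation:

```python
import numpy as np

def evolve(C, K, ws, dz, dt, pickup):
    """Advance a sediment concentration column one explicit step.

    C[0] is the near-bed cell; face fluxes are positive upward.
    K, ws, pickup are illustrative eddy diffusivity, settling
    velocity, and erosion (pickup) rate, not values from the paper.
    """
    flux = np.zeros(len(C) + 1)
    # interior faces: turbulent diffusion upward, settling (ws > 0) downward
    flux[1:-1] = -K * (C[1:] - C[:-1]) / dz - ws * 0.5 * (C[1:] + C[:-1])
    # bed face: erosion pickup minus deposition ws * C_bed --
    # no eddy-viscosity assumption enters the bed exchange
    flux[0] = pickup - ws * C[0]
    # top face stays zero (rigid lid, no flux)
    return C + dt * (flux[:-1] - flux[1:]) / dz
```

At steady state the column relaxes to a Rouse-type exponential profile, C(z) proportional to exp(-ws z / K), with near-bed concentration pickup/ws, which makes the competition between upward turbulent flux and downward settling flux explicit.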

  18. Large-eddy simulation of oil slicks from deep water blowouts

    NASA Astrophysics Data System (ADS)

    Yang, Di; Chamecki, Marcelo; Meneveau, Charles

    2013-11-01

    Deep water blowouts generate plumes of oil droplets and gas bubbles that rise through, and interact with, various layers of the ocean. When plumes reach the ocean mixed layer (OML), the interactions among the plume, the Ekman spiral, and Langmuir turbulence strongly affect the final rates of dilution and biodegradation. The present study aims at developing a large-eddy simulation (LES) capability for the study of the physical distribution and dispersion of petroleum (oil and gas) under the action of physical oceanographic processes in the OML. In the current LES, the velocity and temperature fields are simulated using a hybrid pseudo-spectral and finite-difference scheme; the oil/gas field is described by an Eulerian concentration field and is simulated using a bounded finite-volume scheme. A variety of subgrid-scale models for the flow solver are implemented and tested. The LES capability is then applied to the simulation of oil plume dispersion in the OML, with the plume initially released from a point source below the thermocline. Graphical visualization of the LES results shows a surface oil slick distribution consistent with the satellite and aerial images of surface oil slicks reported in the literature. Funding from the GoMRI RFP-II is gratefully acknowledged.

  19. Hybrid Parallelism for Volume Rendering on Large, Multi-core Systems

    SciTech Connect

    Howison, Mark; Bethel, E. Wes; Childs, Hank

    2010-07-12

    This work studies the performance and scalability characteristics of "hybrid" parallel programming and execution as applied to raycasting volume rendering -- a staple visualization algorithm -- on a large, multi-core platform. Historically, the Message Passing Interface (MPI) has become the de-facto standard for parallel programming and execution on modern parallel systems. As the computing industry trends towards multi-core processors, with four- and six-core chips common today and 128-core chips coming soon, we wish to better understand how algorithmic and parallel programming choices impact performance and scalability on large, distributed-memory multi-core systems. Our findings indicate that the hybrid-parallel implementation, at levels of concurrency ranging from 1,728 to 216,000, performs better, uses a smaller absolute memory footprint, and consumes less communication bandwidth than the traditional, MPI-only implementation.
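The bandwidth claim above can be made concrete with a toy message-count model; direct-send compositing, and the specific node and core counts below, are assumptions for illustration, not details taken from the paper:

```python
# Back-of-the-envelope model of why hybrid parallelism shrinks compositing
# traffic: assume a direct-send composite in which every participant may
# exchange image fragments with every other participant, so the message
# count scales as N*(N-1). An MPI-only run places one participant per core;
# a hybrid run places one per node and lets the cores share memory.

def directsend_messages(participants):
    return participants * (participants - 1)

nodes, cores_per_node = 36_000, 6        # illustrative 216,000-way concurrency
mpi_only = directsend_messages(nodes * cores_per_node)
hybrid = directsend_messages(nodes)
print(hybrid / mpi_only)                 # roughly (1 / cores_per_node) ** 2
```

Under this model the hybrid configuration exchanges roughly (1/6)^2, i.e. about 3%, of the messages of the MPI-only configuration, which is consistent in spirit with the reported bandwidth savings.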

  20. MPI-hybrid Parallelism for Volume Rendering on Large, Multi-core Systems

    SciTech Connect

    Howison, Mark; Bethel, E. Wes; Childs, Hank

    2010-03-20

    This work studies the performance and scalability characteristics of "hybrid" parallel programming and execution as applied to raycasting volume rendering -- a staple visualization algorithm -- on a large, multi-core platform. Historically, the Message Passing Interface (MPI) has become the de-facto standard for parallel programming and execution on modern parallel systems. As the computing industry trends towards multi-core processors, with four- and six-core chips common today and 128-core chips coming soon, we wish to better understand how algorithmic and parallel programming choices impact performance and scalability on large, distributed-memory multi-core systems. Our findings indicate that the hybrid-parallel implementation, at levels of concurrency ranging from 1,728 to 216,000, performs better, uses a smaller absolute memory footprint, and consumes less communication bandwidth than the traditional, MPI-only implementation.

  1. Points based reconstruction and rendering of 3D shapes from large volume dataset

    NASA Astrophysics Data System (ADS)

    Zhao, Mingchang; Tian, Jie; He, Huiguang; Li, Guangming

    2003-05-01

    In the field of medical imaging, researchers often need to visualize many 3D datasets to extract the information contained in them. But the huge data volumes generated by modern medical imaging devices constantly challenge real-time processing and rendering algorithms. Spurred by the great achievements of Points Based Rendering (PBR) in the field of computer graphics for rendering very large meshes, we propose a new algorithm that uses points as the basic primitive of surface reconstruction and rendering to interactively reconstruct and render very large volume datasets. By utilizing the special characteristics of medical image datasets, we obtain a fast and efficient points-based reconstruction and rendering algorithm on a common PC. The experimental results show that this algorithm is feasible and efficient.
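The core idea, using points rather than a mesh as the reconstruction primitive, can be sketched as follows: keep each surface voxel of a thresholded medical volume as a point with a gradient-estimated normal, ready to hand to a splatting renderer. The function name, threshold, and toy geometry are assumptions for illustration, not the authors' algorithm:

```python
import numpy as np

def surface_points(volume, iso):
    """Extract surface voxels of a scalar volume as point primitives."""
    mask = volume > iso
    # a voxel is interior if all six axis neighbours are also inside
    interior = (np.roll(mask, 1, 0) & np.roll(mask, -1, 0) &
                np.roll(mask, 1, 1) & np.roll(mask, -1, 1) &
                np.roll(mask, 1, 2) & np.roll(mask, -1, 2))
    surf = mask & ~interior                    # inside with an outside neighbour
    pts = np.argwhere(surf)                    # (n, 3) voxel coordinates
    # outward normals point down the intensity gradient
    g = np.stack(np.gradient(volume.astype(float)), axis=-1)
    normals = -g[surf]
    normals /= np.linalg.norm(normals, axis=1, keepdims=True) + 1e-12
    return pts, normals
```

Because segmented medical volumes are mostly empty or solid, the surviving point set is far smaller than the raw voxel grid, which is what makes interactive rendering on a common PC plausible.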

  2. Hybrid Parallelism for Volume Rendering on Large, Multi-core Systems

    SciTech Connect

    Howison, Mark; Bethel, E. Wes; Childs, Hank

    2010-06-14

    This work studies the performance and scalability characteristics of "hybrid" parallel programming and execution as applied to raycasting volume rendering -- a staple visualization algorithm -- on a large, multi-core platform. Historically, the Message Passing Interface (MPI) has become the de-facto standard for parallel programming and execution on modern parallel systems. As the computing industry trends towards multi-core processors, with four- and six-core chips common today and 128-core chips coming soon, we wish to better understand how algorithmic and parallel programming choices impact performance and scalability on large, distributed-memory multi-core systems. Our findings indicate that the hybrid-parallel implementation, at levels of concurrency ranging from 1,728 to 216,000, performs better, uses a smaller absolute memory footprint, and consumes less communication bandwidth than the traditional, MPI-only implementation.

  3. Hybrid Parallelism for Volume Rendering on Large, Multi- and Many-core Systems

    SciTech Connect

    Howison, Mark; Bethel, E. Wes; Childs, Hank

    2011-01-01

    With the computing industry trending towards multi- and many-core processors, we study how a standard visualization algorithm, ray-casting volume rendering, can benefit from a hybrid parallelism approach. Hybrid parallelism provides the best of both worlds: using distributed-memory parallelism across a large number of nodes increases available FLOPs and memory, while exploiting shared-memory parallelism among the cores within each node ensures that each node performs its portion of the larger calculation as efficiently as possible. We demonstrate results from weak and strong scaling studies, at levels of concurrency ranging up to 216,000, and with datasets as large as 12.2 trillion cells. The greatest benefit from hybrid parallelism lies in the communication portion of the algorithm, the dominant cost at higher levels of concurrency. We show that reducing the number of participants with a hybrid approach significantly improves performance.

  4. Pathways of deep cyclones associated with large volume changes (LVCs) and major Baltic inflows (MBIs)

    NASA Astrophysics Data System (ADS)

    Lehmann, Andreas; Höflich, Katharina; Post, Piia; Myrberg, Kai

    2016-04-01

    Large volume changes (LVCs) and major Baltic inflows (MBIs) are essential processes for the water exchange and renewal of the stagnant deep water in the Baltic Sea deep basins. MBIs are considered a subset of LVCs, transporting with the large water volume a large amount of highly saline and oxygenated water into the Baltic Sea. Since the early 1980s the frequency of MBIs has dropped drastically, from five to seven events per decade to only one inflow per decade, and long-lasting periods without MBIs have become the usual state. Only in January 1993, January 2003 and December 2014 did MBIs occur that were able to interrupt the stagnation periods in the deep basins of the Baltic Sea. However, in spite of the decreasing frequency of MBIs, there is no obvious decrease in LVCs. Large volume changes have been calculated for the period 1887-2014 by filtering daily time series of Landsort sea surface elevation anomalies. The Landsort sea level is known to reflect the mean sea level of the Baltic Sea very well; thus, LVCs can be calculated from the mean sea level variations. The cases with a local minimum-to-maximum difference corresponding to at least 100 km³ of water volume change were chosen for a closer study of the characteristic pathways of deep cyclones. The average duration of an LVC is about 40 days. During this time, 5-6 deep cyclones move along characteristic storm tracks. We obtained three main routes of deep cyclones associated with LVCs, but also with the climatology. One approaches from the west at about 58-62°N, passing the northern North Sea, Oslo, Sweden and the Island of Gotland, while a second, less frequent one approaches from the west at about 65°N, crossing Scandinavia south-eastwards, passing the Sea of Bothnia and entering Finland. A third, very frequent one enters the study area north of Scotland, turning north-eastwards along the northern coast of Scandinavia. Thus, the conditions for a LVC to happen are a temporal clustering of deep cyclones in certain
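The bookkeeping behind the 100 km³ threshold can be sketched directly: because Landsort anomalies track the Baltic mean sea level, a change in the daily anomaly (in cm) converts to a basin-wide volume change through the Baltic's surface area, and LVC candidates are swings between successive local extrema of the filtered series. The area value, helper names and the toy series below are assumptions for illustration:

```python
import numpy as np

BALTIC_AREA_KM2 = 393_000            # approximate Baltic Sea surface area
KM3_PER_CM = BALTIC_AREA_KM2 * 1e-5  # 1 cm of mean sea level ~ 3.93 km^3

def volume_changes(anomaly_cm, threshold_km3=100.0):
    """Flag swings between successive local extrema of >= threshold km^3.

    anomaly_cm: filtered daily Landsort sea-level anomaly series (cm).
    Returns (start_index, end_index, volume_change_km3) tuples.
    """
    v = np.asarray(anomaly_cm, dtype=float) * KM3_PER_CM
    # indices of local minima/maxima, endpoints included
    ext = [0] + [i for i in range(1, len(v) - 1)
                 if (v[i] - v[i - 1]) * (v[i + 1] - v[i]) < 0] + [len(v) - 1]
    events = []
    for a, b in zip(ext[:-1], ext[1:]):
        if abs(v[b] - v[a]) >= threshold_km3:
            events.append((a, b, v[b] - v[a]))
    return events
```

With this conversion, a 100 km³ LVC corresponds to a swing of roughly 25 cm in the Baltic mean sea level, which is why the Landsort record alone suffices to build the event catalogue.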

  5. Final Report: "Large-Eddy Simulation of Anisotropic MHD Turbulence"

    SciTech Connect

    Zikanov, Oleg

    2008-06-23

    To acquire better understanding of turbulence in flows of liquid metals and other electrically conducting fluids in the presence of steady magnetic fields and to develop an accurate and physically adequate LES (large-eddy simulation) model for such flows. The scientific objectives formulated in the project proposal have been fully completed. Several new directions were initiated and advanced in the course of work. Particular achievements include a detailed study of transformation of turbulence caused by the imposed magnetic field, development of an LES model that accurately reproduces this transformation, and solution of several fundamental questions of the interaction between the magnetic field and fluid flows. Eight papers have been published in respected peer-reviewed journals, with two more papers currently undergoing review, and one in preparation for submission. A post-doctoral researcher and a graduate student have been trained in the areas of MHD, turbulence research, and computational methods. Close collaboration ties have been established with the MHD research centers in Germany and Belgium.

  6. Large-eddy simulation of Hector the convector

    NASA Astrophysics Data System (ADS)

    Chaboureau, J.; Dauhut, T.; Escobar, J.; Mascart, P. J.

    2013-12-01

    A large-eddy simulation (LES) with a grid mesh of 100 m was performed for a Hector thunderstorm observed on 30 November 2005 over the Tiwi Islands, north of Australia. On that day, ice particles were measured reaching 19 km altitude. An idealized setup was built, based on an early morning sounding corresponding to 0930 local time, with periodic boundary conditions. The LES developed similar overshooting updrafts penetrating the stratosphere that compare well with the observation. Much of the water injected in the form of ice particles sublimates in the lower stratosphere. A net hydration is found, with a 20% increase of water vapor. While the moistening appears to be robust to the grid spacing used (100, 200, 400, 800 m), grid spacing on the order of 100 m may be necessary for a reliable estimate of hydration. The model setup could help test the hydration estimate in the frame of a cloud-resolving model intercomparison. [Figure captions: (a) vertical section of total water vapor across Hector at 1400 LST; (b) zoom on the upper part of (a); (c) backscatter ratio from lidar observation, taken from Corti et al. (2008); the red (blue) line marks the 380-K isentrope. A second figure shows water vapor mixing ratio (shading, ppmv) and horizontal wind (vectors, m/s) at 19 km altitude at (top) 1400 and (bottom) 1800 LST.]

  7. Saturn: A large area x-ray simulation accelerator

    SciTech Connect

    Bloomquist, D.D.; Stinnett, R.W.; McDaniel, D.H.; Lee, J.R.; Sharpe, A.W.; Halbleib, J.A.; Schlitt, L.G.; Spence, P.W.; Corcoran, P.

    1987-01-01

    Saturn is the result of a major metamorphosis of the Particle Beam Fusion Accelerator-I (PBFA-I) from an ICF research facility to the large-area x-ray source of the Simulation Technology Laboratory (STL) project. Renamed Saturn, for its unique multiple-ring diode design, the facility is designed to take advantage of the numerous advances in pulsed power technology made by the ICF program in recent years and much of the existing PBFA-I support system. Saturn will include significant upgrades in the energy storage and pulse-forming sections. The 36 magnetically insulated transmission lines (MITLs) that provided power flow to the ion diode of PBFA-I were replaced by a system of vertical triplate water transmission lines. These lines are connected to three horizontal triplate disks in a water convolute section. Power will flow through an insulator stack into radial MITLs that drive the three-ring diode. Saturn is designed to operate with a maximum of 750 kJ coupled to the three-ring e-beam diode with a peak power of 25 TW to provide an x-ray exposure capability of 5 x 10^12 rads/s (Si) and 5 cal/g (Au) over 500 cm^2.

  8. Large-eddy simulation of crackle in heated supersonic jets

    NASA Astrophysics Data System (ADS)

    Nichols, Joseph W.; Lele, Sanjiva K.; Ham, Frank E.; Martens, Steve; Spyropoulos, John T.

    2012-11-01

    Crackle noise from heated supersonic jets is characterized by the presence of strong positive pressure impulses resulting in a strongly skewed far-field pressure signal (Ffowcs Williams et al., 1975). These strong positive pressure impulses are associated with N-shaped waveforms involving a shock-like compression, and are thus very annoying to observers when they occur. In this talk, the origins of these N-shaped waveforms are investigated through high-fidelity large-eddy simulations (LES) applied to an over-expanded supersonic jet issuing from a faceted military-style nozzle. Two different levels of heating are considered. From the LES, we observe N-shaped waves associated with crackle emerging directly from the jet turbulence. Furthermore, even at this extreme near-field location, we find that the emergent waves are already well organized, having correlation over significant azimuthal distances. Computational resources were provided by a DoD HPCMP Challenge Project allocation at the ERDC and AFRL supercomputing centers.

  9. On the Computation of Sound by Large-Eddy Simulations

    NASA Technical Reports Server (NTRS)

    Piomelli, Ugo; Streett, Craig L.; Sarkar, Sutanu

    1997-01-01

    The effect of the small scales on the source term in Lighthill's acoustic analogy is investigated, with the objective of determining the accuracy of large-eddy simulations when applied to studies of flow-generated sound. The distribution of the turbulent quadrupole is predicted accurately if models that take into account the trace of the SGS stresses are used. Its spatial distribution is also correct, indicating that the low-wave-number (or low-frequency) part of the sound spectrum can be predicted well by LES. Filtering, however, removes the small-scale fluctuations that contribute significantly to the higher derivatives in space and time of Lighthill's stress tensor T_ij. The rms fluctuations of the filtered derivatives are substantially lower than those of the unfiltered quantities. The small scales, however, are not strongly correlated, and are not expected to contribute significantly to the far-field sound; separate modeling of the subgrid-scale density fluctuations might, however, be required in some configurations.
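For reference, the source term discussed above is the double divergence of Lighthill's stress tensor; in standard notation (assumed here, not quoted from the paper),

```latex
\frac{\partial^2 \rho'}{\partial t^2} - c_0^2 \nabla^2 \rho'
  = \frac{\partial^2 T_{ij}}{\partial x_i \, \partial x_j},
\qquad
T_{ij} = \rho u_i u_j + \left(p' - c_0^2 \rho'\right)\delta_{ij} - \tau_{ij},
```

so a filter that removes small scales lowers the rms of the space and time derivatives of T_ij even when the filtered field itself, and hence the low-frequency sound, remains accurate.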

  10. Saturn: A large area X-ray simulation accelerator

    NASA Astrophysics Data System (ADS)

    Bloomquist, D. D.; Stinnett, R. W.; McDaniel, D. H.; Lee, J. R.; Sharpe, A. W.; Halbleib, J. A.; Schlitt, L. G.; Spence, P. W.; Corcoran, P.

    1987-06-01

    Saturn is the result of a major metamorphosis of the Particle Beam Fusion Accelerator-I (PBFA-I) from an ICF research facility to the large-area X-ray source of the Simulation Technology Laboratory (STL) project. Renamed Saturn, for its unique multiple-ring diode design, the facility is designed to take advantage of the numerous advances in pulsed power technology. Saturn will include significant upgrades in the energy storage and pulse-forming sections. The 36 magnetically insulated transmission lines (MITLs) that provided power flow to the ion diode of PBFA-I were replaced by a system of vertical triplate water transmission lines. These lines are connected to three horizontal triplate disks in a water convolute section. Power will flow through an insulator stack into radial MITLs that drive the three-ring diode. Saturn is designed to operate with a maximum of 750 kJ coupled to the three-ring e-beam diode with a peak power of 25 TW to provide an X-ray exposure capability of 5 x 10^12 rads/s (Si) and 5 cal/g (Au) over 500 cm^2.

  11. Large-eddy simulation of pulverized coal swirl jet flame

    NASA Astrophysics Data System (ADS)

    Muto, Masaya; Watanabe, Hiroaki; Kurose, Ryoichi; Komori, Satoru; Balusamy, Saravanan; Hochgreb, Simone

    2013-11-01

    Coal is an important energy resource for meeting the future demand for electricity, as coal reserves are much more abundant than those of other fossil fuels. In pulverized coal fired power plants, it is very important to improve the technology for the control of environmental pollutants such as nitrogen oxides, sulfur oxides, and ash particles including unburned carbon. In order to achieve these requirements, understanding the pulverized coal combustion mechanism is necessary. However, the combustion process of pulverized coal has not been well clarified so far, since pulverized coal combustion is a complicated phenomenon in which the maximum flame temperature exceeds 1500 degrees Celsius and which involves substances that can hardly be measured, for example, radical species and highly reactive solid particles. Accordingly, the development of new combustion furnaces and burners requires high cost and a long period. In this study, a large-eddy simulation (LES) is applied to a pulverized coal combustion field and the results are compared with the experiment. The results show that the present LES can capture the general features of the pulverized coal swirl jet flame.

  12. Large-eddy simulations of contrails in a turbulent atmosphere

    NASA Astrophysics Data System (ADS)

    Picot, J.; Paoli, R.; Thouron, O.; Cariolle, D.

    2014-11-01

    In this work, the evolution of contrails in the vortex and dissipation regimes is studied by means of fully three-dimensional large-eddy simulation (LES) coupled to a Lagrangian particle tracking method to treat the ice phase. This is the first paper where fine-scale atmospheric turbulence is generated and sustained by means of a stochastic forcing that mimics the properties of stably stratified turbulent flows such as those occurring in the upper troposphere/lower stratosphere. The initial flow field is composed of the turbulent background flow and a wake flow obtained from a separate LES of the jet regime. Atmospheric turbulence is the main driver of the wake instability, and the structure of the resulting wake is sensitive to the intensity of the perturbations, primarily in the vertical direction. Stronger turbulence accelerates the onset of the instability, which results in shorter contrail descent and more effective mixing in the interior of the plume. However, the self-induced turbulence that is produced in the wake after the vortex break-up dominates over the background turbulence at the end of the vortex regime and dominates the mixing with ambient air. This results in global microphysical characteristics, such as ice mass and optical depth, that are only slightly affected by the intensity of atmospheric turbulence. On the other hand, the background humidity and temperature have a first-order effect on the survival of ice crystals and on the particle size distribution, which is in line with recent and ongoing studies in the literature.

  13. Large-Scale Atomistic Simulations of Material Failure

    DOE Data Explorer

    Abraham, Farid [IBM Almaden Research]; Duchaineau, Mark [LLNL]; Wirth, Brian [LLNL]; Heidelberg; Seager, Mark [LLNL]; De La Rubia, Diaz [LLNL]

    These simulations from 2000 examine the supersonic propagation of cracks and the formation of complex junction structures in metals. Eight simulations concerning brittle fracture, ductile failure, and shockless compression are available.

  14. New 3000 Ton Large Volume Multi-Anvil Apparatus Installed at the University of Western Ontario

    NASA Astrophysics Data System (ADS)

    Secco, R.; Yong, W.

    2012-12-01

    The 6-8 type multi-anvil apparatus has been widely adopted to study the high pressure and high temperature behavior of minerals, rocks and other materials ever since its original invention, because of its advantages of large sample volumes, quasi-hydrostatic pressures, and relatively uniform temperatures. Recently, a 3000-ton multi-anvil apparatus was installed in the Department of Earth Sciences at the University of Western Ontario. This new apparatus has the capability of fully automatic control of both pressure and temperature in a user defined path. This new 3000-ton press employs a split-cylinder module that accommodates WC cubes up to 32 mm in edge length, allowing large sample volumes. Calibration experiments for the 18/11 OEL/TEL configuration were performed with Cr2O3-doped MgO octahedra and pyrophyllite gaskets. Room temperature calibration was achieved using the Bi I-II and III-V transitions at 2.55 GPa and 7.7 GPa respectively, and the Sn I-II transition at 9.4 GPa. High temperature calibration at 1200°C is based on the quartz-coesite transition at 3.2 GPa, the garnet-perovskite transition in CaGeO3 at 5.9 GPa, and the coesite-stishovite transition at 9.2 GPa. The sample volume can reach up to ~35 mm3 at pressures up to 10 GPa and temperatures over 2000°C, ideal for chemical synthesis of high pressure phases intended for subsequent analysis such as calorimetry.

  15. Adrenal suppression with inhaled budesonide and fluticasone propionate given by large volume spacer to asthmatic children.

    PubMed Central

    Clark, D. J.; Clark, R. A.; Lipworth, B. J.

    1996-01-01

    BACKGROUND: The aim of this study was to compare the systemic bioactivity of inhaled budesonide (B) and fluticasone propionate (F), each given by large volume spacer, on a microgram equivalent basis in asthmatic children. METHODS: Ten stable asthmatic children of mean age 11 years and forced expiratory volume in one second (FEV1) 81.6% predicted, who were receiving treatment with < or = 400 micrograms/day of inhaled corticosteroid, were studied in a placebo controlled single blind (investigator blind) randomised crossover design comparing single doses of inhaled budesonide and fluticasone propionate 400 micrograms, 800 micrograms, and 1250 micrograms. Doses were given at 20.00 hours with mouth rinsing and an overnight 12 hour urine sample was collected for estimation of free cortisol and creatinine excretion. RESULTS: The results of overnight 12 hour urinary cortisol output (nmol/12 hours) showed suppression with all doses of fluticasone propionate (as geometric means): F400 micrograms (11.99), F800 micrograms (6.49), F1250 micrograms (7.00) compared with placebo (24.43), whereas budesonide caused no suppression at any dose. A comparison of the drugs showed that there were differences at 800 micrograms and 1250 micrograms levels for urinary cortisol: B800 micrograms versus F800 micrograms (2.65-fold, 95% CI 1.26 to 5.58), B1250 micrograms versus F1250 micrograms (2.94-fold, 95% CI 1.67 to 5.15). The results for the cortisol/creatinine ratio were similar to that of urinary cortisol, with fluticasone causing suppression at all doses and with differences between the drugs at 800 micrograms and 1250 micrograms. CONCLUSIONS: Single doses of inhaled fluticasone produce greater systemic bioactivity than budesonide when given by large volume spacer on a microgram equivalent basis in asthmatic children. The systemic bioactivity of fluticasone, like budesonide, is due mainly to lung bioavailability. PMID:8984708
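The fold-differences quoted above are ratios of geometric means, the natural summary for log-normally distributed urinary cortisol excretion. A minimal sketch of that arithmetic, using made-up sample values rather than the study's data:

```python
import math

# Urinary cortisol outputs are summarized as geometric means (the mean is
# taken on the log scale), and a between-drug comparison is reported as the
# ratio of geometric means, i.e. a "fold" difference. The nmol/12 h values
# below are hypothetical, chosen only to illustrate the calculation.

def geometric_mean(xs):
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

budesonide = [30.1, 22.4, 18.9, 27.5]    # hypothetical cortisol outputs
fluticasone = [9.8, 7.1, 11.2, 6.4]

fold = geometric_mean(budesonide) / geometric_mean(fluticasone)
print(round(fold, 2))
```

Because the ratio is formed on the log scale, its confidence interval (like the 95% CIs quoted in the abstract) is computed from the standard error of the log-differences and then exponentiated.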

  16. Mechanically Cooled Large-Volume Germanium Detector Systems for Nuclear Explosion Monitoring

    SciTech Connect

    Hull, Ethan L.; Pehl, Richard H.; Lathrop, James R.; Martin, Gregory N.; Mashburn, R. B.; Miley, Harry S.; Aalseth, Craig E.; Hossbach, Todd W.; Bowyer, Ted W.

    2006-09-21

    Compact maintenance free mechanical cooling systems are being developed to operate large volume (~570 cm3, ~3 kg, 140% or larger) germanium detectors for field applications. We are using a new generation of Stirling-cycle mechanical coolers for operating the very largest volume germanium detectors with absolutely no maintenance or liquid nitrogen requirements. The user will be able to leave these systems unplugged on the shelf until needed. The flip of a switch will bring a system to life in ~1 hour for measurements. The maintenance-free operating lifetime of these detector systems will exceed five years. These features are necessary for remote long-duration liquid-nitrogen free deployment of large-volume germanium gamma-ray detector systems for Nuclear Explosion Monitoring (NEM). The Radionuclide Aerosol Sampler/Analyzer (RASA) will greatly benefit from the availability of such detectors by eliminating the need for liquid nitrogen at RASA sites while still allowing the very largest available germanium detectors to be utilized. These mechanically cooled germanium detector systems being developed here will provide the largest, most sensitive detectors possible for use with the RASA. To provide such systems, the appropriate technical fundamentals are being researched. Mechanical cooling of germanium detectors has historically been a difficult endeavor. The success or failure of mechanically cooled germanium detectors stems from three main technical issues: temperature, vacuum, and vibration. These factors affect one another. There is a particularly crucial relationship between vacuum and temperature. These factors will be experimentally studied both separately and together to insure a solid understanding of the physical limitations each factor places on a practical mechanically cooled germanium detector system for field use. Using this knowledge, a series of mechanically cooled germanium detector prototype systems are being designed and fabricated. 

  17. Controlled ice nucleation--Is it really needed for large-volume sperm cryopreservation?

    PubMed

    Saragusty, Joseph; Osmers, Jan-Hendrik; Hildebrandt, Thomas Bernd

    2016-04-15

    Controlled ice nucleation (CIN) is an integral stage of the slow-freezing process when relatively large volumes (usually 1 mL or larger) of biological samples in suspension are involved. Without it, a sample will supercool to well below its melting point before ice crystals start forming, resulting in multiple damaging processes. In this study, we tested the hypothesis that when freezing large volumes by the directional freezing technique, a CIN stage is not needed. Semen samples collected from ten bulls were frozen in 2.5-mL HollowTubes in a split-sample manner with and without a CIN stage. Thawed samples were evaluated for viability, acrosome integrity, and rate of normal morphology and, using a computer-aided sperm analysis system, for a wide range of motility parameters that were also evaluated after 3 hours of incubation at 37 °C. Analysis of the results found no difference between freezing with and without a CIN stage in any of the 29 parameters compared (P > 0.1 for all). This similarity was maintained through 3 hours of incubation at 37 °C. Possibly, because of its structure, the directional freezing device promotes continuous ice nucleation so that a specific CIN stage is no longer needed, thus reducing costs, energy use, and carbon footprint. PMID:26806291
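
    The split-sample design lends itself to a paired comparison, with each bull serving as its own control. A minimal sketch with illustrative numbers (not the study's data; the real analysis spanned 29 CASA-derived parameters):

```python
import math

# Illustrative paired motility values (percent motile) for ten bulls,
# frozen with and without a controlled ice nucleation (CIN) stage.
# These numbers are made up for demonstration only.
with_cin    = [62, 58, 71, 55, 66, 60, 59, 68, 63, 57]
without_cin = [61, 59, 70, 56, 65, 61, 58, 67, 64, 56]

# Paired t statistic: each bull serves as its own control.
diffs = [a - b for a, b in zip(with_cin, without_cin)]
n = len(diffs)
mean = sum(diffs) / n
sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
t = mean / (sd / math.sqrt(n))

# With df = 9, |t| < 2.262 corresponds to P > 0.05: no detectable effect.
print(f"mean difference = {mean:.2f}, t = {t:.3f}")
```

    A |t| well inside the critical value, as here, is the pattern consistent with the "no difference in any parameter" finding reported above.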

  18. Diethylaminoethyl-cellulose clean-up of a large volume naphthenic acid extract.

    PubMed

    Frank, Richard A; Kavanagh, Richard; Burnison, B Kent; Headley, John V; Peru, Kerry M; Van Der Kraak, Glen; Solomon, Keith R

    2006-08-01

    The Athabasca oil sands of Alberta, Canada, contain an estimated 174 billion barrels of bitumen. During oil sands refining, an extraction tailings mixture is produced that has been reported as toxic to aquatic organisms and is therefore collected in settling ponds on site. Investigation into the toxicity of these tailings pond waters has identified naphthenic acids (NAs) and their sodium salts as the major toxic components, and a multi-year study has been initiated to identify the principal toxic components within NA mixtures. Future toxicity studies require a large volume of a NA mixture; however, a well-defined bulk extraction technique is not available. This study investigated the use of a weak anion exchanger, diethylaminoethyl-cellulose (DEAE-cellulose), to remove the humic-like material present after collecting the organic acid fraction of oil sands tailings pond water. The NA extraction and clean-up procedure proved to be a fast and efficient method to process large volumes of tailings pond water, providing an extraction efficiency of 41.2%. The resulting concentrated NA solution had a composition that differed somewhat from oil sands fresh tailings, the most significant difference being a reduction in the abundance of lower molecular weight NAs. This reduction was mainly due to the initial acidification of the tailings pond water. The DEAE-cellulose treatment had only a minor effect on the NA concentration, no noticeable effect on the NA fingerprint, and no significant effect on the mixture's toxicity towards Vibrio fischeri. PMID:16469358

  19. Generation of Diffuse Large Volume Plasma by an Ionization Wave from a Plasma Jet

    NASA Astrophysics Data System (ADS)

    Laroussi, Mounir; Razavi, Hamid

    2015-09-01

    Low temperature plasma jets emitted in ambient air are the product of fast ionization waves that are guided within a channel of gas flow, such as helium. This guided ionization wave can be transmitted through a dielectric material and, under some conditions, can ignite a discharge behind it. Here we present a novel way to produce large volume, diffuse, low pressure plasma inside a Pyrex chamber that has no electrodes and no electrical energy directly applied to it. The diffuse plasma is ignited inside the chamber by a plasma jet located outside the chamber, physically and electrically unconnected to it: the jet is simply brought into close proximity to the external wall of the chamber or to a dielectric tube connected to the chamber. The plasma thus generated is diffuse and large volume, with physical and chemical characteristics different from those of the external plasma jet that ignited it. Using a plasma jet, we are therefore able to "remotely" ignite volumetric plasma under controlled conditions. This novel method of "remote" generation of a low pressure, low temperature diffuse plasma can be useful for various applications, including material processing and biomedicine.

  20. Broadband frequency ECR ion source concepts with large resonant plasma volumes

    SciTech Connect

    Alton, G.D.

    1995-12-31

    New techniques are proposed for enhancing the performance of ECR ion sources. The techniques are based on the use of high-power, variable-frequency, multiple-discrete-frequency, or broadband microwave radiation, derived from standard TWT technology, to effect large resonant "volume" ECR sources. The creation of a large ECR plasma "volume" permits coupling of more power into the plasma, resulting in the heating of a much larger electron population to higher energies, the effect of which is to produce higher charge state distributions and much higher intensities within a particular charge state than possible in present forms of the ECR ion source. If successful, these developments could significantly impact future accelerator designs and accelerator-based heavy-ion-research programs by providing multiply-charged ion beams with the energies and intensities required for nuclear physics research from existing ECR ion sources. The methods described in this article can be used to retrofit any ECR ion source predicated on B-minimum plasma confinement techniques.

  2. Large range rotation distortion measurement for remote sensing images based on volume holographic optical correlator

    NASA Astrophysics Data System (ADS)

    Zheng, Tianxiang; Cao, Liangcai; Zhao, Tian; He, Qingsheng; Jin, Guofan

    2012-10-01

    Volume holographic optical correlators compute the correlation between images at very high speed. In remote-sensing applications such as scene matching, 6,000 template images have been angularly multiplexed in the photorefractive crystal, yielding 6,000 parallel processing channels. To detect the correlation pattern of images precisely and distinguishably, an on-off pixel-inversion technique is proposed; it makes full use of the CCD's linear detection range and expands the differences in normalized correlation values as the target image rotates. Owing to the statistical characteristics of remote sensing images, formulas relating rotation distortion to the correlation results can be estimated, and the rotation component can be recovered by curve fitting to the measured correlation data. The intensities of the correlation spots are related to the distortion between the two images, so the rotation distortion can be derived from these intensities in post-processing. By applying 18 rotations to the input image and sending them into the volume holographic system, rotation variation can be detected over a range of 180°. This constitutes the first realization of large-range rotation-distortion detection, offering a fast, wide-range rotation measurement method for image distortion.
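
    The recovery step, estimating the rotation angle from correlation intensities measured at the 18 stored template rotations, can be sketched with synthetic data. The Gaussian correlation falloff and parabolic peak interpolation below are illustrative stand-ins for the paper's statistical curve-fitting procedure:

```python
import math

# Hypothetical correlation intensities against 18 template rotations
# (10 degrees apart, covering 180 degrees). The response is assumed to
# peak near the true rotation angle; the Gaussian width is arbitrary.
angles = [10.0 * k for k in range(18)]          # 0, 10, ..., 170 degrees
true_rotation = 63.0
corr = [math.exp(-((a - true_rotation) / 25.0) ** 2) for a in angles]

# Coarse estimate: the template with the strongest correlation.
i = corr.index(max(corr))

# Refine with three-point parabolic interpolation around the peak,
# a simple substitute for the curve fitting described in the abstract.
y0, y1, y2 = corr[i - 1], corr[i], corr[i + 1]
offset = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)   # in 10-degree template steps
estimate = angles[i] + 10.0 * offset
print(f"estimated rotation = {estimate:.1f} degrees")
```

    With a smooth, single-peaked correlation curve, the sub-step interpolation recovers the angle to a fraction of the 10° template spacing.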

  3. Colloids Versus Albumin in Large Volume Paracentesis to Prevent Circulatory Dysfunction: Evidence-based Case Report.

    PubMed

    Widjaja, Felix F; Khairan, Paramita; Kamelia, Telly; Hasan, Irsan

    2016-04-01

    Large volume paracentesis may cause paracentesis-induced circulatory dysfunction (PICD). Albumin is recommended to prevent this complication, but it is expensive, motivating the search for an alternative that may prevent PICD. This report aimed to compare albumin to colloids in preventing PICD. The search strategy used PubMed, Scopus, ProQuest, and Academic Health Complete from EBSCO with the keywords "ascites", "albumin", "colloid", "dextran", "hydroxyethyl starch", "gelatin", and "paracentesis induced circulatory dysfunction". Articles were limited to randomized clinical trials and meta-analyses addressing the clinical question "In cirrhotic patients undergoing large volume paracentesis, are colloids comparable to albumin for preventing PICD?". We found one meta-analysis and four randomized clinical trials (RCTs). The meta-analysis showed that albumin remained superior, with an odds ratio of 0.34 (0.23-0.51). Three RCTs showed the same result, and one RCT found albumin no better than colloids. We conclude that colloids cannot substitute for albumin in preventing PICD, but colloids still have a role in patients undergoing paracentesis of less than five liters. PMID:27550886
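
    The reported effect size can be unpacked on the log-odds scale, where a 95% confidence interval spans roughly ±1.96 standard errors. A quick sketch using only the numbers quoted in the abstract:

```python
import math

# Meta-analysis result quoted above: OR = 0.34, 95% CI (0.23, 0.51),
# favouring albumin over colloids for preventing PICD.
or_point, ci_low, ci_high = 0.34, 0.23, 0.51

# On the log scale the CI half-width is 1.96 standard errors, so the
# standard error of log(OR) can be recovered from the reported bounds.
se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)
z = math.log(or_point) / se
print(f"SE(log OR) = {se:.3f}, z = {z:.2f}")

# The CI lies entirely below 1 (z far beyond -1.96), consistent with
# the conclusion that albumin remains superior.
```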

  4. Formation of Large-Volume High-Pressure Plasma in Triode-Configuration Discharge Devices

    NASA Astrophysics Data System (ADS)

    Jiang, Chao; Wang, Youqing

    2006-03-01

    A "plane cathode micro-hollow anode discharge (PCHAD)" is studied in comparison with the micro-hollow cathode discharge (MHCD). A new triode-configuration discharge device is also designed to produce large-volume, high-pressure glow-discharge plasma without glow-to-arc transitions, using a metal needle anode together with the cathode of the PCHAD to sustain a "needle-hole" glow discharge. Its discharge circuit employs only one power supply with a variable resistor. The discharge experiments were carried out in air. The electrical properties and photo-images of PCHAD, multi-PCHAD, and the "needle-hole" sustained discharge have been investigated. The electrical and optical measurements show that this triode-configuration device can operate stably at high pressure, in parallel, without individual ballast resistors, and the electron density of the plasma is estimated to be up to 10^12 cm^-3. Compared with a two-supply circuit system, this electrode configuration is very simple and lowers the cost of generating large-volume plasma at high pressures.

  5. Nuclear EMP simulation for large-scale urban environments. FDTD for electrically large problems.

    SciTech Connect

    Smith, William S.; Bull, Jeffrey S.; Wilcox, Trevor; Bos, Randall J.; Shao, Xuan-Min; Goorley, John T.; Costigan, Keeley R.

    2012-08-13

    In case of a terrorist nuclear attack in a metropolitan area, EMP measurement could provide: (1) prompt confirmation of the nature of the explosion (chemical or nuclear) for emergency response; and (2) characterization parameters of the device (reaction history, yield) for technical forensics. However, the urban environment could affect the fidelity of the prompt EMP measurement (as well as all other types of prompt measurement): (1) the nuclear EMP wavefront would no longer be coherent, due to incoherent production, attenuation, and propagation of gammas and electrons; and (2) EMP propagation outward from the source region would undergo complicated transmission, reflection, and diffraction processes. For EMP simulation in an electrically large urban environment we use: (1) a coupled MCNP/FDTD (finite-difference time-domain Maxwell solver) approach; and (2) because standard FDTD tends to be limited to problems that are not too large compared to the wavelengths of interest, owing to numerical dispersion and anisotropy, a higher-order, low-dispersion, isotropic FDTD algorithm for EMP propagation.
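
    The FDTD method referenced above marches Maxwell's equations on a staggered grid. A minimal 1-D sketch of the standard second-order Yee update is shown below; it is illustrative only (the abstract's algorithm is a higher-order low-dispersion variant, and all grid sizes and source parameters here are arbitrary, in normalized units with c = 1 and dz = dt):

```python
import math

# Minimal 1-D Yee FDTD update: E and H live on a staggered grid and
# leapfrog in time. Normalized units (c = 1, dz = dt, unit impedance).
nz, nt = 400, 500
ez = [0.0] * nz          # electric field nodes
hy = [0.0] * nz          # magnetic field nodes (staggered)

for t in range(nt):
    # Update E from the spatial difference of H.
    for k in range(1, nz):
        ez[k] += hy[k] - hy[k - 1]
    # Soft Gaussian source injected at one grid point.
    ez[50] += math.exp(-((t - 40) / 12.0) ** 2)
    # Update H from the spatial difference of E.
    for k in range(nz - 1):
        hy[k] += ez[k + 1] - ez[k]

peak = max(abs(v) for v in ez)
print(f"peak |Ez| after {nt} steps = {peak:.3f}")
```

    At the 1-D Courant limit this update propagates the pulse without dispersion; in 2-D/3-D, grid dispersion and anisotropy appear, which is what motivates the higher-order scheme described in the abstract.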

  6. Enrichment of diluted cell populations from large sample volumes using 3D carbon-electrode dielectrophoresis.

    PubMed

    Islam, Monsur; Natu, Rucha; Larraga-Martinez, Maria Fernanda; Martinez-Duarte, Rodrigo

    2016-05-01

    Here, we report on an enrichment protocol using carbon electrode dielectrophoresis to isolate and purify a targeted cell population from sample volumes up to 4 ml. We aim at trapping, washing, and recovering an enriched cell fraction that will facilitate downstream analysis. We used an increasingly diluted sample of yeast, 10^6-10^2 cells/ml, to demonstrate the isolation and enrichment of a few cells at increasing flow rates. A maximum average enrichment of 154.2 ± 23.7 times was achieved when the sample flow rate was 10 μl/min and yeast cells were suspended in low electrically conductive media that maximizes dielectrophoresis trapping. A COMSOL Multiphysics model allowed for the comparison between experimental and simulation results. Discussion is conducted on the discrepancies between such results and how the model can be further improved. PMID:27375816
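
    Enrichment here is the ratio of target-cell concentration in the recovered fraction to that in the input sample. A toy mass-balance sketch follows; the trapped fraction and elution volume are assumptions chosen for illustration, not the paper's measurements:

```python
# Toy mass balance for trap-wash-recover enrichment.
# All numbers below are hypothetical, not data from the paper.
sample_volume_ul = 4000.0        # 4 ml of sample processed
input_conc = 1e3                 # cells/ml entering the device
trapped_fraction = 0.77          # assumed fraction captured by DEP trapping
recovered_volume_ul = 20.0       # assumed small elution volume after washing

cells_in = input_conc * sample_volume_ul / 1000.0
cells_recovered = cells_in * trapped_fraction
recovered_conc = cells_recovered / (recovered_volume_ul / 1000.0)

enrichment = recovered_conc / input_conc
print(f"enrichment = {enrichment:.1f}x")
```

    The sketch makes the mechanism explicit: enrichment comes from concentrating most of the trapped cells into a much smaller recovered volume.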

  8. Improved engine wall models for Large Eddy Simulation (LES)

    NASA Astrophysics Data System (ADS)

    Plengsaard, Chalearmpol

    Improved wall models for Large Eddy Simulation (LES) are presented in this research. The classical Werner-Wengle (WW) wall shear stress model is used along with near-wall sub-grid scale viscosity. A sub-grid scale turbulent kinetic energy is employed in a model for the eddy viscosity. To gain better heat flux results, a modified classical variable-density wall heat transfer model is also used. Because no experimental wall shear stress results are available in engines, fully developed turbulent flow in a square duct is chosen to validate the new wall models. The model constants in the new wall models are set to 0.01 and 0.8, respectively, and are kept constant throughout the investigation. The resulting time- and spatially-averaged velocity and temperature wall functions from the new wall models match well with the law-of-the-wall experimental data at Re = 50,000. To study the effect of hot air impinging on walls, jet impingement on a flat plate is also tested with the new wall models. The jet Reynolds number is 21,000, with a fixed jet-to-plate spacing of H/D = 2.0. As predicted by the new wall models, the time-averaged skin friction coefficient agrees well with experimental data, while the computed Nusselt number agrees fairly well when r/D > 2.0. Additionally, the model is validated using experimental data from a Caterpillar engine operated with conventional diesel combustion. Sixteen different operating engine conditions are simulated. The majority of the predicted heat flux results from each thermocouple location follow similar trends when compared with experimental data. The magnitude of peak heat fluxes as predicted by the new wall models is in the range of typical measured values in diesel combustion, while most heat flux results from previous LES wall models are over-predicted. The new wall models generate more accurate predictions and agree better with experimental data.
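
    The classical Werner-Wengle model referenced above replaces the log law with a two-layer profile: u+ = y+ in the viscous sublayer and a power law u+ = A(y+)^B above it, with the standard constants A = 8.3 and B = 1/7 (these are the textbook WW constants, distinct from the 0.01 and 0.8 constants of the new models). A sketch of the resulting wall function:

```python
# Two-layer Werner-Wengle velocity profile with the standard constants.
A, B = 8.3, 1.0 / 7.0

# The two branches intersect where y+ = A * (y+)^B, i.e. y+ = A^(1/(1-B)).
y_plus_switch = A ** (1.0 / (1.0 - B))

def u_plus(y_plus):
    """Nondimensional velocity u+ from the two-layer WW profile."""
    if y_plus <= y_plus_switch:
        return y_plus                     # viscous sublayer: u+ = y+
    return A * y_plus ** B                # power-law outer branch

print(f"branch switch at y+ = {y_plus_switch:.2f}")
for yp in (5.0, 30.0, 100.0):
    print(f"y+ = {yp:6.1f}  ->  u+ = {u_plus(yp):.2f}")
```

    The switch point falls at y+ ≈ 11.81, where the two branches join continuously; in an LES wall model this profile is inverted to obtain the wall shear stress from the velocity at the first off-wall grid point.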

  9. Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Givi, P.; Madnia, C. K.; Steinberger, C. J.; Frankel, S. H.; Vidoni, T. J.

    1991-01-01

    The main objective is to extend the boundaries within which large eddy simulations (LES) and direct numerical simulations (DNS) can be applied in computational analyses of high speed reacting flows. In the efforts related to LES, we were concerned with developing reliable subgrid closures for modeling the fluctuation correlations of scalar quantities in reacting turbulent flows. In the work on DNS, we focused our attention on further investigation of the effects of exothermicity in compressible turbulent flows. In our previous work, in the first year of this research, we considered only 'simple' flows. Currently, we are extending our analyses to model more practical flows of current interest at LaRC. A summary of our accomplishments during the third six months of the research is presented.

  10. The Biological Response following Autogenous Bone Grafting for Large-Volume Defects of the Knee

    PubMed Central

    DeLano, Mark C.; Spector, Myron; Jeng, Lily; Pittsley, Andrew; Gottschalk, Alexander

    2012-01-01

    Objective: This report focuses on the biological events occurring at various intervals following autogenous bone grafting of large-volume defects of the knee joint’s femoral condyle secondary to osteochondritis dissecans (OCD) or osteonecrosis (ON). It was hypothesized that the autogenous bone graft would integrate and the portion exposed to the articular surface would form fibrocartilage, which would endure for years. Methods: Between September 29, 1987 and August 8, 1994, there were 51 patients treated with autogenous bone grafting for large-volume osteochondral defects. Twenty-five of the 51 patients were available for long-term follow-up up to 21 years. Patient follow-up was accomplished by clinical opportunity and intentional research. Videotapes were available on all index surgeries for review and comparison. All had preoperative and postoperative plain film radiographs. Long-term follow-up included MRI up to 21 years. Second-look arthroscopy and biopsy were obtained on 14 patients between 8 weeks and 20 years. Results: Radiological assessment showed the autogenous bone grafts integrated with the host bone. The grafts retained the physical geometry of the original placement. MRI showed soft tissue covering the grafts in all cases at long-term follow-up. Interval biopsy showed the surface covered with fibrous tissue at 8 weeks and subsequently converted to fibrocartilage with hyaline cartilage at 20 years. Conclusion: Autogenous bone grafting provides a matrix for large osteochondral defects that integrates with the host bone and results in a surface repair of fibrocartilage and hyaline cartilage that can endure for up to 20 years. PMID:26069622

  11. Multi-Resolution Modeling of Large Scale Scientific Simulation Data

    SciTech Connect

    Baldwin, C; Abdulla, G; Critchlow, T

    2002-02-25

    Data produced by large scale scientific simulations, experiments, and observations can easily reach terabytes in size. The ability to examine data-sets of this magnitude, even in moderate detail, is problematic at best. Generally this scientific data consists of multivariate field quantities with complex inter-variable correlations and spatial-temporal structure. To provide scientists and engineers with the ability to explore and analyze such data sets we are using a twofold approach. First, we model the data with the objective of creating a compressed yet manageable representation. Second, with that compressed representation, we provide the user with the ability to query the resulting approximation to obtain approximate yet sufficient answers; a process called ad hoc querying. This paper is concerned with a wavelet modeling technique that seeks to capture the important physical characteristics of the target scientific data. Our approach is driven by the compression, which is necessary for viable throughput, along with the end user requirements of the discovery process. Our work contrasts with existing research, which applies wavelets to range querying, change detection, and clustering problems by working directly with a decomposition of the data; the difference is due primarily to the nature of the data and the requirements of the scientists and engineers. Our approach directly uses the wavelet coefficients of the data to compress as well as query. We provide some background on the problem, describe how the wavelet decomposition is used to facilitate data compression, and show how queries are posed on the resulting compressed model. Results of this process are shown for several problems of interest, and we end with some observations and conclusions about this research.
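
    The compress-then-query idea can be illustrated with a one-level Haar transform: keep the pairwise averages, threshold the small detail coefficients, and answer an ad hoc range query against the compressed model. This is a toy stand-in for the paper's wavelet decomposition; the data and threshold are made up:

```python
# One-level Haar transform: averages carry the coarse signal,
# details carry the local differences.
def haar_forward(x):
    avg = [(a + b) / 2 for a, b in zip(x[0::2], x[1::2])]
    det = [(a - b) / 2 for a, b in zip(x[0::2], x[1::2])]
    return avg, det

def haar_inverse(avg, det):
    out = []
    for a, d in zip(avg, det):
        out += [a + d, a - d]
    return out

data = [4.0, 4.2, 4.1, 3.9, 9.0, 9.3, 9.1, 8.8]
avg, det = haar_forward(data)

# "Compression": zero out detail coefficients below a threshold.
det_c = [d if abs(d) > 0.2 else 0.0 for d in det]
approx = haar_inverse(avg, det_c)

# Ad hoc query answered against the compressed model: a range average.
q = sum(approx[0:4]) / 4
print(f"approximate range average = {q:.3f}")
```

    Because Haar averages are preserved exactly, range averages survive the thresholding unchanged here, while pointwise values carry a bounded error, which is the approximate-but-sufficient trade-off the paper describes.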

  12. GCM Simulation of the Large-scale North American Monsoon Including Water Vapor Tracer Diagnostics

    NASA Technical Reports Server (NTRS)

    Bosilovich, Michael G.; Walker, Gregory; Schubert, Siegfried D.; Sud, Yogesh; Atlas, Robert M. (Technical Monitor)

    2001-01-01

    The geographic sources of water for the large-scale North American monsoon in a GCM are diagnosed using passive constituent tracers of regional water sources (Water Vapor Tracers, WVT). The NASA Data Assimilation Office Finite Volume (FV) GCM was used to produce a 10-year simulation (1984 through 1993) including observed sea surface temperature. Regional and global WVT sources were defined to delineate the surface origin of water for precipitation in and around the North American Monsoon. The evolution of the mean annual cycle and the interannual variations of the monsoonal circulation will be discussed. Of special concern are the relative contributions of the local source (precipitation recycling) and remote sources of water vapor to the annual cycle and the interannual variation of warm season precipitation. The relationships between soil water, surface evaporation, precipitation and precipitation recycling will be evaluated.

  13. GCM Simulation of the Large-Scale North American Monsoon Including Water Vapor Tracer Diagnostics

    NASA Technical Reports Server (NTRS)

    Bosilovich, Michael G.; Walker, Gregory; Schubert, Siegfried D.; Sud, Yogesh; Atlas, Robert M. (Technical Monitor)

    2002-01-01

    The geographic sources of water for the large scale North American monsoon in a GCM (General Circulation Model) are diagnosed using passive constituent tracers of regional water sources (Water Vapor Tracers, WVT). The NASA Data Assimilation Office Finite Volume (FV) GCM was used to produce a 10-year simulation (1984 through 1993) including observed sea surface temperature. Regional and global WVT sources were defined to delineate the surface origin of water for precipitation in and around the North American Monsoon. The evolution of the mean annual cycle and the interannual variations of the monsoonal circulation will be discussed. Of special concern are the relative contributions of the local source (precipitation recycling) and remote sources of water vapor to the annual cycle and the interannual variation of monsoonal precipitation. The relationships between soil water, surface evaporation, precipitation and precipitation recycling will be evaluated.
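
    The recycling diagnostic described in these two studies reduces to a simple ratio once the tracers have partitioned precipitation by evaporative source region. A sketch with hypothetical tracer totals (the region names and values are illustrative, not model output):

```python
# Water vapor tracers tag precipitation by the region where the water
# evaporated. The recycling ratio is the locally-evaporated share of
# local precipitation. Monthly totals (mm) below are hypothetical.
precip_by_source = {
    "local (recycled)": 22.0,
    "Gulf of California": 31.0,
    "Gulf of Mexico": 18.0,
    "Pacific / other remote": 29.0,
}

total = sum(precip_by_source.values())
recycling_ratio = precip_by_source["local (recycled)"] / total
print(f"total precipitation = {total:.0f} mm, "
      f"recycling ratio = {recycling_ratio:.2f}")
```

    In the GCM the same bookkeeping is done per grid box and time step, which is what allows the local versus remote contributions to the annual cycle to be separated.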

  14. A scalable messaging system for accelerating discovery from large scale scientific simulations

    SciTech Connect

    Jin, Tong; Zhang, Fan; Parashar, Manish; Klasky, Scott A; Podhorszki, Norbert; Abbasi, Hasan

    2012-01-01

    Emerging scientific and engineering simulations running at scale on leadership-class High End Computing (HEC) environments are producing large volumes of data, which has to be transported and analyzed before any insights can result from these simulations. The complexity and cost (in terms of time and energy) associated with managing and analyzing this data have become significant challenges, and are limiting the impact of these simulations. Recently, data-staging approaches along with in-situ and in-transit analytics have been proposed to address these challenges by offloading I/O and/or moving data processing closer to the data. However, scientists continue to be overwhelmed by the large data volumes and data rates. In this paper we address this latter challenge. Specifically, we propose a highly scalable and low-overhead associative messaging framework that runs on the data staging resources within the HEC platform, and builds on the staging-based online in-situ/in-transit analytics to provide publish/subscribe/notification-type messaging patterns to the scientist. Rather than having to ingest and inspect the data volumes, this messaging system allows scientists to (1) dynamically subscribe to data events of interest, e.g., simple data values or a complex function or simple reduction (max()/min()/avg()) of the data values in a certain region of the application domain is greater/less than a threshold value, or certain spatial/temporal data features or data patterns are detected; (2) define customized in-situ/in-transit actions that are triggered based on the events, such as data visualization or transformation; and (3) get notified when these events occur. The key contribution of this paper is a design and implementation that can support such a messaging abstraction at scale on high-end computing (HEC) systems with minimal overheads. We have implemented and deployed the messaging system on the Jaguar Cray XK6 machines at Oak Ridge National Laboratory and the
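
    The subscribe/notify pattern in items (1)-(3) can be sketched as a minimal in-process event bus with predicate-based subscriptions. This is a toy illustration of the messaging abstraction only; the actual framework runs distributed across staging resources on HEC systems:

```python
# Minimal predicate-based publish/subscribe: subscribers register a
# condition on the data and an action to run when it holds.
class EventBus:
    def __init__(self):
        self.subscriptions = []          # list of (predicate, action) pairs

    def subscribe(self, predicate, action):
        self.subscriptions.append((predicate, action))

    def publish(self, region_data):
        # Evaluate every subscription against the published data.
        for predicate, action in self.subscriptions:
            if predicate(region_data):
                action(region_data)

notifications = []
bus = EventBus()
# Subscribe to the event "max over the monitored region exceeds 100".
bus.subscribe(lambda d: max(d) > 100.0,
              lambda d: notifications.append(max(d)))

bus.publish([10.0, 42.0, 99.5])          # below threshold: no notification
bus.publish([10.0, 142.0, 99.5])         # triggers the subscription
print(f"notifications: {notifications}")
```

    The design point the paper makes is that the predicate (a reduction over a region) is evaluated where the data already sits, so the scientist receives only the notification, not the data volume.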

  15. Calcium Isolation from Large-Volume Human Urine Samples for 41Ca Analysis by Accelerator Mass Spectrometry

    PubMed Central

    Miller, James J; Hui, Susanta K; Jackson, George S; Clark, Sara P; Einstein, Jane; Weaver, Connie M; Bhattacharyya, Maryka H

    2013-01-01

    Calcium oxalate precipitation is the first step in preparation of biological samples for 41Ca analysis by accelerator mass spectrometry. A simplified protocol for large-volume human urine samples was characterized, with statistically significant increases in ion current and decreases in interference. This large-volume assay minimizes cost and effort and maximizes time after 41Ca administration during which human samples, collected over a lifetime, provide 41Ca:Ca ratios that are significantly above background. PMID:23672965

  16. Large-eddy simulation of bubble-driven plume in stably stratified flow.

    NASA Astrophysics Data System (ADS)

    Yang, Di; Chen, Bicheng; Socolofsky, Scott; Chamecki, Marcelo; Meneveau, Charles

    2015-11-01

    The interaction between a bubble-driven plume and a stratified water column plays a vital role in many environmental and engineering applications. As the bubbles are released from a localized source, they induce a positive buoyancy flux that generates an upward plume. As the plume rises, it entrains ambient water, and when it reaches a higher elevation where the stratification-induced negative buoyancy is sufficient, a considerable fraction of the entrained fluid detrains, or peels, to form a downward outer plume and a lateral intrusion layer. In the case of multiphase plumes, the intrusion layer may also trap weakly buoyant particles (e.g., oil droplets in the case of a subsea accidental blowout). In this study, the complex plume dynamics are studied using large-eddy simulation (LES), with the flow field simulated by a hybrid pseudospectral/finite-difference scheme and the bubble and dye concentration fields simulated by a finite-volume scheme. The spatial and temporal characteristics of the buoyant plume are studied, with a focus on the effects of different bubble buoyancy levels. The LES data provide useful mean plume statistics for evaluating the accuracy of 1-D engineering models for entrainment and peeling fluxes. Based on the insights learned from the LES, a new continuous peeling model is developed and tested. Study supported by the Gulf of Mexico Research Initiative (GoMRI).

  17. Design, simulation, and optimization of an RGB polarization independent transmission volume hologram

    NASA Astrophysics Data System (ADS)

    Mahamat, Adoum Hassan

    Volume phase holographic (VPH) gratings have been designed for use in many areas of science and technology, such as optical communication, medical imaging, spectroscopy, and astronomy. The goal of this dissertation is to design a volume phase holographic grating that provides diffraction efficiencies of at least 70% across the entire visible spectrum and higher than 90% for red, green, and blue light when the incident light is unpolarized. First, the complete design, simulation, and optimization of the volume hologram are presented. The optimization uses a Monte Carlo analysis to solve for the index modulation needed to provide higher diffraction efficiencies; the solutions are determined by solving the diffraction efficiency equations of Kogelnik's two-wave coupled-wave theory. The hologram is further optimized using rigorous coupled-wave analysis to correct for effects of absorption omitted by Kogelnik's method. Second, the fabrication, or recording, process of the volume hologram is described in detail; the active region of the volume hologram is created by interference of two coherent beams within the thin film. Third, the experimental setup and the measurement of properties including the diffraction efficiencies of the volume hologram and the thickness of the active region are presented. Fourth, the polarimetric response of the volume hologram is investigated, providing insight into the effect of the refractive index modulation on the polarization state and diffraction efficiency of the incident light.
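
    Kogelnik's two-wave coupled-wave result for a lossless, unslanted transmission grating at Bragg incidence is eta = sin^2(pi * dn * d / (lambda * cos(theta))). A sketch evaluating it for red, green, and blue wavelengths follows; all parameter values (index modulation, thickness, Bragg angle) are assumptions for illustration, not the dissertation's design:

```python
import math

# Kogelnik diffraction efficiency for a lossless, unslanted transmission
# volume grating at Bragg incidence: eta = sin^2(nu),
# nu = pi * dn * d / (lambda * cos(theta)).
def kogelnik_efficiency(dn, thickness_um, wavelength_um, theta_deg):
    nu = math.pi * dn * thickness_um / (
        wavelength_um * math.cos(math.radians(theta_deg)))
    return math.sin(nu) ** 2

# Illustrative R, G, B wavelengths with an assumed index modulation.
for lam, dn in [(0.633, 0.015), (0.532, 0.015), (0.473, 0.015)]:
    eta = kogelnik_efficiency(dn, thickness_um=10.0,
                              wavelength_um=lam, theta_deg=20.0)
    print(f"lambda = {lam} um: efficiency = {eta:.2f}")
```

    The sin^2 dependence is why a single (dn, d) pair cannot peak at all three wavelengths at once, which is the trade-off the Monte Carlo optimization over the index modulation is addressing.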

  18. Large-Volume Resonant Microwave Discharge for Plasma Cleaning of a CEBAF 5-Cell SRF Cavity

    SciTech Connect

    J. Mammosser, S. Ahmed, K. Macha, J. Upadhyay, M. Nikolić, S. Popović, L. Vušković

    2012-07-01

    We report preliminary results on plasma generation in a 5-cell CEBAF superconducting radio-frequency (SRF) cavity for the application of cavity interior surface cleaning. CEBAF currently has ~300 of these five-cell cavities installed in the Jefferson Lab accelerator, most of which are limited by cavity surface contamination. The development of an in-situ cavity surface cleaning method utilizing a resonant microwave discharge could lead to significant CEBAF accelerator performance improvement. This microwave discharge is currently being used to develop a set of plasma cleaning procedures targeted at the removal of various organic, metal, and metal oxide impurities. These contaminants are responsible for the increase of surface resistance and the reduction of RF performance in installed cavities. The CEBAF five-cell cavity volume is ~0.5 m2, which places the discharge in the category of large-volume plasmas. The CEBAF cavity has cylindrical symmetry, but its elliptical shape and transversal power coupling make it an unusual plasma application, requiring special consideration of microwave breakdown. Our preliminary study includes microwave breakdown and optical spectroscopy, which were used to define the operating pressure range and the rate of removal of organic impurities.

  19. A large volume uniform plasma generator for the experiments of electromagnetic wave propagation in plasma

    SciTech Connect

    Yang Min; Li Xiaoping; Xie Kai; Liu Donglin; Liu Yanming

    2013-01-15

    A large volume uniform plasma generator is proposed for experiments on electromagnetic (EM) wave propagation in plasma, to reproduce a long-duration 'blackout' phenomenon in an ordinary laboratory environment. The plasma generator achieves a controllable, approximately uniform plasma in a volume of 260 mm × 260 mm × 180 mm without magnetic confinement. The plasma is produced by a glow discharge, and a special discharge structure is built to provide a steady, approximately uniform plasma environment in the electromagnetic wave propagation path without any other barriers. In addition, the electron density and luminosity distributions of the plasma under different discharge conditions were diagnosed and experimentally investigated. Both the electron density and the plasma uniformity are directly proportional to the input power and roughly inversely proportional to the gas pressure in the chamber. Furthermore, experiments on electromagnetic wave propagation in plasma were conducted in this generator. Blackout phenomena at the GPS signal frequency were observed, and the measured attenuation curve is in reasonable agreement with the theoretical one, which suggests the effectiveness of the proposed method.

  20. Manganese content of large-volume parenteral solutions and of nutrient additives.

    PubMed

    Kurkus, J; Alcock, N W; Shils, M E

    1984-01-01

    Manganese (Mn) was analyzed by flameless atomic absorption spectrophotometry in a variety of commercially produced solutions and additives commonly used in total parenteral nutrition (TPN). The amount of Mn in the preparations tested varied among manufacturers and among lots. It was generally present in very small amounts, with amino acid preparations supplying the major portion in the TPN formulas. Among amino acid solutions, Aminosyn 10% had the highest Mn content (5.2-17.0 micrograms/liter), with Veinamine 8%, FreAmine II 8.5%, Travasol 10%, and Nephramine having less than 6.7 micrograms/liter. Other large volume parenterals contained appreciably less Mn, e.g., Dextrose 50% had 0.64-2.5 micrograms/liter. Some of the additives were high in Mn, e.g., potassium phosphate (280 micrograms/liter), magnesium sulfate 50% (up to 225 micrograms/liter), and Berocca C (245.8 micrograms/liter), but their actual contributions to daily TPN intake were no more than 3.3 micrograms. The calculated Mn content in TPN formulas with varying source materials ranged from 8.07 to 21.75 micrograms per total daily volume. These values agreed with those obtained from analysis of actual TPN solutions. The values for 10% Intralipid and 20% Liposyn were 0.5 and 3.0 micrograms/liter, respectively.

  1. A scanning transmission electron microscopy approach to analyzing large volumes of tissue to detect nanoparticles.

    PubMed

    Kempen, Paul J; Thakor, Avnesh S; Zavaleta, Cristina; Gambhir, Sanjiv S; Sinclair, Robert

    2013-10-01

    The use of nanoparticles for the diagnosis and treatment of cancer requires the complete characterization of their toxicity, including accurately locating them within biological tissues. Owing to their size, traditional light microscopy techniques are unable to resolve them. Transmission electron microscopy provides the necessary spatial resolution to image individual nanoparticles in tissue, but is severely limited by the very small analysis volume, usually on the order of tens of cubic microns. In this work, we developed a scanning transmission electron microscopy (STEM) approach to analyze large volumes of tissue for the presence of polyethylene glycol-coated Raman-active silica-gold nanoparticles (PEG-R-Si-Au-NPs). This approach utilizes the simultaneous bright and dark field imaging capabilities of STEM, along with careful control of the image contrast settings, to readily identify PEG-R-Si-Au-NPs in mouse liver tissue without the need for additional time-consuming analytical characterization. We utilized this technique to analyze 243,000 µm³ of mouse liver tissue for the presence of PEG-R-Si-Au-NPs. Nanoparticles injected into the mice intravenously via the tail vein accumulated in the liver, whereas those injected intrarectally did not, indicating that they remain in the colon and do not pass through the colon wall into the systemic circulation.

  2. Monte Carlo calculations of the HPGe detector efficiency for radioactivity measurement of large volume environmental samples.

    PubMed

    Azbouche, Ahmed; Belgaid, Mohamed; Mazrou, Hakim

    2015-08-01

    A fully detailed Monte Carlo geometrical model of a High Purity Germanium detector with a (152)Eu source, packed in a Marinelli beaker, was developed for routine analysis of large volume environmental samples. The model parameters, in particular the dead layer thickness, were then adjusted by means of a specific irradiation configuration together with a fine-tuning procedure. Thereafter, the calculated efficiencies were compared to the measured ones for standard samples containing a (152)Eu source in both grass and resin matrices packed in Marinelli beakers. This comparison showed good agreement between experiment and Monte Carlo calculation, thereby highlighting the consistency of the geometrical computational model proposed in this work. Finally, the computational model was applied successfully to determine the (137)Cs distribution in a soil matrix. This application yielded instructive results, highlighting in particular the erosion and accumulation zones of the studied site.
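A toy version of the Monte Carlo idea is sampling isotropic photon directions from a point source and counting those that strike the detector face; the full model described above adds the crystal geometry, dead layer, and photon transport physics. The geometry below (an on-axis point source facing a bare circular detector face) and all names and dimensions are illustrative only:

```python
import math, random

def geometric_efficiency(n_samples, source_to_face_cm, detector_radius_cm, seed=1):
    """Toy Monte Carlo estimate of the geometric (solid-angle) efficiency of a
    circular detector face viewed from an on-axis point source."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        cos_t = rng.uniform(-1.0, 1.0)  # isotropic emission: cos(theta) uniform
        if cos_t <= 0.0:
            continue  # photon emitted away from the detector
        # radial distance where the ray crosses the detector plane
        tan_t = math.sqrt(1.0 - cos_t * cos_t) / cos_t
        if source_to_face_cm * tan_t <= detector_radius_cm:
            hits += 1
    return hits / n_samples
```

The estimate can be checked against the closed form Ω/4π = (1 − d/√(d² + r²))/2, which is how one would validate the sampling before adding detector physics.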

  3. Practical Gamma Spectroscopy Assay Techniques for Large Volume Low-Level Waste Boxes

    SciTech Connect

    Myers, S. C.; Gruetzmacher, K.; Sheffing, C. C.; Gallegos, L.; Bustos, R.

    2002-02-26

    A study was conducted at the Los Alamos National Laboratory (LANL) to evaluate the performance of the SNAP (Spectral Nondestructive Assay Platform) analytical software for measurements of known standards in large metal waste boxes (2.5 m³ volume). The trials were designed to test the accuracy and variance of the analytical results for low-density combustible matrices and higher-density metal matrices at two discrete gamma-ray energies: 121.78 keV and 411.12 keV. For both matrix types, the measurement method that produced the most accurate results with the lowest associated standard deviation involved combining four individual measurements taken at the geometric center of each of the box's four vertical sides.

  4. Isolation of organic acids from large volumes of water by adsorption on macroporous resins

    USGS Publications Warehouse

    Aiken, George R.; Suffet, I.H.; Malaiyandi, Murugan

    1987-01-01

    Adsorption on synthetic macroporous resins, such as the Amberlite XAD series and Duolite A-7, is routinely used to isolate and concentrate organic acids from large volumes of water. Samples as large as 24,500 L have been processed on site by using these resins. Two established extraction schemes using XAD-8 and Duolite A-7 resins are described. The choice of the appropriate resin and extraction scheme depends on the organic solutes of interest. The factors that affect resin performance, selectivity, and capacity for a particular solute are solution pH, resin surface area and pore size, and resin composition. The logistical problems of sample handling, filtration, and preservation are also discussed.

  5. A large volume 2000 MPa air source for the radiatively driven hypersonic wind tunnel

    SciTech Connect

    Constantino, M

    1999-07-14

    An ultra-high-pressure air source for a hypersonic wind tunnel for fluid dynamics and combustion physics and chemistry research and development must provide a 10 kg/s flow of pure air for more than 1 s at a specific enthalpy of more than 3000 kJ/kg. The nominal operating pressure and temperature condition for the air source is 2000 MPa and 900 K. A radial array of variable radial support intensifiers connected to an axial manifold provides an arbitrarily large total high-pressure volume. This configuration also provides solutions to cross-bore stress concentrations and the decrease in material strength with temperature. Keywords: hypersonic, high pressure, air, wind tunnel, ground testing.

  6. Aerodynamics of the Large-Volume, Flow-Through Detector System. Final report

    SciTech Connect

    Reed, H.; Saric, W.; Laananen, D.; Martinez, C.; Carrillo, R.; Myers, J.; Clevenger, D.

    1996-03-01

    The Large-Volume Flow-Through Detector System (LVFTDS) was designed to monitor alpha radiation from Pu, U, and Am in mixed-waste incinerator offgases; however, it can be adapted to other important monitoring uses that span a number of potential markets, including site remediation, indoor air quality, radon testing, and mine shaft monitoring. Goal of this effort was to provide mechanical design information for installation of LVFTDS in an incinerator, with emphasis on ability to withstand the high temperatures and high flow rates expected. The work was successfully carried out in three stages: calculation of pressure drop through the system, materials testing to determine surrogate materials for wind-tunnel testing, and wind-tunnel testing of an actual configuration.

  7. Isolation of organic acids from large volumes of water by adsorption chromatography

    USGS Publications Warehouse

    Aiken, George R.

    1984-01-01

    The concentration of dissolved organic carbon in most natural waters ranges from 1 to 20 milligrams of carbon per liter, of which approximately 75 percent is organic acids. These acids can be chromatographically fractionated into hydrophobic organic acids, such as humic substances, and hydrophilic organic acids. To effectively study any of these organic acids, they must be isolated from other organic and inorganic species, and concentrated. Usually, large volumes of water must be processed to obtain sufficient quantities of material, and adsorption chromatography on synthetic, macroporous resins has proven to be a particularly effective method for this purpose. The use of the nonionic Amberlite XAD-8 and Amberlite XAD-4 resins and the anion exchange resin Duolite A-7 for isolating and concentrating organic acids from water is presented.

  8. The position response of a large-volume segmented germanium detector

    NASA Astrophysics Data System (ADS)

    Descovich, M.; Nolan, P. J.; Boston, A. J.; Dobson, J.; Gros, S.; Cresswell, J. R.; Simpson, J.; Lazarus, I.; Regan, P. H.; Valiente-Dobon, J. J.; Sellin, P.; Pearson, C. J.

    2005-11-01

    The position response of a large-volume segmented coaxial germanium detector is reported. The detector has 24-fold segmentation on its outer contact. The output from each contact was sampled with fast digital signal processing electronics in order to determine the position of the γ-ray interaction from the signal pulse shape. The interaction position was reconstructed in a polar coordinate system by combining the radial information, contained in the rise-time of the pulse leading edge, with the azimuthal information, obtained from the magnitude of the transient charge signals induced on the neighbouring segments. With this method, a position resolution of 3-7 mm is achieved in both the radial and the azimuthal directions.

  9. Measurement of the velocity of neutrinos from the CNGS beam with the large volume detector.

    PubMed

    Agafonova, N Yu; Aglietta, M; Antonioli, P; Ashikhmin, V V; Bari, G; Bertoni, R; Bressan, E; Bruno, G; Dadykin, V L; Fulgione, W; Galeotti, P; Garbini, M; Ghia, P L; Giusti, P; Kemp, E; Mal'gin, A S; Miguez, B; Molinario, A; Persiani, R; Pless, I A; Ryasny, V G; Ryazhskaya, O G; Saavedra, O; Sartorelli, G; Shakyrianova, I R; Selvi, M; Trinchero, G C; Vigorito, C; Yakushev, V F; Zichichi, A; Razeto, A

    2012-08-17

    We report the measurement of the time of flight of ∼17 GeV ν_μ on the CNGS baseline (732 km) with the Large Volume Detector (LVD) at the Gran Sasso Laboratory. The CERN-SPS accelerator was operated from May 10th to May 24th 2012 with a tightly bunched beam structure to allow the velocity of neutrinos to be accurately measured on an event-by-event basis. LVD detected 48 neutrino events associated with the beam, with high absolute time accuracy. These events allow us to establish the following limit on the difference between the neutrino speed and the light velocity: −3.8 × 10⁻⁶ < (v_ν − c)/c < 3.1 × 10⁻⁶ (at 99% C.L.). This value is an order of magnitude lower than previous direct measurements. PMID:23006352
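The quoted bound can be connected to a raw arrival-time offset with a few lines of arithmetic. This sketch is purely illustrative (the 10 ns offset used in the usage check is hypothetical, not an LVD measurement); a positive offset means the neutrinos arrive later than light would:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def relative_speed_deviation(baseline_m, delta_t_s):
    """(v - c)/c for a particle that covers baseline_m in the light travel
    time plus delta_t_s (positive delta_t_s = late arrival = subluminal)."""
    tof_light = baseline_m / C          # light travel time over the baseline
    v = baseline_m / (tof_light + delta_t_s)
    return (v - C) / C
```

Over the 732 km CNGS baseline the light travel time is about 2.44 ms, so a timing offset of a few nanoseconds already corresponds to |(v − c)/c| of order 10⁻⁶, the scale of the limit quoted above.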

  10. Semi-analytic simulations of galactic winds: volume filling factor, ejection of metals and parameter study

    NASA Astrophysics Data System (ADS)

    Bertone, Serena; Stoehr, Felix; White, Simon D. M.

    2005-06-01

    We present a semi-analytic treatment of galactic winds within high-resolution, large-scale cosmological N-body simulations of a Λ cold dark matter (ΛCDM) universe. The evolution of winds is investigated by following the expansion of supernova-driven superbubbles around the several hundred thousand galaxies that form in an approximately spherical region of space with diameter 52 h⁻¹ Mpc and mean density close to the mean density of the universe. We focus our attention on the impact of winds on the diffuse intergalactic medium. Initial conditions for mass loss at the base of winds are taken from Shu, Mo & Mao. Results are presented for the volume filling factor and the mass fraction of the intergalactic medium (IGM) affected by winds, and their dependence on the model parameters is carefully investigated. The mass-loading efficiency of bubbles is a key factor in determining the evolution of winds and their global impact on the IGM: the higher the mass loading, the later the IGM is enriched with metals. Galaxies with 10⁹ M⊙ < M* < 10¹⁰ M⊙ are responsible for most of the metals ejected into the IGM at z = 3, while galaxies with M* < 10⁹ M⊙ give a non-negligible contribution only at higher redshifts, when larger galaxies have not yet assembled. We find a higher mean IGM metallicity than Lyα forest observations suggest, and we argue that the discrepancy may be explained by the high temperatures of a large fraction of the metals in winds, which may not leave detectable imprints in absorption in the Lyα forest.

  11. Anatomic Landmarks Versus Fiducials for Volume-Staged Gamma Knife Radiosurgery for Large Arteriovenous Malformations

    SciTech Connect

    Petti, Paula L., E-mail: ppetti@radonc.ucsf.edu; Coleman, Joy; McDermott, Michael; Smith, Vernon; Larson, David A.

    2007-04-01

    Purpose: The purpose of this investigation was to compare the accuracy of using internal anatomic landmarks instead of surgically implanted fiducials in the image registration process for volume-staged gamma knife (GK) radiosurgery for large arteriovenous malformations. Methods and Materials: We studied 9 patients who had undergone 10 staged GK sessions for large arteriovenous malformations. Each patient had fiducials surgically implanted in the outer table of the skull at the first GK treatment. These markers were imaged on orthogonal radiographs, which were scanned into the GK planning system. For the same patients, 8-10 pairs of internal landmarks were retrospectively identified on the three-dimensional time-of-flight magnetic resonance imaging studies that had been obtained for treatment. The coordinate transformation between the stereotactic frame space for subsequent treatment sessions was then determined by point matching, using four surgically embedded fiducials and then using four pairs of internal anatomic landmarks. In both cases, the transformation was ascertained by minimizing the chi-square difference between the actual and the transformed coordinates. Both transformations were then evaluated using the remaining four to six pairs of internal landmarks as the test points. Results: Averaged over all treatment sessions, the root mean square discrepancy between the coordinates of the transformed and actual test points was 1.2 ± 0.2 mm using internal landmarks and 1.7 ± 0.4 mm using the surgically implanted fiducials. Conclusion: The results of this study have shown that using internal landmarks to determine the coordinate transformation between subsequent magnetic resonance imaging scans for volume-staged GK arteriovenous malformation treatment sessions is as accurate as using surgically implanted fiducials and avoids an invasive procedure.
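The point-matching step (finding the rigid transformation that minimizes the squared discrepancy between matched coordinate pairs) can be sketched in two dimensions with stdlib Python; the actual registration is three-dimensional, and the point values below are invented for illustration:

```python
import math

def fit_rigid_2d(src, dst):
    """Least-squares rigid transform (rotation + translation) mapping the
    2-D points `src` onto `dst`; returns (theta, tx, ty)."""
    n = len(src)
    # centroids of the two point sets
    cxs = sum(p[0] for p in src) / n; cys = sum(p[1] for p in src) / n
    cxd = sum(p[0] for p in dst) / n; cyd = sum(p[1] for p in dst) / n
    s = d = 0.0
    for (x, y), (u, v) in zip(src, dst):
        x -= cxs; y -= cys; u -= cxd; v -= cyd
        s += x * u + y * v   # correlation terms give the optimal rotation
        d += x * v - y * u
    theta = math.atan2(d, s)
    c, si = math.cos(theta), math.sin(theta)
    tx = cxd - (c * cxs - si * cys)  # translation maps rotated src centroid
    ty = cyd - (si * cxs + c * cys)  # onto the dst centroid
    return theta, tx, ty

def rms_error(src, dst, theta, tx, ty):
    """Root-mean-square residual of the fitted transform over point pairs."""
    c, s = math.cos(theta), math.sin(theta)
    sq = 0.0
    for (x, y), (u, v) in zip(src, dst):
        px, py = c * x - s * y + tx, s * x + c * y + ty
        sq += (px - u) ** 2 + (py - v) ** 2
    return math.sqrt(sq / len(src))
```

Fitting on four landmark pairs and evaluating `rms_error` on the held-out pairs mirrors the study design above, where the held-out RMS plays the role of the reported 1.2 mm figure.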

  12. Building high-performance system for processing a daily large volume of Chinese satellites imagery

    NASA Astrophysics Data System (ADS)

    Deng, Huawu; Huang, Shicun; Wang, Qi; Pan, Zhiqiang; Xin, Yubin

    2014-10-01

    The number of Earth observation satellites from China has increased dramatically in recent years, and those satellites acquire a large volume of imagery daily. As the main portal for image processing and distribution from those Chinese satellites, the China Centre for Resources Satellite Data and Application (CRESDA) has been working with PCI Geomatics during the last three years to solve two issues in this regard: processing the large volume of data (about 1,500 scenes or 1 TB per day) in a timely manner and generating geometrically accurate orthorectified products. After three years of research and development, a high-performance system has been built and successfully delivered. The system has a service-oriented architecture and can be deployed to a cluster of computers configured with high-end computing power. The high performance is gained through, first, parallelizing the image processing algorithms using high-performance graphics processing unit (GPU) cards and multiple cores from multiple CPUs, and, second, distributing processing tasks to a cluster of computing nodes. While achieving up to thirty (and even more) times faster performance compared with traditional practice, a particular methodology was developed to improve the geometric accuracy of images acquired from Chinese satellites (including HJ-1 A/B, ZY-1-02C, ZY-3, GF-1, etc.). The methodology consists of fully automatic collection of dense ground control points (GCPs) from various resources and application of those points to improve the photogrammetric model of the images. The delivered system is up and running at CRESDA for pre-operational production and has been generating a good return on investment by eliminating a great amount of manual labor and increasing daily data throughput more than tenfold with fewer operators. Future work includes development of more performance-optimized algorithms, robust image matching methods and application

  13. A finite volume solver for three dimensional debris flow simulations based on a single calibration parameter

    NASA Astrophysics Data System (ADS)

    von Boetticher, Albrecht; Turowski, Jens M.; McArdell, Brian; Rickenmann, Dieter

    2016-04-01

    Debris flows are frequent natural hazards that cause massive damage. A wide range of debris flow models try to cover the complex flow behavior that arises from the inhomogeneous material mixture of water with clay, silt, sand, and gravel. The energy dissipation between moving grains depends on grain collisions and tangential friction, and the viscosity of the interstitial fine-material suspension depends on the shear gradient. Thus a rheology description needs to be sensitive to the local pressure and shear rate, making the three-dimensional flow structure a key issue for flows in complex terrain. Furthermore, the momentum exchange between the granular and fluid phases should account for the presence of larger particles. We model the fine-material suspension with a Herschel-Bulkley rheology law, and represent the gravel with the Coulomb-viscoplastic rheology of Domnik & Pudasaini (Domnik et al. 2013). Both composites are described by two phases that can mix; a third phase representing the air is kept separate to capture the free surface. The fluid dynamics are solved in three dimensions using the finite volume open-source code OpenFOAM. Computational costs are kept reasonable by using the Volume of Fluid method to solve only one phase-averaged system of Navier-Stokes equations. The Herschel-Bulkley parameters are modeled as a function of water content, volumetric solid concentration of the mixture, clay content and its mineral composition (Coussot et al. 1989, Yu et al. 2013). The gravel phase properties needed for the Coulomb-viscoplastic rheology are defined by the angle of repose of the gravel. In addition to this basic setup, larger grains and the corresponding grain collisions can be introduced by a coupled Lagrangian particle simulation. Based on the local Savage number, a diffusive term in the gravel phase can activate phase separation. The resulting model can reproduce the sensitivity of the debris flow to water content and channel bed roughness, as
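The Herschel-Bulkley law for the fine-material suspension is τ = τ_y + K·γ̇ⁿ. The sketch below, with invented parameter values, shows the stress law and a regularized apparent viscosity of the kind a finite-volume solver would feed into the momentum equation; the regularization floor is a common practical device, not necessarily the authors' choice:

```python
def herschel_bulkley_stress(gamma_dot, tau_y, K, n):
    """Shear stress of a Herschel-Bulkley fluid: tau = tau_y + K * gamma_dot**n
    (tau_y: yield stress, K: consistency, n: flow index; n < 1 = shear-thinning)."""
    return tau_y + K * gamma_dot ** n

def apparent_viscosity(gamma_dot, tau_y, K, n, gamma_min=1e-6):
    """Apparent viscosity tau / gamma_dot, with a floor on the shear rate to
    avoid the singularity at rest (a common regularization in CFD solvers)."""
    g = max(gamma_dot, gamma_min)
    return herschel_bulkley_stress(g, tau_y, K, n) / g
```

With tau_y = 0 and n = 1 the law collapses to a Newtonian fluid of viscosity K, a useful sanity check when wiring such a model into a solver.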

  14. Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Givi, P.; Frankel, S. H.; Adumitroaie, V.; Sabini, G.; Madnia, C. K.

    1993-01-01

    The primary objective of this research is to extend current capabilities of Large Eddy Simulations (LES) and Direct Numerical Simulations (DNS) for the computational analysis of high speed reacting flows. Our efforts in the first two years of this research were concentrated on a priori investigations of single-point Probability Density Function (PDF) methods for providing subgrid closures in reacting turbulent flows. In the efforts initiated in the third year, our primary focus has been on performing actual LES by means of PDF methods. The approach is based on assumed PDF methods, and we have performed extensive analysis of turbulent reacting flows by means of LES. This includes simulations of both three-dimensional (3D) isotropic compressible flows and two-dimensional reacting planar mixing layers. In addition to these LES analyses, some work is in progress to assess the extent of validity of our assumed PDF methods. This assessment is done by making detailed comparisons with recent laboratory data in predicting the rate of reactant conversion in parallel reacting shear flows. This report provides a summary of our achievements for the first six months of the third year of this program.

  15. Determination of 235U enrichment with a large volume CZT detector

    NASA Astrophysics Data System (ADS)

    Mortreau, Patricia; Berndt, Reinhard

    2006-01-01

    Room-temperature CdZnTe and CdTe detectors have been routinely used in the field of Nuclear Safeguards for many years [Ivanov et al., Development of large volume hemispheric CdZnTe detectors for use in safeguards applications, ESARDA European Safeguards Research and Development Association, Le Corum, Montpellier, France, 1997, p. 447; Czock and Arlt, Nucl. Instr. and Meth. A 458 (2001) 175; Arlt et al., Nucl. Instr. and Meth. A 428 (1999) 127; Lebrun et al., Nucl. Instr. and Meth. A 448 (2000) 598; Aparo et al., Development and implementation of compact gamma spectrometers for spent fuel measurements, in: Proceedings, 21st Annual ESARDA, 1999; Arlt and Rudsquist, Nucl. Instr. and Meth. A 380 (1996) 455; Khusainov et al., High resolution pin type CdTe detectors for the verification of nuclear material, in: Proceedings, 17th Annual ESARDA European Safeguards Research and Development Association, 1995; Mortreau and Berndt, Nucl. Instr. and Meth. A 458 (2001) 183; Ruhter et al., UCRL-JC-130548, 1998; Abbas et al., Nucl. Instr. and Meth. A 405 (1998) 153; Ruhter and Gunnink, Nucl. Instr. and Meth. A 353 (1994) 716]. Due to their performance and small size, they are ideal detectors for hand-held applications such as verification of spent and fresh fuel, U/Pu attribute tests, and determination of 235U enrichment. The hemispherical CdZnTe type produced by RITEC (Riga, Latvia) [Ivanov et al., 1997] is the most widely used detector in the field of inspection. With volumes ranging from 2 to 1500 mm³, their spectral performance is such that the use of electronic processing to correct the pulse shape is not required. This paper reports on work carried out with a large volume (15 × 15 × 7.5 mm³), high-efficiency hemispherical CdZnTe detector for the determination of 235U enrichment. The measurements were made with certified uranium samples whose enrichments, ranging from 0.31% to 92.42%, cover the whole range of in-field measurement conditions. The interposed

  16. Designing an elastomeric binder for large-volume-change electrodes for lithium-ion batteries

    NASA Astrophysics Data System (ADS)

    Chen, Zonghai

    It is of commercial importance to develop high capacity negative and positive electrode materials for lithium-ion batteries to meet the energy requirements of portable electronic devices. Excellent capacity retention has been achieved for thin sputtered films of amorphous Si, Ge and Si-Sn alloys even when cycled to 2000 mAh/g and above, which suggests that amorphous alloys are capable of extended cycling. However, PVDF-based composite electrodes incorporating a-Si0.64Sn0.36/Ag powder (10 wt% silver coating, ~10 µm) still suffer from severe capacity fading because of the huge volumetric changes of a-Si0.64Sn0.36/Ag during charge/discharge cycling. It is the objective of this thesis to understand the problem scientifically and to propose practical solutions. Mechanical studies of binders for lithium battery electrodes have never been reported in the literature. The mechanical properties of commonly used binders, such as poly(vinylidene fluoride) (PVDF), have not been challenged because commercially used active materials, such as LiCoO2 and graphite, have small volumetric changes (<10%) during charge/discharge cycling. However, the recently proposed metallic alloys have huge volumetric changes (up to 250%) during cycling. In this case, the mechanical properties of the binder become critical. A tether model is proposed to qualitatively understand the capacity fading of high-volume-change electrodes, and to predict the properties of a good binder system. A crosslinking/coupling route was used to modify the binder system according to the requirements of the tether model. A poly(vinylidene fluoride-tetrafluoroethylene-propylene)-based elastomeric binder system was designed that successfully improved the capacity retention of a-Si0.64Sn0.36/Ag composite electrodes. In this thesis, it has also proven nontrivial to maximize the capacity retention of large-volume-change electrodes even when a fixed elastomeric binder system was used. The parameters that

  17. Large liquid rocket engine transient performance simulation system

    NASA Technical Reports Server (NTRS)

    Mason, J. R.; Southwick, R. D.

    1991-01-01

    A simulation system, ROCETS, was designed and developed to allow cost-effective computer predictions of liquid rocket engine transient performance. The system allows a user to generate a simulation of any rocket engine configuration using component modules stored in a library through high-level input commands. The system library currently contains 24 component modules, 57 sub-modules and maps, and 33 system routines and utilities. FORTRAN models from other sources can be operated in the system upon inclusion of interface information on comment cards. Operation of the simulation is simplified for the user by run, execution, and output processors. The simulation system makes available steady-state trim balance, transient operation, and linear partial generation. The system utilizes a modern equation solver for efficient operation of the simulations. Transient integration methods include integral and differential forms for the trapezoidal, first order Gear, and second order Gear corrector equations. A detailed technology test bed engine (TTBE) model was generated to be used as the acceptance test of the simulation system. The general level of model detail was that reflected in the Space Shuttle Main Engine DTM. The model successfully obtained steady-state balance in main stage operation and simulated throttle transients, including engine starts and shutdown. A NASA FORTRAN control model was obtained, ROCETS interface installed in comment cards, and operated with the TTBE model in closed-loop transient mode.
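Among the corrector equations listed, the implicit trapezoidal rule is the simplest to illustrate. This is a generic sketch with a predictor step and fixed-point corrector sweeps, not ROCETS code:

```python
def trapezoidal_step(f, y, t, dt, corrector_iterations=20):
    """One implicit trapezoidal step for dy/dt = f(t, y):
        y_next = y + dt/2 * (f(t, y) + f(t + dt, y_next)),
    solved by an explicit Euler predictor followed by fixed-point
    corrector iterations (converges when dt * |df/dy| / 2 < 1)."""
    fy = f(t, y)
    y_next = y + dt * fy  # predictor: explicit Euler
    for _ in range(corrector_iterations):
        y_next = y + 0.5 * dt * (fy + f(t + dt, y_next))
    return y_next
```

On the test problem dy/dt = −y the scheme is second-order accurate and A-stable, properties that matter for stiff transients such as engine starts and shutdowns.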

  18. Large Volume Coagulation Utilizing Multiple Cavitation Clouds Generated by Array Transducer Driven by 32 Channel Drive Circuits

    NASA Astrophysics Data System (ADS)

    Nakamura, Kotaro; Asai, Ayumu; Sasaki, Hiroshi; Yoshizawa, Shin; Umemura, Shin-ichiro

    2013-07-01

    High-intensity focused ultrasound (HIFU) treatment is a noninvasive treatment in which focused ultrasound is generated outside the body and coagulates a diseased tissue. The advantage of this method is minimal physical and mental stress to the patient; the disadvantage is the long treatment time caused by the small volume coagulated by a single exposure. To improve the efficiency and shorten the treatment time, we are focusing attention on utilizing cavitation bubbles. The generated microbubbles can convert the acoustic energy into heat with high efficiency. In this study, using the class D amplifiers we have developed to drive the array transducer, we demonstrate a new method to coagulate a large volume with a single HIFU exposure by generating cavitation bubbles distributed over a large volume and vibrating all of them. As a result, the volume coagulated by the proposed method was 1.71 times as large as that of the conventional method.

  19. Large-volume hot spots in gold spiky nanoparticle dimers for high-performance surface-enhanced spectroscopy.

    PubMed

    Li, Anran; Li, Shuzhou

    2014-11-01

    Hot spots with a large electric field enhancement usually come in small volumes, limiting their applications in surface-enhanced spectroscopy. Using a finite-difference time-domain method, we demonstrate that spiky nanoparticle dimers (SNPD) can provide hot spots with both large electric field enhancement and large volumes because of the pronounced lightning rod effect of spiky nanoparticles. We find that the strongest electric fields lie in the gap region when the SNPD is in a tip-to-tip (T-T) configuration. The electric field enhancement (|E|²/|E₀|²) in a T-T SNPD with a 2 nm gap can be as large as 1.21 × 10⁶. The hot spot volume in the T-T SNPD is almost 7 times and 5 times larger than those in the spike dimer and sphere dimer with the same gap size of 2 nm, respectively. The hot spot volume in SNPD can be further improved by manipulating the arrangements of spiky nanoparticles: crossed T-T SNPD provides the largest hot spot volume, 1.5 times that of T-T SNPD. Our results provide a strategy to obtain hot spots with both intense electric fields and large volume by adding a bulky core at one end of the spindly building block in dimers. PMID:25233050

  20. Development of a Mathematical Dynamic Simulation Model for the New Motion Simulator Used for the Large Space Simulator at ESTEC

    NASA Astrophysics Data System (ADS)

    Messing, Rene

    2012-07-01

    To simulate environmental space conditions for spacecraft qualification testing, the European Space Agency ESA uses a Large Space Simulator (LSS) in its Test Centre in Noordwijk, the Netherlands. In the LSS a motion system is used to orient a spacecraft of up to five tons with respect to an artificial solar beam. The existing motion system will be replaced by a new one. The new motion system shall be able to orient a spacecraft, defined by its elevation and azimuth angles, and provide an eclipse simulation (continuous spinning) around the spacecraft rotation axis. The development of the new motion system has been contracted to APCO Technologies in Switzerland. In addition to the design development done by the contractor, the Engineering section of the ESTEC Test Centre is in parallel developing a mathematical model simulating the dynamic behaviour of the system. During the preliminary design, the model shall serve to verify the selection of the drive units and to define the specimen trajectory speed and acceleration profiles. In the further design phase it shall verify the dynamic response, at the spacecraft mounting interface of the unloaded system, against the requirements. In the future it shall predict the dynamic responses of the implemented system for different spacecraft mounted and operated on the system. The paper gives a brief description of the investment history and design developments of the new motion system for the LSS, and then briefly describes the different development steps which are foreseen and which have already been implemented in the mathematical simulation model.

  1. Secure Large-Scale Airport Simulations Using Distributed Computational Resources

    NASA Technical Reports Server (NTRS)

    McDermott, William J.; Maluf, David A.; Gawdiak, Yuri; Tran, Peter; Clancy, Dan (Technical Monitor)

    2001-01-01

To fully conduct research that will support the far-term concepts, technologies, and methods required to improve the safety of air transportation, a simulation environment of the requisite degree of fidelity must first be in place. The Virtual National Airspace Simulation (VNAS) will provide the underlying infrastructure necessary for such a simulation system. Aerospace-specific knowledge management services, such as intelligent data-integration middleware, will support the management of information associated with this complex and critically important operational environment. This simulation environment, in conjunction with a distributed network of supercomputers and high-speed network connections to aircraft and to Federal Aviation Administration (FAA), airline, and other data sources, will provide the capability to continuously monitor and measure operational performance against expected performance. The VNAS will also provide the tools to use this performance baseline to obtain a perspective of what is happening today and of the potential impact of proposed changes before they are introduced into the system.

  2. Efficient Coalescent Simulation and Genealogical Analysis for Large Sample Sizes

    PubMed Central

    Kelleher, Jerome; Etheridge, Alison M; McVean, Gilean

    2016-01-01

A central challenge in the analysis of genetic variation is to provide realistic genome simulation across millions of samples. Present-day coalescent simulations do not scale well, or use approximations that fail to capture important long-range linkage properties. Analysing the results of simulations also presents a substantial challenge, as current methods to store genealogies consume a great deal of space, are slow to parse and do not take advantage of shared structure in correlated trees. We solve these problems by introducing sparse trees and coalescence records as the key units of genealogical analysis. Using these tools, exact simulation of the coalescent with recombination for chromosome-sized regions over hundreds of thousands of samples is possible, and substantially faster than present-day approximate methods. We can also analyse the results orders of magnitude more quickly than with existing methods. PMID:27145223
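A minimal Python sketch can illustrate the idea of coalescence records as the unit of genealogical analysis; this toy simulates a single non-recombining locus under the standard Kingman coalescent and is not the authors' implementation (the function name and record layout are invented for illustration):

```python
import random

def kingman_coalescent(n, seed=None):
    """Simulate a single-locus Kingman coalescent for n sample lineages.

    Each merger is stored as a coalescence record
    (child1, child2, parent, time), loosely echoing the record-based
    genealogy representation described in the abstract.
    """
    rng = random.Random(seed)
    lineages = list(range(n))  # ids of currently active lineages
    next_node = n              # internal nodes are numbered n, n+1, ...
    t = 0.0
    records = []
    while len(lineages) > 1:
        k = len(lineages)
        # waiting time to the next merger: exponential with rate k*(k-1)/2
        t += rng.expovariate(k * (k - 1) / 2)
        c1, c2 = rng.sample(lineages, 2)
        lineages.remove(c1)
        lineages.remove(c2)
        lineages.append(next_node)
        records.append((c1, c2, next_node, t))
        next_node += 1
    return records

records = kingman_coalescent(10, seed=1)
```

For n samples this always produces exactly n - 1 records, and replaying them in order reconstructs the full tree, which is what makes a record-based representation compact and fast to parse.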

  3. All-speed Roe scheme for the large eddy simulation of homogeneous decaying turbulence

    NASA Astrophysics Data System (ADS)

    Li, Xue-song; Li, Xin-liang

    2016-01-01

As a type of shock-capturing scheme, the traditional Roe scheme fails in large eddy simulation (LES) because it cannot reproduce important turbulent characteristics, such as the famous k^(-5/3) spectral law, as a consequence of its large numerical dissipation. In this work, the Roe scheme is divided into five parts, namely, ξ, δUp, δpp, δUu, and δpu, which denote basic upwind dissipation, pressure-difference-driven modification of interface fluxes, pressure-difference-driven modification of pressure, velocity-difference-driven modification of interface fluxes, and velocity-difference-driven modification of pressure, respectively. Then, the role of each part in the LES of homogeneous decaying turbulence with a low Mach number is investigated. Results show that the parts δUu, δpp, and δUp have little effect on the LES, although they are integral to computational stability, especially δUp. The large numerical dissipation is due to ξ and δpu, each of which features a larger dissipation than the sub-grid-scale model. On the basis of these findings, an improved all-speed Roe scheme for LES is proposed. This scheme provides satisfactory LES results even on coarse grid resolutions with the commonly adopted second-order reconstructions for the finite volume method.
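The role of the upwind dissipation can be illustrated with a hedged one-dimensional sketch: a Roe-type flux for the scalar Burgers equation, with a factor eps standing in for how much of the basic upwind dissipation (the ξ part) is retained. This is only a structural analogue; the paper's decomposition applies to the compressible Euler fluxes:

```python
def roe_flux_burgers(uL, uR, eps=0.05):
    """Roe-type interface flux for Burgers' equation f(u) = u**2 / 2.

    flux = central average - eps * |a| * jump / 2, where a is the
    Roe-averaged wave speed; eps = 1 recovers the standard Roe flux,
    while a small eps retains only a fraction of the upwind dissipation.
    """
    fL, fR = 0.5 * uL ** 2, 0.5 * uR ** 2
    a = 0.5 * (uL + uR)  # Roe average: f(uR) - f(uL) = a * (uR - uL)
    return 0.5 * (fL + fR) - 0.5 * eps * abs(a) * (uR - uL)
```

With eps = 1 this is the classic upwind-dissipative Roe flux; reducing eps lowers the numerical dissipation that would otherwise swamp the sub-grid-scale model in an LES.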

  4. AUTOMATED PARAMETRIC EXECUTION AND DOCUMENTATION FOR LARGE-SCALE SIMULATIONS

    SciTech Connect

    R. L. KELSEY; ET AL

    2001-03-01

    A language has been created to facilitate the automatic execution of simulations for purposes of enabling parametric study and test and evaluation. Its function is similar in nature to a job-control language, but more capability is provided in that the language extends the notion of literate programming to job control. Interwoven markup tags self document and define the job control process. The language works in tandem with another language used to describe physical systems. Both languages are implemented in the Extensible Markup Language (XML). A user describes a physical system for simulation and then creates a set of instructions for automatic execution of the simulation. Support routines merge the instructions with the physical-system description, execute the simulation the specified number of times, gather the output data, and document the process and output for the user. The language enables the guided exploration of a parameter space and can be used for simulations that must determine optimal solutions to particular problems. It is generalized enough that it can be used with any simulation input files that are described using XML. XML is shown to be useful as a description language, an interchange language, and a self-documented language.
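The merge-and-execute workflow described above can be sketched with Python's standard xml.etree module. The tag names (system, control, sweep, param) are hypothetical, since the abstract does not specify the actual vocabulary, and the simulate callable stands in for the real simulation executable:

```python
import xml.etree.ElementTree as ET

SYSTEM_XML = '<system><param name="density">1.0</param></system>'
CONTROL_XML = '<control><sweep param="density" values="0.5 1.0 2.0"/></control>'

def run_sweep(system_xml, control_xml, simulate):
    """Merge sweep instructions into the physical-system description and
    execute the simulation once per parameter value, collecting outputs."""
    system = ET.fromstring(system_xml)
    control = ET.fromstring(control_xml)
    sweep = control.find("sweep")
    name = sweep.get("param")
    results = []
    for value in sweep.get("values").split():
        # substitute the swept value into the system description
        for p in system.iter("param"):
            if p.get("name") == name:
                p.text = value
        merged = ET.tostring(system, encoding="unicode")
        results.append((value, simulate(merged)))
    return results

# stand-in "simulation": just report the merged document's length
results = run_sweep(SYSTEM_XML, CONTROL_XML, simulate=len)
```

Because both inputs are XML, the same support routine can also log each merged document alongside its output, giving the self-documenting audit trail the abstract describes.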

  5. Plasma Cathodes as Electron Sources for Large Volume, High-Pressure Glow Discharges

    NASA Astrophysics Data System (ADS)

    Stark, Robert H.; Schoenbach, Karl H.

    1998-10-01

A method to suppress the glow-to-arc transition in high pressure glow discharges is the use of a plasma cathode consisting of microhollow cathode discharges (MHCD) [1]. In our experiment a microhollow cathode discharge with a 100 micrometer diameter cathode hole and an identical anode hole was used to provide electrons for a large volume main discharge, sustained between the hollow anode of the MHCD and a third electrode. Current and voltage characteristics, and the visual appearance of the main discharge and MHCD, were studied in argon and air using the micro plasma cathode as an electron source. We are able to obtain stable dc operation in argon up to 1 atm and in air up to 600 torr. The main discharge is ignited when the current in the plasma cathode (MHCD), which is on the order of mA, reaches a threshold value. This threshold current increases with reduced applied voltage across the main gap. Above this transition the current in the main discharge is on the same order as the MHCD current and can be controlled by the MHCD current. Experiments with two MHCDs in parallel have indicated that large area high pressure stable glow discharges can be generated by using arrays of MHCDs as electron sources. [1] K. H. Schoenbach et al, Plasma Sources Sci. Techn. 6, 468 (1997). This work was solely funded by the Air Force Office of Scientific Research (AFOSR) in cooperation with the DDR&E Air Plasma Ramparts MURI program.

  6. High-rate Plastic Deformation of Nanocrystalline Tantalum to Large Strains: Molecular Dynamics Simulation

    SciTech Connect

    Rudd, R E

    2009-02-05

    Recent advances in the ability to generate extremes of pressure and temperature in dynamic experiments and to probe the response of materials has motivated the need for special materials optimized for those conditions as well as a need for a much deeper understanding of the behavior of materials subjected to high pressure and/or temperature. Of particular importance is the understanding of rate effects at the extremely high rates encountered in those experiments, especially with the next generation of laser drives such as at the National Ignition Facility. Here we use large-scale molecular dynamics (MD) simulations of the high-rate deformation of nanocrystalline tantalum to investigate the processes associated with plastic deformation for strains up to 100%. We use initial atomic configurations that were produced through simulations of solidification in the work of Streitz et al [Phys. Rev. Lett. 96, (2006) 225701]. These 3D polycrystalline systems have typical grain sizes of 10-20 nm. We also study a rapidly quenched liquid (amorphous solid) tantalum. We apply a constant volume (isochoric), constant temperature (isothermal) shear deformation over a range of strain rates, and compute the resulting stress-strain curves to large strains for both uniaxial and biaxial compression. We study the rate dependence and identify plastic deformation mechanisms. The identification of the mechanisms is facilitated through a novel technique that computes the local grain orientation, returning it as a quaternion for each atom. This analysis technique is robust and fast, and has been used to compute the orientations on the fly during our parallel MD simulations on supercomputers. We find both dislocation and twinning processes are important, and they interact in the weak strain hardening in these extremely fine-grained microstructures.

7. Large eddy simulations as a parameterization tool for canopy-structure × VOC-flux interactions

    NASA Astrophysics Data System (ADS)

    Kenny, William; Bohrer, Gil; Chatziefstratiou, Efthalia

    2015-04-01

We have been working to develop a new post-processing model - High resolution VOC Atmospheric Chemistry in Canopies (Hi-VACC) - which resolves the dispersion and chemistry of reacting chemical species, given their emission rates from the vegetation and soil, driven by high resolution meteorological forcing and wind fields from various high resolution regional atmospheric and large-eddy simulations. Hi-VACC reads in fields of pressure, temperature, humidity, air density, short-wave radiation, wind (3-D u, v, and w components) and sub-grid-scale turbulence simulated by a high resolution atmospheric model. This meteorological forcing data is provided as snapshots of 3-D fields. We have tested it using a number of RAMS-based Forest Large Eddy Simulation (RAFLES) runs. This can then be used to parameterize the effects of canopy structure on VOC fluxes. RAFLES represents both drag and volume restriction by the canopy over an explicit 3-D domain. We have used these features to show the effects of canopy structure on fluxes of momentum, heat, and water in heterogeneous environments at the tree-crown scale by modifying the canopy structure, representing it as either homogeneous or realistically heterogeneous. We combine this with Hi-VACC's ability to model the dispersion and chemistry of reactive VOCs to parameterize the fluxes of these reactive species with respect to canopy structure. The high resolution capabilities of Hi-VACC coupled with RAFLES allow for sensitivity analyses to determine important structural considerations for sub-grid-scale parameterization of these phenomena in larger models.

  8. The oligocene Lund Tuff, Great Basin, USA: A very large volume monotonous intermediate

    USGS Publications Warehouse

    Maughan, L.L.; Christiansen, E.H.; Best, M.G.; Gromme, C.S.; Deino, A.L.; Tingey, D.G.

    2002-01-01

Unusual monotonous intermediate ignimbrites consist of phenocryst-rich dacite that occurs as very large volume (> 1000 km³) deposits that lack systematic compositional zonation, comagmatic rhyolite precursors, and underlying plinian beds. They are distinct from countless, usually smaller volume, zoned rhyolite-dacite-andesite deposits that are conventionally believed to have erupted from magma chambers in which thermal and compositional gradients were established because of sidewall crystallization and associated convective fractionation. Despite their great volume, or because of it, monotonous intermediates have received little attention. Documentation of the stratigraphy, composition, and geologic setting of the Lund Tuff - one of four monotonous intermediate tuffs in the middle-Tertiary Great Basin ignimbrite province - provides insight into its unusual origin and, by implication, the origin of other similar monotonous intermediates. The Lund Tuff is a single cooling unit with normal magnetic polarity whose volume likely exceeded 3000 km³. It was emplaced 29.02 ± 0.04 Ma in and around the coeval White Rock caldera, which has an unextended north-south diameter of about 50 km. The tuff is monotonous in that its phenocryst assemblage is virtually uniform throughout the deposit: plagioclase > quartz ≈ hornblende > biotite > Fe-Ti oxides ≈ sanidine > titanite, zircon, and apatite. However, ratios of phenocrysts vary by as much as an order of magnitude in a manner consistent with progressive crystallization in the pre-eruption chamber. A significant range in whole-rock chemical composition (e.g., 63-71 wt% SiO₂) is poorly correlated with phenocryst abundance. These compositional attributes cannot have been caused wholly by winnowing of glass from phenocrysts during eruption, as has been suggested for the monotonous intermediate Fish Canyon Tuff. Pumice fragments are also crystal-rich, and chemically and mineralogically indistinguishable from bulk tuff. We

  9. Large-scale simulation of the human arterial tree.

    PubMed

    Grinberg, L; Anor, T; Madsen, J R; Yakhot, A; Karniadakis, G E

    2009-02-01

    1. Full-scale simulations of the virtual physiological human (VPH) will require significant advances in modelling, multiscale mathematics, scientific computing and further advances in medical imaging. Herein, we review some of the main issues that need to be resolved in order to make three-dimensional (3D) simulations of blood flow in the human arterial tree feasible in the near future. 2. A straightforward approach is computationally prohibitive even on the emerging petaflop supercomputers, so a three-level hierarchical approach based on vessel size is required, consisting of: (i) a macrovascular network (MaN); (ii) a mesovascular network (MeN); and (iii) a microvascular network (MiN). We present recent simulations of MaN obtained by solving the 3D Navier-Stokes equations on arterial networks with tens of arteries and bifurcations and accounting for the neglected dynamics through proper boundary conditions. 3. A multiscale simulation coupling MaN-MeN-MiN and running on hundreds of thousands of processors on petaflop computers will require no more than a few CPU hours per cardiac cycle within the next 5 years. The rapidly growing capacity of supercomputing centres opens up the possibility of simulation studies of cardiovascular diseases, drug delivery, perfusion in the brain and other pathologies. PMID:18671721

  10. Analytical and Experimental Investigation of Mixing in Large Passive Containment Volumes

    SciTech Connect

    Per F. Peterson

    2002-10-17

This final report details results from the past three years of the three-year UC Berkeley NEER investigation of mixing phenomena in large-scale passive reactor containments. We have completed all of our three-year deliverables specified in our proposal, as summarized for each deliverable in the body of this report, except for the experiments on steam condensation in the presence of noncondensable gas. We have particularly exciting results from the experiments studying mixing in a large insulated containment with a vertical cooling plate. These experiments have now shown why augmentation has been observed in wall-condensation experiments due to the momentum of the steam break-flow entering large volumes. More importantly, we have also shown that the forced-jet augmentation can be predicted using relatively simple correlations, and that it is independent of the break diameter and depends only on the break flow orientation, location, and momentum. This suggests that we will now be able to take credit for this augmentation in reactor safety analysis, improving safety margins for containment structures. We have finished version 1 of the 1-D Lagrangian flow and heat transfer code BMIX++. This version can solve many complex stratified problems, such as multi-component problems, multi-enclosure problems (two enclosures connected by one connection in the current version), incompressible and compressible problems, problems with multiple jets, plumes, and sinks in one enclosure, problems with wall conduction, and combinations of the above. We believe the BMIX++ code is a very powerful computational tool for studying mixing in stratified enclosures.

  11. Applications of large-volume sampling assemblies for the determination of organochlorines in seawater

    SciTech Connect

    Risebrough, R.W.; Lappe, B.W. de; Ramer, R.

    1995-12-31

In the 1970s an overly ambitious attempt to construct a global mass balance of PCBs was thwarted by the difficulties of obtaining credible values for their seawater concentrations. Concepts of transfer processes have since shifted from a simplistic one-way passage of PCBs from land to sea to continuous exchanges between and among all local media, including transfer from seawater to the atmosphere, with the net fluxes determined by local chemical potentials. Seawater measurements continue to be critically important. The authors describe the latest in a series of sampling assemblies for the determination of PCBs and other organochlorines in natural waters. Each has used glass fiber filters for the collection of particles and a high-density porous polyurethane foam for extraction from the seawater phase. The latest versions prevent channeling around the foam medium, forcing water through the foam, and allow analysis of separate modular units to estimate recoveries. Sample volumes have ranged from 100 to 3,600 liters at sites in coastal California and San Francisco Bay, the eastern Pacific, and coastal Catalonia. The latest version (1995) addresses and at least partially corrects the principal deficiencies of earlier versions -- the large volume of solvents and the considerable personnel time required in sample workup. The authors present recovery data for PCBs, other organochlorines, PAHs, and several herbicides. In the eastern Pacific, PCBs were not detected at a sensitivity level on the order of 1 pg/liter; toxaphene and alpha-HCH were the most abundant organochlorines at those sites. The authors are now somewhat closer to the goal of formulating global mass balance equations and of estimating global inventories of these contaminants.

  12. Simulating future trends in hydrological regime of a large Sudano-Sahelian catchment under climate change

    NASA Astrophysics Data System (ADS)

    Ruelland, D.; Ardoin-Bardin, S.; Collet, L.; Roucou, P.

    2012-03-01

This paper assesses the future variability of water resources in the short, medium, and long terms over a large Sudano-Sahelian catchment in West Africa. Flow simulations were performed with a daily conceptual model. A period of nearly 50 years (1952-2000) was chosen to capture long-term hydro-climatic variability. Calibration and validation were performed on the basis of a multi-objective function that aggregates a variety of goodness-of-fit indices. The climate models HadCM3 and MPI-M under SRES-A2 were used to provide future climate scenarios over the catchment. Outputs from these models were used to generate daily rainfall and temperature series for the 21st century according to: (i) application of the unbiasing and delta methods and (ii) spatial and temporal downscaling. A temperature-based formula was used to calculate present and future potential evapotranspiration (PET). The daily rainfall and PET series were introduced into the calibrated and validated hydrological model to simulate future discharge. The model correctly reproduces the observed discharge at the basin outlet. The Nash-Sutcliffe efficiency criterion is over 89% for both calibration and validation periods, and the volume error between simulation and observation is close to zero for the overall period considered. With regard to future climate, the results show clear trends of reduced rainfall over the catchment. This rainfall deficit, together with a continuing increase in potential evapotranspiration, suggests that runoff from the basin could be substantially reduced, especially in the long term (60-65%), compared to the 1961-1990 reference period. As a result, the long-term hydrological simulations show that the catchment discharge could decrease to the same levels as those observed during the severe drought of the 1980s.
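The two goodness-of-fit measures quoted above, Nash-Sutcliffe efficiency and volume error, are standard hydrological metrics and can be computed as follows (a generic sketch, not the paper's exact multi-objective aggregation):

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2);
    1.0 is a perfect fit, 0.0 is no better than the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def volume_error(obs, sim):
    """Relative bias of cumulative volume; near zero means the simulated
    and observed totals match over the period considered."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return (np.sum(sim) - np.sum(obs)) / np.sum(obs)
```

An NSE over 89% with a volume error close to zero, as reported, means the model tracks both the shape and the total volume of the observed hydrograph.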

  13. Large-scale multi-agent transportation simulations

    NASA Astrophysics Data System (ADS)

    Cetin, Nurhan; Nagel, Kai; Raney, Bryan; Voellmy, Andreas

    2002-08-01

    It is now possible to microsimulate the traffic of whole metropolitan areas with 10 million travelers or more, "micro" meaning that each traveler is resolved individually as a particle. In contrast to physics or chemistry, these particles have internal intelligence; for example, they know where they are going. This means that a transportation simulation project will have, besides the traffic microsimulation, modules which model this intelligent behavior. The most important modules are for route generation and for demand generation. Demand is generated by each individual in the simulation making a plan of activities such as sleeping, eating, working, shopping, etc. If activities are planned at different locations, they obviously generate demand for transportation. This however is not enough since those plans are influenced by congestion which initially is not known. This is solved via a relaxation method, which means iterating back and forth between the activities/routes generation and the traffic simulation.
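The iterate-back-and-forth relaxation can be caricatured with a toy two-route day-to-day model smoothed by the method of successive averages; the route cost parameters are invented for illustration, and the real framework iterates full activity plans rather than a single route split:

```python
def relax_assignment(days=200):
    """Toy relaxation between plan choice and congestion: each 'day',
    travelers see yesterday's congested travel times, the best route
    attracts demand, and shares are updated with step size 1/day (MSA)."""
    free = [10.0, 15.0]   # free-flow travel times of two routes
    slope = [20.0, 5.0]   # linear congestion: time = free + slope * share
    share = [0.5, 0.5]
    for day in range(1, days + 1):
        times = [free[i] + slope[i] * share[i] for i in range(2)]
        best = 0 if times[0] <= times[1] else 1
        target = [1.0 if i == best else 0.0 for i in range(2)]
        share = [share[i] + (target[i] - share[i]) / day for i in range(2)]
    return share, [free[i] + slope[i] * share[i] for i in range(2)]

share, times = relax_assignment()
```

For these parameters the user equilibrium is a 0.4/0.6 split with equal travel times of 18 on both routes, and the iteration settles close to it; the same fixed-point logic underlies the activities/routes relaxation described above.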

  14. Manufacturing Process Simulation of Large-Scale Cryotanks

    NASA Technical Reports Server (NTRS)

    Babai, Majid; Phillips, Steven; Griffin, Brian

    2003-01-01

NASA's Space Launch Initiative (SLI) is an effort to research and develop the technologies needed to build a second-generation reusable launch vehicle. It is required that this new launch vehicle be 100 times safer and 10 times cheaper to operate than current launch vehicles. Part of the SLI includes the development of reusable composite and metallic cryotanks. The size of these reusable tanks is far greater than anything ever developed and exceeds the design limits of current manufacturing tools. Several design and manufacturing approaches have been formulated, but many factors must be weighed during the selection process. Among these factors are tooling reachability, cycle times, feasibility, and facility impacts. The manufacturing process simulation capabilities available at NASA's Marshall Space Flight Center have played a key role in down-selecting between the various manufacturing approaches. By creating 3-D manufacturing process simulations, the varying approaches can be analyzed in a virtual world before any hardware or infrastructure is built. This analysis can detect and eliminate costly flaws in the various manufacturing approaches. The simulations check for collisions between devices, verify that design limits on joints are not exceeded, and provide cycle times which aid in the development of an optimized process flow. In addition, new ideas and concerns are often raised after seeing the visual representation of a manufacturing process flow. The output of the manufacturing process simulations allows cost and safety comparisons to be performed between the various manufacturing approaches. This output helps determine which manufacturing process options reach the safety and cost goals of the SLI. As part of the SLI, The Boeing Company was awarded a basic period contract to research and propose options for both a metallic and a composite cryotank.
Boeing then entered into a task agreement with the Marshall Space Flight Center to provide manufacturing

  15. Modeling of Large Avionic Structures in Electrical Network Simulations

    NASA Astrophysics Data System (ADS)

    Piche, A.; Perraud, R.; Lochot, C.

    2012-05-01

    The extensive introduction of carbon fiber reinforced plastics (CFRP) in conjunction with an increase of electrical systems in aircraft has led to new electromagnetic issues. This situation has reinforced the need for numerical simulation early in the design phase. In this context, we have proposed [1] a numerical methodology to deal with 3D CFRP avionic structures in time domain simulations at system level. This paper presents the last results on this subject and particularly the modeling of A350 fuselage in SABER computation containing the aircraft power distribution.

  16. Large-scale molecular dynamics simulations of fracture and deformation

    NASA Astrophysics Data System (ADS)

    Zhou, S. J.; Beazley, D. M.; Lomdahl, P. S.; Holian, B. L.

    1996-08-01

We have discussed the prospects of applying massively parallel molecular dynamics simulation to investigate brittle versus ductile fracture behaviors and dislocation intersection. This idea is illustrated by simulating dislocation emission from a three-dimensional crack. For the first time, dislocation loops emitted from the crack fronts have been observed. It is found that the dislocation-emission modes, jogging or blunting, are very sensitive to boundary conditions and interatomic potentials. These 3D phenomena can be effectively visualized and analyzed by a new technique, namely, plotting only those atoms within certain ranges of local potential energy.
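The visualization technique mentioned at the end reduces to a simple filter over per-atom local potential energies; a minimal sketch (the energy values and window below are illustrative):

```python
def atoms_in_energy_window(energies, lo, hi):
    """Return indices of atoms whose local potential energy lies in
    [lo, hi]; atoms near defects (crack fronts, dislocation cores) sit
    above the bulk energy, so a suitable window isolates them for plotting."""
    return [i for i, e in enumerate(energies) if lo <= e <= hi]

# three atoms: bulk-like (-4.0), defect-like (-3.2), bulk-like (-3.9)
defect_atoms = atoms_in_energy_window([-4.0, -3.2, -3.9], -3.5, -3.0)
```

Rendering only the selected indices hides the bulk crystal and leaves the emitted dislocation loops visible.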

  17. Simulations of the formation of large-scale structure

    NASA Astrophysics Data System (ADS)

    White, S. D. M.

    Numerical studies related to the simulation of structure growth are examined. The linear development of fluctuations in the early universe is studied. The research of Aarseth, Gott, and Turner (1979) based on N-body integrators that obtained particle accelerations by direct summation of the forces due to other objects is discussed. Consideration is given to the 'pancake theory' of Zel'dovich (1970) for the evolution from adiabatic initial fluctuation, the neutrino-dominated universe models of White, Frenk, and Davis (1983), and the simulations of Davis et al. (1985).

  18. Flight Technical Error Analysis of the SATS Higher Volume Operations Simulation and Flight Experiments

    NASA Technical Reports Server (NTRS)

    Williams, Daniel M.; Consiglio, Maria C.; Murdoch, Jennifer L.; Adams, Catherine H.

    2005-01-01

This paper provides an analysis of Flight Technical Error (FTE) from recent SATS experiments, called the Higher Volume Operations (HVO) Simulation and Flight experiments, which NASA conducted to determine pilot acceptability of the HVO concept for normal operating conditions. Reported are FTE results from simulation and flight experiment data indicating that the SATS HVO concept is viable and acceptable to low-time instrument-rated pilots when compared with today's system (baseline). Described is the comparative FTE analysis of lateral, vertical, and airspeed deviations from the baseline and SATS HVO experimental flight procedures. Based on the FTE analysis, all evaluation subjects, low-time instrument-rated pilots, flew the HVO procedures safely and proficiently in comparison to today's system. In all cases, the results of the flight experiment validated the results of the simulation experiment and confirm the utility of the simulation platform for comparative Human-in-the-Loop (HITL) studies of SATS HVO and baseline operations.
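FTE comparisons of this kind typically reduce each deviation time series (lateral, vertical, airspeed) to summary statistics; the sketch below uses RMS and a 95th-percentile absolute deviation, which are illustrative choices rather than the metrics NASA necessarily reported:

```python
import math

def fte_summary(deviations):
    """Summarize a flight-technical-error series as (RMS, 95th-percentile
    absolute deviation). Smaller values mean tighter path tracking."""
    absdev = sorted(abs(d) for d in deviations)
    n = len(absdev)
    rms = math.sqrt(sum(d * d for d in absdev) / n)
    p95 = absdev[min(n - 1, int(round(0.95 * (n - 1))))]
    return rms, p95
```

Computing such summaries separately for the baseline and HVO procedures gives the kind of side-by-side comparison the paper describes.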

  19. The influence of large-scale structures on entrainment in a decelerating transient turbulent jet revealed by large eddy simulation

    NASA Astrophysics Data System (ADS)

    Hu, Bing; Musculus, Mark P. B.; Oefelein, Joseph C.

    2012-04-01

    To provide a better understanding of the fluid mechanical mechanisms governing entrainment in decelerating jets, we performed a large eddy simulation (LES) of a transient air jet. The ensemble-averaged LES calculations agree well with the available measurements of centerline velocity, and they reveal a region of increased entrainment that grows as it propagates downstream during deceleration. Within the temporal and spatial domains of the simulation, entrainment during deceleration temporarily increases by roughly a factor of two over that of the quasi-steady jet, and thereafter decays to a level lower than the quasi-steady jet. The LES results also provide large-structure flow details that lend insight into the effects of deceleration on entrainment. The simulations show greater growth and separation of large vortical structures during deceleration. Ambient fluid is engulfed into the gaps between the large-scale structures, causing large-scale indentations in the scalar jet boundary. The changes in the growth and separation of large structures during deceleration are attributed to changes in the production and convection of vorticity. Both the absolute and normalized scalar dissipation rates decrease during deceleration, implying that changes in small-scale mixing during deceleration do not play an important role in the increased entrainment. Hence, the simulations predict that entrainment in combustion devices may be controlled by manipulating the fuel-jet boundary conditions, which affect structures at large scales much more than at small scales.

  20. Slug sizing/slug volume prediction, state of the art review and simulation

    SciTech Connect

    Burke, N.E.; Kashou, S.F.

    1995-12-01

    Slug flow is a flow pattern commonly encountered in offshore multiphase flowlines. It is characterized by an alternate flow of liquid slugs and gas pockets, resulting in an unsteady hydrodynamic behavior. All important design variables, such as slug length and slug frequency, liquid holdup, and pressure drop, vary with time and this makes the prediction of slug flow characteristics both difficult and challenging. This paper reviews the state of the art methods in slug catcher sizing and slug volume predictions. In addition, history matching of measured slug flow data is performed using the OLGA transient simulator. This paper reviews the design factors that impact slug catcher sizing during steady state, during transient, during pigging, and during operations under a process control system. The slug tracking option of the OLGA simulator is applied to predict the slug length and the slug volume during a field operation. This paper will also comment on the performance of common empirical slug prediction correlations.

  1. The big fat LARS - a LArge Reservoir Simulator for hydrate formation and gas production

    NASA Astrophysics Data System (ADS)

    Beeskow-Strauch, Bettina; Spangenberg, Erik; Schicks, Judith M.; Giese, Ronny; Luzi-Helbing, Manja; Priegnitz, Mike; Klump, Jens; Thaler, Jan; Abendroth, Sven

    2013-04-01

Simulating natural scenarios at lab scale is a common technique to gain insight into geological processes with moderate effort and expense. Because gas hydrates occur in remote settings, their behavior in sedimentary deposits is largely investigated with experimental setups in the laboratory. In the framework of the submarine gas hydrate research project (SUGAR), a large reservoir simulator (LARS) with an internal volume of 425 liters has been designed, built, and tested. To our knowledge this is presently a unique setup worldwide. Because of its large volume it is suitable for pilot-plant-scale tests of hydrate behavior in sediments. That includes not only the option of systematic tests of gas hydrate formation in various sedimentary settings but also the possibility of mimicking scenarios for hydrate decomposition and subsequent natural gas extraction. Based on these experimental results, various numerical simulations can be realized. Here, we present the design and the experimental setup of LARS. The prerequisites for the simulation of a natural gas hydrate reservoir are porous sediments, methane, water, low temperature, and high pressure. The reservoir is supplied with methane-saturated and pre-cooled water. For its preparation an external gas-water mixing stage is available. The methane-loaded water is continuously flushed into LARS as a finely dispersed fluid via spargers located at the bottom and top. LARS is equipped with a mantle cooling system and can be kept at a chosen set temperature. The temperature distribution is monitored at 14 representative locations throughout the reservoir by Pt100 sensors. The required pressures are maintained using syringe pump stands. A tomographic system, consisting of a 375-electrode configuration, is attached to the mantle to monitor the hydrate distribution throughout the entire reservoir volume. 
Two sets of tubular polydimethylsiloxan-membranes are applied to determine gas-water ratio within the reservoir using the effect of permeability

  2. Feasibility study for a numerical aerodynamic simulation facility. Volume 2: Hardware specifications/descriptions

    NASA Technical Reports Server (NTRS)

    Green, F. M.; Resnick, D. R.

    1979-01-01

    An FMP (Flow Model Processor) was designed for use in the Numerical Aerodynamic Simulation Facility (NASF). The NASF was developed to simulate fluid flow over three-dimensional bodies in wind tunnel environments and in free space. The facility is applicable to studying aerodynamic and aircraft body designs. The following general topics are discussed in this volume: (1) FMP functional computer specifications; (2) FMP instruction specification; (3) standard product system components; (4) loosely coupled network (LCN) specifications/description; and (5) three appendices: performance of trunk allocation contention elimination (trace) method, LCN channel protocol and proposed LCN unified second level protocol.

  3. Cavitation Simulation with Consideration of the Viscous Effect at Large Liquid Temperature Variation

    NASA Astrophysics Data System (ADS)

    Yu, An; Luo, Xian-Wu; Ji, Bin; Huang, Ren-Fang; Hidalgo, Victor; Kim, Song Hak

    2014-08-01

    The phase change due to cavitation is driven not only by the difference between the local pressure and the saturated vapor pressure, but is also affected by changes in physical properties in the case of large liquid temperature variation. The present work simulates cavitation with consideration of the viscous effect as well as the local variation of the saturated vapor pressure, density, etc. A new cavitation model is developed based on bubble dynamics and is applied to analyze the cavitating flow around a NACA0015 hydrofoil at liquid temperatures from 25°C to 150°C. The results of the proposed model, such as the pressure distribution along the hydrofoil wall surface, the vapor volume fraction, and the source term for the mass transfer rate due to cavitation, are compared with the available experimental data and with numerical results from an existing thermodynamic model. The numerical results of the proposed cavitation model show only a slight discrepancy from the experimental results at room temperature, and the accuracy is better than that of the existing thermodynamic cavitation model. Thus the proposed cavitation model is acceptable for the simulation of cavitating flows at different liquid temperatures.
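    The pressure-difference driving term described above can be illustrated with a minimal sketch of a mass-transfer source term with a temperature-dependent saturation pressure. The Zwart-style functional form, the Antoine coefficients for water, and all model constants below are illustrative stand-ins, not the paper's actual model:

```python
import math

def p_sat_water(T_celsius):
    """Approximate saturated vapor pressure of water [Pa] via the
    Antoine equation (standard coefficients, roughly valid 1-100 C)."""
    A, B, C = 8.07131, 1730.63, 233.426  # mmHg / Celsius form
    p_mmhg = 10 ** (A - B / (C + T_celsius))
    return p_mmhg * 133.322

def mass_transfer_rate(p, T_celsius, alpha_v, rho_v,
                       r_b=1e-6, f_vap=50.0, f_cond=0.01, alpha_nuc=5e-4):
    """Zwart-like cavitation source term [kg m^-3 s^-1]:
    positive = evaporation (p below saturation), negative = condensation.
    The bubble radius r_b and empirical factors are illustrative."""
    rho_l = 1000.0                      # liquid density [kg/m^3]
    pv = p_sat_water(T_celsius)         # local saturation pressure
    if p < pv:   # evaporation branch
        return f_vap * 3 * alpha_nuc * (1 - alpha_v) * rho_v / r_b \
               * math.sqrt(2 * (pv - p) / (3 * rho_l))
    else:        # condensation branch
        return -f_cond * 3 * alpha_v * rho_v / r_b \
               * math.sqrt(2 * (p - pv) / (3 * rho_l))
```

The temperature dependence enters through `p_sat_water`, which is what distinguishes hot-liquid cavitation from the room-temperature case.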

  4. Large-eddy simulation of nitrogen injection at trans- and supercritical conditions

    NASA Astrophysics Data System (ADS)

    Müller, Hagen; Niedermeier, Christoph A.; Matheis, Jan; Pfitzner, Michael; Hickel, Stefan

    2016-01-01

    Large-eddy simulations (LESs) of cryogenic nitrogen injection into a warm environment at supercritical pressure are performed and real-gas thermodynamics models and subgrid-scale (SGS) turbulence models are evaluated. The comparison of different SGS models — the Smagorinsky model, the Vreman model, and the adaptive local deconvolution method — shows that the representation of turbulence on the resolved scales has a notable effect on the location of jet break-up, whereas the particular modeling of unresolved scales is less important for the overall mean flow field evolution. More important are the models for the fluid's thermodynamic state. The injected fluid is either in a supercritical or in a transcritical state and undergoes a pseudo-boiling process during mixing. Such flows typically exhibit strong density gradients that delay the instability growth and can lead to a redistribution of turbulence kinetic energy from the radial to the axial flow direction. We evaluate novel volume-translation methods on the basis of the cubic Peng-Robinson equation of state in the framework of LES. At small extra computational cost, their application considerably improves the simulation results compared to the standard formulation. Furthermore, we found that the choice of inflow temperature is crucial for the reproduction of the experimental results and that heat addition within the injector can affect the mean flow field in comparison to results with an adiabatic injector.
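    The role of volume translation can be sketched with the cubic Peng-Robinson equation of state for nitrogen. The constant Péneloux-type shift `c` below is a simplified stand-in for the more elaborate translation methods the paper evaluates, and the bisection solver assumes a supercritical (single-root) isotherm:

```python
import math

R = 8.314462618  # J/(mol K)
# Nitrogen critical properties (standard literature values)
TC, PC, OMEGA, M = 126.192, 3.3958e6, 0.0372, 28.0134e-3

A_C = 0.45724 * R**2 * TC**2 / PC
B = 0.07780 * R * TC / PC
KAPPA = 0.37464 + 1.54226 * OMEGA - 0.26992 * OMEGA**2

def pr_pressure(T, v):
    """Peng-Robinson pressure [Pa] at temperature T [K], molar volume v [m^3/mol]."""
    alpha = (1 + KAPPA * (1 - math.sqrt(T / TC)))**2
    return R * T / (v - B) - A_C * alpha / (v * v + 2 * B * v - B * B)

def pr_density(T, p, c=0.0, hi=1.0):
    """Mass density [kg/m^3] from the PR EOS by bisection on molar volume.
    `c` is a constant Peneloux-type volume translation [m^3/mol];
    c = 0 recovers the standard (untranslated) formulation."""
    lo = B * 1.0001
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if pr_pressure(T, mid) > p:
            lo = mid   # pressure too high -> true volume is larger
        else:
            hi = mid
    return M / (0.5 * (lo + hi) - c)
```

A positive translation shifts the predicted molar volume down, i.e. raises the density, which is the lever the paper uses to improve agreement with the measured jet densities.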

  5. Low-Dissipation Advection Schemes Designed for Large Eddy Simulations of Hypersonic Propulsion Systems

    NASA Technical Reports Server (NTRS)

    White, Jeffrey A.; Baurle, Robert A.; Fisher, Travis C.; Quinlan, Jesse R.; Black, William S.

    2012-01-01

    The 2nd-order upwind inviscid flux scheme implemented in the multi-block, structured-grid, cell-centered, finite-volume, high-speed reacting flow code VULCAN has been modified to reduce numerical dissipation. This modification was motivated by the desire to improve the code's ability to perform large eddy simulations. The reduction in dissipation was accomplished through a hybridization of non-dissipative and dissipative discontinuity-capturing advection schemes that reduces numerical dissipation while maintaining the ability to capture shocks. A methodology was developed and implemented for constructing hybrid advection schemes that blend non-dissipative fluxes, consisting of linear combinations of divergence and product-rule forms discretized using 4th-order symmetric operators, with dissipative, 3rd- or 4th-order reconstruction-based upwind flux schemes. A series of benchmark problems of increasing spatial and fluid-dynamical complexity was used to examine the ability of the candidate schemes to resolve and propagate structures typical of turbulent flow, their discontinuity-capturing capability and their robustness. A realistic geometry typical of a high-speed propulsion system flowpath was computed using the most promising of the examined schemes and was compared with available experimental data to demonstrate simulation fidelity.
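    The blending idea can be sketched for linear advection: a non-dissipative 4th-order symmetric interface flux is mixed with a dissipative upwind flux through a jump-based sensor. The sensor form and its coefficients are illustrative inventions for this sketch, not VULCAN's actual scheme:

```python
import numpy as np

def hybrid_flux(u, a=1.0, eps=1e-12):
    """Interface fluxes at i+1/2 for linear advection f(u) = a*u on a
    periodic grid: a non-dissipative 4th-order central flux blended with
    a dissipative 1st-order upwind flux via a simple jump-based sensor
    (illustrative stand-in for the paper's hybrid schemes; a > 0 assumed)."""
    um1, u0, up1, up2 = np.roll(u, 1), u, np.roll(u, -1), np.roll(u, -2)
    # 4th-order symmetric interpolation to the i+1/2 interface
    f_central = a * (-um1 + 7 * u0 + 7 * up1 - up2) / 12.0
    # dissipative upwind flux
    f_upwind = a * u0
    # sensor: normalized local jump -> blending factor sigma in [0, 1]
    jump = np.abs(up1 - u0)
    smooth = np.abs(up1) + np.abs(u0) + eps
    sigma = np.minimum(1.0, (jump / smooth)**2 * 100.0)
    return (1 - sigma) * f_central + sigma * f_upwind
```

In smooth regions sigma stays near zero and the low-dissipation central flux dominates; across a discontinuity sigma saturates to one and the scheme falls back to the shock-capturing upwind flux.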

  6. Comparing selected morphological models of hydrated Nafion using large scale molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Knox, Craig K.

    Experimental elucidation of the nanoscale structure of hydrated Nafion, the most popular polymer electrolyte or proton exchange membrane (PEM) to date, and its influence on macroscopic proton conductance is particularly challenging. While it is generally agreed that hydrated Nafion is organized into distinct hydrophilic domains or clusters within a hydrophobic matrix, the geometry and length scale of these domains continue to be debated. For example, at least half a dozen different domain shapes, ranging from spheres to cylinders, have been proposed based on experimental SAXS and SANS studies. Since the characteristic length scale of these domains is believed to be ~2 to 5 nm, very large molecular dynamics (MD) simulations are needed to accurately probe the structure and morphology of these domains, especially their connectivity and percolation phenomena at varying water content. Using classical, all-atom MD with explicit hydronium ions, simulations have been performed to study the first hydrated Nafion systems large enough (~2 million atoms in a ~30 nm cell) to directly observe several hydrophilic domains at the molecular level. These systems comprised six of the most significant and relevant morphological models of Nafion to date: (1) the cluster-channel model of Gierke, (2) the parallel cylinder model of Schmidt-Rohr, (3) the local-order model of Dreyfus, (4) the lamellar model of Litt, (5) the rod network model of Kreuer, and (6) a 'random' model, commonly used in previous simulations, that does not directly assume any particular geometry, distribution, or morphology. These simulations revealed fast intercluster bridge formation and network percolation in all of the models. Sulfonates were found inside these bridges and played a significant role in percolation. Sulfonates also strongly aggregated around and inside clusters. Cluster surfaces were analyzed to study the hydrophilic-hydrophobic interface. Interfacial area and cluster volume

  7. Annealing as-grown large-volume CZT single crystals for increased spectral resolution

    SciTech Connect

    Dr. Longxia Li

    2008-03-19

    The spectroscopic performance of current large-volume cadmium zinc telluride (Cd0.9Zn0.1Te, CZT) detectors is impaired by the cumulative effect of tellurium precipitates (secondary phases) present in CZT single crystals grown by low-pressure Bridgman techniques (1). This statistical effect may limit the energy resolution of large-volume CZT detectors (typically 2-5% at 662 keV for 12-mm-thick devices). The stochastic nature of the interaction prevents the use of any electronic or digital charge-correction technique without a significant reduction in detector efficiency. This volume constraint hampers the utility of CZT, since the detectors are inefficient at detecting photons >1 MeV and/or in low-fluence situations. During the project, seven runs of CZT ingots were grown; in these ingots the indium dopant concentration was varied in the range between 0.5 ppm and 6 ppm. An IR mapping imaging method (home-installed system, resolution of 1.5 μm) was employed to study the Te precipitates, which were examined systematically in as-grown and annealed CZT wafers. Applying our standard annealing procedure for CZT (Zn = 4%), or a two-step annealing, to radiation-detector CZT (Zn = 10%), we achieved precipitate-free (size < 1 μm) n+-type CZT with resistivity > 10^9-10^10 Ω cm. We believe that the Te precipitates are p-type defects, and reducing their number causes the CZT to become n+-type; we therefore varied or reduced the indium dopant concentration during growth and changed the Te-precipitate size and density by using different Cd temperatures and different annealing procedures. We compared Te-precipitate size and density with the indium dopant concentration and found that CZT with smaller Te precipitates is suitable for radiation use, but precipitate-free CZT cannot be used in radiation detectors, because the CZT would become

  8. Photoperiod is associated with hippocampal volume in a large community sample

    PubMed Central

    Miller, Megan A.; Leckie, Regina L.; Donofry, Shannon D.; Gianaros, Peter J.; Erickson, Kirk I.; Manuck, Stephen B.; Roecklein, Kathryn A.

    2015-01-01

    Although animal research has demonstrated seasonal changes in hippocampal volume, reflecting seasonal neuroplasticity, seasonal differences in human hippocampal volume have yet to be documented. Hippocampal volume has also been linked to depressed mood, a seasonally varying phenotype. Therefore, we hypothesized that seasonal differences in day length (i.e., photoperiod) would predict differences in hippocampal volume, and that this association would be linked to low mood. Healthy participants aged 30–54 (M = 43; SD = 7.32) from the University of Pittsburgh Adult Health and Behavior II project (n = 404; 53% female) were scanned in a 3T MRI scanner. Hippocampal volumes were determined using an automated segmentation algorithm in FreeSurfer. A mediation model tested whether hippocampal volume mediated the relationship between photoperiod and mood. Secondary analyses included seasonally fluctuating variables (i.e., sleep and physical activity) which have been shown to influence hippocampal volume. Shorter photoperiods were significantly associated with higher BDI scores (R² = 0.01, β = −0.12, p = 0.02) and smaller hippocampal volumes (R² = 0.40, β = 0.08, p = 0.04). However, due to the lack of an association between hippocampal volume and Beck Depression Inventory scores in the current sample, the mediation hypothesis was not supported. This study is the first to demonstrate an association between season and hippocampal volume. These data offer preliminary evidence that human hippocampal plasticity could be associated with photoperiod and indicate a need for longitudinal studies. PMID:25394737
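    The mediation model tested above can be sketched with Baron-Kenny-style path regressions on synthetic data; the study itself used covariate-adjusted models, so this shows only the bare structure of the test:

```python
import numpy as np

def ols_slope(x, y):
    """Slope of y ~ x by ordinary least squares (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

def mediation_paths(photoperiod, hippo_vol, bdi):
    """Baron-Kenny-style path estimates for the model in the abstract:
    photoperiod (X) -> hippocampal volume (M) -> depressive symptoms (Y).
    Returns (a, b, c, indirect), where indirect = a*b.
    Illustrative only: no covariates, standardization, or bootstrap CI."""
    a = ols_slope(photoperiod, hippo_vol)   # path a: X -> M
    c = ols_slope(photoperiod, bdi)         # total effect: X -> Y
    # path b: effect of M on Y controlling for X
    X = np.column_stack([np.ones_like(photoperiod), photoperiod, hippo_vol])
    beta, *_ = np.linalg.lstsq(X, bdi, rcond=None)
    b = beta[2]
    return a, b, c, a * b
```

Mediation is supported only when both `a` and `b` are reliably nonzero; in the study the M-to-Y link was absent, which is why the hypothesis failed despite significant `a` and `c` paths.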

  9. Radiometric Dating of Large Volume Flank Collapses in The Lesser Antilles Arc.

    NASA Astrophysics Data System (ADS)

    Quidelleur, X.; Samper, A.; Boudon, G.; Le Friant, A.; Komorowski, J.

    2004-12-01

    It is now accepted that flank collapse, probably triggered by magmatic inflation and/or gravitational instability, is a recurrent process in the evolution of the Lesser Antilles Arc volcanoes. Large-magnitude debris avalanche deposits have been identified offshore, in the Grenada basin (Deplus et al., 2001; Le Friant et al., 2001). The widest extents have been observed off the coast of Dominica and St Lucia, with associated volumes up to 20 km³. Another large-scale event, whose marine evidence is probably covered by sediments and later flank collapses, has been inferred on land from morphological evidence and the characteristic deposits of the Carbets structure in Martinique. We present radiometric dating of these three major events using the K-Ar Cassignol-Gillot technique performed on selected groundmass. Both volcanic formations preceding the flank collapses (remnants of the horseshoe-shaped structures or basal lava flows) and those following the landslides (lava domes) have been dated. In the Qualibou depression of St. Lucia, the former structure has been dated at 1096 ± 16 ka and the collapse constrained by dome emplacement prior to 97 ± 2 ka (Petit Piton). In Dominica, several structures have been associated with repetitive flank collapse events inferred from marine data (Le Friant et al., 2002). The Plat-Pays event probably occurred after 96 ± 2 ka. Inside the inherited depression, Scotts Head, which is interpreted as a proximal pluri-kilometric megablock from the Soufriere avalanche, has been dated at 14 ± 1 ka, providing an older bound for this event. In Martinique, three different domes within the Carbets structure have been dated at 335 ± 5 ka. Assuming rapid magma emplacement following pressure release due to unloading, this constrains the age of this high-magnitude event. Finally, these results obtained from three of the most voluminous flank collapses provide constraints for estimating the recurrence of these events, which represent one of the major hazards associated

  10. Radiometric dating of three large volume flank collapses in the Lesser Antilles Arc

    NASA Astrophysics Data System (ADS)

    Samper, A.; Quidelleur, X.; Boudon, G.; Le Friant, A.; Komorowski, J. C.

    2008-10-01

    It is now recognised that flank collapses are a recurrent process in the evolution of the Lesser Antilles Arc volcanoes. Large magnitude debris-avalanche deposits have been identified off the coast of Dominica, Martinique and St. Lucia, with associated volumes up to 20 km³ [Deplus, C., Le Friant, A., Boudon, G., Komorowski, J.-C., Villemant, B., Harford, C., Ségoufin, J., Cheminée, J.-L., 2001. Submarine evidence for large-scale debris avalanches in the Lesser Antilles Arc. Earth Planet. Sci. Lett., 192: 145-157.]. We present new radiometric dating of three major events using the K-Ar Cassignol-Gillot technique. In the Qualibou depression of St. Lucia, a collapse has been constrained by dome emplacement prior to 95 ± 2 ka. In Dominica, where repetitive flank collapse events have occurred [Le Friant, A., Boudon, G., Komorowski, J.-C., Deplus, C., 2002. L'île de la Dominique, à l'origine des avalanches de débris les plus volumineuses de l'arc des Petites Antilles. C.R. Geoscience, 334: 235-243], the Plat Pays event probably occurred after 96 ± 2 ka. Inside the depression caused by this event, Scotts Head, which is interpreted as a proximal megabloc from the subsequent Soufriere avalanche event, has been dated at 14 ± 1 ka, providing an older bound for this event. On Martinique, three different domes within the Carbets structure dated at 337 ± 5 ka constrain the age of this high magnitude event. Finally, these results obtained from three of the most voluminous flank collapses provide constraints to estimate the recurrence of these events, which represent one of the major hazards associated with volcanoes of the Lesser Antilles Arc.
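    The ages quoted above rest on the standard K-Ar age equation; a minimal sketch follows (decay constants of Steiger and Jäger, 1977 — the Cassignol-Gillot technique concerns how the radiogenic argon is measured, not this equation):

```python
import math

LAMBDA_TOTAL = 5.543e-10   # total 40K decay constant [1/yr]
LAMBDA_EC = 0.581e-10      # partial constant for 40K -> 40Ar branch [1/yr]

def k_ar_age(ar40_star_over_k40):
    """K-Ar age [yr] from the radiogenic 40Ar*/40K mole ratio, using the
    standard age equation t = (1/L) * ln(1 + (L/L_ec) * 40Ar*/40K)."""
    return math.log(1.0 + (LAMBDA_TOTAL / LAMBDA_EC) * ar40_star_over_k40) \
           / LAMBDA_TOTAL
```

For the young ages in these papers (tens to hundreds of ka) the radiogenic argon fraction is tiny, which is why the Cassignol-Gillot unspiked measurement protocol is needed.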

  11. High-resolution and large-volume tomography reconstruction for x-ray microscopy

    NASA Astrophysics Data System (ADS)

    Cheng, Chang-Chieh; Hwu, Yeukuang; Ching, Yu-Tai

    2016-03-01

    This paper presents a method of X-ray image acquisition for high-resolution tomography reconstruction that uses a synchrotron radiation light source to reconstruct a three-dimensional tomographic volume dataset for a nanoscale object. For large objects, because of the limited field of view, a projection image must be assembled from several shots taken at different locations, using an image-stitching method to combine the image blocks. In this study, the overlap between image blocks should be kept small because the light source is synchrotron radiation and the X-ray dose should be minimized as much as possible. We use the properties of synchrotron radiation to make image stitching and alignment succeed even when the overlaps between adjacent image blocks are small; the overlap can be reduced to 15% of the size of each image block. During reconstruction, mechanical stability must also be considered, because instability leads to misalignment problems in tomography. We adopt the feature-based alignment
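    Tile registration under small overlap can be sketched with FFT phase correlation; note the paper adopts a feature-based alignment, so this is an alternative, feature-free illustration of the registration step, not their method:

```python
import numpy as np

def estimate_shift(block_a, block_b):
    """Estimate the integer (dy, dx) translation between two overlapping
    image tiles by FFT phase correlation: the normalized cross-power
    spectrum's inverse transform peaks at the relative shift."""
    A = np.fft.fft2(block_a)
    B = np.fft.fft2(block_b)
    cross = A * np.conj(B)
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap to signed shifts
    h, w = corr.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

Because only the phase is kept, the peak stays sharp even when the overlapping strip carries little contrast, which matters when overlaps are trimmed to ~15% to limit dose.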

  12. Twinning in vapour-grown, large volume Cd1-xZnxTe crystals

    NASA Astrophysics Data System (ADS)

    Tanner, B. K.; Mullins, J. T.; Pym, A. T. G.; Maneuski, D.

    2016-08-01

    The onset of twinning from (2̄1̄1̄) to (1̄3̄3̄) in large volume Cd1-xZnxTe crystals, grown by vapour transport on (2̄1̄1̄), often referred to as (211)B, oriented GaAs seeds, has been investigated using X-ray diffraction imaging (X-ray topography). Twinning is not associated with strains at the GaAs/CdTe interface, as the initial growth was always in the (2̄1̄1̄) orientation. Nor is twinning related to lattice strains associated with the injection of Zn subsequent to the initial nucleation and growth of pure CdTe, as in both cases twinning occurred after growth of several mm of Cd1-xZnxTe. While in both crystals examined there was a region of disturbed growth prior to the twinning transition, in neither crystal does this strain appear to have nucleated the twinning process. In both cases, un-twinned material remained after twinning was observed, the scale of the resulting twin boundaries being sub-micron. Simultaneous twinning across the whole sample surface was observed in one sample, whereas in the other, twinning was nucleated at different points and times during growth.

  13. Translational and Brownian motion in laser-Doppler flowmetry of large tissue volumes.

    PubMed

    Binzoni, T; Leung, T S; Seghier, M L; Delpy, D T

    2004-12-21

    This study reports the derivation of a precise mathematical relationship between the different p-moments of the power spectrum of the photoelectric current obtained from a laser-Doppler flowmeter (LDF) and the red blood cell speed. The main point is that both the Brownian movement (defining the 'biological zero') and the translational movement are taken into account, clarifying in this way the exact contribution of each parameter to the LDF-derived signals. The derivation of the equations is based on quasi-elastic scattering theory and holds for multiple scattering (i.e., measurements in large tissue volumes and/or at very high red blood cell concentration). The paper also discusses why, experimentally, there exists a range in which the relationship between the first moment of the power spectrum and the average red blood cell speed may be considered 'linear', and which physiological determinants can result in nonlinearity. A correct way to subtract the biological zero from the LDF data is also proposed. The findings should help in the design of improved LDF instruments and in the interpretation of experimental data.
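    The p-moments discussed above can be computed directly from a sampled photocurrent; the band limits and normalization below are illustrative choices for a sketch, not the paper's derivation:

```python
import numpy as np

def spectral_moment(signal, fs, p=1, f_band=(20.0, None)):
    """p-th moment M_p = sum of f^p * P(f) of the photocurrent power
    spectrum over a frequency band. M_1 is the conventional LDF 'flux'
    estimate; M_1/M_0 gives a mean Doppler frequency that tracks the
    average red-blood-cell speed."""
    n = len(signal)
    # one-sided periodogram of the mean-removed signal
    P = np.abs(np.fft.rfft(signal - np.mean(signal)))**2 / n
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    lo, hi = f_band
    mask = f >= lo
    if hi is not None:
        mask &= f <= hi
    return np.sum(f[mask]**p * P[mask])
```

Subtracting a no-flow ("biological zero") recording's moment before interpreting M_1 follows the correction logic the paper formalizes, though the exact subtraction rule is theirs.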

  14. Large Volume Calorimeter Comparison Measurement Results Collected at the Los Alamos National Laboratory Plutonium Facility.

    SciTech Connect

    Bracken, D. S.

    2005-01-01

    A calorimeter capable of measuring the power output from special nuclear material in 208-liter (55-gal) shipping or storage containers was designed and fabricated at Los Alamos National Laboratory (LANL). This high-sensitivity, large-volume calorimeter (LVC) provides a reliable NDA method to measure many difficult-to-assay forms of plutonium and tritium more accurately. The entire calorimeter is 104 cm wide x 157 cm deep x 196 cm high in the closed position. The LVC also requires space for a standard electronics rack. A standard 208-liter drum with a 60-cm-diameter retaining ring with bolt will fit into the LVC measurement chamber. With careful positioning, cylindrical items up to 66 cm in diameter and 100 cm tall can be assayed in the LVC. The LVC was used to measure numerous plutonium-bearing items in 208-liter drums at the Los Alamos Plutonium Facility. Measurement results from real waste drums that were previously assayed using multiple NDA systems are compared with the LVC results. The calorimeter had previously performed well under laboratory conditions using Pu-238 heat standards; the in-plant instrument performance is compared with the laboratory performance. Assay times, precision, measurement threshold, and operability of the LVC are also presented.

  15. Practical gamma spectroscopy assay techniques for large volume low-level waste boxes.

    SciTech Connect

    Myers, S. C.; Gruetzmacher, K. M.; Scheffing, C. C.; Gallegos, L. E.; Bustos, R. M.

    2002-01-01

    A study was conducted at the Los Alamos National Laboratory (LANL) to evaluate the performance of the SNAP (Spectral Nondestructive Assay Platform) analytical software for measurements of known standards in large metal waste boxes (2.5 m³ volume). The trials were designed to test the accuracy and variance of the analytical results for low-density combustible matrices and higher-density metal matrices at two discrete gamma-ray energies: 121.78 keV and 411.12 keV. For both matrix types, the measurement method that produced the most accurate results with the lowest associated standard deviation involved combining four individual measurements taken at the geometric center of each of the box's four vertical sides. With this method, the overall bias and the standard deviation among 24 individual results for the 121.78 keV and 411.12 keV gamma rays were as follows: 3.38% (±20.19%) and 3.68% (±15.47%) for the combustible matrix, and 37.88% (±67.64%) and 9.38% (±33.15%) for the metal matrix. The persistent positive bias from measurements of the metal box is believed to be a result of a nonhomogeneously distributed matrix.
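    The best-performing protocol, combining four single-side measurements into one result, can be sketched as follows; the arithmetic-mean combination is an assumption, since the abstract does not state the exact combination rule:

```python
import numpy as np

def four_side_assay(side_results, true_value):
    """Combine four single-side gamma assays of a waste box (one per
    vertical side) into one result, and report percent bias and spread
    against a known standard. The mean combination is an assumed rule."""
    result = float(np.mean(side_results))
    bias_pct = 100.0 * (result - true_value) / true_value
    spread_pct = 100.0 * float(np.std(side_results, ddof=1)) / true_value
    return result, bias_pct, spread_pct
```

Averaging opposite-side views partially cancels the source-position dependence that single-view assays of a nonhomogeneous matrix suffer from, which is consistent with why this method scored best in the study.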

  16. Multi-stage polymer systems for the autonomic regeneration of large damage volumes

    NASA Astrophysics Data System (ADS)

    Santa Cruz, Windy Ann

    Recovery of catastrophic damage requires a robust chemistry capable of addressing the complex challenges encountered by autonomic regeneration. Although self-healing polymers have the potential to increase material lifetimes and safety, these systems have been limited to recovery of internal microcracks and surface damage. Current technologies thereby fail to address the restoration of large, open damage volumes. A regenerative chemistry was developed by incorporating a gel scaffold within liquid healing agents. The healing system undergoes two stages, sol-gel and gel-polymer. Stage 1, rapid formation of a crosslinked gel, creates a synthetic support for the healing agents as they deposit across the damage region. Stage 2 comprises the polymerization of monomer using a room temperature redox initiation system to recover the mechanical properties of the substrate. The two stages are chemically compatible and only react when a specific reaction trigger is introduced -- an acid catalyst for gelation and initiator-promoter for polymerization. Cure kinetics, chemical and mechanical properties can be tuned by employing different monomer systems. The versatile gelation chemistry gels over 20 vinyl monomers to yield both thermoplastic and thermosetting polymers. The healing efficacy of the two-stage system was studied in thin, vascularized epoxy sheets. By splitting the chemistry into two low viscosity fluids, we demonstrated regeneration of gaps up to 9 mm in diameter. The combination of microvascular networks and a new healing chemistry demonstrates an innovative healing system that significantly exceeds the performance of traditional methods.

  17. A new large-volume metal reference standard for radioactive waste management.

    PubMed

    Tzika, F; Hult, M; Stroh, H; Marissens, G; Arnold, D; Burda, O; Kovář, P; Suran, J; Listkowska, A; Tyminski, Z

    2016-03-01

    A new large-volume metal reference standard has been developed, intended for the calibration of free-release radioactivity measurement systems. It is made up of cast iron tubes placed inside a box of the size of a Euro-pallet (80 × 120 cm). The tubes contain certified activity concentrations of (60)Co (0.290 ± 0.006 Bq g(-1)) and (110m)Ag (3.05 ± 0.09 Bq g(-1)) (reference date: 30 September 2013). They were produced using centrifugal casting from a smelt into which (60)Co was first added and then one piece of neutron-irradiated silver wire was progressively diluted. The iron castings were machined to the desired dimensions. The final material consists of 12 iron tubes of 20 cm outer diameter, 17.6 cm inner diameter, 40 cm length/height and 245.9 kg total mass. This paper describes the reference standard and the process of determining the reference activity values. PMID:25977349
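    Using the certified values in practice requires decay-correcting them from the reference date to the measurement date; a minimal sketch with standard half-lives:

```python
import math

# Standard evaluated half-lives [days]
HALF_LIFE_DAYS = {"Co-60": 1925.28, "Ag-110m": 249.83}

def decay_corrected(a0, nuclide, days_elapsed):
    """Activity concentration after `days_elapsed` days, decayed from the
    certified reference value a0 (e.g. Bq/g at 30 September 2013)."""
    t_half = HALF_LIFE_DAYS[nuclide]
    return a0 * math.exp(-math.log(2.0) * days_elapsed / t_half)
```

The short-lived (110m)Ag component loses half its activity in about eight months, so the usable calibration window for that line is much shorter than for (60)Co.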

  18. Detecting Boosted Dark Matter from the Sun with Large Volume Neutrino Detectors

    SciTech Connect

    Berger, Joshua; Cui, Yanou; Zhao, Yue; /Stanford U., ITP /Stanford U., Phys. Dept.

    2015-04-02

    We study novel scenarios where thermal dark matter (DM) can be efficiently captured in the Sun and annihilate into boosted dark matter. In models with semi-annihilating DM, where DM has a non-minimal stabilization symmetry, or in models with a multi-component DM sector, annihilations of DM can give rise to stable dark sector particles with moderate Lorentz boosts. We investigate both of these possibilities, presenting concrete models as proofs of concept. Both scenarios can yield viable thermal relic DM with masses O(1)-O(100) GeV. Taking advantage of the energetic proton recoils that arise when the boosted DM scatters off matter, we propose a detection strategy which uses large volume terrestrial detectors, such as those designed to detect neutrinos or proton decays. In particular, we propose a search for proton tracks pointing towards the Sun. We focus on signals at Cherenkov-radiation-based detectors such as Super-Kamiokande (SK) and its upgrade Hyper-Kamiokande (HK). We find that with spin-dependent scattering as the dominant DM-nucleus interaction at low energies, boosted DM can leave detectable signals at SK or HK, with sensitivity comparable to DM direct detection experiments while being consistent with current constraints. Our study provides a new search path for DM sectors with non-minimal structure.

  19. Analysis of Nucleosides in Municipal Wastewater by Large-Volume Liquid Chromatography Tandem Mass Spectrometry

    PubMed Central

    Brewer, Alex J.; Lunte, Craig

    2015-01-01

    Nucleosides are components of both DNA and RNA, and contain either a ribose (RNA) or 2-deoxyribose (DNA) sugar and a purine or pyrimidine base. In addition to DNA and RNA turnover, modified nucleosides found in urine have been correlated with a diminished health status associated with AIDS, cancers, oxidative stress and age. Nucleosides found in municipal wastewater influent are potentially useful markers of community health status and, as of now, remain uninvestigated. A method was developed to quantify nucleosides in municipal wastewater using large-volume injection, liquid chromatography, and mass spectrometry. Method accuracy ranged from 92 to 139% when quantified using isotopically labeled internal standards. Precision ranged from 6.1 to 19% relative standard deviation. The method's utility was demonstrated by the analysis of twenty-four-hour composite wastewater influent samples collected over a week to investigate community nucleoside excretion. Nucleosides originating from RNA were more abundant than those from DNA over the study period, with total loads of nucleosides ranging from 2 to 25 kg/day. Given the relatively high amounts of nucleosides found over the study period, they present an attractive analyte for the investigation of community health. PMID:26322136

  20. Development of a large mosaic volume phase holographic (VPH) grating for APOGEE

    NASA Astrophysics Data System (ADS)

    Arns, James; Wilson, John C.; Skrutskie, Mike; Smee, Steve; Barkhouser, Robert; Eisenstein, Daniel; Gunn, Jim; Hearty, Fred; Harding, Al; Maseman, Paul; Holtzman, Jon; Schiavon, Ricardo; Gillespie, Bruce; Majewski, Steven

    2010-07-01

    Volume phase holographic (VPH) gratings are increasingly being used as diffractive elements in astronomical instruments due to their potential for very high peak diffraction efficiencies and the possibility of a compact instrument design when the gratings are used in transmission. Historically, VPH grating (VPHG) sizes have been limited by the size of manufacturer's holographic recording optics. We report on the design, specification and fabrication of a large, 290 mm × 475 mm elliptically-shaped, mosaic VPHG for the Apache Point Observatory Galactic Evolution Experiment (APOGEE) spectrograph. This high-resolution near-infrared multi-object spectrograph is in construction for the Sloan Digital Sky Survey III (SDSS III). The 1008.6 lines/mm VPHG was designed for optimized performance over a wavelength range from 1.5 to 1.7 μm. A step-and-repeat exposure method was chosen to fabricate a three-segment mosaic on a 305 mm × 508 mm monolithic fused-silica substrate. Specification considerations imposed on the VPHG to assure the mosaic construction will satisfy the end use requirements are discussed. Production issues and test results of the mosaic VPHG are discussed.
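    The design point of the grating can be checked against the Littrow/Bragg condition, 2 sin θ = mλ/d; for 1008.6 lines/mm at the 1.6 μm band center this gives an incidence angle near 54° (a consistency check on the stated parameters, not a figure from the paper):

```python
import math

def littrow_angle_deg(lines_per_mm, wavelength_um, order=1):
    """Littrow (Bragg) incidence angle [deg] for a transmission grating,
    from 2 sin(theta) = m * lambda / d."""
    d_um = 1000.0 / lines_per_mm            # groove spacing [microns]
    s = order * wavelength_um / (2.0 * d_um)
    if abs(s) > 1.0:
        raise ValueError("no propagating order at this wavelength")
    return math.degrees(math.asin(s))
```

VPH gratings reach their peak diffraction efficiency near this Bragg angle, which is why the fringe frequency and the 1.5-1.7 μm band are specified together.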

  1. Development testing of large volume water sprays for warm fog dispersal

    NASA Technical Reports Server (NTRS)

    Keller, V. W.; Anderson, B. J.; Burns, R. A.; Lala, G. G.; Meyer, M. B.; Beard, K. V.

    1986-01-01

    A new brute-force method of warm fog dispersal is described. The method uses large volume recycled water sprays to create curtains of falling drops through which the fog is processed by the ambient wind and spray induced air flow. Fog droplets are removed by coalescence/rainout. The efficiency of the technique depends upon the drop size spectra in the spray, the height to which the spray can be projected, the efficiency with which fog laden air is processed through the curtain of spray, and the rate at which new fog may be formed due to temperature differences between the air and spray water. Results of a field test program, implemented to develop the data base necessary to assess the proposed method, are presented. Analytical calculations based upon the field test results indicate that this proposed method of warm fog dispersal is feasible. Even more convincingly, the technique was successfully demonstrated in the one natural fog event which occurred during the test program. Energy requirements for this technique are an order of magnitude less than those to operate a thermokinetic system. An important side benefit is the considerable emergency fire extinguishing capability it provides along the runway.

  2. A study on high strength concrete prepared with large volumes of low calcium fly ash

    SciTech Connect

    Poon, C.S.; Lam, L.; Wong, Y.L.

    2000-03-01

    This paper presents the results of a laboratory study on high strength concrete prepared with large volumes of low calcium fly ash. The parameters studied included compressive strength, heat of hydration, chloride diffusivity, degree of hydration, and pore structures of fly ash/cement concrete and corresponding pastes. The experimental results showed that concrete with a 28-day compressive strength of 80 MPa could be obtained with a water-to-binder (w/b) ratio of 0.24 and a fly ash content of 45%. Such concrete has a lower heat of hydration and chloride diffusivity than the equivalent plain cement concrete or concrete prepared with lower fly ash contents. The test results showed that at lower w/b ratios, the contribution to strength by the fly ash was higher than in the mixes prepared with higher w/b ratios. The study also quantified the reaction rates of cement and fly ash in the cementitious materials. The results demonstrated the dual effects of fly ash in concrete: (1) acting as a micro-aggregate and (2) acting as a pozzolana. It was also noted that the strength contribution of fly ash in concrete was better than in the equivalent cement/fly ash pastes, suggesting that the fly ash improved the interfacial bond between the paste and the aggregates in the concrete. Such an improvement was also reflected in the results of the mercury intrusion porosimetry (MIP) test.
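
    The mix arithmetic behind such a design is simple; the sketch below uses the w/b ratio and fly ash fraction quoted in the abstract, applied to an assumed 500 kg of binder (the batch size is hypothetical, not from the paper):

```python
def binder_mix(total_binder_kg, wb_ratio, fly_ash_fraction):
    """Split a binder mass into cement and fly ash, and give the water mass
    implied by a water-to-binder (w/b) ratio.  Purely illustrative arithmetic."""
    fly_ash = total_binder_kg * fly_ash_fraction
    cement = total_binder_kg - fly_ash
    water = total_binder_kg * wb_ratio
    return cement, fly_ash, water

# Mix from the study: w/b = 0.24 with 45% low-calcium fly ash replacement.
cement, fly_ash, water = binder_mix(500.0, 0.24, 0.45)
```

    For 500 kg of binder this gives 275 kg cement, 225 kg fly ash and 120 kg of mixing water.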

  3. Plasma response to electron energy filter in large volume plasma device

    SciTech Connect

    Sanyasi, A. K.; Awasthi, L. M.; Mattoo, S. K.; Srivastava, P. K.; Singh, S. K.; Singh, R.; Kaw, P. K.

    2013-12-15

    An electron energy filter (EEF) is embedded in the Large Volume Plasma Device plasma for carrying out studies on the excitation of plasma turbulence by a gradient in electron temperature (ETG), described in the paper of Mattoo et al. [S. K. Mattoo et al., Phys. Rev. Lett. 108, 255007 (2012)]. In this paper, we report results on the response of the plasma to the EEF. It is shown that inhomogeneity in the magnetic field of the EEF switches on several physical phenomena, resulting in plasma regions with different characteristics, including a plasma region free from energetic electrons that is suitable for the study of ETG turbulence. Specifically, we report that localized structures of plasma density, potential, electron temperature, and plasma turbulence are excited in the EEF plasma. It is shown that the structures of electron temperature and potential are created by the energy dependence of electron transport in the filter region. On the other hand, although the structure of plasma density originates in particle transport, its two distinct steps emerge from the dominance of collisionality in the source-EEF region and of Bohm diffusion in the EEF-target region. It is argued, and experimental evidence is provided, that a drift-like flute (Rayleigh-Taylor) instability exists in the EEF plasma.

  4. Spectroscopic properties of large-volume virtual Frisch-grid CdMnTe detectors

    NASA Astrophysics Data System (ADS)

    Kim, K. H.; Park, Chansun; Kim, Pilsu; Cho, Shinhaeng; Lee, Jinseo; Hong, T. K.; Hossain, A.; Bolotnikov, A. E.; James, R. B.

    2015-06-01

    CdMnTe (CMT) is a promising alternative material for use as a room-temperature radiation detector. Frisch-grid detectors have a simple configuration and outstanding spectral performance compared with other single-carrier collection techniques. The energy resolution of large-volume virtual Frisch-grid CMT detectors was tested by using several isotopes, such as 57Co, 22Na, 133Ba, and 137Cs, together or separately. Energy resolutions of 6.7% and 2.1% were obtained for 122-keV 57Co and 662-keV 137Cs gamma rays, respectively, without using any additional signal processing techniques. Also, a 12-mm-thick CMT detector detected the 511-keV and 1.277-MeV gamma peaks of 22Na with full width at half maximum (FWHM) values of 2.7% and 1.5%, respectively. In addition, multiple low- and high-energy gamma peaks of 133Ba were well separated. The mobility-lifetime product calculated from the shift of the 662-keV photopeak vs. bias by using Hecht's equation was 7 × 10-3 cm2/V. These results show the possibility of using CMT detectors in response to various requirements for gamma-ray detection at room temperature.
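
    The mobility-lifetime extraction can be sketched from the single-carrier Hecht relation for a planar detector, CCE(V) = (μτV/d²)(1 − exp(−d²/μτV)). The bias and CCE values below are hypothetical stand-ins chosen for illustration, not the paper's measured data:

```python
import math

def hecht_cce(mu_tau, bias_v, thickness_cm):
    """Single-carrier Hecht relation: charge-collection efficiency (CCE) of a
    planar detector of thickness d at bias V, for mobility-lifetime product mu*tau."""
    x = mu_tau * bias_v / thickness_cm**2   # dimensionless drift-length ratio
    return x * (1.0 - math.exp(-1.0 / x))

def fit_mu_tau(cce, bias_v, thickness_cm, lo=1e-6, hi=1e-1, iters=100):
    """Invert the Hecht relation for mu*tau by bisection in log space
    (the CCE is monotonically increasing in mu*tau)."""
    for _ in range(iters):
        mid = math.sqrt(lo * hi)
        if hecht_cce(mid, bias_v, thickness_cm) < cce:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

# Hypothetical measurement: a 1.2-cm-thick detector at 2000 V whose photopeak
# analysis yields CCE = 0.95; the fitted mu*tau comes out near 7e-3 cm^2/V.
mu_tau = fit_mu_tau(0.95, 2000.0, 1.2)
```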

  5. Configuration Analysis of the ERS Points in Large-Volume Metrology System

    PubMed Central

    Jin, Zhangjun; Yu, Cijun; Li, Jiangxiong; Ke, Yinglin

    2015-01-01

    In aircraft assembly, multiple laser trackers are used simultaneously to measure large-scale aircraft components. To combine the independent measurements, the transformation matrices between the laser trackers’ coordinate systems and the assembly coordinate system are calculated by measuring the enhanced referring system (ERS) points. This article aims to understand how the configuration of the ERS points affects the transformation matrix errors, and then to optimize the deployment of the ERS points to reduce those errors. To this end, an explicit model is derived to estimate the transformation matrix errors. The estimation model is verified by an experiment conducted on the factory floor. Based on the proposed model, a group of sensitivity coefficients is derived to evaluate the quality of a configuration of the ERS points, and several typical configurations of the ERS points are then analyzed in detail with the sensitivity coefficients. Finally, general guidance is established to instruct the deployment of the ERS points in terms of the layout, the volume size and the number of the ERS points, as well as the position and orientation of the assembly coordinate system. PMID:26402685
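
    At its core, the tracker-to-assembly transformation determined from the ERS points is a rigid point-set registration. A common least-squares solution (a generic SVD/Kabsch sketch, not necessarily the article's exact estimator) is:

```python
import numpy as np

def rigid_fit(points_tracker, points_assembly):
    """Least-squares rigid transform (R, t) mapping tracker-frame points onto
    assembly-frame points via the SVD (Kabsch) solution."""
    p = np.asarray(points_tracker, float)
    q = np.asarray(points_assembly, float)
    pc, qc = p.mean(axis=0), q.mean(axis=0)
    h = (p - pc).T @ (q - qc)                  # cross-covariance of centred sets
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))     # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = qc - r @ pc
    return r, t

# Synthetic check: recover a known rotation/translation from six "ERS points".
rng = np.random.default_rng(0)
pts = rng.uniform(-5.0, 5.0, size=(6, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
R, t = rigid_fit(pts, pts @ R_true.T + t_true)
```

    With noise-free correspondences the fit is exact; the article's sensitivity coefficients describe how measurement noise on such points propagates into R and t.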

  6. A uniform laminar air plasma plume with large volume excited by an alternating current voltage

    NASA Astrophysics Data System (ADS)

    Li, Xuechen; Bao, Wenting; Chu, Jingdi; Zhang, Panpan; Jia, Pengying

    2015-12-01

    Using a plasma jet composed of two needle electrodes, a laminar plasma plume with large volume is generated in air through alternating current voltage excitation. Based on high-speed photography, a train of filaments is observed to propagate periodically away from their birthplace along the gas flow. The laminar plume is in fact a temporal superposition of the arched filament train. The filament consists of a negative glow near the real-time cathode, a positive column near the real-time anode, and a Faraday dark space between them. It has been found that the propagation velocity of the filament increases with increasing gas flow rate. Furthermore, the filament lifetime tends to follow a normal (Gaussian) distribution. The most probable lifetime decreases with increasing gas flow rate or decreasing averaged peak voltage. Results also indicate that the real-time peak current decreases and the real-time peak voltage increases as the filament propagates along the gas flow. The voltage-current curve indicates that, in every discharge cycle, the filament evolves from a Townsend discharge to a glow discharge, and then the discharge quenches. Furthermore, plasma parameters such as the electron density, the vibrational temperature and the gas temperature are investigated based on the optical spectrum emitted from the laminar plume.

  7. Large volume flow electroporation of mRNA: clinical scale process.

    PubMed

    Li, Linhong; Allen, Cornell; Shivakumar, Rama; Peshwa, Madhusudan V

    2013-01-01

    Genetic modification for enhancing cellular function has been continuously pursued as a way to fight disease. Messenger RNA (mRNA) transfection is a promising solution for modifying hematopoietic and immune cells for therapeutic purposes. We have developed a flow electroporation-based system for large-volume electroporation of cells with various molecules, including mRNA. This allows robust and scalable mRNA transfection of primary cells of different origins. Here we describe transfection of chimeric antigen receptor (CAR) mRNA into NK cells to modulate their ability to target tumor cells. High levels of CAR expression in NK cells can be maintained for 3-7 days post transfection. CD19-specific CAR mRNA-transfected NK cells demonstrate targeted lysis of CD19-expressing OP-1 tumor cells, primary B-CLL tumor cells, and autologous CD19+ B cells in in vitro assays with enhanced potency: >80% lysis at an effector-target ratio of 1:1. This enables manufacture of CAR mRNA-transfected NK cells for clinical delivery in compliance with current good manufacturing practices (cGMP) and regulatory requirements. PMID:23296932

  8. Configuration Analysis of the ERS Points in Large-Volume Metrology System.

    PubMed

    Jin, Zhangjun; Yu, Cijun; Li, Jiangxiong; Ke, Yinglin

    2015-01-01

    In aircraft assembly, multiple laser trackers are used simultaneously to measure large-scale aircraft components. To combine the independent measurements, the transformation matrices between the laser trackers' coordinate systems and the assembly coordinate system are calculated by measuring the enhanced referring system (ERS) points. This article aims to understand how the configuration of the ERS points affects the transformation matrix errors, and then to optimize the deployment of the ERS points to reduce those errors. To this end, an explicit model is derived to estimate the transformation matrix errors. The estimation model is verified by an experiment conducted on the factory floor. Based on the proposed model, a group of sensitivity coefficients is derived to evaluate the quality of a configuration of the ERS points, and several typical configurations of the ERS points are then analyzed in detail with the sensitivity coefficients. Finally, general guidance is established to instruct the deployment of the ERS points in terms of the layout, the volume size and the number of the ERS points, as well as the position and orientation of the assembly coordinate system. PMID:26402685

  9. Research of Making Large Volume Atmospheric Pressure Plasma by Parallel MCS Discharge

    NASA Astrophysics Data System (ADS)

    Nagano, Kazumi; Kon, Akira; Yamazaki, Yuki; Maeyama, Mitsuaki

    We study parallel microhollow cathode sustained (MCS) discharge plasmas, generated by operating multiple microhollow cathode discharge (MHCD) plasmas in parallel, to produce a large-volume atmospheric-pressure plasma. We propose a cylindrical parallel MCS discharge geometry, expecting electron supply from the MHCD plasmas and the electron-trapping effect of the logarithmic potential. Several MHCD electrodes are placed on a cylindrical surface of 19 mm radius, and a thin wire is placed along the cylinder's central axis. The MHCD electrodes are driven by a repetitive pulse voltage, and the central wire anode is held at a DC voltage. So far, 8 parallel MCS discharge plasmas have been generated at 50 kPa. In this paper, the relationship between the axial spacing of the MHCD electrodes and the number of parallel discharge electrodes, and the conditions for increasing the power supplied to the MCS discharge, were studied. With the axial spacing of the MHCD electrodes set to 6 mm, a 16-fold parallel cylindrical MCS discharge was generated at atmospheric pressure. The power supplied to the MCS discharge could be increased without decreasing the number of parallel discharge electrodes, by reducing the current-limiting resistor and shortening the MHCD pulse width.

  10. Large eddy simulations of turbulent flows on graphics processing units: Application to film-cooling flows

    NASA Astrophysics Data System (ADS)

    Shinn, Aaron F.

    Computational Fluid Dynamics (CFD) simulations can be very computationally expensive, especially for Large Eddy Simulations (LES) and Direct Numerical Simulations (DNS) of turbulent flows. In LES the large, energy-containing eddies are resolved by the computational mesh, but the smaller (sub-grid) scales are modeled. In DNS, all scales of turbulence are resolved, including the smallest dissipative (Kolmogorov) scales. Clusters of CPUs have been the standard approach for such simulations, but an emerging approach is the use of Graphics Processing Units (GPUs), which deliver impressive computing performance compared to CPUs. Recently there has been great interest in the scientific computing community in using GPUs for general-purpose computation (such as the numerical solution of PDEs) rather than graphics rendering. To explore the use of GPUs for CFD simulations, an incompressible Navier-Stokes solver was developed for a GPU. This solver is capable of simulating unsteady laminar flows or performing a LES or DNS of turbulent flows. The Navier-Stokes equations are solved via a fractional-step method and are spatially discretized using the finite volume method on a Cartesian mesh. An immersed boundary method based on a ghost-cell treatment was developed to handle flow past complex geometries. The implementation of these numerical methods had to suit the architecture of the GPU, which is designed for massive multithreading. The details of this implementation are described, along with strategies for performance optimization. Validation of the GPU-based solver was performed for fundamental benchmark problems, and a performance assessment indicated that the solver was over an order of magnitude faster than a CPU. The GPU-based Navier-Stokes solver was used to study film-cooling flows via Large Eddy Simulation. In modern gas turbine engines, the film-cooling method is used to protect turbine blades from hot combustion gases. Therefore, understanding the physics of

  11. A simple method for the production of large volume 3D macroporous hydrogels for advanced biotechnological, medical and environmental applications

    PubMed Central

    Savina, Irina N.; Ingavle, Ganesh C.; Cundy, Andrew B.; Mikhalovsky, Sergey V.

    2016-01-01

    The development of bulk, three-dimensional (3D), macroporous polymers with high permeability, large surface area and large volume is highly desirable for a range of applications in the biomedical, biotechnological and environmental areas. The experimental techniques currently used are limited to the production of small size and volume cryogel material. In this work we propose a novel, versatile, simple and reproducible method for the synthesis of large volume porous polymer hydrogels by cryogelation. By controlling the freezing process of the reagent/polymer solution, large-scale 3D macroporous gels with wide interconnected pores (up to 200 μm in diameter) and large accessible surface area have been synthesized. For the first time, macroporous gels (of up to 400 ml bulk volume) with controlled porous structure were manufactured, with potential for scale up to much larger gel dimensions. This method can be used for production of novel 3D multi-component macroporous composite materials with a uniform distribution of embedded particles. The proposed method provides better control of freezing conditions and thus overcomes existing drawbacks limiting production of large gel-based devices and matrices. The proposed method could serve as a new design concept for functional 3D macroporous gels and composites preparation for biomedical, biotechnological and environmental applications. PMID:26883390

  12. A simple method for the production of large volume 3D macroporous hydrogels for advanced biotechnological, medical and environmental applications

    NASA Astrophysics Data System (ADS)

    Savina, Irina N.; Ingavle, Ganesh C.; Cundy, Andrew B.; Mikhalovsky, Sergey V.

    2016-02-01

    The development of bulk, three-dimensional (3D), macroporous polymers with high permeability, large surface area and large volume is highly desirable for a range of applications in the biomedical, biotechnological and environmental areas. The experimental techniques currently used are limited to the production of small size and volume cryogel material. In this work we propose a novel, versatile, simple and reproducible method for the synthesis of large volume porous polymer hydrogels by cryogelation. By controlling the freezing process of the reagent/polymer solution, large-scale 3D macroporous gels with wide interconnected pores (up to 200 μm in diameter) and large accessible surface area have been synthesized. For the first time, macroporous gels (of up to 400 ml bulk volume) with controlled porous structure were manufactured, with potential for scale up to much larger gel dimensions. This method can be used for production of novel 3D multi-component macroporous composite materials with a uniform distribution of embedded particles. The proposed method provides better control of freezing conditions and thus overcomes existing drawbacks limiting production of large gel-based devices and matrices. The proposed method could serve as a new design concept for functional 3D macroporous gels and composites preparation for biomedical, biotechnological and environmental applications.

  13. Substructure synthesis method for simulating large molecular complexes

    PubMed Central

    Ming, Dengming; Kong, Yifei; Wu, Yinghao; Ma, Jianpeng

    2003-01-01

    This paper reports a computational method for describing the conformational flexibility of very large biomolecular complexes using a reduced number of degrees of freedom. It is called the substructure synthesis method, and the basic concept is to treat the motions of a given structure as a collection of those of an assemblage of substructures. The choice of substructures is arbitrary and sometimes quite natural, such as domains, subunits, or even large segments of biomolecular complexes. To start, a group of low-frequency substructure modes is determined, for instance by normal mode analysis, to represent the motions of the substructure. Next, a desired number of substructures are joined together by a set of constraints to enforce geometric compatibility at the interface of adjacent substructures, and the modes for the assembled structure can then be synthesized from the substructure modes by applying the Rayleigh–Ritz principle. Such a procedure is computationally much more desirable than solving the full eigenvalue problem for the whole assembled structure. Furthermore, to show the applicability to biomolecular complexes, the method is used to study F-actin, a large filamentous molecular complex involved in many cellular functions. The results demonstrate that the method is capable of studying the motions of very large molecular complexes that are otherwise completely beyond the reach of any conventional methods. PMID:12518058
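
    A toy illustration of the Rayleigh–Ritz synthesis step: for a uniform spring-mass chain split into two substructures, a handful of free-interface substructure modes reproduces the lowest global mode. The chain stands in for the biomolecular complex; this is a sketch of the reduction principle, not the paper's F-actin application:

```python
import numpy as np

def chain_stiffness(n, free_end=False):
    """Stiffness matrix of a unit spring-mass chain fixed at the left wall and
    either fixed (default) or free at the right end (unit masses, M = I)."""
    k = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    if free_end:
        k[-1, -1] = 1.0
    return k

n_sub, n_modes = 10, 3
K_full = chain_stiffness(2 * n_sub)            # 20-DOF fixed-fixed chain
K_sub = chain_stiffness(n_sub, free_end=True)  # substructure, free interface

# Low-frequency free-interface modes of one substructure (fixed at its wall).
w, phi = np.linalg.eigh(K_sub)
modes = phi[:, :n_modes]

# Ritz basis: left-substructure modes zero-padded on the right DOFs, and the
# mirrored modes of the (symmetric) right substructure padded on the left.
basis = np.zeros((2 * n_sub, 2 * n_modes))
basis[:n_sub, :n_modes] = modes
basis[n_sub:, n_modes:] = modes[::-1, :]

# Rayleigh-Ritz synthesis: project K onto the basis.  The reduced mass matrix
# is the identity here because the masses are unit and the columns orthonormal.
lam_reduced = np.linalg.eigvalsh(basis.T @ K_full @ basis)
lam_full = np.linalg.eigvalsh(K_full)
```

    Six Ritz coordinates replace twenty physical degrees of freedom, yet the lowest synthesized eigenvalue matches the full model; Rayleigh–Ritz guarantees the reduced eigenvalues bound the true ones from above.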

  14. Large Eddy Simulation of Air Escape through a Hospital Isolation Room Single Hinged Doorway—Validation by Using Tracer Gases and Simulated Smoke Videos

    PubMed Central

    Saarinen, Pekka E.; Kalliomäki, Petri; Tang, Julian W.; Koskela, Hannu

    2015-01-01

    The use of hospital isolation rooms has increased considerably in recent years due to the worldwide outbreaks of various emerging infectious diseases. However, the passage of staff through isolation room doors is suspected to be a cause of containment failure, especially in the case of hinged doors. It is therefore important to minimize inadvertent contaminant airflow leakage across the doorway during such movements. To this end, it is essential to investigate the behavior of such airflows, especially the overall volume of air that can potentially leak across the doorway during door-opening and human passage. Experimental measurements using full-scale mock-ups are expensive and labour intensive. A useful alternative approach is the application of Computational Fluid Dynamics (CFD) modelling using a time-resolved Large Eddy Simulation (LES) method. In this study, simulated airflow patterns are qualitatively compared with experimental ones, and the simulated total volume of air that escapes is compared with the experimentally measured volume. It is shown that the LES method is able to reproduce, at room scale, the complex transient airflows generated during door-opening/closing motions and the passage of a human figure through the doorway between two rooms. This was a basic test case performed in an isothermal environment without ventilation. However, the advantage of the CFD approach is that adding ventilation airflows and a temperature difference between the rooms is, in principle, a relatively simple task. A standard method to observe flow structures is dosing smoke into the flow. In this paper we introduce graphical methods to simulate smoke experiments by LES, making it very easy to compare the CFD simulation to the experiments. The results demonstrate that transient CFD simulation is a promising tool for comparing different isolation room scenarios without the need to construct full-scale experimental models. The CFD model is able to reproduce

  15. Computationally Efficient Modeling and Simulation of Large Scale Systems

    NASA Technical Reports Server (NTRS)

    Jain, Jitesh (Inventor); Cauley, Stephen F (Inventor); Li, Hong (Inventor); Koh, Cheng-Kok (Inventor); Balakrishnan, Vankataramanan (Inventor)

    2014-01-01

    A system for simulating operation of a VLSI interconnect structure having capacitive and inductive coupling between nodes thereof, including a processor, and a memory, the processor configured to perform obtaining a matrix X and a matrix Y containing different combinations of passive circuit element values for the interconnect structure, the element values for each matrix including inductance L and inverse capacitance P, obtaining an adjacency matrix A associated with the interconnect structure, storing the matrices X, Y, and A in the memory, and performing numerical integration to solve first and second equations.
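
    As an illustration of the kind of numerical integration such a simulator performs, here is a backward-Euler transient solve for a small RC ladder. This is a simplified stand-in: the patented formulation works with inductance L and inverse-capacitance P matrices and an adjacency matrix A, which are omitted here:

```python
import numpy as np

def rc_ladder_step(v, g_mat, c_mat, b, u, dt):
    """One backward-Euler step of C dv/dt = -G v + b*u for an RC interconnect
    model: solve (C/dt + G) v_next = C v/dt + b*u."""
    a = c_mat / dt + g_mat
    rhs = c_mat @ v / dt + b * u
    return np.linalg.solve(a, rhs)

# Three-node RC ladder: 1-ohm segment resistances, 1-F node capacitances,
# driven by a 1-V step at the input through the first resistor.
G = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 1.0]])
C = np.eye(3)
b = np.array([1.0, 0.0, 0.0])      # source injects through the input resistor
v = np.zeros(3)
for _ in range(2000):
    v = rc_ladder_step(v, G, C, b, 1.0, 0.05)
```

    After many steps every node settles to the 1-V source level, since no DC current flows once the capacitors are charged.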

  16. “Finite” non-Gaussianities and tensor-scalar ratio in large volume Swiss-cheese compactifications

    NASA Astrophysics Data System (ADS)

    Misra, Aalok; Shukla, Pramod

    2009-03-01

    Developing on the ideas of (Section 4 of) [A. Misra, P. Shukla, Moduli stabilization, large-volume dS minimum without anti-D3-branes, (non-)supersymmetric black hole attractors and two-parameter Swiss cheese Calabi-Yau's, Nucl. Phys. B 799 (2008) 165-198, arXiv: 0707.0105] and [A. Misra, P. Shukla, Large volume axionic Swiss-cheese inflation, Nucl. Phys. B 800 (2008) 384-400, arXiv: 0712.1260 [hep-th

  17. Minimum-dissipation models for large-eddy simulation

    NASA Astrophysics Data System (ADS)

    Bae, Hyunji Jane; Rozema, Wybe; Moin, Parviz; Verstappen, Roel

    2015-11-01

    Minimum-dissipation eddy-viscosity models are a class of subgrid-scale models for LES that give the minimum eddy dissipation required to dissipate the energy of the subgrid scales. The QR minimum-dissipation model [Verstappen, J. Sci. Comp., 2011] gives good results in simulations of decaying grid turbulence carried out on an isotropic grid. In particular, due to the minimum-dissipation property of the model, the predicted energy spectra are in very good agreement with the DNS results up to the cut-off wave number, unlike other methods. However, its results on anisotropic grids are often unsatisfactory because the model does not properly incorporate the grid anisotropy. We propose the anisotropic minimum-dissipation (AMD) model [Rozema et al., submitted for publication, 2015], a minimum-dissipation model that generalizes the QR model to anisotropic grids. The AMD model is more cost-effective than the dynamic Smagorinsky model, appropriately switches off in laminar and transitional flow on anisotropic grids, and is consistent with the theoretical subgrid tensor. Experiments show that the AMD model is as accurate as the dynamic Smagorinsky model and the Vreman model in simulations of isotropic turbulence, a temporal mixing layer, and turbulent channel flow. H. J. Bae acknowledges support from SGF. W. Rozema and R. Verstappen acknowledge sponsoring by NWO for the use of supercomputing facilities and financial support to attend the CTR SP 2014.
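
    The AMD eddy viscosity can be evaluated pointwise from the velocity-gradient tensor and the direction-wise grid spacings. The sketch below follows the published form of the model as best understood here (scaled gradients in the numerator, the gradient invariant in the denominator); the constant c is an assumed placeholder, not a calibrated value:

```python
import numpy as np

def amd_eddy_viscosity(grad_u, spacings, c=0.3):
    """Anisotropic minimum-dissipation (AMD) eddy viscosity at a point, from
    grad_u[i, j] = du_i/dx_j and the (possibly anisotropic) grid spacings.
    Illustrative form after Rozema et al. (2015); c is a placeholder constant."""
    g = np.asarray(grad_u, float)
    d = c * np.asarray(spacings, float)        # direction-wise length scales
    gh = g * d[np.newaxis, :]                  # scaled gradients d_j * du_i/dx_j
    s = 0.5 * (g + g.T)                        # rate-of-strain tensor
    num = -np.einsum("ik,jk,ij->", gh, gh, s)  # -(d_k du_i/dx_k)(d_k du_j/dx_k) S_ij
    den = np.einsum("ij,ij->", g, g)           # gradient-invariant normalisation
    return max(0.0, num) / den if den > 0.0 else 0.0

# Solid-body rotation has zero strain, so the model switches off entirely;
# a divergence-free compressive strain yields a positive eddy viscosity.
nu_rotation = amd_eddy_viscosity([[0.0, -1.0, 0.0],
                                  [1.0, 0.0, 0.0],
                                  [0.0, 0.0, 0.0]], [1.0, 1.0, 1.0])
nu_strain = amd_eddy_viscosity([[-1.0, 0.0, 0.0],
                                [0.0, 0.5, 0.0],
                                [0.0, 0.0, 0.5]], [1.0, 1.0, 1.0])
```

    The max(0, ·) clipping is what lets the model vanish in laminar and purely rotational flow, which is the switching-off behaviour the abstract highlights.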

  18. The Jefferson Project: Large-eddy simulations of a watershed

    NASA Astrophysics Data System (ADS)

    Watson, C.; Cipriani, J.; Praino, A. P.; Treinish, L. A.; Tewari, M.; Kolar, H.

    2015-12-01

    The Jefferson Project is a new endeavor at Lake George, NY by IBM Research, Rensselaer Polytechnic Institute (RPI) and The Fund for Lake George. Lake George is an oligotrophic lake - one of low nutrients - and a 30-year study recently published by RPI's Darrin Fresh Water Institute highlighted that the renowned water quality is declining from the injection of salt (from runoff), algae, and invasive species. In response, the Jefferson Project is developing a system to provide extensive data on the relevant physical, chemical and biological parameters that drive ecosystem function. The system will be capable of real-time observations and interactive modeling of the atmosphere, watershed hydrology, lake circulation and food web dynamics. In this presentation, we describe the development of the operational forecast system used to simulate the atmosphere in the model stack, Deep Thunder™ (a configuration of the ARW-WRF model). The model performs 48-hr forecasts twice daily in a nested configuration, and in this study we present results from ongoing tests where the innermost domains are dx = 333 m and 111 m. We discuss the model's ability to simulate boundary layer processes, lake surface conditions (an input into the lake model), and precipitation (an input into the hydrology model) during different weather regimes, and the challenges of data assimilation and validation at this scale. We also explore the potential for additional nests over select regions of the watershed to better capture turbulent boundary layer motions.

  19. Large-scale molecular dynamics simulations of Al(111) nanoscratching

    NASA Astrophysics Data System (ADS)

    Jun, Sukky; Lee, Youngmin; Youb Kim, Sung; Im, Seyoung

    2004-09-01

    Molecular dynamics simulations of nanoscratching are performed with emphasis on the correlation between the scratching conditions and the defect mechanism in the substrate. More than six million atoms are described by the embedded atom method (EAM) potential. The scratching process is simulated by high-speed ploughing on the Al(111) surface with an atomic force microscope (AFM) tip that is geometrically modelled to be of a smoothed conical shape. A repulsive model potential is employed to represent the interaction between the AFM tip and the Al atoms. Through the visualization technique of atomic coordination number, dislocations and vacancies are identified as the two major defect types prevailing under nanoscratching. Their structures and movements are investigated for understanding the mechanisms of defect generation and evolution under various scratching conditions. The glide patterns of Shockley partial dislocation loops are obviously dependent upon the scratching directions in conjunction with the slip system of face-centred cubic (fcc) single crystals. It is shown that the shape of the AFM tip directly influences the facet formation on the scratched groove. The penetration depth into the substrate during scratching is further verified to affect both surface pile-up and residual defect generations that are important in assessing the change of material properties after scratching.
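
    The coordination-number analysis used to flag defects can be reproduced at toy scale: in a periodic fcc crystal every atom has 12 nearest neighbours, so after removing one atom its 12 former neighbours stand out with a coordination number of 11. A minimal NumPy sketch (unit lattice constant, O(N²) distance computation — fine for a toy cell, not for six million atoms):

```python
import numpy as np

def fcc_positions(nc, a=1.0):
    """Atom positions of an nc x nc x nc fcc supercell (conventional cells)."""
    base = np.array([[0.0, 0.0, 0.0], [0.0, 0.5, 0.5],
                     [0.5, 0.0, 0.5], [0.5, 0.5, 0.0]])
    cells = np.array([[i, j, k] for i in range(nc)
                      for j in range(nc) for k in range(nc)], float)
    return a * (cells[:, None, :] + base[None, :, :]).reshape(-1, 3)

def coordination_numbers(pos, box, cutoff):
    """Per-atom neighbour count within cutoff, using the minimum-image
    convention for periodic boundaries (valid while cutoff < box/2)."""
    delta = pos[:, None, :] - pos[None, :, :]
    delta -= box * np.round(delta / box)        # minimum image
    dist = np.linalg.norm(delta, axis=-1)
    return (dist < cutoff).sum(axis=1) - 1      # exclude self-distance

a, nc = 1.0, 3
pos = fcc_positions(nc, a)
pos = np.delete(pos, 0, axis=0)                 # knock out one atom: a vacancy
cutoff = 0.5 * (a / np.sqrt(2.0) + a)           # between 1st and 2nd shells
cn = coordination_numbers(pos, nc * a, cutoff)
defective = np.count_nonzero(cn < 12)           # the vacancy's 12 neighbours
```

    Production MD codes use neighbour lists instead of the full distance matrix, but the defect-visualization criterion is the same deviation from the bulk coordination number of 12.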

  20. Scalable quantum mechanical simulation of large polymer systems

    SciTech Connect

    Goedecker, S.; Hoisie, A.; Kress, J.; Lubeck, O.; Wasserman, H.

    1997-08-01

    We describe a program for quantum mechanical calculations of very large hydrocarbon polymer systems. It is based on a new algorithmic approach to the quantum mechanical tight-binding equations that naturally leads to a very efficient parallel implementation and that scales linearly with the number of atoms. We obtain both very high single-node performance and a significant parallel speedup on the SGI Origin 2000 parallel computer.

  1. Tracking reactive pollutants in large groundwater systems by particle-based simulations

    NASA Astrophysics Data System (ADS)

    Kalbacher, T.; Sun, Y.; He, W.; Jang, E.; Delfs, J.; Shao, H.; Park, C.; Kolditz, O.

    2013-12-01

    Worldwide, great amounts of human and financial resources are being invested to protect and secure clean water resources. Especially in arid and semi-arid regions, civilization depends on the availability of freshwater from the underlying aquifer systems, where water quality and quantity are often dramatically deteriorating. The main reasons for the deterioration of water quality are extensive fertilizer use in agriculture and waste water from cities and various industries. It may be assumed that climate and demographic changes will add further stress to this situation in the future. One way to assess water quality is to model the coupled groundwater and chemical system, e.g. to assess the impact of possible contaminant precipitation, absorption and migration in subsurface media. Currently, simulating such scenarios at large scales is a challenging task due to the extreme computational load, numerical stability issues, scale-dependencies, and spatially and temporally sparse or missing data, which can lead, for example, to inappropriate model simplifications and additional uncertainties in the results. The simulation of advective-dispersive mass transport is usually solved by standard finite difference, finite element or finite volume methods. Particle tracking is an alternative method, commonly used e.g. to delineate contaminant travel times, with the advantage of being numerically more stable and computationally less expensive. Since particle tracking is used to evaluate groundwater residence times, it seems natural and straightforward to include reactive processes to track geochemical changes as well. The main focus of the study is the evaluation of reactive transport processes at large scales. Therefore, a number of new methods have been developed and implemented into the OpenGeoSys project, which is a scientific, FEM-based, open source code for numerical simulation of thermo-hydro-mechanical-chemical processes in porous and fractured media (www
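
    The particle-tracking idea is easy to sketch in one dimension: advection moves each particle by v·Δt and dispersion adds a Gaussian jump of standard deviation sqrt(2·D·Δt), so the particle ensemble reproduces the advection-dispersion solution. Parameter values below are illustrative; in a reactive scheme such as the one described, chemistry would be applied per particle between transport steps:

```python
import numpy as np

def random_walk_transport(n_particles, velocity, dispersion, dt, n_steps, seed=0):
    """1-D random-walk particle tracking for advection-dispersion: each step
    shifts a particle by v*dt and adds a Gaussian jump of std sqrt(2*D*dt).
    A minimal sketch; reactive exchange would go between transport steps."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_particles)
    for _ in range(n_steps):
        x += velocity * dt + rng.normal(0.0, np.sqrt(2.0 * dispersion * dt),
                                        size=n_particles)
    return x

# Illustrative plume after 100 days: v = 0.5 m/day, D = 0.1 m^2/day.
x = random_walk_transport(20000, 0.5, 0.1, dt=1.0, n_steps=100)
```

    The ensemble mean drifts to v·t = 50 m and the variance grows to 2·D·t = 20 m², matching the analytical Gaussian plume, which is the numerically stable behaviour the abstract credits the method with.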

  2. Large-eddy simulation of combustion systems with convective heat-loss

    NASA Astrophysics Data System (ADS)

    Shunn, Lee

    Computer simulations have the potential to viably address the design challenges of modern combustion applications, provided that adequate models for the prediction of multiphysics processes can be developed. Heat transfer has particular significance in modeling because it directly affects thermal efficiencies and pollutant formation in combustion systems. Convective heat transfer from flame-wall interaction has received increased attention in aeronautical propulsion and power-generation applications, where modern designs have trended towards more compact combustors with higher surface-to-volume ratios, and in diesel engines, where enclosed volumes and cool walls provide ample conditions for thermal quenching. As intense flame-wall interactions can induce extremely large heat fluxes, their inclusion is important in computational models used to predict performance and design cooling systems. In the present work, a flamelet method is proposed for modeling turbulence/chemistry interactions in large-eddy simulations (LES) of non-premixed combustion systems with convective heat losses. The new method is based on the flamelet/progress variable approach of Pierce & Moin (J. Fluid Mech. 2004, 504:73-97) and extends that work to include the effects of thermal losses on the combustion chemistry. In the new model, chemical-state databases are constructed by solving one-dimensional diffusion/reaction equations that have been constrained by scaling the enthalpy of the system between the adiabatic state and a thermally-quenched reference state. The solutions are parameterized and tabulated as a function of the mapping variables: mixture fraction, reaction progress variable, and normalized enthalpy. The new model is applied to LES of non-premixed methane-air combustion in a coaxial jet with isothermal wall conditions to describe heat transfer to the confinement. The resulting velocity, species concentration, and temperature fields are compared to experimental measurements and to

  3. Patient-specific coronary artery blood flow simulation using myocardial volume partitioning

    NASA Astrophysics Data System (ADS)

    Kim, Kyung Hwan; Kang, Dongwoo; Kang, Nahyup; Kim, Ji-Yeon; Lee, Hyong-Euk; Kim, James D. K.

    2013-03-01

    Using computational simulation, we can analyze cardiovascular disease in non-invasive and quantitative manners. More specifically, computational modeling and simulation technology has enabled us to analyze functional aspects such as blood flow, as well as anatomical aspects such as stenosis, from medical images without invasive measurements. Note that the simplest way to perform blood flow simulation is to apply patient-specific coronary anatomy with other average-valued properties; in this case, however, such conditions cannot fully reflect the accurate physiological properties of patients. To resolve this limitation, we present a new patient-specific coronary blood flow simulation method based on myocardial volume partitioning that considers artery/myocardium structural correspondence. We exploit the fact that blood supply is closely related to the mass of the myocardial segment corresponding to each artery. We therefore set up the simulation conditions to incorporate as many patient-specific features as possible from the medical image: first, we segmented the coronary arteries and myocardium separately from cardiac CT; then the myocardium was partitioned into multiple regions based on the coronary vasculature. The myocardial mass and required blood mass for each artery were estimated by converting the myocardial volume fraction. Finally, the required blood mass was used as the boundary condition for each artery outlet, with a given average aortic blood flow rate and pressure. To show the effectiveness of the proposed method, fractional flow reserve (FFR) from simulation using CT images was compared with invasive FFR measurements from real patient data, and 77% accuracy was obtained.
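    The mass-based boundary-condition idea can be sketched in a few lines: each coronary outlet receives a share of the total coronary inflow proportional to the myocardial mass it perfuses. The artery names, masses, and flow values below are hypothetical illustrations, not patient data.

```python
# Split a total coronary inflow across outlets in proportion to the mass of
# the myocardial region each artery perfuses (the partitioning described in
# the abstract). All numbers are made up for illustration.

def outlet_flows(total_flow, region_masses):
    """Return per-artery outlet flow proportional to perfused mass."""
    total_mass = sum(region_masses.values())
    return {artery: total_flow * m / total_mass
            for artery, m in region_masses.items()}

# Hypothetical partition of the myocardium (grams) among three arteries.
masses = {"LAD": 60.0, "LCx": 45.0, "RCA": 45.0}
flows = outlet_flows(4.0, masses)        # e.g. 4 mL/s total coronary flow

# FFR compares mean pressure distal to a stenosis with aortic pressure.
def ffr(p_distal, p_aortic):
    return p_distal / p_aortic
```

    In the simulation these outlet flows would be imposed as boundary conditions, and FFR is then read off from the computed pressure field.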

  4. Large Eddy Simulations (LES) and Direct Numerical Simulations (DNS) for the computational analyses of high speed reacting flows

    NASA Technical Reports Server (NTRS)

    Givi, Peyman; Madnia, Cyrus K.; Steinberger, C. J.; Frankel, S. H.

    1992-01-01

    The principal objective is to extend the boundaries within which large eddy simulations (LES) and direct numerical simulations (DNS) can be applied in computational analyses of high speed reacting flows. A summary of work accomplished during the last six months is presented.

  5. Large-eddy simulation of supercritical fluid flow and combustion

    NASA Astrophysics Data System (ADS)

    Huo, Hongfa

    The present study focuses on the modeling and simulation of injection, mixing, and combustion of real fluids at supercritical conditions. The objectives of the study are: (1) to establish a unified theoretical framework that can be used to study the turbulent combustion of real fluids; (2) to implement the theoretical framework and conduct numerical studies with the aim of improving the understanding of the flow and combustion dynamics at conditions representative of contemporary liquid-propellant rocket engine operation; (3) to identify the key design parameters and the flow variables which dictate the dynamic characteristics of swirl- and shear-coaxial injectors. The theoretical and numerical framework is validated by simulating the Sandia Flame D. The calculated axial and radial profiles of velocity, temperature, and mass fractions of major species are in reasonably good agreement with the experimental measurements. The conditionally averaged mass fraction profiles agree very well with the experimental results at different axial locations. The validated model is first employed to examine the flow dynamics of liquid oxygen in a pressure swirl injector at supercritical conditions. Emphasis is placed on analyzing the effects of external excitations on the dynamic response of the injector. The high-frequency fluctuations do not significantly affect the flow field as they are dissipated shortly after being introduced into the flow. However, the lower-frequency fluctuations are amplified by the flow. As a result, the film thickness and the spreading angle at the nozzle exit fluctuate strongly for low-frequency external excitations. The combustion of gaseous oxygen/gaseous hydrogen in a high-pressure combustion chamber for a shear coaxial injector is simulated to assess the accuracy and the credibility of the computer program when applied to a sub-scale model of a combustor. The predicted heat flux profile is compared with the experimental and numerical studies.

  6. Computationally efficient modeling and simulation of large scale systems

    NASA Technical Reports Server (NTRS)

    Jain, Jitesh (Inventor); Cauley, Stephen F. (Inventor); Li, Hong (Inventor); Koh, Cheng-Kok (Inventor); Balakrishnan, Venkataramanan (Inventor)

    2012-01-01

    A method of simulating operation of a VLSI interconnect structure having capacitive and inductive coupling between nodes thereof. A matrix X and a matrix Y containing different combinations of passive circuit element values for the interconnect structure are obtained, where the element values for each matrix include inductance L and inverse capacitance P. An adjacency matrix A associated with the interconnect structure is obtained. Numerical integration is used to solve first and second equations, each including as a factor the product of the inverse matrix X^-1 and at least one other matrix, with the first equation including X^-1Y, X^-1A, and X^-1P, and the second equation including X^-1A and X^-1P.
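    The integration pattern described here (repeatedly applying the action of X^-1 during time stepping) is usually realized by solving a linear system each step rather than forming the inverse. The 2x2 matrices below are toy stand-ins for the huge, sparse interconnect matrices; the scheme and values are illustrative assumptions, not the patented method.

```python
# Hedged sketch: advance a semi-discrete system X dx/dt = Y x with backward
# Euler, (X - dt*Y) x_new = X x_old, applying X^-1 implicitly via a solve.

def solve2(A, b):
    """Solve a 2x2 system A x = b by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

def backward_euler(X, Y, x, dt, steps):
    """Time-march X dx/dt = Y x; the step matrix is factored once and reused."""
    M = [[X[i][j] - dt * Y[i][j] for j in range(2)] for i in range(2)]
    for _ in range(steps):
        rhs = [X[i][0] * x[0] + X[i][1] * x[1] for i in range(2)]
        x = solve2(M, rhs)
    return x

X = [[2.0, 0.0], [0.0, 2.0]]      # toy "inductance-like" matrix
Y = [[-1.0, 0.0], [0.0, -1.0]]    # toy dissipative coupling
x = backward_euler(X, Y, [1.0, 1.0], 0.01, 100)   # decays toward zero
```

    For real interconnect sizes the per-step solve would use a sparse factorization computed once, which is where the efficiency claimed in the abstract comes from.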

  7. Computationally efficient modeling and simulation of large scale systems

    NASA Technical Reports Server (NTRS)

    Jain, Jitesh (Inventor); Cauley, Stephen F. (Inventor); Li, Hong (Inventor); Koh, Cheng-Kok (Inventor); Balakrishnan, Venkataramanan (Inventor)

    2010-01-01

    A method of simulating operation of a VLSI interconnect structure having capacitive and inductive coupling between nodes thereof. A matrix X and a matrix Y containing different combinations of passive circuit element values for the interconnect structure are obtained, where the element values for each matrix include inductance L and inverse capacitance P. An adjacency matrix A associated with the interconnect structure is obtained. Numerical integration is used to solve first and second equations, each including as a factor the product of the inverse matrix X^-1 and at least one other matrix, with the first equation including X^-1Y, X^-1A, and X^-1P, and the second equation including X^-1A and X^-1P.

  8. Pyrometry in the Multianvil Press: New approach for temperature measurement in large volume press experiments

    NASA Astrophysics Data System (ADS)

    Sanehira, T.; Wang, Y.; Prakapenka, V.; Rivers, M. L.

    2008-12-01

    Temperature measurement in large volume press experiments has been based on thermocouple emf, which has well known problems: unknown pressure dependence of the emf [e.g., 1], chemical reaction between the thermocouple and other cell materials, deformation-related texture development in the thermocouple wires [2], and so on. Techniques other than thermocouples are therefore required to measure accurate temperatures in large volume press experiments under high pressure. Here we report a new development using pyrometry in the multianvil press, where temperatures are derived on the basis of spectral radiometry. Several high pressure runs were conducted using the 1000 ton press with a DIA module installed at the 13 ID-D GSECARS beamline at the Advanced Photon Source (APS) [3]. The cubic pressure medium, 14 mm in edge length, was made of soft-fired pyrophyllite with a graphite furnace. A moissanite (SiC) single crystal was built inside the pressure medium as a window for the thermal emission signal to pass through. An MgO disk 1.0 mm thick was inserted in the gap between the top of the SiC crystal and the thermocouple hot junction. The bottom of the window crystal was in direct contact with the tip of the anvil, which had a 1.5 mm diameter hole drilled all the way through the anvil axis. An optical fiber was inserted in this hole, and the open end of the fiber was in contact with the SiC crystal. Thermal spectral radiance from the inner cell assembly was obtained via the fiber and recorded by an Ocean Optics HP2000 spectrometer. The system response of the spectrometer was calibrated using a tungsten ribbon lamp (OL550S, Optronic Laboratories, Inc.) as a standard of spectral radiance. The cell assembly was compressed to a target load of 15 tons and the temperature was then increased up to 1573 K. Radiation spectra were mainly obtained above 873 K, with typical integration times of 1 ms or 10 ms. Data were collected during both heating and cooling.
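    The temperature extraction behind spectral radiometry can be illustrated with the Wien approximation: ln(I·λ^5) is linear in 1/λ with slope -c2/T, so a straight-line fit to the spectrum yields T. The synthetic spectrum and fitting routine below are a sketch of the principle, not the authors' calibration pipeline.

```python
# Recover temperature from a thermal emission spectrum in the Wien limit:
# I(lam) ~ (1/lam^5) * exp(-c2 / (lam * T)), so ln(I * lam^5) vs 1/lam is a
# line with slope -c2/T. The "measured" spectrum here is synthetic.
import math

C2 = 1.4388e-2  # second radiation constant h*c/k, in m*K

def wien_intensity(lam, T, scale=1.0):
    return scale / lam**5 * math.exp(-C2 / (lam * T))

def fit_temperature(lams, intensities):
    """Least-squares line through (1/lambda, ln(I*lambda^5)); T = -c2/slope."""
    xs = [1.0 / lam for lam in lams]
    ys = [math.log(i * lam**5) for lam, i in zip(lams, intensities)]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return -C2 / slope

# Synthetic spectrum over 500-900 nm at 1573 K (the run's peak temperature).
lams = [lam * 1e-9 for lam in range(500, 901, 25)]
spectrum = [wien_intensity(lam, 1573.0) for lam in lams]
T_fit = fit_temperature(lams, spectrum)   # recovers ~1573 K
```

    Real data additionally require the instrument-response calibration against the radiance standard mentioned in the abstract before the fit is meaningful.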

  9. Hepatic Arterial Embolization and Chemoembolization in the Management of Patients with Large-Volume Liver Metastases

    SciTech Connect

    Kamat, Paresh P.; Gupta, Sanjay Ensor, Joe E.; Murthy, Ravi; Ahrar, Kamran; Madoff, David C.; Wallace, Michael J.; Hicks, Marshall E.

    2008-03-15

    The purpose of this study was to assess the role of hepatic arterial embolization (HAE) and chemoembolization (HACE) in patients with large-volume liver metastases. Patients with metastatic neuroendocrine tumors, melanomas, or gastrointestinal stromal tumors (GISTs) with >75% liver involvement who underwent HAE or HACE were included in the study. Radiologic response, progression-free survival (PFS), overall survival (OS), and postprocedure complications were assessed. Sixty patients underwent 123 treatment sessions. Of the 48 patients for whom follow-up imaging was available, partial response was seen in 12 (25%) patients, minimal response in 6 (12%), stable disease in 22 (46%), and progressive disease in 8 (17%). Median OS and PFS were 9.3 and 4.9 months, respectively. Treatment resulted in radiologic response or disease stabilization in 82% and symptomatic response in 65% of patients with neuroendocrine tumors. Patients with neuroendocrine tumors had higher response rates (44% vs. 27% and 0%; p = 0.31) and longer PFS (9.2 vs. 2.0 and 2.3 months; p < 0.0001) and OS (17.9 vs. 2.4 and 2.3 months; p < 0.0001) compared to patients with melanomas and GISTs. Major complications occurred in 21 patients after 23 (19%) of the 123 sessions. Nine of the 12 patients who developed major complications resulting in death had additional risk factors: carcinoid heart disease, sepsis, rapidly worsening performance status, or anasarca. In conclusion, in patients with neuroendocrine tumors with >75% liver involvement, HAE/HACE resulted in symptom palliation and radiologic response or disease stabilization in the majority of patients. Patients with hepatic metastases from melanomas and GISTs, however, did not show any appreciable benefit from this procedure. Patients with massive liver tumor burden, who have additional risk factors, should not be subjected to HAE/HACE because of the high risk of procedure-related mortality.

  10. A pomegranate-inspired nanoscale design for large-volume-change lithium battery anodes

    NASA Astrophysics Data System (ADS)

    Liu, Nian; Lu, Zhenda; Zhao, Jie; McDowell, Matthew T.; Lee, Hyun-Wook; Zhao, Wenting; Cui, Yi

    2014-03-01

    Silicon is an attractive material for anodes in energy storage devices, because it has ten times the theoretical capacity of its state-of-the-art carbonaceous counterpart. Silicon anodes can be used both in traditional lithium-ion batteries and in more recent Li-O2 and Li-S batteries as a replacement for the dendrite-forming lithium metal anodes. The main challenges associated with silicon anodes are structural degradation and instability of the solid-electrolyte interphase caused by the large volume change (~300%) during cycling, the occurrence of side reactions with the electrolyte, and the low volumetric capacity when the material size is reduced to a nanometre scale. Here, we propose a hierarchical structured silicon anode that tackles all three of these problems. Our design is inspired by the structure of a pomegranate, where single silicon nanoparticles are encapsulated by a conductive carbon layer that leaves enough room for expansion and contraction following lithiation and delithiation. An ensemble of these hybrid nanoparticles is then encapsulated by a thicker carbon layer in micrometre-size pouches to act as an electrolyte barrier. As a result of this hierarchical arrangement, the solid-electrolyte interphase remains stable and spatially confined, resulting in superior cyclability (97% capacity retention after 1,000 cycles). In addition, the microstructures lower the electrode-electrolyte contact area, resulting in high Coulombic efficiency (99.87%) and volumetric capacity (1,270 mAh cm-3), and the cycling remains stable even when the areal capacity is increased to the level of commercial lithium-ion batteries (3.7 mAh cm-2).
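    The capacity figures quoted above can be cross-checked with simple arithmetic: a volumetric capacity and an areal capacity together imply an electrode thickness (assuming a uniform film). This is only a consistency check on the reported numbers, not data from the paper.

```python
# Implied electrode thickness from the reported capacities, assuming a
# uniform film: t = areal / volumetric.

volumetric_mAh_per_cm3 = 1270.0   # reported volumetric capacity
areal_mAh_per_cm2 = 3.7           # reported commercial-level areal capacity

thickness_cm = areal_mAh_per_cm2 / volumetric_mAh_per_cm3
thickness_um = thickness_cm * 1e4    # ~29 micrometres
```

    A thickness of roughly 29 µm is in the range of practical battery electrodes, so the two reported figures are mutually consistent.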

  11. Delayed Difference Scheme for Large Scale Scientific Simulations

    NASA Astrophysics Data System (ADS)

    Mudigere, Dheevatsa; Sherlekar, Sunil D.; Ansumali, Santosh

    2014-11-01

    We argue that the current heterogeneous computing environment mimics a complex nonlinear system which needs to borrow the concept of time-scale separation and the delayed difference approach from statistical mechanics and nonlinear dynamics. We show that by replacing the usual difference equations approach by a delayed difference equations approach, the sequential fraction of many scientific computing algorithms can be substantially reduced. We also provide a comprehensive theoretical analysis to establish that the error and stability of our scheme are of the same order as those of existing schemes for a large, well-characterized class of problems.
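    A toy version of the delayed-difference idea: two subdomains advance an explicit 1D diffusion update, but each uses a one-step-old copy of its neighbour's interface value, so the halo exchange need not synchronize on every step. This is purely illustrative of the concept and is not the paper's scheme.

```python
# Explicit 1D diffusion on two subdomains with a one-step-delayed halo
# exchange at the shared interface. Outer walls are held at zero.

def step(u, left_ghost, right_ghost, r):
    """One explicit update: u_i += r * (u_{i-1} - 2*u_i + u_{i+1})."""
    ext = [left_ghost] + u + [right_ghost]
    return [u[i] + r * (ext[i] - 2.0 * ext[i + 1] + ext[i + 2])
            for i in range(len(u))]

r = 0.25                       # diffusion number dt*k/dx^2 (stable <= 0.5)
a = [1.0] * 8                  # left subdomain (hot)
b = [0.0] * 8                  # right subdomain (cold)
halo_a, halo_b = a[-1], b[0]   # cached interface values
for _ in range(200):
    a_new = step(a, 0.0, halo_b, r)
    b_new = step(b, halo_a, 0.0, r)
    halo_a, halo_b = a[-1], b[0]   # next step sees one-step-old interface data
    a, b = a_new, b_new
total = sum(a) + sum(b)        # heat drains out through the cold walls
```

    With r = 0.25 the update is a convex combination of neighbouring values, so the solution stays bounded even with the stale halo, illustrating why a modest delay need not destroy stability.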

  12. A Second Law Based Unstructured Finite Volume Procedure for Generalized Flow Simulation

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok

    1998-01-01

    An unstructured finite volume procedure has been developed for steady and transient thermo-fluid dynamic analysis of fluid systems and components. The procedure is applicable for a flow network consisting of pipes and various fittings where flow is assumed to be one dimensional. It can also be used to simulate flow in a component by modeling a multi-dimensional flow using the same numerical scheme. The flow domain is discretized into a number of interconnected control volumes located arbitrarily in space. The conservation equations for each control volume account for the transport of mass, momentum and entropy from the neighboring control volumes. In addition, they also include the sources of each conserved variable and time dependent terms. The source term of entropy equation contains entropy generation due to heat transfer and fluid friction. Thermodynamic properties are computed from the equation of state of a real fluid. The system of equations is solved by a hybrid numerical method which is a combination of simultaneous Newton-Raphson and successive substitution schemes. The paper also describes the application and verification of the procedure by comparing its predictions with the analytical and numerical solution of several benchmark problems.
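    The Newton-Raphson part of the hybrid solution method can be sketched on the smallest possible flow network: two parallel pipes sharing a total flow, with quadratic friction losses that must balance. The friction coefficients and the closed-form 2x2 solve are illustrative assumptions, not the paper's solver.

```python
# Newton-Raphson for a two-branch flow network: unknowns are the branch
# flows, residuals are mass conservation (q1 + q2 = Q) and equal friction
# pressure drops (k1*q1^2 = k2*q2^2).

def branch_flows(k1, k2, Q, iters=30):
    """Return converged branch flows for two parallel pipes."""
    q1 = q2 = Q / 2.0
    for _ in range(iters):
        r1 = q1 + q2 - Q                      # mass-balance residual
        r2 = k1 * q1 * q1 - k2 * q2 * q2      # pressure-drop residual
        # Jacobian [[1, 1], [2*k1*q1, -2*k2*q2]], inverted by Cramer's rule.
        det = -2.0 * k2 * q2 - 2.0 * k1 * q1
        d1 = ((-r1) * (-2.0 * k2 * q2) - 1.0 * (-r2)) / det
        d2 = (1.0 * (-r2) - (2.0 * k1 * q1) * (-r1)) / det
        q1, q2 = q1 + d1, q2 + d2
    return q1, q2

# Hypothetical friction coefficients; analytically q1/q2 = sqrt(k2/k1).
q1, q2 = branch_flows(1.0, 4.0, 3.0)   # converges to q1 = 2, q2 = 1
```

    A production network solver assembles the same kind of residual/Jacobian system over many control volumes, mixing Newton steps with successive substitution as the abstract describes.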

  13. Toroidal transducer with two large focal zones for increasing the coagulated volume

    NASA Astrophysics Data System (ADS)

    Vincenot, J.; Melodelima, D.; Kocot, A.; Chavrier, F.; Chapelon, J. Y.

    2012-11-01

    Toroidal HIFU transducers have been shown to generate large conical ablations (7 cm3 in 40 seconds). The focal zone is composed of a first ring-shaped focal zone and an overlap of ultrasound beams behind this first focus. A HIFU device has been developed on this principle to treat liver metastases during an open procedure. Although these large lesions contribute to reduce treatment time, it is still necessary to juxtapose 4 to 9 single HIFU lesions to treat a liver metastasis (2 cm in diameter) with safety margins. In this work, a different toroidal geometry was used. With this transducer, the overlap area is located between the probe and the focal ring. The objective was to use this transducer with electronic focusing in order to create a spherical lesion with sufficient volume for the destruction of a metastasis of 2 cm in diameter without any mechanical displacement. The operating frequency of the toroidal transducer was 2.5 MHz. The radius of curvature was 70 mm with a diameter of 67 mm. The focal ring had a radius of 15 mm. The overlap zone extends from 35 to 55 mm from the emitting surface. An ultrasound-imaging probe (working at 7.5 MHz) was placed in a central circular opening of 26 mm in the HIFU transducer and was aligned with the focal plane. The transducer was divided into 32 rings of 78 mm2. Using a 32-channel amplifier with a phase resolution of 1.4 degrees, it was possible to change the diameter (0 to 15 mm) and depth (45 to 85 mm) of the focus circle to maximize the dimensions of the lesion. Tests were conducted in vitro, in bovine liver samples. This toroidal geometry and the use of electronic beam steering allow the creation of roughly spherical lesions (diameter of 47 mm, depth of 35 mm). This treatment was obtained in 6 minutes and 10 seconds without any mechanical displacement of the transducer. The lesions obtained were homogeneous and no untreated area was observed. In conclusion, these results indicate that the treatment of a liver

  14. Large eddy simulation and direct numerical simulation of high speed turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Adumitroaie, V.; Frankel, S. H.; Madnia, C. K.; Givi, P.

    1993-01-01

    The objective of this research is to make use of Large Eddy Simulation (LES) and Direct Numerical Simulation (DNS) for the computational analyses of high speed reacting flows. Our efforts in the first phase of this research, conducted within the past three years, have been directed at several issues pertaining to the intricate physics of turbulent reacting flows. In our previous five semi-annual reports submitted to NASA LaRC, as well as several technical papers in archival journals, the results of our investigations have been fully described. In this progress report, which is different in format as compared to our previous documents, we focus only on the issue of LES. The reason for doing so is that LES is the primary issue of interest to our Technical Monitor and that our other findings were needed to support the activities conducted under this prime issue. The outcomes of our related investigations, nevertheless, are included in the appendices accompanying this report. The relevance of the materials in these appendices is, therefore, discussed only briefly within the body of the report. Here, results are presented of a priori and a posteriori analyses for validity assessments of assumed Probability Density Function (PDF) methods as potential subgrid scale (SGS) closures for LES of turbulent reacting flows. Simple non-premixed reacting systems involving an isothermal reaction of the type A + B yields Products under both chemical equilibrium and non-equilibrium conditions are considered. A priori analyses are conducted of a homogeneous box flow, and a spatially developing planar mixing layer to investigate the performance of the Pearson Family of PDF's as SGS models. A posteriori analyses are conducted of the mixing layer using a hybrid one-equation Smagorinsky/PDF SGS closure.
The Smagorinsky closure augmented by the solution of the subgrid turbulent kinetic energy (TKE) equation is employed to account for hydrodynamic fluctuations, and the PDF is employed for modeling the

  15. Endotracheal cuff pressure and tracheal mucosal blood flow: endoscopic study of effects of four large volume cuffs.

    PubMed Central

    Seegobin, R D; van Hasselt, G L

    1984-01-01

    Large volume, low pressure endotracheal tube cuffs are claimed to have a less deleterious effect on tracheal mucosa than high pressure, low volume cuffs. Low pressure cuffs, however, may easily be overinflated to yield pressures that will exceed capillary perfusion pressure. Various large volume cuffed endotracheal tubes were studied, including Portex Profile, Searle Sensiv, Mallinckrodt Hi-Lo, and Lanz. Tracheal mucosal blood flow in 40 patients undergoing surgery was assessed using an endoscopic photographic technique while varying the cuff inflation pressure. It was found that these cuffs, when overpressurised, impaired mucosal blood flow. This impairment of tracheal mucosal blood flow is an important factor in tracheal morbidity associated with intubation. Hence it is recommended that a cuff inflation pressure of 30 cm H2O (22 mm Hg) should not be exceeded. PMID:6423162
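    The recommended ceiling is quoted in both pressure units; the standard conversion factor (1 cm H2O ≈ 0.7356 mm Hg) reproduces the paper's 22 mm Hg figure. A one-line check:

```python
# Convert the recommended cuff inflation pressure limit from cm H2O to mm Hg
# using the standard factor 1 cm H2O ~ 0.7356 mm Hg.

CMH2O_TO_MMHG = 0.7356

def cmh2o_to_mmhg(p):
    return p * CMH2O_TO_MMHG

limit = cmh2o_to_mmhg(30.0)   # ~22 mm Hg, matching the abstract
```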

  16. Modelling artificial sea salt emission in large eddy simulations

    PubMed Central

    Maalick, Z.; Korhonen, H.; Kokkola, H.; Kühn, T.; Romakkaniemi, S.

    2014-01-01

    We study the dispersion of sea salt particles from artificially injected sea spray at a cloud-resolving scale. Understanding of how different aerosol processes affect particle dispersion is crucial when designing emission sources for marine cloud brightening. Compared with previous studies, we include for the first time an explicit treatment of aerosol water, which takes into account condensation, evaporation and their effect on ambient temperature. This enables us to capture the negative buoyancy caused by water evaporation from aerosols. Additionally, we use a higher model resolution to capture aerosol loss through coagulation near the source point. We find that, with a seawater flux of 15 kg s−1, the cooling due to evaporation can be as much as 1.4 K, causing a delay in particle dispersion of 10–20 min. This delay enhances particle scavenging by a factor of 1.14 compared with simulations without aerosol water. We further show that both cooling and particle dispersion depend on the model resolution, with a maximum particle scavenging efficiency of 20% within 5 h after emission at maximum resolution of 50 m. Based on these results, we suggest further regional high-resolution studies which model several injection periods over several weeks. PMID:25404679

  17. Modelling artificial sea salt emission in large eddy simulations.

    PubMed

    Maalick, Z; Korhonen, H; Kokkola, H; Kühn, T; Romakkaniemi, S

    2014-12-28

    We study the dispersion of sea salt particles from artificially injected sea spray at a cloud-resolving scale. Understanding of how different aerosol processes affect particle dispersion is crucial when designing emission sources for marine cloud brightening. Compared with previous studies, we include for the first time an explicit treatment of aerosol water, which takes into account condensation, evaporation and their effect on ambient temperature. This enables us to capture the negative buoyancy caused by water evaporation from aerosols. Additionally, we use a higher model resolution to capture aerosol loss through coagulation near the source point. We find that, with a seawater flux of 15 kg s(-1), the cooling due to evaporation can be as much as 1.4 K, causing a delay in particle dispersion of 10-20 min. This delay enhances particle scavenging by a factor of 1.14 compared with simulations without aerosol water. We further show that both cooling and particle dispersion depend on the model resolution, with a maximum particle scavenging efficiency of 20% within 5 h after emission at maximum resolution of 50 m. Based on these results, we suggest further regional high-resolution studies which model several injection periods over several weeks.

  18. Large-eddy simulations of a propelled submarine model

    NASA Astrophysics Data System (ADS)

    Posa, Antonio; Balaras, Elias

    2015-11-01

    The influence of the propeller on the wake as well as the evolution of the turbulent boundary layers over an appended notional submarine geometry (DARPA SUBOFF) is reported. The present approach utilizes a wall-resolved LES, coupled with an immersed boundary formulation, to simulate the flow at a model-scale Reynolds number (Re = 1.2 x 10^6, based on the free-stream velocity and the length of the body). Cylindrical coordinates are adopted, and the computational grid is composed of 3.5 billion nodes. Our approach has been validated on the appended submarine body in towed conditions (without propeller) by comparisons to wind tunnel experiments in the literature. The comparison with the towed configuration shows profound modifications in the boundary layer over the stern surface, due to flow acceleration, with higher values of turbulent kinetic energy in the inner layer and lower values in the outer layer. This behavior was found to be tied to a different topology of the coherent structures between the propelled and towed cases. The wake is also highly affected, and the momentum deficit displays a non-monotonic evolution downstream. An axial peak of turbulent kinetic energy replaces the bimodal distribution of the stresses in the wake observed in the towed configuration. Supported by ONR Grant N000141110455, monitored by Dr. Ki-Han Kim.

  19. Large liquid rocket engine transient performance simulation system

    NASA Technical Reports Server (NTRS)

    Mason, J. R.; Southwick, R. D.

    1989-01-01

    Phase 1 of the Rocket Engine Transient Simulation (ROCETS) program consists of seven technical tasks: architecture; system requirements; component and submodel requirements; submodel implementation; component implementation; submodel testing and verification; and subsystem testing and verification. These tasks were completed. Phase 2 of ROCETS consists of two technical tasks: Technology Test Bed Engine (TTBE) model data generation; and system testing and verification. During this period specific coding of the system processors was begun and the engineering representations of Phase 1 were expanded to produce a simple model of the TTBE. As the code was completed, some minor modifications to the system architecture centering on the global variable common, GLOBVAR, were necessary to increase processor efficiency. The engineering modules completed during Phase 2 are listed: INJT00 - main injector; MCHB00 - main chamber; NOZL00 - nozzle thrust calculations; PBRN00 - preburner; PIPE02 - compressible flow without inertia; PUMP00 - polytropic pump; ROTR00 - rotor torque balance/speed derivative; and TURB00 - turbine. Detailed documentation of these modules is in the Appendix. In addition to the engineering modules, several submodules were also completed. These submodules include combustion properties, component performance characteristics (maps), and specific utilities. Specific coding was begun on the system configuration processor. All functions necessary for multiple module operation were completed, but the SOLVER implementation is still under development. This system, the Verification Checkout Facility (VCF), allows interactive comparison of module results to stored data as well as providing an intermediate checkout of the processor code. After validation using the VCF, the engineering modules and submodules were used to build a simple TTBE.


  1. Neutral Buoyancy Simulator - NB32 - Large Space Structure

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The Hubble Space Telescope (HST) is a cooperative program of the European Space Agency (ESA) and the National Aeronautics and Space Administration (NASA) to operate a long-lived space-based observatory; it was the flagship mission of NASA's Great Observatories program. The HST program began as an astronomical dream in the 1940s. During the 1970s and 1980s, HST was designed and built, and it became operational in the 1990s. HST was deployed into a low-Earth orbit on April 25, 1990 from the cargo bay of the Space Shuttle Discovery (STS-31). The design of the HST took into consideration its length of service and the necessity of repairs and equipment replacement by making the body modular, so that subsequent shuttle missions could recover the HST, replace faulty or obsolete parts, and re-release it. MSFC's Neutral Buoyancy Simulator served as the training facility for shuttle astronauts for Hubble related missions. Shown is astronaut Shannon Lucid having her life support system checked prior to entering the NBS to begin training on the space telescope axial scientific instrument changeout.

  2. Numerical simulation of dam-break problem using staggered finite volume method

    NASA Astrophysics Data System (ADS)

    Budiasih, L. K.; Wiryanto, L. H.

    2016-02-01

    The dam-break problem arises when a wall separating two bodies of water of different depths is suddenly removed: a shock wave occurs and propagates. The behavior of the wave is investigated with respect to the water depth and its wave speed. The aim of this research is to model the dam-break problem using the non-linear shallow water equations and solve them numerically using a staggered finite volume method. The solution is used to simulate the dam-break on a wet bed. Our numerical solution is compared to the analytical solution of the shallow water equations for the dam-break problem. The momentum non-conservative finite volume scheme on a staggered grid gives good agreement for the dam-break problem on a wet bed, for depth ratios greater than 0.25.
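    A staggered-grid shallow water scheme of the kind described can be sketched in a few dozen lines: depth h lives at cell centres, velocity u at faces, and the mass flux uses the upwind depth. The momentum advection term is dropped for brevity, so this is only a sketch of the staggered layout, not the paper's full scheme; grid sizes and initial depths are illustrative.

```python
# 1D shallow water dam-break on a wet bed with a forward-backward staggered
# finite volume update: velocity first (from the depth gradient), then depth
# (from upwind mass fluxes). Closed walls at both ends conserve volume.

g = 9.81
N, L = 200, 1.0
dx = L / N
h = [1.0 if (i + 0.5) * dx < 0.5 else 0.5 for i in range(N)]  # dam at x=0.5
u = [0.0] * (N + 1)                 # face velocities; u = 0 at closed walls
dt = 0.2 * dx / (g * 1.0) ** 0.5    # CFL-limited time step

def step(h, u):
    # Velocity update from the current depth gradient (advection omitted).
    u_new = u[:]
    for j in range(1, N):
        u_new[j] = u[j] - dt * g / dx * (h[j] - h[j - 1])
    # Upwind mass flux q = h_upwind * u at interior faces.
    q = [0.0] * (N + 1)
    for j in range(1, N):
        q[j] = (h[j - 1] if u_new[j] >= 0 else h[j]) * u_new[j]
    h_new = [h[i] - dt / dx * (q[i + 1] - q[i]) for i in range(N)]
    return h_new, u_new

vol0 = sum(h) * dx
for _ in range(100):
    h, u = step(h, u)
# A bore propagates into the shallow (right) side; total volume is conserved.
```

    Because the boundary fluxes vanish and interior fluxes telescope, mass conservation holds to machine precision, which is a convenient built-in check for staggered schemes of this type.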

  3. Large Eddy Simulation of Aircraft Wake Vortices: Atmospheric Turbulence Effects

    NASA Technical Reports Server (NTRS)

    Han, Jongil; Lin, Yuh-Lang; Arya, S. Pal; Kao, C.-T.

    1997-01-01

    Crow instability can develop at most atmospheric turbulence levels; however, the ring vortices may not form in extremely strong turbulence cases due to strong dissipation of the vortices. It appears that strong turbulence tends to accelerate the occurrence of Crow instability. The wavelength of the most unstable mode is estimated to be about 5b(sub 0), which is less than the theoretical value of 8.6b(sub 0) (Crow, 1970); this may be due to the limited domain size and highly nonlinear turbulent flow characteristics. Three-dimensional turbulence can cause wake vortices to decay more rapidly. Axial velocity may be developed by vertical distortion of a vortex pair due to Crow instability or large turbulent eddy motion. More experiments with various non-dimensional turbulence levels are necessary to obtain useful statistics of wake vortex behavior due to turbulence. Larger turbulence length-scale effects need to be investigated by enlarging the domain size or using grid nesting.

  4. Simulation and training of lumbar punctures using haptic volume rendering and a 6DOF haptic device

    NASA Astrophysics Data System (ADS)

    Färber, Matthias; Heller, Julika; Handels, Heinz

    2007-03-01

    A lumbar puncture is performed by inserting a needle into the spinal canal of the patient to inject medication or to extract cerebrospinal fluid. Training for this procedure is usually done on patients under the guidance of experienced supervisors. A virtual reality lumbar puncture simulator has been developed in order to minimize training costs and patient risk. We use a haptic device with six degrees of freedom (6DOF) to feed back forces that resist needle insertion and rotation. An improved haptic volume rendering approach is used to calculate these forces. The approach makes use of label data for relevant structures such as skin, bone, muscle, and fat, together with the original CT data, which contributes information about image structures that cannot be segmented. A real-time 3D visualization with optional stereo view shows the punctured region, and 2D visualizations of orthogonal slices give a detailed impression of the anatomical context. The input data, consisting of CT data, label data, and surface models of relevant structures, is defined in an XML file together with haptic rendering and visualization parameters. In a first evaluation, the Visible Human male dataset was used to generate a virtual training body, and several users with different levels of medical experience tested the lumbar puncture trainer. The simulator gives a good haptic and visual impression of the needle insertion, and the haptic volume rendering technique conveys the feel of unsegmented structures. In particular, the restriction of transversal needle movement, together with the rotation constraints enabled by the 6DOF device, facilitates a realistic puncture simulation.

  5. Implementation of low communication frequency 3D FFT algorithm for ultra-large-scale micromagnetics simulation

    NASA Astrophysics Data System (ADS)

    Tsukahara, Hiroshi; Iwano, Kaoru; Mitsumata, Chiharu; Ishikawa, Tadashi; Ono, Kanta

    2016-10-01

    We implement a low-communication-frequency three-dimensional fast Fourier transform (FFT) algorithm in a micromagnetics simulator for calculating the magnetostatic field, which accounts for a significant portion of large-scale micromagnetics simulation time. This FFT algorithm reduces the number of all-to-all communications from six to two. Simulation times with our simulator show high parallel scalability, even when the micromagnetics simulation runs on 32,768 physical computing cores. This low-communication-frequency FFT algorithm enables some of the world's largest micromagnetics simulations, with over one billion computational cells.
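
    The magnetostatic field computation at the heart of such simulators is a periodic convolution of the magnetization with a demagnetizing kernel, evaluated in Fourier space. A minimal single-node sketch of that core operation is below; the parallel domain decomposition, the all-to-all transposes, and the actual demagnetizing tensor are all omitted (the kernel here is a random stand-in).

```python
import numpy as np

def demag_field_fft(m, kernel):
    """Periodic convolution H = K * m evaluated via the 3D FFT:
    the core operation of FFT-based magnetostatic field solvers.
    Cost is O(N log N) instead of O(N^2) for direct summation."""
    return np.fft.ifftn(np.fft.fftn(kernel) * np.fft.fftn(m)).real
```

    Because multiplication in Fourier space equals circular convolution in real space, the FFT result matches the direct (all-pairs) sum while touching each cell only O(log N) times.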

  6. Left ventricular and myocardial perfusion responses to volume unloading and afterload reduction in a computer simulation.

    PubMed

    Giridharan, Guruprasad A; Ewert, Dan L; Pantalos, George M; Gillars, Kevin J; Litwak, Kenneth N; Gray, Laman A; Koenig, Steven C

    2004-01-01

    Ventricular assist devices (VADs) have been used successfully as a bridge to transplant in heart failure patients by unloading ventricular volume and restoring the circulation. In a few cases, patients have been successfully weaned from these devices after myocardial recovery. To promote myocardial recovery and alleviate the demand for donor organs, we are developing an artificial vasculature device (AVD) that is designed to allow the heart to fill to its normal volume but eject against a lower afterload. Using this approach, the heart ejects its stroke volume (SV) into an AVD anastomosed to the aortic arch, which has been programmed to produce any desired afterload condition defined by an input impedance profile. During diastole, the AVD returns this SV to the aorta, providing counterpulsation. Dynamic computer models of each of the assist devices (AVD, continuous, and pulsatile flow pumps) were developed and coupled to a model of the cardiovascular system. Computer simulations of these assist techniques were conducted to predict physiologic responses. Hemodynamic parameters, ventricular pressure-volume loops, and vascular impedance characteristics were calculated with AVD, continuous VAD, and asynchronous pulsatile VAD support for a range of clinical cardiac conditions (normal, failing, and recovering left ventricle). These simulation results indicate that the AVD may provide better coronary perfusion, as well as lower vascular resistance and elastance seen by the native heart during ejection compared with continuous and pulsatile VAD. Our working hypothesis is that by controlling afterload using the AVD approach, ventricular cannulation can be eliminated, myocardial perfusion improved, myocardial compliance and resistance restored, and effective weaning protocols developed that promote myocardial recovery.
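
    The AVD presents a programmable afterload (input impedance) to the ejecting ventricle. A common minimal lumped model of arterial afterload is the three-element Windkessel; the sketch below is purely illustrative of that idea and is not the authors' cardiovascular model, and all parameter values are hypothetical.

```python
import numpy as np

def windkessel3(q, dt, Rc=0.05, Rp=1.0, C=1.5, p0=80.0):
    """Three-element Windkessel afterload: characteristic resistance Rc
    in series with a parallel compliance C and peripheral resistance Rp.
    q: aortic flow (mL/s); Rc, Rp: mmHg*s/mL; C: mL/mmHg.
    Returns aortic root pressure (mmHg)."""
    p_c = p0                               # pressure across the compliance
    p = np.empty_like(q)
    for i, qi in enumerate(q):
        p_c += dt * (qi - p_c / Rp) / C    # dPc/dt = (q - Pc/Rp) / C
        p[i] = p_c + Rc * qi               # add the proximal resistive drop
    return p

# drive with a half-sine systolic ejection waveform at 75 bpm
dt, period, t_sys = 0.001, 0.8, 0.3
t = np.arange(0.0, 20 * period, dt)
q = np.where(t % period < t_sys, 400.0 * np.sin(np.pi * (t % period) / t_sys), 0.0)
p = windkessel3(q, dt)
```

    At steady state the mean pressure settles near mean flow times total resistance (Rc + Rp), which is how such a model is "programmed" to present a chosen afterload.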

  7. ANALYSIS OF LOW-LEVEL PESTICIDES FROM HIGH-ELEVATION LAKE WATERS BY LARGE VOLUME INJECTION GCMS

    EPA Science Inventory

    This paper describes the method development for the determination of ultra-low level pesticides from high-elevation lake waters by large-volume injection programmable temperature vaporizer (LVI-PTV) GC/MS. This analytical method is developed as a subtask of a larger study, backgr...

  8. Effect of filtration rates on hollow fiber ultrafilter concentration of viruses and protozoans from large volumes of water

    EPA Science Inventory

    Aims: To describe the ability of tangential flow hollow-fiber ultrafiltration to recover viruses from large volumes of water when run either at high filtration rates or lower filtration rates and recover Cryptosporidium parvum at high filtration rates. Methods and Results: Wate...

  9. Case discussion: large volume blood loss and delirium in a patient with subtrochanteric fracture, dementia, and multiple comorbidities.

    PubMed

    Christmas, Colleen; Mears, Simon C; Sieber, Frederick E; Votsis, Julie; Wood, Ronald C; Friedman, Susan M

    2011-09-01

    This case presents a discussion of a 92-year-old man with multiple comorbidities, who presents with a subtrochanteric fracture. His course is complicated by large volume blood loss intraoperatively, requiring intensive care unit (ICU) monitoring postoperatively. His course is also complicated by delirium.

  10. A dynamic mixed subgrid-scale model for large eddy simulation on unstructured grids: application to turbulent pipe flows

    NASA Astrophysics Data System (ADS)

    Lampitella, P.; Colombo, E.; Inzoli, F.

    2014-04-01

    The paper presents a consistent large eddy simulation (LES) framework which is particularly suited for implicitly filtered LES with unstructured finite volume (FV) codes. From the analysis of the subgrid-scale (SGS) stress tensor arising in this new LES formulation, a novel form of scale-similar SGS model is proposed and combined with a classical eddy viscosity term. The constants in the resulting mixed model are then computed through a new, cheaper, dynamic procedure based on a consistent redefinition of the Germano identity within the new LES framework. The dynamic mixed model is implemented in a commercial, unstructured, finite volume solver, and numerical tests are performed on turbulent pipe flow at Reτ = 320-1142, showing the flexibility and improvements of the approach over classical modeling strategies. Some limitations of the proposed implementation are also highlighted.

  11. Large Eddy Simulation of Wind Turbine Wakes. Detailed Comparisons of Two Codes Focusing on Effects of Numerics and Subgrid Modeling

    SciTech Connect

    Martinez-Tossas, Luis A.; Churchfield, Matthew J.; Meneveau, Charles

    2015-06-18

    In this work we report on results from a detailed comparative numerical study from two Large Eddy Simulation (LES) codes using the Actuator Line Model (ALM). The study focuses on prediction of wind turbine wakes and their breakdown when subject to uniform inflow. Previous studies have shown relative insensitivity to subgrid modeling in the context of a finite-volume code. The present study uses the low dissipation pseudo-spectral LES code from Johns Hopkins University (LESGO) and the second-order, finite-volume OpenFOAM code (SOWFA) from the National Renewable Energy Laboratory. When subject to uniform inflow, the loads on the blades are found to be unaffected by subgrid models or numerics, as expected. The turbulence in the wake and the location of transition to a turbulent state are affected by the subgrid-scale model and the numerics.

  12. Large Eddy Simulation of wind turbine wakes: detailed comparisons of two codes focusing on effects of numerics and subgrid modeling

    NASA Astrophysics Data System (ADS)

    Martínez-Tossas, Luis A.; Churchfield, Matthew J.; Meneveau, Charles

    2015-06-01

    In this work we report on results from a detailed comparative numerical study from two Large Eddy Simulation (LES) codes using the Actuator Line Model (ALM). The study focuses on prediction of wind turbine wakes and their breakdown when subject to uniform inflow. Previous studies have shown relative insensitivity to subgrid modeling in the context of a finite-volume code. The present study uses the low dissipation pseudo-spectral LES code from Johns Hopkins University (LESGO) and the second-order, finite-volume OpenFOAM code (SOWFA) from the National Renewable Energy Laboratory. When subject to uniform inflow, the loads on the blades are found to be unaffected by subgrid models or numerics, as expected. The turbulence in the wake and the location of transition to a turbulent state are affected by the subgrid-scale model and the numerics.

  13. Large Eddy Simulation of Wind Turbine Wakes. Detailed Comparisons of Two Codes Focusing on Effects of Numerics and Subgrid Modeling

    DOE PAGES

    Martinez-Tossas, Luis A.; Churchfield, Matthew J.; Meneveau, Charles

    2015-06-18

    In this work we report on results from a detailed comparative numerical study from two Large Eddy Simulation (LES) codes using the Actuator Line Model (ALM). The study focuses on prediction of wind turbine wakes and their breakdown when subject to uniform inflow. Previous studies have shown relative insensitivity to subgrid modeling in the context of a finite-volume code. The present study uses the low dissipation pseudo-spectral LES code from Johns Hopkins University (LESGO) and the second-order, finite-volume OpenFOAM code (SOWFA) from the National Renewable Energy Laboratory. When subject to uniform inflow, the loads on the blades are found to be unaffected by subgrid models or numerics, as expected. The turbulence in the wake and the location of transition to a turbulent state are affected by the subgrid-scale model and the numerics.

  14. Development of deployable structures for large space platform systems. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    Cox, R. L.; Nelson, R. A.

    1983-01-01

    Candidate deployable linear platform system concepts suitable for development to technology readiness by 1986 are reviewed. The systems concepts were based on trades of alternate deployable/retractable structure concepts, integration of utilities, and interface approaches for docking and assembly of payloads and subsystems. The deployable volume studies involved generation of concepts for deployable volumes which could be used as unpressurized or pressurized hangars, habitats and interconnecting tunnels. Concept generation emphasized using flexible materials and deployable truss structure technology.

  15. SIMULATING WAVES IN THE UPPER SOLAR ATMOSPHERE WITH SURYA: A WELL-BALANCED HIGH-ORDER FINITE-VOLUME CODE

    SciTech Connect

    Fuchs, F. G.; McMurry, A. D.; Mishra, S.; Waagan, K.

    2011-05-10

    We consider the propagation of waves in a stratified non-isothermal magnetic atmosphere. The situation of interest corresponds to waves in the outer solar (chromosphere and corona) and other stellar atmospheres. The waves are simulated by using a high-resolution, well-balanced finite-volume-based massively parallel code named SURYA. Numerical experiments in both two and three space dimensions involving realistic temperature distributions, driving forces, and magnetic field configurations are described. Diverse phenomena such as mode conversion, wave acceleration at the transition layer, and driving-dependent wave dynamics are observed. We obtain evidence for the presence of coronal Alfven waves in some three-dimensional configurations. Although some of the incident wave energy is transmitted into the corona, a large proportion of it is accumulated in the chromosphere, providing a possible mechanism for chromospheric heating.

  16. Improvements in Monte Carlo Simulation of Large Electron Fields

    SciTech Connect

    Faddegon, Bruce A.; Perl, Joseph; Asai, Makoto

    2007-11-28

    Two Monte Carlo systems, EGSnrc and Geant4, were used to calculate dose distributions in large electron fields used in radiotherapy. Source and geometry parameters were adjusted to match calculated results with measurement. Both codes were capable of accurately reproducing the measured dose distributions of the 6 electron beams available on the accelerator. Depth penetration was matched to 0.1 cm. Depth dose curves generally agreed to 2% in the build-up region, although there is an additional 2-3% experimental uncertainty in this region. Dose profiles matched to 2% at the depth of maximum dose in the central region of the beam, out to the point of the profile where the dose begins to fall rapidly. A 3%/3mm match was obtained outside the central region except for the 6 MeV beam, where dose differences reached 5%. The discrepancy observed in the bremsstrahlung tail in published results that used EGS4 is no longer evident. The different systems required different source energies, incident beam angles, thicknesses of the exit window and primary foils, and distance between the primary and secondary foil. These results underscore the requirement for an experimental benchmark of electron scatter for beam energies and foils relevant to radiotherapy.
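
    Agreement criteria such as the "3%/3mm" match quoted above are commonly quantified with the gamma index (Low et al.), which combines a dose-difference and a distance-to-agreement test. A minimal 1D version follows; it illustrates the criterion only and is not the comparison method used in the study.

```python
import numpy as np

def gamma_index_1d(dose_ref, dose_eval, x, dd=0.03, dta=0.3):
    """1D gamma index: for each reference point, minimize over evaluated
    points the combined dose-difference / distance-to-agreement metric.
    dd: fractional dose criterion (3% of max dose); dta: distance (cm).
    A point passes the criterion when gamma <= 1."""
    d_max = dose_ref.max()
    gamma = np.empty_like(dose_ref)
    for i in range(len(x)):
        dose_term = ((dose_eval - dose_ref[i]) / (dd * d_max)) ** 2
        dist_term = ((x - x[i]) / dta) ** 2
        gamma[i] = np.sqrt(np.min(dose_term + dist_term))
    return gamma
```

    A uniform 2% dose offset, for instance, yields gamma of at most 2/3 under a 3%/3mm criterion and therefore passes everywhere.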

  17. Effect of crowd size on patient volume at a large, multipurpose, indoor stadium.

    PubMed

    De Lorenzo, R A; Gray, B C; Bennett, P C; Lamparella, V J

    1989-01-01

    A prediction of the patient volume expected at "mass gatherings" is desirable in order to provide optimal on-site emergency medical care. While several methods of predicting patient loads have been suggested, a reliable technique has not been established. This study examines the frequency of medical emergencies at the Syracuse University Carrier Dome, a 50,500-seat indoor stadium. Patient volume and level of care at collegiate basketball and football games, as well as rock concerts, over a 7-year period were examined and tabulated. This information was analyzed using simple regression and nonparametric statistical methods to determine the level of correlation between crowd size and patient volume. These analyses demonstrated no statistically significant increase in patient volume with increasing crowd size for basketball and football events. There was a small but statistically significant increase in patient volume with increasing crowd size for concerts. A comparison of similar crowd sizes for each of the three event types showed that patient frequency is greatest for concerts and smallest for basketball. The study suggests that crowd size alone has only a minor influence on patient volume at any given event; structuring medical services solely on expected crowd size, without considering other influences such as event type and duration, may give poor results.
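
    A nonparametric correlation of the kind used in such analyses can be computed as the Pearson correlation of ranks (Spearman's rho). The sketch below is a minimal version without tie handling, run on hypothetical crowd/patient data, not on the study's data.

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: the Pearson correlation of the ranks.
    Measures monotone association; ties are not handled in this sketch."""
    def ranks(a):
        r = np.empty(len(a))
        r[np.argsort(a)] = np.arange(1, len(a) + 1)  # rank by sort order
        return r
    rx = ranks(np.asarray(x, dtype=float))
    ry = ranks(np.asarray(y, dtype=float))
    return np.corrcoef(rx, ry)[0, 1]
```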

  18. Simulation of viscous flows using a multigrid-control volume finite element method

    SciTech Connect

    Hookey, N.A.

    1994-12-31

    This paper discusses a multigrid control volume finite element method (MG CVFEM) for the simulation of viscous fluid flows. The CVFEM is an equal-order primitive variables formulation that avoids spurious solution fields by incorporating an appropriate pressure gradient in the velocity interpolation functions. The resulting set of discretized equations is solved using a coupled equation line solver (CELS) that solves the discretized momentum and continuity equations simultaneously along lines in the calculation domain. The CVFEM has been implemented in the context of both FMV- and V-cycle multigrid algorithms, and preliminary results indicate a five- to tenfold reduction in execution times.
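
    The V-cycle mentioned above can be sketched for a model 1D Poisson problem. This is the generic textbook cycle (damped Jacobi smoothing, full-weighting restriction, linear-interpolation prolongation), not the paper's CVFEM/CELS implementation.

```python
import numpy as np

def v_cycle(u, f, h, n_smooth=3, omega=2.0 / 3.0):
    """One multigrid V-cycle for -u'' = f on a uniform 1D grid with
    n = 2^k + 1 points and homogeneous Dirichlet boundaries."""
    def smooth(u, iters):
        for _ in range(iters):  # damped Jacobi smoother
            u[1:-1] += omega * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
        return u

    u = smooth(u, n_smooth)
    if len(u) <= 3:
        return u
    # residual, then full-weighting restriction to the coarse grid
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    rc = np.zeros((len(u) + 1) // 2)
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    # recursive coarse-grid correction, prolonged by linear interpolation
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h, n_smooth, omega)
    e = np.zeros_like(u)
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return smooth(u + e, n_smooth)
```

    Each cycle reduces the algebraic error by a roughly grid-independent factor, which is the source of the reported multigrid speedups.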

  19. Developing and Testing Simulated Occupational Experiences for Distributive Education Students in Rural Communities: Volume III: Training Plans: Final Report.

    ERIC Educational Resources Information Center

    Virginia Polytechnic Inst. and State Univ., Blacksburg.

    Volume 3 of a three volume final report presents prototype job training plans developed as part of a research project which pilot tested a distributive education program for rural schools utilizing a retail store simulation plan. The plans are for 15 entry-level and 15 career-level jobs in seven categories of distributive business (department…

  20. Large eddy simulation of fuel injection and mixing process in a diesel engine

    NASA Astrophysics Data System (ADS)

    Zhou, Lei; Xie, Mao-Zhao; Jia, Ming; Shi, Jun-Rui

    2011-08-01

    The large eddy simulation (LES) approach implemented in the KIVA-3V code, based on a one-equation sub-grid turbulent kinetic energy model, is employed for numerical computation of diesel sprays in a constant volume vessel and in a Caterpillar 3400 series diesel engine. Computational results are compared with those obtained by a RANS (RNG k-ɛ) model as well as with experimental data. The sensitivity of the LES results to mesh resolution is also discussed. The results show that LES generally provides flow and spray characteristics in better agreement with experimental data than RANS, and that small-scale random vortical structures of the in-cylinder turbulent spray field can be captured by LES. Furthermore, the penetrations of fuel droplets and vapors calculated by LES are larger than the RANS results, and the sub-grid turbulent kinetic energy and sub-grid turbulent viscosity provided by the LES model are markedly smaller than those calculated by the RANS model. Finally, it is found that the initial swirl significantly affects the spray penetration and the distribution of fuel vapor within the combustion chamber.

  1. Large-Scale Simulations of Realistic Fluidized Bed Reactors using Novel Numerical Methods

    NASA Astrophysics Data System (ADS)

    Capecelatro, Jesse; Desjardins, Olivier; Pepiot, Perrine; National Renewable Energy Lab Collaboration

    2011-11-01

    Turbulent particle-laden flows in the form of fluidized bed reactors display good mixing properties, low pressure drops, and a fairly uniform temperature distribution. Understanding and predicting the flow dynamics within the reactor is necessary for improving efficiency and providing technologies for large-scale industrialization. A numerical strategy based on an Eulerian representation of the gas phase and Lagrangian tracking of the particles is developed in the framework of NGA, a high-order, fully conservative parallel code tailored for turbulent flows. The particles are accounted for using a point-particle assumption. Once the gas-phase quantities are mapped to the particle locations, a conservative, implicit diffusion operation smooths the field. Normal and tangential collisions are handled via a soft-sphere model, modified to allow the bed to reach close packing at rest. The pressure drop across the bed is compared with theory to accurately predict the minimum fluidization velocity. 3D simulations of the National Renewable Energy Lab's 4-inch reactor are then conducted, with tens of millions of particles tracked. The reactor's geometry is modeled using an immersed boundary scheme. Statistics for volume fraction, velocities, bed expansion, and bubble characteristics are analyzed and compared with experimental data.
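
    The soft-sphere collision treatment mentioned above resolves contacts with a spring-dashpot force on the particle overlap. A minimal sketch of the normal force for one particle pair follows; the stiffness and damping values are hypothetical, and the authors' tangential model and close-packing modification are not shown.

```python
import numpy as np

def soft_sphere_normal_force(x1, x2, v1, v2, r1, r2, k=1.0e4, eta=5.0):
    """Normal contact force on particle 1 from a spring-dashpot
    (soft-sphere) model: a repulsive spring on the overlap depth plus a
    dashpot damping the normal relative velocity. k, eta are illustrative."""
    d = x1 - x2
    dist = np.linalg.norm(d)
    overlap = (r1 + r2) - dist
    if overlap <= 0.0 or dist == 0.0:   # no contact: no force
        return np.zeros(3)
    n = d / dist                        # unit normal from particle 2 to 1
    v_n = np.dot(v1 - v2, n) * n        # normal relative velocity
    return k * overlap * n - eta * v_n
```

    Because the spring allows a finite overlap, the collision is resolved over several time steps rather than instantaneously, which is what lets a bed settle to close packing.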

  2. Large-eddy simulation of turbulent cavitating flow in a micro channel

    SciTech Connect

    Egerer, Christian P.; Hickel, Stefan; Schmidt, Steffen J.; Adams, Nikolaus A.

    2014-08-15

    Large-eddy simulations (LES) of cavitating flow of a Diesel-fuel-like fluid in a generic throttle geometry are presented. Two-phase regions are modeled by a parameter-free thermodynamic equilibrium mixture model, and compressibility of the liquid and the liquid-vapor mixture is taken into account. The Adaptive Local Deconvolution Method (ALDM), adapted for cavitating flows, is employed for discretizing the convective terms of the Navier-Stokes equations for the homogeneous mixture. ALDM is a finite-volume-based implicit LES approach that merges physically motivated turbulence modeling and numerical discretization. Validation of the numerical method is performed for a cavitating turbulent mixing layer. Comparisons with experimental data of the throttle flow at two different operating conditions are presented. The LES with the employed cavitation modeling predicts relevant flow and cavitation features accurately within the uncertainty range of the experiment. The turbulence structure of the flow is further analyzed with an emphasis on the interaction between cavitation and coherent motion, and on the statistically averaged-flow evolution.

  3. Modeling Persistent Contrails in a Large Eddy Simulation and a Global Climate Model

    NASA Astrophysics Data System (ADS)

    Naiman, A. D.; Lele, S. K.; Wilkerson, J. T.; Jacobson, M. Z.

    2009-12-01

    Two models of aircraft condensation trail (contrail) evolution have been developed: a high-resolution, three-dimensional Large Eddy Simulation (LES) and a simple, low-cost Subgrid Contrail Model (SCM). The LES model was used to simulate contrail development from one second to twenty minutes after emission by the passing aircraft. The LES solves the incompressible Navier-Stokes equations with a Boussinesq approximation for buoyancy forces on an unstructured periodic grid. The numerical scheme uses a second-order finite volume spatial discretization and an implicit fractional-step method for time advancement. Lagrangian contrail particles grow according to a microphysical model of ice deposition and sublimation. The simulation is initialized with the wake of a commercial jet superimposed on a decaying turbulence field. The ambient atmosphere is stable and has a supersaturated relative humidity with respect to ice. Grid resolution is adjusted during the simulation, allowing higher resolution of flow structures than previous studies. We present results of a parametric study in which ambient turbulence levels, vertical wind shear, and aircraft type were varied. We find that higher levels of turbulence and shear promote mixing of aircraft exhaust with supersaturated ambient air, resulting in faster growth of ice and wider dispersion of the exhaust plume. The SCM was developed as a parameterization of contrail dynamics intended for use within a global model that examines the effect of commercial aviation on climate. The SCM provides an analytic solution to the changes in size and shape of a contrail cross-section over time due to global model grid-scale vertical wind shear and turbulence parameters. The model was derived from the physical equations of motion of a plume in a sheared, turbulent environment. Approximations based on physical reasoning and contrail observations allowed these equations to be reduced to simple ordinary differential equations in time with exact
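
    The kind of reduced moment-equation system described can be illustrated with the second moments of a Gaussian plume cross-section under constant vertical shear and diffusion. This is a generic textbook system with made-up parameter values, not the authors' actual SCM equations; its exact solution is polynomial in time, which makes it cheap enough for a global model.

```python
import numpy as np

def plume_moments(t_end, s, Dh, Dv, sh2_0, sv2_0, dt=0.01):
    """Integrate the second-moment equations of a Gaussian plume
    cross-section under constant shear s and diffusivities (Dh, Dv):
        d(sv2)/dt = 2*Dv
        d(cov)/dt = s*sv2
        d(sh2)/dt = 2*Dh + 2*s*cov
    Returns (sh2, sv2, cov) at t_end (explicit Euler)."""
    sh2, sv2, cov = sh2_0, sv2_0, 0.0
    t = 0.0
    while t < t_end - 1e-12:
        step = min(dt, t_end - t)
        sh2 += step * (2.0 * Dh + 2.0 * s * cov)
        cov += step * s * sv2
        sv2 += step * 2.0 * Dv
        t += step
    return sh2, sv2, cov
```

    The exact solution of this system, sh2(t) = sh2_0 + 2*Dh*t + s^2*sv2_0*t^2 + (2/3)*s^2*Dv*t^3, shows the characteristic shear-enhanced t^3 horizontal spreading.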

  4. Dust Emissions, Transport, and Deposition Simulated with the NASA Finite-Volume General Circulation Model

    NASA Technical Reports Server (NTRS)

    Colarco, Peter; daSilva, Arlindo; Ginoux, Paul; Chin, Mian; Lin, S.-J.

    2003-01-01

    Mineral dust aerosols have radiative impacts on Earth's atmosphere, have been implicated in local and regional air quality issues, and have been identified as vectors for transporting disease pathogens and bringing mineral nutrients to terrestrial and oceanic ecosystems. We present for the first time dust simulations using online transport and meteorological analysis in the NASA Finite-Volume General Circulation Model (FVGCM). Our dust formulation follows the formulation in the offline Georgia Institute of Technology-Goddard Global Ozone Chemistry Aerosol Radiation and Transport Model (GOCART) using a topographical source for dust emissions. We compare results of the FVGCM simulations with GOCART, as well as with in situ and remotely sensed observations. Additionally, we estimate budgets of dust emission and transport into various regions.

  5. Role for Lower Extremity Interstitial Fluid Volume Changes in the Development of Orthostasis after Simulated Microgravity

    NASA Technical Reports Server (NTRS)

    Platts, Steven H.; Summers, Richard L.; Martin, David S.; Meck, Janice V.; Coleman, Thomas G.

    2007-01-01

    vein diameter and stroke volume upon tilting, in contrast to the observations made before bed rest (54% vs. 23%, respectively). Compliance in the calf increased by an average of 36% by day 27 of bed rest. A systems analysis using a computer model of cardiovascular physiology suggests that microgravity-induced interstitial volume depletion results in an accentuation of venous blood volume sequestration and is the initiating event in reentry orthostasis. This hypothesis was tested in volunteer subjects using a ground-based spaceflight analog model that simulated the body fluid redistribution induced by microgravity exposure. Measurements of changes in the interstitial spaces and observed responses of the anterior tibial vein with tilt, together with the increase in calf compliance, were consistent with our proposed mechanism for the initiation of the postflight orthostasis often seen in astronauts.

  6. Dosimetric comparison of split field and fixed jaw techniques for large IMRT target volumes in the head and neck.

    PubMed

    Srivastava, Shiv P; Das, Indra J; Kumar, Arvind; Johnstone, Peter A S

    2011-01-01

    Some treatment planning systems (TPSs), when used for large-field (>14 cm) intensity-modulated radiation therapy (IMRT), create split fields that produce excessive multileaf collimator segments, match-line dose inhomogeneity, and longer treatment times than nonsplit fields. A new method using a fixed-jaw technique (FJT), which forces the jaw to stay at a fixed position during optimization, is proposed to reduce the problems associated with split fields. A dosimetric comparison between the split-field technique (SFT) and FJT for IMRT treatment is presented. Five patients with head and neck malignancies and regional target volumes were studied and planned with both techniques. Treatment planning was performed on an Eclipse TPS using beam data generated for a Varian 2100C linear accelerator. A standard beam arrangement of nine equally spaced coplanar fields was used in both techniques, and the institutional dose-volume constraints used in head and neck cancer were kept the same for both. The dosimetric coverage of the target volumes between SFT and FJT for the head and neck IMRT plans is identical within ±1% up to the 90% dose, and the organs at risk (OARs) have nearly identical dose-volume coverage for all patients. When the total monitor units (MUs) and segments were analyzed, SFT produced statistically significantly more segments (17.3 ± 6.3%) and higher MU (13.7 ± 4.4%) than FJT. There is no match line in FJT, so dose uniformity in the target volume is superior to SFT. Dosimetrically, SFT and FJT are similar in dose-volume coverage; however, the FJT method provides better logistics, lower MU, shorter treatment time, and better dose uniformity. The number of segments and MU have also been correlated with whole-body radiation dose and long-term complications. Thus, FJT should be the preferred option over SFT for large target volumes.

  7. Dosimetric Comparison of Split Field and Fixed Jaw Techniques for Large IMRT Target Volumes in the Head and Neck

    SciTech Connect

    Srivastava, Shiv P.; Das, Indra J.; Kumar, Arvind; Johnstone, Peter A.S.

    2011-04-01

    Some treatment planning systems (TPSs), when used for large-field (>14 cm) intensity-modulated radiation therapy (IMRT), create split fields that produce excessive multileaf collimator segments, match-line dose inhomogeneity, and longer treatment times than nonsplit fields. A new method using a fixed-jaw technique (FJT), which forces the jaw to stay at a fixed position during optimization, is proposed to reduce the problems associated with split fields. A dosimetric comparison between the split-field technique (SFT) and FJT for IMRT treatment is presented. Five patients with head and neck malignancies and regional target volumes were studied and planned with both techniques. Treatment planning was performed on an Eclipse TPS using beam data generated for a Varian 2100C linear accelerator. A standard beam arrangement of nine equally spaced coplanar fields was used in both techniques, and the institutional dose-volume constraints used in head and neck cancer were kept the same for both. The dosimetric coverage of the target volumes between SFT and FJT for the head and neck IMRT plans is identical within ±1% up to the 90% dose, and the organs at risk (OARs) have nearly identical dose-volume coverage for all patients. When the total monitor units (MUs) and segments were analyzed, SFT produced statistically significantly more segments (17.3 ± 6.3%) and higher MU (13.7 ± 4.4%) than FJT. There is no match line in FJT, so dose uniformity in the target volume is superior to SFT. Dosimetrically, SFT and FJT are similar in dose-volume coverage; however, the FJT method provides better logistics, lower MU, shorter treatment time, and better dose uniformity. The number of segments and MU have also been correlated with whole-body radiation dose and long-term complications. Thus, FJT should be the preferred option over SFT for large target volumes.

  8. Associations Between IQ, Total and Regional Brain Volumes and Demography in a Large Normative Sample of Healthy Children and Adolescents

    PubMed Central

    Lange, Nicholas; Froimowitz, Michael P.; Bigler, Erin D.; Lainhart, Janet E.

    2010-01-01

    In the course of efforts to establish quantitative MRI-based norms for healthy brain development (Brain Development Cooperative Group, 2006), previously unreported associations of parental education and temporal and frontal lobe volumes with full scale IQ and its verbal and performance subscales were discovered. Our findings were derived from the largest, most representative MRI sample to date of healthy children and adolescents, ages 4 years 10 months to 18 years 4 months. We first find that parental education has a strong association with IQ in children that is not mediated by total or regional brain volumes. Second, we find that our observed associations between temporal gray matter, temporal white matter, and frontal white matter volumes and full scale IQ, between 0.14 and 0.27 in children and adolescents, are due in large part to their correlations with performance IQ rather than verbal IQ. The volumes of other lobar gray and white matter, subcortical gray matter (thalamus, caudate nucleus, putamen, and globus pallidus), cerebellum, and brainstem do not contribute significantly to IQ variation. Third, we find that head circumference is an insufficient index of cerebral volume in typically developing older children and adolescents. The relations between total and regional brain volumes and IQ can best be discerned when additional variables known to be associated with IQ, especially parental education and other demographic measures, are considered concurrently. PMID:20446134
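
    Testing whether an association is "mediated by" a third variable, as described above, is often done with a partial correlation: correlate the residuals after regressing both variables on the covariate. The sketch below illustrates the technique on synthetic data only; it is not the study's statistical pipeline.

```python
import numpy as np

def partial_corr(x, y, z):
    """Partial correlation of x and y controlling for covariate z:
    the Pearson correlation of the residuals of the x~z and y~z
    least-squares regressions (with intercept)."""
    Z = np.column_stack([np.ones(len(z)), z])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]
```

    When two variables are correlated only because both track the covariate, the raw correlation is large but the partial correlation vanishes.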

  9. Large-eddy simulation of turbulent channel flows with conservative IDO scheme

    NASA Astrophysics Data System (ADS)

    Onodera, Naoyuki; Aoki, Takayuki; Kobayashi, Hiromichi

    2011-06-01

    The resolution of a numerical scheme in both physical and Fourier space is one of the most important requirements for calculating turbulent flows. The conservative form of the interpolated differential operator (IDO-CF) scheme is a multi-moment Eulerian scheme in which point values and integrated average values are defined separately in each cell. Since the IDO-CF scheme uses high-order interpolation functions constructed on compact stencils, boundary conditions can be treated as easily as in the 2nd-order finite difference method (FDM). Uniquely, the first-order spatial derivative of the point value is derived from the interpolation function with 4th-order accuracy, and the volume-averaged value is based on the exact finite volume formulation, so the IDO-CF scheme has higher spectral resolution than conventional 4th-order FDMs. The computational cost of calculating the first-order spatial derivative with non-uniform grid spacing is one-third that of the 4th-order FDM. For large-eddy simulation (LES), we use the coherent structure model (CSM), in which the model coefficient is obtained locally from a turbulent structure extracted from the second invariant of the velocity gradient tensor, and the coefficient correctly satisfies the asymptotic behavior near walls. Results of the IDO-CF scheme with the CSM for turbulent channel flows are compared to the FDM with the CSM and the dynamic Smagorinsky model, as well as to the direct numerical simulation (DNS) of Moser et al. Adding the sub-grid scale stress tensor of LES to the IDO-CF scheme improves the mean velocity profile compared with the implicit eddy viscosity of the IDO-CF upwind scheme. The IDO-CF scheme with the CSM gives better turbulent intensities than conventional FDMs with the same number of grid points. The turbulent statistics calculated by the IDO-CF scheme are in good agreement with the DNS at Reynolds numbers Reτ = 180, 395, and 590. It is found that
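    The spectral-resolution comparison above is usually quantified with the modified wavenumber of a scheme. The sketch below compares only standard 2nd- and 4th-order central differences (not the IDO-CF scheme itself, whose multi-moment formulation is beyond this summary):

    ```python
    import numpy as np

    # Modified (effective) wavenumber k'h of central differences on a uniform
    # grid: exact differentiation gives k'h = kh, and each scheme falls away
    # from this line at high wavenumber.
    kh = np.linspace(1e-3, np.pi, 400)             # scaled wavenumber k*h
    kh2 = np.sin(kh)                               # 2nd-order central difference
    kh4 = (8 * np.sin(kh) - np.sin(2 * kh)) / 6    # 4th-order central difference

    # Fraction of the Nyquist range each scheme resolves within 0.01 absolute
    # error in k*h; a larger fraction means higher spectral resolution.
    f2 = kh[np.abs(kh2 - kh) <= 0.01][-1] / np.pi
    f4 = kh[np.abs(kh4 - kh) <= 0.01][-1] / np.pi
    print(f"2nd order resolves {f2:.0%} of Nyquist, 4th order {f4:.0%}")
    ```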

  10. A computer simulation of free-volume distributions and related structural properties in a model lipid bilayer.

    PubMed Central

    Xiang, T X

    1993-01-01

    A novel combined approach of molecular dynamics (MD) and Monte Carlo simulations is developed to calculate various free-volume distributions as a function of position in a lipid bilayer membrane at 323 K. The model bilayer consists of 2 x 100 chain molecules, each having 15 carbon segments and one head group and subject to forces restricting bond stretching, bending, and torsional motions. At a surface density of 30 Å^2/chain molecule, the probability density of finding effective free volume available to spherical permeants displays a distribution with two exponential components. Both pre-exponential factors, p1 and p2, remain roughly constant in the highly ordered chain region, with average values of 0.012 and 0.00039 Å^-3, respectively, and increase to 0.049 and 0.0067 Å^-3 at the mid-plane. The first characteristic cavity size, V1, is only weakly dependent on position in the bilayer interior, with an average value of 3.4 Å^3, while the second characteristic cavity size, V2, varies more dramatically, from a plateau value of 12.9 Å^3 in the highly ordered chain region to 9.0 Å^3 in the center of the bilayer. The mean cavity shape is described in terms of a probability distribution for the angle at which the test permeant is in contact with one of, but does not overlap with any of, the chain segments in the bilayer. The results show that (a) free volume is elongated in the highly ordered chain region, with its long axis normal to the bilayer interface, and approaches spherical symmetry in the center of the bilayer, and (b) small free volumes are more elongated than large ones. The order and conformational structures relevant to the free-volume distributions are also examined. It is found that overall and internal motions contribute comparably to local disorder and couple strongly with each other, and that kink defects occur with higher probability than predicted by an independent-transition model. PMID:8241390
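    A plausible reading of the "two exponential components" above is a double-exponential density, p(V) = p1·exp(-V/V1) + p2·exp(-V/V2). That functional form is an assumption on my part; the parameter defaults below are the ordered-chain-region values quoted in the abstract:

    ```python
    import math

    # Double-exponential free-volume density (an assumed functional form
    # consistent with the two exponential components described above):
    #     p(V) = p1*exp(-V/V1) + p2*exp(-V/V2)
    # Units: p1, p2 in A^-3; V, V1, V2 in A^3. Defaults are the
    # ordered-chain-region values quoted in the abstract.
    def free_volume_density(V, p1=0.012, V1=3.4, p2=0.00039, V2=12.9):
        return p1 * math.exp(-V / V1) + p2 * math.exp(-V / V2)

    # Small cavities are dominated by the first (small-cavity) component,
    # while the tail at large V is carried by the slower second component.
    for V in (1.0, 5.0, 30.0):
        print(f"V = {V:5.1f} A^3 -> p(V) = {free_volume_density(V):.3e} A^-3")
    ```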

  11. Fully automated circulating tumor cell isolation platform with large-volume capacity based on lab-on-a-disc.

    PubMed

    Park, Jong-Myeon; Kim, Minseok S; Moon, Hui-Sung; Yoo, Chang Eun; Park, Donghyun; Kim, Yeon Jeong; Han, Kyung-Yeon; Lee, June-Young; Oh, Jin Ho; Kim, Sun Soo; Park, Woong-Yang; Lee, Won-Yong; Huh, Nam

    2014-04-15

    Full automation with high purity for circulating tumor cell (CTC) isolation has been regarded as a key goal in making CTC analysis a "bench-to-bedside" technology. Here, we have developed a novel centrifugal microfluidic platform that can isolate rare cells from a large volume of whole blood. To isolate CTCs from whole blood, we introduce for the first time a disc device that manipulates blood cells and has the largest sample capacity reported. The fully automated disc platform handles 5 mL of blood using a blood chamber designed with a laterally oriented triangular obstacle structure (TOS). To guarantee the high purity that enables molecular analysis of the rare cells, CTCs were bound to microbeads coated with anti-EpCAM to create a density difference between CTCs and blood cells, so that only the CTCs, now heavier than the blood cells, settled below a density gradient medium (DGM) layer. To understand the movement of CTCs under centrifugal force, we performed a computational fluid dynamics simulation and found that their major trajectories followed the boundary walls of the DGM chamber, which allowed us to optimize the chamber design. After whole blood was introduced into the blood chamber of the disc platform, size- and density-amplified cancer cells were isolated within 78 min, with contamination as low as approximately 12 leukocytes per milliliter. As a model of molecular analysis toward personalized cancer treatment, we performed epidermal growth factor receptor (EGFR) mutation analysis with HCC827 lung cancer cells; the mutation was successfully detected in the isolated cells by PCR clamping and direct sequencing.

  12. Large Eddy simulation of turbulence: A subgrid scale model including shear, vorticity, rotation, and buoyancy

    NASA Technical Reports Server (NTRS)

    Canuto, V. M.

    1994-01-01

    The Reynolds numbers that characterize geophysical and astrophysical turbulence (Re ≈ 10^8 for the planetary boundary layer and Re ≈ 10^14 for the Sun's interior) are too large to allow a direct numerical simulation (DNS) of the fundamental Navier-Stokes and temperature equations. In fact, the number of spatial grid points, N ~ Re^(9/4), exceeds the computational capability of today's supercomputers. Alternative treatments are the ensemble-time average approach and/or the volume average approach. Since the first method (the Reynolds stress approach) is largely analytical, the resulting turbulence equations entail manageable computational requirements and can thus be linked to a stellar evolutionary code or, in the geophysical case, to general circulation models. In the volume average approach, one carries out a large eddy simulation (LES) that resolves the largest scales numerically, while the unresolved scales must be treated theoretically with a subgrid scale (SGS) model. Contrary to the ensemble average approach, the LES+SGS approach has considerable computational requirements. Even if this prevents (for the time being) an LES+SGS model from being linked to stellar or geophysical codes, it is still of the greatest relevance as an 'experimental tool' to be used, inter alia, to improve the parameterizations needed in the ensemble average approach. Such a methodology has been successfully adopted in studies of the convective planetary boundary layer. Experience with the LES+SGS approach from different fields has shown that its reliability depends on the soundness of the SGS model, for numerical stability as well as for physical completeness. At present, the most widely used SGS model, the Smagorinsky model, accounts for the effect of the shear induced by the large resolved scales on the unresolved scales but does not account for the effects of buoyancy, anisotropy, rotation, and stable stratification. The
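    The grid-count estimate N ~ Re^(9/4) can be evaluated directly at the quoted Reynolds numbers, making concrete why DNS is out of reach in these regimes:

    ```python
    # DNS grid-point estimate N ~ Re^(9/4), evaluated at the Reynolds numbers
    # quoted above for the planetary boundary layer and the solar interior.
    for name, re_number in [("planetary boundary layer", 1e8),
                            ("solar interior", 1e14)]:
        n_points = re_number ** (9 / 4)
        print(f"{name}: Re = {re_number:.0e} -> N ~ {n_points:.1e} grid points")
    ```

    At Re = 10^8 this is already ~10^18 grid points, far beyond any current machine.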

  13. Slug-sizing/slug-volume prediction: State of the art review and simulation

    SciTech Connect

    Burke, N.E.; Kashou, S.F.

    1996-08-01

    Slug flow is a flow pattern commonly encountered in offshore multiphase flowlines. It is characterized by an alternating flow of liquid slugs and gas pockets, resulting in unsteady hydrodynamic behavior. All important design variables, such as slug length and frequency, liquid holdup, and pressure drop, vary with time, which makes the prediction of slug flow characteristics both difficult and challenging. This paper reviews state-of-the-art methods in slug-catcher sizing and slug-volume prediction. In addition, history matching of measured slug flow data is performed using the OLGA transient simulator. The paper reviews the design factors that impact slug-catcher sizing during steady state, transients, pigging, and operation under a process-control system. The slug-tracking option of the simulator is applied to predict slug length and slug volume during a field operation. The paper also comments on the performance of common empirical slug-prediction correlations.

  14. RF systems in space. Volume 1: Space antennas frequency (SARF) simulation

    NASA Astrophysics Data System (ADS)

    Ludwig, A. C.; Freeman, J. R.; Capp, J. D.

    1983-04-01

    The main objective of this effort was to develop a computer-based analytical capability for simulating the RF performance of large space-based radar (SBR) systems. The model is capable of simulating corporate- and space-fed apertures, as well as multibeam feeds, cluster/point feeds, corporate feeds, and various aperture distributions. The simulation can accept Draper Labs structural data and antenna current data from Atlantic Research Corporation's (ARC) First Approximation Methods (FAM) and Higher Approximation Methods (HAM) models. In addition, a routine inputs various aperture surface distortions, which displace the elements of the array from their ideal locations on a planar lattice. These analyses examined calibration/compensation techniques for large-aperture space radars. Passive, space-fed lens SBR designs were investigated, the survivability of an SBR system was analyzed, and the design of ground-based validation experiments for large-aperture SBR concepts was investigated, as were SBR designs for ground target detection.

  15. Large-eddy simulation of circular cylinder flow at subcritical Reynolds number: Turbulent wake and sound radiation

    NASA Astrophysics Data System (ADS)

    Guo, Li; Zhang, Xing; He, Guowei

    2016-02-01

    The flow past a circular cylinder at Reynolds number 3900 is simulated using large-eddy simulation (LES), and the far-field sound is calculated from the LES results. A low-dissipation, energy-conserving finite volume scheme is used to discretize the incompressible Navier-Stokes equations. The dynamic global-coefficient version of Vreman's subgrid-scale (SGS) model is used to compute the subgrid stresses. Curle's integral of Lighthill's acoustic analogy is used to extract the sound radiated from the cylinder. The profiles of mean velocity and turbulent fluctuations obtained are consistent with previous experimental and computational results. The far-field sound radiation exhibits dipole directivity, and the sound spectra display the -5/3 power law. It is shown that Vreman's SGS model, combined with the dynamic procedure, is suitable for LES of turbulence-generated noise.
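    The -5/3 power law mentioned above is typically verified by fitting a straight line to the spectrum in log-log space. The sketch below does this on an idealized, synthetic Kolmogorov spectrum, not on data from the paper:

    ```python
    import numpy as np

    # Idealized Kolmogorov spectrum, used only to illustrate how the -5/3
    # power law quoted above is checked: fit a line in log-log coordinates.
    f = np.logspace(1, 3, 50)            # frequency band (arbitrary units)
    spectrum = 2.0 * f ** (-5.0 / 3.0)   # noiseless -5/3 scaling

    slope, intercept = np.polyfit(np.log(f), np.log(spectrum), 1)
    print(f"fitted spectral slope: {slope:.3f}")
    ```

    On real LES spectra the fit would be restricted to the inertial subrange and the recovered slope would only approximate -5/3.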

  16. Large Volume Self-Organization of Polymer/Nanoparticle Hybrids with Millimeter Scale Grain Sizes using Brush Block Copolymers

    NASA Astrophysics Data System (ADS)

    Song, Dongpo; Watkins, James

    The lack of sufficient long-range order in self-assembled nanostructures is a bottleneck for many nanotechnology applications. In this work, we report that exceptionally large volumes of highly ordered arrays (single grains), on the order of millimeters in scale, can be rapidly created through a unique innate guiding mechanism of brush block copolymers (BBCPs). The grain volume is over 1 billion times larger than that of typical self-assembled linear BCPs (LBCPs). The use of strong interactions between nanoparticles (NPs) and BBCPs enables high loadings of functional materials, up to 76 wt% (46 vol%) in the target domain, while maintaining excellent long-range order. Overall, this work provides a simple route to precisely control the spatial orientation of functionalities at nanometer length scales over macroscopic volumes, thereby enabling the production of hybrid materials for many important applications.

  17. Systems and methods for the detection of low-level harmful substances in a large volume of fluid

    DOEpatents

    Carpenter, Michael V.; Roybal, Lyle G.; Lindquist, Alan; Gallardo, Vincente

    2016-03-15

    A method and device for the detection of low-level harmful substances in a large volume of fluid, comprising using a concentrator system to produce a retentate and analyzing the retentate for the presence of at least one harmful substance. The concentrator system performs a method comprising pumping at least 10 liters of fluid from a sample source through a filter. While pumping, the concentrator system diverts retentate from the filter into a container and recirculates at least part of the retentate in the container back through the filter. The concentrator system controls the speed of the pump with a control system, thereby maintaining a fluid pressure of less than 25 psi during pumping; it monitors the quantity of retentate within the container and maintains a reduced volume of retentate at a target level.
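    The pressure-limited pumping described above can be sketched as a simple proportional speed controller. This is a hypothetical illustration of the idea, not the patented control system; the function name, gain, and ramp rate are all invented:

    ```python
    # Hypothetical proportional controller sketch of the pressure-limiting pump
    # logic described above: slow the pump as filter pressure approaches 25 psi.
    PRESSURE_LIMIT_PSI = 25.0

    def next_pump_speed(speed, pressure_psi, max_speed=100.0, gain=2.0):
        """Cap pump speed in proportion to the remaining pressure margin."""
        margin = PRESSURE_LIMIT_PSI - pressure_psi
        if margin <= 0:
            return 0.0                       # hard stop at the limit
        target = min(max_speed, gain * margin * max_speed / PRESSURE_LIMIT_PSI)
        return min(speed + 5.0, target)      # ramp up gradually toward target

    speed = 0.0
    for p in (5.0, 10.0, 18.0, 24.0, 26.0):
        speed = next_pump_speed(speed, p)
        print(f"pressure {p:4.1f} psi -> pump speed {speed:5.1f}%")
    ```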

  18. Dual-domain microchip-based process for volume reduction solid phase extraction of nucleic acids from dilute, large volume biological samples.

    PubMed

    Reedy, Carmen R; Hagan, Kristin A; Strachan, Briony C; Higginson, Joshua J; Bienvenue, Joan M; Greenspoon, Susan A; Ferrance, Jerome P; Landers, James P

    2010-07-01

    A microfluidic device was developed to carry out integrated volume reduction and purification of nucleic acids from the dilute, large-volume biological samples commonly encountered in forensic genetic analysis. The dual-phase device seamlessly integrates two orthogonal solid-phase extraction (SPE) processes, a silica solid phase using chaotrope-driven binding and an ion exchange phase using totally aqueous chemistry (the chitosan phase), providing the unique capability of removing the polymerase chain reaction (PCR) inhibitors used in silica-based extractions (guanidine and isopropanol). Nucleic acids from a large-volume sample are shown to undergo a substantial volume reduction on the silica phase, followed by a more stringent extraction on the chitosan phase. The key to interfacing the two steps is mixing the nucleic acids eluted from the first phase with loading buffer, which is facilitated by flow-mediated mixing over a herringbone mixing region in the device. The completely aqueous chemistry associated with the second purification step yields a highly concentrated, PCR-ready eluate of nucleic acids devoid of PCR inhibitors that are reagent-based (isopropanol) or sample-based (indigo dye), both of which are shown to be successfully removed by the dual-phase device but not by traditional microfluidic SPE (µSPE). The utility of the device for purifying DNA was demonstrated with dilute whole blood, dilute semen, a semen stain, and a blood sample inhibited with indigo dye; the resultant DNA from all samples was shown to be PCR amplifiable. The same samples purified using µSPE were not all PCR amplifiable, owing to a lower DNA concentration and the lack of PCR-compatible aqueous chemistry in the extraction method. The utility of the device for purifying RNA was also demonstrated by the extraction of RNA from a dilute semen sample, with the resulting RNA amplified using reverse transcription (RT)-PCR. The vrSPE-SPE device reliably yields a volume reduction for

  19. Development of deployable structures for large space platforms. Volume 2: Design development

    NASA Technical Reports Server (NTRS)

    Greenberg, H. S.

    1983-01-01

    Design evolution, test article design, test article mass properties, and structural analysis of deployable platform systems are discussed, along with orbit transfer vehicle (OTV) hangar development, OTV hangar concept selection, and manned module development. Deployable platform system requirements, the materials database, technology development needs, concept selection, and deployable volume enclosures are also discussed.

  20. COMPARISON OF TWO DIFFERENT SOLID PHASE EXTRACTION/LARGE VOLUME INJECTION PROCEDURES FOR METHOD 8270

    EPA Science Inventory

    Two solid phase extraction (SPE) methods and one traditional continuous liquid-liquid extraction method are compared for the analysis of Method 8270 SVOCs. Productivity parameters include data quality, sample volume, analysis time, and solvent waste.

    One SPE system, unique in the U.S., uses aut...